Adaptive Computing Unveils Moab 8.0

June 23, 2014

LEIPZIG, Germany, June 23 — Adaptive Computing, the company that powers many of the world’s largest private/hybrid cloud and technical computing environments with its Moab optimization and scheduling software, today announced that Moab HPC Suite-Enterprise Edition 8.0 (Moab 8.0) will be generally available within 30 days, with sneak-peek demos in booth No. 710 at the International Supercomputing Conference (ISC) 2014, June 22–26 in Leipzig, Germany. The new release includes significant updates for managing and optimizing workloads across technical computing environments. Moab 8.0 also enhances Big Workflow by processing intensive simulations and big data analysis to accelerate insights.

“This latest version of Moab underscores our commitment to innovation in the technical computing sectors,” said Rob Clyde, CEO at Adaptive Computing. “HPC’s powerful engine is at the core of extracting insights from big data, and these updates will enable enterprises to capitalize on HPC’s convergence with cloud and big data to garner faster insights for data-driven decisions.”

Adaptive’s Big Workflow solution delivers dynamic scheduling, provisioning and management of multi-step/multi-application services across HPC, cloud and big data environments. Moab 8.0 bolsters Big Workflow’s core services: unifying data center resources, optimizing the analysis process and guaranteeing services to the business.

Key updates to Moab 8.0 include the following:

Unify Data Center Resources

Adaptive Computing continues to innovate new ways to break down siloed environments and speed the time to discovery. With Adaptive’s Big Workflow innovations, users can utilize all available resources across multiple platforms, environments and locations, managing them as a single ecosystem. Moab 8.0 enhances resource unification with features such as a new OpenStack integration, available as a limited beta, which offers virtual and physical resource provisioning for Infrastructure as a Service and Platform as a Service.

“In our end-user surveys, Adaptive Computing’s Moab and TORQUE were the top two job management packages named, with 40% of the mentions combined,” comments Addison Snell, CEO of Intersect360 Research. “Organizations are investing in big data and HPC; more than half the respondents in our most recent study were spending at least 10% of their IT budgets on big data projects. With Adaptive’s Big Workflow, the general idea is to provide a way for big data, HPC, and cloud environments to interoperate, and do so dynamically based on what applications are running. With the added benefits of a unified platform, OpenStack is a promising platform for interoperating multiple environments.”

Optimize the Analysis Process

Massive performance enhancements in workload optimization streamline the analytical process, increasing throughput and productivity while reducing cost, complexity and errors. These new optimization features include:

  • Performance Boost – Moab 8.0 enables users to achieve up to a threefold improvement in overall workload optimization performance. To achieve this scale and performance, Moab 8.0 offers the following improvements:
    • Reduced Command Latency – Through a combination of cached data and more efficient use of background threads, it is now possible to submit read-only commands and get an answer within seconds.
    • Decreased Scheduling Cycle Time – New placement decisions are now three to six times faster.
    • Improved Multi-threading – Due to increased parallelism using multi-threading, Moab now scales up with hardware—the more CPU horsepower dedicated to the scheduler, the faster Moab goes by making full use of multiple cores during its scheduling cycle.
    • Faster Moab and TORQUE Job Communication – Moab now communicates newly submitted jobs to TORQUE using a more efficient API.
    • Advanced High Throughput Computing – Nitro delivers 100 times faster job throughput for short computing jobs. Nitro, previously announced in beta in November 2013 under the code name Moab Task Manager, is now generally available as a stand-alone product with a free trial.
  • Advanced Power Management – Moab 8.0 delivers energy cost savings of 15–30 percent with new clock frequency control and additional power state options. With clock frequency control, administrators can use job templates to adjust CPU speeds to align with workload processing speeds. In addition, administrators can manage multiple power states and automatically place idle compute nodes into new low-power or no-power states (suspend, hibernation and shutdown modes).
  • Advanced Workflow Data Staging – Moab 8.0 updates Moab’s data staging model by running data staging as an out-of-band process, outside the scheduling cycle. This improves cluster utilization, enables multiple transfer methods and new transfer types, makes job execution times more consistent, and allows Moab to account more effectively for data staging in resource scheduling decisions within grid environments.
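The "reduced command latency" item above describes answering read-only queries from cached scheduler state rather than from the live scheduling loop. A minimal conceptual sketch of that idea follows; the `QueryCache` class, TTL value and queue snapshot are illustrative assumptions, not Moab's actual implementation.

```python
import time

class QueryCache:
    """Serve read-only queries from a cached snapshot of scheduler state."""

    def __init__(self, ttl_seconds, refresh):
        self.ttl = ttl_seconds    # how long a snapshot stays fresh
        self.refresh = refresh    # callable that produces a new snapshot
        self.snapshot = None
        self.taken_at = 0.0

    def query(self):
        # Serve from cache while fresh; refresh in-line when stale.
        # (A real scheduler would refresh on a background thread instead,
        # so queries never block on the scheduling cycle.)
        now = time.monotonic()
        if self.snapshot is None or now - self.taken_at > self.ttl:
            self.snapshot = self.refresh()
            self.taken_at = now
        return self.snapshot

queue_state = {"running": 12, "idle": 3}
cache = QueryCache(ttl_seconds=5.0, refresh=lambda: dict(queue_state))
print(cache.query())
```

The design trade-off is the usual one for caches: queries return in constant time regardless of scheduler load, at the cost of answers that may be up to one TTL stale.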
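The power management item describes moving idle nodes into progressively deeper low-power states. A toy policy of that shape might look like the sketch below; the thresholds, state names, and `Node` class are hypothetical illustrations, not Moab configuration keys.

```python
# (idle seconds before transition, target power state), shallow to deep
POLICY = [
    (300, "suspend"),
    (1800, "hibernate"),
    (7200, "shutdown"),
]

class Node:
    def __init__(self, name, idle_seconds, jobs=0):
        self.name = name
        self.idle_seconds = idle_seconds
        self.jobs = jobs
        self.power_state = "on"

def apply_power_policy(nodes):
    """Move each idle node into the deepest low-power state it qualifies for."""
    for node in nodes:
        if node.jobs:                      # busy nodes are never powered down
            continue
        for threshold, state in POLICY:
            if node.idle_seconds >= threshold:
                node.power_state = state   # deeper states overwrite shallower
    return nodes

nodes = apply_power_policy([Node("n1", 60), Node("n2", 2000), Node("n3", 100, jobs=2)])
print([(n.name, n.power_state) for n in nodes])
# n1 stays on (idle too briefly), n2 hibernates, n3 is busy and stays on
```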
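The out-of-band data staging item amounts to overlapping the next job's input transfer with the current job's compute, so transfers do not occupy a compute slot. A hedged sketch of that overlap, using stand-in `stage_in`/`run_job` functions rather than any real Moab or TORQUE API:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def stage_in(job):
    time.sleep(0.05)                 # pretend to transfer input files
    return f"{job}:staged"

def run_job(job):
    time.sleep(0.05)                 # pretend to compute
    return f"{job}:done"

def schedule(jobs):
    results = []
    with ThreadPoolExecutor(max_workers=1) as stager:
        pending = stager.submit(stage_in, jobs[0])
        for i, job in enumerate(jobs):
            pending.result()                  # input must be in place first
            if i + 1 < len(jobs):             # overlap the next transfer
                pending = stager.submit(stage_in, jobs[i + 1])
            results.append(run_job(job))      # compute while staging runs
    return results

print(schedule(["jobA", "jobB"]))    # ['jobA:done', 'jobB:done']
```

With N jobs, only the first stage-in sits on the critical path; every later transfer is hidden behind the preceding job's compute time, which is the utilization gain the bullet describes.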

Guarantee Services to the Business

Improved features allow the data center to ensure SLAs, maximize uptime and prove that services were delivered and resources were allocated fairly. Moab 8.0 offers an enhanced web-based graphical user interface, Moab Viewpoint, the next generation of Adaptive’s administrative dashboard, which monitors and reports workload and resource utilization.

About Adaptive Computing

Adaptive Computing powers many of the world’s largest private/hybrid cloud and technical computing environments with its award-winning Moab optimization and scheduling software. Moab enables large enterprises in oil and gas, finance, manufacturing and research, as well as academic and government institutions, to perform simulations and analyze big data faster, more accurately and more cost-effectively with its Technical Computing, Cloud and Big Data solutions for Big Workflow applications. Moab gives users a competitive advantage, inspiring them to develop cancer-curing treatments, discover the origins of the universe, lower energy prices, manufacture better products, improve the economic landscape and pursue game-changing endeavors. Adaptive is a pioneer in private/hybrid cloud, technical computing and big data, holding 50+ issued or pending patents.

Source: Adaptive Computing
