Adaptive Revs Moab, Debuts Remote Visualization Edition

By Tiffany Trader

November 28, 2012

At SC12, Adaptive Computing announced its Moab HPC Suite 7.2 release, which includes several productivity enhancements and introduces support for Intel Xeon Phi coprocessors. The workload management vendor also launched two new products as part of its Moab HPC Suite: Application Portal Edition, which adds support for a wider variety of applications, and Remote Visualization Edition, which enables a technical compute cloud.

Adaptive debuted its big Moab 7.0 release back in March, and now it has followed up with an incremental release, Moab HPC Suite 7.2. One of the major highlights of this version is support for the Intel Xeon Phi coprocessor, or “Intel’s answer to the GPU,” as company rep Chad Harrington put it when I sat down with him during SC12 in Salt Lake City.

Moab HPC Suite automatically detects installed Phi chips and determines how many cores are available. It also collects other metrics in real-time to enhance scheduling and optimization, addressing such issues as: Is it hot? How much RAM is it using? Will it support additional workload or should workload be removed? Moab interacts with the coprocessor and manages it very efficiently, says Harrington.
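
Moab’s internal placement logic isn’t spelled out in the announcement, but the idea of folding real-time coprocessor metrics into scheduling decisions can be illustrated with a small sketch. The Python below is purely hypothetical; the node fields, temperature threshold, and selection rule are assumptions made for illustration, not Adaptive’s implementation.

```python
# Illustrative sketch (not Moab code): score coprocessor nodes using
# hypothetical real-time metrics such as temperature and free memory.
from dataclasses import dataclass

@dataclass
class PhiNode:
    name: str
    free_cores: int
    temp_c: float       # current die temperature (assumed metric)
    free_mem_gb: float   # unused on-card RAM (assumed metric)

def eligible(node: PhiNode, cores_needed: int, mem_needed_gb: float) -> bool:
    """A node is a candidate only if it has cores and memory to spare
    and is not running hot (85 C is an arbitrary example threshold)."""
    return (node.free_cores >= cores_needed
            and node.free_mem_gb >= mem_needed_gb
            and node.temp_c < 85.0)

def pick_node(nodes, cores_needed, mem_needed_gb):
    """Choose the coolest eligible node, so hot cards shed load first."""
    candidates = [n for n in nodes if eligible(n, cores_needed, mem_needed_gb)]
    return min(candidates, key=lambda n: n.temp_c) if candidates else None

nodes = [PhiNode("phi0", free_cores=30, temp_c=72.0, free_mem_gb=4.0),
         PhiNode("phi1", free_cores=55, temp_c=64.0, free_mem_gb=6.5)]
print(pick_node(nodes, cores_needed=40, mem_needed_gb=2.0).name)  # -> phi1
```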

Harrington told me that Adaptive customers who were beta testing the Phi chips have reported “great performance increases” and find the Phi easier to work with from a programming standpoint compared to GPUs. While Adaptive also supports GPUs, including the latest graphics chips from NVIDIA and AMD, the company is especially keen on the Intel Phi technology.

“With the introduction of the Intel Xeon Phi technology, we’re seeing a new generation of supercomputers that are faster and more agile than ever,” comments CEO Robert Clyde. “Adaptive is proud to offer Intel Xeon Phi capability in its latest version of Moab HPC Suite, to allow today’s HPC centers to take full advantage of Intel Xeon Phi cores without the need for extensive reprogramming of their systems.”

Another new Moab capability, developed in response to customer requests, is dual-domain scheduling for Cray systems, which allows a single job to straddle both Cray and non-Cray nodes. Oak Ridge National Laboratory’s Titan supercomputer, the current TOP500 chart topper, is an Adaptive customer that is using this heterogeneous scheduling option.

The latest release also includes an upgrade to the Moab accounting and usage module, which is very cloud-like in its “pay-per-drink” model. Adaptive has added the ability to automate periodic budget resets as well as to implement roll-over minutes, meaning that if you didn’t use all of your allocation last month, you can use it the following month.
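
The roll-over behavior is simple enough to show with made-up numbers; the sketch below is only an illustration of the accounting idea, not Adaptive’s accounting module.

```python
# Illustrative only: roll-over accounting with assumed numbers.
monthly_allocation = 100_000   # core-hours granted each month (assumed)
used_last_month = 82_500       # core-hours actually consumed (assumed)

rollover = monthly_allocation - used_last_month      # 17,500 unused hours carry over
this_month_budget = monthly_allocation + rollover    # 117,500 hours available
print(this_month_budget)
```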

Users of Moab 7.2 will also want to take note of RPM-based deployment, which uses the Linux package-management system to minimize installation time. A from-scratch installation, including downloading the software, takes about eight minutes.

The Moab 7.2 release is already showing up in some very high-profile systems, for example, the COSMOS supercomputer, launched by Professor Stephen Hawking earlier this year. Housed at the University of Cambridge, the SGI UV 2000 is the most powerful shared-memory supercomputer in Europe, outfitted with 1,856 Intel Xeon E5 cores and 1,891 Intel Xeon Phi cores. As such, optimal scheduling and management are a top priority and will help the system fulfill its role in unlocking the mysteries of the universe.

“Research in fundamental cosmology is fast moving and internationally competitive,” commented Professor Paul Shellard, COSMOS Director, in an official statement. “We have to adapt our flexible operating model rapidly, and we need a company breaking new ground to support the very latest HPC technologies, thus we selected Adaptive Computing for our workload management software.”

New Editions

As part of its SC12 news push, workload management specialist Adaptive Computing launched two new additions to its Moab HPC Suite: Application Portal Edition, which provides single-point access to common technical applications, and Remote Visualization Edition, which enables a technical compute cloud. The company reports that these two new product versions “leverage next-generation access models to simplify the collection and interpretation of data, improving the time it takes to achieve meaningful results.”

Application Portal Edition

Technical and engineering applications need to integrate with the job scheduler, and this used to be a manual process requiring significant HPC expertise. Adaptive has automated this functionality into a portal that lets users from all backgrounds start their jobs, check statuses, and get results. Moab Application Portal Edition shifts the required skill level from power user to novice, Harrington explains. The portal, designed in collaboration with NICE Software, offers application-centric job submission templates for common applications in a variety of domains, including manufacturing, energy, life sciences, and education. The interface relies on NICE technology on the front end for integration with the different applications and Moab technology on the back end for scheduling and sharing.
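
For illustration only, here is one way such an application-centric template could look. The application name, fields, and expansion step are hypothetical and are not taken from the NICE or Moab products.

```python
# Hypothetical job-submission template in the spirit of an application portal:
# the portal fills in scheduler details so a novice user only supplies inputs.
template = {
    "application": "structural_analysis",          # assumed application name
    "inputs": {"model_file": "bracket.inp"},        # assumed input field
    "resources": {"nodes": 2, "cores_per_node": 16, "walltime": "02:00:00"},
}

def to_job_description(t):
    """Expand the template into a plain batch-style job description."""
    r = t["resources"]
    return (f"# job: {t['application']}\n"
            f"# nodes={r['nodes']} ppn={r['cores_per_node']} walltime={r['walltime']}\n"
            f"run {t['application']} {t['inputs']['model_file']}\n")

print(to_job_description(template))
```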

Remote Visualization Edition

A technical user doing simulation and modeling used to require an expensive workstation with a dedicated graphics processor, and data had to be moved to the workstation in order to be processed. With remote visualization, all the compute-intensive work happens in the datacenter or server room and only pixels are pushed to the remote site. This saves the customer money on hardware, and it’s also faster and more secure because the data never leaves the datacenter.

Remote visualization lets users around the world access and manipulate the same set of data. Harrington gives the example of a car manufacturer based in Germany that has built a vehicle simulation model and whose California lab wants to do some analysis. Shipping a few terabytes of data from Germany to California is expensive and time-consuming, but this solution keeps the visualization running in Germany while the results are viewed from California over the Internet.
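
The gap between the two approaches is easy to quantify with back-of-the-envelope numbers. The figures below (dataset size, link speed, stream bitrate, session length) are assumptions chosen for illustration, not values from Adaptive or the article.

```python
# Back-of-the-envelope comparison: streaming rendered pixels vs. shipping raw data.
dataset_tb = 3                      # "a few terabytes" of simulation data (assumed)
link_mbps = 100                     # assumed site-to-site link speed
transfer_hours = dataset_tb * 8e6 / link_mbps / 3600
print(f"Moving the data: ~{transfer_hours:.0f} hours")            # ~67 hours

stream_mbps = 5                     # assumed compressed 1080p-class pixel stream
session_hours = 2                   # assumed interactive session length
stream_gb = stream_mbps * 3600 * session_hours / 8 / 1000
print(f"Streaming pixels for the session: ~{stream_gb:.1f} GB")   # ~4.5 GB
```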

“In terms of bandwidth, pushing pixels is less bandwidth-intensive than your average YouTube video,” notes Harrington. “If you try to move the data itself, it’s not feasible, but a picture of the data works fine over a company’s internal network and the consumer Internet,” he adds.

Is this cloud? I ask.

“Cloud is about independence of data,” Harrington responds. “It doesn’t matter where the compute happens. It would even be possible to do this on an iPad. Adaptive calls this a technical compute cloud: the visualization happens somewhere else and you’re witnessing it locally.”

“This is along the same lines as VDI [virtual desktop infrastructure], except in traditional VDI, you’re using Microsoft Office or some kind of general productivity app, but in this case, you’re using a simulation app, ANSYS Fluent or the like: technical HPC apps.”

“The key here is that the processing is happening ‘elsewhere’: it could be in a different room in the same building, in a public cloud like Amazon, or in a company-owned datacenter on the other side of the world.”

Adaptive developed this offering in partnership with NICE Software. As with the Application Portal Edition, NICE technology provides the front end, combined with the Moab scheduler, which manages the hardware side. A GPU can have hundreds of cores, explains Harrington, and Moab schedules the allocation of those cores. A workload from User A may require 10 cores, while a workload from User B needs 30 cores and User C’s workload wants 50, and so on. Moab enables the sharing of one GPU across many users.
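
Sharing a single GPU in this way amounts to handing out pieces of a fixed pool of cores. Below is a minimal sketch matching the 10/30/50-core example, assuming a hypothetical 100-core device and a simple first-come, first-served policy; Moab’s actual policies are richer and configurable.

```python
# Illustrative sketch (not Moab itself): first-come, first-served allocation
# of one GPU's cores across several users' visualization workloads.
GPU_CORES = 100   # assumed core count for illustration

def allocate(requests, total=GPU_CORES):
    """requests: list of (user, cores_wanted). Grant requests while cores remain."""
    granted, free = {}, total
    for user, want in requests:
        if want <= free:
            granted[user] = want
            free -= want
    return granted, free

granted, free = allocate([("A", 10), ("B", 30), ("C", 50)])
print(granted, "free:", free)   # {'A': 10, 'B': 30, 'C': 50} free: 10
```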

Earlier this month, Adaptive announced the 7.2 release for its Moab Cloud Suite. The Cloud Suite product comes with the same core Moab intelligence engine as the HPC suite, but offers specific features for private cloud. The latest version was designed for ease of integration to minimize the need for system upgrades. Other enhancements include multi-group management, a streamlined dashboard portal, and periodic budget reset capability.
