HPC Clouds — Alto Cirrus or Cumulonimbus

By Thomas Sterling and Dylan Stark

November 21, 2008

The “cloud” model of exporting user workload and services to remote, distributed, and virtual environments is emerging as a powerful paradigm for improving the efficiency of client and server operations, enhancing quality of service, and enabling early access to unprecedented resources for many small enterprises. From single users to major commercial organizations, cloud computing is finding numerous niche opportunities, often by simplifying rapid availability of new capabilities, with minimal time to deployment and rapid return on investment. Yet one domain whose characteristics and needs challenge this model is high performance computing (HPC).

HPC's unique demands and decades-long experience hunger, on the one hand, for the level of service that clouds promise; on the other hand, they impose stringent requirements, at least in some cases, that may be beyond the potential of this otherwise remarkable trend. The question is: can cloud computing reach the ethereal heights of alto cirrus for HPC, or will it inflict the damaging thunderclap of cumulonimbus?

While HPC immediately invokes images of TOP500 machines, the petaflops performance regime, and applications that boldly compute where no machine has calculated before, in truth this domain is multivariate, with many distinct classes of demand. The potential role and impact of cloud computing on HPC must be viewed across the range of disparate uses embodied by the HPC community. One possible delineation of the field (most stringent first) is:

  1. Highest possible delivered capability performance (strong scaling).
  2. Weak-scaling single applications.
  3. Capacity, or job-stream throughput, computing.
  4. Management of massive data sets, possibly geographically distributed.
  5. Analysis and visualization of data sets.
  6. Management and administrative workloads supporting the HPC community.

Consideration of these distinct workflows exposes opportunities for the potential exploitation of the cloud model and the benefits this might convey. Starting from the bottom of the list, the HPC community has many everyday data processing requirements similar to those of any business or academic institution. Already some of the general infrastructure needs are quietly being outsourced to cloud-like services, including databases, email, web management, information retrieval and distribution, and other routine but critical functions. However, many of these tasks can be provided by the local set of distributed workstations and small enterprise servers. Therefore the real benefit lies in reducing the cost of software maintenance and the per-head cost of software licenses, rather than in reducing the cost of hardware facilities.

Offloading tasks directly associated with doing computational science, such as data analysis and visualization, is appropriate for cloud services in certain cases. This is particularly true for smaller organizations that do not maintain the full set of software systems appropriate to their local requirements. Occasionally, availability of mid-scale hardware resources, such as enterprise servers, may be useful as well, provided queue times do not impede fast turnaround. This domain can be expanded to include the frequent introduction of new or upgraded software packages not readily available at the local site, even if open source. Where such software is provided by ISVs, the cost of ownership or licensing may exceed the budget, or even the value of occasional use.

Offerings by cloud providers may find preferable incentives for use of such software. It also removes the need for local expertise in installing, tuning, and maintaining such arcane packages. This is particularly true for small groups or individual researchers. However, a recurring theme is that HPC users tend to be in environments that incorporate high levels of expertise including motivated students and young researchers, and therefore are more likely to have access to such capabilities. The use of clouds in this case will be determined by the peculiarities of the individual and his/her situation.

Although HPC is often equated with FLOPS, it is as dependent, sometimes even more so, on bytes. Much science is data oriented, comprising data acquisition, product generation, organization, correlation, archiving, mining, and presentation. Massive data sets, especially those intrinsically distributed among many sites, are a particularly rich target for cloud services. Maintenance of large tertiary storage facilities is particularly difficult and expensive, even for the most facilities-rich environments. Data management is one area of HPC in which commercial enterprises are significantly advanced, even with respect to scientific computing expertise, with commercial investment far exceeding that available to the rarefied, boutique scientific computing community.

One very important factor is that confidence in the data integrity of large archives may ultimately be higher among cloud resource suppliers, both because their potentially distributed nature removes single points of failure (hurricanes, lightning strikes, floods) and because of their ability to exploit the substantial investments made possible by economy of scale. But one, perhaps insurmountable, challenge may impose fundamental limits on the use of clouds for data storage by some mission-critical HPC user agencies and commercial research institutions: data security. Where the potential damage from leakage or corruption of data would be strategic in nature for national security or intellectual property protection, it may be implausible that such data, no matter what the quantity or putative guarantees, will be trusted to remote and sometimes unspecified service entities.

Throughput computing is an area of strong promise for HPC in the exploitation of the emerging cloud systems. Cloud services are particularly well suited for provisioning resources to handle application loads of many sequential or slightly parallel (everything will have to become multicore) application tasks limited to size-constrained SMP units, such as moderate-duration parametric studies. In this case, cloud services have the potential to greatly enhance an HPC institution's available resources and operational flexibility while improving efficiency and reducing the overall cost of equipment and maintenance personnel. By offloading throughput computing workloads to cloud resources, HPC investments may be better applied to those resources unique to the needs of STEM applications not adequately served by the widely-available cloud-class processing services. However, this is tempered by the important constraint discussed above related to workloads that are security or IP sensitive.
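To make the throughput case concrete, the sketch below shows the kind of embarrassingly parallel parametric study that maps naturally onto cloud-provisioned nodes. This is a minimal illustration, not any specific HPC code: the solver entry point (`run_simulation`) and the parameter grid are hypothetical stand-ins, and a real deployment would dispatch each task to a separately provisioned instance rather than a local process pool.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_simulation(params):
    """Hypothetical stand-in for one moderate-duration simulation task.

    Each task is sequential (or slightly parallel) and independent of the
    others, which is what makes the workload a good fit for cloud-style
    throughput resources.
    """
    viscosity, resolution = params
    # ... invoke the real solver here; return a summary metric ...
    return {"viscosity": viscosity, "resolution": resolution, "result": 0.0}

if __name__ == "__main__":
    # A small parameter grid; a real study might sweep thousands of points.
    grid = list(product([1e-3, 1e-4, 1e-5], [128, 256, 512]))

    # Local process pool stands in for a fleet of cloud instances.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_simulation, grid))

    for r in results:
        print(r)
```

Because no task communicates with any other, the scheduler is free to place tasks on whatever size-constrained SMP units the provider has available, which is precisely the flexibility the paragraph above describes.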

The final two regimes of the HPC scientific and technical computing arena prove more problematic for clouds. Although weak scaling applications, where the problem size grows with the system scale such that the granularity of concurrency remains approximately constant, may be suitable for a subset of the class of machines available within a cloud, the virtualization demanded by the cloud environment will preclude the hardware-specific performance tuning essential to effective HPC application execution. Virtualization is an important means of achieving user productivity, but as yet it is not a path to optimal performance, especially for high-scale supercomputer-grade commodity clusters (e.g., Beowulf) and MPPs (e.g., Cray XT3/4/5 and IBM BG/L/P/Q). And while auto-tuning (as part of an autonomic framework) may one day offer a path to scalable performance, current practice by users of major applications demands hands-on access to the detailed specifics of the physical machine.
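To pin down the scaling terminology, the standard (Gustafson) model of weak-scaling speedup is a useful reference point; it is a textbook result, not taken from this article. With serial fraction $s$ of the work and $N$ processors, growing the problem with the machine gives

$$ S_{\text{weak}}(N) = s + (1 - s)\,N $$

so the per-processor workload, and hence the granularity of concurrency, stays roughly constant as $N$ grows, which is exactly the regime described above.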

Where the HPC community is already plagued with sometimes single-digit efficiencies for highly-tuned codes that may run for weeks or months to completion, the loss of substantially more performance to virtualization is untenable in many cases. An additional challenge relates to I/O bandwidth, which becomes a serious bottleneck when not balanced against application needs, a balance the abstraction of the cloud cannot ensure. Also, checkpoint and restart is critical to major application runs but may not be incorporated as a robust service in most cloud systems. Therefore, a suitable system would need to make appropriate guarantees with respect to the availability of hardware and software configurations that would not be representative of the broad class of clouds.
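Since checkpoint/restart is flagged above as critical, here is a minimal application-level sketch of the idea, assuming a picklable solver state and a hypothetical local checkpoint file (`state.ckpt`); production HPC checkpointing relies on parallel file systems and dedicated libraries well beyond this toy.

```python
import os
import pickle

CKPT = "state.ckpt"  # hypothetical checkpoint path

def load_or_init():
    """Resume from the last checkpoint if one exists, else start fresh."""
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "field": [0.0] * 1024}  # toy solver state

def checkpoint(state):
    """Write atomically: dump to a temp file, then rename over the old one,
    so a crash mid-write never corrupts the last good checkpoint."""
    tmp = CKPT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CKPT)

state = load_or_init()
for step in range(state["step"], 10_000):
    state["field"] = [x + 1.0 for x in state["field"]]  # stand-in compute
    state["step"] = step + 1
    if state["step"] % 100 == 0:  # interval trades I/O cost vs. lost rework
        checkpoint(state)
```

The point of the sketch is the guarantee, not the mechanism: a multi-month run needs assured, consistent persistence at known intervals, which is exactly the kind of service-level commitment a generic cloud abstraction may not offer.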

Finally, the most challenging aspect of HPC is the constantly advancing architecture and application of capability computing systems. In their purest form they enable strong scaling, where response time is reduced for fixed-size applications with increasing system scale. Such systems imply a premium cost, not just because of their mammoth size, comprising upwards of a million cores and tens of terabytes of main memory, but also because of their unique design and limited market, which results in the loss of economy of scale. Even when integrating many commodity devices such as microprocessors and DRAM components, the cost of such systems may be tens of millions to over a hundred million dollars.
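As the counterpoint to the weak-scaling formula above, strong scaling holds the problem size fixed, and Amdahl's law (again a standard result, not the authors') bounds the achievable speedup by the serial fraction $s$:

$$ S_{\text{strong}}(N) = \frac{1}{s + \frac{1 - s}{N}} \;\le\; \frac{1}{s} $$

This bound is why reduced response time on a fixed-size problem is so hard to buy with scale alone, and why capability machines are engineered so aggressively around latency, bandwidth, and specialized network functionality.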

Given their very high-bandwidth, low-latency internal networks with specialized functionality (e.g., combining networks) and high-bandwidth storage area networks for attached secondary storage, there are few commercial user domains that can help offset the non-recurring engineering (NRE) costs of such major, optimized computing systems. It is unlikely that a business model can be constructed that would justify making such systems available through cloud economics. Added to this are the same issues of virtualization versus performance optimization through hands-on tuning described above. Therefore, it is unlikely that clouds will satisfy the capability computing challenges of computational science in the foreseeable future.

The evolution of the cloud paradigm is an important maturation of the power of microelectronics, distributed computing, the Internet, and the rapidly expanding role of computing in all aspects of human enterprise and social context. The HPC and scientific computing community will benefit in tangential ways from cloud environments as they evolve and where appropriate. However, the challenges of virtualization and performance optimization, security and intellectual property protection, and unique requirements of scale and functionality will leave certain critical HPC requirements outside the domain of cloud computing, relying instead on the strong foundation upon which HPC is well grounded.
