HPC Clouds — Alto Cirrus or Cumulonimbus

By Thomas Sterling and Dylan Stark

November 21, 2008

The “cloud” model of exporting user workload and services to remote, distributed, and virtual environments is emerging as a powerful paradigm for improving the efficiency of client and server operations, enhancing quality of service, and giving many small enterprises early access to unprecedented resources. From single users to major commercial organizations, cloud computing is finding numerous niche opportunities, often by simplifying the rapid availability of new capabilities with minimal time to deployment. Yet one domain whose characteristics and needs challenge this model is high performance computing (HPC).

On the one hand, the unique demands and decades-long experience of HPC hunger for the level of service that clouds promise; on the other, they impose stringent requirements, at least in some cases, that may be beyond the potential of this otherwise remarkable trend. The question is: can cloud computing reach the ethereal heights of Alto Cirrus for HPC, or will it inflict the damaging thunderclap of cumulonimbus?

While HPC immediately invokes images of TOP500 machines, the petaflops performance regime, and applications that boldly compute where no machine has calculated before, in truth this domain is multivariate, with many distinct classes of demand. The potential role and impact of cloud computing on HPC must be viewed across the range of disparate uses embodied by the HPC community. One possible delineation of the field (in order of most stringent first) is:

  1. Highest possible delivered capability performance (strong scaling).
  2. Weak scaling single applications.
  3. Capacity, or throughput job-stream, computing.
  4. Management of massive data sets, possibly geographically distributed.
  5. Analysis and visualization of data sets.
  6. Management and administrative workloads supporting the HPC community.

Consideration of these distinct workflows exposes opportunities for exploiting the cloud model and the benefits this might convey. Starting from the bottom of the list, the HPC community involves many everyday data processing requirements similar to those of any business or academic institution. Already, some of the general infrastructure needs are quietly being outsourced to cloud-like services, including databases, email, web management, and information retrieval and distribution, along with other routine but critical functions. However, many of these tasks can be provided by the local set of distributed workstations and small enterprise servers. Therefore the real benefit lies in reducing the cost of software maintenance and the per-head cost of software licenses, rather than in reducing the cost of hardware facilities.

Offloading tasks directly associated with doing computational science, such as data analysis and visualization, is appropriate for cloud services in certain cases. This is particularly true for smaller organizations that do not have the full set of software systems appropriate to their local requirements. Occasionally, the availability of mid-scale hardware resources, such as enterprise servers, may be useful as well, provided queue times do not impede fast turnaround. This domain can be expanded to include the frequent introduction of new or upgraded software packages not readily available at the local site, even if open source. Where such software is provided by ISVs, the cost of ownership or licensing may exceed the budget, or even the need, of occasional use.

Offerings by cloud providers may include preferable incentives for using such software. They also remove the need for local expertise in installing, tuning, and maintaining such arcane packages, which is particularly valuable for small groups or individual researchers. However, a recurring theme is that HPC users tend to work in environments with high levels of expertise, including motivated students and young researchers, and are therefore more likely to have such capabilities in house. The use of clouds in this case will be determined by the peculiarities of the individual researcher and his or her situation.

Although HPC is often equated to FLOPS, it is as dependent, sometimes even more so, on bytes. Much science is data oriented, comprising data acquisition, product generation, organization, correlation, archiving, mining, and presentation. Massive data sets, especially those intrinsically distributed among many sites, are a particularly rich target for cloud services. Maintaining large tertiary storage facilities is difficult and expensive, even for the most facilities-rich environments. Data management is one area of HPC in which commercial enterprises are significantly advanced, even with respect to scientific computing expertise, with substantial commercial investment being applied compared to the rarefied boutique scientific computing community.

One very important factor is that confidence in the data integrity of large archives may ultimately be higher among cloud resource suppliers, both because their potentially distributed nature removes single points of failure (such as hurricanes, lightning strikes, and floods) and because they can exploit the substantial investments made possible by economies of scale. But one, perhaps insurmountable, challenge may impose fundamental limits on the use of clouds for data storage by some mission-critical HPC user agencies and commercial research institutions: data security. Where the potential damage from leakage or corruption of data would be strategic in nature, for national security or intellectual property protection, it may be implausible that such data, no matter the quantity or putative guarantees, will be entrusted to remote and sometimes unspecified service entities.

Throughput computing is an area of strong promise for HPC in the exploitation of emerging cloud systems. Cloud services are particularly well suited to provisioning resources for application loads of many sequential or slightly parallel (everything will have to become multicore) tasks limited to size-constrained SMP units, such as moderate-duration parametric studies. In this case, cloud services have the potential to greatly enhance an HPC institution’s available resources and operational flexibility while improving efficiency and reducing the overall cost of equipment and maintenance personnel. By offloading throughput workloads to cloud resources, HPC investments may be better applied to those resources unique to the needs of STEM applications not adequately served by widely available cloud-class processing services. However, this is tempered by the important constraint discussed above for workloads that are security or IP sensitive.
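What makes throughput workloads such a natural fit is their structure: many independent tasks with no communication between them. A minimal sketch in Python illustrates the pattern, with a hypothetical `simulate` function and parameter grid standing in for a real parametric study, and a local worker pool standing in for cloud-provisioned capacity:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(params):
    """Hypothetical stand-in for one independent simulation run."""
    alpha, beta = params
    return alpha * beta  # placeholder for a real model evaluation

# Parameter grid for a hypothetical moderate-duration parametric study.
grid = [(a, b) for a in range(1, 5) for b in range(1, 5)]

# Every task is independent, so a throughput (capacity) resource --
# whether a local cluster queue or a cloud-provisioned worker pool --
# can execute them in any order and at any scale.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(simulate, grid))

print(f"{len(results)} runs completed; best result: {max(results)}")
```

Because no task depends on another, the pool can be widened arbitrarily, which is precisely the elasticity that cloud provisioning offers.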

The final two regimes of the HPC scientific and technical computing arena prove more problematic for clouds. Although weak-scaling applications, where the problem size grows with the system scale so that the granularity of concurrency remains approximately constant, may be suitable for a subset of the machines available within a cloud, the virtualization demanded by the cloud environment will preclude the hardware-specific performance tuning essential to effective HPC application execution. Virtualization is an important means of achieving user productivity, but it is not yet a path to optimal performance, especially for high-scale supercomputer-grade commodity clusters (e.g., Beowulf) and MPPs (e.g., Cray XT3/4/5 and IBM BG/L/P/Q). And while auto-tuning (as part of an autonomic framework) may one day offer a path to scalable performance, current practice by users of major applications demands hands-on access to the detailed specifics of the physical machine.

Where the HPC community is already plagued by sometimes single-digit efficiencies for highly tuned codes that may run for weeks or months to completion, losing substantially more performance to virtualization is untenable in many cases. An additional challenge relates to I/O bandwidth, which becomes a serious bottleneck if not balanced with application needs, a balance the abstraction of the cloud cannot ensure. Also, checkpoint and restart is critical to major application runs but may not be incorporated as a robust service in most cloud systems. Therefore, a suitable system would need to make appropriate guarantees with respect to the availability of hardware and software configurations, guarantees that would not be representative of the broad class of clouds.
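The checkpoint/restart requirement mentioned above is worth making concrete. At the application level the pattern is simple: periodically persist the computation's state so a failed or preempted run resumes from the last checkpoint rather than from scratch. The following is a minimal sketch with a hypothetical loop and file name, not a description of any cloud provider's service:

```python
import os
import pickle

CHECKPOINT = "state.ckpt"  # hypothetical checkpoint file name

def load_state():
    """Resume from the last checkpoint if one exists, else start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "total": 0.0}

def save_state(state):
    """Write to a temporary file, then rename atomically, so a crash
    mid-write can never corrupt the previous checkpoint."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)

state = load_state()
while state["step"] < 100:
    state["total"] += state["step"]   # placeholder for real computation
    state["step"] += 1
    if state["step"] % 10 == 0:       # checkpoint every 10 steps
        save_state(state)

os.remove(CHECKPOINT)                 # clean up after a successful run
print(state["total"])
```

For a real month-long run the checkpointed state is gigabytes to terabytes, which is exactly why sustained I/O bandwidth guarantees, rarely part of a cloud abstraction, matter so much.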

Finally, the most challenging aspect of HPC is the constantly advancing architecture and application of capability computing systems. In their purest form, they enable strong scaling, where response time is reduced for a fixed-size application as the system scale increases. Such systems carry a premium cost not just because of their mammoth size, comprising upwards of a million cores and tens of terabytes of main memory, but also because of their unique design and limited market, which forfeits economy of scale. Even when integrating many commodity devices such as microprocessors and DRAM components, the cost of such systems may range from tens of millions to over a hundred million dollars.
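The distinction between the strong scaling described here and the weak scaling discussed earlier can be quantified with the two classical scaling laws. The sketch below contrasts Amdahl's law (strong scaling: fixed problem size) with Gustafson's law (weak scaling: problem grows with the machine), assuming a purely illustrative 5% serial fraction:

```python
def amdahl(serial_fraction, n):
    """Strong scaling: speedup on n cores for a fixed-size problem."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

def gustafson(serial_fraction, n):
    """Weak scaling: scaled speedup when problem size grows with n."""
    return n - serial_fraction * (n - 1)

s = 0.05  # hypothetical 5% serial fraction
for n in (16, 1024, 1_000_000):
    print(f"{n:>9} cores: Amdahl {amdahl(s, n):8.2f}x,"
          f" Gustafson {gustafson(s, n):12.2f}x")
```

Under Amdahl's law the speedup saturates near 1/s (here, 20x) no matter how many cores are added, which is why capability machines must squeeze out every residual serial and communication overhead, and why the strong-scaling regime is the least forgiving of any performance lost to virtualization.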

With their very-high-bandwidth, low-latency internal networks with specialized functionality (e.g., combining networks) and high-bandwidth storage area networks for attached secondary storage, there are few commercial user domains that can help offset the NRE costs of such major and optimized computing systems. It is unlikely that a business model can be constructed to justify making such systems available through cloud economics. Added to this are the same issues of virtualization versus hands-on performance tuning described above. Therefore, it is unlikely that clouds will satisfy capability computing challenges for computational science in the foreseeable future.

The evolution of the cloud paradigm is an important maturing of the power of microelectronics, distributed computing, the Internet, and the rapidly expanding role of computing in all aspects of human enterprise and social context. The HPC and scientific computing community will benefit in tangential ways from the cloud environments as they evolve and where appropriate. However, challenges of virtualization and performance optimization, security and intellectual property protection, and unique requirements of scale and functionality, will result in certain critical aspects of the requirements of HPC falling outside the domain of cloud computing, relying instead on the strong foundation upon which HPC is well grounded.
