A Sterling Future For HPC

By Nicole Hemsoth

February 11, 2013

For the past decade, keynote speakers at the International Supercomputing Conference (ISC) have examined the major accomplishments in HPC during the preceding year. This time the talk is more ambitious. At ISC ’13 in Leipzig, Germany in June, Thomas Sterling will deliver a keynote that examines the HPC accomplishments over the last decade. He plans to reveal “the true achievement of our field.”

You already know Sterling, of course. He’s famous as the “father of Beowulf,” the commodity computing cluster he and NASA Goddard colleague Donald Becker pioneered in 1994, for which they won a Gordon Bell Prize.

He’s now Professor of Informatics and Computing at the Indiana University School of Informatics and Computing, leading a team conducting research associated with the ParalleX advanced execution model for extreme scale computing. The goal: to develop a new model of computation that will enable a new generation of extreme scale computing systems and applications.

He’s also Chief Scientist and Associate Director of the PTI Center for Research in Extreme Scale Technologies (CREST), Adjunct Professor at Louisiana State University, and CSRI Fellow at Sandia National Laboratories. He has co-authored six books and holds six patents. To top it off, he’s one of HPCwire’s People to Watch for 2013!

His speech will examine the innovations in technology and architectures in HPC, as well as their contributions to science and other fields. He’ll also offer a collection of predictions for the next decade from key HPC leaders.

In anticipation of that talk, HPCwire asked Dr. Sterling to make a few predictions of his own.

HPCwire: It seems like the push toward exascale has lost some momentum over the last year. Do you think exascale will slip into the next decade?

Sterling: This is a complicated issue, but my view is that, if anything, momentum towards exascale in the US is building, not waning. There are two tracks to exascale, both being led by DOE in the US.

NNSA [National Nuclear Security Administration] is driving the incremental track. That is an attempt to extend conventional practices, both in architecture and programming, to deploy an exascale version of what we have today. This is prudent, responsible, and low-risk. It will support important mission-critical workloads, and will present a ready, if not seamless, migration path for legacy codes. However, it’s likely to be limited in applicability, scalability, and efficiency for many problems.

OS/ASCR [Office of Science, Advanced Scientific Computing Research] is guiding the advanced track. This approach is to create innovations in architecture, system software, and programming models and methods. It could achieve exascale-era computing systems that are truly general-purpose, usable, reliable, and cost-effective (in terms of both operations and power). It’s possible that we’ll even shift paradigms to a new execution model.

NNSA is likely to deliver its incremental platform to the national labs sometime between 2018 and 2020. R&D timeline projections suggest an advanced-class system is likely by 2022 or shortly after.

Still, the process of producing a congressionally validated plan is complex. Its formulation is well along and is being refined, but there are other issues related to how it moves through the obscure (at least to mere mortals such as myself) layers of authorization.

The apparent path for supercomputing is now entering a multifaceted period. We have matured, I think, beyond the adolescent obsession with the next Linpack number. The trends leading to exascale should be measured in terms of progress toward unprecedented accomplishments in science, engineering, societal, commercial, and defense-related goals. I think we are sustaining a mid-course correction that is placing us on the new trend lines: the ones that actually matter.

HPCwire: Will another nation beat the US to the exascale milestone? Which one has the best shot?

Sterling: It is possible of course that another nation will beat the US to the exascale milestone.

However, there is an unstated assumption that “the exascale milestone” is 1 exaflops Rmax [maximal LINPACK performance]. Such systems don’t have to emphasize networking capability or even memory capacity (which, in combination, are the most expensive part of a balanced system) to gain high marks. Any nation that wants the stature of fielding the first exascale system by this definition can probably do so in five years or slightly more, if it is willing to pay for it, by deploying a stunt machine.

Who may get to 1 exaflops Rmax first? History shows that, if not the US, it is likely to be Japan or China, but otherwise I have no deep insight. The EU is taking on new leadership in hardware and is expanding its energies in software infrastructure. Japan continues to extend its own advances with, for example, Kei and Tsubame-2. The Chinese have announced Tianhe-2, which is expected to exceed 100 petaflops by 2015.

But the US, guided by DOE programs, is pursuing opportunities with radically different approaches for true general-purpose exascale computation. The X-Stack program begun in September 2012, for example, is targeting dramatic improvements in efficiency, scalability, generality, and programmability, and is aggressively pursuing innovations to improve power consumption and reliability. If the milestone is general-purpose exascale computing, then I think the US is in a compelling leadership position through the DOE partnership of Thuc Hoang and Bill Harrod.

Still, I wish we had a science accomplishments benchmark – something like the X Prize. Perhaps some end-game computational achievement, like proving out the process that produces gamma-ray bursts (including their neutrinos); or some microbiology challenge involving viruses; or perhaps demonstrating climate change at a level that is provably predictive (and yes, I know it’s inherently chaotic). We need something that matters. We need to stop playing the horses and ensure that we can pull the plow.

HPCwire: With the emergence of big data analytics in HPC, and certainly elsewhere, as a growing application area, is there less of a reason to build systems that are just optimized for FLOPS?

Sterling: The answer, of course, is yes. But we don’t have to invoke big data to justify that. Any number of studies of large, multi-scale, multi-physics applications with short transient time constants and long times to steady state show the relatively high importance of memory access patterns and system-wide data movement.

Relatively speaking, floating-point capacity is easy to achieve compared to effective memory access bandwidth or low-overhead control of complex parallel execution. In the long term, we need to bridge the gap between data that computers treat as actionable and knowledge that humans act upon. However this is achieved, it will involve metadata more like that of advanced graph-analytic problems and less like DGEMM.
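
To make the contrast concrete, here is a minimal, hedged sketch (not from the interview; the matrix size and the toy graph are invented for illustration). The first kernel is DGEMM-like: many floating-point operations per byte moved, with regular access that caches handle well. The second is a sweep over a graph in compressed sparse row (CSR) form: roughly one operation per edge, with data-dependent indexing that stresses memory latency and bandwidth rather than flops.

// Illustrative sketch only; compile with any C++11 compiler.
#include <cstddef>
#include <cstdio>
#include <vector>

// Naive dense matrix multiply: O(N^3) flops over O(N^2) data, so performance is
// largely limited by floating-point throughput and cache blocking.
void dgemm_like(const std::vector<double>& A, const std::vector<double>& B,
                std::vector<double>& C, std::size_t N) {
    for (std::size_t i = 0; i < N; ++i)
        for (std::size_t k = 0; k < N; ++k)
            for (std::size_t j = 0; j < N; ++j)
                C[i * N + j] += A[i * N + k] * B[k * N + j];
}

// Sweep over a CSR graph: roughly one addition per edge visited, with irregular,
// cache-unfriendly indexing, so memory latency and bandwidth dominate, not flops.
double graph_sweep(const std::vector<std::size_t>& row_ptr,
                   const std::vector<std::size_t>& col_idx,
                   const std::vector<double>& vertex_val) {
    double sum = 0.0;
    for (std::size_t v = 0; v + 1 < row_ptr.size(); ++v)
        for (std::size_t e = row_ptr[v]; e < row_ptr[v + 1]; ++e)
            sum += vertex_val[col_idx[e]];   // indirect access: the "metadata" dominates
    return sum;
}

int main() {
    const std::size_t N = 256;  // arbitrary size for the example
    std::vector<double> A(N * N, 1.0), B(N * N, 1.0), C(N * N, 0.0);
    dgemm_like(A, B, C, N);

    // A tiny 4-vertex ring graph in CSR form.
    std::vector<std::size_t> row_ptr{0, 2, 4, 6, 8};
    std::vector<std::size_t> col_idx{1, 3, 0, 2, 1, 3, 0, 2};
    std::vector<double> vals{1.0, 2.0, 3.0, 4.0};
    std::printf("C[0] = %g, graph sum = %g\n", C[0], graph_sweep(row_ptr, col_idx, vals));
    return 0;
}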

The problem is cost, and the dominance of Linpack in some quarters. It is less expensive to build a high-Rmax system with cheap flops than to build a balanced architecture with large main memories and high-bandwidth, low-latency networks. Until we define a new standard of quality, we are likely to drift back into our comfort zone and go for the flops.

HPCwire: Given that scientific computing will need both physics simulations and analytics going forward, should we be designing different types of machines for each of these application areas, or is there enough similarity between the two that a single architecture can suffice?

Sterling: For each application algorithm, there may be an optimal balance of computation, memory, and communication resources and structures. Examples like Anton, and somewhat more-generic GPU components, certainly demonstrate exceptional capabilities for specific workloads and flows.

It is tempting to prescribe particular machine designs for specific algorithms. Alternatively, there have been proposals to configure heterogeneous systems with ensembles of highly specialized functional units, any class of which may be employed for a given problem, allowing others to lie fallow. The greatest value of such optimizations may ultimately be in the area of energy, which would focus primarily on data movement.

Such structural variations may ultimately be important when Moore’s Law does flatten out beyond a nanometer of feature size. The greatest challenge is to satisfy not any single application, but the mix of applications that must be supported by any truly large-scale deployed system. My inclination is that at the system level we will generally shoot for broadly general-purpose, while at the local level we will choose to use or exclude specialized functional units based on expectations of workloads to be supported at individual sites of deployment.

The memory wall is still the major challenge for many classes of application, both numeric-intensive and big data. Improvements in this aspect of system architecture will significantly enhance performance for both genres of computation.

HPCwire: To the public, much of the work of supercomputing seems esoteric, and many of the applications incomprehensible. Can we point to results of work done by supercomputers that connect to the concerns of people outside the HPC community and show it has made a difference in their lives?

Sterling: It has been said that supercomputing is the third pillar of human exploration and understanding, following empiricism (from the dawn of humanity) and theory (in recent centuries, with some priceless gems more than two millennia ago from people like Euclid and Eratosthenes). It provides a new window onto the universe – mega, macro, and micro. It allows us to explain the past, control the present and, in certain restricted but important cases, predict the future.

Challenges to the US and world societies in the 21st century require solutions to shared scientific and engineering problems that will affect this and the next two generations if quality of life is to improve and the disparities in access to life-enabling resources are to be mitigated.

One example: There is an interrelationship between determining the possible effect of anthropogenic chemicals on global climate change, and the future availability of safe, healthy, low-cost energy. Both depend on bringing the highest-capability computing to bear on these problems. Climate modeling must operate at significantly greater resolution in space, time, chemistry, and physical phenomenology for any certainty about the degree of change that is of human origin. Should it prove, as many expect, that the burning of fossil fuels is a principal factor aggravating global warming, then we will need to apply supercomputing to the design and operation of controlled-fusion reactors (e.g., ITER). This could be the source of abundant, safe, and (eventually) low-cost electrical power that will ultimately save human civilization.

Supercomputers are also exploring the chemistry, processes, and materials for mobile energy storage in order to dramatically extend the travel range of electric vehicles.

Finally, treating the physical human condition as a system-engineering and simulation problem demands exascale computing. That may provide the ultimate understanding of diseases and their treatments, whether through drugs, organ regeneration, or supplementary replacement devices.

And if these driving issues are beyond the ken of the mainstream citizenry, certainly access to information of all forms through myriad search engines, on-line purchasing, and interaction with friends, families, and social groups are highly visible on a daily basis. Entertainment such as on-demand movies and interactive multi-player gaming employs computing resources at the same scale as the high performance computing systems used for technical computing. Then there are the less visible but pervasive contributions of high performance computing in such areas as national security, air traffic control, weather forecasting, and many other applications that silently serve all of us on a daily basis.

It is not clear that our community has adequately conveyed the importance and accomplishments of the field of high performance computing to the broad public in a way that they can understand and appreciate. When I consider how other fields successfully expose our citizenry to their foundation ideas, I realize that they play a role in K-12 education.

Children learn about telescopes, microscopes, and even particle accelerators. But they don’t learn about supercomputers. The concept of simulation is something that a student may not encounter until college, and then only in the sciences and engineering disciplines. I believe that we need to weave supercomputing into the teaching of our young people at all levels, so that everyone in the U.S. is routinely exposed to it as one of the few important means of advancing goals in science and technology in the 21st century.

HPCwire: What do you think 2013 will bring to the world of HPC? Any predictions you care to make?

Sterling: 2013 may prove to be the pivotal year for HPC, although I may be a bit impatient, and history may look back and decide that 2014 was the delineating point. Here are a few things to watch:

MPI-3 – A major overhaul of MPI has been completed with the release of the MPI-3 specification. This year we will see whether its changes gain traction and extend the utility of this highly successful programming model to areas that were not well served before.
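
As one concrete example of what the overhaul adds, here is a minimal sketch (not from the interview) of a non-blocking collective, one of the headline MPI-3 features. It assumes an MPI-3 implementation such as MPICH or Open MPI and compilation with the usual mpicxx wrapper.

// Minimal MPI-3 sketch: a non-blocking all-reduce overlapped with local work.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = static_cast<double>(rank);
    double global = 0.0;
    MPI_Request req;

    // Start the reduction without blocking...
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);

    // ...so useful local computation can overlap the communication.
    double overlap_work = local * local;

    // Complete the collective before using its result.
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        std::printf("sum of ranks = %g (overlap term %g)\n", global, overlap_work);

    MPI_Finalize();
    return 0;
}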

Lightweight Core Architectures – Many organizations, including the world’s largest microprocessor manufacturer, Intel, are betting that a new generation of microprocessor architecture will be required to fully realize the promise of exascale computing. MIC (Xeon Phi) represents a new direction in processor core design. ARM is another path, pursued by Russia, the EU and, in the US, Nvidia, to find an improved balance of processing logic.

3-D Die Stacking – The packaging of multiple memory and logic dies in a single stack may dramatically increase parts density while significantly increasing local bandwidth and reducing latencies, as well as reducing energy consumption.

Runtime Systems – These are emerging as an alternative to static control for resource management and task scheduling. Given their overhead costs, runtime systems may not prove optimal for all workload classes, but early experiments on multi-scale, multi-physics problems have demonstrated promising results for efficiency and scalability. More work is required, and it is premature to assume this is the final solution. This coming year may provide sufficient results to validate or refute the approach – an important result if achieved.
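
For flavor, here is a minimal stand-in sketch of the idea (this is not ParalleX, HPX, or any specific DOE runtime; it uses only the C++ standard library). The point is that irregular work is expressed as tasks and a runtime decides when and where each one executes, rather than baking a static partition into the code.

// Illustrative sketch of runtime-managed tasking using std::async.
#include <future>
#include <vector>
#include <cstdio>

// A toy "multi-scale" workload: chunks of wildly different cost, which is exactly
// where static partitioning loses efficiency and a runtime can rebalance.
double simulate_chunk(std::size_t chunk, std::size_t cost) {
    double x = 0.0;
    for (std::size_t i = 0; i < cost; ++i)
        x += 1.0 / static_cast<double>(chunk * cost + i + 1);
    return x;
}

int main() {
    std::vector<std::size_t> costs{1000, 50, 500000, 20, 250000, 10};  // irregular work

    // Each chunk becomes a task; the underlying runtime decides when and where it runs.
    std::vector<std::future<double>> tasks;
    for (std::size_t c = 0; c < costs.size(); ++c)
        tasks.push_back(std::async(std::launch::async, simulate_chunk, c, costs[c]));

    double total = 0.0;
    for (auto& t : tasks)
        total += t.get();   // gather results as tasks complete

    std::printf("total = %g\n", total);
    return 0;
}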

Lightweight Kernel OS – New work in operating systems this year may lead to environments capable of providing the necessary capability and services while delivering vastly superior efficiency and scalability. Early examples like Catamount and CNK are informing new developments now under way, and potentially future work under new DOE research programs.

 
