A Sterling Future For HPC

By Nicole Hemsoth

February 11, 2013

For the past decade, keynote speakers at the International Supercomputing Conference (ISC) have examined the major accomplishments in HPC during the preceding year. This time the talk is more ambitious. At ISC ’13 in Leipzig, Germany in June, Thomas Sterling will deliver a keynote that examines the HPC accomplishments over the last decade. He plans to reveal “the true achievement of our field.”

You already know Sterling, of course. He’s famous as the “father of Beowulf,” the commodity computing cluster he and NASA Goddard colleague Donald Becker pioneered in 1994, for which they won a Gordon Bell Prize.

He’s now Professor of Informatics and Computing at the Indiana University School of Informatics and Computing, leading a team conducting research associated with the ParalleX advanced execution model for extreme scale computing. The goal: to develop a new model of computation that will enable a new generation of extreme scale computing systems and applications.

He’s also Chief Scientist and Associate Director of the PTI Center for Research in Extreme Scale Technologies (CREST), Adjunct Professor at Louisiana State University, and CSRI Fellow at Sandia National Laboratories. He has co-authored six books and holds six patents. To top it off, he’s one of HPCwire’s People to Watch for 2013!

His speech will examine the innovations in technology and architectures in HPC, as well as their contributions to science and other fields. He’ll also offer a collection of predictions for the next decade from key HPC leaders.

In anticipation of that talk, HPCwire asked Dr. Sterling to make a few predictions of his own.

HPCwire: It seems like the push toward exascale has lost some momentum over the last year. Do you think exascale will slip into the next decade?

Sterling: This is a complicated issue, but my view is that, if anything, momentum towards exascale in the US is building, not waning. There are two tracks to exascale, both being led by DOE in the US.

NNSA [National Nuclear Security Administration] is driving the incremental track. That is an attempt to extend conventional practices, both in architecture and programming, to deploy an exascale version of what we have today. This is prudent, responsible, and low-risk. It will support important mission-critical workloads, and will present a ready, if not seamless, migration path for legacy codes. However, it’s likely to be limited in applicability, scalability, and efficiency for many problems.

OS/ASCR [the Office of Science’s Advanced Scientific Computing Research program] is guiding the advanced track. This approach is to create innovations in architecture, system software, and programming models and methods. It could achieve exascale-era computing systems that are truly general-purpose, usable, reliable, and cost-effective (in terms of both operations and power). It’s possible that we’ll even shift paradigms to a new execution model.

NNSA is likely to deliver its incremental platform to the national labs sometime between 2018 and 2020. R&D timeline projections suggest an advanced-class system is likely by 2022 or shortly after.

Still, the process of producing a congressionally validated plan is complex. Its formulation is well along and is being refined, but there are other issues related to how it moves through the obscure (at least to mere mortals such as myself) layers of authorization.

The apparent path for supercomputing is now entering a multifaceted period. We have matured, I think, beyond the adolescent obsession with the next Linpack number. The trends leading to exascale should be measured in terms of progress toward unprecedented accomplishments in science, engineering, societal, commercial, and defense-related goals. I think we are sustaining a mid-course correction that is placing us on the new trend lines: the ones that actually matter.

HPCwire: Will another nation beat the US to the exascale milestone? Which one has the best shot?

Sterling: It is possible of course that another nation will beat the US to the exascale milestone.

However, there is an unstated assumption that “the exascale milestone” is 1 exaflops Rmax [maximal LINPACK performance]. Such systems don’t have to emphasize networking capability or even memory capacity (which, in combination, are the most expensive parts of a balanced system) to gain high marks. Any nation that wants the stature of fielding the first exascale system by this definition can probably do so in five years or slightly more, if it is willing to pay for it, by deploying a stunt machine.

Who may get to 1 exaflops Rmax first? History shows that, if not the US, it is likely to be Japan or China, but otherwise I have no deep insight. The EU is taking on new leadership in hardware and is expanding its energies in software infrastructure. Japan continues to extend its own advances with, for example, Kei and Tsubame-2. The Chinese have announced Tianhe-2, which is to exceed 100 petaflops by 2015.

But the US, guided by DOE programs, is pursuing opportunities with radically different approaches for true general-purpose exascale computation. The X-Stack program begun in September 2012, for example, is targeting dramatic improvements in efficiency, scalability, generality, and programmability, and is aggressively pursuing innovations to improve power consumption and reliability. If the milestone is general-purpose exascale computing, then I think the US is in a compelling leadership position through the DOE partnership of Thuc Hoang and Bill Harrod.

Still, I wish we had a science accomplishments benchmark – something like the X Prize. Perhaps some end-game computational achievement, like proving out the process that produces gamma-ray bursts (including neutrinos); or some microbiology challenge involving viruses; or perhaps demonstrating climate change at a level that is provably predictive (and yes, I know it’s inherently chaotic). We need something that matters. We need to stop playing the horses and ensure that we can pull the plow.


HPCwire: With the emergence of big data analytics in HPC, and certainly elsewhere, as a growing application area, is there less of a reason to build systems that are just optimized for FLOPS?

Sterling: The answer, of course, is yes. But we don’t have to invoke big data to justify that. Any number of studies of large, multi-scale, multi-physics applications with short transient time constants and long times to steady state show the relatively high importance of memory access patterns and system-wide data movement.

Relatively speaking, floating point capacity is easy to achieve compared to effective memory access bandwidth or low overhead control of complex parallel execution. In the long term, we need to bridge the gap between data that computers treat as actionable, and knowledge that humans act upon. However this is achieved, it will involve meta-data more like that of advanced graph analytic problems and less like DGEMM.
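To make that contrast concrete, here is a minimal back-of-the-envelope sketch. The problem size, the bytes-per-nonzero estimate for the sparse case, and the machine bandwidth and peak are assumed round figures chosen only for illustration, not measurements of any real system.

```cpp
// Illustrative sketch: arithmetic intensity (flops per byte moved) for
// dense DGEMM versus a sparse, graph-like kernel such as SpMV.
// All numbers below are assumptions for illustration.
#include <cstdio>

int main() {
    const double n    = 10000.0;   // matrix dimension (assumed)
    const double nnz  = 5.0 * n;   // total nonzeros in the sparse case (assumed)
    const double word = 8.0;       // bytes per double

    // DGEMM: ~2*n^3 flops over ~3*n^2 doubles of unique data.
    double dgemm_flops = 2.0 * n * n * n;
    double dgemm_bytes = 3.0 * n * n * word;

    // SpMV on a graph-like matrix: ~2 flops per nonzero, but each nonzero
    // drags in its value, a column index, and an irregular read of the
    // source vector -- roughly 20 bytes per nonzero (assumed).
    double spmv_flops = 2.0 * nnz;
    double spmv_bytes = 20.0 * nnz;

    std::printf("DGEMM intensity: %8.1f flops/byte\n", dgemm_flops / dgemm_bytes);
    std::printf("SpMV  intensity: %8.3f flops/byte\n", spmv_flops / spmv_bytes);

    // A hypothetical node with 1 Tflop/s peak and 100 GB/s of memory
    // bandwidth can sustain only bandwidth * intensity on the sparse kernel.
    double bw = 100e9, peak = 1e12;
    std::printf("SpMV ceiling:    %8.1f Gflop/s of a %4.0f Gflop/s peak\n",
                bw * (spmv_flops / spmv_bytes) / 1e9, peak / 1e9);
    return 0;
}
```

Under these assumed figures the graph-like kernel is capped at roughly one percent of the node’s peak floating point rate, which is exactly the imbalance between cheap flops and costly data movement described above.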

The problem is cost, and the continued dominance of Linpack in some quarters. It is less expensive to build a high-Rmax system with cheap flops than to build a balanced architecture of large main memories and high-bandwidth, low-latency networks. Until we define a new standard of quality, we are likely to drift back into our comfort zone and go for the flops.

HPCwire: Given that scientific computing will need both physics simulations and analytics going forward, should we be designing different types of machines for each of these application areas, or is there enough similarity between the two that a single architecture can suffice?

Sterling: For each application algorithm, there may be an optimal balance of computation, memory, and communication resources and structures. Examples like Anton, and somewhat more-generic GPU components, certainly demonstrate exceptional capabilities for specific workloads and flows.

It is tempting to prescribe particular machine designs for specific algorithms. Alternatively, there have been proposals to configure heterogeneous systems with ensembles of highly specialized functional units, any class of which may be employed for a given problem, allowing others to lie fallow. The greatest value of such optimizations may ultimately be in the area of energy, which would focus primarily on data movement.

Such structural variations may ultimately be important when Moore’s Law does flatten out beyond a nanometer of feature size. The greatest challenge is to satisfy not any single application, but the mix of applications that must be supported by any truly large-scale deployed system. My inclination is that at the system level we will generally shoot for broadly general-purpose, while at the local level we will choose to use or exclude specialized functional units based on expectations of workloads to be supported at individual sites of deployment. The memory wall is still the major challenge for many classes of application, both numeric-intensive and big data. Improvements in this aspect of system architecture will significantly enhance performance for both genres of computation.


HPCwire: To the public, much of the work of supercomputing seems esoteric, many of the applications incomprehensible. Can we point to results of work done by supercomputers that connect to the concerns of people outside the HPC community and show it has made a difference in their lives?

Sterling: It has been said that supercomputing is the third pillar of human exploration and understanding, following empiricism (from the dawn of humanity) and theory (in recent centuries, with some priceless gems more than two millennia ago from people like Euclid and Eratosthenes). It provides a new window onto the universe – mega, macro, and micro. It allows us to explain the past, control the present and, in certain restricted but important cases, predict the future.

Challenges to the US and world societies in the 21st century require solutions to shared scientific and engineering problems that will affect this and the next two generations if quality of life is to improve and the disparities in access to life-enabling resources are to be mitigated.

One example: There is an interrelationship between determining the possible effect of anthropogenic chemicals on global climate change, and the future availability of safe, healthy, low cost energy. Both depend on bringing the highest capability computing to bear on these problems. Climate modeling must operate at significantly greater resolution in space, time, chemistry and physical phenomenology for any certainty about the degree of change that is of human origin. Should it prove to be, as many expect, that the burning of fossil fuels is a principal contributing factor aggravating global warming, then we will need to apply supercomputing to the design and operation of controlled-fusion reactors (e.g., ITER). This could be the source of abundant, safe, and (eventually) low-cost electrical power that will ultimately save human civilization.

Supercomputers are also exploring the chemistry, processes, and materials for mobile energy storage in order to dramatically extend the travel range of electric vehicles.

Finally, treating the physical human condition as a system-engineering and simulation problem demands exascale computing. That may provide the ultimate understanding of diseases and their treatments, whether through drugs, organ regeneration, or supplementary replacement devices.

And if these driving issues are beyond the ken of the mainstream citizenry, certainly access to information of all forms through myriad search engines, on-line purchasing, and interaction with friends, families, and social groups are highly visible on a daily basis. Entertainment such as on-demand movies and interactive multi-player gaming employs computing resources at the same scale as the high performance computing systems used for technical computing. Then there are the less visible but pervasive contributions of high performance computing in such areas as national security, air traffic control, weather forecasting, and many other applications that silently serve all of us on a daily basis.

It is not clear that our community has adequately conveyed the importance and accomplishments of the field of high performance computing to the broad public in a way that they can understand and appreciate. When I consider how other fields successfully expose our citizenry to their foundation ideas, I realize that they play a role in K-12 education.

Children learn about telescopes, microscopes, and even particle accelerators. But they don’t learn about supercomputers. The concept of simulation is something that a student may not encounter until college, and then only in the sciences and engineering disciplines. I believe we need to build supercomputing into the teaching of our young people at every level, so that everyone in the U.S. is routinely exposed to it as one of the few important means of advancing goals in science and technology in the 21st century.

HPCwire: What do you think 2013 will bring to the world of HPC? Any predictions you care to make?

Sterling: 2013 may prove to be the pivotal year for HPC, although I may be a bit impatient, and history may look back and decide that 2014 was the delineating point. Here are a few things to watch:

MPI-3 – A major overhaul of MPI has been completed with the release of the MPI-3 specification. This year we will see if the changes incorporated will get traction and will extend the utility of the highly successful predecessor programming model to areas that were not well-served before.
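For a concrete taste of what the overhaul adds, the sketch below uses one of MPI-3’s new non-blocking collectives, MPI_Iallreduce, to overlap a global reduction with local work. It is a generic illustration, not code from any particular project mentioned here.

```cpp
// Minimal sketch of one MPI-3 addition: non-blocking collectives, which let
// an application overlap a global reduction with communication-independent
// local work. Compile with an MPI-3 implementation (e.g., mpicxx).
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = rank + 1.0, global = 0.0;
    MPI_Request req;

    // MPI_Iallreduce is new in MPI-3; MPI-2 offered only the blocking form.
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);

    // ... do local work here while the reduction proceeds in the background ...

    MPI_Wait(&req, MPI_STATUS_IGNORE);
    if (rank == 0) std::printf("global sum = %f\n", global);

    MPI_Finalize();
    return 0;
}
```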

Lightweight Core Architectures – Many organizations, including the world’s largest microprocessor manufacturer (Intel), are guessing that a new generation of microprocessor architecture will be required to fully realize the promise of exascale computing. MIC (Xeon Phi) represents a new direction in processor core design. ARM is another path, being pursued by Russia, the EU, and, in the US, Nvidia, to find an improved balance of processing logic.
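Whatever the core design, these lightweight-core parts only pay off when applications expose abundant independent, vector-friendly work. Below is a minimal sketch of that kind of loop, using plain OpenMP as a stand-in; the actual offload or accelerator programming model differs by vendor and is not shown.

```cpp
// Hedged sketch of the programming pressure many-core parts create: wide
// designs such as Xeon Phi or GPU-class devices need many independent,
// vector-friendly iterations to keep their lightweight cores busy.
// Compile with OpenMP enabled (e.g., -fopenmp); without it the pragma is ignored.
#include <vector>
#include <cstdio>

int main() {
    const long n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    const float a = 0.5f;

    // Each iteration is independent, so hundreds of lightweight cores
    // (and their vector units) can all be used at once.
    #pragma omp parallel for
    for (long i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];

    std::printf("y[0] = %f\n", y[0]);
    return 0;
}
```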

3-D Die Stacking – The packaging of multiple memory and logic dies in a single stack may dramatically increase parts density while significantly increasing local bandwidth, reducing latencies, and reducing energy consumption.

Runtime Systems – Runtime systems are emerging as an alternative to static control for resource management and task scheduling. Because of their overhead costs, runtime systems may not prove optimal for all workload classes, but early experiments on multi-scale, multi-physics problems have demonstrated promising results for efficiency and scalability. More work is required, and it is premature to assume this is the final solution. The coming year may provide sufficient results to validate or refute the approach; that would be an important result in itself.
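As a deliberately simple illustration of the contrast with static control, the sketch below expresses work as tasks handed to a runtime for scheduling, using standard C++ futures as a stand-in. Research runtimes in this space, such as the ParalleX-derived HPX, provide far richer distributed scheduling than this toy example, whose block decomposition and kernel are purely illustrative.

```cpp
// Generic illustration of runtime-managed, task-based execution: work is
// expressed as tasks and the runtime decides where and when they run,
// rather than following a fixed static schedule. std::async is a simple
// stand-in for an HPC runtime system.
#include <future>
#include <functional>
#include <numeric>
#include <vector>
#include <cstdio>

// A toy "physics" kernel: sum one block of a larger domain (illustrative).
static double block_sum(const std::vector<double>& v, std::size_t lo, std::size_t hi) {
    return std::accumulate(v.begin() + lo, v.begin() + hi, 0.0);
}

int main() {
    std::vector<double> domain(1 << 20, 1.0);
    const std::size_t blocks = 8, len = domain.size() / blocks;

    // Launch one task per block; the runtime maps tasks onto threads.
    std::vector<std::future<double>> tasks;
    for (std::size_t b = 0; b < blocks; ++b)
        tasks.push_back(std::async(std::launch::async, block_sum,
                                   std::cref(domain), b * len, (b + 1) * len));

    double total = 0.0;
    for (auto& t : tasks) total += t.get();   // dependences resolved on demand
    std::printf("total = %f\n", total);
    return 0;
}
```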

Lightweight Kernel OS – New work in operating systems this year may lead to environments capable of providing the necessary capability and services while delivering vastly superior efficiency and scalability. Early examples like Catamount and CNK are informing new developments now under way, and potentially future ones under new DOE research programs.

 
