A Sterling Future For HPC

By Nicole Hemsoth

February 11, 2013

For the past decade, keynote speakers at the International Supercomputing Conference (ISC) have examined the major accomplishments in HPC during the preceding year. This time the talk is more ambitious. At ISC ’13 in Leipzig, Germany in June, Thomas Sterling will deliver a keynote that examines the HPC accomplishments over the last decade. He plans to reveal “the true achievement of our field.”

You already know Sterling, of course. He’s famous as the “father of Beowulf,” the commodity computing cluster he and NASA Goddard colleague Donald Becker pioneered in 1994, for which they won a Gordon Bell Prize.

He’s now Professor of Informatics and Computing at the Indiana University School of Informatics and Computing, leading a team conducting research associated with the ParalleX advanced execution model for extreme scale computing. The goal: to develop a new model of computation that will enable a new generation of extreme scale computing systems and applications.

He’s also Chief Scientist and Associate Director of the PTI Center for Research in Extreme Scale Technologies (CREST), Adjunct Professor at Louisiana State University, and CSRI Fellow at Sandia National Laboratories. He has co-authored six books and holds six patents. To top it off, he’s one of HPCwire’s People to Watch for 2013!

His speech will examine the innovations in technology and architectures in HPC, as well as their contributions to science and other fields. He’ll also offer a collection of predictions for the next decade from key HPC leaders.

In anticipation of that talk, HPCwire asked Dr. Sterling to make a few predictions of his own.

HPCwire: It seems like the push toward exascale has lost some momentum over the last year. Do you think exascale will slip into the next decade?

Sterling: This is a complicated issue, but my view is that, if anything, momentum towards exascale in the US is building, not waning. There are two tracks to exascale, both being led by DOE in the US.

NNSA [National Nuclear Security Administration] is driving the incremental track. That is an attempt to extend conventional practices, both in architecture and programming, to deploy an exascale version of what we have today. This is prudent, responsible, and low-risk. It will support important mission-critical workloads, and will present a ready, if not seamless, migration path for legacy codes. However, it’s likely to be limited in applicability, scalability, and efficiency for many problems.

OS/ASCR [the DOE Office of Science, Advanced Scientific Computing Research] is guiding the advanced track. This approach is to create innovations in architecture, system software, and programming models and methods. It could achieve exascale-era computing systems that are truly general-purpose, usable, reliable, and cost-effective (in terms of both operations and power). It’s possible that we’ll even shift paradigms to a new execution model.

NNSA is likely to deliver its incremental platform to the national labs sometime between 2018 and 2020. R&D timeline projections suggest an advanced-class system is likely by 2022 or shortly after.

Still, the process of producing a congressionally validated plan is complex. Its formulation is well along and is being refined, but there are other issues related to how it moves through the obscure (at least to mere mortals such as myself) layers of authorization.

The apparent path for supercomputing is now entering a multifaceted period. We have matured, I think, beyond the adolescent obsession with the next Linpack number. The trends leading to exascale should be measured in terms of progress toward unprecedented accomplishments in science, engineering, societal, commercial, and defense-related goals. I think we are sustaining a mid-course correction that is placing us on the new trend lines: the ones that actually matter.

HPCwire: Will another nation beat the US to the exascale milestone? Which one has the best shot?

Sterling: It is possible of course that another nation will beat the US to the exascale milestone.

However, there is an unstated assumption that “the exascale milestone” is 1 exaflops Rmax [maximal LINPACK performance]. Such systems don’t have to emphasize networking capability or even memory capacity (which, in combination, are the most expensive parts of a balanced system) to gain high marks. Any nation that wants the stature of fielding the first exascale system by this definition can probably do so in five years or slightly more, if it is willing to pay for it, by deploying a stunt machine.
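To make the distinction concrete, here is a back-of-envelope sketch of what a Linpack-defined exaflops implies in raw floating-point capacity alone, independent of memory or network balance. The node specification and HPL efficiency used below are illustrative assumptions, not figures for any planned system.

```c
/* Back-of-envelope arithmetic for what "1 exaflops Rmax" implies.
 * The node specification and HPL efficiency are illustrative
 * assumptions, not a description of any planned system. */
#include <stdio.h>

int main(void)
{
    /* Hypothetical node: 64 cores x 2.0 GHz x 16 double-precision flops/cycle */
    double flops_per_node = 64.0 * 2.0e9 * 16.0;  /* ~2 teraflops peak per node */
    double hpl_efficiency = 0.75;                 /* assumed Rmax/Rpeak ratio on HPL */
    double target_rmax    = 1.0e18;               /* 1 exaflops sustained on Linpack */

    double rpeak_needed = target_rmax / hpl_efficiency;
    double nodes_needed = rpeak_needed / flops_per_node;

    printf("Rpeak required: %.2e flops\n", rpeak_needed);  /* ~1.3e18 */
    printf("Nodes required: %.0f\n", nodes_needed);        /* ~650,000 */
    return 0;
}
```

Nothing in this arithmetic constrains memory capacity or interconnect bandwidth, which is precisely why a machine sized this way can top the Linpack list and still be of limited use for data-intensive workloads.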

Who may get to 1 exaflops Rmax first? History shows that, if not the US, it is likely to be Japan or China, but otherwise I have no deep insight. The EU is taking on new leadership in hardware and is expanding its energies in software infrastructure. Japan continues to extend its own advances with, for example, the K computer (Kei) and Tsubame 2.0. The Chinese have announced Tianhe-2, which is expected to exceed 100 petaflops by 2015.

But the US, guided by DOE programs, is pursuing opportunities with radically different approaches for true general-purpose exascale computation. The X-Stack program begun in September 2012, for example, is targeting dramatic improvements in efficiency, scalability, generality, and programmability, and is aggressively pursuing innovations to improve power consumption and reliability. If the milestone is general-purpose exascale computing, then I think the US is in a compelling leadership position through the DOE partnership of Thuc Hoang and Bill Harrod.

Still, I wish we had a science accomplishments benchmark – something like the X Prize. Perhaps some end-game computational achievement, like resolving the process that produces gamma-ray bursts (including their neutrinos); or some microbiology challenge involving viruses; or perhaps demonstrating climate change at a level that is provably predictive (and yes, I know it’s inherently chaotic). We need something that matters. We need to stop playing the horses and ensure that we can pull the plow.


HPCwire: With the emergence of big data analytics in HPC, and certainly elsewhere, as a growing application area, is there less of a reason to build systems that are just optimized for FLOPS?

Sterling: The answer, of course, is yes. But we don’t have to invoke big data to justify that. Any number of studies of large, multi-scale, multi-physics applications with short transient time constants and long times to steady state show the relatively high importance of memory access patterns and system-wide data movement.

Relatively speaking, floating point capacity is easy to achieve compared to effective memory access bandwidth or low overhead control of complex parallel execution. In the long term, we need to bridge the gap between data that computers treat as actionable, and knowledge that humans act upon. However this is achieved, it will involve meta-data more like that of advanced graph analytic problems and less like DGEMM.
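A rough arithmetic-intensity comparison illustrates the gap. The sketch below contrasts dense matrix multiply (DGEMM) with a graph-style traversal; the flop and byte counts and the machine-balance figure are illustrative assumptions rather than measurements of any particular system.

```c
/* Rough arithmetic-intensity (flops per byte moved) comparison, to show
 * why memory bandwidth rather than flops limits irregular workloads.
 * All figures are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    /* DGEMM on n x n matrices: ~2n^3 flops over ~3n^2 doubles (8 bytes each). */
    double n = 4096.0;
    double dgemm_intensity = (2.0 * n * n * n) / (3.0 * n * n * 8.0);

    /* Graph/sparse traversal: on the order of one flop (or none) per
     * 8-byte index or pointer fetched; call it ~0.1 flops/byte. */
    double graph_intensity = 0.1;

    /* Assumed machine balance: ~10 flops available per byte of sustained
     * memory bandwidth. */
    double machine_balance = 10.0;

    printf("DGEMM (n=%.0f):  %6.1f flops/byte -> compute-bound (above %.0f)\n",
           n, dgemm_intensity, machine_balance);
    printf("Graph traversal: %6.1f flops/byte -> memory-bandwidth-bound\n",
           graph_intensity);
    return 0;
}
```

On those assumed numbers, DGEMM performs hundreds of flops per byte moved while a graph kernel performs a fraction of one, which is why the latter stresses memory and network rather than floating-point units.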

The problem is cost, and the dominance that Linpack still holds for some. It is less expensive to build a high-Rmax system with cheap flops than to build a balanced architecture with large main memories and high-bandwidth, low-latency networks. Until we define a new standard of quality, we are likely to drift back into our comfort zone and go for the flops.

HPCwire: Given that scientific computing will need both physics simulations and analytics going forward, should we be designing different types of machines for each of these application areas, or is there enough similarity between the two that a single architecture can suffice?

Sterling: For each application algorithm, there may be an optimal balance of computation, memory, and communication resources and structures. Examples like Anton, and somewhat more generic GPU components, certainly demonstrate exceptional capabilities for specific workloads and flows.

It is tempting to prescribe particular machine designs for specific algorithms. Alternatively, there have been proposals to configure heterogeneous systems with ensembles of highly specialized functional units, any class of which may be employed for a given problem, allowing others to lie fallow. The greatest value of such optimizations may ultimately be in the area of energy, which would focus primarily on data movement.

Such structural variations may ultimately be important when Moore’s Law does flatten out beyond a nanometer of feature size. The greatest challenge is to satisfy not any single application, but the mix of applications that must be supported by any truly large-scale deployed system. My inclination is that at the system level we will generally shoot for broadly general-purpose, while at the local level we will choose to use or exclude specialized functional units based on expectations of workloads to be supported at individual sites of deployment. The memory wall is still the major challenge for many classes of application, both numeric-intensive and big data. Improvements in this aspect of system architecture will significantly enhance performance for both genres of computation.


HPCwire: To the public, much of the work of supercomputing seems esoteric, many of the applications incomprehensible. Can we point to the results of work done by supercomputers that connects to the concerns of people outside the HPC community, that show it has made a difference in their lives?

Sterling: It has been said that supercomputing is the third pillar of human exploration and understanding, following empiricism (from the dawn of humanity) and theory (in recent centuries, with some priceless gems more than two millennia ago from people like Euclid and Eratosthenes). It provides a new window onto the universe – mega, macro, and micro. It allows us to explain the past, control the present and, in certain restricted but important cases, predict the future.

Challenges to the US and world societies in the 21st century require solutions to shared scientific and engineering problems that will affect this and the next two generations if quality of life is to improve and the disparities in access to life-enabling resources are to be mitigated.

One example: There is an interrelationship between determining the possible effect of anthropogenic chemicals on global climate change and the future availability of safe, healthy, low-cost energy. Both depend on bringing the highest-capability computing to bear on these problems. Climate modeling must operate at significantly greater resolution in space, time, chemistry, and physical phenomenology before we can have any certainty about the degree of change that is of human origin. Should it prove, as many expect, that the burning of fossil fuels is a principal contributing factor aggravating global warming, then we will need to apply supercomputing to the design and operation of controlled-fusion reactors (e.g., ITER). This could be the source of abundant, safe, and (eventually) low-cost electrical power that will ultimately save human civilization.

Supercomputers are also exploring the chemistry, processes, and materials for mobile energy storage in order to dramatically extend the travel range of electric vehicles.

Finally, treating the physical human condition as a system-engineering and simulation problem demands exascale computing. That may provide the ultimate understanding of diseases and their treatments, whether through drugs, organ regeneration, or supplementary replacement devices.

And if these driving issues are beyond the ken of the mainstream citizenry, certainly access to information of all forms through myriad search engines, online purchasing, and interaction with friends, families, and social groups is highly visible on a daily basis. Entertainment such as on-demand movies and interactive multi-player gaming employs computing resources at the same scale as the high performance computing systems used for technical computing. Then there are the less visible but pervasive contributions of high performance computing in such areas as national security, air traffic control, weather forecasting, and many other applications that silently serve all of us on a daily basis.

It is not clear that our community has adequately conveyed the importance and accomplishments of the field of high performance computing to the broad public in a way that they can understand and appreciate. When I consider how other fields successfully expose our citizenry to their foundational ideas, I realize that they do so through K-12 education.

Children learn about telescopes, microscopes, and even particle accelerators. But they don’t learn about supercomputers. The concept of simulation is something that a student may not encounter until college, and then only in the sciences and engineering disciplines. I believe that we need to build supercomputing into teaching at all levels, so that every young person in the U.S. is routinely exposed to it as one of the key means of advancing science and technology in the 21st century.

HPCwire: What do you think 2013 will bring to the world of HPC? Any predictions you care to make?

Sterling: 2013 may prove to be the pivotal year for HPC, although I may be a bit impatient, and history may look back and decide that 2014 was the real delineating point. Here are a few things to watch:

MPI-3 – A major overhaul of MPI has been completed with the release of the MPI-3 specification. This year we will see if the changes incorporated will get traction and will extend the utility of the highly successful predecessor programming model to areas that were not well-served before.
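As one concrete illustration of what the new specification adds, the sketch below uses an MPI-3 non-blocking collective, MPI_Iallreduce, to overlap a global reduction with independent computation. The local_work() routine is a hypothetical placeholder standing in for whatever computation an application can perform while the collective proceeds.

```c
/* Minimal sketch of an MPI-3 non-blocking collective: start a reduction,
 * overlap it with independent work, then complete it. Requires an
 * MPI-3-capable implementation; local_work() is a hypothetical placeholder. */
#include <mpi.h>
#include <stdio.h>

static void local_work(void)
{
    /* Placeholder for computation that does not depend on the reduction. */
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank, global = 0.0;
    MPI_Request req;

    /* Start the reduction without blocking... */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* ...do unrelated work while it is in flight... */
    local_work();

    /* ...then wait for completion before using the result. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("sum of ranks = %.0f\n", global);

    MPI_Finalize();
    return 0;
}
```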

Lightweight Core Architectures – Many organizations, including the world’s largest microprocessor manufacturer (Intel), are guessing that a new generation of microprocessor architecture will be required to fully realize the promise of exascale computing. MIC (Xeon Phi) represents a new direction in processor core design. ARM is another path, being pursued by Russia, the EU and, in the US, Nvidia, in search of an improved balance of processing logic.

3-D Die Stacking – The packaging of multiple memory and logic dies in a single stack may dramatically increase parts density while significantly increasing local bandwidth and reducing latencies, as well as reducing energy consumption.

Runtime Systems – Runtime systems are emerging as an alternative to static control for resource management and task scheduling. Given their overhead costs, runtime systems may not prove optimal for all workload classes, but early experiments on multi-scale, multi-physics problems have demonstrated promising results for efficiency and scalability. More work is required, and it is premature to assume this is the final solution. The coming year may provide sufficient results to validate or refute the approach; that would be an important result either way.
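As a loose analogy for the static-versus-runtime distinction (using OpenMP loop scheduling rather than ParalleX or any DOE runtime system), the sketch below shows how deferring work assignment to the runtime can balance an irregular workload that a fixed partition handles poorly. The work() function is a hypothetical stand-in for iterations of uneven cost.

```c
/* Loose analogy for static vs. runtime-managed control, using OpenMP
 * loop scheduling (not ParalleX or any DOE runtime system). With
 * irregular per-iteration cost, deferring assignment to the runtime
 * ("dynamic") balances load better than a fixed ("static") partition.
 * Compile with: cc -fopenmp sched_demo.c */
#include <omp.h>
#include <stdio.h>

/* Hypothetical irregular workload: cost grows with the iteration index. */
static double work(int i)
{
    double x = 0.0;
    for (int k = 0; k < i * 1000; k++)
        x += (double)k * 1e-9;
    return x;
}

int main(void)
{
    const int n = 2000;
    double sum = 0.0;
    double t0 = omp_get_wtime();

    /* Swap schedule(dynamic, 16) for schedule(static) to compare strategies. */
    #pragma omp parallel for schedule(dynamic, 16) reduction(+ : sum)
    for (int i = 0; i < n; i++)
        sum += work(i);

    printf("sum = %f, elapsed = %.3f s\n", sum, omp_get_wtime() - t0);
    return 0;
}
```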

Lightweight Kernel OS – New work in operating systems this year may lead to environments capable of providing necessary capability and services while delivering vastly superior efficiency and scalability. Early examples like Catamount and CNK are informing new developments conducted currently, and potentially in the future, under new DOE research programs.

 

