A Sterling Future For HPC

By Nicole Hemsoth

February 11, 2013

For the past decade, keynote speakers at the International Supercomputing Conference (ISC) have examined the major accomplishments in HPC during the preceding year. This time the talk is more ambitious. At ISC ’13 in Leipzig, Germany in June, Thomas Sterling will deliver a keynote that examines the HPC accomplishments over the last decade. He plans to reveal “the true achievement of our field.”

You already know Sterling, of course. He’s famous as the “father of Beowulf,” the commodity computing cluster he and NASA Goddard colleague Donald Becker pioneered in 1994, for which they won a Gordon Bell Prize.

He’s now Professor of Informatics and Computing at the Indiana University School of Informatics and Computing, leading a team conducting research associated with the ParalleX advanced execution model for extreme scale computing. The goal: to develop a new model of computation that will enable a new generation of extreme scale computing systems and applications.

He’s also Chief Scientist and Associate Director of the PTI Center for Research in Extreme Scale Technologies (CREST), Adjunct Professor at Louisiana State University, and CSRI Fellow at Sandia National Laboratories. He has co-authored six books and holds six patents. To top it off, he’s one of HPCwire’s People to Watch for 2013!

His speech will examine the innovations in technology and architectures in HPC, as well as their contributions to science and other fields. He’ll also offer a collection of predictions for the next decade from key HPC leaders.

In anticipation of that talk, HPCwire asked Dr. Sterling to make a few predictions of his own.

HPCwire: It seems like the push toward exascale has lost some momentum over the last year. Do you think exascale will slip into the next decade?

Sterling: This is a complicated issue, but my view is that, if anything, momentum towards exascale in the US is building, not waning. There are two tracks to exascale, both being led by DOE in the US.

NNSA [National Nuclear Security Administration] is driving the incremental track. That is an attempt to extend conventional practices, both in architecture and programming, to deploy an exascale version of what we have today. This is prudent, responsible, and low-risk. It will support important mission-critical workloads, and will present a ready, if not seamless, migration path for legacy codes. However, it’s likely to be limited in applicability, scalability, and efficiency for many problems.

OS/ASCR [the DOE Office of Science’s Advanced Scientific Computing Research program] is guiding the advanced track. This approach is to create innovations in architecture, system software, and programming models and methods. It could achieve exascale-era computing systems that are truly general-purpose, usable, reliable, and cost-effective (in terms of both operations and power). It’s possible that we’ll even shift paradigms to a new execution model.

NNSA is likely to deliver its incremental platform to the national labs sometime between 2018 and 2020. R&D timeline projections suggest an advanced-class system is likely by 2022 or shortly after.

Still, the process of producing a congressionally validated plan is complex. Its formulation is well along and is being refined, but there are other issues related to how it moves through the obscure (at least to mere mortals such as myself) layers of authorization.

The apparent path for supercomputing is now entering a multifaceted period. We have matured, I think, beyond the adolescent obsession of the next Linpack number. The trends leading to exascale should be measured in terms of progress toward unprecedented accomplishments in science, engineering, societal, commercial, and defense-related goals. I think we are sustaining a mid-course correction that is placing us on the new trend lines: the ones that actually matter.

HPCwire: Will another nation beat the US to the exascale milestone? Which one has the best shot?

Sterling: It is possible of course that another nation will beat the US to the exascale milestone.

However, there is an unstated assumption that “the exascale milestone” is 1 exaflops Rmax [maximal LINPACK performance]. Such systems don’t have to emphasize networking capability or even memory capacity (which, in combination, are the most expensive part of the system when balanced) to gain high marks. Any nation that wants the stature of fielding the first exascale system by this definition can probably do so in five years or slightly more, if it is willing to pay for it, by deploying a stunt machine.

Who may get to 1 exaflops Rmax first? History shows that, if not the US, it is likely to be Japan or China, but otherwise I have no deep insight. The EU is taking on new leadership in hardware and is expanding its energies in software infrastructure. Japan continues to extend its own advances with, for example, Kei and Tsubame-2. The Chinese have announced Tianhe-2, to exceed 100 petaflops by 2015.

But the US, guided by DOE programs, is pursuing opportunities with radically different approaches for true general-purpose exascale computation. The X-Stack program begun in September 2012, for example, is targeting dramatic improvements in efficiency, scalability, generality, and programmability, and is aggressively pursuing innovations to improve power consumption and reliability. If the milestone is general-purpose exascale computing, then I think the US is in a compelling leadership position through the DOE partnership of Thuc Hoang and Bill Harrod.

Still, I wish we had a science accomplishments benchmark – something like the X Prize. Perhaps some end-game computational achievement, like proving the process that produces gamma-ray bursts (including neutrinos); or some microbiology challenge involving viruses; or perhaps demonstrating climate change at a level that is provably predictive (and yes, I know it’s inherently chaotic). We need something that matters. We need to stop playing the horses and ensure that we can pull the plow.


HPCwire: With the emergence of big data analytics in HPC, and certainly elsewhere, as a growing application area, is there less of a reason to build systems that are just optimized for FLOPS?

Sterling: The answer, of course, is yes. But we don’t have to invoke big data to justify that. Any number of studies of large, multi-scale, multi-physics applications with short transient time constants and long times to steady state show the relatively high importance of memory access patterns and system-wide data movement.

Relatively speaking, floating point capacity is easy to achieve compared to effective memory access bandwidth or low overhead control of complex parallel execution. In the long term, we need to bridge the gap between data that computers treat as actionable, and knowledge that humans act upon. However this is achieved, it will involve meta-data more like that of advanced graph analytic problems and less like DGEMM.

The problem is cost, and the continued fixation in some quarters on Linpack. It is less expensive to build a high Rmax system with cheap flops than to build a balanced architecture of large main memories and high-bandwidth, low-latency networks. Until we define a new standard of quality, we are likely to drift back into our comfort zone and go for the flops.
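To make the contrast concrete, here is a rough, illustrative C sketch (ours, not Sterling’s, and not a benchmark). The DGEMM-like dense kernel reuses each loaded operand many times, so floating point work dominates; the CSR-style traversal, standing in for graph analytics, performs roughly one addition per irregular memory access and is bound by memory latency and bandwidth. The array layouts are assumptions for the sketch.

```c
/* Illustrative only: two kernels at opposite ends of the
   arithmetic-intensity spectrum. Row-major dense matrices and a CSR
   sparse structure are assumed layouts for this sketch. */

/* DGEMM-like dense multiply: O(n^3) flops over O(n^2) data, so each
   operand is reused ~n times and the kernel is compute-bound. */
void dense_mm(int n, const double *A, const double *B, double *C)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double sum = 0.0;
            for (int k = 0; k < n; k++)
                sum += A[i * n + k] * B[k * n + j];
            C[i * n + j] = sum;
        }
}

/* Graph-style traversal over a CSR structure: roughly one add per
   pointer-chased, irregular load, so memory access dominates. */
double sum_neighbor_values(int nv, const int *row_ptr,
                           const int *col_idx, const double *value)
{
    double total = 0.0;
    for (int v = 0; v < nv; v++)
        for (int e = row_ptr[v]; e < row_ptr[v + 1]; e++)
            total += value[col_idx[e]];
    return total;
}
```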

HPCwire: Given that scientific computing will need both physics simulations and analytics going forward, should we be designing different types of machines for each of these application areas, or is there enough similarity between the two that a single architecture can suffice?

Sterling: For each application algorithm, there may be an optimal balance of computation, memory, and communication resources and structures. Examples like Anton, and somewhat more-generic GPU components, certainly demonstrate exceptional capabilities for specific workloads and flows.

It is tempting to prescribe particular machine designs for specific algorithms. Alternatively, there have been proposals to configure heterogeneous systems with ensembles of highly specialized functional units, any class of which may be employed for a given problem, allowing others to lie fallow. The greatest value of such optimizations may ultimately be in the area of energy, which would focus primarily on data movement.

Such structural variations may ultimately be important when Moore’s Law does flatten out beyond a nanometer of feature size. The greatest challenge is to satisfy not any single application, but the mix of applications that must be supported by any truly large-scale deployed system. My inclination is that at the system level we will generally shoot for broadly general-purpose, while at the local level we will choose to use or exclude specialized functional units based on expectations of workloads to be supported at individual sites of deployment. The memory wall is still the major challenge for many classes of application, both numeric-intensive and big data. Improvements in this aspect of system architecture will significantly enhance performance for both genres of computation.


HPCwire: To the public, much of the work of supercomputing seems esoteric, many of the applications incomprehensible. Can we point to the results of work done by supercomputers that connects to the concerns of people outside the HPC community, that show it has made a difference in their lives?

Sterling: It has been said that supercomputing is the third pillar of human exploration and understanding, following empiricism (from the dawn of humanity) and theory (in recent centuries, with some priceless gems more than two millennia ago from people like Euclid and Eratosthenes). It provides a new window on to the universe – mega, macro, and micro. It allows us to explain the past, control the present and, in certain restricted but important cases, predict the future.

Challenges to the US and world societies in the 21st century require solutions to shared scientific and engineering problems that will affect this and the next two generations if quality of life is to improve and the disparities in access to life-enabling resources are to be mitigated.

One example: There is an interrelationship between determining the possible effect of anthropogenic chemicals on global climate change, and the future availability of safe, healthy, low cost energy. Both depend on bringing the highest capability computing to bear on these problems. Climate modeling must operate at significantly greater resolution in space, time, chemistry and physical phenomenology for any certainty about the degree of change that is of human origin. Should it prove to be, as many expect, that the burning of fossil fuels is a principal contributing factor aggravating global warming, then we will need to apply supercomputing to the design and operation of controlled-fusion reactors (e.g., ITER). This could be the source of abundant, safe, and (eventually) low-cost electrical power that will ultimately save human civilization.

Supercomputers are also exploring the chemistry, processes, and materials for mobile energy storage in order to dramatically extend the travel range of electric vehicles.

Finally, treating the physical human condition as a system-engineering and simulation problem demands exascale computing. That may provide the ultimate understanding of diseases and their treatments, whether through drugs, organ regeneration, or supplementary replacement devices.

And if these driving issues are beyond the ken of the mainstream citizenry, certainly access to information of all forms through myriad search engines, on-line purchasing, and interaction with friends, families, and social groups are highly visible on a daily basis. Entertainment such as on-demand movies and interactive multi-player gaming employs computing resources at the same scale as the high performance computing systems used for technical computing. Then there are the less visible but pervasive contributions of high performance computing in such areas as national security, air traffic control, weather forecasting, and many other applications that silently serve all of us on a daily basis.

It is not clear that our community has adequately conveyed the importance and accomplishments of the field of high performance computing to the broad public in a way that they can understand and appreciate. When I consider how other fields successfully expose our citizenry to their foundation ideas, I realize that they play a role in K-12 education.

Children learn about telescopes, microscopes, and even particle accelerators. But they don’t learn about supercomputers. The concept of simulation is something that a student may not encounter until college, and then only in the sciences and engineering disciplines. I believe that we need to build supercomputing into teaching at every level, so that everyone in the U.S. is routinely exposed to it as one of the most important means of advancing the goals of science and technology in the 21st century.

HPCwire: What do you think 2013 will bring to the world of HPC? Any predictions you care to make?

Sterling: 2013 may prove to be the pivotal year for HPC, although I may be a bit impatient, and history may look back and decide that 2014 was the delineating point. Here are a few things to watch:

MPI-3 – A major overhaul of MPI has been completed with the release of the MPI-3 specification. This year we will see whether the changes gain traction and extend the utility of this highly successful programming model to areas it has not served well before.
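As one concrete illustration of what the new specification adds (a minimal sketch of ours, not part of the interview), MPI-3 introduces nonblocking collectives such as MPI_Iallreduce, which let a reduction be started and then overlapped with independent computation before the result is needed:

```c
/* Minimal MPI-3 sketch: a nonblocking allreduce overlapped with other
   work. Requires an MPI-3 implementation; build with e.g. mpicc. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank, global = 0.0;
    MPI_Request req;

    /* MPI_Iallreduce (new in MPI-3) starts the reduction and returns
       immediately instead of blocking like MPI_Allreduce. */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* ... independent computation could proceed here ... */

    MPI_Wait(&req, MPI_STATUS_IGNORE);   /* result is now valid */
    if (rank == 0)
        printf("sum of ranks = %g\n", global);

    MPI_Finalize();
    return 0;
}
```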

Lightweight Core Architectures – Many organizations, including the world’s largest microprocessor manufacturer (Intel), are guessing that a new generation of microprocessor architecture will be required to fully realize the promise of exascale computing. MIC (Xeon Phi) represents a new direction in processor core design. ARM is another path, being pursued by Russia, the EU and, in the US, Nvidia, to find an improved balance of processing logic.

3-D Die Stacking – The packaging of multiple memory and logic dies in a single stack may dramatically increase parts density while significantly increasing local bandwidth and reducing latencies, as well as reducing energy consumption.

Runtime Systems – Runtime systems are emerging as an alternative to static control for resource management and task scheduling. Because of their overhead costs, they may not prove optimal for all workload classes, but early experiments on multi-scale, multi-physics problems have demonstrated promising results for efficiency and scalability. More work is required, and it is premature to assume this is the final solution. The coming year may provide sufficient results to validate or refute this approach – an important result either way.
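Sterling’s own work in this direction centers on the ParalleX execution model and runtime systems built on it, such as HPX. As a much smaller-scale analogue only (a sketch of ours, not the ParalleX runtime), OpenMP tasks in C illustrate the basic idea: the programmer exposes units of work, and the runtime, rather than a static schedule, decides when and where they execute.

```c
/* Illustrative analogue only: OpenMP tasks let a runtime schedule an
   irregular, recursive workload dynamically. A production code would
   add a cutoff below which it recurses serially.
   Build with e.g. gcc -fopenmp. */
#include <omp.h>
#include <stdio.h>

static long fib(int n)
{
    if (n < 2)
        return n;

    long a, b;
    #pragma omp task shared(a)      /* runtime decides which thread runs this */
    a = fib(n - 1);
    #pragma omp task shared(b)
    b = fib(n - 2);
    #pragma omp taskwait            /* wait for both child tasks */
    return a + b;
}

int main(void)
{
    long result = 0;
    #pragma omp parallel
    {
        #pragma omp single          /* one thread seeds the task graph */
        result = fib(20);
    }
    printf("fib(20) = %ld\n", result);
    return 0;
}
```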

Lightweight Kernel OS – New work in operating systems this year may lead to environments capable of providing the necessary capability and services while delivering vastly superior efficiency and scalability. Early examples like Catamount and CNK are informing new developments now under way, and potentially future work under new DOE research programs.

 

Related Articles:

HPC Programming in the Age of Multicore: One Man’s View

UK Creates Massive 200,000-Core ‘HPC Service’

Experts Discuss the Future of Supercomputers

Waiting for Exascale

DOE to Field Pre-Exascale Supercomputers Within Four Years
