A Sterling Future For HPC

By Nicole Hemsoth

February 11, 2013

For the past decade, keynote speakers at the International Supercomputing Conference (ISC) have examined the major accomplishments in HPC during the preceding year. This time the talk is more ambitious. At ISC ’13 in Leipzig, Germany, in June, Thomas Sterling will deliver a keynote that examines the HPC accomplishments of the last decade. He plans to reveal “the true achievement of our field.”

You already know Sterling, of course. He’s famous as the “father of Beowulf,” the commodity computing cluster he and NASA Goddard colleague Donald Becker pioneered in 1994, for which they won a Gordon Bell Prize.

He’s now Professor of Informatics and Computing at the Indiana University School of Informatics and Computing, leading a team conducting research associated with the ParalleX advanced execution model for extreme scale computing. The goal: to develop a new model of computation that will enable a new generation of extreme scale computing systems and applications.

He’s also Chief Scientist and Associate Director of the PTI Center for Research in Extreme Scale Technologies (CREST), Adjunct Professor at Louisiana State University, and CSRI Fellow at Sandia National Laboratories. He has co-authored six books and holds six patents. To top it off, he’s one of HPCwire’s People to Watch for 2013!

His speech will examine the innovations in technology and architectures in HPC, as well as their contributions to science and other fields. He’ll also offer a collection of predictions for the next decade from key HPC leaders.

In anticipation of that talk, HPCwire asked Dr. Sterling to make a few predictions of his own.

HPCwire: It seems like the push toward exascale has lost some momentum over the last year. Do you think exascale will slip into the next decade?

Sterling: This is a complicated issue, but my view is that, if anything, momentum towards exascale in the US is building, not waning. There are two tracks to exascale, both being led by DOE in the US.

NNSA [National Nuclear Security Administration] is driving the incremental track. That is an attempt to extend conventional practices, both in architecture and programming, to deploy an exascale version of what we have today. This is prudent, responsible, and low-risk. It will support important mission-critical workloads, and will present a ready, if not seamless, migration path for legacy codes. However, it’s likely to be limited in applicability, scalability, and efficiency for many problems.

The DOE Office of Science’s ASCR [Advanced Scientific Computing Research] program is guiding the advanced track. This approach is to create innovations in architecture, system software, and programming models and methods. It could achieve exascale-era computing systems that are truly general-purpose, usable, reliable, and cost-effective (in terms of both operations and power). It’s possible that we’ll even shift paradigms to a new execution model.

NNSA is likely to deliver its incremental platform to the national labs sometime between 2018 and 2020. R&D timeline projections suggest an advanced-class system is likely by 2022 or shortly after.

Still, the process of producing a congressionally validated plan is complex. Its formulation is well along and is being refined, but there are other issues related to how it moves through the obscure (at least to mere mortals such as myself) layers of authorization.

The apparent path for supercomputing is now entering a multifaceted period. We have matured, I think, beyond the adolescent obsession with the next Linpack number. The trends leading to exascale should be measured in terms of progress toward unprecedented accomplishments in science, engineering, societal, commercial, and defense-related goals. I think we are making a mid-course correction that is placing us on the new trend lines: the ones that actually matter.

HPCwire: Will another nation beat the US to the exascale milestone? Which one has the best shot?

Sterling: It is possible, of course, that another nation will beat the US to the exascale milestone.

However, there is an unstated assumption that “the exascale milestone” means 1 exaflops Rmax [maximal LINPACK performance]. Such systems don’t have to emphasize networking capability or even memory capacity (which, in combination, are the most expensive parts of a balanced system) to gain high marks. Any nation that wants the stature of fielding the first exascale system by this definition can probably do so in five years or slightly more, if it is willing to pay for it, by deploying a stunt machine.

Who may get to 1 exaflops Rmax first? History shows that, if not the US, it is likely to be Japan or China, but otherwise I have no deep insight. The EU is taking on new leadership in hardware and is expanding its energies in software infrastructure. Japan continues to extend its own advances with, for example, the K computer (Kei) and Tsubame-2. The Chinese have announced Tianhe-2, expected to exceed 100 petaflops by 2015.

But the US, guided by DOE programs, is pursuing opportunities with radically different approaches for true general-purpose exascale computation. The X-Stack program begun in September 2012, for example, is targeting dramatic improvements in efficiency, scalability, generality, and programmability, and is aggressively pursuing innovations to improve power consumption and reliability. If the milestone is general-purpose exascale computing, then I think the US is in a compelling leadership position through the DOE partnership of Thuc Hoang and Bill Harrod.

Still, I wish we had a science-accomplishments benchmark, something like the X Prize. Perhaps some end-game computational achievement, like proving out the process that produces gamma-ray bursts (including neutrinos); or some microbiology challenge involving viruses; or perhaps demonstrating climate change at a level that is provably predictive (and yes, I know it’s inherently chaotic). We need something that matters. We need to stop playing the horses and ensure that we can pull the plow.


HPCwire: With the emergence of big data analytics in HPC, and certainly elsewhere, as a growing application area, is there less of a reason to build systems that are just optimized for FLOPS?

Sterling: The answer, of course, is yes. But we don’t have to invoke big data to justify that. Any number of studies of large, multi-scale, multi-physics applications with short transient time constants and long times to steady state show the relatively high importance of memory access patterns and system-wide data movement.

Relatively speaking, floating point capacity is easy to achieve compared to effective memory access bandwidth or low-overhead control of complex parallel execution. In the long term, we need to bridge the gap between the data that computers treat as actionable and the knowledge that humans act upon. However this is achieved, it will involve metadata more like that of advanced graph-analytic problems and less like DGEMM.
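To see that contrast in concrete terms, compare the inner loops of a dense DGEMM-style kernel with a simple graph-analytics kernel. The C sketch below is purely illustrative (the function names, data layout, and sizes are invented for this example): the dense loop reuses each operand roughly n times and is limited by arithmetic, while the graph loop performs about one operation per irregular memory reference, so memory access rather than flops sets its speed.

    /* Illustration of arithmetic intensity; names and layout are hypothetical. */
    #include <stddef.h>

    /* Dense matrix multiply: ~2*n^3 flops over 3*n^2 words of data.
     * Each value fetched is reused ~n times -> compute-bound. */
    void dense_matmul(size_t n, const double *A, const double *B, double *C)
    {
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < n; j++) {
                double sum = 0.0;
                for (size_t k = 0; k < n; k++)
                    sum += A[i * n + k] * B[k * n + j];
                C[i * n + j] = sum;
            }
    }

    /* Graph kernel over a CSR structure: roughly one add per two irregular
     * memory references, with data-dependent addressing -> memory-bound. */
    double sum_neighbor_weights(size_t nverts, const size_t *row_ptr,
                                const size_t *col_idx, const double *weight)
    {
        double total = 0.0;
        for (size_t v = 0; v < nverts; v++)
            for (size_t e = row_ptr[v]; e < row_ptr[v + 1]; e++)
                total += weight[col_idx[e]];   /* indirect, cache-unfriendly */
        return total;
    }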

The problem is cost, and the continued dominance of Linpack in some quarters. It is less expensive to build a high-Rmax system with cheap flops than to build a balanced architecture with large main memories and high-bandwidth, low-latency networks. Until we define a new standard of quality, we are likely to drift back into our comfort zone and go for the flops.

HPCwire: Given that scientific computing will need both physics simulations and analytics going forward, should we be designing different types of machines for each of these application areas, or is there enough similarity between the two that a single architecture can suffice?

Sterling: For each application algorithm, there may be an optimal balance of computation, memory, and communication resources and structures. Examples like Anton and somewhat more generic GPU components certainly demonstrate exceptional capabilities for specific workloads and flows.

It is tempting to prescribe particular machine designs for specific algorithms. Alternatively, there have been proposals to configure heterogeneous systems with ensembles of highly specialized functional units, any class of which may be employed for a given problem, allowing others to lie fallow. The greatest value of such optimizations may ultimately be in the area of energy, which would focus primarily on data movement.

Such structural variations may ultimately be important when Moore’s Law does flatten out beyond a nanometer of feature size. The greatest challenge is to satisfy not any single application, but the mix of applications that must be supported by any truly large-scale deployed system. My inclination is that at the system level we will generally shoot for broadly general-purpose, while at the local level we will choose to use or exclude specialized functional units based on expectations of workloads to be supported at individual sites of deployment. The memory wall is still the major challenge for many classes of application, both numeric-intensive and big data. Improvements in this aspect of system architecture will significantly enhance performance for both genres of computation.


HPCwire: To the public, much of the work of supercomputing seems esoteric, many of the applications incomprehensible. Can we point to results of work done by supercomputers that connect to the concerns of people outside the HPC community, and show that it has made a difference in their lives?

Sterling: It has been said that supercomputing is the third pillar of human exploration and understanding, following empiricism (from the dawn of humanity) and theory (in recent centuries, with some priceless gems more than two millennia ago from people like Euclid and Eratosthenes). It provides a new window onto the universe – mega, macro, and micro. It allows us to explain the past, control the present and, in certain restricted but important cases, predict the future.

Challenges to the US and world societies in the 21st century require solutions to shared scientific and engineering problems that will affect this and the next two generations if quality of life is to improve and the disparities in access to life-enabling resources are to be mitigated.

One example: There is an interrelationship between determining the possible effect of anthropogenic chemicals on global climate change, and the future availability of safe, healthy, low-cost energy. Both depend on bringing the highest-capability computing to bear on these problems. Climate modeling must operate at significantly greater resolution in space, time, chemistry, and physical phenomenology for any certainty about the degree of change that is of human origin. Should it prove, as many expect, that the burning of fossil fuels is a principal contributing factor aggravating global warming, then we will need to apply supercomputing to the design and operation of controlled-fusion reactors (e.g., ITER). These could be the source of abundant, safe, and (eventually) low-cost electrical power that will ultimately save human civilization.

Supercomputers are also exploring the chemistry, processes, and materials for mobile energy storage in order to dramatically extend the travel range of electric vehicles.

Finally, treating the physical human condition as a system-engineering and simulation problem demands exascale computing. That may provide the ultimate understanding of diseases and their treatments, whether through drugs, organ regeneration, or supplementary replacement devices.

And if these driving issues are beyond the ken of the mainstream citizenry, certainly access to information of all forms through myriad search engines, online purchasing, and interaction with friends, family, and social groups are highly visible parts of daily life. Entertainment such as on-demand movies and interactive multiplayer gaming employs computing resources at the same scale as the high performance computing systems used for technical computing. Then there are the less visible but pervasive contributions of high performance computing in areas such as national security, air traffic control, and weather forecasting, among many other applications that silently serve all of us on a daily basis.

It is not clear that our community has adequately conveyed the importance and accomplishments of the field of high performance computing to the broad public in a way that they can understand and appreciate. When I consider how other fields successfully expose our citizenry to their foundational ideas, I realize that they do so in part through K-12 education.

Children learn about telescopes, microscopes, and even particle accelerators. But they don’t learn about supercomputers. The concept of simulation is something a student may not encounter until college, and then only in the sciences and engineering disciplines. I believe we need to build supercomputing into teaching at all levels, so that every young person in the U.S. is routinely exposed to it as one of the few important means of advancing goals in science and technology in the 21st century.

HPCwire: What do you think 2013 will bring to the world of HPC? Any predictions you care to make?

Sterling: 2013 may prove to be the pivotal year for HPC, although I may be a bit impatient; history may look back and decide that 2014 was the delineating point. Here are a few things to watch:

MPI-3 – A major overhaul of MPI has been completed with the release of the MPI-3 specification. This year we will see whether the changes incorporated gain traction and extend the utility of this highly successful programming model to areas that were not well-served before.
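One concrete MPI-3 addition is non-blocking collectives, which let a rank overlap global communication with local computation. Below is a minimal, illustrative C sketch (the values are placeholders, not from any real application): MPI_Iallreduce starts a global sum in the background, the rank is free to do other work, and MPI_Wait completes the operation.

    /* Minimal sketch of an MPI-3 non-blocking collective. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double local = (double)rank;   /* placeholder for a locally computed value */
        double global = 0.0;
        MPI_Request req;

        /* New in MPI-3: the reduction proceeds in the background... */
        MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD, &req);

        /* ...while this rank can do other useful work here... */

        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* ...then completes it. */

        if (rank == 0)
            printf("sum of ranks = %g\n", global);

        MPI_Finalize();
        return 0;
    }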

Lightweight Core Architectures – Many organizations, including the world’s largest microprocessor manufacturer (Intel), are guessing that a new generation of microprocessor architecture will be required to fully realize the promise of exascale computing. MIC (Xeon Phi) represents a new direction in processor core design. ARM is another path, being pursued by Russia, the EU, and, in the US, Nvidia, to find an improved balance of processing logic.

3-D Die Stacking – The packaging of multiple memory and logic dies in a single stack may dramatically increase parts density while significantly increasing local bandwidth, reducing latencies, and reducing energy consumption.

Runtime Systems – Runtime systems are emerging as an alternative to static control for resource management and task scheduling. Given their overhead costs, runtime systems may not prove optimal for all workload classes, but early experiments on multi-scale, multi-physics problems have demonstrated promising results for efficiency and scalability. More work is required, and it is premature to assume this is the final solution; the coming year may provide sufficient results to validate or refute the approach, an important result if achieved.
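Research runtimes in the ParalleX mold are beyond a short listing, but OpenMP tasks, used below purely as a stand-in, convey the basic idea of runtime control: the programmer exposes units of work, and the runtime, rather than a fixed static schedule, decides where and when each one executes.

    /* Flavor of runtime-managed task scheduling. OpenMP tasks are shown here
     * only as a stand-in; research runtimes push the idea much further. */
    #include <stdio.h>

    static long fib(int n)
    {
        if (n < 2)
            return n;

        long x, y;
        #pragma omp task shared(x)   /* the runtime schedules this subproblem */
        x = fib(n - 1);
        #pragma omp task shared(y)
        y = fib(n - 2);
        #pragma omp taskwait         /* local synchronization, not a global barrier */
        return x + y;
    }

    int main(void)
    {
        long result;
        #pragma omp parallel
        #pragma omp single           /* one thread seeds the task graph */
        result = fib(20);
        printf("fib(20) = %ld\n", result);
        return 0;
    }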

Lightweight Kernel OS – New work in operating systems this year may lead to environments capable of providing the necessary capability and services while delivering vastly superior efficiency and scalability. Early examples like Catamount and CNK are informing new developments, both those under way now and, potentially, future efforts under new DOE research programs.

 
