Getting to Exascale

By Tiffany Trader

July 24, 2014

As the exascale barrier draws closer, experts around the world are turning their attention to enabling this major advance. One institution taking a deep dive into the subject is the Harvard School of Engineering and Applied Sciences (SEAS), whose summer 2014 issue of “Topics” takes a hard look at where supercomputing is headed.

In the feature article “Built for Speed: Designing for exascale computers,” Brian Hayes considers the remarkable science that could be enabled if only computers were fast enough.

Hayes explains that the field of hemodynamics is poised for a breakthrough in which a surgeon could run a detailed simulation of blood flow in a patient’s arteries to pinpoint the best repair strategy. Currently, however, simulating just one second of blood flow takes about five hours on even the fastest supercomputer. To have a truly transformative effect on medicine, scientists and practitioners need computers roughly one thousand times faster than the current crop.
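For a rough sense of the gap implied by Hayes’s figures: five hours is about 18,000 seconds of compute time for each second of simulated blood flow, a slowdown of roughly 18,000x. A machine one thousand times faster would cut that to 18,000 / 1,000 = 18 seconds per simulated second, still short of real time but arguably quick enough to inform a treatment plan.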

Getting to this next stage in computing is high on SEAS’s list of priorities. Hayes writes that science and engineering groups in the school are contributing to software and hardware projects in support of this goal, while researchers in domains such as climatology, materials science, molecular biology, and astrophysics are gearing up to use such powerful resources.

From here, Hayes details the numerous obstacles that make exascale a more onerous milestone than previous 1,000x leaps. For years, chipmakers relied on rising clock rates to drive performance gains, but that era is over.

“The speed limit for modern computers is now set by power consumption,” writes Hayes. “If all other factors are held constant, the electricity needed to run a processor chip goes up as the cube of the clock rate: doubling the speed brings an eightfold increase in power demand.”
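The cube in that quote follows from the standard first-order model of CMOS dynamic power, a gloss added here rather than Hayes’s own derivation: dynamic power P is roughly proportional to C · V² · f, where C is the switched capacitance, V the supply voltage, and f the clock frequency. Because V must rise roughly in step with f to keep the chip stable at the higher speed, P grows approximately as f³, so doubling f multiplies power by about 2³ = 8.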

Shrinking transistors and putting multiple cores on each chip (multicore) have helped boost the total number of operations per second since about 2005. However, there is, of course, a fundamental limit to how small feature sizes can get before reliability becomes untenable.
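As a minimal sketch of how parallelism, rather than clock rate, now carries the load, a machine’s peak rating can be tallied as a product of its parallel resources. The Python below uses illustrative numbers that are assumptions made for the sake of the arithmetic, not figures from the article:

    # Back-of-the-envelope peak performance for a hypothetical many-core system.
    # Every count here is an assumed, illustrative value.
    nodes = 50_000           # assumed number of compute nodes
    chips_per_node = 2       # assumed processors per node
    cores_per_chip = 64      # assumed cores per processor
    clock_hz = 2.0e9         # a modest 2 GHz clock, held down to limit power
    flops_per_cycle = 16     # assumed wide vector / fused multiply-add units

    peak_flops = nodes * chips_per_node * cores_per_chip * clock_hz * flops_per_cycle
    print(f"peak: {peak_flops / 1e18:.2f} exaflops")  # about 0.20 exaflops here

With the clock pinned near 2 GHz, the only levers left for reaching an exaflop are more nodes, more cores, and more operations per cycle, which is exactly the squeeze Hayes describes.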

From an architecture perspective, systems have gone from custom-built hardware in the 1980s to vanilla off-the-shelf components through the 1990s and 2000s, and now there is a swing back toward specialized technologies. The first petaflopper, Roadrunner, used a hybrid design with CPUs working in tandem with specialized Cell BE coprocessors. Today most of the top supercomputers are based on a heterogeneous architecture, using some combination of CPUs and accelerators/coprocessors.

The challenges are not just on the hardware side. Hanspeter Pfister, the An Wang Professor of Computer Science and director of Harvard’s Institute for Applied Computational Science (IACS), who was interviewed by Hayes, believes getting to exascale will require fundamentally new programming models. Pfister points out that the LINPACK benchmark used to rate and rank machines is essentially the only program that runs them at full speed; other software may harness only 10 percent of a system’s potential. There are also issues with operating systems, file systems, and the middleware that connects databases and networks.
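To put that utilization gap in numbers, here is a trivial, hypothetical calculation; the 10 percent figure is the only value taken from the article:

    # Hypothetical sustained-versus-peak arithmetic for a notional exascale machine.
    peak_flops = 1.0e18       # assumed theoretical peak: 1 exaflops
    app_efficiency = 0.10     # "only 10 percent of the system's potential"
    sustained_flops = peak_flops * app_efficiency
    print(f"sustained: {sustained_flops / 1e15:.0f} petaflops")  # 100 petaflops

An application stuck at 10 percent efficiency would leave the equivalent of 900 petaflops of hardware idle, which is the kind of waste behind Pfister’s call for new programming models.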

Pfister is also quite skeptical of the future of programming tools like MPI and CUDA. “We can’t be thinking about a billion cores in CUDA,” he says. “And when the next protocol emerges, I know in my heart it’s not going to be MPI. We’re beyond the human capacity for allocating and optimizing resources.”

Some believe that the only tenable solution to extreme-scale computing is getting the hardware and software folks in the same room. This approach, called “co-design,” will help bridge the gap between what users want and what manufacturers can supply. The US Department of Energy has established three co-design centers to facilitate this kind of collaboration.

The US DOE originally intended to field an exascale machine sometime around 2018, but that timeline slipped, due primarily to a lack of political will to fund the effort. Since then, 2020 has been bandied about as a target, but that may also prove overly optimistic. One data point in support of getting to exascale sooner rather than later is the need to conduct virtual nuclear testing in support of stockpile stewardship. That program alone, according to one expert interviewed for the piece, is enough to ensure that exascale machines get built. Other applications could also come to be regarded as critical to national security, climate modeling among them.

Check out the entire article here, and the complete issue here.
