A recent presentation from the 2014 Computational Science Graduate Fellowship (CSGF) HPC workshop, held in July in Arlington, Virginia, holds that HPC is on the cusp of a new era.
In “Supercomputing 101: A History of Platform Evolution and Future Trends,” Rob Neely, Associate Division Leader, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, traces the history of high-performance computing as defined by its dominant platforms, starting with mainframes and continuing through vector architectures, massively parallel architectures, and the emerging trends that will define the upcoming exascale era.
Neely also covers basic terminology and important HPC concepts, including scalability, shared versus distributed memory, Amdahl’s law, Dennard scaling, data locality, burst buffers and I/O, heterogeneous computing, and co-design. He also looks at the dominant programming models, such as MPI and OpenMP, as well as emerging PGAS and task-based approaches.
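The talk itself stays at the conceptual level, but a minimal sketch (not from the presentation) can make the shared- versus distributed-memory distinction and the MPI/OpenMP pairing concrete. In the hybrid C program below, each MPI rank is a separate process that owns its own memory and communicates only through explicit messages, while the OpenMP threads inside a rank share that rank’s memory:

/* Illustrative hybrid MPI + OpenMP sketch (not from the talk).
 * Build with something like: mpicc -fopenmp hybrid.c -o hybrid */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* distributed memory: one process per rank */
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local_sum = 0.0;
    /* shared memory: threads within this rank split the loop iterations */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = rank; i < 1000000; i += nranks)
        local_sum += 1.0 / (1.0 + i);

    double global_sum = 0.0;
    /* explicit message passing combines the per-rank partial sums */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %f (%d ranks)\n", global_sum, nranks);

    MPI_Finalize();
    return 0;
}

The work is divided twice: by stride across MPI ranks (processes, possibly on different nodes) and by OpenMP’s reduction clause across threads within each rank, which is the basic pattern behind most hybrid HPC codes.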
On his first slide, Neely identifies three major eras of computing: the mainframe era, the vector era, and the distributed-memory (MPP) era. But a fourth era is emerging, and it has so far proved difficult to define.
“There is no one defining feature of this new era like there has been in the past,” says Neely. “This manycore era, for lack of a better term, is where we are now. It’s characterized by accelerators, lots and lots of simple cores. It’s really all about extracting parallelism from your applications.
“If you notice, this curve actually bent upward a bit; it’s all due to our ability to scale out machines, adding more and more processors to these architectures instead of just relying on single-processor performance increases (Moore’s law). Over the last twenty years we’ve gotten spoiled a little bit by the acceleration in capabilities in high-performance computing. Unless we do something very smart very soon, we are going to lose that acceleration.”
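That concern connects directly to Amdahl’s law, one of the concepts the talk reviews. As a rough worked example (the numbers here are illustrative, not from the slides): if a fraction p of a program’s work can be parallelized across N processors, the best achievable speedup is

    speedup(N) = 1 / ((1 - p) + p / N)

so a code that is 95 percent parallel (p = 0.95) tops out at a 20x speedup no matter how many processors are added, which is why extracting ever more parallelism from applications matters so much in the manycore era.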
Keeping this progress going is crucial for scientific discovery, and it is an integral part of the DOE’s mission. This rests on a simple equation:
Programming model stability + Regular advances in realized performance = scientific discovery through computation
The rest of the one-hour talk charts the history of supercomputing up through the current “manycore” paradigm. Highlights include the difficulty of parallelism, the need for scalability, the upcoming merger of HPC and data science (see minute 56:00), and the “exascale problem.” Because of the level of complexity involved in designing the next generation of hardware and software, support is building for co-design initiatives, which facilitate deep collaboration between application developers and vendors. DesignForward and FastForward are two such DOE programs.