Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

August 27, 2014

HPC Task Force Publishes Recommendations

Tiffany Trader

The Secretary of Energy Advisory Board (SEAB) Task Force on Next Generation High Performance Computing (HPC), established in December by the Secretary of Energy to review the mission and national capabilities related to next generation high performance computing, has released a draft of its final report.

The Task Force was asked to examine the problems and opportunities that will drive the need for next generation high performance computing. The report addresses what will be required to execute a successful path to deliver next generation leading edge HPC, and makes recommendations regarding whether and to what degree the government should be involved in facilitating this goal, and what specific role the DOE should take in such a program.

The Task Force’s findings and recommendations were framed by three broad considerations:

1. We recognize and recommend a “new” alignment between classical and data centric computing to develop a balanced computational ecosystem.
2. We recognize the DOE’s historical role and expertise in science, technology, program management, and partnering, and recognize its vital role across the US Government (USG), including in the National Strategic Computing Initiative (NSCI).
3. We examine and make recommendations on exascale investment but also on nurturing the health of the overall high performance computing ecosystem, which includes investment in people, and in mathematics, computer science, software engineering, basic sciences, and materials science and engineering.

In the report’s executive summary, the authors describe how today’s machines have achieved performance in the tens of petaflops range largely by following the historical path of the last several decades, i.e., “taking advantage of Moore’s law progression to smaller and faster CMOS computing elements, augmented by the highly parallel architectures that followed the vector processing change at the pre-teraflop generation.”

The draft report also points to the evolution of a more data-centric computing paradigm brought about by sensor networks, financial systems, scientific instruments, and simulations themselves.

“The need to extract useful information from this explosion of data becomes as important as sheer computational power,” the authors assert. “This has driven a much greater focus on data centric computing, linked to integer operations, as opposed to floating point operations. Indeed, computational problems and data centric problems are coming together in areas that range from energy, to climate modeling, to healthcare.

“This shift dictates the need for a balanced ecosystem for high performance computing with an undergirding infrastructure that supports both computationally-intensive and data centric computing.

“In fact, the architecture of computing hardware is evolving, and this means that the elements of the backbone technology – including memory, data movement, and bandwidth – must progress together. As we move to the era of exascale computing, multiple technologies have to be developed in a complementary way, including hardware, middleware, and applications software.”

Among the report’s key findings is the undeniable importance of investing in exascale computing. The NNSA mission and basic science applications “demonstrate real need and real deliverables from a significant performance increase in classical high performance computing at several orders of magnitude beyond the tens of petaflop performance delivered by today’s leadership machines,” the authors write.

They add that current technology is only capable of one last “current” generation machine, to wit: “Optimization of current CMOS, highly parallel processing within the remaining limits of Moore’s law and Dennard scaling likely provides one last ‘generation’ of conventional architecture at the 1-10 exascale performance level, within acceptable power budgets. Significant, but projectable technology and engineering developments are needed to reach this performance level.”

The report recommends five steps for carrying out its proposals. The first states that the “DOE, through a program jointly established and managed by the NNSA and the Office of Science, should lead the program and investment to deliver the next class of leading edge machines by the middle of the next decade. These machines should be developed through a co-design process that balances classical computational speed and data centric memory and communications architectures to deliver performance at the 1-10 exaflop level, with addressable memory in the exabyte range.”

Achieving and maintaining a healthy exascale and beyond ecosystem will necessitate a DOE investment in the range of $100-$150 million per year, according to the draft report.

The SEAB Task Force is composed of SEAB members and independent experts from academia and industry. The full report includes a thorough justification for exascale computing investment, including a discussion of the new era of supercomputing, the rise of data-centric computing, implications for industry, and the need for balanced progress.

The report’s recommendations cover three spans of time: greater petascale, exascale, and beyond exascale. The greater petascale period straddles the next five years and is characterized by systems in the many tens to hundreds of petaflops, requiring a combination of up to 5-10 petabytes of addressable and buffer memory and over one hundred petabytes of storage. The DOE-funded CORAL program is working to satisfy these requirements in a data centric architectural context, with a focus on power efficiency, reliability, and productive usability.

The exascale time frame covers the following five to ten years. It is characterized by systems in the hundreds of petaflops to tens of exaflops, requiring tens of petabytes of memory and possibly an exabyte of storage. Programs that support this period are still in their formative stages, and funding is just beginning to materialize.

The final and most uncertain stage is “beyond exascale.” The successors to CMOS technology and current architectures that could facilitate a post-exascale computing era may already be under development or may come from an as-yet unknown path.

The Task Force was asked to deliver its report by June 2014 and to discuss its report and conclusions at the June 2014 SEAB meeting.
