There is little point to building expensive exaflop-class computing machines if applications are not available to exploit the tremendous scale and parallelism. Consider that exaflop-class supercomputers will exhibit billion-way parallelism, and that calculations will be restricted by energy consumption, heat generation, and data movement. This level of complexity is sufficient to stymie application development.
Noted HPC pioneers weigh in on the coming class of exascale systems.
As we move down the road toward exascale computing and engage in discussion of zettascale, one issue becomes increasingly obvious: we are leaving a large part of the HPC community behind. But it needn't be so. If we developed compact, power-efficient petascale computers, not only could we help broaden the base of high-end users, but we could also provide a foundation for future bleeding-edge supercomputers.
Is the HPC community too focused on the 10-year milestone?
In Michael Wolfe's second column on programming for exascale systems, he underscores the importance of exposing parallelism at all levels of design, whether explicitly in the program or implicitly within the compiler. Wolfe calls on developers to express this parallelism, both in the language and in the generated code, and to exploit it efficiently and effectively at runtime on the target machine. He reminds the community that the only reason to pursue parallelism is higher performance.
There are at least two ways exascale computing can go, as exemplified by the top two systems on the latest TOP500 list: Tianhe-1A and Jaguar. The Chinese Tianhe-1A uses 14,000 Intel multicore processors with 7,000 NVIDIA Fermi GPUs as compute accelerators, whereas the American Jaguar, a Cray XT5, uses 35,000 AMD 6-core processors.
With exascale predictions all the rage, here’s a more sobering look at the next big thing in supercomputing.
The US Defense Advanced Research Projects Agency has selected four "performers" to develop prototype systems for its Ubiquitous High Performance Computing (UHPC) program. According to a press release issued on August 6, the organizations include Intel, NVIDIA, MIT, and Sandia National Laboratories. Georgia Tech was also tapped to head up an evaluation team for the systems under development.
Ten-teraflop laptops and exaflop supercomputers by 2020.
Pleiades super carries big load for space agency.