Cray engineers have been working on a new parallel computing language, called Chapel. Aimed at large-scale parallel computing environments, Chapel was designed with a focus on productivity and accessibility. The project originated from the DARPA High Productivity Computing Systems (HPCS) program, which challenged HPC vendors to improve the productivity of high-end computing systems.
We’ve scoured the journals and conference proceedings to bring you the top research stories of the week. This diverse set of items includes the latest CAREER award recipient; the push to bring parallel computing to the classroom; HPC in accelerator science; the emerging Many-Task Computing paradigm; and a unified programming model for data-intensive computing.
Additional performance increases for supercomputers are being confounded by three walls: the power wall, the memory wall, and the datacenter wall (the “wall wall”). To overcome these hurdles, the market is currently looking to a combination of four strategies: developing parallel applications, adding accelerators to standard commodity compute nodes, building new purpose-built systems, and waiting for a technology breakthrough.
In his third column on programming for exascale systems, Michael Wolfe shares his views on what programming at the exascale level is likely to require, and how we can get there from where we are today. He explains that it will take some work, but it’s not a wholesale rewrite of 50 years of high-performance expertise.
In Michael Wolfe’s second column on programming for exascale systems, he underscores the importance of exposing parallelism at all levels of design, either explicitly in the program or implicitly within the compiler. Wolfe calls on developers to express this parallelism, in a language and in the generated code, and to exploit it efficiently and effectively at runtime on the target machine. He reminds the community that the only reason to pursue parallelism is higher performance.
There are at least two ways exascale computing can go, as exemplified by the top two systems on the latest TOP500 list: Tianhe-1A and Jaguar. The Chinese Tianhe-1A uses 14,000 Intel multicore processors with 7,000 NVIDIA Fermi GPUs as compute accelerators, whereas the American Jaguar, a Cray XT5, uses 35,000 AMD 6-core processors.
Exascale computing promises incredible science breakthroughs, but it won’t come easily, and it won’t come free.
The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the computing power on display at SC10’s Student Cluster Competition; the University of Portsmouth’s new supercomputer; IBM Watson’s SUSE Linux platform; multicore advances at North Carolina State; and Intel’s new approach to university funding.