Tag: exascale computing
To build exascale systems, power is probably the biggest technical hurdle on the hardware side. In terms of getting to exascale computing, the more urgent challenge is demonstrating the value of supercomputing to funders and the public. But the top roadblock to realizing the potential benefits of exascale is software. That claim is probably controversial.
Moore’s Law is projected to come to an end sometime around the middle of the next decade — a timeframe that coincides with the epoch of exascale computing. A white paper by Marc Snir, Bill Gropp and Peter Kogge discusses what we should be doing now to prepare high performance computing for the post-Moore’s Law era.
The first international effort to bring climate simulation software onto the next-generation exascale platforms got underway earlier this spring. The project, named Enabling Climate Simulation (ECS) at Extreme Scale, is being funded by the G8 Research Councils Initiative on Multilateral Research and brings together some of the heavyweight organizations in climate research and computer science, not to mention some of the top supercomputers on the planet.
The challenge of climate change brings out the worst in us.
Hewlett Packard’s Partha Ranganathan outlines a path for exascale computing.
In his third column on programming for exascale systems, Michael Wolfe shares his views on what programming at the exascale level is likely to require, and how we can get there from where we are today. It will take some work, he explains, but not a wholesale rewrite of 50 years of high performance computing expertise.
Is the HPC community too focused on the 10-year milestone?
In Michael Wolfe’s second column on programming for exascale systems, he underscores the importance of exposing parallelism at all levels of design, either explicitly in the program or implicitly within the compiler. Wolfe calls on developers to express this parallelism, both in the language and in the generated code, and to exploit it efficiently and effectively at runtime on the target machine. He reminds the community that the only reason to pursue parallelism is higher performance.
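Wolfe’s distinction between expressing parallelism and exploiting it at runtime can be illustrated with a simple reduction. The sketch below is not from the column; it is a minimal example using Python’s standard `multiprocessing` module, where the programmer explicitly declares the independent units of work (the chunks) and the runtime maps them onto worker processes:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker reduces its own slice independently of the others.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Express the parallelism explicitly: split the data into
    # independent chunks, map them across worker processes,
    # then combine the partial results into the final answer.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        partials = pool.map(partial_sum, chunks)
    return sum(partials)

if __name__ == "__main__":
    # Same result as the sequential sum(range(1_000_000)).
    print(parallel_sum(list(range(1_000_000))))
```

The point of the example is the division of labor Wolfe describes: the program states which operations are independent, while the runtime decides how to schedule them on the target machine.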
There are at least two ways exascale computing can go, as exemplified by the top two systems on the latest TOP500 list: Tianhe-1A and Jaguar. The Chinese Tianhe-1A uses 14,000 Intel multicore processors with 7,000 NVIDIA Fermi GPUs as compute accelerators, whereas the American Jaguar, a Cray XT5, uses 35,000 AMD six-core processors.
Exascale computing promises incredible science breakthroughs, but it won’t come easily, and it won’t come free.