Exascale systems are certainly the current buzz in high performance computing. While theoretical projections suggest the possibility of an exascale system by 2018, reality tells us that a usable supercomputer of that size will not arrive until at least a few years into the next decade. Simply adopting the current approach – more of the same, but…
The United States Department of Energy has announced a plan to field an exascale system by 2022, but says that meeting this objective will require an investment of $1 billion to $1.4 billion in targeted research and development.
Ahead of his opening conference keynote at ISC’13, Bill Dally, chief scientist at NVIDIA and senior vice president of NVIDIA Research, shares his views on where HPC is headed. Among the key topics covered are the demand for heterogeneous computing, overcoming the memory wall, the implications of government belt-tightening, and much more…
The national labs at Oak Ridge, Argonne and Lawrence Livermore are banding together for their next refresh of supercomputers. In late 2016 or early 2017, all three Department of Energy (DOE) centers are looking to deploy their first 100-plus petaflop systems, which will serve as precursors to their exascale machines further down the line. The labs will issue a request for proposal (RFP) later this year with the goal of awarding the work to two prime subcontractors.
The National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab has recently begun installing Edison, the Cray supercomputer that will exceed two peak petaflops when it is fully deployed in a couple of months. But the center is already prepping for its next-generation system, which is expected to be an order of magnitude more powerful. That supercomputer may be the center’s last big deployment prior to the exascale era.
First US exaflop super might not boot up until 2022.
The Indian government wants the world’s fastest computer to reside in a BRIC house.
At the cutting edge of HPC, bigger has always been seen as better and user demand has been the justification. However, as we now grapple with trans-petaflop machines and strive for exaflop ones, is evidence emerging that contradicts these notions? Might computers be getting too big to effectively serve up those FLOPS?
Intel, AMD, NVIDIA, and Whamcloud have been awarded tens of millions of dollars by the US Department of Energy (DOE) to kick-start research and development required to build exascale supercomputers. The work will be performed under the FastForward program, a joint effort run by the DOE Office of Science and the National Nuclear Security Administration (NNSA) that will focus on developing future hardware and software technologies capable of supporting such machines.
The latest Green500 rankings were announced last week, revealing that top performance and power efficiency can indeed go hand in hand. According to the latest list, the greenest machines, in fact the top 20 systems, were all IBM Blue Gene/Q supercomputers. Blue Gene/Q, of course, is the platform that captured the number one spot on the latest TOP500 list, and is represented by four of the ten fastest supercomputers in the world.