When so many folks from the HPC community come to us with credible details about something as important as the next top system on the planet, it’s hard to ignore. To quiet things down (and hopefully bring forth more information), we’ve published the consistent details we know from (very) credible sources about this year’s upcoming TOP500 announcement. While unconfirmed, we have….
For the largest computer systems in the world, keeping IT assets safe presents a unique set of challenges.
Getting scientific applications to scale across Titan’s 300,000 compute cores means there will be bugs. Finding those bugs is where Allinea DDT comes in.
<img src="http://media2.hpcwire.com/hpcwire/icex.jpg" alt="" width="94" height="83" />European oil and gas giant Total has again looked to SGI to supply a super that meets its modeling and simulation needs, this time with a focus on power and cooling. The result, based on the SGI ICE X, should pull a top-ten ranking on this year’s TOP500 list as the most powerful commercial….
LLNL researchers have successfully harnessed all 1,572,864 of Sequoia’s cores for one impressive simulation.
As NCSA’s Blue Waters supercomputer approaches full service status, we thought it would be appropriate to see how the machine was built.
<img src="http://media2.hpcwire.com/hpcwire/NREL_logo222222222.jpg" alt="" width="95" height="51" />The DOE’s National Renewable Energy Laboratory (NREL) has just completed construction on a state-of-the-art datacenter in preparation for a brand new supercomputer. The high-efficiency 1-petaflops system features the latest servers from HP, including a proprietary direct-to-chip cooling system. NREL has already taken delivery of an initial 200-teraflops machine, and expects the system to reach full capacity this summer.
<img src="http://media2.hpcwire.com/hpcwire/puzzle.jpg" alt="" width="95" height="95" />The TOP500 list provides a valuable source of information to the HPC community. But every year, some of the data requested by the organizers is missing. And wouldn’t it be a good idea to add some new data points to the list?
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/doe-logo-small.png" alt="" width="96" height="96" />The national labs at Oak Ridge, Argonne and Lawrence Livermore are banding together for their next refresh of supercomputers. In late 2016 or early 2017, all three Department of Energy (DOE) centers are looking to deploy their first 100-plus-petaflops systems, which will serve as precursors to their exascale machines further down the line. The labs will issue a request for proposal (RFP) later this year with the goal of awarding the work to two prime subcontractors.
The National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab has recently begun installing Edison, the Cray supercomputer that will exceed two peak petaflops when it’s fully deployed in a couple of months. But the center is already prepping for its next-generation system, which is expected to be an order of magnitude more powerful. That supercomputer may be the center’s last big deployment prior to the exascale era.