Failure to incorporate big data computing insights into efforts to achieve exascale computing would be a critical mistake, argue Daniel Reed and Jack Dongarra in their article, "Exascale Computing and Big Data," published in the July issue of Communications of the ACM. While scientific and big data computing have historically taken different development Read more…
The future of high performance computing is now being defined both in how it will be achieved and in the ways it will impact diverse fields in science and technology, industry and commerce, and security and society. At this time there is great expectation but much uncertainty, creating a climate of opportunity, challenge, Read more…
The Indian government has approved a seven-year supercomputing program worth $730 million (Rs. 4,500 crore) intended to restore the nation’s status as a world-class computing power. The prime mandate of the National Supercomputing Mission, first revealed last October, is the construction of a vast supercomputing grid connecting academic and R&D institutions and select departments and ministries. The Read more…
When so many folks from the HPC community come to us with credible details about something as important as the next top system on the planet, it’s hard to ignore. To quiet things down (and hopefully bring forth more information), we’ve published the consistent details we have heard from (very) credible sources about this year’s upcoming Top500 announcement. While unconfirmed, we have….
For the largest computer systems in the world, keeping IT assets safe presents a unique set of challenges.
Getting scientific applications to scale across Titan’s 300,000 compute cores means there will be bugs. Finding those bugs is where Allinea DDT comes in.
<img src="http://media2.hpcwire.com/hpcwire/icex.jpg" alt="" width="94" height="83" />European oil and gas giant Total has looked to SGI again to supply a super that meets its modeling and simulation needs while keeping a focus on power and cooling. The result, based on the SGI ICE X, should pull a top ten ranking on this year’s Top500 list as the most powerful commercial….
LLNL researchers have successfully harnessed all 1,572,864 of Sequoia’s cores for one impressive simulation.
As NCSA’s Blue Waters supercomputer approaches full service status, we thought it would be appropriate to see how the machine was built.
<img src="http://media2.hpcwire.com/hpcwire/NREL_logo222222222.jpg" alt="" width="95" height="51" />The DOE’s National Renewable Energy Laboratory (NREL) has just completed construction on a state-of-the-art datacenter in preparation for a brand new supercomputer. The high-efficiency 1-petaflops system features the latest servers from HP, including a proprietary direct-to-chip cooling system. NREL has already taken delivery of an initial 200-teraflops machine, and expects the system to reach full capacity this summer.