German researchers are helping to push the boundaries of large-scale simulation. Using the IBM “SuperMUC” high-performance computer at the Leibniz Supercomputing Center (LRZ), a cross-disciplinary team of computer scientists, mathematicians and geophysicists successfully scaled an earthquake simulation to more than one petaflop/s, i.e., one quadrillion floating point operations per second.
The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the TeraGrid effort to support the Japanese research community; NNSA’s ‘Supercomputing Week’ coverage; Mellanox’s new double-duty switch silicon; Platform’s latest Symphony release; and upgrades to Sandia’s Oracle Sun Server-based Red Sky/Red Mesa supercomputers.
Research institutions out of action after 9.0 temblor.
The horrendous aftermath of last Friday’s 9.0 earthquake off the east coast of Japan is still unfolding, and the ensuing destruction from tsunamis, infrastructure collapse, fires and now nuclear plant radiation is being tracked and analyzed, some of it with the help of computer technology designed for just such an event.
As the dust settles, both literally and figuratively, in Japan following the series of disasters, more comprehensive assessments of the damage are emerging on a number of fronts: human, environmental, structural and otherwise. While clouds have played an important role in enabling global sharing and collaboration, the datacenters that support them in Japan were put to the ultimate test.