Sandia National Laboratories has named Jill Hruby to its highest post. When Hruby officially takes the reins of the nation’s largest laboratory on July 17, she becomes Sandia’s 14th director in its 60-plus-year history, as well as the first woman to lead one of the three National Nuclear Security Administration (NNSA) laboratories. As director of Read more…
Sometimes the impetus behind large-scale computing endeavors can be surprising. Take the case of nuts and bolts. Given the right context, these everyday objects become a much bigger deal. Like when the context is nuclear missile design. Every component of a nuclear weapon body must go through a painstaking review process. As an article at Deixis Magazine Read more…
We’ve been anticipating news around the Trinity supercomputer for some time now, and today we were graced with the news that Cray will supply the machine in two phases, with the final phase to be completed in 2016. For the original background, the first run of the story can be found here. Since that time this Read more…
Sandia National Labs decommissions legendary supercomputer.
A researcher at New Mexico State University is modeling new ways to address common challenges with data-intensive, graph-based problems.
The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the Cray/Sandia partnership to found a knowledge institute; RenderStream’s FireStream-based workstations and servers; NVIDIA’s latest CUDA centers; Reservoir Labs and Intel’s extreme scale ambitions; and Jülich Supercomputing Centre’s new hybrid cluster.
The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the TeraGrid effort to support the Japanese research community; NNSA’s ‘Supercomputing Week’ coverage; Mellanox’s new double-duty switch silicon; Platform’s latest Symphony; and the Oracle Sun Server-based Sandia Red Sky/Red Mesa supercomputer upgrades.
Data-intensive applications are quickly emerging as a significant new class of HPC workloads. This class of applications will require a new kind of supercomputer, along with a different way to assess such systems. That is the impetus behind the Graph 500, a set of benchmarks that aims to measure the suitability of systems for data-intensive analytics applications.
Researchers virtualize Sandia’s Red Storm supercomputer; and Princeton University announces plans for new research center. We recap those stories and more in our weekly wrapup.