In what is being called an unprecedented upgrade, the NASA Center for Climate Simulation (NCCS) is tripling the peak performance of its Discover supercomputer to more than 3.3 petaflops to power NASA’s Earth science modeling efforts. The open procurement process included the benchmarking of NCCS codes – notably the Goddard Earth Observing System Model.
When the Jaguar supercomputer at Oak Ridge National Laboratory morphed into Titan in 2012, it delivered a huge increase in computational power. Recently, ORNL’s parallel file system, called Spider, received a similar overhaul and is emerging as Spider II.
A year ago, NOAA and DOE signed an agreement calling for closer cooperation between NOAA and Oak Ridge National Laboratory. Jim Rogers, director of operations for the National Center for Computational Sciences at ORNL, discusses the agreement and the goals for the Climate Modeling and Research System (CMRS), the initial supercomputer chosen for the collaborative work.
NASA Center for Climate Simulation doubles computational power with new Dell PowerEdge servers; Amazon introduces HPC-level computing on demand; and Carnegie Mellon announces $7 million initiative aimed at boosting computer science enrollment. We recap those stories and more in our weekly wrapup.
When it comes to scientific computing, the amount of science reaped from a simulation is largely determined by the speed and scalability of the software. Likewise, a code’s speed is often at the mercy of its I/O performance: the more efficient the I/O, the faster the code runs and the more simulations can be completed in a given period of time.
Spider, the world’s biggest Lustre-based, centerwide file system, has been fully tested to support Oak Ridge National Laboratory’s new petascale Cray XT4/XT5 Jaguar supercomputer and is now offering early access to scientists.