Last month Rensselaer Polytechnic Institute announced it had been awarded a $2.65 million grant to acquire a 100-teraflop Blue Gene/Q supercomputer for its Computational Center for Nanotechnology Innovations. The new system will also include a multi-terabyte RAM-based storage accelerator, petascale disk storage, and a rendering cluster with a remote display wall system for visualization.
At SC11 in Seattle, the stage is set for data-intensive computing to steal the show. This year's theme aligns directly with the "big data" trend that is reshaping enterprise and scientific computing. We give an insider's view of some of the top sessions for the big data crowd and a broader sense of how this year's conference is shaping up overall.
SGI, Microsoft warm up their Hadoop offerings.
Convey recently noted that HPC is "no longer just numerically intensive, it's now data-intensive—with more and different demands on HPC system architectures." The company claims that the "whole new HPC" gathered under the banner of data-intensive computing possesses a number of unique characteristics, and it sees unique opportunities in new memory and co-processor architectures for handling all of that data.
SGI has been getting a lot of mileage out of its SGI UV shared memory platform, having delivered close to 500 systems since it started shipping them in June 2010. Now, with the recent addition of support for Microsoft’s Windows Server, the company is looking to expand its customer base in a big way.
When it was announced in 2006, the Cray XMT supercomputer attracted little attention. The machine was originally targeted at high-end data mining and analysis for a particular set of government clients in the intelligence community. While the feds have given the XMT support over the past five years, Cray is now looking to move these machines into the commercial sphere. And with the next-generation XMT-2 on the horizon, the company is gearing up to accelerate that strategy in 2011.
Data-intensive applications are quickly emerging as a significant new class of HPC workloads. This class of applications will require a new kind of supercomputer, and a different way to assess such systems. That is the impetus behind the Graph 500, a set of benchmarks that aims to measure the suitability of systems for data-intensive analytics applications.
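To make the Graph 500's headline metric concrete: rather than floating-point operations per second, it reports traversed edges per second (TEPS) for graph searches. Below is a minimal sketch, not the official reference code, of how such a figure can be timed for a breadth-first search; the toy graph and function names are illustrative assumptions.

```python
import time
from collections import deque

def bfs_traversed_edges(adj, root):
    """Run BFS from `root` over adjacency dict `adj`;
    return the number of edges traversed."""
    visited = {root}
    queue = deque([root])
    edges = 0
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            edges += 1  # every edge examined counts as traversed
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return edges

# A tiny example graph as an adjacency list (each undirected
# edge appears in both directions, as in Graph 500-style inputs).
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

start = time.perf_counter()
edges = bfs_traversed_edges(adj, 0)
elapsed = time.perf_counter() - start
teps = edges / elapsed  # traversed edges per second
```

The point of the metric is that it stresses memory latency and irregular access patterns rather than arithmetic throughput, which is why a machine that tops the TOP500 may fare poorly here.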
The naming of Michael Norman as director of the San Diego Supercomputer Center (SDSC) last week was long overdue. SDSC has been without an official director for more than 14 months, with Norman filling the spot as the interim head since last July. The appointment could mark something of a comeback for the center, which has not only gone director-less during this time, but has been operating without a high-end supercomputer as well.
TeraGrid ’10, the fourth annual conference of the TeraGrid, took place last week in Pittsburgh, Pa. HPCwire will be running a series of articles highlighting the conference. The first in the series covers Gabrielle Allen’s keynote talk on Cactus, an open, collaborative software framework for numerical relativity.
Solid-state devices based on Flash and PCIe are emerging as a new class of enterprise storage option — Tier-0. Tier-0 is an optimized storage tier specifically for high performance workloads, which can benefit the most from using flash memory.