The National Center for Atmospheric Research (NCAR) is building a new high-performance computing center in Wyoming, just west of Cheyenne. The facility will host Yellowstone, a petascale supercomputer, as well as new storage, visualization, and data analytics clusters.
The machines will be used to support research in weather, climate, air pollution, earthquakes, carbon sequestration, and water resources. The idea is to give Earth scientists access to much greater computing and storage capability so they can run more accurate simulations of these atmospheric and geophysical systems.
The Republic covered the construction of the NCAR-Wyoming Supercomputing Center, where Yellowstone will be housed. The 153,000-square-foot building costs roughly $70 million, funded by business groups, the state government, and the NSF. The center is set to open on October 15.
IBM won the bid to build the supercomputer, beating out three other competitors. Based on Big Blue’s iDataPlex server platform, the system will consist of 4,518 dual-socket Sandy Bridge EP nodes, amounting to 72,288 cores. Each 16-core node will be equipped with 32 GB of DDR3-1600 memory. The nodes will be hooked together with Mellanox FDR (56 Gbps) InfiniBand. The system is being installed now and is expected to come online by summer’s end.
Delivering an estimated 1.55 peak petaflops, Yellowstone is expected to earn a top-ten spot on the upcoming TOP500 list. That represents about 30 times the performance of Bluefire, the NCAR supercomputer Yellowstone is slated to replace.
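The headline numbers can be roughly sanity-checked from the per-node specs. The sketch below assumes a Sandy Bridge EP clock of 2.6 GHz and 8 double-precision flops per cycle per core (typical for AVX-capable E5-class parts); neither figure appears in the article, so the result is an estimate, not a confirmation.

```python
# Back-of-envelope check of Yellowstone's core count and peak flops.
# Clock speed and flops/cycle are assumptions, not from the article.
nodes = 4518
cores_per_node = 16
clock_ghz = 2.6          # assumed Sandy Bridge EP clock
flops_per_cycle = 8      # AVX: 4-wide DP add + 4-wide DP multiply per cycle

total_cores = nodes * cores_per_node
peak_pflops = total_cores * clock_ghz * flops_per_cycle / 1e6  # GF -> PF

print(total_cores)             # 72288, matching the article
print(round(peak_pflops, 2))   # roughly 1.5 PF, in the ballpark of 1.55
```

The small gap between this estimate and the quoted 1.55 petaflops would be explained by a modestly higher clock than the 2.6 GHz assumed here.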
Such power does not come cheap, though. The system is expected to cost between $25 million and $30 million, which will be covered by the state and the University of Wyoming (UW). The state has provided $20 million, while the university will pay $1 million a year over the next 20 years.
As part of Yellowstone's supporting cast are three data analysis and visualization (DAV) systems – Geyser, Caldera, and a Knights Corner cluster – which will be used to post-process the data produced by simulation runs. Like Yellowstone, all three clusters will be outfitted with FDR InfiniBand.
Geyser, a 16-node IBM x3850 cluster, will provide large-scale analytics for the supercomputer. Each Geyser node will have a terabyte of memory and house four 10-core Westmere EX processors plus an NVIDIA GPU.
The visualization cluster, Caldera, will also have 16 nodes, but in this case, each node has a much smaller memory footprint (64 GB), less CPU performance (two Sandy Bridge EP processors) and more graphics horsepower (two NVIDIA GPUs).
The third DAV system is an Intel Knights Corner-powered system. Again, it’s a 16-node cluster, with each node pairing two of the MIC coprocessors with two Sandy Bridge EP chips. Interestingly, that system is scheduled to be installed in November 2012, a few months before the Knights Corner parts are expected to be in volume production.
The new NCAR center will also house a data storage system, known as GLADE, which will act as a centralized file resource for Yellowstone and the DAV clusters. GLADE will be made up of 76 IBM DCS3700 storage servers and run GPFS. With 2TB disk drives, total usable storage capacity will be 10.9 petabytes. The next phase of the system, scheduled for Q1 2014, will incorporate 3TB drives and increase that capacity to 16.9 petabytes.
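The stated capacities imply a bit about the expansion plan. A quick calculation from the article's totals and drive sizes (the drive counts derived here are estimates; the article gives only aggregate figures):

```python
# Back-of-envelope look at GLADE's stated capacities.
# Only the totals and drive sizes come from the article; the derived
# drive count is an estimate.
usable_pb_phase1 = 10.9   # petabytes, with 2TB drives
drive_tb_phase1 = 2

# Usable capacity expressed as an equivalent number of 2TB drives
drives_phase1 = usable_pb_phase1 * 1000 / drive_tb_phase1
print(round(drives_phase1))        # about 5450 drives' worth of usable space

# If phase 2 merely swapped 2TB drives for 3TB ones, capacity would
# scale by 3/2:
naive_phase2 = usable_pb_phase1 * 3 / 2
print(round(naive_phase2, 2))      # 16.35 PB
```

Since the quoted phase-two figure is 16.9 petabytes rather than 16.35, the upgrade evidently adds spindles as well as swapping in larger drives.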
With petascale storage, compute, and visualization, the new NCAR facility will represent one of the more impressive HPC setups in the world when it comes online later this year.