June 11, 2012
The National Center for Atmospheric Research (NCAR) is building a new high performance computing center in Wyoming, just west of Cheyenne. The facility will host Yellowstone, a petascale supercomputer, as well as new storage, visualization, and data analytics clusters.
The machines will be used to support research in weather, climate, air pollution, earthquakes, carbon sequestration, and water issues. The idea is to give Earth scientists access to much greater computing and storage capabilities so they can run more accurate atmospheric and geophysical simulations.
The Republic covered the construction of the NCAR-Wyoming Supercomputing Center, where Yellowstone will be housed. The 153,000-square-foot building costs roughly $70 million, funded by business groups, the state government, and the NSF. The center is set to open on October 15.
IBM won the bid to build the supercomputer, beating out three other competitors. Based on Big Blue’s iDataPlex server platform, the system will consist of 4,518 dual-socket Sandy Bridge EP nodes, amounting to 72,288 cores. Each 16-core node will be equipped with 32 GB of DDR3-1600 memory. The nodes will be hooked together with Mellanox FDR (56 Gbps) InfiniBand. The system is being installed now and is expected to come online by summer's end.
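A quick back-of-the-envelope check of those figures is below. This is only arithmetic on the numbers quoted above; the per-core value is simply the quoted 1.55-petaflop peak divided across the cores, not an official processor spec.

```python
# Back-of-the-envelope totals for Yellowstone, using only the figures quoted above.
nodes = 4518                      # dual-socket Sandy Bridge EP nodes
cores_per_node = 16               # 2 sockets x 8 cores
mem_per_node_gb = 32              # DDR3-1600 per node

total_cores = nodes * cores_per_node            # 72,288 cores, matching the article
total_mem_tib = nodes * mem_per_node_gb / 1024  # ~141 TiB of aggregate memory

peak_pflops = 1.55                               # quoted peak performance
gflops_per_core = peak_pflops * 1e6 / total_cores  # ~21 GF/s per core (derived, not a spec)

print(f"total cores: {total_cores:,}")
print(f"aggregate memory: {total_mem_tib:.1f} TiB")
print(f"implied peak per core: {gflops_per_core:.1f} GF/s")
```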
Delivering an estimated 1.55 peak petaflops, Yellowstone is expected to earn a top ten spot on the upcoming TOP500 list. As such, it will deliver about 30 times the performance of Bluefire, the NCAR supercomputer that Yellowstone is in line to replace.
Such power does not come cheap, though. The system is expected to cost between $25 million and $30 million, which will be covered by the state and the University of Wyoming (UW). The state has provided $20 million, while the university will pay $1 million per year over the next 20 years.
Yellowstone's supporting cast includes three data analysis and visualization (DAV) systems: Geyser, Caldera, and a Knights Corner cluster, which will be used to post-process the data produced by simulation runs. Like Yellowstone, all the clusters will be outfitted with FDR InfiniBand.
Geyser, a 16-node IBM x3850 cluster, will provide large-scale analytics for the supercomputer. Each Geyser node will have a terabyte of memory and house four 10-core Westmere EX processors plus an NVIDIA GPU.
The visualization cluster, Caldera, will also have 16 nodes, but in this case, each node has a much smaller memory footprint (64 GB), less CPU performance (two Sandy Bridge EP processors) and more graphics horsepower (two NVIDIA GPUs).
The third DAV system is an Intel Knights Corner-powered system. Again, it’s a 16-node cluster, with each node pairing two of the MIC coprocessors with two Sandy Bridge EP chips. Interestingly, that system is scheduled to be installed in November 2012, a few months before the Knights Corner parts are expected to be in volume production.
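For reference, the three DAV clusters can be summarized as follows. This is just a restatement of the figures above in data form; fields the article does not specify (such as GPU models or per-node memory on the Knights Corner cluster) are left unfilled.

```python
# DAV clusters at the NWSC, per the figures quoted above.
dav_clusters = {
    "Geyser": {
        "nodes": 16,
        "cpus_per_node": "4x 10-core Westmere EX",
        "memory_per_node": "1 TB",
        "accelerators_per_node": "1x NVIDIA GPU",  # model not specified in the article
        "role": "large-scale data analysis",
    },
    "Caldera": {
        "nodes": 16,
        "cpus_per_node": "2x Sandy Bridge EP",
        "memory_per_node": "64 GB",
        "accelerators_per_node": "2x NVIDIA GPU",  # model not specified in the article
        "role": "visualization",
    },
    "Knights Corner cluster": {
        "nodes": 16,
        "cpus_per_node": "2x Sandy Bridge EP",
        "memory_per_node": None,  # not stated in the article
        "accelerators_per_node": "2x Intel MIC (Knights Corner) coprocessor",
        "role": "post-processing on many-core coprocessors",
    },
}
```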
The new NCAR center will also house a data storage system, known as GLADE. It will act as a centralized file resource for Yellowstone and the DAV clusters. GLADE will be made up of 76 IBM DCS3700 storage servers and run GPFS. Using 2TB disk drives, total usable storage capacity will be 10.9 petabytes. The next phase of the system, scheduled for Q1 2014, will incorporate 3TB drives and increase that capacity to 16.9 petabytes.
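A rough per-server figure can be derived from those capacities. Again, this is only arithmetic on the quoted numbers; it does not account for how RAID and filesystem overhead map raw drive capacity to usable space.

```python
# Rough per-server usable capacity for GLADE, using only the quoted figures.
servers = 76
usable_pb_2012 = 10.9   # initial build with 2 TB drives
usable_pb_2014 = 16.9   # planned Q1 2014 expansion with 3 TB drives

tb_per_server_2012 = usable_pb_2012 * 1000 / servers  # ~143 TB usable per DCS3700
growth = usable_pb_2014 / usable_pb_2012               # ~1.55x capacity increase

print(f"usable capacity per server (2012): {tb_per_server_2012:.0f} TB")
print(f"planned capacity growth: {growth:.2f}x")
```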
With petascale compute, storage, and visualization, the new NCAR facility will represent one of the more impressive HPC setups in the world when it comes online later this year.