June 11, 2012
The National Center for Atmospheric Research (NCAR) is building a new high performance computing center in Wyoming, just west of Cheyenne. The facility will host Yellowstone, a petascale supercomputer, as well as new storage, visualization, and data analytics clusters.
The machines will be used to support research in weather, climate, air pollution, earthquakes, carbon sequestration and water issues. The idea is to give Earth scientists access to much greater computing and storage capabilities so they can run more accurate simulations of atmospheric and geophysical phenomena.
The Republic covered the construction of the NCAR-Wyoming Supercomputing Center, where Yellowstone will be housed. The 153,000 square foot building costs roughly $70 million, funded by business groups, the state government and the NSF. The center is set to open on October 15th.
IBM won the bid to build the supercomputer, beating out three other competitors. Based on Big Blue’s iDataPlex server platform, the system will consist of 4,518 dual-socket Sandy Bridge EP nodes, amounting to 72,288 cores. Each 16-core node will be equipped with 32 GB of DDR3-1600 memory. The nodes will be hooked together with Mellanox FDR (56 Gbps) InfiniBand. The system is being installed now and is expected to come online by summer's end.
Delivering an estimated 1.55 peak petaflops, Yellowstone is expected to earn a top ten spot on the upcoming TOP500 list. That's about 30 times the performance of Bluefire, the NCAR supercomputer that Yellowstone is in line to replace.
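The quoted peak figure is roughly consistent with the core count. A back-of-the-envelope check, assuming a ~2.6 GHz Sandy Bridge EP clock and 8 double-precision flops per cycle per core with AVX (neither figure is stated in the article):

```python
# Rough peak-flops estimate for Yellowstone from its published node count.
# Clock speed and flops/cycle are assumptions, not from the article.
nodes = 4518
cores_per_node = 16
clock_ghz = 2.6          # assumed Sandy Bridge EP clock
flops_per_cycle = 8      # AVX: 8 double-precision flops per cycle per core

cores = nodes * cores_per_node
peak_pflops = cores * clock_ghz * flops_per_cycle / 1e6  # gigaflops -> petaflops

print(cores)                   # 72288, matching the article
print(round(peak_pflops, 2))   # ~1.5, in line with the quoted 1.55 peak
```

The small gap between ~1.5 and 1.55 petaflops would be explained by a slightly higher clock or turbo frequency than assumed here.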
Such power does not come cheap, though. The system is expected to cost between $25 and $30 million, which will be covered by the state and the University of Wyoming (UW). $20 million has been provided by the state, while the University will pay $1 million each year over the next 20 years.
As part of Yellowstone’s supporting cast are three data analysis and visualization (DAV) systems – Geyser, Caldera, and a Knights Corner cluster, which will be used to post-process the data produced by simulation runs. Like Yellowstone, all the clusters will be outfitted with FDR InfiniBand.
Geyser, a 16-node IBM x3850 cluster, will provide large-scale analytics for the supercomputer. Each Geyser node will have a terabyte of memory and house four 10-core Westmere EX processors plus an NVIDIA GPU.
The visualization cluster, Caldera, will also have 16 nodes, but in this case, each node has a much smaller memory footprint (64 GB), less CPU performance (two Sandy Bridge EP processors) and more graphics horsepower (two NVIDIA GPUs).
The third DAV system is an Intel Knights Corner-powered system. Again, it’s a 16-node cluster, with each node pairing two of the MIC coprocessors with two Sandy Bridge EP chips. Interestingly, that system is scheduled to be installed in November 2012, a few months before the Knights Corner parts are expected to be in volume production.
The new NCAR center will also house a data storage system, known as GLADE. It will act as a centralized file resource for Yellowstone and the DAV clusters. GLADE will be made up of 76 IBM DCS3700 storage servers and run GPFS. Using 2TB disk drives, total usable storage capacity will be 10.9 petabytes. The next phase of the system, scheduled for Q1 2014, will incorporate 3TB drives and increase that capacity to 16.9 petabytes.
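The jump in GLADE's capacity is mostly accounted for by the larger drives. A quick consistency check, assuming usable capacity scales linearly with drive size (the real expansion may also add enclosures, which would explain the remaining gap):

```python
# Consistency check on GLADE's published capacity figures.
# Assumes usable capacity scales linearly with drive size.
usable_pb_phase1 = 10.9   # petabytes usable with 2 TB drives
drive_tb_phase1 = 2
drive_tb_phase2 = 3

# Effective number of drive-equivalents behind the phase-1 figure
drive_equivalents = usable_pb_phase1 * 1000 / drive_tb_phase1  # ~5450

# Same drive population, swapped to 3 TB drives
usable_pb_phase2 = drive_equivalents * drive_tb_phase2 / 1000

print(round(usable_pb_phase2, 2))  # ~16.35 PB vs. the quoted 16.9 PB
```

The 3 TB swap alone gets within about half a petabyte of the quoted 16.9 PB, suggesting the 2014 phase also adds a modest amount of hardware.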
With petascale storage, compute and visualization, the new NCAR facility will represent one of the more impressive HPC setups in the world when it comes online later this year.