The NVIDIA GPU Technology Conference is in full swing today in San Jose, Calif. The annual event kicked off this morning with a keynote from NVIDIA CEO Jen-Hsun Huang, who revealed that the Swiss National Supercomputing Center (CSCS) is building Europe’s fastest GPU-accelerated supercomputer, an extension of a Cray system that was announced last year.
As Cray Vice President of Storage & Data Management Barry Bolding told HPCwire, this will be the first Cray supercomputer equipped with both Intel Xeon processors and NVIDIA GPUs.
CSCS is part of ETH Zurich, one of the top universities in the world and the alma mater of Albert Einstein. The supercomputing center installed phase one of its shiny new Cray XC30 back in December 2012.
The supercomputer, called “Piz Daint,” sports the latest Intel Xeon “Sandy Bridge” CPUs and Cray’s next-generation Aries system interconnect, and tops out at 750 teraflops of processing power. The 12-cabinet system is the largest Cray XC30 installation yet, with 2,256 compute nodes, each housing two Intel Xeon E5-2670 processors running at 2.60 GHz and 32 gigabytes of memory (70 terabytes of memory in total). With 16 cores per node, the system offers 36,096 physical cores, or 72,192 virtual cores with hyperthreading enabled. Compared to its predecessor, Monte Rosa, Piz Daint offers a five-fold increase in system bisection network bandwidth.
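These headline figures are easy to sanity-check. A quick back-of-the-envelope sketch (the 8 double-precision FLOPs per cycle per AVX-equipped Sandy Bridge core is our assumption, not a figure from the article):

```python
# Sanity-check Piz Daint phase-one numbers from the published specs.
nodes = 2256
cores_per_node = 2 * 8      # two 8-core Xeon E5-2670s per node
clock_ghz = 2.6
flops_per_cycle = 8         # assumed: AVX, 4-wide DP add + multiply

physical_cores = nodes * cores_per_node
virtual_cores = physical_cores * 2          # hyperthreading doubles thread count
memory_tb = nodes * 32 / 1024               # 32 GB per node, binary GB -> TB
peak_tflops = physical_cores * clock_ghz * flops_per_cycle / 1000

print(physical_cores)        # 36096
print(virtual_cores)         # 72192
print(round(memory_tb))      # 70
print(round(peak_tflops))    # ~751, consistent with "tops out at 750 teraflops"
```

The core and memory counts reproduce the article’s figures exactly, and the peak-FLOPs estimate lands within a teraflop of the quoted 750.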
The “phase two” system announced today will include the accelerative power of Tesla Kepler-based GPUs, specifically the souped-up K20X parts. The upgrades will rocket Piz Daint’s computational output into the petascale range, although it looks like we’ll have to wait for CSCS to reveal the exact performance metrics because Cray and NVIDIA aren’t disclosing this information. The GPU-bedazzled Piz Daint is expected to be in operation early next year.
CSCS is proud to be the owner of such a ground-breaking supercomputer, the first phase of which it expects to be in production mode as soon as next month. This Swiss pride extends to the naming of the machine, which was christened “Piz Daint” after a local mountain in the Swiss Ortler Alps. The CSCS datacenter also houses a Cray XE6 system named Monte Rosa, a Cray XMT called Matterhorn, and an IBM cluster that answers to Piz Julier. The prefix “Piz” is Romansch for “peak,” a naming convention characteristic of southeastern Switzerland.
CSCS is a very large general-purpose datacenter that runs all the traditional science codes – materials science, chemistry, physics, life sciences, and so on – but when it comes to this new system, the center is emphasizing the expected benefit for its weather and climate modeling work. Sumit Gupta, general manager of NVIDIA’s Tesla business, explains that the GPU-accelerated supercomputer will enable more accurate weather prediction for the Swiss Alps.
The Alps cover 65 percent of Switzerland’s surface area, creating a unique topography with many distinct microclimates. With small communities nestled in between the mountains, each town essentially has its own climate, so CSCS researchers want to run higher-resolution models with greater accuracy. Gupta reports that the acceleration offered by GPUs is compelling: for COSMO, a popular weather prediction application, NVIDIA says it expects key modules to run 3x faster using Tesla K20X GPUs.
According to an NVIDIA blog post, this speedup will allow CSCS to predict “local and national weather patterns days, or even weeks, ahead of time with the highest degree of accuracy.”
“Piz Daint will help advance our research into alpine climate and weather patterns by leaps and bounds,” said Thomas Schulthess, director of CSCS. “With GPU acceleration, researchers can run many more sophisticated, ultra-high-resolution models, giving us an unprecedented level of visibility and understanding into how these systems work.”
As for official benchmarks, Cray’s Bolding notes that there are no GPU- or Xeon Phi-equipped XC30 systems in production yet, but Cray does have a large installed base of hybrid GPU systems that are benchmarked every day on real applications. He says there is also a lot of science data coming out of Oak Ridge National Laboratory and NCSA on very large-scale GPU performance.
“There are a lot of research groups tuning their codes and applications for GPUs,” says Bolding. He adds that the XC line was designed to scale and to accommodate blades with x86 processors, Intel Xeon Phi (MIC) coprocessors, or GPUs, and customers can easily upgrade using any of these parts. The Pawsey Centre in Australia is gearing up to install a Cray XC30 with Xeon Phi coprocessors, and Cray anticipates that more of its XC customers will take advantage of this upgrade path.
Compared to previous Cray machines, what sets the XC30 apart are its Intel processors and Cray’s next-generation Aries interconnect. “This is the first time that we’ve put GPUs with the XC30. So compared to Titan and Blue Waters, there’s a move from AMD to Intel and a move from our second-generation interconnect (Gemini) to our third-generation interconnect (Aries),” says Bolding.
Bolding adds that Aries is a much more powerful, high-bandwidth interconnect able to keep pace with the current best-of-class coprocessors.
Die shot of the Aries interconnect
He explains that the more powerful each compute node on a network becomes, the more bandwidth is needed. As nodes complete pieces of their jobs more quickly, they need to communicate over the network more often. Faster coprocessors thus put a heavier burden on the interconnect, which is what makes Aries such a critical enabler.
When it comes to big I/O applications, storage is also important. Cray is seeing many customers driving much higher data rates out of their large systems as part of a continuing trend toward faster I/O. CSCS has a Cray Sonexion storage system with more than two petabytes of usable capacity and more than 100 gigabytes per second of sustained aggregate I/O performance. Sonexion is Cray’s flagship high-performance storage technology: the disks, the software, and the Lustre file system are all integrated in a single product.
Cray and CSCS have a relationship that goes back many years, with the center installing early versions of Cray hardware, explains Bolding, who calls CSCS “one of the best partners we have for bringing in serial number one versions of our systems.” In addition to being one of the first Cascade partners, CSCS installed the first Cray XE6 machine and was the launch customer for the XK6 system, Cray’s first GPU-accelerated supercomputer.
According to the press release put out by Cray, the latest project is “valued at” over $32 million. It’s reasonable to speculate that CSCS received a discount on account of its launch-partner status and its help troubleshooting this new hybrid design.
As for what architecture the Cray XC30 is most similar to, Bolding gives the question careful consideration. “To be honest, it’s the most similar to future exascale architectures,” he says with a laugh that suggests he’s aware it sounds like marketing hype, but Bolding is sincere.
“Cray is driving both the topology – what we call our Dragonfly topology – a set of software, and an interconnect with Aries, with its global addressability,” he says. “It’s really designed to be a prototype for future exascale systems; it’s more related to what’s to come than what’s behind it.”
On a roll now, Bolding asserts that this topology is more scalable than InfiniBand, with a much more efficient network architecture than commodity architectures, yet it still supports the commodity processors.
“We believe that if we can follow the commodity processor roadmap and put those into this next-generation architecture, we can give the community the scalable tools to build applications for exascale. That’s really our goal here,” says Bolding.