Cray Strikes Balance with Next-Generation XC40 Supercomputer

By Nicole Hemsoth

September 30, 2014

This morning Cray unveiled the full details of its next-generation supercomputer, the follow-on to the XC30 family, which serves as the backbone for a number of Top500 systems, including the top-ten-ranked “Piz Daint” machine.

The newly announced XC40 already serves as the underpinnings for the massive Trinity system at Los Alamos National Laboratory, the upcoming NERSC-8 “Cori” machine, and several other early-release systems installed at iVEC and other locations. While the announcement is not unexpected, since the aforementioned centers have already shared that they are installing “next generation” Cray systems, the meat is in the details. For instance, we knew about a “burst buffer” component on these machines, but knew little about the Cray-engineered I/O caching tier, not to mention the configuration options for snapping in the new Haswell (and future) chips, accelerators, or coprocessors.

Cray’s Jay Gould shed light on the XC40 for us, noting the early successes of the machine and what they hope its trajectory will be at the high end. Gould says that many of the large-scale early ship customers are using the configurability options offered to build high core-count, high frequency systems that take advantage of DDR4 memory options as well as the new DataWarp I/O acceleration offering built into XC40.

While early customers have been eager to take advantage of the cores available with the new Haswell processors, Cray is not offering the full range of SKUs Intel released with its recent Haswell announcement. While we’re still waiting on a list of what will be available, Gould did note that there are plenty of core-count, frequency, and thermal profiles to choose from, which is only part of Cray’s story around customization and configurability. With a 2X improvement in performance and scalability proven thus far over the XC30, Gould says that fine-tuning an XC40 for application performance needs is no different than it was with the XC30. Cray has also sought to make upgrades simple (including the ability to plug in the new Broadwell cards when they arrive) while offering new boosts such as DDR4 memory.

Gould said that when Cray arrived at the figure of a 2X performance improvement over the XC30, it was based on the 16-core, 2.6 GHz Intel (2693 v3) part, even though the number might have gone higher with the 18-core variant. The reason, as you might have guessed, is all about heat. Even with the liquid-cooled systems, the 18-core chips were running too hot, although he says the 16-core chips offer a sweet spot between performance and thermal concerns. Cray is also offering a scaled-down, air-cooled version of the XC40 with 16 blades instead of the 48 blades in the liquid-cooled XC40 cabinet, which users are already tapping to prove out their applications before moving to a full XC40 machine. It’s likely the same SKUs offered for the XC40 will be available here as well, given the reduced density and better airflow.

But aside from configuration options, there is more to this machine than meets the eye, starting at the blade design level. With the arrival of the new Xeon E5 v3 series, Cray set about rethinking its existing blade design to make sure it could balance all that compute with more memory. The company has made the shift to higher-capacity DDR4 DIMMs, which provide higher memory bandwidth per blade as well as more memory options, offering 64 to 256 GB per node.

The most unique feature of the XC40, however, is a combination of hardware and software. Cray has created a third tier, called DataWarp, to address high I/O demands. In essence, this is homegrown application I/O technology designed to address the imbalances that continue to plague large-scale systems, where a performance and efficiency chasm separates the compute nodes and local memory from the parallel file systems and spinning disks. Currently, many sites end up overprovisioning their storage to handle peak I/O activity, which is expensive and inefficient, and which the “burst buffer” concept is designed to address. Cray’s approach, shown below, places SSDs on an “I/O blade” that can be inserted into a bank of compute nodes, providing ready access to I/O cache at the compute level without pushing all the data across the network to reach the file system and storage.

[Figures: DataWarp architecture diagrams]
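The tiering idea described above can be sketched in a few lines of code. The following is a minimal, hypothetical model (not Cray’s actual DataWarp API or implementation): bursty writes land immediately in a fast staging tier, and a background thread drains them to a slower backing store, so the writer never stalls on disk latency.

```python
# Hypothetical sketch of the burst-buffer idea (not the DataWarp API):
# writes land in a fast SSD-like tier and return immediately, while a
# background thread drains staged data to the slow parallel file system.
import queue
import threading
import time


class BurstBuffer:
    def __init__(self, drain_fn):
        self._staged = queue.Ueue() if False else queue.Queue()  # fast tier (modeled in memory)
        self._drain_fn = drain_fn  # slow parallel-file-system write
        worker = threading.Thread(target=self._drain_loop, daemon=True)
        worker.start()

    def write(self, name, data):
        """Caller returns as soon as data lands in the fast tier."""
        self._staged.put((name, data))

    def _drain_loop(self):
        while True:
            name, data = self._staged.get()
            self._drain_fn(name, data)  # e.g., write to Lustre
            self._staged.task_done()

    def flush(self):
        """Block until everything staged has reached the backing store."""
        self._staged.join()


# Usage: the checkpoint is absorbed instantly; the slow backend catches up.
backend = {}


def slow_fs_write(name, data):
    time.sleep(0.01)  # stand-in for spinning-disk latency
    backend[name] = data


bb = BurstBuffer(slow_fs_write)
bb.write("checkpoint_0001", b"application state")
bb.flush()
assert backend["checkpoint_0001"] == b"application state"
```

The point of the sketch is the decoupling: the application sees the latency of the fast tier, while the file system sees a smoothed-out drain rather than a peak burst, which is exactly why sites would no longer need to overprovision disk for peak I/O.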

This can be used in the burst buffer sense that Gary Grider, an early user of the system and this feature, described for us in detail not long ago. However, that is just one of the use cases possible with the DataWarp layer, hence Cray’s avoidance of the term “burst buffer” in any of its early literature on the system. The point, Gould says, is that it pushes “70,000 to 40 million IOPS” per system, a 5X performance improvement over a disk-based system for the same price. “Getting a proper balance of the compute and memory and the new DataWarp I/O and disk, we can rebalance those tiers and offer the fastest performance,” said Gould.

We look forward to following up with more early users to explore a few of the other use cases of the DataWarp capability in a dedicated article, including NERSC, which is using it for application acceleration as well as checkpoint/restart. We’re also expecting news of a few more deployments of this machine at global centers beyond those already public, including Trinity, Cori, and the iVEC system.

 
