Cray Strikes Balance with Next-Generation XC40 Supercomputer

By Nicole Hemsoth

September 30, 2014

This morning Cray unveiled full details of its next-generation supercomputer, the follow-on to the XC30 family, which serves as the backbone for a number of Top500 systems, including the top-ten-ranked “Piz Daint” machine.

The newly announced XC40 already serves as the underpinnings for the massive Trinity system at Los Alamos National Laboratory, the upcoming NERSC-8 “Cori” machine, and several other early release systems installed at iVEC and other locations. While the announcement of the system is not unexpected, since the aforementioned centers have already shared that they are installing “next generation” Cray systems, the meat is in the details. For instance, we knew about a “burst buffer” component on these machines, but knew little about the Cray-engineered I/O caching tier, not to mention the configuration options for snapping in the new Haswell (and future) chips, accelerators, or coprocessors.

Cray’s Jay Gould shed light on the XC40 for us, noting the machine’s early successes and where Cray hopes it will go at the high end. Gould says that many of the large-scale early-ship customers are using the configurability options on offer to build high core-count, high-frequency systems that take advantage of DDR4 memory as well as the new DataWarp I/O acceleration offering built into the XC40.

While early customers have been eager to take advantage of the cores available with the new Haswell processors, Cray is not offering the full range of SKUs Intel released recently with its Haswell news. While we’re still waiting on a list of what will be available, Gould did note that there are plenty of core-count, frequency, and thermal profiles to choose from, which is only part of the story around customization and configurability. With a 2X improvement in performance and scalability proven thus far over the XC30, Gould says that fine-tuning an XC40 for application performance needs is no different than it was with the XC30, and Cray has sought to make upgrades simple (including the ability to plug in new Broadwell cards when they arrive) while offering new boosts, including DDR4 memory.

Gould said that the 2X performance improvement figure over the XC30 was based on the 16-core, 2.6 GHz Intel (2693 v3) part, even though it might have gone higher with the 18-core variant. The reason, as you might have guessed, is all about heat. Even with the liquid-cooled systems, the 18-core chips were running too hot, although he says the 16-core chips offer a sweet spot between performance and thermal concerns. Cray is also offering a scaled-down, air-cooled version of the XC40 with 16 blades instead of the 48 blades of the liquid-cooled XC40, which users are already tapping to prove out their applications before moving to a full XC40 machine. It’s likely the same SKUs offered for the XC40 will be offered here as well, given the reduced density and better airflow.

But aside from configuration options, there is more to this machine than meets the eye, starting at the blade design level. With the arrival of the new Xeon E5 v3 series, Cray rethought its existing blade design to make sure it could balance all that compute with more memory. The shift to higher-capacity DDR4 DIMMs provides higher memory bandwidth per blade as well as more memory options, offering 64 GB to 256 GB per node.

The most distinctive feature of the XC40, however, is a combination of hardware and software. Cray has created a third tier to address high I/O demands called DataWarp. In essence, this is home-cooked application I/O technology designed to address the imbalances that continue to plague large-scale systems, where a performance and efficiency chasm separates the compute nodes and local memory from the parallel file systems and spinning disks. Currently, many sites end up overprovisioning their storage to handle peak I/O activity, which is expensive and inefficient, and which the “burst buffer” concept can address. Cray’s approach involves what you see below: SSDs on an “I/O blade” that can be inserted into a bank of compute nodes, providing ready access to I/O cache at the compute level without pushing all the data across the network to reach the file system and storage.

[Figures: DataWarp architecture diagrams]

This can be used in the burst buffer sense that Gary Grider, an early user of the system and this feature, described for us in detail not long ago. However, this is just one of the use cases possible with the DataWarp layer, hence Cray’s avoidance of calling it an actual burst buffer in any of its early literature on the system. The point is that it pushes “70,000 to 40 million IOPS” per system, a 5x performance improvement over a disk-based system for the same price. “Getting a proper balance of the compute and memory and the new DataWarp I/O and disk, we can rebalance those tiers and offer the fastest performance,” said Gould.
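The economics behind the burst buffer idea can be sketched with a toy model (this is purely illustrative; the class, numbers, and bandwidths below are hypothetical and are not Cray’s DataWarp implementation): compute nodes write a bursty checkpoint to the fast SSD tier and resume work, while the buffer drains to the slower parallel file system in the background, so the application stalls only for the fast write.

```python
class BurstBuffer:
    """Toy two-tier I/O model: fast SSD cache in front of a slow
    parallel file system (PFS). Bandwidths are in GB/s."""

    def __init__(self, ssd_bw_gbs, pfs_bw_gbs):
        self.ssd_bw = ssd_bw_gbs   # burst-absorbing SSD tier
        self.pfs_bw = pfs_bw_gbs   # backing parallel file system
        self.pending = []          # data buffered, awaiting drain

    def checkpoint(self, size_gb):
        """Absorb a burst write; return seconds the application stalls.
        The app only waits for the SSD write, not the disk write."""
        self.pending.append(size_gb)
        return size_gb / self.ssd_bw

    def drain(self):
        """Flush buffered data to the PFS; return drain time in seconds.
        This overlaps with computation, so the app does not wait for it."""
        total = sum(self.pending)
        self.pending.clear()
        return total / self.pfs_bw


# Hypothetical numbers: a 1 TB/s SSD tier over a 100 GB/s file system.
bb = BurstBuffer(ssd_bw_gbs=1000, pfs_bw_gbs=100)
stall = bb.checkpoint(5000)   # 5 TB checkpoint: app stalls 5 s, not 50 s
background = bb.drain()       # the 50 s drain is hidden behind compute
```

The same mechanism explains why sites no longer need to overprovision disk for peak I/O: the file system only has to sustain the average drain rate, while the SSD tier soaks up the bursts.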

We look forward to following up with more early users in a dedicated article exploring other use cases for the DataWarp capability, including at NERSC, which is using it for application acceleration as well as checkpoint/restart. We’re also expecting news of further deployments of this machine at global centers beyond those already public, including Trinity, Cori, and the iVEC system.

 
