NERSC Signs Up for Multi-Petaflop “Cascade” Supercomputer

By Michael Feldman

July 3, 2012

The US Department of Energy’s National Energy Research Scientific Computing Center (NERSC) has ordered a two-petaflop “Cascade” supercomputer, Cray’s next-generation HPC platform. The DOE is shelling out $40 million for the system, including about 6.5 petabytes of the company’s Sonexion storage. The contract covers both hardware and services, which will extend over multiple years. Installation is scheduled for sometime in 2013.

The NERSC acquisition represents Cray’s third publicly announced pre-sale of a Cascade system and the first in the US. The other two deals in the pipeline include a multi-petaflop machine destined for HLRS, at the University of Stuttgart, and a 400-teraflop one for Kyoto University.

Cascade is a big step for Cray. Not only does it represent the company’s first foray into Intel-based supercomputing, but it also fills out Cray’s Adaptive Supercomputing vision to a much greater degree than the previous XT and XE product lines. DARPA, which poured hundreds of millions of dollars into the design via the agency’s High Productivity Computing Systems (HPCS) program, helped to make Cascade a much bigger deal than just a platform refresh.

For example, a good portion of the funding went into developing more sophisticated compilers, tools and libraries, including the creation of the Chapel language, all aimed at making the platform more productive and easier to use. The extra money also gave Cray the breathing room for a critical system redesign, in particular the opportunity to ditch its AMD Opteron-only architecture.

Although much of the talk surrounding Cascade has been about putting Intel silicon into Cray hardware, the platform is actually designed to support multiple processor types. According to Cray CEO Peter Ungaro, they’ll be able to build blades with AMD processors, as they do now, as well as those with accelerators, like GPUs and Intel MIC (Xeon Phi) coprocessors, and even blades with future ARM chips, if they so desire. “It’s really going to open up our options to have targeted nodes for targeted workloads,” he told HPCwire.

The key is the new Aries interconnect, which is integrated with PCI Express (PCIe), a standard on-board bus that virtually all server processors will support. Prior to this, Cray’s interconnect technology (SeaStar, then Gemini) was tied to HyperTransport, which restricted the company’s supercomputers to AMD CPUs. With the faster speeds of PCIe 3.0, and its ubiquity, the bus technology is now in a position to serve as the underlying substrate for system networks, even for custom interconnects.

All of this potential heterogeneity is likely to be bypassed by NERSC though, at least initially. At a time when many other national labs are opting for GPUs on their fastest machines, NERSC-7 will be based entirely on Intel Xeon CPUs. No GPU or Intel MIC parts are to be used, although future upgrades with those accelerators are theoretically possible. According to Jeff Broughton, who heads NERSC’s Systems Department, the deployment will be based on “the latest generation of Intel processors available at the time of installation.” Given the 2013 timeframe, those chips could very well be Ivy Bridge CPUs rather than the Sandy Bridge parts in the field today.

By going with the more traditional CPU-only platform for NERSC’s first multi-petaflop super, the DOE lab has bucked a trend begun by other big HPC centers like Oak Ridge, NCSA, and TACC, which are using GPUs or, in the case of TACC, Intel MIC accelerators, to get into the double-digit petaflop realm. NERSC-7 was also originally supposed to be a 10-petaflop machine, but getting there via x86 CPUs (that is, not with an IBM Blue Gene or Fujitsu K-type architecture) is not really economically feasible right now without accelerator add-ons.
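Some rough back-of-the-envelope arithmetic (our estimate, not a figure from NERSC or Cray) shows why: a top-bin Sandy Bridge Xeon peaks at roughly 0.17 double-precision teraflops (2.6 GHz × 8 cores × 8 flops per cycle), so 10 peak petaflops would demand something on the order of 60,000 CPU sockets, while a single GPU or MIC card delivers about a teraflop all by itself.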

According to NERSC director Kathy Yelick, the lab supports 4,500 users running hundreds of different codes across many science disciplines, and there is concern about forcing all that software to be rewritten for PCIe-based GPUs or Intel MIC devices. “Current accelerators have a separate memory space and are configured as coprocessors rather than first-class cores, both features that we are hoping will change,” she explained. “So while we are encouraging users to experiment with low-power processor technology, such as GPUs, in our testbeds, we do not think the time is right to transition all of the users.”
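Her point about separate memory spaces is easy to see in code. Below is a minimal, hypothetical sketch (invented for illustration, not taken from any NERSC application) of offloading a trivial array operation to a discrete GPU. On a CPU-only machine like NERSC-7, the loop body is essentially the whole program; on a PCIe-attached accelerator, every kernel also drags along allocation, transfer, and launch boilerplate (error checking omitted for brevity).

```cuda
// Hypothetical sketch: scaling an array on a GPU with a separate memory
// space. Compile with: nvcc scale.cu
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// The science kernel itself: one line of real work per element.
__global__ void scale(double *x, double a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] *= a;
}

int main(void)
{
    const int n = 1 << 20;                    // problem size (illustrative)
    const size_t bytes = n * sizeof(double);

    double *h_x = (double *)malloc(bytes);    // host (CPU) memory
    for (int i = 0; i < n; i++)
        h_x[i] = 1.0;

    // Everything below is offload overhead a CPU-only code never writes:
    double *d_x = NULL;
    cudaMalloc(&d_x, bytes);                              // device memory
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);  // ship data over PCIe
    scale<<<(n + 255) / 256, 256>>>(d_x, 2.0, n);         // run on the GPU
    cudaMemcpy(h_x, d_x, bytes, cudaMemcpyDeviceToHost);  // ship results back
    cudaFree(d_x);

    printf("x[0] = %f (expected 2.0)\n", h_x[0]);         // sanity check
    free(h_x);
    return 0;
}
```

Multiply that boilerplate, and the data-movement tuning it implies, across hundreds of codes and 4,500 users, and the lab’s caution becomes easy to understand.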

NERSC does expect to move its users to some type of low-power manycore architecture over the next several years, but would like to make that transition just once. The first opportunity is likely to present itself with NERSC-8, the next major system procurement following NERSC-7. By the time that system is deployed a few years down the road, the system planners are probably thinking (or at least hoping) there will be a range of integrated low-power manycore architectures to choose from.

That’s a reasonable bet. Certainly, by the middle of the decade, we should at least see the appearance of NVIDIA’s ARM64-GPU “Maxwell” processor, an AMD server-class APU, and an Intel MIC chip integrated with some big Xeon CPU cores.

In the meantime, it should be relatively straightforward to run current user codes on NERSC-7 hardware, since the lab’s existing petascale machine, Hopper, is a Cray XE6 system and, from an application point of view, will be nearly indistinguishable from its successor. Getting those codes to scale up to a machine with about twice the performance of Hopper could be something of a challenge, but NERSC sees many potential candidates, both for simulation (LQCD, fusion, turbulence, astrophysics, chemistry, quantum Monte Carlo, molecular dynamics and cloud resolving climate models) and data analysis (bioinformatics and material screening). Of course, few if any applications are expected to use the full two petaflops, but these big machines also function quite nicely as capacity clusters.

NERSC is likely to be only one of a number of US national labs signing up for Cascade supercomputers over the next few years. Given DARPA’s DoD pedigree, we should expect, at the very least, to see some defense labs acquire these next-generation Cray machines as they upgrade their HPC machinery.

Cascade will also be an opportunity for Cray to re-establish its dominance at the top of the supercomputing heap in the face of renewed competition from IBM. In the world’s top 100 systems, Blue Gene supercomputers are now the most numerous single platform, outdistancing Cray XT/XE installations by a 21 to 17 margin. That lead is the result of the surge of Blue Gene/Q deployments over the last six months, in which the new IBM platform captured a lot of fresh business as it squared off against the now two-year-old Cray XE6.

Cray is certainly expecting great things from Cascade. Over the past eight years, the company has managed to steadily expand sales of its x86 supercomputing portfolio. Starting with its Red Storm supercomputer in 2004, which led to the company’s first commercial x86-based product, XT3, and then to subsequent platforms, XT4, XT5, XT6 and XE6/XK6, Cray has sold more cabinets with each successive generation. “If we keep that trend going,” says Ungaro, “we’ll be in good shape.”
