Cray Sets Sights On Cascade Supercomputer, Exascale Milestone

By Michael Feldman

June 10, 2010

Cray’s recent unveiling of its XE6 supercomputer, previously codenamed “Baker,” marks the beginning of a larger strategy that lays the foundation for the company’s future heterogeneous supercomputing products. At last week’s International Supercomputing Conference (ISC) in Hamburg, HPCwire sat down with Cray CTO Steve Scott to talk about life after Baker, where he revealed the company’s plans for its upcoming “Cascade” supercomputer and how the exascale landscape is shaping up.

Cascade will be Cray’s first capability supercomputer based on Intel x86 processors. Starting with the XT3 machine in 2004, all of the company’s non-proprietary top-end supers have been built with AMD Opteron CPUs. According to Scott, the first Cascade system delivered will sport Xeon-powered server blades, but the intention is to eventually support AMD Opteron processors on this architecture as well. Like most HPC vendors, Cray appears committed to following this dual-x86 product path.

The development of Cascade is being subsidized by DARPA’s HPCS (High Productivity Computing Systems) program. The third and final phase of the contract with Cray set aside $250 million to help the company complete development of the hardware and the supporting system software. (IBM was allocated $244 million for its corresponding PERCS system.) According to Scott, Cascade is currently on track to be delivered sometime in the second half of 2012. Specific product timetables for the Opteron version are still to be determined, and will ultimately depend upon customer demand as well as AMD’s processor schedule.

A new system interconnect, codenamed “Aries,” is being developed for the Cascade-class machines. To support a dual Intel-AMD strategy on this architecture, Cray will use PCI-Express as the processor interface to the interconnect ASIC. The current SeaStar interconnect and the new Gemini interconnect are both tied to the Opteron’s native HyperTransport link. While it might seem natural to think that Cray would hook into Intel’s QPI for network connectivity on a Xeon-based machine, opting for PCI-Express means Cray can support the same network across both processor architectures, and any future ones as well. According to Scott, the company is looking to tape out the Aries chip by the end of 2010.

For Cray, Cascade represents a fairly significant break with the XT/XE line of supercomputers, which have maintained a smooth hardware upgrade path for the past six years. Although the software stack and application codes can be carried forward onto Cascade, the reworked hardware architecture means users will no longer be able to extend their XT or XE infrastructure with this new technology.

Cascade will also have an accelerator blade to go along with the x86-based blades. Originally, this component was to be developed under the HPCS contract, but the work was canceled, culminating in a renegotiation late in 2009 that reduced the contract’s scope. According to Scott, Cray had been working with Intel on the technology, but as of now the company is undecided about which accelerator will end up in the Cascade product line. The most likely candidates include NVIDIA’s Tesla GPUs, AMD’s FireStream GPUs, and Intel’s “Many Integrated Core” (MIC) coprocessor, which was announced at ISC last week. At present, Cray is talking with all three vendors about the roadmaps for their respective accelerator solutions.

The XE6 supercomputer, slated for delivery in Q3 2010, will also get an accelerator blade, said Scott, who confirmed that it will be based on the latest NVIDIA Tesla-20 (Fermi) GPUs, which are just coming into production. The release date for the XE6 accelerator blade option is still under wraps, but it’s reasonable to think it will be announced before the end of the year. Cray also partnered with NVIDIA to put in a bid for DARPA’s Ubiquitous High Performance Computing (UHPC) program for “ExtremeScale architectures,” which is aimed at innovative terascale to petascale supercomputing systems.

Accelerators appear to be a big part of Cray’s strategy going forward. “In the long run we’re going to have to change the trajectory,” said Scott. “Plain old multicore x86 won’t do it. Most of the heavy lifting is going to have to be done by processors that are specifically designed, first order, for power efficiency, not for running single threads fast. So we’re going to need heterogeneity in some form.”

Right now, software support for accelerators is in its infancy, so Scott is not expecting the HPC community to shift en masse to this new computing model overnight. Even after the XE6 accelerator blades hit the streets, Scott expects the majority of systems sold will be straight Opteron-based machines. “Over time that’s going to shift,” said Scott. “I would predict five years from now, the bulk of serious HPC is going to be done with some kind of accelerated heterogeneous architecture.”

Further down the road, heterogeneous processing will form the foundation of Cray’s exascale architectures. In 2018, the year Scott predicts Cray will have a machine that can deliver a sustained application exaflop, heterogeneous computing will likely be much more highly integrated. According to Scott, CPU-GPU hybrid processors (or the equivalent), along the lines of AMD’s Fusion architecture, will be generally available and powerful enough to form the basis of HPC machines. He believes both NVIDIA and Intel will be offering their own versions of integrated CPU-accelerator chips. “That’s clearly the direction to take,” he asserted. “The more tightly you can couple those two different types of processors together, the better off we’ll be.”

He also foresees optical interconnects integrated directly into the chip package, possibly with some electrical interconnect on the board, as well as very low-diameter networks that avoid expending a lot of power retransmitting data. In addition, Scott envisions another level of memory between the off-chip DIMMs and on-chip cache, implemented perhaps with 3D stacking technology, the idea being to substantially increase the bandwidth to the processors while reducing power. “It’s not like it’s going to be easy,” noted Scott. “But I think there’s definitely a path.”

As far as what lies beyond exascale, Cray has nothing on the drawing board yet, but neither does anyone else. Assuming historical trends hold, which works out to roughly a thousandfold gain in top-end performance per decade, the first zettaflop systems will show up around 2028. But they are likely to be based on technologies that have yet to make it out of the research lab.
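For readers who want to sanity-check that extrapolation, here is a minimal back-of-the-envelope sketch, not from the article itself; it simply assumes the roughly 1,000x-per-decade trend implied by the petaflop milestone around 2008 and Scott’s projected sustained exaflop around 2018.

```python
import math

# Back-of-the-envelope projection of top-end HPC milestones, assuming the
# historical trend of roughly a 1,000x performance gain per decade continues.
# Anchor points (assumptions for illustration): ~1 petaflop around 2008,
# Scott's projected sustained exaflop around 2018.
EXAFLOP_YEAR = 2018
GROWTH_PER_DECADE = 1000.0

def projected_year(factor_beyond_exaflop, base_year=EXAFLOP_YEAR):
    """Year at which top-end performance reaches `factor_beyond_exaflop`
    times an exaflop, assuming constant exponential growth per decade."""
    decades = math.log(factor_beyond_exaflop, GROWTH_PER_DECADE)
    return base_year + 10 * decades

print(projected_year(1000))  # zettaflop: 2028.0
```

Run backward, the same arithmetic lines up roughly with the teraflop (late 1990s) and petaflop (2008) milestones, which is the historical trend the projection leans on.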

“I do think that exascale is going to be the last one that we’re going to get to with traditional silicon technology,” said Scott. “I don’t know what’s going to be next, but if you look back 100 years, we’ve gone from mechanical tabulating machines, to electro-mechanical relays, to vacuum tubes, to discrete transistors, to integrated circuits. If you look at that history you see a straight line of performance growth through multiple technology transitions. That doesn’t prove a damn thing. But it gives me some sort of hope that we’ll come up with something post-silicon ICs to take us forward.”
