Titan Sets High Water Mark for GPU Supercomputing

By Michael Feldman

October 29, 2012

Oak Ridge National Laboratory (ORNL) has officially launched its much-anticipated Titan supercomputer, a Cray XK7 machine that will challenge IBM’s Sequoia for petaflop supremacy. With Titan, ORNL gets a system that is 10 times as powerful as Jaguar, the lab’s previous top system upon which the new machine is based. With a reported 27 peak petaflops, Titan now represents the most powerful number-cruncher in the world.

The 10-fold performance leap from Jaguar to Titan is courtesy of NVIDIA’s brand new K20 processors – the Kepler GPU that will be formally released sometime before the end of the year. Although the Titan upgrade also includes AMD’s latest 16-core Opteron CPUs, the lion’s share of the FLOPS will be derived from the NVIDIA chips.

In the conversion from Jaguar, a Cray XT5, ORNL essentially gutted the existing 200 cabinets and retrofitted them with nearly ten thousand XK7 blades. Each blade houses two nodes, each of which pairs a 16-core Opteron 6274 CPU with a Tesla K20 GPU module. The x86 Opteron chips run at a respectable 2.2 GHz, while the K20 hums along at a more leisurely 732 MHz. But because of the highly parallel nature of the GPU architecture, the K20 delivers around 10 times the FLOPS of its CPU companion. (Using the 27 peak PF value for Titan, a back-of-the-envelope calculation puts the new K20 at about 1.2-1.3 double precision teraflops.)
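
For readers who want to check that arithmetic, here is a minimal sketch in Python. The 18,688-node count is the widely reported Titan configuration and is an assumption here, not a figure from the text above; the Opteron peak assumes 4 double precision FLOPs per cycle per core.

    # Back-of-the-envelope estimate of the K20's double precision peak.
    NODES = 18688                  # assumed: one Opteron 6274 + one K20 per node
    TITAN_PEAK_TF = 27000.0        # 27 petaflops, expressed in teraflops

    cpu_tf = 16 * 2.2e9 * 4 / 1e12         # ~0.14 DP TF per Opteron 6274
    per_node_tf = TITAN_PEAK_TF / NODES    # ~1.44 TF per node
    gpu_tf = per_node_tf - cpu_tf          # remainder attributed to the K20

    print(f"K20 estimate: {gpu_tf:.2f} TF, {gpu_tf / cpu_tf:.0f}x the CPU")
    # -> K20 estimate: 1.30 TF, 9x the CPU

That lands squarely in the 1.2-1.3 teraflop range quoted above, with the GPU supplying roughly an order of magnitude more FLOPS than its CPU companion.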

Thanks to the energy efficiency of the K20, which NVIDIA claims is three times as efficient as its previous-generation Fermi GPU, Titan draws a mere 12.7 MW to power the whole system. That’s especially impressive when you consider that the x86-only Jaguar required 7 megawatts for a mere tenth of the FLOPS.

It would appear, though, that IBM’s Blue Gene/Q may retain the crown for energy-efficient supercomputing. The Sequoia system at Lawrence Livermore National Laboratory draws just 7.9 MW to power its 20 peak petaflops. However, it’s a bit of an apples-and-oranges comparison. That 7.9 MW is actually the power draw for Sequoia’s Linpack run, which topped out at 16 petaflops. Since we don’t have the Linpack results for Titan just yet, it’s hard to tell if the GPU super will be able to come out ahead of the Blue Gene/Q platform.
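
A rough peak-based comparison is easy enough to sketch from the numbers quoted here. The Jaguar peak of 2.3 petaflops is an assumption based on the "tenth of the FLOPS" figure above, and, as noted, Sequoia's power number comes from its Linpack run, so these figures are indicative only:

    # Peak FLOPS per watt from the figures quoted in this article.
    systems = {
        "Titan":   (27.0, 12.7),   # (peak petaflops, megawatts)
        "Sequoia": (20.0,  7.9),   # power measured during Linpack
        "Jaguar":  ( 2.3,  7.0),   # assumed: ~a tenth of Titan's FLOPS
    }
    for name, (pf, mw) in systems.items():
        print(f"{name:8s} {pf * 1e3 / mw:5.0f} peak MFLOPS/W")
    # -> Titan ~2126, Sequoia ~2532, Jaguar ~329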

For a multi-petaflopper, Titan is a little shy on memory capacity, claiming just 710 terabytes – 598 TB on the CPU side and 112 TB for the GPUs. The FLOPS-similar Sequoia has more than twice that – nearly 1.6 petabytes. Back in the day, the goal for balanced supercomputing was at least one byte of memory for every FLOP, but that era is long gone.

Titan provides around 1/40 of a byte per FLOP, and from the GPU’s point of view, most of that memory sits on the wrong side of the PCIe bus – that is, next to the CPU. Welcome to the new normal.
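
The ratio is simple to verify from the totals above:

    # Bytes per FLOP from Titan's quoted memory and peak figures.
    mem_bytes  = 710e12    # 710 TB (598 TB CPU + 112 TB GPU)
    peak_flops = 27e15     # 27 petaflops
    ratio = mem_bytes / peak_flops
    print(f"{ratio:.4f} bytes/FLOP, about 1/{1 / ratio:.0f}")
    # -> ~0.0263 bytes/FLOP, roughly 1/38 -- "around 1/40" as stated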

Titan is more generous with disk space, though: 13.6 PB in all, although again, a good deal less than that of its Sequoia cousin at 55 PB. Apparently disk storage is being managed by 192 Dell I/O servers, which, in aggregate, provide 240 GB/second of bandwidth to the storage arrays.
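
Dividing that aggregate bandwidth across the I/O servers gives a sense of the per-box load:

    # Per-server share of the quoted aggregate storage bandwidth.
    io_servers    = 192
    aggregate_gbs = 240.0       # GB/s to the storage arrays
    print(f"{aggregate_gbs / io_servers:.2f} GB/s per Dell I/O server")
    # -> 1.25 GB/s each
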
Titan’s big claim to fame is that it’s the first GPU-accelerated supercomputer in the world to be scaled into the multi-petaflop realm. IBM’s Blue Gene/Q and Fujitsu’s K computer — both powered by custom CPU SoCs — are the only other platforms that have broken the 10-petaflop mark. Titan is also the first GPU-equipped machine of this scale in the US. As such, it will provide a test platform for a lot of big science codes that have yet to take advantage of accelerators at scale.

Acceptance testing is already underway at Oak Ridge, and users are in the process of porting a variety of DOE-type science applications to the CPU-GPU supercomputer and testing them. These include codes in climate modeling (CAM-SE), biofuels (LAMMPS), astrophysics (NRDF), combustion (S3D), material science (WL-LSMS), and nuclear energy (Denovo).

According to ORNL’s Markus Eisenbach, his team has already been able to run the WL-LSMS code above the 10-petaflop mark on Titan. He says that level of performance will allow them to study the behavior of materials at temperatures above the point where they lose their magnetic properties.

At the National Center for Atmospheric Research (NCAR), researchers are already using the new system to speed up atmospheric modeling codes. With Titan, Warren Washington’s NCAR team has been able to execute high-resolution models representing one to five years of simulation in just one computing day. On Jaguar, a computing day yielded only three months’ worth of simulation.
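
Those figures imply a substantial throughput gain over Jaguar for these models:

    # Implied speedup for the NCAR runs: 1-5 simulated years per
    # computing day on Titan versus three months (0.25 years) on Jaguar.
    jaguar_years_per_day = 0.25
    for titan_years_per_day in (1, 5):
        print(f"{titan_years_per_day / jaguar_years_per_day:.0f}x")
    # -> a 4x to 20x improvement, depending on the model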

ORNL’s Tom Evans is using Titan cycles to model nuclear energy production. The simulations aim to improve the safety and performance of reactors while reducing the amount of waste they produce. According to Evans, his team has been able to run 3D simulations of a nuclear reactor core in hours, rather than weeks.

The machine will also figure prominently in the upcoming INCITE awards. INCITE, which stands for Innovative and Novel Computational Impact on Theory and Experiment, is the DOE’s way of sharing the FLOPS on the agency’s fastest machines with scientists and industrial users. The program only accepts proposals from end users with “grand challenge”-type problems worthy of top-tier supercomputing.

With its 20-plus-petaflop credentials, Titan will be far and away the most powerful system available for open science. (Sequoia belongs to the NNSA and spends most of its cycles on classified nuclear weapons codes.) The DOE has received a record number of proposals for the machine, representing three times the capacity Titan will be able to donate to the INCITE program.

Undoubtedly some of that pent-up demand is a result of the delayed entry of the US into GPU-accelerated supers. Over the past three years, American scientists and engineers have watched heterogeneous petascale systems being built overseas. China (with Tianhe-1A, Nebulae, and Mole 8.5), Japan (with TSUBAME 2.0), and even Russia (with Lomonosov) all managed to deploy ahead of the US.

Some of that is due to the slow uptake of GPU computing by IBM and Cray, the US government’s two largest providers of top-tier HPC machinery. IBM offers GPU-accelerated gear in its x86 cluster offerings, but its flagship supercomputers are based on its in-house Blue Gene and Power franchises. Cray waited until May 2011 to deliver its first GPU-CPU platform, the XK6 (with Fermi Tesla GPUs), preferring to skip the earlier renditions of NVIDIA technology.

While Titan could be viewed as just another big supercomputer, there is a lot on the line here, especially for NVIDIA. If the system can be a productive petascale machine, it will go a long way toward establishing the company’s GPU computing architecture as the go-to accelerator technology for the path to exascale. The development that makes this less than assured is the imminent emergence of Intel’s Xeon Phi manycore coprocessor, and to a lesser extent, AMD’s future GPU and APU platforms.

Intel will get its initial chance to prove Xeon Phi’s worth as an HPC accelerator with Stampede, a 10 petaflop supercomputer that will be installed at the Texas Advanced Computing Center (TACC) before the end of the year. That Dell cluster will have 8 of those 10 petaflops delivered by Xeon Phi silicon and, as such, the system will represent the first big test case for Intel’s version of accelerated supercomputing.

It also represents the first credible challenge to NVIDIA on this front since the GPU-maker got into the HPC business in 2006. Whichever company is more successful at delivering HPC on a chip, the big winners will be the users themselves, who will soon have two vendors offering accelerator cards with over a teraflop of double precision performance. At a few thousand dollars per teraflop, supercomputing has never been so accessible.
