Titan Sets High Water Mark for GPU Supercomputing

By Michael Feldman

October 29, 2012

Oak Ridge National Laboratory (ORNL) has officially launched its much-anticipated Titan supercomputer, a Cray XK7 machine that will challenge IBM’s Sequoia for petaflop supremacy. Titan gives ORNL a system 10 times as powerful as Jaguar, the lab’s previous top machine, upon which the new system is based. With a reported peak of 27 petaflops, Titan now represents the most powerful number-cruncher in the world.

The 10-fold performance leap from Jaguar to Titan is courtesy of NVIDIA’s brand new K20 processors – the Kepler GPU that will be formally released sometime before the end of the year. Although the Titan upgrade also includes AMD’s latest 16-core Opteron CPUs, the lion’s share of the FLOPS will be derived from the NVIDIA chips.

In the conversion from Jaguar, a Cray XT5, ORNL essentially gutted the existing 200 cabinets and retrofitted them with nearly ten thousand XK7 blades. Each blade houses two nodes, each of which holds a 16-core Opteron 6274 CPU and a Tesla K20 GPU module. The x86 Opteron chips run at a respectable 2.2 GHz, while the K20 hums along at a more leisurely 732 MHz. But thanks to the highly parallel nature of the GPU architecture, the K20 delivers around 10 times the FLOPS of its CPU companion. (Using the 27 peak PF value for Titan, a back-of-the-envelope calculation puts the new K20 at about 1.2-1.3 double precision teraflops.)
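
That back-of-the-envelope figure is easy to reproduce. Below is a minimal sketch of the arithmetic, assuming a node count of roughly 18,688 (the article’s “nearly ten thousand” two-node blades, less a few service slots) and the usual peak formula for the Opteron 6274 – neither number is stated in the article itself.

```python
# Hedged back-of-the-envelope estimate of K20 double-precision peak.
# Assumptions (not stated in the article): ~18,688 compute nodes and an
# Opteron 6274 peak of 16 cores x 2.2 GHz x 4 DP FLOPs per cycle.

TITAN_PEAK_FLOPS = 27e15      # reported system peak: 27 petaflops
NODES = 18_688                # assumed: ~10,000 blades x 2 nodes each

cpu_peak = 16 * 2.2e9 * 4     # ~140.8 gigaflops per Opteron 6274

per_node = TITAN_PEAK_FLOPS / NODES   # ~1.44 teraflops per node
k20_peak = per_node - cpu_peak        # ~1.30 teraflops left for the GPU

print(f"Per node: {per_node / 1e12:.2f} TF; implied K20 peak: {k20_peak / 1e12:.2f} TF")
```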

Thanks to the energy efficiency of the K20, which NVIDIA claims is three times as efficient as its previous-generation Fermi GPU, Titan draws just 12.7 MW to power the whole system. That’s especially impressive when you consider that the x86-only Jaguar required 7 megawatts for a mere tenth of the FLOPS.

It would appear, though, that IBM’s Blue Gene/Q may retain the crown for energy-efficient supercomputing. The Sequoia system at Lawrence Livermore Lab draws just 7.9 MW to power its 20 peak petaflops. However, it’s a bit of an apples-and-oranges comparison. That 7.9 MW is actually the power draw for Sequoia’s Linpack run, which topped out at 16 petaflops. Since we don’t have the Linpack results for Titan just yet, it’s hard to tell if the GPU super will be able to come out ahead of the Blue Gene/Q platform.
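
To see why it’s apples and oranges, here is a quick sketch of the FLOPS-per-watt arithmetic using only the figures quoted above; Sequoia gets two entries because its 7.9 MW was measured against both its 20-petaflop peak and its 16-petaflop Linpack result.

```python
# Rough FLOPS-per-watt comparison from the figures quoted in the text.
titan_peak      = 27e15 / 12.7e6   # ~2.1 GF/W (peak)
sequoia_peak    = 20e15 / 7.9e6    # ~2.5 GF/W (peak)
sequoia_linpack = 16e15 / 7.9e6    # ~2.0 GF/W (delivered Linpack)

for label, flops_per_watt in [("Titan (peak)", titan_peak),
                              ("Sequoia (peak)", sequoia_peak),
                              ("Sequoia (Linpack)", sequoia_linpack)]:
    print(f"{label}: {flops_per_watt / 1e9:.2f} GF/W")
```

Until Titan posts a Linpack number of its own, only the peak-based figures line up, and those say nothing about delivered efficiency.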

For a multi-petaflopper, Titan is a little shy on memory capacity, claiming just 710 terabytes – 598 TB on the CPU side and 112 TB for the GPUs. The FLOPS-similar Sequoia has more than twice that – nearly 1.6 petabytes. Back in the day, the goal for balanced supercomputing was at least one byte of memory for every FLOP, but that era is long gone.

Titan provides around 1/40 of a byte per FLOP, and from the GPU’s point of view, most of that memory sits on the wrong side of the PCIe bus – that is, next to the CPU. Welcome to the new normal.
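
The ratio falls straight out of the capacities quoted above; a minimal sketch:

```python
# Bytes per peak FLOP, computed from the memory and FLOPS figures above.
titan_ratio   = 710e12 / 27e15   # ~0.026 bytes/FLOP, roughly 1/40
sequoia_ratio = 1.6e15 / 20e15   # ~0.080 bytes/FLOP, roughly 1/12

print(f"Titan:   1/{1 / titan_ratio:.0f} byte per FLOP")
print(f"Sequoia: 1/{1 / sequoia_ratio:.0f} byte per FLOP")
```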

Titan is more generous with disk space, 13.6 PB in all, although that’s still a good deal less than its Sequoia cousin’s 55 PB. Disk storage is apparently managed by 192 Dell I/O servers, which, in aggregate, provide 240 GB/second of bandwidth to the storage arrays.
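
For what it’s worth, the aggregate figure implies a per-server number, sketched here from the two values above:

```python
# Implied per-server bandwidth from the aggregate I/O figures above.
servers = 192
aggregate_bw_gb_per_s = 240
print(f"~{aggregate_bw_gb_per_s / servers:.2f} GB/s per I/O server")  # ~1.25 GB/s
```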

Titan’s big claim to fame is that it’s the first GPU-accelerated supercomputer in the world that has been scaled into the multi-petaflop realm. IBM’s Blue Gene/Q and Fujitsu’s K computer – both powered by custom CPU SoCs – are the only other platforms that have broken the 10-petaflop mark. Titan is also the first GPU-equipped machine of this size in the US. As such, it will provide a test platform for a lot of big science codes that have yet to take advantage of accelerators at scale.

Acceptance testing is already underway at Oak Ridge, and users are busy porting a variety of DOE science applications to the CPU-GPU supercomputer and testing them. These include codes in climate modeling (CAM-SE), biofuels (LAMMPS), astrophysics (NRDF), combustion (S3D), materials science (WL-LSMS), and nuclear energy (Denovo).

According to Markus Eisenbach, his team has already been able to run the WL-LSMS code above the 10-petaflop mark on Titan. He says that level of performance will allow them to study the behavior of materials at temperatures above the point where they lose their magnetic properties.

At the National Center for Atmospheric Research (NCAR), researchers are already using the new system to speed up atmospheric modeling codes. With Titan, Warren Washington’s NCAR team has been able to execute high-resolution models representing one to five years of simulated climate in just one computing day. On Jaguar, a computing day yielded only three months’ worth of simulations.

ORNL’s Tom Evans is using Titan cycles to model nuclear energy production. The simulations aim to improve the safety and performance of reactors while reducing the amount of waste they generate. According to Evans, his team has been able to run 3D simulations of a nuclear reactor core in hours rather than weeks.

The machine will figure prominently in the upcoming INCITE awards. INCITE, which stands for Innovative and Novel Computational Impact on Theory and Experiment, is the DOE’s way of sharing the FLOPS on the agency’s fastest machines with scientists and industrial users. The program only accepts proposals from end users with “grand challenge”-type problems worthy of top-tier supercomputing.

With its 20-plus-petaflop credentials, Titan will be far and away the most powerful system available for open science. (Sequoia belongs to the NNSA and spends most of its cycles on classified nuclear weapons codes.) The DOE has received a record number of proposals for the machine, representing three times the cycles Titan will be able to donate to the INCITE program.

Undoubtedly some of that pent-up demand is a result of the delayed entry of the US into GPU-accelerated supers. Over the past three years, American scientists and engineers have watched heterogeneous petascale systems being built overseas. China (with Tianhe-1A, Nebulae, and Mole-8.5), Japan (with TSUBAME 2.0), and even Russia (with Lomonosov) all managed to deploy such machines ahead of the US.

Some of that is due to the slow uptake of GPU computing by IBM and Cray, the US government’s two largest providers of top-tier HPC machinery. IBM offers GPU-accelerated gear in its x86 cluster lineup, but its flagship supercomputers are based on its in-house Blue Gene and Power franchises. Cray waited until May 2011 to deliver its first GPU-CPU platform, the XK6 (with Fermi Tesla GPUs), preferring to skip the earlier renditions of NVIDIA technology.

While Titan could be viewed as just another big supercomputer, there is a lot on the line here, especially for NVIDIA. If the system proves to be a productive petascale machine, it will go a long way toward establishing the company’s GPU computing architecture as the go-to accelerator technology on the path to exascale. The development that makes this less than assured is the imminent emergence of Intel’s Xeon Phi manycore coprocessor and, to a lesser extent, AMD’s future GPU and APU platforms.

Intel will get its initial chance to prove Xeon Phi’s worth as an HPC accelerator with Stampede, a 10-petaflop supercomputer that will be installed at the Texas Advanced Computing Center (TACC) before the end of the year. That Dell cluster will have eight of those 10 petaflops delivered by Xeon Phi silicon and, as such, will represent the first big test case for Intel’s version of accelerated supercomputing.

It also represents the first credible challenge to NVIDIA on this front since the GPU-maker got into the HPC business in 2006. Whichever company is more successful at delivering HPC on a chip, the big winners will be the users themselves, who will soon have two vendors offering accelerator cards with over a teraflop of double precision performance. At a few thousand dollars per teraflop, supercomputing has never been so accessible.
