GPUs Will Morph ORNL’s Jaguar Into 20-Petaflop Titan

By Michael Feldman

October 11, 2011

Jaguar’s days as a CPU-only supercomputer are numbered. Over the next year, the 2.3 petaflop machine at the Department of Energy’s Oak Ridge National Lab (ORNL) will be upgraded by Cray with the new NVIDIA “Kepler” GPUs, producing a system with about 10 times Jaguar’s peak performance. The transformed supercomputer will be renamed Titan and should deliver in the neighborhood of 20 peak petaflops sometime in late 2012.

Jaguar, which has been upgraded numerous times since it was first deployed in 2009, currently sits at number three on the TOP500 list with a Linpack reading of 1.76 petaflops. Titan will certainly keep the machine in the top 5, even as systems with tens of petaflops start making their way into the big labs over the next couple of years.

Titan will also represent the US entry into the top echelons of GPU-accelerated supercomputing. As it stands today, three of the top five systems are GPU-accelerated: Tianhe-1A and Nebulae in China, and TSUBAME 2.0 in Japan. The current top GPU machine in the US is Edge, a 240-teraflop Appro cluster at Lawrence Livermore National Laboratory. Even Russia, Germany, and Italy have larger GPU-accelerated systems.

According to Steve Scott, the newly minted chief technology officer for NVIDIA’s Tesla Business Unit, the fact that ORNL is making such a significant commitment to GPU computing is a big endorsement of the architecture. It’s no secret that HPC is now constrained by energy use: Moore’s Law has continued to shrink transistor geometries, but the power wall has become the defining limitation on performance increases. “It’s all about power efficiency,” Scott told HPCwire, “which is why we think the GPU story is so compelling.”

While GPUs are not truly general-purpose processors, their ability to perform data-parallel computation in a much more energy-efficient manner than CPUs has vaulted them to prominence in the HPC realm. “It’s hard to overstate the importance of the sea change that has happened in high performance computing,” notes Scott. “This wonderful ride we’ve been on for the past 30 years — every time we halve the size of a transistor, the voltage drops, power stays the same, and performance improves exponentially — has been fantastic, but it’s done.”

Although the US, in general, has been a bit late in embracing GPU technology for HPC, the Titan supercomputer has been on the drawing board at Oak Ridge for at least a couple of years. But the technology necessary to build that machine is only now catching up with its requirements.

Beginning this fall, most of Jaguar’s 18,688 existing XT5 nodes will be retrofitted with Cray’s new XK6 blades, which the company unveiled in May. The immediate result is that the current dual-socket, six-core AMD Opteron nodes will be swapped out for single-socket nodes with 16-core “Interlagos” CPUs, and the interconnect will be upgraded from SeaStar2 to Gemini. Each XK6 blade encompasses four compute nodes, with one Opteron apiece and the ability to connect each of those CPUs to a Tesla GPU on a PCIe daughter card.
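To make that layout concrete, here is a schematic sketch in C-style code of the node and blade organization just described. The struct names and fields are purely illustrative, not anything from Cray’s actual software; the blade count simply follows from dividing 18,688 nodes by four nodes per blade.

```cpp
#include <cstdio>

// Illustrative model of the XK6 organization described above.
struct XK6Node {
    int  opteron_cores;   // one 16-core "Interlagos" socket per node
    bool has_tesla_gpu;   // optional Tesla on a PCIe daughter card
};

struct XK6Blade {
    XK6Node nodes[4];     // four compute nodes per blade
};

int main() {
    const int total_nodes = 18688;
    printf("XK6 blades required: %d\n", total_nodes / 4);  // 4,672 blades
    return 0;
}
```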

Initially, 960 of those XK6 nodes will be outfitted with the Fermi-class Tesla M2090 GPUs, with the other 17,000-odd remaining CPU-only for the time being. This first phase of Titan is expected to be completed before the end of the year. Then, in the second half of 2012, all 18,688 nodes, including the original Fermi-equipped blades, will be populated with NVIDIA’s next-generation Kepler Teslas.

NVIDIA has not provided detailed specs for the Kepler GPUs, but according to Scott, their performance per watt will be more than double that of the Fermi parts while fitting into the same power envelope. Given that the current Fermi Tesla cards (GPU plus memory) deliver 665 gigaflops, the new Kepler GPU should yield at least 1,330 gigaflops.

For the time being, Oak Ridge is promising only 10 to 20 petaflops for the final system, although the peak performance could go considerably higher. According to Buddy Bland, project director at ORNL’s Leadership Computing Facility, they currently don’t have the money in hand to upgrade all 18K nodes. The actual scope of the Titan build-out will “depend on the budget available.”

Theoretically, though, if all existing nodes are populated with the new Kepler parts, the system should deliver at least 24.8 petaflops of GPU power. An equal number of Interlagos CPUs should contribute more than two additional petaflops on top of that. By the time all the dust has settled, Titan could be within spitting distance of 30 petaflops.
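As a rough sanity check, the arithmetic behind those figures can be reproduced in a few lines of host-side C++. The per-node Interlagos number below is an assumption — 16 cores at roughly 2.2 GHz with four flops per clock — chosen to be consistent with the “more than two additional petaflops” estimate above; the Kepler figure simply doubles Fermi’s 665 gigaflops.

```cpp
#include <cstdio>

int main() {
    const double nodes         = 18688;
    const double kepler_gf     = 2 * 665.0;  // at least double Fermi's 665 GF
    const double interlagos_gf = 140.8;      // assumed: 16 cores x 2.2 GHz x 4 flops/clock

    const double gpu_pf = nodes * kepler_gf / 1.0e6;      // gigaflops -> petaflops
    const double cpu_pf = nodes * interlagos_gf / 1.0e6;

    // Prints roughly: GPU 24.9 PF, CPU 2.6 PF, total 27.5 PF
    printf("GPU: %.1f PF, CPU: %.1f PF, total: %.1f PF\n",
           gpu_pf, cpu_pf, gpu_pf + cpu_pf);
    return 0;
}
```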

The amount of power the new system will draw is also unknown, but it will certainly have a better performance-per-watt ratio than Jaguar, which sucks up nearly 7 MW for its 2.33 peak petaflops. By contrast, Japan’s Fermi-accelerated TSUBAME 2.0 system uses just 1.4 MW for its 2.29 petaflops. Since ORNL’s new machine will use the more efficient Kepler GPUs, its efficiency should be significantly better. “We view Titan as the leading indicator of where people are going as they look to solve the energy challenges for the next five to ten years,” says Scott.
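The efficiency gap is easy to quantify from the numbers just cited. The snippet below converts each system’s peak petaflops and power draw into megaflops per watt, using only the figures in this article:

```cpp
#include <cstdio>

int main() {
    // 1 petaflop = 1.0e9 megaflops; power in watts
    printf("Jaguar:  %.0f MF/W\n", 2.33e9 / 7.0e6);  // 2.33 PF at ~7 MW  -> ~333 MF/W
    printf("TSUBAME: %.0f MF/W\n", 2.29e9 / 1.4e6);  // 2.29 PF at 1.4 MW -> ~1636 MF/W
    return 0;
}
```

By that yardstick, TSUBAME 2.0 is nearly five times as power-efficient as the CPU-only Jaguar — the kind of gap Titan’s Kepler parts are meant to close, and then some.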

How all those peak flops turn into actual application performance remains to be seen. Extracting high levels of sustained computation from these multi-petaflop machines is notoriously difficult, with only a handful of codes able to attain more than a petaflop of performance. Adding GPUs to the mix has made that harder, at least in the short term.

In this regard, Oak Ridge, one of the premier computational labs on the planet, has a good chance of pushing the envelope. Using smaller GPU clusters, computational scientists at ORNL and elsewhere have been busy porting six flagship science codes to CUDA, including Wang-Landau/LSMS for materials science; S3D for engine combustion; PFLOTRAN for underground CO2 sequestration and contaminant containment; Denovo, a radiation transport code for nuclear engineering; CAM-SE for climate change modeling; and LAMMPS, a molecular dynamics simulation code. Scott says ORNL, Cray and NVIDIA have been working together to adapt these science codes for heterogeneous computing so that they are ready to go when Titan boots up.
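For a sense of what such a port involves at the smallest scale, here is a minimal CUDA sketch of the basic offload pattern: stage arrays across the PCIe link to the Tesla card, run a data-parallel kernel, and copy the results back. It is a generic illustration (a simple saxpy loop), not code from any of the six applications named above.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread updates one array element: y[i] = a*x[i] + y[i].
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *x = (float *)malloc(bytes);
    float *y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Stage the data across PCIe to the GPU's memory.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element, 256 threads per block.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    // Copy the result back to host memory.
    cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %.1f (expect 4.0)\n", y[0]);

    cudaFree(dx); cudaFree(dy);
    free(x); free(y);
    return 0;
}
```

In a real port, the hard part is restructuring an application’s data layout and communication so that enough of this kind of data-parallel work stays resident on the GPU to amortize the PCIe transfers.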

This first phase of Titan is expected to generate more than $60 million in revenue for Cray, which could end up in the company’s hands before the end of the year. Over the lifetime of the contract, Cray is looking to collect more than $97 million, although if upgrade options are exercised, that number could go considerably higher.
