Revelations on Roadrunner’s Retirement

By Nicole Hemsoth

April 4, 2013

Earlier this week we reported on the decommissioning of the Roadrunner supercomputer at Los Alamos National Laboratory, which was shuttered following a stint of fame as the first system to break the petaflop barrier back in 2008.

According to Paul Henning of the computational physics division at Los Alamos, Roadrunner's checkout made big news, but the end of the line for the super was well planned and, in fact, right on schedule.

The system served its purpose, chewing through a bevy of mostly classified and some key civilian codes. In the end, though, the combination of a finite contract, an extinct chip, the cost of crumpling up code to fit IBM's Cell, and the promise of swifter, more efficient technologies was what sealed the planned, clipped lifecycle of the petaflop pioneer.

“Rather than think of these machines as physical entities, we think of them as projects,” he explained. “At the beginning of the Roadrunner acquisition we laid out a project lifetime for this—and that lifetime considered a number of things, including the cost of maintenance, power, vendor and licensing contracts, and how we would upgrade the system.”

Henning noted that the support contract with IBM was up, and since IBM no longer even produces the Cell, the core of the machine's architecture, scrounging up spare parts would have presented a rather tricky issue. The retirement party had been planned years ago anyway, but there are some meaty lessons to glean from the scrap metal.
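About that code-crumpling: each of the Cell's SPE cores had just 256 KB of software-managed local store and no hardware cache, so applications had to be restructured to stage data in and out in explicitly sized tiles. Here is a minimal sketch of the idea in plain C, with memcpy standing in for the Cell SDK's DMA calls and the buffer names and sizes purely illustrative, not actual Roadrunner code:

```c
#include <stdio.h>
#include <string.h>

/* Each Cell SPE had a 256 KB local store with no hardware cache,
   so working sets were staged in explicitly sized tiles.  Here a
   small static buffer stands in for local store, and memcpy stands
   in for the SDK's asynchronous DMA transfers. */
#define TILE_ELEMS 1024          /* illustrative tile size */
#define N          (1 << 16)     /* total problem size     */

static double x[N], y[N];
static double tile_x[TILE_ELEMS], tile_y[TILE_ELEMS]; /* "local store" */

int main(void) {
    for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

    const double a = 3.0;
    for (int base = 0; base < N; base += TILE_ELEMS) {
        /* "DMA in": stage one tile of each operand into local store */
        memcpy(tile_x, &x[base], sizeof tile_x);
        memcpy(tile_y, &y[base], sizeof tile_y);

        /* compute on the staged tile (a simple daxpy here) */
        for (int i = 0; i < TILE_ELEMS; i++)
            tile_y[i] += a * tile_x[i];

        /* "DMA out": write the result tile back to main memory */
        memcpy(&y[base], tile_y, sizeof tile_y);
    }

    printf("y[0] = %.1f (expect 5.0)\n", y[0]);
    return 0;
}
```

Multiply that kind of restructuring across every hot loop in a large physics code and the retooling bill comes into focus.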

When any system at the lab is shuttered, an autopsy is performed, looking at everything from the integrity of the memory and OS down to the nuts-and-bolts physical properties. A key finding of the post-mortem revolves around the condition of the boxes after five years of heat, wear and tear; it's here where the materials analysis begins. That work has given the renowned materials science team at the lab an insider's view of the real stress on systems after high-yield, high-heat production, and from what we read between the lines, these boxes are maxed out.

Then again, there were never any plans to build the system out to new glory à la the Jaguar-to-Titan transformation. Even if the hardware weren't on its last leg, they'd have had to retrofit the entire system, since IBM would return a 404 on their build-out needs; it makes sense that they'd want to rip... and, of course, replace.

For now, Los Alamos has sent its applications on a redirect course to the smaller, slightly more efficient and roughly performance-equivalent Cielo system, which is housed in the same space as the now-defunct Roadrunner. Henning said the developer-friendly architecture saves time and money on code retooling, ostensibly while the lab works out what new system to fit into its environment.

And here is where things get interesting, because we can speculate on what Los Alamos might dream up to fill the 6,000-square-foot gap left behind. That's a lot of empty floor for any upstart system to settle into; Titan's sprawl comes in just under 5,000 square feet, and plenty of flops have fit in less than that.

There are a few hints at what might sit on the charred spot Roadrunner once occupied post-ripdown. It's worth noting that a quick perusal of the NNSA's procurement plans for the next year includes something on the order of a $50 million to (yes) one billion dollar project, which is currently accepting proposals. It's hard to imagine what else would be filed under tech procurements to that monetary tune. If any of you know anything about this, that comments section down there looks awfully empty... (hint, hint).

All speculation aside, it looks like we'll find out soon enough, probably later this year, just what will turn off that vacancy sign at the lab. Until then, the Roadrunner story serves as a reminder of how quickly the tides of this kind of tech shift, leaving superhero machines drifting into forgotten waters.

When national labs and large HPC sites sit down to spill ink on new system designs, they're hedging their bets on what future technologies will look like. Unless folks are on a TACC/Stampede-like course from ground to super in a tick over a year, it's rare to know which innovations on the architecture, efficiency or acceleration front will yield big price-performance dividends. So when Los Alamos set about architecting Roadrunner around the unusual Cell approach, it was placing a bet on the future of that technology.

Since that development cycle, the rise of GPU acceleration, the introduction of the promising Xeon Phi, and some efficiency tweaks on the software side have rendered some of what made Roadrunner shine rather dated. It's now possible to get more compute power in a smaller power envelope, and with a lot less in the way of programming hassle as well, notes Henning. Still, for the NNSA and Los Alamos, whatever clandestine code they cooked around the Cell, it must have been worth the effort on the retooling side.
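To put rough numbers on that smaller power envelope: Roadrunner delivered about 1.04 Linpack petaflops at roughly 2.35 MW, while the GPU-accelerated Titan clocked in around 17.6 petaflops at roughly 8.2 MW on the late-2012 Top500 list. A quick back-of-the-envelope comparison, with those approximate publicly reported figures hard-coded (the comparison is ours, not Henning's):

```c
#include <stdio.h>

/* Back-of-the-envelope efficiency comparison using approximate,
   publicly reported Linpack and power figures. */
int main(void) {
    const double rr_pflops = 1.04,  rr_mw = 2.35;  /* Roadrunner, ~2008 */
    const double ti_pflops = 17.59, ti_mw = 8.21;  /* Titan, late 2012  */

    /* petaflops per megawatt (== gigaflops per kilowatt) */
    double rr_eff = rr_pflops / rr_mw;
    double ti_eff = ti_pflops / ti_mw;

    printf("Roadrunner:  %.2f PF/MW\n", rr_eff);            /* ~0.44 */
    printf("Titan:       %.2f PF/MW\n", ti_eff);            /* ~2.14 */
    printf("Improvement: %.1fx in about four years\n", ti_eff / rr_eff);
    return 0;
}
```

Call it roughly a five-fold jump in flops per watt in four-ish years, which is the kind of curve that makes hanging on to an end-of-contract machine hard to justify.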

Although the story of Roadrunner's forced retirement found its way into a number of mainstream tech media stories over the course of the week, this is a pretty standard order of operations for large HPC centers, especially national labs. Henning stressed that the shutdown of the once-famous system is no different from the series of other supers shuttered in succession at the center: build a plan for acquisition, see a machine run its course, learn from the post-mortem, and shuttle it off in parts to make way for something fresh.

Related Articles

Requiem for Roadrunner