Titan, formerly the fastest supercomputer in the U.S., was decommissioned on August 1. NVIDIA recently saluted the machine and highlighted its achievements in a blog post.
After seven years of groundbreaking service at Oak Ridge National Laboratory in Tennessee, the former fastest supercomputer in the U.S. is being decommissioned on August 1.
First coming online in 2012, Titan achieved a peak performance of 27 petaflops, made possible by its 18,688 NVIDIA GPUs and NVIDIA’s CUDA software platform. It was supplanted last year by the Summit supercomputer, also located at ORNL, which provides 10x Titan’s simulation performance.
Before Summit’s arrival, Titan’s speed and energy efficiency made it a “time machine,” according to Buddy Bland, project director at ORNL.
Below is a brief recap of its legacy of accelerating pioneering work in AI, simulation and modeling.
Cleaning up waste is no fun. When it comes to the radioactive debris left over from the Manhattan Project, it’s also dangerous and nearly impossible. Separating radioactive elements so they can be safely stored was a problem with too many unknowns to tackle experimentally, until Titan emerged.
Scientists at ORNL used the supercomputer to simulate the effects of different decontamination methods on actinides — highly dangerous and radioactive elements such as uranium and plutonium — without wasting time and money on failed ventures.
The U.S. Department of Energy’s BioEnergy Science Center also took advantage of Titan to perform one of the most complex biomolecular simulations of ethanol. The result is a deeper understanding of lignin’s selective binding processes, which could eventually lead to a boost in biofuel yields.
And in 2013, Titan’s simulation capabilities powered the work of four Gordon Bell Prize finalists. From simulating the behavior of 18,000 proteins for the first time to modeling the evolution of the universe, Titan was up for the challenge.
Using Titan’s NVIDIA accelerators, General Electric scientists modeled water molecules on a variety of materials in an effort to build wind turbines that resist the formation of ice. This would render heaters — which consume a portion of the energy produced by the turbines — obsolete. Success could mean more global electricity derived from wind power.
Rather than requiring researchers to design a neural network by hand, MENNDL — the Multi-node Evolutionary Neural Networks for Deep Learning — generates the network itself. Developed by the ORNL team in 2017, MENNDL reduces the time needed to develop neural networks for complex data sets from months to weeks. This is made possible by the acceleration provided by Titan’s 18,688 NVIDIA GPUs.
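To make the evolutionary idea concrete, here is a minimal, purely illustrative sketch of the kind of search MENNDL performs: candidate networks are encoded as simple "genomes" (here, just a list of layer widths), scored by a fitness function, and the fittest are mutated to seed the next generation. All names and the toy fitness function are hypothetical; the real MENNDL evaluates each candidate by actually training it, spreading the work across Titan's GPUs.

```python
import random

random.seed(42)

WIDTHS = [16, 32, 64, 128]  # allowed layer widths in this toy encoding

def random_genome():
    # A genome: 1 to 4 layers, each with a width drawn from WIDTHS.
    return [random.choice(WIDTHS) for _ in range(random.randint(1, 4))]

def fitness(genome):
    # Stand-in for validation accuracy: prefer ~3 layers totalling ~192 units.
    # (MENNDL would instead train the candidate network and measure accuracy.)
    return -abs(sum(genome) - 192) - 10 * abs(len(genome) - 3)

def mutate(genome):
    # Randomly grow the network or change one layer's width.
    child = list(genome)
    if random.random() < 0.5 and len(child) < 4:
        child.append(random.choice(WIDTHS))
    else:
        i = random.randrange(len(child))
        child[i] = random.choice(WIDTHS)
    return child

def evolve(generations=30, pop_size=20):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 4]  # keep the fittest quarter
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("best architecture:", best)
```

Because the fittest candidates survive each generation unchanged, the best fitness never decreases; the scale MENNDL adds is evaluating thousands of such candidates in parallel, one per GPU.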
Source: Geetika Gupta, NVIDIA