Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

November 15, 2013

Titan Runs 15 Petaflop Superconductor Simulation

Tiffany Trader

A research feature at the Oak Ridge Leadership Computing website explores how sheer computing speed and algorithm improvements have put an international team of scientists in the running for this year’s Gordon Bell Prize.

The team from ETH Zurich in Switzerland and Oak Ridge National Laboratory (ORNL) has performed simulations of high-temperature superconductors topping 15 petaflops on ORNL's Titan supercomputer. Their work employed an algorithm that overcomes two hurdles to achieving realistic superconductor modeling. The application, called DCA++ (DCA stands for "dynamical cluster approximation"), earned its development team the Gordon Bell Prize in 2008.

The promise of superconducting materials is that they conduct electricity without resistance, and therefore without energy loss. For this reason, they are appealing for energy applications such as power transmission and also make powerful magnets that can be employed in maglev trains and MRI scanners.

But getting these materials cold enough to exhibit superconductivity is both labor- and cost-intensive. The holy grail of superconductivity research is discovering or creating natural superconductors that don't need to be cooled. Such materials would transform power transmission and the energy sector.

To explore the potential for such materials, the researchers began by making some improvements to DCA++. They used the new method, called DCA+, on the full 18,688-node Titan system, taking full advantage of the system’s NVIDIA GPUs, to reach a remarkable 15.4 petaflops.

Titan's hybrid architecture proved to be energy-efficient as well. A key simulation consumed 4,300 kilowatt-hours, while the same simulation on a comparable CPU-only system, the Cray XE6, would have required nearly eight times as much energy, or 33,580 kilowatt-hours, according to the article.
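The "nearly eight times" figure follows directly from the two numbers reported in the article; a quick check:

```python
# Energy figures as reported in the article; the ratio is computed here
# simply to verify the "nearly eight times" claim.
titan_kwh = 4_300    # key simulation on Titan (hybrid CPU+GPU)
xe6_kwh = 33_580     # estimate for a comparable CPU-only Cray XE6
ratio = xe6_kwh / titan_kwh
print(f"CPU-only run would use about {ratio:.1f}x more energy")  # ~7.8x
```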

The DCA+ algorithm also helped resolve two common problems that arise with dynamic cluster quantum Monte Carlo simulations: the fermionic sign problem and the cluster shape dependency.
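To see why the sign problem is so limiting, consider a toy sketch (an illustration only, not the DCA+ code): in fermionic quantum Monte Carlo, configuration weights can turn negative, so an observable is estimated as a ratio of sign-weighted averages, ⟨O⟩ = ⟨O·sign⟩ / ⟨sign⟩, and the estimate degrades badly as the average sign approaches zero, which happens at larger system sizes and lower temperatures:

```python
import random

random.seed(42)

def estimate(n_samples, negative_fraction):
    """Toy sampler: each sample carries an observable value and a sign.
    negative_fraction is a stand-in for how severe the sign problem is."""
    num = 0.0   # accumulates O * sign
    den = 0.0   # accumulates sign
    for _ in range(n_samples):
        obs = random.gauss(1.0, 0.5)  # toy observable value
        sign = -1.0 if random.random() < negative_fraction else 1.0
        num += obs * sign
        den += sign
    avg_sign = den / n_samples
    return num / den, avg_sign

# Mild sign problem: average sign stays near 1 and the estimate is stable.
val_mild, sign_mild = estimate(100_000, negative_fraction=0.05)

# Severe sign problem: positive and negative weights nearly cancel, the
# denominator <sign> becomes tiny, and the statistical error blows up.
val_severe, sign_severe = estimate(100_000, negative_fraction=0.49)

print(f"mild:   <sign> = {sign_mild:.3f}")    # close to 0.9
print(f"severe: <sign> = {sign_severe:.3f}")  # close to 0.0
```

Anything that keeps the average sign larger, as DCA+ does, directly extends the system sizes and temperatures a simulation can reach.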

Team member Thomas Schulthess of ETH Zurich and ORNL explained that the DCA+ algorithm is nearly 2 billion times faster than its DCA predecessor. By alleviating the sign problem to a large degree, it makes way for more useful simulations that can include more atoms at lower temperatures, which matters because superconductivity occurs in very cold environs.

Improving the algorithm also took care of cluster shape dependence. “Before you would get vastly different results for the superconducting transition temperature, but now you get pretty much the same,” says project partner Thomas Maier of ORNL.

The Gordon Bell Prize, which is awarded each year for outstanding achievement in high-performance computing, will be presented on November 21 at SC13 in Denver.