A new blog post from NVIDIA’s George Millington looks at how Russia is counting on the power of NVIDIA GPUs to advance science and research and bolster national competitiveness. The strategy is paying off, writes Millington.
Russia recently released its twice-yearly list of the country’s 50 most powerful systems, and for the fifth time in a row, Moscow State University’s “Lomonosov” supercomputer – powered by NVIDIA GPUs – took the top honors.
Over the last decade or so, the technique of using general-purpose GPUs (GPGPUs) to boost computational power (by as much as 50X) has had an enormous impact on HPC. The current fastest US system, the Cray Titan, installed at Oak Ridge National Laboratory, is outfitted with 18,688 NVIDIA Tesla K20X GPUs. Titan is the first CPU-GPU hybrid system to exceed 10 petaflops.
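To see why GPUs accelerate this class of workload, consider the data-parallel structure GPGPU computing exploits. The sketch below is illustrative only: it expresses a vector addition as a loop whose iterations are fully independent, which is exactly what lets a GPU (via CUDA, for example) execute each iteration as its own hardware thread. We emulate it serially in plain Python; the function name is ours, not from any GPU library.

```python
def vector_add(a, b):
    """Element-wise addition, written to expose its data parallelism.

    On a GPU, each index i would be handled by an independent thread
    (a CUDA kernel launched with one thread per element); here the
    loop simply runs the same independent work items one at a time.
    """
    n = len(a)
    out = [0.0] * n
    for i in range(n):
        # No iteration depends on any other -- this independence is
        # what a massively parallel accelerator exploits.
        out[i] = a[i] + b[i]
    return out

print(vector_add([1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))
# → [11.0, 22.0, 33.0]
```

Dense linear algebra, stencil codes, and molecular dynamics – the staples of systems like Titan and Lomonosov – are dominated by exactly this kind of independent per-element work, which is why offloading it to thousands of GPU cores can yield the large speedups cited above.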
When China’s Tianhe-1A supercomputer became the world’s fastest in October 2010, NVIDIA stated that it would have taken “50,000 CPUs and twice as much floor space to deliver the same performance using CPUs alone.”
On the Russian list, three of the top 10 systems employ GPUs, and nearly one-third of the entire list leverages these massively parallel processors. That there were no GPU-based systems on this list just three years ago underscores how quickly GPGPU computing has penetrated the high end of HPC.
Designed by T-Platforms for Moscow State University, Lomonosov was upgraded with NVIDIA Tesla 2070 GPUs in 2011, boosting its peak performance to 1.7 petaflops. With a Linpack result of 900 teraflops, the system is currently number 31 on the TOP500.
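The two figures quoted for Lomonosov imply a Linpack efficiency – sustained Rmax over theoretical Rpeak – in line with other GPU-accelerated machines of this era. A quick back-of-the-envelope check, assuming only the numbers given in the article:

```python
# Figures as quoted for Lomonosov: 1.7 PF peak, 900 TF Linpack.
peak_flops = 1.7e15       # Rpeak: theoretical peak, in FLOPS
linpack_flops = 0.9e15    # Rmax: sustained Linpack, in FLOPS

efficiency = linpack_flops / peak_flops
print(f"Linpack efficiency: {efficiency:.0%}")  # → Linpack efficiency: 53%
```

Roughly 53% efficiency is typical of early accelerator-based systems, where the Linpack benchmark cannot keep every GPU fully fed; CPU-only machines of the period often sustained 80% or more of peak.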
According to NVIDIA, Lomonosov is not only the fastest accelerator-based supercomputer in Russia but Europe’s fastest as well. The system will be used for a wide variety of scientific work, including magnetohydrodynamics, quantum chemistry, seismology, drug discovery, geology and materials science. As a CUDA Center of Excellence, Moscow State University is directly engaged in advancing CUDA-based scientific research.
On the Russian Top 50 list, GPU-based supercomputers as a category have demonstrated steady growth over the last few years – while other segments have flat-lined or declined – as the chart below illustrates:
There is little question that accelerator hardware – GPUs from NVIDIA and AMD, and the Xeon Phi coprocessor from Intel – has boosted both raw FLOPS and FLOPS per watt. The jumps in top-system speed over the last five years owe much to this acceleration. The question is what comes next once these gains have been extracted. While there are many technologies on the horizon, there is no clear successor to usher in the exascale era.