Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

December 11, 2013

HPC Progress: No More Free Lunch

Tiffany Trader

As HPC news hits its end-of-year slump before activity picks up again in January, what better time to take in some SC13 highlights that you may have missed? During the show, NVIDIA Corp. hosted a number of compelling booth sessions at its GPU Technology Theatre, and full videos of these talks are now available on NVIDIA’s website.

The sessions covered a wide range of topics, including the future of accelerated computing, massively parallel computing for physics, being green at exascale, the latest additions to OpenACC and CUDA – and much, much more. With 39 talks to choose from, there should be something for everyone.

One session that stands out is titled “Efficiency and Programmability: Enablers for Exascale.” Delivered by NVIDIA Chief Scientist and Senior VP of Research Bill Dally, the 28-minute talk provides a concise and clear overview of the main challenges that HPC is facing over the next five to 10 years and NVIDIA’s plans to address them.

There are two essential sources of demand, says Dally, and they are aligned in what they require. On one side, call it HPC, climate scientists are working to refine their grid models from the tens-of-kilometers resolution of today down to kilometer scale in order to answer fundamental questions about greenhouse gases and climate change. On the other side, data analytics professionals need to digest enormous amounts of data in order to extract insight from it. For both of these fields, HPC and data analytics, there is an insatiable demand for computing, and they need the same three things: number crunching, data movement/communication, and memory/storage. The main difference between HPC and data analytics comes down to how these elements are balanced.

[Slide from the SC13 NVIDIA session: the end of historic scaling]

The days of waiting for Moore’s Law to take care of your problems are over, notes Dally, as he displays a chart detailing the end of historic scaling. During the 80s and 90s, single-thread performance was increasing at 50 percent per year. Those days are over. From the late 80s to about 2000, HPC was exploiting parallelism, but doing it in a covert way. At the beginning of that period, machines took 10 cycles to execute one instruction; at the end, they executed four instructions per cycle. In the same time frame, the clock cycle went from 100 gate delays to 10 gate delays. But all of that performance got mined out: architects could not make the clock cycle any shorter, and there were diminishing returns to issuing more than four instructions per cycle.
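Taken together, those figures imply a large per-thread speedup from that era alone. A back-of-the-envelope check (the 10-cycle, 4-IPC, and gate-delay figures are from the talk; combining them into a total is my arithmetic, not a claim from the slide):

```python
# Back-of-the-envelope check of the "covert parallelism" era gains
# described in the talk. Input figures are as quoted; the combined
# speedup is my own arithmetic, for illustration only.

ipc_start = 1 / 10   # ~10 cycles per instruction at the start of the era
ipc_end = 4          # ~4 instructions per cycle at the end

gate_delays_start = 100  # gate delays per clock cycle, start of era
gate_delays_end = 10     # gate delays per clock cycle, end of era

ilp_gain = ipc_end / ipc_start                    # gain from instruction-level parallelism
clock_gain = gate_delays_start / gate_delays_end  # gain from shorter cycles (same process)

print(f"ILP gain:   {ilp_gain:.0f}x")
print(f"Clock gain: {clock_gain:.0f}x")
print(f"Combined architectural speedup: {ilp_gain * clock_gain:.0f}x")
```

The point of the exercise is that both multipliers were one-time gains: once the pipeline is as short as the circuits allow and issue width stops paying off, this well runs dry.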

At the same time, semiconductor processes reached a limit with regard to scaling voltage without facing leakage and overheating. This put the kibosh on Dennard scaling.

Here Dally references DARPA’s Bob Colwell, who earlier this year observed, “Moore’s law gives us more transistors…Dennard scaling makes them useful.”

“The number of transistors is still going up in a very healthy way,” Dally contends, “We’re still getting the Moore’s law increase in the number of devices, but without Dennard scaling to make them useful, all the things we care about – like clock frequency of our parts, single thread performance and even the number of cores we can put in our parts – is flattening out, leading to the end of scaling as we know it. We’re no longer getting this free lunch of technology giving us faster computers.”

This is a problem that needs to be solved because our 21st-century economy hinges on continued computational progress. With the end of Dennard scaling, all computing becomes power-limited, and performance going forward is determined by energy efficiency. As process technology can no longer be counted on for exponential advances, the focus must shift to architecture and circuits, according to Dally.
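The power-limited point can be made concrete with the basic identity behind it (my illustration, not a formula from the talk): sustained performance equals power times energy efficiency, so at a fixed power budget only efficiency moves performance.

```python
# Sketch of the power-limited performance identity (my illustration):
#   performance [FLOP/s] = power [W] * efficiency [FLOP/J]
# At a fixed power budget, only efficiency improvements raise performance.

def performance_gflops(power_watts: float, efficiency_gflops_per_watt: float) -> float:
    """Sustained performance in GFLOPS for a power-limited machine."""
    return power_watts * efficiency_gflops_per_watt

# The 20 MW figure is the commonly cited exascale power target, used
# here purely as an example budget.
budget_watts = 20e6
exaflop_in_gflops = 1e9  # 1 EFLOPS expressed in GFLOPS

needed = exaflop_in_gflops / budget_watts
print(f"Required efficiency: {needed:.0f} GFLOPS/W")  # 50 GFLOPS/W
```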

Using Titan as the current benchmark, Dally maintains that fielding an exascale machine (by 2020) will require:

  • 50X improvement in performance
  • 4X scaling in the number of nodes
  • 2X scaling in energy
  • 25X improvement in energy efficiency
  • 1000X thread increase (parallelism)

The most difficult challenges, according to Dally, are the last two: energy efficiency and parallelism. The rest of the talk goes into detail about what it will take to hit these goals and NVIDIA’s vision for what a machine will look like in 2020.
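The targets in the list are not independent: a 50X performance jump inside a 2X power envelope forces the 25X efficiency gain, and 50X performance from only 4X more nodes forces most of the speedup into each node. A minimal sketch of that arithmetic (the ratios are from the list above; the derived per-node quantities are my own calculation):

```python
# How Dally's exascale targets relate to one another. The four scaling
# factors are from the talk; the derived quantities are my arithmetic.

perf_scaling = 50      # 50X overall performance vs Titan
node_scaling = 4       # 4X more nodes
energy_scaling = 2     # 2X energy/power budget
thread_scaling = 1000  # 1000X more parallelism

efficiency_needed = perf_scaling / energy_scaling        # required FLOPS/W improvement
per_node_speedup = perf_scaling / node_scaling           # required per-node speedup
threads_per_node_growth = thread_scaling / node_scaling  # parallelism growth within a node

print(f"Energy-efficiency improvement: {efficiency_needed:.0f}X")  # 25X, matching the list
print(f"Per-node performance: {per_node_speedup:.1f}X")
print(f"Per-node thread growth: {threads_per_node_growth:.0f}X")
```

The last line is why Dally ranks parallelism alongside efficiency: with node counts growing only 4X, nearly all of the 1000X thread increase has to come from inside each node.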
