NVIDIA CEO Jen-Hsun Huang did indeed announce the company’s next-generation GPU architecture on Wednesday at its GPU Technology Conference. If you caught our coverage of the new processor, nicknamed Fermi, you probably already realize that NVIDIA has set the GPGPU bar pretty darn high for rivals AMD and Intel.
A good portion of Huang’s keynote was about advanced visualization, and how real-time ray tracing and photo-realistic 3D imaging are changing the game in that arena. But the crowd definitely took notice when the CEO started dealing from the Fermi GPU slide deck. (It’s the first time I remember seeing the mention of double precision floating point and ECC elicit a big round of applause from an audience.) With Fermi, Huang said, GPU computing has now reached a “tipping point.”
Even with the new wonder chip, Huang stuck with the company line of GPU-as-coprocessor, in which the CPU does the serial work, and the GPU takes on the data parallel processing. But with Fermi’s inclusion of ECC memory, a multi-level cache, and hefty double precision horsepower, that division of labor gets even sharper. Said Huang: “We believe central processing will give way to co-processing.”
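For readers new to the model, the division of labor Huang describes is already how CUDA programs are structured today. The sketch below is illustrative only (not NVIDIA’s demo code): the host CPU handles the serial setup and control flow, while a kernel performs the data-parallel, double-precision arithmetic on the GPU.

```cuda
// Minimal sketch of the CPU/GPU co-processing model: serial work on the
// host, data-parallel double-precision work offloaded to the device.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread scales exactly one array element.
__global__ void scale(double *x, double a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(double);

    double *h = (double *)malloc(bytes), *d;
    for (int i = 0; i < n; ++i) h[i] = 1.0;   // serial setup on the CPU

    cudaMalloc(&d, bytes);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(d, 2.0, n);   // data-parallel work on the GPU

    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);   // CPU collects the result
    printf("h[0] = %f\n", h[0]);

    cudaFree(d);
    free(h);
    return 0;
}
```

On the GT200 generation, the double-precision multiply in that kernel is the bottleneck; Fermi’s promised boost to DP throughput is precisely what made the audience applaud.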
Although the first products aren’t expected until next year, NVIDIA is already playing with some early silicon. In fact, during his keynote, Huang ran a short demo of a Fermi GPU crunching away in double precision next to the much slower T10 (GT200) architecture inside a Tesla C1060. See video below:
Huang thinks there’s already pent-up demand for Fermi parts in workstations, servers, and supercomputers, and says NVIDIA is racing the chips into production. He predicted we’ll start seeing the first products in “a few short months,” and expects the new GPU will be the most successful the company has ever introduced.
Oak Ridge National Lab (ORNL) has already announced it will be using Fermi technology in an upcoming super that is “expected to be 10-times more powerful than today’s fastest supercomputer.” Since ORNL’s Jaguar supercomputer, for all intents and purposes, holds that title, and is in the process of being upgraded to 2.3 petaflops thanks to a new truckload of AMD Istanbul chips, we can surmise that the upcoming Fermi-equipped super is going to be in the 20 petaflops range. No timetable was offered for this particular deployment, but I’m guessing 2011.
And it looks like ORNL’s Fermi machine will be built by Cray. At the “Breakthroughs in High Performance Computing” session on Wednesday evening, Cray CTO Steve Scott basically gave Fermi the seal of approval for its use in high-end supercomputers. The new features that made that possible: ECC, a lot more DP performance, a unified address space, and support for concurrent kernels. Cray intends to add the upcoming GPUs to next year’s new XT line (XT6?). Scott said the Fermi chips will be integrated into Cray’s SeaStar interconnect, presumably cohabiting with AMD Opteron hardware.
The GPU as floating point accelerator fits in perfectly with Cray’s Adaptive Computing Strategy that it started talking about in 2005. But it’s interesting to note that GPUs were barely mentioned in the original cast of processor architectures that might make up future hybrid supercomputers. Now it looks like they could very well end up being the dominant co-processor technology for such machines.