NVIDIA CEO Jen-Hsun Huang did indeed announce the company's next-generation GPU architecture on Wednesday at its GPU Technology Conference. If you caught our coverage of the new processor, nicknamed Fermi, you probably already realize that NVIDIA has set the GPGPU bar pretty darn high for rivals AMD and Intel.
A good portion of Huang's keynote was about advanced visualization, and how real-time ray tracing and photo-realistic 3D imaging are changing the game in that arena. But the crowd definitely took notice when the CEO started dealing from the Fermi GPU slide deck. (It's the first time I remember seeing the mention of double precision floating point and ECC elicit a big round of applause from an audience.) With Fermi, Huang said, GPU computing has now reached a "tipping point."
Even with the new wonder chip, Huang stuck with the company line of GPU-as-coprocessor, in which the CPU does the serial work and the GPU takes on the data-parallel processing. But with Fermi's inclusion of ECC memory, multi-level cache, and hefty double precision horsepower, that division of labor gets even sharper. Said Huang: "We believe central processing will give way to co-processing."
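To make that division of labor concrete, here's a minimal CUDA sketch of the coprocessor model: the CPU handles setup and serial work, while the GPU runs a double-precision DAXPY, the kind of data-parallel arithmetic Fermi's beefed-up DP units are aimed at. The kernel name, array sizes, and launch configuration are illustrative, not anything NVIDIA showed.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Data-parallel work offloaded to the GPU: y = a*x + y in double precision.
__global__ void daxpy(int n, double a, const double *x, double *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                 // illustrative problem size
    size_t bytes = n * sizeof(double);

    // Serial setup stays on the CPU.
    double *hx = (double *)malloc(bytes), *hy = (double *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0; hy[i] = 2.0; }

    double *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // The GPU takes the parallel part: one thread per element.
    daxpy<<<(n + 255) / 256, 256>>>(n, 3.0, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);          // expect 5.0

    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
```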
Although the first products aren't expected until next year, NVIDIA is already playing with some early silicon. In fact, during his keynote, Huang ran a short demo of a Fermi GPU crunching away in double precision next to the much slower T10 (GT200) architecture inside a Tesla C1060.
Huang thinks there's already pent-up demand for Fermi parts in workstations, servers, and supercomputers, and the company is racing the chips into production. He predicted we'll start seeing the first products in "a few short months," and expects the new GPU to be the most successful the company has ever introduced.
Oak Ridge National Lab (ORNL) has already announced it will be using Fermi technology in an upcoming super that is "expected to be 10-times more powerful than today's fastest supercomputer." Since ORNL's Jaguar supercomputer holds that title, for all intents and purposes, and is in the process of being upgraded to 2.3 petaflops thanks to a truckload of new AMD Istanbul chips, we can surmise that the upcoming Fermi-equipped super will be in the 20-petaflop range. No timetable was offered for this particular deployment, but I'm guessing 2011.
And it looks like ORNL's Fermi machine will be built by Cray. At the "Breakthroughs in High Performance Computing" session on Wednesday evening, Cray CTO Steve Scott basically gave Fermi the seal of approval for use in high-end supercomputers. The new features that made that possible: ECC, a lot more double precision performance, a unified address space, and support for concurrent kernels. Cray intends to add the upcoming GPUs to next year's new XT line (XT6?). Scott said the Fermi chips will be integrated into Cray's SeaStar interconnect, presumably cohabiting with AMD Opteron hardware.
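For a sense of what "concurrent kernels" buys, here's a hedged sketch: two independent kernels launched into separate CUDA streams, which earlier GPUs would run back to back but Fermi can overlap when resources allow. The kernel bodies and buffer sizes are placeholders of my own, not Cray's or NVIDIA's code.

```cuda
#include <cuda_runtime.h>

// Two independent, placeholder workloads.
__global__ void kernelA(float *buf, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] *= 2.0f;
}

__global__ void kernelB(float *buf, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] += 1.0f;
}

int main() {
    const int n = 1 << 16;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    // Independent work goes into separate streams; pre-Fermi hardware
    // serializes the launches, Fermi can execute them concurrently.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    kernelA<<<(n + 255) / 256, 256, 0, s1>>>(a, n);
    kernelB<<<(n + 255) / 256, 256, 0, s2>>>(b, n);

    cudaDeviceSynchronize();   // wait for both streams to finish

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```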
The GPU as floating point accelerator fits in perfectly with Cray's Adaptive Computing Strategy that it started talking about in 2005. But it's interesting to note that GPUs were barely mentioned in the original cast of processor architectures that might make up future hybrid supercomputers. Now it looks like they could very well end up being the dominant co-processor technology for such machines.
Posted by Michael Feldman - September 30, 2009 @ 10:40 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.