January 06, 2011
Well, NVIDIA waited exactly five days into the new year to announce a major new direction for its product roadmap. On Wednesday, the GPU-maker -- and soon-to-be CPU-maker -- revealed its plans to build heterogeneous processors, which will encompass high performance ARM CPU cores alongside GPU cores. The strategy parallels AMD's Fusion architectural approach, which marries x86 CPUs with ATI GPUs on-chip.
The upcoming NVIDIA processors, developed under the codename "Project Denver," will span NVIDIA's non-mobile product line, powering personal computers, workstations, servers and supercomputers. The announcement was made by company CEO Jen-Hsun Huang at the annual Consumer Electronics Show (CES) in Las Vegas. Huang called the news "one of the most strategic announcements we have ever made at NVIDIA." And that might be an understatement.
NVIDIA already uses ARM cores on its Tegra line of processors for mobile computing platforms. That SoC design integrates a 32-bit ARM CPU alongside its GPU cores to power handheld devices such as smartphones, personal digital assistants and tablets. (The company also announced its Tegra 2 generation of processors this week at CES.) With the upcoming Project Denver processors, this heterogeneous platform will be extended across the rest of NVIDIA's product lines, up to and including the Tesla HPC offerings.
As part of this strategy, the company has obtained rights to develop its own NVIDIA-designed high performance CPU cores using ARM's future processor architecture. Presumably this will be based on a future 64-bit implementation of the ARM ISA, given that 64-bit computing is the accepted standard outside of the mobile space.
According to the HPC luminary Jack Dongarra, NVIDIA's decision to marry ARM with GPUs makes sense. "They couldn't license the x86 architecture and needed a CPU platform for their move to more general computing, integrating both CPU- and GPU-based computing," he said. "ARM is a logical choice, giving NVIDIA an opportunity to move in both the low power direction and up to high performance computing."
The overarching rationale here is essentially the same as AMD's: to glue CPU and GPU logic together on the same chip so as to take advantage of the sequential and parallel processing capabilities, respectively, of the two architectures. The proximity of both logic engines to main memory and on-chip resources makes for a much more efficient computing environment. Integration also affords major power efficiency advantages, something that is absolutely critical in both the handheld space and now the datacenter. In particular, as supercomputers move from petascale to exascale, power constraints will force system builders to abandon monolithic x86-based systems, a process that has already begun with the latest generation of GPGPU-equipped supercomputers.
Each of NVIDIA's product lines (Tegra, Quadro, GeForce, and Tesla) has its own roadmap for how the ARM CPU will be folded in. For the Tesla line, ARM integration will take place on the upcoming "Maxwell" generation, according to Andy Keane, general manager of NVIDIA's Tesla business. The Maxwell architecture is scheduled to be introduced in 2013, following the "Kepler" GPUs that are due to be unveiled later this year.
By moving its entire portfolio to a CPU-GPU architecture, NVIDIA is looking to leverage its R&D costs across all product segments, from handhelds to PCs to supercomputers -- in the same way Intel and AMD do with their x86-based chips. In fact, it's the same business model NVIDIA already employs with its CUDA GPU architecture.
"The technologies for the future have to have some basis in the volume market," Keane told HPCwire. "It has to have some reason to exist other than the relatively small volume of the HPC business. That's why this makes sense."
The wildcard here is ARM. For this to work, NVIDIA needs to create that volume market with Project Denver clients and servers. For decades, the x86 CPU has been the standard-bearer for non-mobile computing, and this new approach is a direct challenge to that status quo. In announcing the new architecture, Huang pointed out that ARM shipments already far outstrip x86 volume, and, thanks to the rise of mobile computing, that gap is expected to widen substantially over the next four years.
As a result, there are a wealth of existing compiler and other software development tools for ARM platforms. Conveniently, support for Linux (and now Windows) is also in place. "What we have to do for the Tesla business, like we have done currently with the GPU, is to make sure that the [ARM] ecosystem is adapted correctly for HPC," said Keane.
ARM's disadvantage is that the architecture currently has no footprint in the PC and server arena. Attracting OEMs and system integrators to build non-x86 platforms will certainly be a hurdle for the GPU-maker. However, with the new emphasis on power, especially in the datacenter, the RISC-architected ARM has some real advantages. Combined with a mature software stack and backed by NVIDIA, ARMed GPUs have the potential to upset many segments of x86-dominated computing.
NVIDIA's new path puts it in much more direct competition with Intel and AMD, who are now all vying for the same market segments, and who will soon have little if any reliance on each other's chips. With Project Denver processors now gearing up to go head-to-head against Intel MIC/integrated graphics and AMD Fusion chips, this young decade just got a lot more interesting.
Posted by Michael Feldman - January 06, 2011 @ 3:38 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.