March 05, 2009
Although rumors of NVIDIA developing its own x86 products have been circulating for years, a comment this week by Michael Hara, the company's senior VP of investor relations, all but confirmed the GPU maker's intention to bring x86 silicon to market.
The x86 remarks were made toward the end of an NVIDIA "fireside chat" at Morgan Stanley's Technology Conference on Tuesday in San Francisco. In response to a question about NVIDIA's plans to enter the general-purpose processor business, Hara revealed the idea of duplicating its Tegra approach (ARM CPU-based SoC) for mobile internet devices, but with an x86 core.
"I think some time down the road it makes sense to take the same level of integration that we've done with Tegra," said Hara. "Tegra is by any definition a complete computer on a chip, and the requirements of that market are such that you have to be very low power, very small, but highly efficient. So in that particular state it made a lot of sense to take that approach, and someday it's going to make sense to take the same approach in the x86 market as well."
He went on to say that it was not a matter of if the company will do this, but when, and offered a two- to three-year timeframe for the first NVIDIA x86 parts. By that point, SoC architectures will make sense even for larger platforms like small form factor PCs (netbooks and nettops), a market NVIDIA is currently going after with its ION platform. ION pairs a GeForce 9400 GPU with an Intel Atom CPU on a hand-sized board.
So how will this impact HPC? At this point, there was no talk of NVIDIA going after the x86 server market, a la Xeon or Opteron, so we're not likely to see NVIDIA x86-based servers anytime soon. For the foreseeable future, the company's Tesla-based products (along with CUDA) will be NVIDIA's main contribution to high performance computing.
But NVIDIA may need an x86 play to remain viable as a company over the long term. With CPU-GPU integration proceeding apace at Intel and AMD, NVIDIA would otherwise be left in the precarious position of selling only discrete GPUs, integrated chipsets, and ARM-based ASICs. SoCs are where the action is headed for mobile and embedded devices, and x86-based parts will probably grab a large chunk of those markets.
Also, even though the volume SoC parts won't end up in HPC datacenters, by the time these chips get to 32nm, and then 22nm, a lot of these mobile devices will be powerful enough to run some high performance technical workloads, like image recognition and language translation. The advent of OpenCL promises to help pave the way for these types of applications on all sorts of handheld electronic devices.
How NVIDIA goes about getting a license to build x86 silicon is still an open question. Right now, Intel is not exactly on speaking terms with NVIDIA, having recently taken the GPU maker to court over a cross-licensing dispute regarding Nehalem chipsets. Even if the two chipmakers decide to kiss and make up, it's hard to imagine why Intel would grant NVIDIA an x86 license to compete in the same markets.
However, NVIDIA could gain access to such a license by buying VIA Technologies, the Taiwanese chipmaker that developed the x86-compatible Nano processor. Rumors of such an acquisition have been floating around for almost a year. To be sure, it's not clear whether VIA's x86 license would be transferable in the event of a buyout, so NVIDIA may have to seek another type of arrangement. But with NVIDIA's intention to enter the mobile x86 arena now out in the open, an alliance of some sort with VIA seems more likely than ever.
Not that NVIDIA has extra cash to throw around right now. The company's earnings have certainly taken a beating lately. Last month, it reported a quarterly loss of $147 million, reflecting a 60 percent drop in revenue from the same quarter of the previous year. NVIDIA is trying to right the ship by lowering operating expenses and focusing on the healthiest parts of the GPU business -- namely mobile graphics and cutting-edge GPUs.
Hara said he thinks the biggest upside surprises this year will likely come from Tegra at the low end and Tesla at the high end. He did note that the recession is holding Tesla back right now. "It's getting great traction as we speak, but it's also being somewhat contained by the economy," he admitted. Without offering specific numbers, he said the number of people programming for Tesla and the number of applications ported to the platform continue to be "very high." And since Tesla carries a gross margin of about 50 percent, against a corporate average of 35 percent, an uptick in Tesla revenue could lift the bottom line quite effectively.
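Hara's margin argument is simple mix arithmetic: because Tesla's gross margin (roughly 50 percent) sits well above the corporate average (roughly 35 percent), every dollar of revenue that shifts toward Tesla raises the blended margin. A minimal sketch of that effect -- the revenue figures below are hypothetical; only the two margin rates come from Hara's remarks:

```python
def blended_margin(revenue_by_segment):
    """Revenue-weighted gross margin across business segments.

    revenue_by_segment: dict mapping segment name -> (revenue, gross_margin)
    """
    total_revenue = sum(rev for rev, _ in revenue_by_segment.values())
    gross_profit = sum(rev * margin for rev, margin in revenue_by_segment.values())
    return gross_profit / total_revenue

# Hypothetical mix: $900M of ~35%-margin business plus $100M of Tesla at ~50%.
baseline = blended_margin({"core": (900, 0.35), "tesla": (100, 0.50)})   # 36.5%

# Shift $100M of that revenue toward Tesla and the blend improves to 38%.
shifted = blended_margin({"core": (800, 0.35), "tesla": (200, 0.50)})
```

The point is that Tesla doesn't need to become a large fraction of revenue for its higher margin to show up in overall profitability.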
Asked about the competition from Intel's upcoming Larrabee CPU-GPU hybrid processor for high end graphics and visual computing applications, Hara said he thinks Intel will be behind the performance curve. Larrabee, unlike traditional GPUs, relies on software rather than hardware to provide a lot of the graphics smarts.
"Ultimately if they can't benchmark well in applications against a traditional hardwired GPU, then they have to do things like add cores, which will then make the chip bigger and add issues with power," he noted. "So I think the work they have to do to get up to the levels of the current architectures in the market [i.e., AMD and NVIDIA] is going to be very high. Obviously, we're not sitting still, so by the time they come out with their parts, we'll have raised the bar again."
What makes all of this so interesting is the prospect of a three-way competition in the general-purpose microprocessor business. If integrated CPU-GPU chips become the default architecture over the next several years -- and I think that's likely -- it would be a lot healthier for the industry if all the major players were involved. That's assuming, of course, they all survive the current economic calamity. Here's hoping.
Posted by Michael Feldman - March 05, 2009 @ 4:38 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.