August 07, 2008
After a week of media buzz about Intel's upcoming manycore Larrabee processor, I thought I'd try to get a sense of how the competition -- namely AMD and NVIDIA -- is reacting to the news. If Intel is able to deliver the goods with Larrabee, both its rivals have a lot to lose.
With Intel in the mix, all three chip vendors are now looking to expand their share of the visual computing and supercomputing pie. Intel, though, is definitely taking a different path. Larrabee's integration of a stripped-down x86 with a 512-bit SIMD unit gives it a very HPC flavor compared to the standard GPUs offered by its rivals. One way to look at it is that AMD and NVIDIA (and IBM, for that matter) took a game processor and transformed it into an HPC platform, while with Larrabee, Intel is attempting to perform the trick in reverse. As I pointed out in this week's feature article, from Intel's point of view, this is a necessary strategy since the company needs to build volume and ecosystem support in the consumer graphics space before venturing further afield.
What Intel is presumably hoping for is that in the next decade Larrabee will be the platform of choice for a new set of terascale applications, which incorporate both visual computing and HPC. The company's concept of this is something called RMS (Recognition, Mining, and Synthesis), which describes a set of applications that manipulate complex models and events. Until this particular Nirvana arrives, Intel has to beat the GPU vendors at their own game, so to speak. From a pure performance point of view, that means Larrabee will have to achieve multiple (single precision) teraflops to match up well with the GPUs it will be going up against in the next couple of years.
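For a rough sense of what that entails, here's a back-of-envelope sketch of peak single-precision throughput for a hypothetical Larrabee configuration. The core count, clock frequency, and the assumption of one fused multiply-add per lane per cycle are my own guesses, not Intel figures:

```c
#include <stdio.h>

int main(void)
{
    /* Back-of-envelope peak single-precision throughput for a
       hypothetical Larrabee part -- all of these figures are assumed. */
    int    cores        = 32;    /* assumed core count                  */
    int    simd_lanes   = 16;    /* 512-bit SIMD unit / 32-bit floats   */
    int    flops_per_op = 2;     /* multiply-add counted as two flops   */
    double clock_ghz    = 2.0;   /* assumed clock frequency             */

    double peak_gflops = cores * simd_lanes * flops_per_op * clock_ghz;
    printf("Peak: %.0f GFLOPS (%.1f TFLOPS)\n",
           peak_gflops, peak_gflops / 1000.0);
    return 0;
}
```

Under those assumptions, a 32-core part lands at about 2 teraflops peak, which is at least in the same neighborhood as the discrete GPUs expected in that timeframe.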
But Raja Koduri, chief technology officer for AMD's graphics processing group, doubts Larrabee will be competitive with discrete GPUs. "From a performance per watt and performance per dollar perspective, GPUs should still have a big advantage on existing workloads," he told me. Based on what he could extrapolate from the technical paper Intel will be presenting at SIGGRAPH later this month, he estimates that a 16- or 32-core Larrabee chip would be roughly equivalent to AMD's previous generation of low-end GPUs. And according to Koduri, by the end of 2009 or 2010, low-end graphics processors will be three times faster.
Even for general-purpose HPC workloads and user codes such as FFT or matrix multiplication, Koduri thinks GPUs will still have a performance advantage because their compute density will be much higher. He allows that a Larrabee architecture might be useful for software that requires both fine-grained control, which is suitable for multicore CPUs, and highly data-parallel operations, which are applicable to GPUs. But he's not sure which applications would fit in that sweet spot.
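For illustration, the kind of highly data-parallel operation Koduri has in mind looks something like the naive matrix multiply below, where every output element can be computed independently. This is a generic sketch in C with an OpenMP pragma, not code for any particular GPU or Larrabee toolchain:

```c
/* Naive single-precision matrix multiply, C = A * B, for n x n matrices.
 * Every (i, j) output element is independent, so the work is massively
 * data parallel -- the kind of kernel that maps naturally onto a GPU or
 * a wide-SIMD manycore chip. A production HPC library would also block
 * for cache and vectorize the inner loop explicitly. */
void matmul_naive(int n, const float *A, const float *B, float *C)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            float acc = 0.0f;
            for (int k = 0; k < n; k++)
                acc += A[i * n + k] * B[k * n + j];
            C[i * n + j] = acc;
        }
    }
}
```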
NVIDIA's take on Larrabee appears to be very similar. Andy Keane, NVIDIA's general manager of its GPU computing group, says it's going to be very difficult to build a platform from a set of general-purpose cores that competes against GPU performance or features. From his point of view, the separation of CPU and GPU is still valid since there is no unifying software model that brings those two architectures together.
Keane thinks Larrabee's dependence on software will limit its utility as a GPU and make it even more problematic to program than a multicore processor. "The multicore scaling problem still exists," he says, noting that adding more cores just exacerbates the problem. In fact, scaling codes for Larrabee is potentially a lot more challenging than it will be for the current crop of quad-core chips because 16 or 32 cores will be much more difficult to manage.
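Keane's scaling argument is essentially Amdahl's law: whatever fraction of a code stays serial puts a hard ceiling on speedup, and that ceiling bites harder as the core count grows. A quick illustration, with a 10 percent serial fraction chosen purely for the sake of the example:

```c
#include <stdio.h>

/* Amdahl's law: speedup = 1 / (s + (1 - s) / p), where s is the serial
 * fraction of a code and p the number of cores. The 10% serial fraction
 * below is an assumed figure for illustration, not a Larrabee number. */
int main(void)
{
    double s = 0.10;                  /* assumed serial fraction */
    int cores[] = { 4, 16, 32 };

    for (int i = 0; i < 3; i++) {
        int p = cores[i];
        double speedup = 1.0 / (s + (1.0 - s) / p);
        printf("%2d cores -> %.1fx speedup\n", p, speedup);
    }
    return 0;
}
```

With that assumed serial fraction, doubling from 16 to 32 cores improves the speedup by only about 20 percent, which is the crux of Keane's point about manycore programmability.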
At this point, I'll interject that I was a little surprised Intel didn't at least give a mention to Ct in its first public unveiling of the new architecture. Ct is the parallel programming language the company is developing for throughput computing. Since Larrabee is a throughput architecture, why not at least give a shout out to the matching language/compiler technology? Even if Ct never makes it out of the lab, I'm sure Intel will end up supporting Larrabee in its vanilla C/C++ compiler and libraries. I can only guess that Intel wants to initially push the Larrabee-as-GPU message, and so limited the software talking points to DirectX and OpenGL support.
The other missing piece of the puzzle is the nature of Larrabee's vector instruction set. Intel kind of glossed over the fact that it was inventing a bunch of new non-x86 instructions, which the user, the compiler, or some other layer of software will have to deal with. It's been suggested that Larrabee's vector instructions will be based on Intel's upcoming AVX vector extensions to SSE, but since the Larrabee unit is 512 bits wide and AVX is currently spec'ed at 256 bits, I'm not sure how this gets resolved. It would probably make the most sense if Larrabee just supported a double-wide version of AVX. Despite Intel's emphasis on the x86-ness of Larrabee, Keane thinks that the nonstandard vector unit will lock developers into a proprietary architecture.
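For readers who haven't written vector code, the fragment below gives a feel for what programming a SIMD unit through intrinsics looks like. It uses Intel's existing 128-bit SSE intrinsics purely as an analogy -- SSE packs four single-precision floats per instruction, whereas a 512-bit Larrabee unit would pack sixteen -- since the actual Larrabee intrinsics and instruction mnemonics haven't been published:

```c
#include <xmmintrin.h>   /* SSE intrinsics: 128 bits, 4 floats per op */

/* y[i] = a * x[i] + y[i] for n a multiple of 4, written with SSE
 * intrinsics. This is only an analogy for Larrabee's vector unit,
 * which would operate on 16 floats at a time rather than 4. */
void saxpy_sse(int n, float a, const float *x, float *y)
{
    __m128 va = _mm_set1_ps(a);
    for (int i = 0; i < n; i += 4) {
        __m128 vx = _mm_loadu_ps(&x[i]);
        __m128 vy = _mm_loadu_ps(&y[i]);
        vy = _mm_add_ps(_mm_mul_ps(va, vx), vy);
        _mm_storeu_ps(&y[i], vy);
    }
}
```

This is the level of detail that a compiler, a library, or the programmer has to absorb whenever a new vector instruction set appears, which is why the "some other layer of software" question matters.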
Intel will have to work hard to get programmers to adopt Larrabee for general-purpose computing, since most won't want to deal with assembly code or inline intrinsics. A software company like RapidMind could help here, since supporting high-level code development for these kinds of parallel computing platforms is its forte. RapidMind currently supports the Cell processor, GPUs (NVIDIA and AMD), and multicore x86 platforms with its development platform.
Michael McCool, founder and chief scientist at RapidMind, says Larrabee represents a very different design from existing x86 processors, GPUs, or the Cell processor. "From a software development point of view, it really has to be treated as a new kind of processor," he explains, adding that software will be the key to extracting performance. "It is important to realize, it's not necessarily going to be an easy transition from ordinary x86 code to high-performance Larrabee code."
According to RapidMind CEO Ray DePaul, the company does intend to offer Larrabee support in its product. "This is an important development for Intel and we expect that it will get traction," he says. "It is a very good target for our platform and we'll be able to do very interesting things for applications by mapping applications to Larrabee using RapidMind."
Intel still has 12 to 18 months to refine the architecture and set the software stage before Larrabee's debut. In the interim, McCool expects the other major players -- AMD, NVIDIA, IBM -- won't be sitting still. He believes we'll see some pretty remarkable evolution of the Cell processor and discrete GPUs over this time period, adding, "I think it will be a very interesting next couple of years."
Posted by Michael Feldman - August 06, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.