August 06, 2008
In the shadow of Intel's Larrabee unveiling, AMD announced today that it intends to support the new DirectX 11 standard in its stream computing software development kit. DirectX is a Microsoft API that has traditionally been available for multimedia and game development. Version 11, which Microsoft talked up at the Gamefest conference in Seattle last month, will include GPGPU support as well as improved support for multicore CPUs.
From AMD's press release:
"Just as it ushered in the era of advanced 3-D gaming for the masses, DirectX is poised to be at the vanguard of the GPGPU revolution," said Anantha Kancherla, manager of Windows desktop and graphics technologies for Microsoft. "DirectX 11 gives developers the power to more easily harness the astonishing capabilities of AMD GPUs for general purpose computation, and gives consumers an effortless way to experience all that AMD Stream has to offer, on the hundreds of millions of Microsoft Windows powered systems worldwide."
The fact that DirectX is jumping on the GPGPU bandwagon is a real sign of the times. With teraflop-capable graphics processors becoming more commonplace in desktop PCs over the next few years, combined with the insatiable processing demands of multimedia computing, it was only a matter of time before GPGPU went mainstream. Supporting DirectX 11 was a no-brainer for AMD, since it offers a much more mainstream software platform for the company's FireStream GPGPU products than its current offering, Brook+.
AMD had previously announced support for OpenCL (Open Computing Language), a new parallel programming language that is intended to be used across GPUs and multicore CPUs. The Khronos Group is currently in the process of drafting language standards and will be getting input from AMD, Intel, NVIDIA, Apple, IBM, and others. Apple says it will introduce OpenCL in Mac OS X v10.6 ('Snow Leopard'), which is expected to be released next year. If OpenCL garners wide industry backing, it could provide a common parallel programming solution across many architectures.
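To give a flavor of what OpenCL aims to standardize, here is an illustrative sketch of a data-parallel kernel in OpenCL's C dialect. Since the specification is still being drafted, the exact syntax here is an assumption based on early descriptions of the language; it is a sketch, not final OpenCL code:

```c
/* Illustrative sketch only: a data-parallel "vector add" kernel in
   draft-style OpenCL C. Each work-item computes one output element. */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *c)
{
    int i = get_global_id(0);  /* index of this work-item */
    c[i] = a[i] + b[i];
}
```

A separate host program would compile this kernel at runtime and dispatch it across whatever device is available, which is the portability argument: the same kernel source could, in principle, run on a GPU or a multicore CPU.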
With support for DirectX and OpenCL, AMD is hoping to jump-start its fledgling GPGPU business, based on the company's FireStream offerings. In a recent article in TG Daily, author Theo Valich says AMD is changing its GPGPU software strategy and will ditch its Close-To-Metal platform in favor of OpenCL and DirectX 11.
In a speech at the GPG CTO Technology Day held in Iceland's capital, Raja Koduri, CTO of AMD's Graphics Product Group (the former ATI), announced that AMD believes the time for proprietary software solutions such as AMD's own Close-to-Metal and NVIDIA's CUDA has passed. As a result, AMD will throw its efforts behind DirectX 11 Computational Shaders and the OpenCL GPGPU language and will focus on standardized solutions only.
Both DirectX and OpenCL are initially aimed at desktop systems, but with teraflop-level performance showing up in the latest high-end NVIDIA and AMD GPUs, there are a growing number of technical computing apps that can now be hosted on GPU-equipped PCs. However, it remains to be seen whether either of these programming models will catch on in the GPGPU space. For one thing, NVIDIA has a big head start with CUDA, the company's own C programming language environment for GPU computing. According to Andy Keane, general manager of the GPU computing group at NVIDIA, users have already downloaded close to 80,000 copies of the software, and he estimates there are currently between 10,000 and 20,000 active CUDA developers.
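For comparison, the same element-wise computation looks like this in CUDA's C dialect. This is a minimal kernel-only sketch; a real program would also include host code to allocate device memory, copy data, and launch the kernel:

```cuda
// Minimal CUDA kernel sketch: one thread per output element.
__global__ void vec_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)               // guard: the last block may have extra threads
        c[i] = a[i] + b[i];
}
// Launched from host code roughly as: vec_add<<<blocks, threads>>>(a, b, c, n);
```

The structural similarity to the OpenCL-style kernels being drafted is exactly why a vendor-neutral standard is plausible: the data-parallel programming model is nearly the same, and mainly the spelling differs.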
Meanwhile the other GPGPU standards are still being hammered out. AMD says it's planning to add DirectX 11 support to its software kit "over the course of the next 18 months," and the OpenCL standard is still in the "specification" stage. For the time being AMD is sticking with Brook+, the company's open source C language extension for stream computing. But I have a feeling AMD would ditch it if an industry-standard solution caught on.
In any case, the chipmaker is going to be playing catch-up to NVIDIA in the GPGPU arena for a while. On the bright side, AMD has less ground to make up here than Intel, which is at least a year away from a Larrabee product release. I'll talk more about how Larrabee might stack up against the competition in tomorrow's blog post.
Posted by Michael Feldman - August 05, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.