August 06, 2008
In the shadow of Intel's Larrabee unveiling, AMD announced today that it intends to support the new DirectX 11 standard in its stream computing software development kit. DirectX is a Microsoft API that has traditionally been available for multimedia and game development. Version 11, which Microsoft talked up at the Gamefest conference in Seattle last month, will include GPGPU support as well as improved support for multicore CPUs.
From AMD's press release:
"Just as it ushered in the era of advanced 3-D gaming for the masses, DirectX is poised to be at the vanguard of the GPGPU revolution," said Anantha Kancherla, manager of Windows desktop and graphics technologies for Microsoft. "DirectX 11 gives developers the power to more easily harness the astonishing capabilities of AMD GPUs for general purpose computation, and gives consumers an effortless way to experience all that AMD Stream has to offer, on the hundreds of millions of Microsoft Windows powered systems worldwide."
The fact that DirectX is jumping on the GPGPU bandwagon is a real sign of the times. With teraflop-capable graphics processors becoming more commonplace in desktop PCs over the next few years, combined with the insatiable processing demands of multimedia computing, it was only a matter of time before GPGPU went mainstream. For AMD, supporting it was a no-brainer, since DirectX offers a much more mainstream software platform for the company's FireStream GPGPU products than its current offering, Brook+.
AMD had previously announced support for OpenCL (Open Computing Language), a new parallel programming language that is intended to be used across GPUs and multicore CPUs. The Khronos Group is currently in the process of drafting language standards and will be getting input from AMD, Intel, NVIDIA, Apple, IBM, and others. Apple says it will introduce OpenCL in Mac OS X v10.6 ('Snow Leopard'), which is expected to be released next year. If OpenCL garners wide industry backing, it could provide a common parallel programming solution across many architectures.
With support for DirectX and OpenCL, AMD is hoping to jump-start its fledgling GPGPU business, based on the company's FireStream offerings. In a recent article in TG Daily, author Theo Valich says AMD is changing its GPGPU software strategy and will ditch its Close-To-Metal platform in favor of OpenCL and DirectX 11.
In a speech at the GPG CTO Technology Day held in Iceland's capital, Raja Koduri, CTO of AMD GPG (the former ATI), announced that AMD believes the time for proprietary software solutions such as AMD's own Close-to-Metal and NVIDIA's CUDA has passed. As a result, AMD will throw its efforts behind DirectX 11 compute shaders and the OpenCL GPGPU language, and will focus on standardized solutions only.
Both DirectX and OpenCL are initially aimed at desktop systems, but with teraflop-level performance showing up in the latest high-end NVIDIA and AMD GPUs, there are a growing number of technical computing apps that can now be hosted on GPU-equipped PCs. However, it remains to be seen whether either of these programming models will catch on in the GPGPU space. For one thing, NVIDIA has a big head start with CUDA, the company's own C programming language environment for GPU computing. According to Andy Keane, general manager of the GPU computing group at NVIDIA, users have already downloaded close to 80,000 copies of the software, and he estimates there are currently between 10,000 and 20,000 active CUDA developers.
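To give a sense of the C-with-extensions style that has attracted those developers, here is a minimal CUDA vector-addition sketch. This is an illustrative example only, not code from NVIDIA's materials; the kernel and variable names are my own.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A CUDA kernel: each GPU thread adds one pair of array elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float ha[n], hb[n], hc[n];
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    // Allocate device memory and copy the inputs to the GPU.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    // Copy the result back and spot-check one element.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("hc[10] = %f\n", hc[10]);  // 10 + 20 = 30.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```

The data-parallel pattern here — one lightweight thread per element, with the hardware scheduling thousands of them at once — is the same model DirectX 11 compute shaders and OpenCL kernels expose, which is why developers have viewed the three as competing routes to the same silicon. (Compiling and running this requires NVIDIA's nvcc toolchain and a CUDA-capable GPU.)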
Meanwhile, the other GPGPU standards are still being hammered out. AMD says it's planning to add DirectX 11 support to its software kit "over the course of the next 18 months," and the OpenCL standard is still in the specification stage. For the time being, AMD is sticking with Brook+, the company's open source C language extension for stream computing. But I have a feeling AMD would ditch it if an industry-standard solution caught on.
In any case, the chipmaker is going to be playing catch-up to NVIDIA in the GPGPU arena for a while. On the bright side, AMD has less ground to make up here than Intel, which is at least a year away from a Larrabee product release. I'll talk more about how Larrabee might stack up against the competition in tomorrow's blog post.
Posted by Michael Feldman - August 05, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.