October 20, 2006
Back in January of this year when I talked to Phil Hester, AMD's CTO, I remember him telling me about the company's undivided commitment to the x86 architecture. "We're pretty religious about the x86," he declared. Today, with the imminent merger of the company with GPU powerhouse ATI Technologies, it appears the AMD execs have become a bit more secular. At some point, they must have realized that x86 cores have their limits, no matter how many you can stuff into a processor.
As I wrote in my commentary a couple of weeks ago, GPUs, beyond their traditional visualization duties, can offer an extra dimension to general-purpose computing by providing high-performance, vector-type data parallelism in a commodity package. Workloads as diverse as genomics research, seismic analysis, options pricing, and image and signal processing are just some of the applications that could take advantage of graphics engines. And with the AMD-ATI merger, interest in GPUs for high performance computing has taken off.
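To make "vector-type data parallelism" concrete, consider a minimal, purely illustrative sketch of the kind of kernel a GPU runs across thousands of data elements at once. The syntax here is NVIDIA's CUDA C, used only as an example of a GPU programming interface; none of the vendors discussed in this article is tied to it.

// Illustrative only: a SAXPY-style kernel (y = a*x + y), a canonical
// example of the element-wise arithmetic a GPU handles well. Each GPU
// thread owns exactly one array element, so thousands run in parallel.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global element index
    if (i < n)                                      // guard the final partial block
        y[i] = a * x[i] + y[i];
}

The same element-wise pattern underlies the workloads listed above: options pricing sweeps, seismic traces and signal-processing filters all apply identical arithmetic across very large arrays.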
GPUs' ascent to power was driven largely by game enthusiasts, who continually demanded more realism and increased performance from these devices. At the same time, visualization became ubiquitous across almost all personal electronic devices, creating a broader need for graphics processing. As GPUs were asked to do more and fill more roles, their capabilities became more generalized.
David Orton, currently the CEO of ATI, has expressed the vision that GPUs will become even more capable over the next few years. Orton, who will probably continue to manage the ATI division after the merger, has said the company is working to improve data bandwidth and add double-precision (64-bit) floating-point capabilities to these devices, and he believes this can be done without sacrificing the efficiency of traditional graphics processing. Riding the same semiconductor technologies that are driving price/performance in CPUs, GPUs will continue to improve. A 500-gigaflop processor is apparently already in the works.
Supercomputing users are envisioning CPU-GPU hybrid systems offering boatloads of vector performance at a reasonable price. Vendors like PANTA Systems are already introducing such machines; PANTA's new platform, which can combine traditional Opteron modules with NVIDIA GPU modules, is profiled in this week's issue of HPCwire. On the software side, startups like PeakStream and RapidMind are building development platforms for general-purpose computing on GPUs (GPGPU), giving application programmers access to the raw computing power of these devices.
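As a rough illustration of what such development platforms abstract away, even a trivial GPU offload involves explicit device allocation, data transfer and launch configuration on the host. The sketch below is hypothetical: it assumes the saxpy kernel shown earlier and uses standard CUDA runtime calls, not the actual PeakStream or RapidMind interfaces, which are not shown here.

#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Hypothetical host-side driver for the saxpy kernel sketched earlier.
// This is the sort of plumbing a GPGPU development platform typically
// hides behind higher-level array types and operations.
int main()
{
    const int n = 1 << 20;                        // one million elements
    size_t bytes = n * sizeof(float);

    float *x = (float *)malloc(bytes);            // host arrays
    float *y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    float *d_x, *d_y;                             // device copies
    cudaMalloc((void **)&d_x, bytes);
    cudaMalloc((void **)&d_y, bytes);
    cudaMemcpy(d_x, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y, bytes, cudaMemcpyHostToDevice);

    int threads = 256;                            // threads per block
    int blocks = (n + threads - 1) / threads;     // enough blocks to cover n
    saxpy<<<blocks, threads>>>(n, 2.0f, d_x, d_y);
    cudaDeviceSynchronize();                      // wait for the GPU to finish

    cudaMemcpy(y, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);                  // expect 4.0

    cudaFree(d_x); cudaFree(d_y);
    free(x); free(y);
    return 0;
}

Hiding exactly this kind of boilerplate behind ordinary-looking array operations is what makes a GPU's raw flops approachable for application programmers, and it is the niche the GPGPU platform startups are chasing.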
At this year's Supercomputing Conference (SC06) in November, there are a number of presentations that focus on using GPUs for high performance computing (at SC05, they were barely mentioned). GPGPU now vies with the Cell BE, FPGAs and multi-core processors as one of the hottest topics in HPC. We'll be reporting on GPGPU developments during our upcoming LIVEwire coverage of SC06, next month.
AMD seems to be out in front of the GPGPU wave. Whatever the company's plans are for the ATI devices, one can assume they involve making these devices even more commonplace across computing platforms. Whether GPUs migrate from external devices to co-processors to on-chip cores remains to be seen. But however it plays out, AMD's plan to bring the GPU to mainstream computing is certainly a bold move.
And where does that leave Intel? The rumors a fortnight ago of an Intel-NVIDIA merger turned out to be false (at least for now). But the biggest chipmaker in the world must be at least considering the possibility of getting into the GPU game themselves. If these devices are destined to become as important to software as, for example, the FPU, Intel needs to be involved. Having played catch-up with AMD over the past few years, the execs at Intel must realize that they're not infallible.
On the other hand, AMD has plenty of challenges to contend with as well. For one thing, a lot of time and energy is going to be spent swallowing ATI -- a whole new company with a different technology, a different culture and a different market. And AMD itself is now playing catch-up with Intel in the realm of process technology. Intel has already shipped 40 million 65nm chips, while AMD has yet to ship a single one. Next month Intel will begin shipping its first quad-core x86 processor, beating AMD by at least six months. Finally, with this year's introduction of the new Intel Core architecture, AMD's processor performance lead has evaporated.
Because of Intel's resurgence, AMD's financial position has taken something of a beating lately. Over the past year, its stock has tanked; the current price is about half what it was in January. And as a result of renewed price competition from Intel, AMD's gross margin has been slipping, down to 51.4 percent this quarter, a drop of four points from a year ago.
In general, though, AMD is financially sound, and its strong product lineup will continue to vex Intel for the foreseeable future. The ATI merger is an additional annoyance, presenting Intel with an unusual, asymmetric challenge. It will be interesting to see how Intel responds. The fun has just begun.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - October 19, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.