January 15, 2009
In my comments earlier this week on AMD's purpose-built "Fusion Render Cloud" supercomputer, I neglected to mention a possible downside for AMD's GPU business. In a nutshell, if this new supercomputer is going to be doing all the heavy rendering work on the server side, why do you need GPUs in the client?
The issue is probably more obvious when you realize that the supercomputer is being built with essentially the same "Dragon" chipset destined for high-end multimedia PCs. Specifically, it's the ATI Radeon HD 4800 GPU in the chipset that delivers all the nifty HD multimedia capabilities coveted by hard-core gamers and video enthusiasts. And it's not just for supercomputers and desktop machines. On Wednesday, AMD introduced a slightly less powerful offshoot of the HD 4800, the ATI Mobility Radeon HD 4000 series GPUs. These chips are aimed at the notebook market and promise to deliver "a home theatre-quality HD multimedia experience."
But if AMD's petaflop rendering monster (containing 1,000 Radeon HD 4800 GPUs) is truly able to deliver a cutting-edge multimedia experience to low-end PCs, then why buy the expensive box at all? And since the 1,000 GPUs in the supercomputer will probably be utilized more efficiently than GPUs in 1,000 separate PCs, AMD will need to manufacture fewer of them overall to deliver the same computational performance.
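As a back-of-the-envelope sketch of that sharing argument, the Python snippet below works out how many shared GPUs would cover the same aggregate workload as a fleet of individually owned ones. The utilization percentages are illustrative assumptions, not AMD figures.

```python
import math

# Hypothetical illustration of the cannibalization argument above.
# Utilization figures are assumptions for the sake of the example.

def cloud_gpus_needed(clients: int,
                      client_utilization: float,
                      cloud_utilization: float) -> int:
    """GPUs a shared render cloud needs to serve the same aggregate
    workload as `clients` PCs that each carry their own GPU."""
    aggregate_work = clients * client_utilization      # busy-GPU equivalents
    return math.ceil(aggregate_work / cloud_utilization)

# If 10,000 gamers each keep a discrete GPU busy 10% of the time on average,
# a cloud running its boards at 80% utilization covers the same load with
# 1,250 GPUs instead of 10,000 -- fewer chips sold overall.
print(cloud_gpus_needed(10_000, 0.10, 0.80))   # -> 1250
```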
Right now AMD is probably more focused on the upside of the Render Cloud. Since any device smaller than a notebook (netbooks, mobile phones, etc.) is not likely to house a CD/DVD drive for multimedia -- not to mention a discrete graphics processor -- the mobile market presents a natural opportunity for HD streaming, without the threat of cannibalizing current GPU revenue.
On the other hand, multimedia notebooks and desktops could get caught in the crossfire. Users might decide to jettison the pricey GPUs (and DVDs) in favor of streamed multimedia content for the sake of convenience. Not only could gamers stop buying DVDs at $29.99 a pop, but they also wouldn't have to upgrade their machines every time a new graphics processor came out promising the latest whiz-bang special effects. Instead, the cloud would get the upgrade, while the thin clients automagically pick up the new capabilities. So is there room for high-end GPUs on both the client and the server?
I guess the answer revolves around the pricing structure of content serving versus media ownership. Presumably content providers are planning to use some sort of subscription service to deliver games and other HD content from the Render Cloud. Since most online games and HD media on the Internet are currently available for free, there is a lot of pressure to keep prices low. But how low?
A model that already exists for something like this is Amazon's Video On-Demand, an online service in which digital video content can be purchased for lifetime ownership or merely rented for 24 hours. The content can be viewed offline (downloaded) or online (streamed), and can run on a PC in a standard Web browser or on other devices, even TVs. Using the Video On-Demand service, renting "The Dark Knight" for a day costs $3.99, while buying it costs $14.99. Alternatively, if you want to own the movie on DVD, the price is $20.99. With Video On-Demand, the DVD player -- the content renderer, in this case -- becomes superfluous.
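A little arithmetic on those quoted prices shows where renting stops being cheaper than owning, which is the calculation a would-be Render Cloud subscriber would be making. The sketch below only formalizes the prices cited above; the break-even logic is my own illustration, not Amazon's pricing model.

```python
import math

# Prices quoted above for "The Dark Knight" on Amazon Video On-Demand.
RENTAL = 3.99             # 24-hour rental
DIGITAL_PURCHASE = 14.99  # digital lifetime ownership
DVD = 20.99               # physical disc

def rentals_to_exceed(purchase_price: float, rental_price: float) -> int:
    """Number of 24-hour rentals after which buying would have been cheaper."""
    return math.ceil(purchase_price / rental_price)

print(rentals_to_exceed(DIGITAL_PURCHASE, RENTAL))  # 4 rentals (~$15.96) passes the $14.99 purchase
print(rentals_to_exceed(DVD, RENTAL))               # 6 rentals (~$23.94) passes the $20.99 DVD
```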
Obviously, content providers and chipmakers don't have the same interests. Game companies and other media developers will find a way to make money from their intellectual property, even when its form or distribution changes. But if AMD is going to serve up graphics computation on demand, it's risking making the client-side GPU hardware redundant. Of course, if the company can convince users that there is unique value for both client- and server-side GPUs, then problem solved. I guess that's why God made marketing departments.
Posted by Michael Feldman - January 15, 2009 @ 5:24 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.