September 23, 2010
The end is in sight for cheap GPU-based supercomputing, according to an International Science Grid This Week (iSGTW) opinion piece out this week. Author Greg Pfister argues that CUDA development has thus far been subsidized by high-volume sales of NVIDIA's mass-market, low-end GPUs.
But sometime next year we will see the arrival of chips that integrate the CPU and GPU on the same die. Intel's "Sandy Bridge" processors and AMD's Llano chip, the first of AMD's CPU/GPU "Fusion" designs, are both due out in 2011.
If mass-market consumers, who use graphics mostly for games and entertainment, can get what they need from these double-duty chips (as a referenced AnandTech article says they can), then where will the market for NVIDIA's low-end GPUs be?
This means the end of the low-end graphics subsidy of high-performance GPGPUs like Nvidia's CUDA. That subsidy is very significant, because the fixed costs of developing any chip family are very large; spreading them out over a high-volume low end makes a major difference, even if the high end has substantial revenue. So prices will rise, since GPGPUs will no longer have a huge price advantage over purpose-built HPC gear. How much will they rise? It's very hard to say, but I have one somewhat wobbly data point saying that the difference will be substantial.
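Pfister's amortization point is easy to see with a bit of back-of-the-envelope arithmetic. The sketch below is purely illustrative: the development cost, per-chip manufacturing cost, and sales volumes are invented numbers, not figures from the article or from NVIDIA.

    # Back-of-the-envelope sketch of the fixed-cost amortization argument.
    # All numbers are hypothetical and chosen only for illustration.
    def unit_cost(fixed_dev_cost, variable_cost, units_sold):
        """Per-unit cost once fixed development cost is spread over volume."""
        return fixed_dev_cost / units_sold + variable_cost

    FIXED = 500e6    # assumed cost of developing a chip family, in dollars
    VARIABLE = 75.0  # assumed per-chip manufacturing cost, in dollars

    # With a mass-market low end, tens of millions of GPUs absorb the fixed cost.
    with_subsidy = unit_cost(FIXED, VARIABLE, units_sold=20_000_000)

    # At HPC-only volumes, the same fixed cost lands on far fewer chips.
    hpc_only = unit_cost(FIXED, VARIABLE, units_sold=200_000)

    print(f"with mass-market volume: ${with_subsidy:,.0f} per chip")   # ~$100
    print(f"at HPC volume only:      ${hpc_only:,.0f} per chip")       # ~$2,575

Under these made-up numbers, the per-chip cost jumps by more than an order of magnitude once the low-end volume disappears, which is the heart of Pfister's argument about prices rising.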
The "wobbly data point" is arrived at by comparing a PS3 (mass market subsidized through volume and games) versus a custom built IBM HPC appliance, and extrapolating a 10 to 1 cost differential. Guess which one's cheaper?
Even if the HPC market is growing, as the data suggest, Pfister notes that high-end GPU revenue is no match for the dollars generated by consumer demand for GPUs.
It may seem a stretch, but one way to still tap at least some of that mass market would be for NVIDIA to come up with its own integrated graphics processor. In fact, it already has, as noted in a recent HPCwire blog. NVIDIA's Tegra line of processors, designed for mobile devices, has a heterogeneous architecture that pairs a low-power ARM processor with a GeForce GPU. Who's to say NVIDIA isn't planning another heterogeneous processor pairing a CUDA-class GPU with ARM CPUs? It could also co-opt a higher-end CPU by teaming with another chipmaker. Or NVIDIA could even start designing its own x86 processor and use it to create an integrated graphics chip worthy of a gaming-class desktop or a Blu-ray-capable notebook media machine. "NVIDIA-inside" anyone?
Full story at International Science Grid This Week