May 15, 2012
On Tuesday at the GPU Technology Conference (GTC), NVIDIA CEO Jen-Hsun Huang surprised the audience by revealing a new set of technologies that would launch the GPU maker into the cloud computing business. Starting later this year, NVIDIA, along with its partners, will be offering a server platform that can virtualize on-board GPUs for virtual desktop infrastructure (VDI).
The underlying hardware will be based on the newly hatched Kepler GPU technology, which has intelligence baked in to share processing across multiple remote users. NVIDIA has constructed a four-GPU server board, known as VGX, which is purpose-built for GPU virtualization.
The idea is to deploy them into servers, where they can be used to power graphics-heavy desktop apps like 3D design tools, simulations, and other visually oriented programs. The target audience is enterprise users -- designers, knowledge workers, and other PC power users -- who (like consumers) are migrating to thinner clients. And since there are hundreds of millions of these users out there, the market seems primed for such an offering.
The idea is also being driven by the number and heterogeneity of computing devices people are using -- everything from desktops and laptops to tablets and smartphones. Since a virtualized GPU in a datacenter operates at a level above all that architectural noise, it's in a position to smooth out that heterogeneity. And given that you can put a lot more GPU into a server than into most personal devices, these thin client machines will soon be able to tap into a lot more graphics power.
The VGX board, along with the supporting software (a GPU hypervisor and some configuration tools), can support up to 100 sessions at a time, depending upon the particular usage. And, according to NVIDIA at least, it can do so rather seamlessly, with the latency you would expect on a local client device.
But this is not designed for supercomputing in the cloud. The GPUs on the VGX board, although Kepler-grade, are fairly modest in performance. According to NVIDIA, they support only single precision floating point, and are even less powerful than the non-HPC Kepler GPUs (GK104) that went into the new GeForce GTX 680 discrete part. Nor can the virtualization technology aggregate the GPUs into a single super-GPU, either on the board or across servers.
NVIDIA is busy gathering OEMs and has signed up all the major server makers, including IBM, HP, Dell, Supermicro and Cisco, as well as cloud provider Amazon. On the client hypervisor side are Citrix, Microsoft, VMware, and Xen. If all goes as planned, these virtualized GPUs should start popping up in datacenters by the end of the year.