July 27, 2010

GPGPU Computing Demand Spurs Cloud Offering

by Michael Feldman

The world’s largest public GPGPU computing on-demand service was launched this week at the SIGGRAPH International Conference in Los Angeles. PEER 1 Hosting, a provider of IT infrastructure, has constructed a 128-GPU compute cloud that incorporates NVIDIA Tesla gear and mental images’ RealityServer 3D Web platform. The new service is aimed at customers who want to offload image rendering and technical computing workloads onto GPU-accelerated servers.

Although the GPU cloud can serve scientific apps, the SIGGRAPH announcement was timed to entice customers looking to host visualization work. In this case, the provided RealityServer software is used in a SaaS (Software-as-a-Service) fashion to distribute image-processing work across the cloud. This can entail applications such as CG rendering, medical or seismic imaging, and product design. And since RealityServer also ties into Web technologies such as HTML and PHP, it gives Web applications a path to GPU-accelerated services. For HPC-style technical computing, on the other hand, the RealityServer layer can be bypassed entirely, and the cloud is delivered as a straight IaaS (Infrastructure-as-a-Service) platform. Bioinformatics, financial analytics, and a wide range of scientific research applications that benefit from GPU acceleration are all fair game.

PEER 1’s GPU setup is housed at two locations — London, England, and Toronto, Canada — but can be accessed from anywhere in the world. Hardware-wise, the cloud is made up of S1070 Tesla servers and M2050-equipped x86 servers, in approximately a 50:50 ratio. The S1070 is a 4-GPU server that uses the older 10-series Tesla hardware and is paired with a traditional CPU server to funnel work to it. The M2050 is the Fermi-class 20-series Tesla module that is integrated into an x86 server and talks directly to the CPU within the same box.

According to Robert Miggins, PEER 1’s senior vice president of business development, the pricing model is based on dedicated service rather than renting GPU-hours in a virtual environment. The nominal rate for the S1070 infrastructure is about $500 per GPU per month, which works out to roughly $0.70 per GPU-hour. The new Fermi M2050-equipped servers are being offered at $800 to $900 per GPU per month (about $1.18 per GPU-hour). Miggins says the pricing will vary somewhat, depending on the CPUs, main memory capacity, and disks that are paired with the GPUs. Leasing GPUs on an annual contract will cost less than the monthly rate, while renting them on an hourly or daily basis is likely to cost more. In addition, if the customer opts for the RealityServer platform, that licensing cost is tacked on top of the GPU pricing, but in most cases it adds only about 10 percent to the total bill.
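The monthly-to-hourly conversion behind those figures can be sketched with a quick calculation, assuming a 30-day (720-hour) month of continuous use; the helper below is purely illustrative and not part of any PEER 1 offering:

```python
# Illustrative only: reproduce the article's rate conversion, assuming
# a 30-day month of round-the-clock dedicated use.
HOURS_PER_MONTH = 30 * 24  # 720 hours

def hourly_rate(monthly_rate_per_gpu: float) -> float:
    """Convert a per-GPU monthly rate to an approximate per-GPU-hour rate."""
    return monthly_rate_per_gpu / HOURS_PER_MONTH

# S1070 (10-series): $500/GPU/month
print(round(hourly_rate(500), 2))  # -> 0.69, i.e., about $0.70/GPU-hour

# M2050 (Fermi): $850/GPU/month, the midpoint of the $800-$900 range
print(round(hourly_rate(850), 2))  # -> 1.18, i.e., about $1.18/GPU-hour
```

The quoted $0.70 and $1.18 figures line up with this arithmetic, which also shows why hourly or daily rentals would price higher: the monthly rate amortizes the hardware over every hour of the month, utilized or not.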

Problem size ultimately dictates how many GPUs an application can take advantage of, but beyond that, most GPU computing software is not currently optimized to use more than a handful of graphics devices. For the PEER 1 cloud, most users are looking to grab 4 to 8 GPUs at a time, although they’ve seen some interest in 16, 32, and even 64 GPUs. Scaling apps to hundreds of GPUs is still mostly in the research arena.

Miggins says interest in using the 10-series S1070 GPU servers tends to come more from the traditional graphics side, while the Fermi GPU-equipped gear is more apt to appeal to technical computing customers, such as insurance firms, banks, bioinformatics providers, and oil & gas companies. User preferences are going to be driven mainly by PEER 1’s pricing model, which puts a cost premium on the more capable 20-series Fermi hardware. Since visualization apps, such as image rendering, don’t require Fermi goodies like ECC memory and double-precision performance, users with this type of work might as well go with the less expensive 10-series Teslas.

PEER 1’s 128-GPU cloud may be the biggest one out there, but it’s not the first. SGI’s Cyclone offers a GPU acceleration option, as does Penguin’s on-demand service. Application-specific GPU clouds are starting to appear as well, including the AMD Fusion Render Cloud from Supermicro and OTOY, announced in March. This one is built with the latest ATI GPU and Opteron CPU hardware, and, as its name implies, is intended to deliver HD games and video streams to Web devices, as well as serve as a platform for real-time image rendering. Private GPU clouds based on NVIDIA GPUs (and RealityServer) are even more numerous. These include mydeco.com (3D visualization for virtual furniture shopping), scenecaster.com (building 3D Facebook content), and luminova.net (professional design collaboration), among others.

All of which seems to point to a growing market for GPU computing on-demand. Even prior to the official rollout of the PEER 1 GPU cloud, the hosting provider was receiving a lot of inquiries from potential customers. These included users looking to host RealityServer-driven apps as well as more traditional GPU computing work. At this point, Miggins estimates that about two-thirds of the demand is coming from users interested in GPU computing on the bare infrastructure. He says they have already signed up a couple of paying customers and have two other trial customers taking the GPUs for a spin.

From PEER 1’s perspective, the need for managed GPU hosting is clearly there, and they expect to expand capacity accordingly as customer demand ramps up. Besides their partnership with NVIDIA, which is shuttling prospective customers to PEER 1, the company is hoping the official launch at SIGGRAPH will net additional business. With so few players offering GPU computing on-demand, PEER 1 may not have to work very hard to locate new customers. “I’m more worried about our ability to keep up with the demand than I am about locating where the demand is,” says Miggins.