Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

February 15, 2013

A Tale of On-Demand Supercomputing

Ian Armas Foster

Supercomputing applications in the enterprise are driven by what Stillwater CEO Theodore Omtzigt considers “valuable economic activities of the business.” For example, FedEx and Exxon require extensive logistical modeling and optimization to reduce their operational costs by billions of dollars. As such, those two companies are compelled to invest billions of dollars in supercomputing and need not worry as much about cloud-based computing.

However, according to Omtzigt, while institutions that have only millions to invest in HPC would benefit in equal proportion (about 10 percent) from supercomputing applications, the overall payoff would not be worth it. Further, cloud-based applications carry too much latency to solve FedEx-scale logistical problems.

Stillwater’s answer was to combine the two, creating essentially a cloud-based supercomputer, or “on-demand supercomputer.”

“We were asked to design, construct, and deploy an on-demand supercomputing service for a Chinese cloud vendor,” Omtzigt said of the inception of Stillwater’s service. “The idea was to build an interconnected set of supercomputer centers in China, and offer a multi-tenant on-demand service for high-value, high-touch applications, such as logistics, digital content creation, and engineering design and optimization.”

The diagram below shows the system’s topology.

The design relies on a large network of interconnects that link storage servers to compute nodes. “So half the quad can fall away, and the system would still have full connectivity between storage and computes,” said Omtzigt. The idea is to build several levels of redundancy into the system so that operations can continue when certain servers fail or take longer than expected to finish their jobs.
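The article does not detail the fabric layout, but the redundancy property Omtzigt describes can be sketched as a graph-connectivity check. The sketch below assumes a hypothetical dual-plane fabric in which every storage server and compute node uplinks to two independent switch planes (all names are illustrative, not Stillwater's actual design):

```python
from collections import deque

def connected(adj, src, dst, dead):
    """BFS from src to dst over adjacency dict adj, skipping failed nodes in dead."""
    if src in dead or dst in dead:
        return False
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nbr in adj.get(node, ()):
            if nbr not in seen and nbr not in dead:
                seen.add(nbr)
                queue.append(nbr)
    return False

# Hypothetical topology: each endpoint links to both switch planes.
adj = {}
def link(a, b):
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

for endpoint in ["storage0", "storage1", "compute0", "compute1"]:
    link(endpoint, "planeA")
    link(endpoint, "planeB")

# With both planes up, storage reaches compute.
assert connected(adj, "storage0", "compute1", dead=set())
# Even if an entire plane falls away, connectivity between storage
# and compute survives -- the redundancy Omtzigt describes.
assert connected(adj, "storage0", "compute1", dead={"planeA"})
```

The same check generalizes: a scheduler could run it before placing a job, routing around servers that have failed or are running long.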

Keeping costs down is essential to maintaining such an infrastructure, a challenge addressed by placing the InfiniBand-based storage system close to the compute. “To lower the cost of the system, storage was designed around IB-based storage servers that plugged into the same infrastructure as the compute nodes…This is less expensive than designing a separate NAS storage subsystem, and it gives the infrastructure flexibility to build high-performance storage solutions,” said Omtzigt.

Virtualization would have been another way to balance demand and keep costs down, but the designers opted for bare metal provisioning to avoid the I/O latency hit.

According to Omtzigt, the resulting system, when tested, sustained 18 teraflops with a peak of 19.2, at a cost of $3.6 million. The Chinese vendor turns around and rents out time on the system: a full dual-socket server with 64GB of memory goes for about $5/hour. A content firm in Beijing is using 100 servers at a cost of $20,000 a month.
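Taking the article's figures at face value, a quick back-of-the-envelope check (assuming a 30-day month) shows how the Beijing firm's bulk arrangement compares with the $5/hour list rate:

```python
# Sanity check of the pricing figures quoted in the article.
list_rate = 5.0              # $/hour for a dual-socket, 64GB server
hours_per_month = 24 * 30    # assumed 30-day month

# One server for a full month at the on-demand list rate:
per_server_month = list_rate * hours_per_month
print(per_server_month)      # 3600.0 dollars

# The Beijing content firm: 100 servers for $20,000/month,
# which works out to an effective hourly rate well below list price,
# suggesting a bulk or committed-use arrangement.
effective_rate = 20_000 / 100 / hours_per_month
print(round(effective_rate, 2))   # 0.28 dollars/hour
```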

Ease of use, pay-per-use pricing, and lower setup and operating costs are compelling traits. The notion of a redundant, bare-metal supercomputing service will be something to watch in the months and years to come.