I spent a couple of days this week at the Supercomputing 08 conference in Austin, Texas, and I was excited to write this blog about how cloud computing might be relevant for high-performance computing. Then I read this article on HPCwire, written by Thomas Sterling and Dylan Stark of LSU, which does the subject just a tad more justice than I can do.
I still want to make a few extra points, though. The first is that I saw a presentation by John Storm, an executive director within Morgan Stanley’s Institutional Securities division, who talked about how financial services firms are using HPC. Two disparate comments by Storm caught my attention: (1) that Monte Carlo simulations comprise the majority (up to 70 percent) of HPC computations; and (2) that the law of diminishing returns rears its ugly head most notably around power bills. It’s not unheard of for banks to run Monte Carlo sims on Amazon EC2, so I wonder how many, after doing the energy math, actually are doing so, and how many more are seriously considering it.
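Part of why Monte Carlo workloads map so naturally onto a cloud like EC2 is that they are embarrassingly parallel: each trial is independent, so chunks of trials can be farmed out to separate instances and only the tallies need to come back. Here's a toy sketch of that structure (estimating pi rather than pricing a derivative; the function names and chunk counts are mine, purely illustrative):

```python
import random

def mc_chunk(n_samples, seed):
    """One independent chunk of Monte Carlo trials -- the kind of
    self-contained work unit that could run on its own EC2 instance.
    Counts random points in the unit square that land inside the
    quarter circle."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def mc_pi(n_chunks=8, samples_per_chunk=100_000):
    # In a cloud deployment each chunk would run on a separate
    # machine; here we just loop. Only the integer hit counts
    # need to be aggregated, so coordination overhead is tiny.
    total_hits = sum(mc_chunk(samples_per_chunk, seed=s)
                     for s in range(n_chunks))
    return 4.0 * total_hits / (n_chunks * samples_per_chunk)
```

The same shape holds for a bank's risk or pricing sims: scale out by adding chunks (instances), not by buying a bigger box, which is exactly the economics the cloud pitch rests on.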
Also on the power front, a Wednesday panel discussed the power struggles surrounding high-end supercomputers and large enterprise datacenters. There are about a dozen computers on the Top500 list using between 1.2 and 7 megawatts of power (the peak belonging to Cray’s new Jaguar supercomputer), while commercial datacenters tend to draw between 36 and 100 megawatts (and now occupy up to 200,000 square feet of space). I’m not suggesting the types of apps running on Jaguar would work in a cloud environment, but small-time or infrequent HPC users certainly could see significant capital and operational savings by utilizing an HPC-capable cloud like EC2 instead of buying their own system. Commercial users might note the cloud’s increasing readiness for them, too.
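To put those megawatt figures in rough dollar terms, here's a back-of-envelope sketch. The 7 MW and 100 MW numbers come from the panel discussion above; the electricity rate is my own assumption, not anything quoted at the conference:

```python
# Assumed rate -- hypothetical, for illustration only.
RATE_USD_PER_KWH = 0.10
HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_power_cost(megawatts):
    """Rough yearly electricity bill for a facility drawing a
    constant load, at the assumed rate above."""
    kwh_per_year = megawatts * 1_000 * HOURS_PER_YEAR
    return kwh_per_year * RATE_USD_PER_KWH

# A 7 MW supercomputer comes out around $6.1M/year in electricity
# alone at this rate; a 100 MW datacenter, around $87.6M/year.
```

Even if the assumed rate is off by a factor of two, the point stands: for an occasional HPC user, the power bill alone is an argument for renting cycles rather than owning them.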
God knows there are plenty of HPC solutions already leveraging EC2. Univa UD’s UniCluster software can run on EC2, and a company called Cycle Computing builds on-demand Condor pools for its customers with its CycleCloud service. Wolfram Research has enabled its Mathematica product to run on EC2, as well. Oh, and Amazon itself made life easier a few months back with its High-CPU instances. According to Amazon:
Instances of this family have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications.
- High-CPU Medium Instance: 1.7 GB of memory, 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each), 350 GB of instance storage, 32-bit platform
- High-CPU Extra Large Instance: 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1,690 GB of instance storage, 64-bit platform

One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.
More on HPC and cloud computing, in general, can be found here.
In case you missed it …
Be sure to check out these announcements, which could have big impacts:
- Platform is leading the way in bringing virtualization to HPC clusters with its new Virtual Computing Cluster management solution.
- Amazon is challenging Akamai and its wildly successful EdgePlatform with the newly announced Amazon CloudFront offering.
- GoGrid is partnering with parent company ServePath to offer a fully hosted hybrid cloud computing solution, Cloud Connect. ServePath provides the physical infrastructure, GoGrid provides the cloud infrastructure, and users get everything they need without (theoretically, of course) owning a single piece of hardware.