Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

November 5, 2009

Clouds Envelop HPC

Michael Feldman

I’ve looked at clouds from both sides now
From up and down and still somehow
It’s cloud’s illusions I recall
I really don’t know clouds at all

— Both Sides Now, Joni Mitchell

As we all now realize, when Joni Mitchell penned the words to that song in 1968, she was obviously lamenting the ambiguities that would characterize cloud computing 40 years hence. What a tech visionary she was!

Kidding aside, recalling those lyrics from my youth reminded me how slippery the definition of cloud computing is, and probably always will be. What was once promoted as a way to deliver utility computing has morphed into an all-encompassing IT phenomenon, swallowing servers, storage, networks, application hosting, cluster management software, service providers — you name it. It seems like the deeper the cloud paradigm penetrates into the IT universe, the more diffuse it becomes. But that’s probably as it should be. If everyone is expected to plug into the cloud, the whole ecosystem has to be involved.

HPC’ers, of course, have special needs, especially in regard to computational performance, low latency communication, and interoperability with strange legacy codes. But as you can see from our special HPC in the Cloud supplement this week, there are plenty of ideas on how this can happen, and the vendors are lining up. The inevitability of it all is starting to set in.

This week we saw three companies introduce new cloud (or at least cloud-ish) offerings for HPC’ers. Startup 3Leaf Systems unveiled its “Dynamic Data Center Server,” the company’s unique ASIC-plus-software solution for dynamically building big virtualized shared memory servers from a farm of x86-based nodes. Along the same lines, ScaleMP introduced its cloud solution to do essentially the same thing, but using only its vSMP software. Meanwhile, Platform Computing released its private cloud provisioning product, ISF Adaptive Cluster, as well as a capability to manage applications in private-public hybrid clouds.

The common thread in all three products centers on the dynamic creation of virtual environments suitable for running highly parallel codes. In essence, these solutions aim to replace the clustered silos model with a monolithic cluster infrastructure that can be carved up into spiffy HPC machines on a per job basis. It’s sort of the inverse of server partitioning used in traditional enterprise virtualization.
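To make the carve-up model concrete, here is a toy sketch of per-job allocation from a shared node pool. Everything in it is hypothetical — the `NodePool` class, the node counts, and the first-fit policy are invented for illustration and correspond to no vendor’s actual product:

```python
# Hypothetical sketch: carving virtual HPC machines out of one
# monolithic cluster, per job, instead of maintaining static silos.
# All names and numbers are invented for illustration.

class NodePool:
    """A monolithic pool of identical x86 nodes."""
    def __init__(self, total_nodes):
        self.free = set(range(total_nodes))

    def carve(self, job_id, nodes_needed):
        """Carve a virtual machine out of the pool for one job.
        Returns the set of node IDs granted, or None if the pool
        cannot satisfy the request right now."""
        if nodes_needed > len(self.free):
            return None  # job waits; there is no static silo to fall back on
        return {self.free.pop() for _ in range(nodes_needed)}

    def release(self, granted):
        """Return a finished job's nodes to the shared pool."""
        self.free |= granted

pool = NodePool(total_nodes=64)
vm_a = pool.carve("job-a", 48)   # one big machine, built on demand
vm_b = pool.carve("job-b", 32)   # only 16 nodes left, so this job waits
pool.release(vm_a)               # nodes return to the pool, not to a silo
```

The point of the sketch is the inversion the column describes: instead of partitioning one server into many virtual machines, many physical nodes are aggregated into one virtual machine per job.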

One last thought. There is a battle brewing between private, public, and hybrid clouds, and this will be played out in the HPC space too. For now there are plenty of good reasons to have all three environments, mostly having to do with security issues, IT culture, interoperability, and peak load costs. Most private cloud purveyors are building in support for public cloud access, usually via some sort of cloud bursting feature. Whether this capability turns out to be a driver for popularizing hybrid clouds or a gateway drug to get customers hooked on public clouds remains to be seen.
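Cloud bursting, as described above, is simple in concept: jobs run on the private cloud until it fills up, then overflow to a public provider. A minimal sketch of that dispatch decision (the capacity figure and function names are hypothetical, not taken from any of the products mentioned):

```python
# Hypothetical cloud-bursting policy: prefer the private cloud,
# burst to a public provider only on overflow. Numbers and names
# are invented for illustration.

PRIVATE_CAPACITY = 128   # cores available in-house

def dispatch(job_cores, private_in_use):
    """Decide where a job runs under a simple bursting policy."""
    if private_in_use + job_cores <= PRIVATE_CAPACITY:
        return "private"   # cheapest option; data stays in-house
    return "public"        # peak-load overflow: rent the extra capacity

print(dispatch(32, private_in_use=64))   # fits in-house -> "private"
print(dispatch(96, private_in_use=64))   # overflow -> "public"
```

In practice the decision would also weigh the security, interoperability, and cost concerns the column lists, but the peak-load case alone explains why private cloud vendors are building in the public escape hatch.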
