Chances are, if you’ve been lurking around here for some time you’re already quite familiar with cloud computing in the HPC context. However, it’s easy to get lost in the minutiae that constitute those clouds—the management layers, virtualization, latency, and beyond…
To put things into perspective, we’re posting a decent overview (and a link to some free time on Azure, offered in tandem with the free Amazon trials) from a researcher focused directly on the practical side of running HPC applications on remote resources.
Rob Gillen, a cloud computing researcher with Planet Technologies out of Knoxville, Tennessee, spent a few moments on video laying out some of the core concepts behind scientific uses of HPC clouds.
In the brief video below, he carves out the concept of cloud as it applies to the technical and research computing space and provides a few details about how clouds signal the democratization of large-scale computing.
Gillen’s host asks him what HPC encompasses generally, to which he responds with a litany of examples. He notes, however, that cloud computing serves the “lower end of the HPC space,” working well for average researchers and academics who lack access to high-end machines.
Using Microsoft’s Windows Azure as a starting point, he offers the example of the genome sequence alignment tool BLAST, which is fronted by an Excel worksheet used to define a problem, fill in the details, and send it off for remote processing. This, he notes, is where the democratization layer comes in. For instance, a professor can use the actual BLAST tool in a class and, when the class is over, simply shut everything down and stop incurring charges.
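The economics behind that “shut down and stop paying” point can be sketched with some back-of-envelope arithmetic. The rates and figures below are hypothetical placeholders for illustration, not actual Azure pricing:

```python
# Back-of-envelope comparison of pay-per-use cloud time versus owning
# hardware. All dollar figures are hypothetical, not real Azure rates.

def cloud_cost(nodes, hours, rate_per_node_hour):
    """Cost of renting `nodes` instances for `hours`, then shutting down."""
    return nodes * hours * rate_per_node_hour

# A professor runs BLAST on 16 rented nodes for a 3-hour class session
# at an assumed $0.50 per node-hour:
session = cloud_cost(nodes=16, hours=3, rate_per_node_hour=0.50)
print(f"One class session: ${session:.2f}")  # $24.00

# Versus a small owned cluster (assumed $40,000) amortized over 3 years,
# which bills the department whether or not anyone is running jobs:
owned_per_year = 40_000 / 3
print(f"Owned cluster, per year: ${owned_per_year:,.2f}")
```

The point of the sketch is the shape of the cost curve, not the numbers: occasional classroom-scale runs cost tens of dollars, while owned hardware incurs its full cost regardless of utilization.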
Outside of the rapid-fire definitions, did you happen to wonder who you’d contract, right this moment, to build you a wall-to-wall dry-erase room like the one shown?