Renting HPC: What’s Cloud Got to Do with It?

By Nicole Hemsoth

June 30, 2010

If virtualization is one of the biggest hurdles to clouds being able to handle HPC applications with supercomputer might and speed, why not simply remove the abstraction and ditch the virtualization altogether? The result would be very recognizable to many in research and academia; it’s the age-old “rent a cluster” paradigm. And what is wrong with it? Clarification — what is wrong with it now that the term “cloud” has all the gloss of a gleaming new datacenter and cluster rental as a concept (CRAC) seems far less appealing? Still nothing.

There are a handful of vendors offering time-tested HPC-as-a-Service to clients across the HPC application spectrum, but unfortunately, they too often trumpet these broadly useful services as cloud, which may not be the best approach. This is especially true if they’re trying to reach traditional HPC folks, who are often more likely to scoff at the cloud concept for their work (remember, their jobs might depend on waving away the fluff) than embrace it. Even for those in the HPC space who are open to the idea of the cloud for their needs, calling it cloud when it is really just renting time on a cluster might be misleading at best or off-putting at worst.

Here’s the only problem with chucking the virtualization — and it is not one that’s likely to keep anyone awake at night, except for maybe a few marketing managers at vendors offering “cloud” services. It’s not really the cloud that everyone recognizes if you remove the virtualization, is it? At least not by some definitions. But then again, getting into complicated definitions-based discussions isn’t really useful since this space is still evolving, and pinning it down (à la the grid days) will only serve to stifle development.

Fluff, But Not in a Cloud-Like Way

One of the greatest sources of frustration for those evaluating alternatives to buying their own clusters is determining if, and to what extent, cloud computing will enter the picture. And if the group selecting the new solution is locked into one definition of cloud or another, chances are they’re thinking about the virtualization aspect (and all the sinister performance-related issues that entails). These perceptions, which are true to varying degrees depending on what applications we’re talking about, are instilled in the minds of anyone who doesn’t already have a nice, pleasantly parallel set of applications to toss into the cloud.

What is ignored far too often, however, is the value or ranking of the core elements of cloud. While virtualization is central to many definitions, HPC has no reason to rely on the same criteria that suit the enterprise. For HPC, the cornerstone, the beacon, is availability. It is on-demand access. One of the most valuable and attractive aspects of cloud across the HPC spectrum is, without a doubt, availability of resources — and in a scalable fashion, no less. If HPC-as-a-Service eliminates the performance problems caused by a virtualized environment while lending flexibility, scalability and immediate access to resources, clouds start to seem like more trouble than they’re worth, at least in the context of a certain range of applications that are not cloud-ready to begin with yet are needed by shops that can’t plunk down many thousands for a cluster.

What HPC-as-a-Service Really Means

HPC-as-a-Service is not new. You have seen this before. But the technology that makes it possible is being refined to the point where it is going to eclipse the more comprehensive, virtualized side of cloud definitions.

Cycle Computing CEO Jason Stowe summarized the concept of HPC-as-a-Service beautifully, stating that “cloud HPC cluster users can start up clusters without having to worry about putting in place various applications, operating systems, security, encryption and other software.” Yes, this is something that can be done in a private, public or even hybrid cloud environment with relative ease — but only after the dues have been paid. After all, before entering the blessed realm of the cloud there’s some major work to be done. Major. You do not simply ship your data to Amazon and let them plug everything in for you, not if you’re a small enterprise with a relatively light load and certainly not if you have any type of HPC application. You no longer have a detailed view of your operating environment, nothing is tailored to your hardware, and you have to program against specific APIs to make sure that everything is provisioned and set up properly or your experiment with the cloud is going to fail. It is no easy task — at least not according to any end users that have been directly interviewed by this little lady. No matter the cloud structure, provider, or expected use scenario, it is not something one can simply walk into, and this is doubly true for HPC applications, of course, especially those that require some highly specialized behind-the-scenes manipulation to begin with.

Stowe continued that in the HPC-as-a-Service model, “Scientists can create clusters that automatically add servers when work is added and turn the servers off when the work is completed,” which means that once the calculations are done, the researcher simply clicks what amounts to a power-down button to put an end to the massive availability of resources. It is in this simplicity — this easy off-and-on capability, the on-demand essence — that HPC-as-a-Service could revolutionize how HPC is managed.
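To make the idea concrete, here is a minimal, purely illustrative sketch of that add-when-busy, power-down-when-idle loop. The JobQueue and NodePool objects are hypothetical in-memory stand-ins invented for this example, not Cycle Computing’s, Penguin’s or anyone else’s actual API; assume only that a provider lets you poll a job queue and provision or release nodes programmatically.

```python
# Illustrative sketch of "add servers when work is added, turn them off when
# the work is completed." JobQueue and NodePool are hypothetical stand-ins,
# not any vendor's real API.

class JobQueue:
    def __init__(self, jobs):
        self.jobs = jobs                       # pretend jobs finish between polls

    def pending_job_count(self):
        return len(self.jobs)

    def finish_some(self, n):
        self.jobs = self.jobs[n:]              # simulate n jobs completing


class NodePool:
    def __init__(self):
        self.nodes = 0

    def provision(self, n):
        self.nodes += n                        # spin up n rented nodes
        print(f"provisioned {n} node(s), now running {self.nodes}")

    def release(self, n):
        self.nodes -= n                        # the "power down button"
        print(f"released {n} node(s), now running {self.nodes}")


JOBS_PER_NODE = 4                              # assumed packing ratio

def autoscale(queue, pool):
    """Grow the pool while jobs wait; shrink it to zero when the queue drains."""
    while True:
        pending = queue.pending_job_count()
        wanted = -(-pending // JOBS_PER_NODE)  # ceiling division
        if wanted > pool.nodes:
            pool.provision(wanted - pool.nodes)
        elif wanted < pool.nodes:
            pool.release(pool.nodes - wanted)
        if pending == 0:
            break                              # nothing left running, nothing left to pay for
        queue.finish_some(pool.nodes * JOBS_PER_NODE)

autoscale(JobQueue(jobs=list(range(10))), NodePool())
```

The point is less the code than the billing model it implies: when the loop exits, so does the meter.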

According to Joshua Bernstein from Penguin Computing, a company that also provides virtualization-free HPC-as-a-Service (thus it is the rent-a-cluster paradigm, but with an environment that is easier to configure, visualize and manage), HPC-as-a-Service has enormous value for users for a number of reasons, among which simple economics sits at the heart. Bernstein says it is simple for customers to look at their current IT environment, whether it’s a few machines or close to nothing at all, and know right away whether they have the $150k+ to invest in a new cluster. That’s the easy part. Beyond the capex issue, there is also the question of whether they have the floor space to accommodate it and, more importantly, whether they have the systems administration expertise to keep it humming. Bernstein suggests that if you’re going to run a cluster at 30-50 percent capacity consistently over a year, or better, a three-year term, then you’re better off buying one. He notes, however, “it turns out that most of the time, customers don’t run it at that rate all the time — they’ll say they run it at 100 percent but if we ask them about what it was like in the previous month, it turns out that it was at almost nothing so over the course of three years, it seems most are utilized at around 20-30% of the time. So it’s much cheaper to rent than to buy.”
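Bernstein’s rent-versus-buy arithmetic is easy to sketch. In the back-of-the-envelope comparison below, the $150k purchase price and the three-year window come from his comments; the overhead multiplier and the hourly rental rate are placeholder assumptions (not quoted prices from Penguin or anyone else), chosen only to show how the break-even point moves with utilization.

```python
# Back-of-the-envelope rent-vs-buy comparison. The $150k capex and three-year
# term come from the article; OVERHEAD_FACTOR and RENTAL_RATE_PER_HOUR are
# placeholder assumptions, not real quotes.

CLUSTER_PRICE = 150_000          # capex cited in the article (USD)
OVERHEAD_FACTOR = 1.5            # assumed power/space/admin on top of capex
YEARS = 3                        # amortization term Bernstein mentions
HOURS = YEARS * 365 * 24

RENTAL_RATE_PER_HOUR = 20        # assumed cost to rent equivalent capacity (USD)

def cost_to_own():
    return CLUSTER_PRICE * OVERHEAD_FACTOR

def cost_to_rent(utilization):
    """Renting only bills for the hours actually used."""
    return HOURS * utilization * RENTAL_RATE_PER_HOUR

for utilization in (0.25, 0.50, 0.90):
    own, rent = cost_to_own(), cost_to_rent(utilization)
    cheaper = "rent" if rent < own else "buy"
    print(f"{utilization:.0%} utilization: own ${own:,.0f} vs rent ${rent:,.0f} -> {cheaper}")
```

With these assumed numbers the crossover falls near 43 percent utilization, inside the 30-50 percent band Bernstein cites; at the 20-30 percent utilization he says most customers actually achieve, renting wins comfortably.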

HPC-as-a-Service, in other words, might make more sense than actual clouds for a range of applications that might otherwise be thrown into the peril of a hostile, cloudy environment — and it makes it possible for smaller research centers and shops to actually compete without the investment. And herein lies the revolution that’s going on.

On-Demand Flexibility and Configurability Are Key

HPC-as-a-Service offerings such as SGI’s Cyclone, Cycle Computing, Penguin On-Demand (POD), or even those from smaller companies like Sabalcore are formidable foes to the mega-IaaS/PaaS providers seeking HPC converts. The problem is, they’re too often invoking the cloud name, which for this particular audience might not be a good idea.

Drop the standard definitions of cloud that too often hinge on virtualization and focus on one of the core elements that makes “cloud” attractive for HPC users. It all boils down to availability. It’s having resources on demand. This means no more waiting precious time for a job to run — throw in the ability to scale back down or shoot off the charts and you’ve got yourself a deal. Supposedly, anyway. Furthermore, as Penguin’s Joshua Bernstein noted of POD, companies are able to try before they buy a cluster, seeing what is possible before they get their own to unleash on society — again, viva la revolution.

The only problem right now is conceptually small but it’s a big deal from an adoption standpoint: when nearly everything has the “cloud” label slapped on it (which vendors can still get away with, since the definitions depend largely on each vendor’s marketing team’s creativity), it can be almost impossible to sort through one’s options without overlooking solutions that might be far more appealing than the standard public sense of cloud.

Perhaps HPC-as-a-Service providers should call themselves what they really are — and leave clouds out of it. For now.
