GPU Acceleration for the On-Demand Era

By Nicole Hemsoth

September 28, 2010

Last week during a chat at the GPU Technology Conference (GTC) in San Jose, Sumit Gupta, product manager for NVIDIA’s Tesla GPU Computing Group, suggested that GPUs in the cloud are just a natural evolution of HPC in the cloud.

It’s hard to argue with Gupta’s point: once an application is already in the cloud, the GPU enhances and accelerates it, lending researchers added capability, particularly in climate modeling, computational fluid dynamics, and a wide range of other application areas that are already cloud-ready.

A broader look at the basic relationship between HPC and clouds, in terms of their development and use, is a reminder that the development push is coming from different sides of the spectrum. Furthermore, as GPUs become more widely used in HPC, it makes sense that they would become more widely available in the cloud, although one should make the distinction that this “cloud” is more like “on demand,” since there is no virtualization involved.

From the Bottom Up

In the beginning, HPC was rooted in science and discovery, with weather modeling as the first “killer app,” before it began to trickle into the enterprise. What we’re seeing now with cloud represents a major shift as well, but this time the roots of innovation are moving from the enterprise up to the world of scientific computing. Instead of coming from climate research centers, the movement is being driven by the customer side of the computing equation; the virtualization of the office is now driving the virtualization and on-demand era for the scientific and technical computing world.

NVIDIA’s Tesla product manager stated that in his view, “the true promise of the cloud is being able to handle bursts,” and these bursts, not to mention the capacity itself, can be delivered by clouds, whether you define them as virtualized servers or simply as rented infrastructure. It is this capability, now available at lower cost on both an opex and capex level, that is driving growth in enterprise markets, not to mention the broader market for GPUs.

Gupta is seeing dramatic interest in GPU technology in a number of areas that are already primed for virtualized environments, including remote transcoding of video and big data analytics. While he admits these two areas are not a “slam dunk for GPUs,” he notes that “they are definitely accelerated by GPUs,” and customers are repeatedly asking about GPU acceleration. Oftentimes the applications in question do not fall neatly into the traditional category of HPC, but they are, without argument, high-performance applications that require extreme computing capability.

Transcoding and similar enterprise applications already have needs that are well met by cloud or on-demand computing, because those needs are very often “bursty” in nature and do not justify owning the massive machines required to crunch that level of data. Take Netflix, for example, a company with massive transcoding needs that must be met in a relatively short time frame, sometimes as fast as 24 hours. When demand for a title suddenly surges, the company needs to transcode that same video into over a hundred different formats to suit the many device types and resolution requirements, but that same vast need might not be present the next day, or, for that matter, the next week.
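The pattern described here, fanning a single piece of work out across many short-lived workers and then releasing the capacity, is simple to picture in code. Below is a minimal sketch in Python of that burst-driven fan-out; the transcode_on_gpu() stub and the format list are hypothetical placeholders, not any vendor’s actual transcoding API.

from concurrent.futures import ThreadPoolExecutor

# Hypothetical list of target formats; a real service would cover a hundred
# or more device/resolution combinations.
OUTPUT_FORMATS = [
    {"container": "mp4", "resolution": "1080p"},
    {"container": "mp4", "resolution": "720p"},
    {"container": "webm", "resolution": "480p"},
]

def transcode_on_gpu(source_path, fmt):
    # Stand-in for a real GPU-accelerated transcode dispatched to an on-demand
    # worker; here it only builds the output name so the sketch runs end to end.
    return "%s.%s.%s" % (source_path, fmt["resolution"], fmt["container"])

def handle_demand_burst(source_path):
    # Fan the single source file out across every target format at once; each
    # job would land on GPU capacity rented only for the duration of the burst.
    with ThreadPoolExecutor(max_workers=len(OUTPUT_FORMATS)) as pool:
        jobs = [pool.submit(transcode_on_gpu, source_path, fmt)
                for fmt in OUTPUT_FORMATS]
        return [job.result() for job in jobs]

print(handle_demand_burst("popular_title.mov"))

The point of the sketch is the shape of the workload: all of the cost is concentrated in the moment of the burst, which is exactly the profile that on-demand GPU capacity is meant to absorb.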

The convergence of GPU acceleration and on-demand access to vast computational resources, whether via the cloud or a dedicated on-demand GPU-accelerated resource, has significant value for the same types of customers who have already been able to benefit, even if only in theory at this early cloud stage, from on-demand access to resources. Those with “bursty” needs are numerous, but only recently has this need been matched with the types of HPC resources required to handle the application- and data-specific demands.

Among other parallels, enterprise and scientific computing are producing ever-larger sets of data to be analyzed and combed through, but again, a great deal of the innovation on this front is being propelled by the enterprise, since acceleration, meaning real-time results on such large volumes, yields immediate monetary benefit. For instance, Milabra, a company present at the Emerging Companies Summit at GTC, powers its photo recognition software with GPUs to make real-time connections between web-based images and advertising. The company’s application recognizes, for example, the shape of a toddler’s head and features and immediately passes this to an ad-serving platform, which can then serve an ad for toddler toys in microseconds.

The incentive powering real-time results on huge datasets is clear here: accelerating the time for the application to achieve its results has a perfect match in real-time revenue. The sooner the application can recognize the target and turn this around to the platform, the sooner funds flow. It’s a beautiful thing, and while it is certainly not rooted in scientific or academia-driven HPC, this blend of on-demand or cloud resources and accelerated computation has clear benefits for science and technical computing. The needs here are bursty, are reliant on real-time results, and, for a startup like Milabra, do not require NSF funding to get off the ground. Seeing a pattern here?

Many scientific users and computer scientists are invested in data analytics in the same way that companies with real-time concerns are, just for different purposes. They may not rely on the near-instant photorealistic rendering of complex models that Autodesk’s cloud-delivered software provides, and they may not have the same concerns as Netflix or a large e-commerce website, but the level of computation is extreme, and benefits are being derived not only from the GPGPU movement but from its availability to a new class of users.

A Natural Evolution?

While GPUs cannot be virtualized, some companies, including PEER1 Hosting and Penguin Computing, are calling their GPU-on-demand services cloud. It seems a waste of time to keep arguing about whether or not this is cloud (let’s just agree, once and for all, that the on-demand portion is the essence here), and these companies are poised for growth given the high cost of the hardware.

While GPU clusters are generally less expensive than comparable CPU-only systems, in an era when scientists and enterprises can accelerate their applications and take advantage of this on demand, it’s hard to find fault with the prediction that over the next few years GPGPU will find its way into some mainstream arenas in a far bigger way than we could have imagined a couple of years ago. Gupta suggests that finance and oil and gas companies are two of the biggest potential customer bases expressing interest in “cloud” GPU capabilities, but it does take them time to evaluate their options.

When asked whether more companies are hoping to offer GPU-as-a-Service, Gupta stated that NVIDIA has been talking to several cloud providers and that, as more mainstream applications become available, applications that used to run on workstations and carried major operational costs, there will be a greater move in that direction. Already, MATLAB’s and Autodesk’s forays into the on-demand era have proven rather successful, at least at this early point, so the future is wide open for other applications and vendors to step in and offer users the ability to tap into their cloud. There is nothing preventing this from happening now, after all.
 
