Top 10 Objections to GPU Computing Reconsidered

By Dr. Vincent Natoli

June 9, 2011

By any measure, the emergence of GPU computing in the HPC world over the last few years has been a remarkable and unprecedented development. It has all the markings of a disruptive technology. It began with a small cadre of adherents willing to brave the complications of shoehorning graphics languages like Cg and OpenGL’s GLSL into performing general-purpose calculations. It was championed by a hungry new player in HPC. It provides step-out, order-of-magnitude performance gains over conventional solutions in more cases than people would like to admit, and it’s just plain different.

As we approach the four-year release anniversary of NVIDIA CUDA, arguably the ground zero of the GPGPU movement, there are many who have flirted with, piloted and adopted the technology, but many more who are sitting on the sidelines for various reasons. In our work, we have come across many of the latter, and have thus compiled a list of the most common questions, concerns and assertions that preempt efforts to evaluate the technology.

What follows below is an attempt to list and address these objections, reconsidering them in light of recent developments and our best guess at how the future will unfold. To be clear, GPGPU is not the solution for all HPC applications but many have found the technology cost effective in domains as diverse as seismic imaging, electromagnetics, molecular dynamics, financial valuation, medical imaging and others.

1. I don’t want to rewrite my code or learn a new language

It’s true that you will have to rewrite your code to make use of GPUs. However, if you are developing a parallel CPU implementation from a mostly serial code, rewriting is inevitable; the only question is which platform you will target. Targeting multicore CPUs presents a three-tier model with parallelism at the process, thread and register levels, where you will use MPI, OpenMP/pthreads and SSE/AVX extensions respectively. Programming GPUs with CUDA is no more difficult, and the advantage for both compute-bound and memory-bound codes can be significant, as we’ll see.

If you already have a parallel code, what benefit will you reap from a GPU port? Back-of-the-envelope calculations yield between 5x and 40x improvements in chip-to-chip comparisons on typical codes. This range matches what we have seen anecdotally and in the many publications on GPU-enabled applications, and it has remained fairly constant over the past few years, spanning two generations of Intel and NVIDIA products.

CUDA is an extension to C and is relatively easily picked up by experienced coders. The model for parallel programming that will take us to the exascale is far from settled; however, I believe the eventual solution will look more like the data-parallel CUDA model than the task-parallel CPU alternatives. In previous HPCwire contributions I have expressed the opinion that CUDA, by forcing developers to identify the irreducible level of parallel work in their problems and map it to threads, is a better programming model for extensible parallelism, lending itself more naturally to scaling across multiple GPUs on a single node and across multiple nodes.

Steps in this direction are already underway. From the academic world there is the excellent work on Global Memory for Accelerators (GMAC), jointly developed by Universitat Politecnica de Catalunya and the IMPACT Research Group at the University of Illinois; from the commercial world, the unique scaling capabilities of the HUESPACE compute API offered by Oslo-based startup HUE and their sister company focusing on oil and gas, Headwave.

2. I don’t know what kind of performance to expect

HPC codes are either compute-bound or memory-bound. For compute-bound codes, compare the NVIDIA Fermi M2090 with the Intel Westmere: the Fermi has 512 cores running at about 1.3 GHz, while Westmere has 6 cores running at up to 3.4 GHz. The ratio of core-Hz yields a factor of about 32x. If your CPU code uses SSE instructions effectively, that might give an additional 4x boost on the CPU side, reducing the GPU advantage to about 8x (close to the ratio of peak GFLOPS).

For memory-bound codes, compare the GPU main memory bandwidth of 177 GB/second to the 32 GB/second available per processor on CPU motherboards, a ratio of about 5.5x. The bottom line: if your code is compute bound, expect a GPU boost of between 5x for highly optimized, SSE-implemented codes and 20x or more for typical codes; if your code is memory bound, expect a boost of about 5x in chip-to-chip comparisons.
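These chip-to-chip ratios follow directly from the quoted specs and can be checked with a quick back-of-the-envelope script. This is a sketch using the figures above; the 4x SSE factor is the rough adjustment mentioned in the text, not a measured value:

```python
# Back-of-the-envelope chip-to-chip ratios from the figures quoted above.

# Compute-bound comparison: ratio of aggregate core-Hz.
gpu_core_hz = 512 * 1.3e9   # Fermi M2090: 512 cores at ~1.3 GHz
cpu_core_hz = 6 * 3.4e9     # Westmere: 6 cores at up to 3.4 GHz
compute_ratio = gpu_core_hz / cpu_core_hz   # ~32x
sse_adjusted = compute_ratio / 4            # ~8x if the CPU code uses SSE well

# Memory-bound comparison: ratio of main memory bandwidth.
gpu_bw = 177.0   # GB/s, M2090 board memory
cpu_bw = 32.0    # GB/s available per processor on a CPU motherboard
bandwidth_ratio = gpu_bw / cpu_bw           # ~5.5x

print(f"compute-bound advantage: {compute_ratio:.0f}x (SSE-adjusted: {sse_adjusted:.0f}x)")
print(f"memory-bound advantage:  {bandwidth_ratio:.1f}x")
```

The same two-line calculation can be redone for any CPU/GPU pairing as new parts are released.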

It’s helpful when considering a parallel solution to think about the problem in terms of marginal cost. If your code is memory bound, then you should consider the least expensive option for adding bandwidth. One can add another GPU card and buy bandwidth for about $15 per GB/second. Alternatively, one can add a separate node at a nominal cost of about $80 per GB/second; the latter solution also increases the compute footprint and adds another O/S instance.

For compute-bound codes a similar calculation produces the marginal cost of gigaflops. For codes with a mixture of compute-bound and memory-bound sections, i.e., most codes, the GPU does a better job of hiding the latency of the memory-bound sections by juggling thousands of threads and computing continuously on those threads whose data registers are filled and ready to go. By hiding memory latency, the GPU comes closer to its compute-bound performance limits.
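The marginal-cost argument can be sketched as follows. The card and node prices below are illustrative assumptions chosen to reproduce the rough dollars-per-bandwidth figures quoted above; they are not vendor quotes:

```python
# Marginal cost of adding memory bandwidth, two ways.
# Prices are illustrative assumptions, not vendor quotes.

gpu_card_price = 2655.0   # assumed street price of one GPU card (USD)
gpu_card_bw = 177.0       # GB/s of bandwidth added per card

node_price = 2560.0       # assumed price of one additional CPU node (USD)
node_bw = 32.0            # GB/s of bandwidth added per node

cost_per_gbs_gpu = gpu_card_price / gpu_card_bw   # ~$15 per GB/s
cost_per_gbs_node = node_price / node_bw          # ~$80 per GB/s

print(f"GPU card: ${cost_per_gbs_gpu:.0f} per GB/s")
print(f"extra node: ${cost_per_gbs_node:.0f} per GB/s")
```

Swapping in gigaflops for bandwidth gives the analogous marginal cost of compute for compute-bound codes.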

3. The PCIe bandwidth will kill my performance

The argument against GPU computing based on PCIe bandwidth is really one of computational intensity. Computational intensity is variously defined, but for these purposes let us say it is the number of floating point operations performed on each float of data transferred. There is a threshold of work that must be applied to every byte of data transferred to the GPU board in order for a GPU compute effort to be worthwhile.

For example, PCIe v2.0 x16 bandwidth totals about 6 GB/second in practice. To put that in perspective, it can fill the 6 GB of main memory on the M2090 board in about one second. At 665 gigaflops of peak double-precision performance, the M2090 is a floating point monster and can clearly do a lot of processing in the second it takes to fill board memory. If, for example, you’d like the PCIe transfer time to be no more than 10 percent of the compute time, the M2090 must perform thousands of floating point operations on each piece of stored data before flushing it. GPU algorithms must therefore strive to retain data on the board for as long as possible.
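The computational-intensity threshold can be derived from the numbers above. This sketch uses peak rather than sustained flops, so it is a best-case bound:

```python
# How many floating point ops per transferred double are needed so that
# PCIe transfer time is <= 10% of compute time on an M2090?

pcie_bw = 6e9        # bytes/s, practical PCIe v2.0 x16 bandwidth
board_mem = 6e9      # bytes, M2090 main memory
peak_flops = 665e9   # double-precision flops/s, M2090 peak

transfer_time = board_mem / pcie_bw       # ~1 s to fill board memory
compute_time = transfer_time / 0.10       # transfer should be <= 10% of compute
total_flops = peak_flops * compute_time   # flops available in that window

doubles = board_mem / 8                   # doubles transferred (8 bytes each)
flops_per_double = total_flops / doubles  # required computational intensity

print(f"required intensity: ~{flops_per_double:.0f} flops per double")
```

The result, several thousand operations per double, is why data must stay resident on the board as long as possible.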

In addition, CUDA allows the asynchronous overlap of PCIe data transfer with calculation. Clever use of this feature lets the developer hide some or all of the PCIe transfer time behind computation. Algorithms that work well include time-stepping algorithms with local physics, such as Finite Difference Time Domain (FDTD), and classical molecular dynamics, where the N² particle-particle interactions lead to significant data reuse and high computational intensity.
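The benefit of overlap can be modeled crudely: with perfect double buffering, the next batch copies over PCIe while the current batch computes, so per-batch time approaches the maximum of the two times rather than their sum. A minimal sketch of this idealized model (not actual CUDA stream code):

```python
# Idealized timing model for overlapping PCIe transfer with GPU compute.
def batch_time(transfer_s: float, compute_s: float, overlapped: bool = True) -> float:
    """Per-batch wall time: max() under perfect overlap, sum without it."""
    if overlapped:
        return max(transfer_s, compute_s)
    return transfer_s + compute_s

# With 1 s of transfer and 9 s of compute per batch, overlap hides the
# transfer entirely; without it, transfer adds 10% to the runtime.
print(batch_time(1.0, 9.0))                    # overlapped
print(batch_time(1.0, 9.0, overlapped=False))  # serialized
```

When compute time exceeds transfer time, as in the high-intensity algorithms named above, the transfer cost vanishes entirely in this model.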

Algorithms that are not effective in isolation on GPUs, such as simple vector dot products, have very low computational intensity. Problems mapped across multiple GPUs should minimize data transfer, for example by sending only border data in a domain decomposition.

4. What about Amdahl’s law?

Amdahl’s law quantitatively expresses the fact that if you are going to accelerate part of a large serial code by whatever means, parallelization or magic gnomes, you had better accelerate a significant portion or you will not realize much benefit. Amdahl’s law is often held forth as a talisman against any suggestion that code performance can be improved via parallelization.

Imagine, for example, that you could reduce the runtime of part of your serial code to zero. If that portion accounted for 50 percent of the runtime, your overall speedup would only be 2x; if it accounted for 90 percent, your speedup would approach 10x. For more realistic accelerations, the results are lower.
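The arithmetic is just Amdahl’s formula: with a fraction p of runtime accelerated by a factor s, overall speedup is 1/((1-p) + p/s). A quick sketch:

```python
# Amdahl's law: overall speedup when fraction p of runtime is sped up by s.
def amdahl_speedup(p: float, s: float) -> float:
    """p: fraction of runtime accelerated; s: speedup of that fraction."""
    return 1.0 / ((1.0 - p) + p / s)

# Even an infinite speedup of half the code only doubles overall performance.
print(amdahl_speedup(0.50, float("inf")))   # 2.0
print(amdahl_speedup(0.90, float("inf")))   # ~10x
print(amdahl_speedup(0.90, 20.0))           # a more realistic ~6.9x
```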

The most effective counter to an Amdahl’s law objection to GPU computing is the observation that performance on modern architectures requires all codes to be written for massive parallelism, reducing their serial portions to the minimum possible; there is no other game in town. This is true on CPU platforms as well as on GPU platforms.

The only HPC compute platforms we have going forward are massively parallel, and Amdahl’s law applies equally well to CPU implementations. The real question is where you are going to run the parallel portion of your code: on the CPU or on the GPU?

5. What if NVIDIA goes away?

The history of HPC is littered with the carcasses of many a supercomputing company that tried to make the leap to parallel computing. A short list includes Thinking Machines, Maspar, KSR, Mitrion and many others. These often heroic efforts and the people behind them should be recognized for their vision and the role they played in the creative disruption that over time led to a greater collective understanding of what works and what doesn’t. We owe a debt of gratitude to them all.

NVIDIA, however, is not a supercomputing company. It’s a $5 billion corporation that makes most of its money from video cards and embedded processors sold to the huge, avid market of PC gamers. Its relative independence from HPC is a strength, and if all HPC usage of GPU computing disappeared, NVIDIA would still have a fine and profitable business. As long as there are caffeine-addicted adolescents willing to play virtual war games, NVIDIA will be around. The fact is that NVIDIA’s staying power in the market is greater and more secure than that of long-time HPC icon Cray.

Further, NVIDIA has publicly posted its vision and roadmap for technology development roughly six years out. Reading between the lines, it is extremely aggressive, with ambitions to move the GPU from its ancillary role on an accelerator card to a more central role in the compute infrastructure. Along the way it has some powerful compute engines planned.

6. GPU boards don’t offer enough main memory for my problem

GPU board memory is currently limited to the 6 GB available on the M2090 and M2070. This can be particularly problematic for algorithms that require larger chunks of memory. The problem is somewhat alleviated by access to the multiple cards available to a single node.

Consider, for example, the Dell C410x PCIe expansion chassis, which can hold up to 16 NVIDIA boards for a total of 96 GB. Domain-decomposing your problem across 16 separate GPUs can be a very effective way to side-step the memory limitation, and it works quite well for problems involving local physics that calculate properties on volumes and share data at surfaces.

The most problematic cases are algorithms that require essentially random access to large arrays, for example huge hash tables or other problems requiring random array lookups. The current generation of GPU boards is not an effective solution here; however, with memory relatively inexpensive and constantly improving in density, one can expect future generations of boards to come with increased amounts.

7. I will wait for more CPU cores / Knights Corner

More cores will help compute-bound applications; however, one needs to consider that as more cores are added to CPU dies, the same will be true of GPUs. Comparing roadmaps over the past two technology generations shows a persistent gap in compute and bandwidth between CPUs and GPUs that sometimes grows and sometimes wanes but is consistently non-zero. Expect this to continue. For bandwidth-bound problems the situation is somewhat worse, as it appears easier to add cores than to add bandwidth.

Intel’s plans for Knights Corner, announced just over a year ago, recognize the need for an x86 data-parallel competitor to GPUs. Full details on Knights Corner are still unknown; however, using specs from the Knights Ferry prototype as a baseline, we might expect 50 or more 1.2 GHz cores, each with its own 512-bit vector processing unit and support for up to 4 threads, making it a formidable HPC competitor. However, Intel’s plans for the development model, pricing, release dates and other critical information are, at best, poorly understood and communicated at this time.

For Knights Corner to be a success it will have to conform to the commodity market arguments that have allowed the x86 architecture to dominate HPC computing. It must find a broad-based market outside the cloistered world of HPC scientists. Commodity graphics is a logical option for this broader market, but it is already well settled by NVIDIA and AMD.

8. I don’t like proprietary languages

Proprietary languages here refers to languages supported by a single organization that may take development in an unknown or unwanted direction, or drop support altogether. CUDA falls into this category. The advantages of using CUDA are quite clear: 1) it can implement NVIDIA hardware-specific optimizations; 2) there is no committee making roadmap decisions; and 3) it supports new NVIDIA hardware features more quickly.

However, if proprietary languages are a show-stopper in your organization, then OpenCL is a perfectly good option for non-proprietary development. OpenCL, supported by Apple, NVIDIA, AMD, Intel and many others, provides functional portability across multiple hardware platforms. I emphasize functional portability in contrast to performance portability, which is still lagging. OpenCL kernels are very similar to CUDA kernels, with more differences found in the host-based setup and launch code.

9. I’m waiting for the magic CPU-to-GPU code converter

There’s good news and bad news here. The good news is that CPU-to-GPU converters already exist. The bad news is that they are unlikely to produce code as performant as a native port by experts. Having no experience with these tools and an affiliation with a company that does significant technical code ports in native CUDA, the author is a poor and not unbiased source of reliable data on such approaches. However, a trial license for PGI Workstation from The Portland Group (PGI) and/or the CAPS HMPP workbench, two providers of such compilers, is easy enough to obtain, and the option can be put to the test. Down the road we may expect some standardization of these compiler directives as they are incorporated into OpenMP.

10. I have N codes but one IT budget

More colloquially, this may be referred to as the “go big or go home” dilemma. Adding GPU-enabled nodes to the infrastructure of most organizations on a fixed IT budget requires a choice between fewer, more powerful heterogeneous GPU nodes and more, less powerful traditional CPU nodes. For economies of scale it makes sense for some organizations to have either 100 percent or 0 percent GPU-enabled nodes. This is particularly true of margin-based businesses working in competitive markets where cluster farms crunch data 24/7/365. Splitting the IT infrastructure complicates scheduling and requires, in the worst case, two versions of everything: cluster management scripts, scheduling, compilers, testing and validation, application code, etc.

Technology adoption in large commercial organizations must be done with keen attention to return on investment (ROI). The “go big or go home” argument expresses the anxiety that smart, thoughtful organizations bring to this difficult problem in trying to quantify the known costs and speculate about the unknown costs of a technology shift. This last point, like the previous nine, relates in some manner to either the investment (code development, people skills, new hardware, retraining) or the return (performance, scalability, power).

Each company must work out its unique ROI equation with fear and trembling, a healthy respect for the hurdles they will face and conservative margins. Using traditional financial analysis, capital investments should generate returns to shareholders commensurate with the organization’s weighted cost of capital and must be compared with other investment opportunities available to the company in the area of its domain expertise.

In summary, GPU computing has held on tenaciously in the HPC market, making notable gains in adoption over the last four years. The ten objections presented above are those most commonly voiced by individuals and organizations, and here we have tried to address them. As noted at the beginning, GPGPU is not the solution to all HPC problems, but organizations may be missing out on significant performance gains and cost savings by ignoring the technology for the wrong reasons.

Finally, organizations should not move into GPU computing with the idea that it is merely this year’s solution, but rather as the result of a deliberate strategy that determines it to be both currently cost effective and the best solution going forward, based on the architecture, programming model and power arguments that will take us to the exascale.

About the Author

Vincent Natoli is the president and founder of Stone Ridge Technology. He is a computational physicist with 20 years experience in the field of high performance computing. He worked as a technical director at High Performance Technologies (HPTi) and before that for 10 years as a senior physicist at ExxonMobil Corporation, at their Corporate Research Lab in Clinton, New Jersey, and in the Upstream Research Center in Houston, Texas. Dr. Natoli holds Bachelor’s and Master’s degrees from MIT, a PhD in Physics from the University of Illinois Urbana-Champaign, and a Masters in Technology Management from the University of Pennsylvania and the Wharton School. Stone Ridge Technology is a professional services firm focused on authoring, profiling, optimizing and porting high performance technical codes to multicore CPUs, GPUs, and FPGAs.
