Top 10 Objections to GPU Computing Reconsidered

By Dr. Vincent Natoli

June 9, 2011

By any measure, the emergence of GPU computing in the HPC world over the last few years has been a remarkable and unprecedented development. It has all the markings of a disruptive technology. It began with a small cadre of adherents willing to brave the complications of shoehorning graphics languages like Cg and OpenGL’s GLSL into performing general-purpose calculations. It was championed by a hungry new player in HPC. It provides step-out, order-of-magnitude performance gains over conventional solutions in more cases than people would like to admit, and it’s just plain different.

As we approach the four-year release anniversary of NVIDIA CUDA, arguably the ground zero of the GPGPU movement, there are many who have flirted with, piloted and adopted the technology, but many more who are sitting on the sidelines for various reasons. In our work, we have come across many of the latter, and have thus compiled a list of the most common questions, concerns and assertions that preempt efforts to evaluate the technology.

What follows below is an attempt to list and address these objections, reconsidering them in light of recent developments and our best guess at how the future will unfold. To be clear, GPGPU is not the solution for all HPC applications but many have found the technology cost effective in domains as diverse as seismic imaging, electromagnetics, molecular dynamics, financial valuation, medical imaging and others.

1. I don’t want to rewrite my code or learn a new language

It’s true that you will have to rewrite your code to make use of GPUs. However, if you are developing a parallel CPU implementation from a mostly serial code, rewriting is inevitable; the only question is which platform you will target. Targeting multicore CPUs presents a three-tier model with parallelism at the process, thread and register levels, where you will use MPI, OpenMP/pthreads and SSE/AVX extensions respectively. Programming for GPUs using CUDA is no more difficult, and the advantage for both compute-bound and memory-bound codes can be significant, as we’ll see.

If you already have a parallel code, what benefit will you reap from a GPU port? Back-of-the-envelope calculations yield between 5x and 40x improvements in chip-to-chip comparisons on typical codes. These numbers match what we have seen anecdotally and in the many publications on GPU-enabled applications. The comparisons have remained fairly constant over the past few years, spanning two generations of Intel and NVIDIA products.

CUDA is an extension to C and relatively easily picked up by experienced coders. The model for parallel programming that will take us to the exascale is far from settled, however I believe that the eventual solution will look more like the data parallel CUDA model than the task parallel CPU alternatives. In previous HPCwire contributions I have expressed the opinion that CUDA, by forcing developers to consider the irreducible level of parallel work in their problems and mapping that to threads, is a better programming model for extensible parallelism, lending itself more naturally to scalability across multiple GPUs on a single node and across multiple nodes.

Steps in this direction are already underway with examples from the academic world with the excellent work on Global Memory for Accelerators (GMAC) being jointly developed by Universitat Politecnica de Catalunya and the IMPACT Research Group at the University of Illinois, and the commercial world with the unique scaling capabilities of the HUESPACE compute API offered by Oslo-based startup HUE and their sister company focusing in oil and gas, Headwave.

2. I don’t know what kind of performance to expect

HPC codes are either compute-bound or memory-bound. For compute-bound codes we compare the NVIDIA Fermi M2090 with the Intel Westmere. The Fermi has 512 cores running at about 1.3GHz while Westmere has 6 cores running at up to 3.4 GHz. The ratio of core-Hz yields a factor of about 32x. If your CPU code is effectively using SSE instructions, that might give you an additional boost of 4x on the CPU side reducing GPU efficacy to 8x (close to the ratio of peak GFLOPS).

For memory-bound codes we compare GPU main memory bandwidth of 177 GB/second to the 32 GB/second available per processor on CPU motherboards, for a ratio of about 5.5x. The bottom line: if your code is compute-bound, expect a GPU boost of between 5x for highly optimized SSE-implemented codes and 20x or more for typical codes. If your code is memory-bound, expect a boost of about 5x in chip-to-chip comparisons.

It’s helpful when considering a parallel solution to think about the problem in terms of marginal cost. If your code is memory bound then you should consider the least expensive option for adding bandwidth. One can add another GPU card and buy bandwidth for about $15 per GB/second. Alternatively one can add a separate node at a nominal cost of about $80 per GB/second. The latter solution also increases the compute footprint and adds another O/S instance.

For compute-bound codes a similar calculation can be performed to produce the marginal cost of gigaflops. For codes that have a mixture of compute-bound and memory-bound sections, i.e., most codes, the GPU does a better job of hiding the latency of the memory-bound sections by juggling thousands of threads, computing continuously on those threads whose data registers are filled and ready to go. By hiding memory latency the GPU is better able to approach its higher compute-bound performance limits.

3. The PCIe bandwidth will kill my performance

The argument against GPU computing based on PCIe bandwidth is really one of computational intensity. Computational intensity is variously defined but for these purposes let us say it is the number of floating point operations performed on each float of data transferred. There is a threshold of work which must be applied to every byte of data transferred to the GPU board in order for a GPU compute effort to be worthwhile.

For example, PCIe v2.0 x16 bandwidth totals about 6 GB/second in practice. To put that in perspective, it can fill the 6 GB of main memory on the M2090 board in about a second. At 665 double-precision gigaflops peak performance the M2090 is a floating point monster and can clearly do a lot of processing in the second it takes to fill the board memory. If, for example, you’d like the PCIe transfer time to be no more than 10 percent of the compute time, the M2090 must do thousands of floating point operations on the stored data before flushing it. GPU algorithms must therefore strive to retain data on the board for as long as possible.

In addition, CUDA allows the asynchronous overlap of PCIe data transfer with calculation. Clever use of this feature lets the developer hide some or all of the PCIe data transfer time behind computations. Algorithms that work well include time-stepping algorithms with local physics, such as Finite Difference Time Domain (FDTD), or classical molecular dynamics, where the N² particle-particle interactions lead to significant data reuse and high computational intensity.

Algorithms that are not effective in isolation on GPUs such as simple vector dot products have very low computational intensity. If problems are mapped across multiple GPUs they should minimize data transfer, for example sending only border data in a domain decomposition.

4. What about Amdahl’s law?

Amdahl’s law quantitatively expresses the fact that if you are going to accelerate part of a large serial code, by whatever means, parallelization or magic gnomes, you had better accelerate a significant portion or you will not realize much benefit. Amdahl’s law is often held forth as a talisman against any suggestion that code performance can be improved via parallelization.

Imagine, for example, you could reduce the runtime of a part of your serial code to zero. If that portion of the code accounted for 50 percent of the runtime then your overall speedup would only be 2x; if it accounted for 90 percent of runtime your speedup would approach 10x. For more realistic accelerations speedup results are lower.

The most effective counter to an Amdahl’s law objection to GPU computing is the observation that performance on modern architectures requires all codes to be written for massive parallelism reducing their serial portions to the minimum possible…there is no other game in town. This is true on CPU platforms as well as on GPU platforms.

The only HPC compute platforms we have going forward are massively parallel and Amdahl’s law applies equally well to CPU implementations. The real question is where are you going to run the parallel portion of your code…on the CPU or on the GPU?

5. What if NVIDIA goes away?

The history of HPC is littered with the carcasses of many a supercomputing company that tried to make the leap to parallel computing. A short list includes Thinking Machines, Maspar, KSR, Mitrion and many others. These often heroic efforts and the people behind them should be recognized for their vision and the role they played in the creative disruption that over time led to a greater collective understanding of what works and what doesn’t. We owe a debt of gratitude to them all.

NVIDIA, however, is not a supercomputing company. It’s a $5 billion corporation that makes most of its money from video cards and embedded processors that it sells to the huge avid market of PC gamers. Its relative independence from HPC is a strength, and if all HPC usage of GPU computing disappeared NVIDIA would still have a fine and profitable business. As long as there are caffeine-addicted adolescents willing to play virtual war games NVIDIA will be around. The fact is that NVIDIA’s staying power in the market is greater and more secure than that of long-time HPC icon Cray.

Further, NVIDIA has publicly posted its vision and roadmap for technology development roughly six years out. Reading between the lines, the roadmap is extremely aggressive, with ambitions to move the GPU from its ancillary role on an accelerator card to a more central role in the compute infrastructure. Along the way they have some powerful compute engines planned.

6. GPU boards don’t offer enough main memory for my problem.

GPU board memory is currently limited to 6 GB available on the M2090 and M2070. This can be particularly problematic for algorithms that require large chunks of memory in excess of that. This problem is somewhat alleviated by access to multiple cards available to a single node.

For example, the Dell C410x PCIe expansion chassis can hold up to 16 NVIDIA boards for a total of 96 GB. Domain-decomposing your problem across 16 separate GPUs can be a very effective way to side-step the memory limitation, and it works quite well for problems involving local physics that calculate properties on volumes and share data at surfaces.

The most problematic are algorithms that require essentially random access to large arrays, for example, huge hash tables or other problems that require random array lookups. The current generation of GPU boards are not effective solutions in this case, however, memory being relatively inexpensive and constantly improving in density, one can expect future generations of boards to come with increased amounts.

7. I will wait for more CPU cores / Knights Corner

More cores will help with compute-bound applications, however one needs to consider that as more cores are added to CPU dies, the same will be true for GPUs. Comparing roadmaps over the past two technology generations shows a persistent gap in compute and bandwidth between CPUs and GPUs that sometimes grows and sometimes wanes but is consistently non-zero. Expect this to continue going forward. For bandwidth-bound problems the situation is somewhat worse as it appears easier to add cores than to add bandwidth.

Intel’s plans for Knights Corner, announced just over a year ago, recognize the need for an x86 data parallel competitor to GPUs. Full details on Knights Corner are still unknown; however, using specs from the Knights Ferry prototype as a baseline, we might expect 50 or more 1.2 GHz cores, each with its own 512-bit vector processing unit and support for up to 4 threads, making it a formidable HPC competitor. Intel’s plans for the development model, pricing, release dates and other critical details are, at best, poorly understood and communicated at this time.

For Knights Corner to be a success it will have to conform to the commodity market arguments that have allowed the x86 architecture to dominate HPC computing. It must find a broad-based market outside the cloistered world of HPC scientists. Commodity graphics is a logical option for this broader market, but it is already well settled by NVIDIA and AMD.

8. I don’t like proprietary languages

Proprietary languages here refers to languages supported by one organization that may take development into an unknown or unwanted direction, or drop support altogether. CUDA falls into this category. The advantages of using CUDA are quite clear: 1) it can implement NVIDIA hardware-specific optimizations; 2) there is no committee making roadmap decisions; and 3) it supports new NVIDIA hardware features more quickly.

However, if proprietary languages are a show-stopper in your organization, then OpenCL is a perfectly good option for non-proprietary language development. OpenCL, supported by Apple, NVIDIA, AMD, Intel and many others, provides functional portability across multiple hardware platforms. I emphasize functional portability in contrast to performance portability, which is still lagging. OpenCL kernels are very similar to CUDA kernels, with more of the differences found in the host-based setup and launch code.

9. I’m waiting for the magic CPU-to-GPU code converter

There’s good news and bad news here. The good news is that CPU-to-GPU converters already exist. The bad news is that they are unlikely to produce code as performant as a native port by experts. Having no experience with these tools and an affiliation with a company that does significant technical code ports in native CUDA, the author is neither a reliable nor an unbiased source of data on such approaches. However, trial licenses for PGI Workstation from The Portland Group (PGI) and the CAPS HMPP workbench, two providers of such compilers, are easy enough to obtain, and this option can be put to the test. Up the road we may expect some standardization of these compiler directives as they are incorporated into OpenMP.

10. I have N codes but one IT budget

More colloquially this may be referred to as the “go big or go home” dilemma. Adding GPU-enabled nodes to the infrastructure of most organizations on a fixed IT budget requires a choice between fewer, more powerful heterogeneous GPU nodes and more, less powerful traditional CPU nodes. For economies of scale it makes sense for some organizations to have either 100 percent or 0 percent GPU-enabled nodes. This is particularly true of margin-based businesses working in competitive markets where cluster farms crunch data 24/7/365. Splitting the IT infrastructure complicates scheduling and requires, in the worst case, two versions of everything: cluster management scripts, scheduling, compilers, testing and validation, application code, etc.

Technology adoption in large commercial organizations must be done with keen attention to return on investment (ROI). The “go big or go home” argument expresses the anxiety that smart, thoughtful organizations give to this difficult problem in trying to quantify the known costs and speculate about the unknown costs of a technology shift. This last point as well as the previous nine, relate in some manner to either the investment (code development, people skills, new hardware, retraining) or the return (performance, scalability, power).

Each company must work out its unique ROI equation with fear and trembling, a healthy respect for the hurdles they will face and conservative margins. Using traditional financial analysis, capital investments should generate returns to shareholders commensurate with the organization’s weighted cost of capital and must be compared with other investment opportunities available to the company in the area of its domain expertise.

In summary, GPU computing has held on tenaciously to the HPC market, making notable gains in adoption over the last four years. The ten objections presented above are those most commonly voiced by individuals and organizations, and here we have tried to address them. As noted at the beginning, GPGPU is not the solution to all HPC problems, but organizations may be missing out on significant performance gains and cost savings by ignoring the technology for the wrong reasons.

Finally, organizations should not move into GPU computing with the idea that it is merely this year’s solution, but rather as the result of a deliberate strategy that finds it both cost effective today and, based on architecture, programming model and power arguments, the best path forward to the exascale.

About the Author

Vincent Natoli is the president and founder of Stone Ridge Technology. He is a computational physicist with 20 years experience in the field of high performance computing. He worked as a technical director at High Performance Technologies (HPTi) and before that for 10 years as a senior physicist at ExxonMobil Corporation, at their Corporate Research Lab in Clinton, New Jersey, and in the Upstream Research Center in Houston, Texas. Dr. Natoli holds Bachelor’s and Master’s degrees from MIT, a PhD in Physics from the University of Illinois Urbana-Champaign, and a Masters in Technology Management from the University of Pennsylvania and the Wharton School. Stone Ridge Technology is a professional services firm focused on authoring, profiling, optimizing and porting high performance technical codes to multicore CPUs, GPUs, and FPGAs.
