Kudos for CUDA

By Dr. Vincent Natoli

July 6, 2010

It’s been almost three years since GPU computing broke into the mainstream of HPC with the introduction of NVIDIA’s CUDA API in September 2007. Adoption of the technology since then has proceeded at a surprisingly strong and steady pace. Many organizations that began with small pilot projects a year or two ago have moved on to enterprise deployment, and GPU-accelerated machines are now represented on the TOP500 list starting at position two. The relatively rapid adoption of CUDA by a community not known for the rapid adoption of much of anything is a noteworthy signal. Contrary to the accepted wisdom that GPU computing is more difficult, I believe its success thus far signals that it is no more complicated than good CPU programming. Further, it more clearly and succinctly expresses the parallelism of a large class of problems, leading to code that is easier to maintain, more scalable and better positioned to map to future many-core architectures.

The continued growth of CUDA contrasts sharply with the graveyard of abandoned languages introduced to the HPC market over the last 20 to 25 years. Its success can largely be attributed to i) support from a major corporate backer as opposed to a consortium, ii) the maturity of its compilers, iii) adherence to a C syntax easily recognized by developers and iv) a more ephemeral feature that can best be described as elegance or simplicity. Physicists and mathematicians often use the word “elegant” as a high compliment to describe particularly appealing solutions or equations that neatly represent complex physical phenomena, where the language of mathematics succinctly and…well…elegantly captures symmetry and physics. CUDA is an elegant solution to the problem of representing parallelism in algorithms, not all algorithms, but enough to matter. It seems to resonate with the way we think and code, allowing an easier, more natural expression of parallelism beyond the task level.

HPC developers writing parallel code today have two enterprise options: i) traditional multicore platforms built on CPUs from Intel/AMD and ii) platforms accelerated with GPGPU options from NVIDIA and AMD/ATI. Developing performant, scalable parallel code for multicore architectures is still non-trivial and involves a multi-level programming model: inter-node parallelism handled with MPI, intra-node parallelism with OpenMP or pthreads (or MPI again), and register-level parallelism expressed via Streaming SIMD Extensions (SSE). The expression of parallelism in this multi-level model is often verbose and messy, obscuring the underlying algorithm. The developer is often left feeling as though he or she is shoehorning in the parallelism.
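
To make that verbosity concrete, consider a minimal sketch of the CPU-side model for something as simple as saxpy (y = a*x + y): OpenMP spreads the loop across cores while SSE intrinsics fill each core’s vector registers. The MPI layer across nodes is omitted, and the function name, alignment and divisibility assumptions are illustrative, not taken from any particular code.

    /* A minimal sketch of the multi-level CPU model: OpenMP across cores,
       SSE intrinsics within each core. Illustrative assumptions: n is a
       multiple of 4 and x, y are 16-byte aligned. */
    #include <xmmintrin.h>   /* SSE intrinsics */

    void saxpy_cpu(int n, float a, const float *x, float *y)
    {
        __m128 va = _mm_set1_ps(a);              /* broadcast a into all 4 lanes */
        #pragma omp parallel for
        for (int i = 0; i < n; i += 4) {         /* one SSE register = 4 floats */
            __m128 vx = _mm_load_ps(&x[i]);      /* load 4 elements of x */
            __m128 vy = _mm_load_ps(&y[i]);      /* load 4 elements of y */
            vy = _mm_add_ps(_mm_mul_ps(va, vx), vy);
            _mm_store_ps(&y[i], vy);             /* write 4 results back */
        }
    }

Even in this toy case the algorithm, one multiply-add per element, is buried under register widths, alignment rules and threading pragmas.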

The CUDA programming model presents a different, in some ways refreshing, approach to expressing parallelism. The MPI, OpenMP and SSE trio evolved from a world centered on serial processing. CUDA, by contrast, arises from a decidedly parallel world, where managing thousands of simultaneous threads is the norm. The programming model forces the developer to identify the irreducible level of parallelism in his or her problem. In a world that is rapidly moving to manycore, not multicore, this seems to be a better, more intuitive and extensible way to think about our problems.
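
By way of contrast, here is the same saxpy written as a CUDA kernel, again only a sketch rather than anyone’s production code: each thread owns exactly one element, the irreducible unit of parallelism in this problem.

    /* The same saxpy in CUDA: one thread per element. The launch
       configuration below is a common, but not mandatory, choice. */
    __global__ void saxpy_gpu(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   /* global element index */
        if (i < n)                                       /* guard the ragged tail */
            y[i] = a * x[i] + y[i];
    }

    /* Host-side launch, assuming x and y already reside in device memory:
       saxpy_gpu<<<(n + 255) / 256, 256>>>(n, a, x, y);  */

The loop has disappeared; what remains is the per-element operation itself plus a statement of how many threads to raise.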

CUDA is a programming language with constructs that are designed for the natural expression of data-level parallelism. It’s not hard to understand expressibility in languages, the idea that some concepts are more easily stated in specific languages. Computer scientists do this all the time as they create optimal structures to represent their data. DNA base pairs, for example, are neatly and compactly expressed as a sequence of 2-bit data fields, far better than a simple-minded ASCII representation. Our Italian exchange student was fond of pointing out the vast superiority of Italian over English for passionate argument.
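
As a small illustration of that expressibility point (my example, not drawn from any particular genomics code): the four bases map onto 2 bits each, so one byte carries four bases, a fourfold saving over ASCII.

    /* Pack DNA bases into 2-bit fields: A=00, C=01, G=10, T=11.
       A hypothetical helper for illustration only. */
    static unsigned char encode_base(char b)
    {
        switch (b) {
            case 'A': return 0;
            case 'C': return 1;
            case 'G': return 2;
            default:  return 3;   /* 'T' (and anything unrecognized) */
        }
    }

    /* Four ASCII bases -> one byte. */
    unsigned char pack4(const char b[4])
    {
        return (unsigned char)( encode_base(b[0])
                              | encode_base(b[1]) << 2
                              | encode_base(b[2]) << 4
                              | encode_base(b[3]) << 6);
    }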

Similarly, we have found in many cases that the expression of algorithmic parallelism in CUDA in fields as diverse as oil and gas, bioinformatics and finance is more elegant, compact and readable than equivalently optimized CPU code, preserving and more clearly presenting the underlying algorithm. In a recent project we reduced 3,500 lines of highly optimized C code to a CUDA kernel of about 800 lines. The optimized C was peppered with inline assembly, SSE macros, unrolled loops and special cases, making it difficult to read, to extract algorithmic meaning from, and to extend. By comparison, the CUDA code was cleaner and more readable, and ultimately it will be easier to maintain.

Commodity parallel processing began as a way to divide large tasks over multiple loosely connected processors. Programming models supported the idea of dividing problems into a number of smaller pieces of equivalent work. Over time those processors have grown closer to one another in terms of latency and bandwidth, first as multiprocessor nodes under a single operating system and then as multicore processor components of those nodes. Looking toward the future, we see only more cores per chip and more chips per node.

Even though our computing cores are more tightly coupled, our view of them is still very much from a top-down, task-parallel mindset: take a large problem, divide it into many small pieces, distribute them to processing elements and just deal with the communication. In this top-down approach, we must discover new parallelism at each level: domain-level parallelism for MPI, “for-loop” level for OpenMP, and data-level parallelism for SSE. What is intriguing about CUDA is that it takes a bottom-up point of view, identifying the atomic unit of parallelism and embedding it in a hierarchical structure, e.g., thread::warp::block::grid.
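
That hierarchy is directly visible from inside a kernel. The following sketch (with an illustrative name of my own choosing) shows how a thread can locate itself at each level.

    /* Where a thread sits in the thread::warp::block::grid hierarchy.
       warpSize is a CUDA built-in variable (32 on current hardware). */
    __global__ void where_am_i(int *out)
    {
        int lane   = threadIdx.x % warpSize;                  /* lane within warp  */
        int warp   = threadIdx.x / warpSize;                  /* warp within block */
        int block  = blockIdx.x;                              /* block within grid */
        int global = blockIdx.x * blockDim.x + threadIdx.x;   /* thread within grid */
        out[global] = (block << 16) | (warp << 8) | lane;     /* packed for inspection */
    }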

The enduring contribution of GPU computing to HPC may well be a programming model that peels us away from the current top-down, multi-level, task-parallel approach, popularizing instead a more scalable, bottom-up, data-parallel alternative. It’s not right for every problem, but for those that map well to it, such as finite difference stencils and molecular dynamics among many others, it provides a cleaner, more natural language for expressing parallelism. It should be recognized that the simpler, cleaner expression of these applications in code is a main driver of the relatively rapid adoption by commercial and academic practitioners. Further, there is no intrinsic reason scaling must stop at the grid or device level. One can easily imagine a generalization of CUDA on future architectures that abstracts one or more levels above the grid to accomplish an implementation across multiple devices, effectively aggregating global memory into one contiguous span, a sort of GPU/NUMA approach. If this can be done, then GPU computing will have made a great leap toward solving a key problem in parallel computing by reducing the programming model from three levels to one, a simpler and more elegant solution.
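
Today that level above the grid must be managed by hand. A hedged sketch of the hand-rolled version follows, reusing the saxpy_gpu kernel from earlier; the function name is illustrative and it assumes each device’s slice of x and y is already resident in that device’s memory.

    #include <cuda_runtime.h>

    /* Hand-partitioned multi-GPU saxpy: one contiguous slice per device.
       The generalization imagined above would subsume this loop. */
    void saxpy_multi_gpu(int n, float a, float **d_x, float **d_y)
    {
        int ndev = 0;
        cudaGetDeviceCount(&ndev);                 /* how many GPUs are visible */
        int chunk = (n + ndev - 1) / ndev;         /* ceiling of n / ndev */
        for (int d = 0; d < ndev; ++d) {
            int offset = d * chunk;
            int count  = (n - offset < chunk) ? n - offset : chunk;
            if (count <= 0) continue;              /* more devices than work */
            cudaSetDevice(d);                      /* route calls to device d */
            saxpy_gpu<<<(count + 255) / 256, 256>>>(count, a, d_x[d], d_y[d]);
        }
    }

Every line of that host loop is bookkeeping that the imagined GPU/NUMA abstraction would hide.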

About the Author
Dr. Natoli is the president and founder of Stone Ridge Technology. He is a computational physicist with 20 years of experience in the field of high performance computing. He worked as a technical director at High Performance Technologies (HPTi) and before that for 10 years as a senior physicist at ExxonMobil Corporation, at their Corporate Research Lab in Clinton, New Jersey, and in the Upstream Research Center in Houston, Texas. Dr. Natoli holds Bachelor’s and Master’s degrees from MIT, a PhD in Physics from the University of Illinois Urbana-Champaign, and a Master’s in Technology Management from the University of Pennsylvania and the Wharton School. Stone Ridge Technology is a professional services firm focused on authoring, profiling, optimizing and porting high performance technical codes to multicore CPUs, GPUs, and FPGAs.

Dr. Natoli can be reached at vnatoli@stoneridgetechnology.com.
