Heterogeneous Processing: Trite or Trend?

By Dr. Vincent Natoli

June 24, 2009

Heterogeneous processing or co-processing on chips other than the CPU is the most recent trend in HPC. To some extent there has always been a small fringe element pursuing this direction, but as recently as a few years ago, a colleague claiming to be coding a GPU for physics or chemistry calculations would have been politely avoided. Programming FPGAs in strange hardware languages was even more far-fetched.

In the past few years, however, there has been a rich diversity of efforts and support from major HPC vendors. This year brings at least two conferences focused on heterogeneous computing: The Symposium on Application Accelerators in HPC (SAAHPC09, U. Illinois-Urbana, July 28-30) and the CECAM workshop “Algorithmic Re-Engineering for Modern Non-Conventional Processing Units” (Lugano, Sept. 30-Oct. 2). Several other meetings are dedicated to one type or another of specific co-processing approaches.

The most prominent examples of heterogeneous elements and efforts in HPC include the rapidly growing GPU computing community, supported by NVIDIA and AMD/ATI, and reconfigurable computing on field programmable gate arrays (FPGAs). C-based APIs, such as NVIDIA’s CUDA, have opened up GPU computing to a much wider audience. Other examples include the IBM Cell chip and ASICs, such as those available from ClearSpeed, as well as soon-to-be-released chips with built-in heterogeneous elements, such as Intel’s Larrabee and AMD’s Fusion.

As more HPC practitioners adopt these platforms, many organizations are taking a second look and evaluating them for their own needs. Companies, university departments and government agencies want to know if heterogeneous processing is another fleeting trend or a real, sustainable technology transition driven by long-developing forces. The questions organizations are asking are: Will heterogeneous processing be an integral part of future HPC? Is it here to stay? To attempt an answer, it’s useful to consider the recent past of HPC, which has been characterized by a move to computing on large clusters of commodity chips.

Recent Trends in HPC

The share of TOP500 machines built on x86 processors grew from negligible in 1999 to roughly 90 percent in 2009, with the balance comprised mainly of IBM Power. The numbers for cluster architectures versus MPP and other designs show the same trend. The progression toward HPC on large clusters of commodity hardware has had many positive impacts, providing great price/performance ratios and a large pool of qualified programmers by pushing affordable and scalable technology down to the department level. While clock speeds increased reliably, HPC practitioners were willing to turn a blind eye to the deficiencies of commodity solutions, happy to type make on their new platforms and see a doubling of performance every two years. The party ended in 2004, however, when clock speeds began to stall and the problems of HPC commodity computing became more salient, especially the memory wall and the divergence problem.

The story of power dissipation and the saturation of CPU clock speed is by now well known in HPC. With more silicon area available and no way to push clock speed further, CPU vendors did what any clever vendor would do: provide more of their key product on die. At Intel it was called “the right hand turn,” and it began to show its effect in the market in 2004. Before 2004, data from the TOP500 list show that FLOPS performance improved at a healthy factor of 1.8 per year, with a factor of 1.4 from improved clock speed and 1.3 from simply having a bigger machine. Plotting machine size against time shows a clear inflection point around 2004, after which machines have mainly improved performance and kept on trend by using more and more cores for processing. The multicore transition started with two cores, is currently at four and six cores, and will soon move to eight cores and higher.
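Those two contributions compound multiplicatively; restated as a worked relation using the figures quoted above:

```latex
% Pre-2004 annual TOP500 performance growth, from the factors quoted above
\[
  \underbrace{1.4}_{\text{faster clocks}} \times \underbrace{1.3}_{\text{bigger machines}} \approx 1.8 \ \text{per year}
\]
```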

Problems with Commodity HPC

The truth, though, is that many (in fact, most) HPC codes don’t scale well past 16 processors, at least in their current form. In a world where performance can only be improved by using more cores, this is not great news. In short, commodity trends have led to great capacity solutions but not capability systems. Seymour Cray put it succinctly: "If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?" Clearly one of the seminal influences on HPC and supercomputing preferred oxen to chickens, but the HPC menu appears to favor poultry at the moment.

The recent percolation of heterogeneous or co-processing solutions in the market may be viewed as a response to this capacity/capability gap and as an opportunity to use the new silicon area offered by Moore’s law for something other than CPU cores. Once programmers understand that multi-level parallelism is required, or once they reach the scaling limits of their problem, adopting a novel platform to achieve more performance does not seem unreasonable.

The Landscape of Heterogeneous Processing

The landscape of heterogeneous HPC can be viewed as a continuum when parallelism is plotted along the horizontal axis and core complexity along the vertical (see figure below). At the extremes, CPUs are moderately parallel (2 to 4 cores) but highly complex, while FPGAs are massively parallel, with hundreds of thousands of very simple processing elements. GPUs and other heterogeneous elements fall in between. It’s interesting to note that multicore CPUs are moving down and to the right in this chart, with more, simpler cores, an evolutionary approach advocated by the Berkeley report on parallel computing, while FPGAs may be moving up and to the left by including more specialized hard cores such as DSP blocks. There is no reason to believe a priori that all applications will map optimally to a CPU architecture. Additionally, the relative complexity of writing code for each platform needs to be considered.
[Figure: core complexity versus parallelism for CPUs, GPUs and FPGAs]

Our experience has been that development times for CPU:GPU:FPGA are roughly 1:1.25:3 for the same algorithm. This assumes a full-up parallel CPU optimization using low-level parallelism (SSE) and high-level parallelism (MPI) on the CPU, a CUDA implementation on the GPU, and HDL coding for the FPGA, all by skilled programmers. When does it make sense to implement heterogeneous solutions? Key considerations are how well your algorithm maps to the platform and the operational use case.
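To make the relative effort concrete, here is a minimal sketch (illustrative only, not the author’s benchmark code) of the same SAXPY-style update, y = a*x + y, written two ways: once as a CPU loop vectorized by hand with SSE intrinsics, and once as a CUDA kernel with its host-side plumbing. All names and sizes are hypothetical.

```cuda
// saxpy_compare.cu -- illustrative sketch: the same y = a*x + y operation
// written with SSE intrinsics (CPU) and as a CUDA kernel (GPU).
// Host SSE code and device CUDA code can live in the same .cu file.
#include <cstdio>
#include <cstdlib>
#include <xmmintrin.h>      // SSE intrinsics for the CPU version
#include <cuda_runtime.h>

// CPU version: explicit low-level (SSE) parallelism, 4 floats per instruction.
void saxpy_sse(int n, float a, const float* x, float* y)
{
    __m128 va = _mm_set1_ps(a);
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);
        __m128 vy = _mm_loadu_ps(y + i);
        _mm_storeu_ps(y + i, _mm_add_ps(_mm_mul_ps(va, vx), vy));
    }
    for (; i < n; ++i)       // scalar cleanup for the remainder
        y[i] = a * x[i] + y[i];
}

// GPU version: spatial parallelism, one thread per element.
__global__ void saxpy_cuda(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    float *x = (float*)malloc(n * sizeof(float));
    float *y = (float*)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy_sse(n, 3.0f, x, y);            // CPU pass: y becomes 5.0 everywhere

    // GPU pass: allocate device buffers, copy data over, launch, copy back.
    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, x, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, n * sizeof(float), cudaMemcpyHostToDevice);
    saxpy_cuda<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
    cudaMemcpy(y, dy, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", y[0]);         // 8.0 after both passes
    cudaFree(dx); cudaFree(dy); free(x); free(y);
    return 0;
}
```

Even for this trivial operation, the GPU path adds device allocation, host-device copies and kernel-launch bookkeeping, and an FPGA version would further require expressing the computation as hardware pipelines in HDL; that is the flavor of effort gap the 1:1.25:3 ratio describes.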

Choosing Your Co-Processor

CPUs are obviously the default platform of choice, with high clock speeds, the ability to handle branching well and relatively easy coding. If your algorithm has a lot of branching and can’t be cast in a streaming or SIMD-type formulation, CPUs are your best choice. If your algorithm is a floating point, SIMD-type problem that can be divided into many independent threads doing the same operations on different data, GPUs may be a good choice. GPU programming is slightly more complicated than the full-up CPU optimization: it sometimes requires recasting the problem, and the cache, or shared memory, must be managed manually to achieve performance. If your problem is mainly integer or fixed point, can be cast into a streaming form, has non-traditional data representations, and is spatially parallel, that is, able to be written as many independent calculation pipes, FPGAs may be an excellent choice.
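As an illustration of what manually managed shared memory looks like in practice, here is a minimal, hypothetical CUDA sketch of a block-wise sum reduction. The explicit staging of data into __shared__ storage and the __syncthreads() barriers are the kind of bookkeeping a CPU cache handles implicitly; the names, block size and host-side finish are illustrative assumptions, not code discussed in the article.

```cuda
// reduce_shared.cu -- minimal sketch of manually managed shared memory:
// each block stages its slice of the input in fast on-chip shared memory,
// then cooperatively reduces it to a single partial sum.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void block_sum(const float* in, float* block_results, int n)
{
    __shared__ float cache[256];          // explicitly managed on-chip memory

    int tid = threadIdx.x;
    int gid = blockIdx.x * blockDim.x + threadIdx.x;

    // Stage one element per thread into shared memory (0 if out of range).
    cache[tid] = (gid < n) ? in[gid] : 0.0f;
    __syncthreads();

    // Tree reduction within the block, halving the active threads each step.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            cache[tid] += cache[tid + stride];
        __syncthreads();
    }

    // Thread 0 writes the block's partial sum back to global memory.
    if (tid == 0)
        block_results[blockIdx.x] = cache[0];
}

int main()
{
    const int n = 1 << 20, threads = 256, blocks = (n + threads - 1) / threads;
    float *h_in = (float*)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) h_in[i] = 1.0f;

    float *d_in, *d_partial;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_partial, blocks * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    block_sum<<<blocks, threads>>>(d_in, d_partial, n);

    // Finish the reduction on the host to keep the sketch short.
    float *h_partial = (float*)malloc(blocks * sizeof(float));
    cudaMemcpy(h_partial, d_partial, blocks * sizeof(float), cudaMemcpyDeviceToHost);
    double total = 0.0;
    for (int b = 0; b < blocks; ++b) total += h_partial[b];
    printf("sum = %.0f (expected %d)\n", total, n);

    cudaFree(d_in); cudaFree(d_partial); free(h_in); free(h_partial);
    return 0;
}
```

A production version would typically reduce the per-block partial sums on the device as well; the point here is only that the programmer, not the hardware, decides what lives in fast memory and when threads synchronize.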

Another consideration is the operational mode of your application. Is it under constant development, or does development proceed for a time and then give way to long operational periods in which the code runs essentially 24/7 in production mode? The latter situation justifies the cost of porting code to a heterogeneous platform and investing in the required hardware, since that cost will be balanced by higher performance and lower operational power consumption per flop.

The Need for Speed

There are a few ways that high performance is actually achieved, and they are nicely and symmetrically summarized by space and time considerations. (This is particularly satisfying for a physicist.) Performance is achieved temporally by 1) operating on data faster with a higher clock speed and 2) implementing temporal parallelism (deep pipelines) for concurrency in time; and spatially by 1) moving data faster and 2) implementing spatial parallelism (multiple parallel threads) for concurrency in space. Heterogeneous platforms differ in their relative strengths and weaknesses in one or more of these areas.

Summary

Seen in the context of the decided move to on-chip parallelism and the limits of computing on large clusters of commodity chips, heterogeneous co-processing fills a market gap that will not soon disappear. Developers today are confronted with multi-level parallelism that spans the domain, process, thread and even the bit level in their traditional CPU-based systems. Faced with this complexity and the requirements for better performance, they are considering alternate uses of silicon in non-traditional platforms, namely GPUs, FPGAs and ASICs, to achieve their goals.

About the Author
Dr. Natoli is the president and founder of Stone Ridge Technology. He is a computational physicist with 20 years of experience in the field of high performance computing. He worked as a technical director at High Performance Technologies (HPTi) and before that for 10 years as a senior physicist at ExxonMobil Corporation, at their Corporate Research Lab in Clinton, New Jersey, and in the Upstream Research Center in Houston, Texas. Dr. Natoli holds Bachelor’s and Master’s degrees from MIT, a PhD in Physics from the University of Illinois Urbana-Champaign, and a Masters in Technology Management from the University of Pennsylvania and the Wharton School. Stone Ridge Technology is a professional services firm focused on authoring, profiling, optimizing and porting high performance technical codes to multicore CPUs, GPUs, and FPGAs.

Dr. Natoli can be reached at [email protected].
