New Degrees of Parallelism, Old Programming Planes

By Nicole Hemsoth

August 28, 2014

Exploiting the capabilities of HPC hardware is now more a matter of pushing into deeper levels of parallelism than of adding more cores or raising clock speeds. This means the time is right for a revolution in programming. The question is whether that revolution should torch the landscape or handle things “diplomatically” with the existing infrastructure.

While some argue for a “rip and replace” approach to rethinking code for the new era of computational capability, others, including Intel’s Director of Software, James Reinders, are advocating approaches that blend the old and new—preserving existing programming models while still permitting major leaps ahead in parallelism.

To that end, Reinders described the latest release of Intel’s Parallel Studio XE 2015 for us this week, pointing to the addition of new explicit vector programming capabilities as well as the many features inside OpenMP 4.0, a significant part of the new release.

It’s not difficult to imagine the arguments in favor of holding steady with a consistent programming model for a manycore world, but few expect that slope will be simple to scale. At the heart of Intel’s approach to meshing the old and the new are some key features inside OpenMP 4.0, which Reinders says still amount to “hidden charms” that haven’t been fully explored by the HPC world yet. More specifically, he notes that three key elements of exploiting new hardware capabilities—tasking, vectorization, and offload—are not just present in OpenMP 4.0; they work in unison and represent a turning point in how we will view the possibilities of preserving programming models and code bases for the next generation of codes.
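For concreteness, those three elements correspond to three directive families in OpenMP 4.0. The sketch below is a hedged illustration rather than an example from Intel; the function and variable names are invented, and it assumes an OpenMP 4.0-capable compiler with an attached device such as a Xeon Phi.

#include <omp.h>

/* The three pillars, each as its own OpenMP 4.0 construct.
   All names here are invented for illustration. */
void pillars(int n, float a, const float *x, float *y, float *scratch)
{
    /* 1. Tasking: independent units of work scheduled across cores. */
    #pragma omp parallel
    #pragma omp single
    {
        #pragma omp task
        { for (int i = 0; i < n; i++) scratch[i] = x[i] * x[i]; }
        #pragma omp taskwait
    }

    /* 2. Vectorization: assert to the compiler that this loop is SIMD-safe. */
    #pragma omp simd
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];

    /* 3. Offload: run a region on an attached device such as a Xeon Phi. */
    #pragma omp target map(to: x[0:n]) map(tofrom: y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}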

“The question is, can we keep the challenges limited to scaling across cores and vectorization to evolve into this new era instead—can we make that set of challenges the programming problem to solve versus learning exotic languages or abandoning the strong code base we have?” Reinders asked. In answer, he pointed to new work his team at Intel, along with partners around the world, is doing to enhance this possibility via OpenMP 4.0, in addition to Intel’s own math libraries and tools.

The issue right now with OpenMP 4.0 isn’t a lack of capabilities for achieving new parallelism within existing programming environments. It’s still a matter of knowledge, training, and actual examples showing how the three goals of tasking, vectorization, and offload work inside the same box with this newest release. Reinders says the most frequent questions he’s getting now revolve around what’s inside the standard in general—it’s still in a “kicking the tires” phase that he hopes the community can move past, especially in this era of 244-way parallelism potential with the Xeon Phi (61 cores, each running four hardware threads).

He points to specific examples of OpenMP 4.0’s hidden charms, such as the collapse clause, which essentially lets the compiler handle tasking across the cores and vectorization at the same time. In another scenario, it is possible to offload and have a loop that addresses tasking and vectorization together. In other words, users are currently doing messy things out of necessity, managing each of these aspects of performance with individual approaches, instead of tackling all three problems at once. The benefit of this is profound, Reinders argued, but he said it is still lost in the overwhelming early experimentation phase many are working through now.
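As a rough sketch of what that convergence looks like (an illustration assuming an OpenMP 4.0 compiler; the array names and sizes are invented), a single combined construct can spread a collapsed loop nest across threads while vectorizing its iterations, and the same pattern can be pushed to a coprocessor with target:

#define N 1024
float a[N][N], b[N][N], c[N][N];

void add_on_host(void)
{
    /* One combined construct: the collapsed 2D iteration space is
       divided among threads (tasking across cores) and each thread's
       chunk is vectorized (SIMD), two problems handled by one line. */
    #pragma omp parallel for simd collapse(2)
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            c[i][j] = a[i][j] + b[i][j];
}

void add_on_device(void)
{
    /* The same pattern offloaded: target moves execution to the device,
       teams/distribute spread the work, and simd vectorizes it. */
    #pragma omp target map(to: a, b) map(from: c)
    #pragma omp teams distribute parallel for simd collapse(2)
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            c[i][j] = a[i][j] + b[i][j];
}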

The main addition to the release is explicit vector programming, which Reinders says is of increasing importance. Vectorizing offers profound performance improvements for HPC code, and it also adds to overall efficiency, since the same computation finishes faster and the CPU can run at a lower power state. “The question these days is, how do we start getting codes to take full advantage of vector instructions in modern instruction sets? Languages like C and Fortran weren’t written with this in mind, so over the years there have been a lot of hacks to hint to the compiler to vectorize, not so dissimilar to those we were doing to get more parallelism in the ’80s and ’90s.”
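Those hints have traditionally been compiler-specific. Two classic nudges, shown below in a hedged sketch that is not from the article, are the Intel compiler's #pragma ivdep and C99's restrict qualifier, both of which assure the compiler that loop iterations are independent:

/* Pre-OpenMP-4.0 vectorization "hacks": both assure the compiler that
   the pointers don't alias, so the loop is safe to vectorize. Neither
   hint is portable across all compilers; the function is invented. */
void saxpy(int n, float alpha, const float *restrict x, float *restrict y)
{
    #pragma ivdep /* Intel-specific: ignore assumed vector dependences */
    for (int i = 0; i < n; i++)
        y[i] = alpha * x[i] + y[i];
}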

Now, instead of going back and forth with the compiler to get it to auto-vectorize, the goal is to extend the languages so they still look like C and Fortran while letting the compiler know you’re ready to vectorize a loop, even if there are constructs in the language itself that would otherwise prevent it. In OpenMP 4.0, this is achieved through the #pragma omp simd directive, which is designed to minimize code changes when vectorizing. It can be used to vectorize loops that the compiler normally wouldn’t auto-vectorize without all the hacks. The graphic below highlights the minimal code change required and the associated performance boost.
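A minimal sketch of the directive in use (the loop below is an invented illustration, not Intel's example): one line asserts that the loop is safe to vectorize, and a reduction clause handles the accumulation across SIMD lanes, with no compiler-specific hints required:

/* OpenMP 4.0 explicit vectorization: the programmer, rather than the
   compiler's dependence analysis, declares the loop vectorizable. */
float dot(int n, const float *x, const float *y)
{
    float sum = 0.0f;
    #pragma omp simd reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += x[i] * y[i];
    return sum;
}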

[Graphic: Intel example showing the minimal code change for #pragma omp simd and the resulting performance gain]

“If you think about SSE, which we introduced more than a decade ago, it could do two double-precision numbers at a time or four single-precision—and that was cool,” said Reinders. “Then AVX comes along, which could do eight single- or four double-precision floating point operations. But now we’re looking at Phi with AVX-512, and you can do 16 single-precision floating point computations or eight double-precision. It’s an incredible difference.”

In other words, the hardware keeps finding ways to do more, and the difference between not vectorizing and vectorizing can be 16 to 1 with the Phi, for instance. “I’ve taught vectorization for a decade—it was one thing to get people excited about doubling code performance, but when it’s 16x it’s a big difference, too much to ignore.”
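The arithmetic behind those ratios is simply register width divided by element width. The short standalone program below (an illustrative sketch, not from the article) prints the progression:

#include <stdio.h>

int main(void)
{
    /* SIMD lane counts: register bits / element bits. */
    const char *names[]  = {"SSE", "AVX", "AVX-512"};
    const int   widths[] = {128, 256, 512};
    for (int i = 0; i < 3; i++)
        printf("%-8s %3d-bit: %2d floats or %d doubles per instruction\n",
               names[i], widths[i], widths[i] / 32, widths[i] / 64);
    return 0;
}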

For those hoping to see what those performance improvements look like for other code, see the graphic below or find more details about the new updates in Parallel Studio here: https://software.intel.com/en-us/intel-parallel-studio-xe/

[Graphic: additional Intel benchmark results showing vectorization gains across other codes]
