New Degrees of Parallelism, Old Programming Planes

By Nicole Hemsoth

August 28, 2014

Exploiting the capabilities of HPC hardware is now more a matter of pushing into deeper levels of parallelism than of adding more cores or overclocking. What this means is that the time is right for a revolution in programming. The question is whether that revolution should torch the existing landscape or deal “diplomatically” with the infrastructure already in place.

While some argue for a “rip and replace” approach to rethinking code for the new era of computational capability, others, including Intel’s Director of Software, James Reinders, are advocating approaches that blend the old and new—that preserve the order of existing programming models while still permitting major leaps ahead for parallelism.

To these ends, Reinders walked us through the latest release of Intel’s Parallel Studio XE 2015 this week, pointing to the addition of new explicit vector programming capabilities as well as the many features inside OpenMP 4.0, which is a significant part of the new release.

It’s not difficult to imagine the arguments in favor of holding steady with a consistent programming model for a manycore world, but few expect that slope will be simple to scale. At the heart of Intel’s approach to meshing the old and new are some key features inside OpenMP 4.0, which Reinders says still amount to “hidden charms” that the HPC world hasn’t fully explored yet. More specifically, he notes that the three key elements for exploiting new hardware capabilities—tasking, vectorization, and offload—are not just present in OpenMP 4.0; they work in unison and represent a turning point in how we will view the possibility of preserving programming models and code bases for the next generation of codes.
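To make those three elements concrete, here is a minimal, hypothetical sketch (not drawn from Intel’s materials) of how each maps onto an OpenMP 4.0 construct in C; the function and array names are illustrative placeholders.

    void process(float *x, int n);   /* hypothetical worker routine */

    /* Illustrative only: the three mechanisms as OpenMP 4.0 constructs. */
    void all_three(float *a, float *b, float *c, int n)
    {
        /* 1. Tasking: spawn independent units of work across cores. */
        #pragma omp parallel
        #pragma omp single
        {
            #pragma omp task
            process(a, n);
            #pragma omp task
            process(b, n);
            #pragma omp taskwait
        }

        /* 2. Vectorization: request SIMD execution of a loop. */
        #pragma omp simd
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];

        /* 3. Offload: run a region on a coprocessor such as a Xeon Phi. */
        #pragma omp target map(to: a[0:n], b[0:n]) map(from: c[0:n])
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            c[i] = a[i] * b[i];
    }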

“The question is, can we keep the challenges limited to scaling across cores and vectorization to evolve into this new era instead—can we make that set of challenges the programming problem to solve versus learning exotic languages or abandoning the strong code base we have?” Reinders asked. In answer, he pointed to new work that his team at Intel, along with partners around the world, is doing to advance this possibility via OpenMP 4.0, in addition to Intel’s own math libraries and tools.

The issue right now with OpenMP 4.0 isn’t a lack of the capabilities needed to pair new levels of parallelism with existing programming environments. It’s a matter of knowledge, training, and actual examples that show how the three goals of tasking, vectorization, and offload work inside the same box in this newest release. Reinders says that the most frequent questions he’s getting now revolve around what’s inside the standard in general; it’s still in a “kicking the tires” phase that he hopes the community can move past, especially in this era of 244-way parallelism (61 cores, four threads each) on the Xeon Phi.

He says some of the hidden charms of OpenMP 4.0 show up in the collapse clause, which essentially lets the compiler handle the tasking across the cores in addition to vectorization at the same time. In another scenario, it’s possible to offload a loop that handles both tasking and vectorization. In other words, users today are doing messy things out of necessity, managing each of these performance concerns with an individual approach, when they could potentially tackle all three problems at the same time. The benefit of this is profound, Reinders argued, but it’s still lost in the overwhelming early experimentation phase many are working through now.
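As a hedged sketch of what “all three at the same time” can look like in OpenMP 4.0, the combined construct below offloads a loop nest to a device, distributes the collapsed iterations across cores, and vectorizes the work in a single directive; the matrix name and dimensions are assumptions for illustration.

    /* Illustrative only: offload + core-level parallelism + SIMD in one line. */
    void scale(float (*m)[1024], int rows, float s)
    {
        #pragma omp target teams distribute parallel for simd collapse(2) \
                map(tofrom: m[0:rows])
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < 1024; j++)
                m[i][j] *= s;
    }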

The main addition to the release is explicit vector programming, which Reinders says is of increasing importance. Vectorizing offers profound performance improvements for HPC code and adds to overall efficiency, since the same work completes faster and the CPU can run at a lower power state. “The question these days is, how do we start getting codes to take full advantage of vector instructions in modern instruction sets? Languages like C and Fortran weren’t written with this in mind, so over the years there have been a lot of hacks to hint to the compiler to vectorize that aren’t so dissimilar to those we were doing to get more parallelism in the 80s and 90s.”
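Those compiler-specific hints look roughly like the following; this is a generic sketch of common practice, not code from Intel, and the kernel and function names are made up.

    /* Pre-OpenMP-4.0 style: per-compiler "ignore assumed dependences" hints. */
    void saxpy_hinted(float *y, const float *x, float a, int n)
    {
    #if defined(__INTEL_COMPILER)
        #pragma ivdep            /* Intel compilers */
    #elif defined(__GNUC__)
        #pragma GCC ivdep        /* GCC 4.9 and later */
    #endif
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

Each compiler spells the hint differently, which is exactly the portability problem the OpenMP 4.0 approach described next is meant to remove.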

Now, instead of going back and forth with the compiler to get it to auto-vectorize, the goal is to extend the languages so they still look like C and Fortran while letting the compiler know that you’re ready to vectorize a loop, even if there are constructs in the language itself that would otherwise block it. In OpenMP 4.0, this is achieved through #pragma omp simd, which is designed to minimize code changes when vectorizing code. It can be used to vectorize loops that the compiler normally wouldn’t auto-vectorize without all the hacks. The graphic below highlights the minimal code change required and the associated performance boost.

[Graphic: Intel benchmark results showing the minimal code change with #pragma omp simd and the resulting speedup]
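In the same spirit, here is a minimal, hypothetical example of the directive in use; blend(), apply(), and the buffer names are placeholders, and the aliasing situation is assumed for the sake of illustration.

    /* The compiler can't prove out and in don't alias, and the function call
       defeats dependence analysis, so it won't auto-vectorize this loop.
       The OpenMP 4.0 directives assert that vectorizing is safe anyway. */
    #pragma omp declare simd
    float blend(float a, float b);   /* compiler also emits a vector variant */

    void apply(float *out, const float *in, int n)
    {
        #pragma omp simd
        for (int i = 0; i < n; i++)
            out[i] = blend(in[i], out[i]);
    }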

“If you think about SSE, which we introduced more than a decade ago, it could do 2 double-precision numbers at a time or 4 single, and that was cool,” said Reinders. “Then AVX comes along, which could do 8 single or 4 double-precision floating point operations. But now we’re looking at Phi with AVX-512, and you can do 16 single-precision floating point computations or 8 double-precision. It’s an incredible difference.”

In other words, the hardware keeps finding ways to do more, and the difference between not vectorizing and vectorizing can be 16 to 1 with the Phi, for instance. “I’ve taught vectorization for a decade—it was one thing to get people excited about doubling code performance, but when it’s 16x it’s a big difference, too much to ignore.”
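The lane counts Reinders quotes fall straight out of register width divided by element width; here is a trivial stand-alone check (the ISA register widths are public facts, everything else is illustrative):

    #include <stdio.h>

    /* Lanes per instruction = register bits / element bits. */
    int main(void)
    {
        const int   bits[]  = {128, 256, 512};
        const char *names[] = {"SSE", "AVX", "AVX-512"};
        for (int i = 0; i < 3; i++)
            printf("%-8s %2d single-precision or %d double-precision per op\n",
                   names[i], bits[i] / 32, bits[i] / 64);
        return 0;
    }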

For those hoping to see what those performance improvements look like for other codes, see the graphic below, or find more details about the new Parallel Studio updates here: https://software.intel.com/en-us/intel-parallel-studio-xe/

[Graphic: additional Intel Parallel Studio XE 2015 performance results across other codes]
