Tag: parallelism

Compilers and More: The Past, Present and Future of Parallel Loops

Apr 6, 2015

Let’s talk about parallel loops. In parallel computing, we’ve been designing, describing, implementing and using parallel loops almost since the beginning. The advantage of parallel loops (over almost all other forms of parallelism) is that the parallelism scales up with the data set size or loop trip count (number of iterations). So what exactly is a parallel Read more…
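
The advantage is easiest to see in code. Below is a minimal sketch of a parallel loop in C++ with OpenMP (my illustration, not an example from the article): the n independent iterations can be split across however many threads are available, so the exploitable parallelism grows with the trip count.

```cpp
#include <vector>
#include <cstdio>

int main() {
    const int n = 1000000;                // trip count: parallelism scales with n
    std::vector<double> a(n), b(n, 2.0);

    // Every iteration is independent, so the runtime may divide the
    // n iterations among the worker threads however it likes.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        a[i] = 3.0 * b[i];

    std::printf("a[0] = %g, a[%d] = %g\n", a[0], n - 1, a[n - 1]);
    return 0;
}
```

Compile with, e.g., g++ -fopenmp; without the flag the pragma is ignored and the loop simply runs serially.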

A Comparison of Heterogeneous and Manycore Programming Models

Mar 2, 2015

The high performance computing (HPC) community is heading toward the era of exascale machines, expected to be of unprecedented complexity and size. The community agrees that the biggest challenges to future application performance lie in efficient node-level execution that can use all the resources in the node. These nodes might be composed of Read more…
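
As one concrete illustration of a heterogeneous model (my choice of example; the article surveys several such models), OpenMP 4.0's target construct offloads a loop to an attached accelerator and falls back to the host if no device is present:

```cpp
#include <cstdio>

int main() {
    const int n = 1 << 20;
    static double x[1 << 20];            // static storage gives a stable address to map

    // Copy x to the device (e.g., a GPU or coprocessor), execute the
    // loop there in parallel, then copy the results back to the host.
    #pragma omp target map(tofrom: x[0:n])
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        x[i] = 2.0 * i;

    std::printf("x[42] = %g\n", x[42]);
    return 0;
}
```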

ROSE Framework Blooms Toward Exascale

Feb 12, 2015

One of the many ways that the Office of Advanced Scientific Computing Research (ASCR) supports the Department of Energy Office of Science facilities is by championing the research that powers computational science. A recent ASCR Discovery feature takes a look at how the DOE science community is preparing for extreme-scale programming. As supercomputers reach exascale Read more…

Practical Advice for Knights Landing Coders

Feb 5, 2015

The National Energy Research Scientific Computing Center (NERSC) is on track to get its next supercomputer system, Cori, by mid-2016. While that’s more than a year away, it’s not too soon to start preparing for the new 30+ petaflops Cray machine, which will feature Intel’s next-generation Knights Landing architecture. So says Richard Gerber, Senior Science Read more…

Scalable Priority Queue Minimizes Contention

Feb 2, 2015

The multicore era has been in full swing for a decade now, yet exploiting all that parallel goodness remains a prominent challenge. Ideally, compute efficiency would scale linearly with increased core counts, but that’s not always the case. With core counts set to proliferate across the computing spectrum, it’s an issue that merits serious attention. Researchers from Read more…
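
To see where the contention comes from, here is a minimal C++ sketch (mine, not the researchers' design) of the usual baseline: a priority queue guarded by one global lock, so every push and pop from every thread serializes on the same mutex. That single point of serialization is precisely what a scalable design tries to eliminate.

```cpp
#include <mutex>
#include <queue>
#include <optional>

// Baseline concurrent priority queue: a single global mutex means all
// threads contend on one lock, so throughput stops improving (and can
// even degrade) as cores are added.
template <typename T>
class LockedPriorityQueue {
    std::priority_queue<T> pq_;
    std::mutex m_;
public:
    void push(const T& v) {
        std::lock_guard<std::mutex> g(m_);
        pq_.push(v);
    }
    std::optional<T> pop() {             // returns std::nullopt when empty
        std::lock_guard<std::mutex> g(m_);
        if (pq_.empty()) return std::nullopt;
        T top = pq_.top();
        pq_.pop();
        return top;
    }
};
```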

Compilers and More: Is Amdahl’s Law Still Relevant?

Jan 22, 2015

From time to time, you will read an article or hear a presentation that states that some new architectural feature or some new programming strategy will let you work around the limits imposed by Amdahl’s Law. I think it’s time to finally shut down the discussion of Amdahl’s Law. Here I argue that the premise Read more…
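
For reference, the law being debated: if a fraction p of a program's runtime parallelizes perfectly across N processors while the remaining 1 − p stays serial, the overall speedup is

```latex
S(N) = \frac{1}{(1 - p) + p/N},
\qquad
\lim_{N \to \infty} S(N) = \frac{1}{1 - p}
```

So with p = 0.95, the 5 percent serial fraction caps the speedup at 20x no matter how many processors are thrown at the problem.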

Parallel Programming with OpenMP

Jul 31, 2014

One of the most important tools in the HPC programmer’s toolbox is OpenMP, a standard for expressing shared-memory parallelism first published in 1997. The current release, version 4.0, came out last November. In a recent video, Oracle’s OpenMP committee representative Nawal Copty explores some of its features and common pitfalls. Copty explains Read more…
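
A classic pitfall of the kind Copty discusses (my illustration; not necessarily her example) is a data race on a shared accumulator, which OpenMP's reduction clause is designed to fix:

```cpp
#include <cstdio>

int main() {
    const int n = 1000000;
    double sum = 0.0;

    // Racy version: 'sum' is shared, so concurrent '+=' updates from
    // different threads can be lost.
    //   #pragma omp parallel for
    //   for (int i = 0; i < n; ++i) sum += 0.5 * i;

    // Correct version: each thread accumulates into a private copy,
    // and the partial sums are combined when the loop ends.
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; ++i)
        sum += 0.5 * i;

    std::printf("sum = %f\n", sum);      // deterministic up to FP rounding
    return 0;
}
```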

Building Parallel Code with Hybrid Fortran

Jul 31, 2014

Over at the Typhoon Computing blog, Michel Müller addresses a topic that is top of mind for many HPC programmers: porting code to accelerators. Fortran programmers porting their code to GPGPUs (general-purpose graphics processing units) have a new tool at their disposal, called Hybrid Fortran. Müller shows how this open source framework can enhance portability without sacrificing performance or maintainability. From the blog (editor’s note: the site Read more…

Parallel Computing Trends

Jul 22, 2014

One of the most pressing issues faced by the HPC community is how to attract and train the next generation of HPC users. The staff at Argonne National Laboratory is tackling this challenge head-on by holding an intensive summer school in extreme-scale computing. One of the highlights of the 2013 summer program was a Read more…

The Case for a Parallel Programming Alternative

Jul 2, 2014

Cray engineers have been working on a new parallel computing language called Chapel. Aimed at large-scale parallel computing environments, Chapel was designed with a focus on productivity and accessibility. The project originated in the DARPA High Productivity Computing Systems (HPCS) program, which challenged HPC vendors to improve the productivity of high-end computing systems. To explain Read more…
