Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

Tag: parallel programming

Argonne’s Paul Messina on Training for Extreme-Scale

Mar 12, 2015

Paul Messina, director of science for the Argonne Leadership Computing Facility (ALCF), discusses the primary objectives, curriculum, and importance of the Argonne Training Program on Extreme-Scale Computing (ATPESC), now in its third year.

HPCwire: Can you give us an overview of the Argonne Training Program on Extreme-Scale Computing (ATPESC)?

Paul Messina: Absolutely, Tiffany. The ATPESC Read more…

Practical Advice for Knights Landing Coders

Feb 5, 2015

The National Energy Research Scientific Computing Center (NERSC) is on track to get its next supercomputer system, Cori, by mid-2016. While that’s more than a year away, it’s not too soon to start preparing for the new 30+ petaflops Cray machine, which will feature Intel’s next-generation Knights Landing architecture. So says Richard Gerber, Senior Science Read more…

Compilers and More: Is Amdahl’s Law Still Relevant?

Jan 22, 2015

From time to time, you will read an article or hear a presentation that states that some new architectural feature or some new programming strategy will let you work around the limits imposed by Amdahl’s Law. I think it’s time to finally shut down the discussion of Amdahl’s Law. Here I argue that the premise Read more…
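For readers who want the law itself in front of them while weighing such claims, Amdahl's Law can be stated compactly: if a fraction $p$ of a program's runtime can be parallelized across $N$ processors, the overall speedup is bounded by

$$S(N) = \frac{1}{(1 - p) + \dfrac{p}{N}} \le \frac{1}{1 - p}.$$

Even with $p = 0.95$, the speedup can never exceed $20\times$, no matter how many processors are added, which is why the serial fraction dominates the debate.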

Parallel Programming with OpenMP

Jul 31, 2014

One of the most important tools in the HPC programmer’s toolbox is OpenMP, a standard for expressing shared-memory parallelism first published in 1997. The current release, version 4.0, came out last November. In a recent video, Oracle’s OpenMP committee representative Nawal Copty explores some of the tool’s features and common pitfalls. Copty explains Read more…

Parallel Computing Trends

Jul 22, 2014

One of the most pressing issues faced by the HPC community is how to go about attracting and training the next generation of HPC users. The staff at Argonne National Laboratory is tackling this challenge head on by holding an intensive summer school in extreme-scale computing. One of the highlights of the 2013 summer program was a Read more…

The Case for a Parallel Programming Alternative

Jul 2, 2014

Cray engineers have been working on a new parallel computing language, called Chapel. Aimed at large-scale parallel computing environments, Chapel was designed with a focus on productivity and accessibility. The project originated from the DARPA High Productivity Computing Systems (HPCS) program, which challenged HPC vendors to improve the productivity of high-end computing systems. To explain Read more…

An Easier, Faster Programming Language?

Jun 18, 2014

The HPC community has turned out supercomputers surpassing tens of petaflops of computing power by stringing together thousands of multicore processors, often in tandem with accelerators like NVIDIA GPUs and Intel Xeon Phi coprocessors. Of course, these multi-million dollar systems are only as useful as the programs that run on them, and developing applications that can Read more…

The Week in HPC Research

May 2, 2013

We’ve scoured the journals and conference proceedings to bring you the top research stories of the week. This diverse set of items includes the latest CAREER award recipient; the push to bring parallel computing to the classroom; HPC in accelerator science; the emerging Many-Task Computing paradigm; and a unified programming model for data-intensive computing.

Setting Up CUDA in the Cloud

Mar 26, 2013

Tutorial describes how to implement CUDA and parallel programming in the AWS Cloud.
