Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

Tag: parallel programming

Parallel Programming with OpenMP

Jul 31, 2014 |

One of the most important tools in the HPC programmer’s toolbox is OpenMP, a standard for expressing shared memory parallelism that was published in 1997. The current release, version 4.0, came out last November. In a recent video, Oracle’s OpenMP committee representative Nawal Copty explores some of the tool’s features and common pitfalls. Copty explains Read more…

Parallel Computing Trends

Jul 22, 2014 |

One of the most pressing issues facing the HPC community is how to attract and train the next generation of HPC users. The staff at Argonne National Laboratory are tackling this challenge head-on by holding an intensive summer school in extreme-scale computing. One of the highlights of the 2013 summer program was a Read more…

The Case for a Parallel Programming Alternative

Jul 2, 2014 |

Cray engineers have been working on a new parallel computing language, called Chapel. Aimed at large-scale parallel computing environments, Chapel was designed with a focus on productivity and accessibility. The project originated from the DARPA High Productivity Computing Systems (HPCS) program, which challenged HPC vendors to improve the productivity of high-end computing systems. To explain Read more…

An Easier, Faster Programming Language?

Jun 18, 2014 |

The HPC community has turned out supercomputers surpassing tens of petaflops of computing power by stringing together thousands of multicore processors, often in tandem with accelerators like NVIDIA GPUs and Intel Xeon Phi coprocessors. Of course, these multi-million dollar systems are only as useful as the programs that run on them, and developing applications that can Read more…

The Week in HPC Research

May 2, 2013 |

We’ve scoured the journals and conference proceedings to bring you the top research stories of the week. This diverse set of items includes the latest CAREER award recipient; the push to bring parallel computing to the classroom; HPC in accelerator science; the emerging Many-Task Computing paradigm; and a unified programming model for data-intensive computing.

Setting Up CUDA in the Cloud

Mar 26, 2013 |

This tutorial describes how to set up CUDA for parallel programming in the AWS cloud.

The Week in HPC Research

Mar 7, 2013 |

The top research stories of the week include novel methods of data race detection; a comparison of predictive laws; a review of FPGAs' promise; GPU virtualization using PCI Direct pass-through; and an analysis of the Amazon Web Services High I/O platform.

XPRESS Route to Exascale

Feb 28, 2013 |

The Center for Research in Extreme Scale Computing (CREST) at Indiana University just got a $1.1 million grant to help further the move to exascale computing. Director Thomas Sterling is using some of the money to bolster IU's research into highly parallel processing for HPC. He talks to HPCwire about his plans.

HPC Programming in the Age of Multicore: One Man’s View

Jan 14, 2013 |

At this June's International Supercomputing Conference (ISC'13) in Leipzig, Germany, Gerhard Wellein will be delivering a keynote entitled "Fooling the Masses with Performance Results: Old Classics & Some New Ideas." HPCwire caught up with Wellein and asked him to preview some of the themes of his upcoming talk and expound on his philosophy of programming for performance in the multicore era.