

Tag: parallelism

Parallel Programming with OpenMP

Jul 31, 2014

One of the most important tools in the HPC programmer’s toolbox is OpenMP, a standard for expressing shared-memory parallelism that was first published in 1997. The current release, version 4.0, came out in July 2013. In a recent video, Oracle’s OpenMP committee representative Nawal Copty explores some of the standard’s features and common pitfalls.
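
Copty’s talk is video-only, but the shared-memory model it covers is easy to sketch. The minimal C example below (our illustration, not taken from the talk) parallelizes a loop with OpenMP and uses a reduction clause to sidestep one of the most common pitfalls: a data race on a shared accumulator.

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void) {
        static double a[N];
        double sum = 0.0;

        /* Each iteration is independent, so OpenMP can split the loop
           across threads. Without reduction(+:sum), the concurrent
           updates to sum would be a classic data race. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = 0.5 * i;
            sum += a[i];
        }

        printf("sum = %f using up to %d threads\n", sum, omp_get_max_threads());
        return 0;
    }

Build with an OpenMP-capable compiler, e.g. gcc -fopenmp; the thread count can be controlled through the OMP_NUM_THREADS environment variable.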

Building Parallel Code with Hybrid Fortran

Jul 31, 2014

Over at the Typhoon Computing blog, Michel Müller addresses a topic that is top of mind for many HPC programmers: porting code to accelerators. Fortran programmers porting their code to GPGPUs (general-purpose graphics processing units) have a new tool at their disposal, called Hybrid Fortran. Müller shows how this open source framework can enhance portability without sacrificing performance or maintainability.
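
Hybrid Fortran works by annotating Fortran source, so a faithful example would be in Fortran; as a rough C analogue (our assumption for illustration, not Müller’s framework), the sketch below shows the same directive-based porting style using OpenMP 4.0 target offload, which keeps a single source that can run on either the host or an accelerator.

    #include <stdio.h>

    #define N (1 << 20)

    static float x[N], y[N];

    int main(void) {
        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* Map x and y to the device, run the SAXPY-style loop there,
           and copy y back. If no accelerator is available, the region
           falls back to the host, so one source serves both targets. */
        #pragma omp target map(to: x) map(tofrom: y)
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            y[i] = 2.0f * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);  /* expect 4.0 */
        return 0;
    }

The design point both approaches share is that the loop body stays ordinary serial code; only the annotations change when the target hardware does.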

Parallel Computing Trends

Jul 22, 2014

One of the most pressing issues faced by the HPC community is how to attract and train the next generation of HPC users. The staff at Argonne National Laboratory is tackling this challenge head-on by holding an intensive summer school in extreme-scale computing.

The Case for a Parallel Programming Alternative

Jul 2, 2014

Cray engineers have been working on a new parallel programming language called Chapel. Aimed at large-scale parallel computing environments, Chapel was designed with a focus on productivity and accessibility. The project originated in the DARPA High Productivity Computing Systems (HPCS) program, which challenged HPC vendors to improve the productivity of high-end computing systems.

ADEPT Emphasizes Energy-Efficient Parallelism

Aug 29, 2013

The EU-funded ADEPT project is exploring the energy-efficient use of parallel technologies by combining the talents of the HPC and embedded sectors. The goal is to develop a tool for modeling and predicting the power and performance characteristics of parallel systems.

The Week in HPC Research

May 2, 2013

We’ve scoured the journals and conference proceedings to bring you the top research stories of the week. This diverse set of items includes the latest CAREER award recipient; the push to bring parallel computing to the classroom; HPC in accelerator science; the emerging Many-Task Computing paradigm; and a unified programming model for data-intensive computing.

XPRESS Route to Exascale

Feb 28, 2013

The Center for Research in Extreme Scale Computing (CREST) at Indiana University has received a $1.1 million grant to help further the move to exascale computing. Director Thomas Sterling is using some of the money to bolster IU’s research into highly parallel processing for HPC. He talks to HPCwire about his plans.

Revisiting Supercomputer Architectures

Dec 8, 2011

Further performance increases for supercomputers are being stymied by three walls: the power wall, the memory wall, and the datacenter wall (the “wall wall”). To overcome these hurdles, the market is currently looking to a combination of four strategies: parallel applications development, adding accelerators to standard commodity compute nodes, developing new purpose-built systems, and waiting for a technology breakthrough.

Compilers and More: Exascale Programming Requirements

Apr 14, 2011

In his third column on programming for exascale systems, Michael Wolfe shares his views on what programming at the exascale level is likely to require, and how we can get there from where we are today. He explains that it will take some work, but not a wholesale rewrite of 50 years of high-performance expertise.

Compilers and More: Expose, Express, Exploit

Mar 28, 2011

In Michael Wolfe’s second column on programming for exascale systems, he underscores the importance of exposing parallelism at all levels of design, either explicitly in the program or implicitly within the compiler. Wolfe calls on developers to express this parallelism in a language and in the generated code, and to exploit it efficiently and effectively at runtime on the target machine. He reminds the community that the only reason to pursue parallelism is higher performance.
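
The three steps map naturally onto even a tiny kernel. In this C/OpenMP sketch (our illustration, not Wolfe’s code), the comments mark where the parallelism is exposed, expressed, and exploited.

    #include <stdio.h>
    #include <omp.h>

    #define N 2048

    static double A[N][N], x[N], y[N];

    int main(void) {
        for (int i = 0; i < N; i++) {
            x[i] = 1.0;
            for (int j = 0; j < N; j++) A[i][j] = 1.0 / (i + j + 1);
        }

        /* Expose: each row's dot product is independent of the others.
           Express: the pragma states that independence in the source. */
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < N; i++) {
            double dot = 0.0;
            for (int j = 0; j < N; j++)
                dot += A[i][j] * x[j];
            y[i] = dot;  /* each thread writes its own rows: no race */
        }

        /* Exploit: the OpenMP runtime distributes rows across the
           threads available on the target machine. */
        printf("y[0] = %f (up to %d threads)\n", y[0], omp_get_max_threads());
        return 0;
    }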