Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them


Tag: parallelism

The Case for a Parallel Programming Alternative

Jul 2, 2014

Cray engineers have been working on a new parallel computing language, called Chapel. Aimed at large-scale parallel computing environments, Chapel was designed with a focus on productivity and accessibility. The project originated from the DARPA High Productivity Computing Systems (HPCS) program, which challenged HPC vendors to improve the productivity of high-end computing systems.

ADEPT Emphasizes Energy-Efficient Parallelism

Aug 29, 2013

The EU-funded ADEPT project is exploring the energy-efficient use of parallel technologies by combining the talents of HPC and the embedded sector. The goal is to develop a tool for modeling and predicting the power and performance characteristics of parallel systems.

The Week in HPC Research

May 2, 2013

We’ve scoured the journals and conference proceedings to bring you the top research stories of the week. This diverse set of items includes the latest CAREER award recipient; the push to bring parallel computing to the classroom; HPC in accelerator science; the emerging Many-Task Computing paradigm; and a unified programming model for data-intensive computing.

XPRESS Route to Exascale

Feb 28, 2013

The Center for Research in Extreme Scale Computing (CREST) at Indiana University just got a $1.1 million grant to help further the move to exascale computing. Director Thomas Sterling is using some of the money to bolster IU’s research into highly parallel processing for HPC. He talks to HPCwire about his plans.

Revisiting Supercomputer Architectures

Dec 8, 2011

Additional performance increases for supercomputers are being confounded by three walls: the power wall, the memory wall and the datacenter wall (the “wall wall”). To overcome these hurdles, the market is currently looking to a combination of four strategies: parallel applications development, adding accelerators to standard commodity compute nodes, developing new purpose-built systems, and waiting for a technology breakthrough.

Compilers and More: Exascale Programming Requirements

Apr 14, 2011

In his third column on programming for exascale systems, Michael Wolfe shares his views on what programming at the exascale level is likely to require, and how we can get there from where we are today. He explains that it will take some work, but it’s not a wholesale rewrite of 50 years of high performance expertise.

Compilers and More: Expose, Express, Exploit

Mar 28, 2011

In Michael Wolfe’s second column on programming for exascale systems, he underscores the importance of exposing parallelism at all levels of design, either explicitly in the program or implicitly within the compiler. Wolfe calls on developers to express this parallelism in a language and in the generated code, and to exploit the parallelism efficiently and effectively at runtime on the target machine. He reminds the community that the only reason to pursue parallelism is higher performance.
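Wolfe’s expose–express–exploit framing can be illustrated with a minimal sketch (our illustration, not code from the column; the function names are hypothetical), here in Python using the standard-library concurrent.futures module: the parallelism is exposed as independent per-chunk computations, expressed through the executor’s map API, and exploited at runtime by scheduling the chunks across worker processes.

```python
# Illustrative sketch (not from Wolfe's column): summing squares over
# independent chunks to show expose -> express -> exploit.
from concurrent.futures import ProcessPoolExecutor

def chunk_sum(bounds):
    """One independent chunk of work -- the *exposed* parallelism."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_squares(n, chunks=4):
    # *Express* the parallelism: partition [0, n) into independent chunks.
    step = n // chunks
    bounds = [(i * step, (i + 1) * step if i < chunks - 1 else n)
              for i in range(chunks)]
    # *Exploit* it at runtime: the executor schedules chunks on cores.
    with ProcessPoolExecutor(max_workers=chunks) as pool:
        return sum(pool.map(chunk_sum, bounds))

if __name__ == "__main__":
    print(parallel_sum_squares(1000))  # same result as a serial sum
```

The point of the sketch is that the decomposition (expose) and the notation (express) are separate decisions from the runtime scheduling (exploit); a compiler or runtime can change the last without touching the first two.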

Compilers and More: Programming at Exascale

Mar 8, 2011

There are at least two ways exascale computing can go, as exemplified by the top two systems on the latest TOP500 list: Tianhe-1A and Jaguar. The Chinese Tianhe-1A uses 14,000 Intel multicore processors with 7,000 NVIDIA Fermi GPUs as compute accelerators, whereas the American Jaguar Cray XT-5 uses 35,000 AMD 6-core processors.

DOE Research Group Makes Case for Exascale

Feb 21, 2011

Exascale computing promises incredible science breakthroughs, but it won’t come easily, and it won’t come free.

The Weekly Top Five

Feb 3, 2011

The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the computing power on display at SC10’s Student Cluster Competition; the University of Portsmouth’s new supercomputer; IBM Watson’s SUSE Linux platform; multicore advances at North Carolina State; and Intel’s new approach to university funding.