Tag: parallel programming

Setting Up CUDA in the Cloud

Mar 26, 2013 |

This tutorial describes how to set up CUDA and parallel programming in the AWS cloud.

The Week in HPC Research

Mar 7, 2013 |

The top research stories of the week include novel methods of data race detection; a comparison of predictive laws; a review of the FPGA’s promise; GPU virtualization using PCI Direct pass-through; and an analysis of the Amazon Web Services High-IO platform.

XPRESS Route to Exascale

Feb 28, 2013 |

The Center for Research in Extreme Scale Computing (CREST) at Indiana University has received a $1.1 million grant to help further the move to exascale computing. Director Thomas Sterling is using some of the money to bolster IU’s research into highly parallel processing for HPC. He talks to HPCwire about his plans.

HPC Programming in the Age of Multicore: One Man’s View

Jan 14, 2013 |

At this June’s International Supercomputing Conference (ISC’13) in Leipzig, Germany, Gerhard Wellein will deliver a keynote entitled “Fooling the Masses with Performance Results: Old Classics & Some New Ideas.” HPCwire caught up with Wellein and asked him to preview some of the themes of his upcoming talk and expound on his philosophy of programming for performance in the multicore era.

OpenMP Takes To Accelerated Computing

Nov 27, 2012 |

OpenMP, the popular parallel programming standard for high performance computing, is about to come out with a new version incorporating a number of enhancements, the most significant being support for HPC accelerators. Version 4.0 will include the functionality that was implemented in OpenACC, the accelerator API that splintered off from the OpenMP work, as well as offer additional support beyond that. The new standard is expected to become the law of the land sometime in early 2013.

Adapteva Reaches Funding Goal for Parallella Project

Oct 29, 2012 |

Kickstarter investment model notches another high-tech success.

Supercomputing Education in Russia

Apr 4, 2012 |

The second year of the “Supercomputing Education” project in Russia has been completed. The idea for the project was presented to the President of Russia, Dmitry Medvedev, back in 2009. The work was immediately approved and scheduled for the 2010–2012 timeframe, with the implementation assigned to Lomonosov Moscow State University, the university that hosts the largest supercomputing center in Russia.

The Heterogeneous Programming Jungle

Mar 19, 2012 |

There are several approaches being developed to program heterogeneous systems, but none of them has yet proven to successfully address the real goal. This article will discuss a range of potentially interesting heterogeneous systems for high performance computing, why programming them is hard, and why developing a high-level programming model is even harder.

Retrofitting Programming Languages for a Parallel World

Feb 23, 2012 |

The most widely used computer programming languages today were not designed as parallel programming languages. But retrofitting existing programming languages for parallel programming is underway. We can compare and contrast retrofits by looking at four key features, five key qualities, and the various implementation approaches.

Intel Debuts New HPC Cluster Tool Suite

Nov 8, 2011 |

This week Intel unveiled an upmarket version of its Cluster Studio offering, aimed at performance-minded MPI application developers. Called Cluster Studio XE, the jazzed-up developer suite adds Intel analysis tools to make it easier for programmers to optimize and tune codes for maximum performance. It also includes the latest compilers, runtimes, and MPI library to keep pace with new developments in parallel programming.