Tag: parallel programming
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/Gerhard_Wellein_small.jpg" alt="" width="95" height="85" />At this June's International Supercomputing Conference (ISC'13) in Leipzig, Germany, Gerhard Wellein will deliver a keynote entitled "Fooling the Masses with Performance Results: Old Classics & Some New Ideas." HPCwire caught up with Wellein and asked him to preview some of the themes of his upcoming talk and to expound on his philosophy of programming for performance in the multicore era.
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/OpenMP_logo_small.bmp" alt="" width="112" height="36" />OpenMP, the popular parallel programming standard for high performance computing, is about to come out with a new version incorporating a number of enhancements, the most significant being support for HPC accelerators. Version 4.0 will include the functionality implemented in OpenACC, the accelerator API that splintered off from the OpenMP work, as well as additional support beyond that. The new standard is expected to become the law of the land sometime in early 2013.
Kickstarter investment model notches another high-tech success.
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/Lomonsov_MSU_small.jpg" alt="" width="115" height="90" />The second year of the "Supercomputing Education" project in Russia is now complete. The idea for the project was presented to the President of Russia, Dmitry Medvedev, back in 2009. The work was immediately approved and scheduled for the 2010–2012 timeframe, with implementation assigned to Lomonosov Moscow State University, which hosts Russia's largest supercomputing center.
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/green_mb.bmp" alt="" width="109" height="91" />Several approaches to programming heterogeneous systems are in development, but none has yet proven to address the real goal. This article will discuss a range of potentially interesting heterogeneous systems for high performance computing, why programming them is hard, and why developing a high-level programming model is even harder.
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/parallels.JPG" alt="" width="78" height="63" />The most widely used programming languages today were not designed as parallel programming languages, but work to retrofit them for parallel programming is underway. We can compare and contrast these retrofits by looking at four key features, five key qualities, and the various implementation approaches.
This week Intel unveiled an upmarket version of its Cluster Studio offering aimed at performance-minded MPI application developers. Called Cluster Studio XE, the jazzed-up developer suite adds Intel analysis tools that make it easier for programmers to optimize and tune codes for maximum performance. It also includes the latest compilers, runtimes, and MPI library to keep pace with new developments in parallel programming.
Steve Lionel, aka "Doctor Fortran," defends the venerable programming language and its modern relevance.
Language could pave the way for native parallelism in C and C++.
Temple University will soon be home to a new hybrid GPU-CPU system to support a broad range of research needs. Computer scientists at the new Center for High Performance Computing and Networking will also have a dedicated space to explore challenges related to parallel programming in conjunction with the Pittsburgh Supercomputing Center and other HPC sites.