Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

September 5, 2013

A Shot of Java to Send Accelerators Mainstream

Nicole Hemsoth

When it comes to mainstream adoption of GPUs and other accelerators, one of the primary barriers is programmability. While the vendor communities around accelerators have pushed to flatten the learning curve, the fact remains that it still takes special effort for ordinary developers to get up to speed.

The HPC space has proven, at massive scale, that GPUs and accelerators can deliver significant performance improvements, and those gains are certainly attractive to businesses outside the traditional high performance computing purview. So the question becomes: what might “sweeten the deal” for mainstream developers when it comes to diving into programming for acceleration?

According to Max Grossman, a researcher at Rice University, the learning curve is steep and it takes time to get up to speed, but some notable projects are extending accelerators' reach. The interview below details some of these challenges, along with what's being done, especially on the OpenCL/Java front by this young researcher and his team, as well as by others who want to bring these advanced tools to a higher level.

Grossman says that even though OpenCL enables portable execution of SIMD kernels across a range of platforms (CPUs, manycore GPUs, FPGAs, and more), using OpenCL from Java is a perilous path, and anything but a simplification. For instance, the developer must still dig in deep to manage data transfers between the JVM and the device, write kernels in the OpenCL kernel language, and so on.
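To make the manual path concrete, here is a minimal sketch of what a developer faces when driving OpenCL from Java by hand. The vector-add kernel, the class name, and `vaddReference` are hypothetical illustrations, not from Grossman's work; the host-side ceremony is summarized in comments rather than spelled out against any particular binding.

```java
// Sketch of the manual OpenCL-from-Java path (illustrative, not a real binding).
public class ManualOpenCLSketch {

    // The device code must be hand-written in the OpenCL kernel language,
    // typically embedded as a string and compiled at runtime.
    static final String KERNEL_SRC =
        "__kernel void vadd(__global const float *a,\n" +
        "                   __global const float *b,\n" +
        "                   __global float *c) {\n" +
        "  int i = get_global_id(0);\n" +
        "  c[i] = a[i] + b[i];\n" +
        "}\n";

    // Host side: even with a Java binding, the developer must still create a
    // context and command queue, build KERNEL_SRC, allocate device buffers,
    // copy data to the device, set kernel arguments, enqueue the kernel, and
    // copy results back. None of that is automated.

    // Plain-Java reference for what the kernel above computes:
    static float[] vaddReference(float[] a, float[] b) {
        float[] c = new float[a.length];
        for (int i = 0; i < a.length; i++) {
            c[i] = a[i] + b[i];
        }
        return c;
    }

    public static void main(String[] args) {
        float[] c = vaddReference(new float[]{1f, 2f}, new float[]{3f, 4f});
        System.out.println(c[0] + " " + c[1]); // prints "4.0 6.0"
    }
}
```

The duplication is the point: the same logic exists once in OpenCL C and once (at least mentally) in Java, and the glue between the two is entirely the developer's burden.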

To tackle these issues, Grossman and his collaborators developed compiler and runtime techniques that speed up Java-based programs by automatically generating OpenCL under the hood. As the team describes it, the approach, which they call HJ-OpenCL, includes:

- Automatic generation of OpenCL kernels and JNI glue code from a parallel-for construct (forall) available in the Habanero-Java (HJ) language;
- Leveraging HJ's array view language construct to efficiently support rectangular, multi-dimensional arrays on OpenCL devices;
- Implementing HJ's phaser (next) construct for all-to-all barrier synchronization in automatically generated OpenCL kernels.
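For readers unfamiliar with HJ, the kind of loop its forall construct expresses can be sketched in plain Java. This is only an analogy: HJ's actual syntax differs, and Java's parallel `IntStream` stands in here for the data-parallel construct that the HJ-OpenCL compiler would lower to an OpenCL kernel. The class and method names are invented for illustration.

```java
import java.util.stream.IntStream;

// Rough stand-in for an HJ forall loop: each iteration is independent,
// so the body maps naturally onto one OpenCL work-item per index i.
public class ForallSketch {

    static int[] squares(int[] input) {
        int[] out = new int[input.length];
        IntStream.range(0, input.length)
                 .parallel()
                 .forEach(i -> out[i] = input[i] * input[i]);
        return out;
    }

    public static void main(String[] args) {
        int[] r = squares(new int[]{1, 2, 3, 4});
        System.out.println(java.util.Arrays.toString(r)); // prints "[1, 4, 9, 16]"
    }
}
```

Because the loop body has no cross-iteration dependences, a compiler is free to run it on a multicore CPU, an integrated GPU, or a discrete GPU, which is exactly the portability HJ-OpenCL aims for.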

As the team summarizes:

“We use a set of ten Java benchmarks to evaluate our approach, and observe performance improvements due to both native OpenCL execution and parallelism. On an AMD APU, our results show speedups of up to 36.7× relative to sequential Java when executing on the host 4-core CPU, and of up to 55.0× on the integrated GPU. For a system with an Intel Xeon CPU and a discrete NVIDIA Fermi GPU, the speedups relative to sequential Java are 35.7× for the 12-core CPU and 324.0× for the GPU. Further, we find that different applications perform optimally in JVM execution, in OpenCL CPU execution, and in OpenCL GPU execution. The language features, compiler extensions, and runtime extensions included in this work enable portability, rapid prototyping, and transparent execution of JVM applications across all OpenCL platforms.”

In addition to approaches like these, Grossman says there are simpler things the vendor community can do to boost experimentation with accelerators, including getting more hardware into the hands of a wider set of developers.

More on this Rice University group’s work here: http://pppj2013.dhbw.de/conference-pppj2013/program.html