The AMD Fusion Developer Summit came to a close this afternoon following a three-day run in Bellevue, Washington. With over 700 developers in attendance and a surprisingly large selection of sessions with direct appeal to the HPC crowd, the event provided a broad range of use cases and academic arguments supporting the idea that OpenCL (and, of course, GPU computing) is set to play a role in the future of high performance computing and beyond.
While there were plenty of opportunities to explore the graphics eye candy and more general-purpose uses of the Fusion APU, most of this reporter’s time was spent delving into sub-topics in the HPC category, including crash courses on GPUs in the context of Hadoop, Mathematica, and of course, OpenCL for large enterprise and research.
Below are a few noteworthy video clips and other featured items collected during the event. If you haven't seen it, check out the ARM keynote given on the AMD stage. An interesting choice of speaker, perhaps, but he managed to draw the connection through the two companies' shared emphasis on open standards, energy efficiency and, of course, heterogeneous computing.
First, we’ll let AMD’s Margaret Lewis tell us a little bit about where OpenCL stands for the HPC community in contrast to CUDA. She says the openness is part of what should make OpenCL an attractive programming model for HPC shops, especially as they tend to use a range of architectures. As she told us, “AMD sees the maturation of OpenCL as a capability not only to download to the GPU but to start utilizing the CPU and GPU as complementary computing engines. Really, that’s what heterogeneous computing is about.”
We also hit on the concept of the scale-out cluster and the role of manycore, and took a broader look at the upcoming Bulldozer architecture.
AMD Corporate Fellow Charles Moore presented a short session on Fusion Processors for HPC, noting that while it was not a product announcement, we can expect to see Fusion APUs with very high-performance single- and double-precision support in the future. Moore stepped back into history to trace the events leading up to the heterogeneous compute era, claiming that for the first time, "the GPU is now a first-class citizen; at the same level as the CPU." In addition to discussing the role of HPC in saving economies and boosting healthcare, education and other areas, he spent a considerable amount of time on the challenges inherent in the pending exascale era. In Moore's view, reaching exascale will require a 10x efficiency improvement, but he claims AMD is on a trajectory to intersect exascale requirements by 2018-2020.
Dylan Roeh and Abdul Dakkak are kernel developers for Wolfram Research. Dakkak leads efforts to exploit GPU capabilities for Mathematica 8, and Roeh was one of the developers behind the recently added OpenCL support in Mathematica. The two presented a session called "Heterogeneous Computing for Finance Using Mathematica and OpenCL" in which they discussed how the addition of GPU support in Mathematica has opened up enhanced possibilities within the Mathematica language. They looked at the ways OpenCL can be applied to pricing a variety of financial derivatives from inside Mathematica, focusing on the ease of use provided by the OpenCL/Mathematica combination and highlighting the performance advantages of GPU computing for these applications in general.
I caught up with Abdul and Dylan (who speaks first in the video below) about the value of GPU computing for finance and some of the challenges that hardware vendors unwittingly lay on developers.
Among the other notable items on the agenda at the AMD Fusion Developer Summit was a session presented by Jim Falgout, Chief Technologist for Pervasive DataRush. Falgout's topic, "Leveraging Multicore Systems for Hadoop and HPC Workloads," demonstrated how developers can harness multicore servers and clusters, particularly Hadoop clusters, to tackle some of the problems hidden under mounds of big data. Falgout argued that despite the promise of scaled-out hardware, many developers still wait far longer than they'd like to build and run MapReduce jobs. He traced the root of these problems to scattered software development that leaves cores sitting idle, driving up energy costs and sacrificing potential productivity. While something of a deep-dive session into the specifics of Hadoop, it was nonetheless revealing to see how minor variations on the software side can yield substantial performance gains for Hadoop and HPC workloads that still scale.
While this was not an HPC-geared conference by any means, AMD made sure to stock the agenda full of sessions that reeled in some rather sizable crowds. As the OpenCL ecosystem matures over the next year, it will be interesting to see how many more use cases pair OpenCL with HPC, and furthermore, whether there is any decline in similar uses of CUDA. Something says the shakeout between the two won't occur anytime soon, but as AMD touted all week, the value proposition of open architectures and open standards shouldn't be underestimated.