Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

November 9, 2012

Supercomputing Conference Offers Up Smorgasbord of HPC Sessions

Michael Feldman

The epic supercomputing event of the year, SC12, will be booting up next week in Salt Lake City, Utah, attracting HPC digerati, vendors, press, and analysts from around the world. And even though the DOE won’t be there in full force this year, big crowds are still expected. This year’s event should deliver plenty of fodder for those looking to keep up on the latest and greatest in the field, especially in the cutting edge areas of accelerators, big data, cloud computing, exascale supercomputing, and green HPC.

In fact, if you take a look at the SC12 conference schedule, those five topics dominate much of the technical program this year. Eyeballing the listed sessions, there are 48 on accelerators (GPUs, Xeon Phi, DSPs and FPGAs), 37 on big data, 32 on cloud computing, 20 on exascale, and 19 on green computing. Of course, there’s also the usual fare of presentations on interconnects, parallel programming, storage technology (although curiously, not much specifically on flash storage), software development tools, and various HPC use cases.

The session distribution reflects the big drivers in the HPC space today. I would argue the top three on this list — accelerators, big data, cloud computing — are the technologies that will matter most to high performance computing, not just in the next year or two, but also several years down the road. For example, accelerators like GPUs and Intel’s Xeon Phi are revamping not just the basic structure of supercomputing hardware, but the programming tools underneath (CUDA, OpenCL, OpenACC, and OpenMP). Big data is opening up the space to a new set of vendors and broadening the horizons for ones that used to be confined to HPC. Finally, cloud computing promises to change the delivery model for at least a subset of HPC users and offer a path for others who have been excluded from high performance computing altogether.

To get a sense of the accelerator space from the application perspective, I’d recommend the Application Grand Challenges in the Heterogeneous Accelerator Era BoF, led by Satoshi Matsuoka, tech lead for the GPU-accelerated TSUBAME super at Tokyo Tech. To catch up on the latest in software tools for accelerators, there are several tutorials and BoFs available, hosted by the various vendors (NVIDIA, PGI, CAPS entreprise, and Intel). Closer to the metal are Design, Implementation and Evolution of High Level Accelerator Programming on Wednesday, presented by PGI compiler engineer Michael Wolfe, and Dealing with Portability and Performance on Heterogeneous Systems with Directive-Based Programming Approaches, delivered by CAPS’ François Bodin.

For the HPC cloud space, I’d check out Kate Keahey’s BoF: HPC Cloud: Can Infrastructure Clouds Provide a Viable Platform for HPC?. Kate was exploring the cloud model for HPC when practically everyone else was still talking about grids. She is also part of another BoF, titled Science-as-a-Service: Exploring Clouds for Computational and Data-Enabled Science and Engineering, which is specifically aimed at how clouds can be structured for science and engineering applications. And if you’re still around on Friday, the Sustainable HPC Cloud Computing 2012 workshop looks worthwhile. It will “report performance studies comparing traditional HPC cluster against HPC cloud, military applications of computing clouds, MPI security studies, failure prevention methods, GPU for the cloud, and innovative methods to understand the movements of individually identifiable entities in the cloud data.”

For the big data topic there’s a little something for everyone — from an introductory tutorial to tweaking Hadoop for HPC. There’s also a best practices BoF. And if you lean toward graph analytics, there’s a BoF for that too. Even with all that, you’ve only skimmed the surface of the big data world at SC12.

As usual, there’s a good line-up of keynote presentations at SC, starting with the opening keynote on Tuesday with Michio Kaku. Kaku is a world-renowned theoretical physicist who has helped to popularize science with the general public. His talk, “Physics of the Future: How Science will Change Daily Life by 2100,” will present a vision of the future in which science will advance human civilization by revolutionizing medicine, computers, and space travel.

If that’s too esoteric for you, try Kirk Cameron’s keynote on energy-efficient HPC, Pushing Water Up Mountains: Green HPC and Other Energy Oddities, or the related The Costs of HPC-Based Science in the Exascale Era, delivered by Thomas Ludwig of the German Climate Computing Center. Both center on the challenges presented by green IT at the scale of supercomputers.

Beyond the technical program and keynotes will be the usual awards (ACM Gordon Bell Prize, SC12 Best Paper, Student Cluster Competition, Seymour Cray Computer Science and Engineering Award, etc.) and the XXX500 lists: TOP500, Green500, and Graph 500. The TOP500 will get the most press, and this year we should see some big shake-ups at the top of the top with Titan (ORNL), Stampede (TACC) and Blue Waters (NCSA) coming online. For the Green500, the big question is: will anyone knock Blue Gene/Q off its perch as the most energy-efficient FLOPS-maker in the world? The new accelerator-based supers equipped with Kepler-grade GPUs (like Titan) and Xeon Phi-juiced machines (like Stampede) have a shot at it. We’ll find out in a few days.