Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

November 21, 2008

What Caught My Attention at SC08

Michael Feldman

A show the size of the Supercomputing Conference is difficult to swallow whole. With hundreds of exhibitors and conference activities, it’s virtually impossible to get a balanced perspective. Despite being here for a full week, I’m sure I’ll come away feeling I missed most of the conference. That said, here are a few areas that caught my attention at SC08.

The most compelling “big iron” story of the conference was the new “Jaguar” Cray XT at Oak Ridge. The 1.64 petaflop (peak) machine is now the largest general-purpose supercomputer in the world. Although ORNL missed the Linpack submission deadline needed to beat out Roadrunner for the number one spot on the TOP500, a real, live application that executed on Jaguar captured this year’s Gordon Bell Prize for peak performance. The winning submission came from a team of ORNL researchers who achieved 1.35 petaflops with a simulation of superconductors. Our Q&A with Doug Kothe gives you some idea of the applications that lie ahead for the machine.

At the other end of the scale was the plethora of personal supercomputers being exhibited at SC08. Leading that charge was NVIDIA, which lined up vendors great and small and convinced them to incorporate multiple Tesla C1060 GPU accelerator cards into desktop and deskside machines. Tesla-equipped personal systems will soon be offered by Dell, Lenovo, BOXX, Velocity Micro, Penguin Computing, ASUS and a handful of others. In 2009 you’ll be able to buy a 4 teraflop (single precision) workstation for around $10K. Moving a little further up the food chain, Cray will also be offering Tesla-accelerated machines in its new deskside CX1 system.

A GPGPU software ecosystem is rapidly developing to unleash all this GPU goodness. Although CUDA and Brook+ are available now for GPU programming, other GPU-enabling software is starting to appear. OpenCL, a hardware-agnostic, low-level interface for GPU computing, should become available by early 2009, with vendor implementations to follow. If you want to dig a little deeper into OpenCL, check out Friday’s article by RapidMind’s Michael McCool.
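For readers who haven’t yet seen what this low-level style of GPU programming looks like, here is a minimal CUDA sketch of a SAXPY kernel (y = a*x + y), the customary “hello world” of GPU computing. The kernel name, sizes, and launch configuration are purely illustrative — not taken from any vendor demo at the show:

```cuda
#include <cuda_runtime.h>

// Illustrative kernel: each GPU thread handles one element of the vectors.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard against overrun
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                // 1M elements (arbitrary size)
    size_t bytes = n * sizeof(float);
    float *x, *y;
    cudaMalloc(&x, bytes);                // allocate device memory
    cudaMalloc(&y, bytes);
    // ... initialize x and y on the host and copy over with cudaMemcpy ...
    // Launch with 256 threads per block, enough blocks to cover n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();              // wait for the kernel to finish
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The explicit grid/block arithmetic and manual memory management above are exactly the kind of bookkeeping that the higher-level tools described below aim to hide.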

Even better news is that higher level GPU-friendly software development environments are starting to appear. While RapidMind has had this level of support for a couple of years, French-based CAPS enterprise now offers a C and Fortran development environment for NVIDIA and AMD GPUs. Newcomer AccelerEyes has developed Jacket, a GPU engine for MATLAB that wraps CUDA (for GPU computing) and OpenGL (for visualization) into an extension of the MATLAB language. Along those same lines, Wolfram Research demonstrated a CUDA-accelerated version of Mathematica to tap into NVIDIA GPUs. Finally, PGI announced it has teamed with AMD to develop compiler technology that will generate GPU code from standard C and Fortran. The first version will target AMD’s FireStream hardware, but presumably PGI is also thinking hard about an NVIDIA Tesla implementation*.

Extracting parallelism from vanilla C and Fortran code is also the model that startup Convey Computer is employing for its CPU-FPGA hybrid server — a product I wrote about on Monday. The Convey offering is the brainchild of HPC veteran (and this year’s Seymour Cray Award winner) Steve Wallach, and is packed with cleverness from top to bottom. Judging by the traffic at the company’s booth, the Convey launch garnered quite a bit of interest from conference attendees. If I were in product development at Convey, I’d consider adding a deskside model for software developers.

Outside of accelerator-land, the most widely talked about processor at SC08 — the Intel Nehalem — never actually made a public appearance at the show. These chips are presumably in production right now and the 2P server versions should start showing up as early as Q1 2009. The 4P Nehalems are expected later in the year — maybe much later. Since the Xeons now dominate the HPC market (even in the TOP500), every server and workstation HPC vendor will probably be scrambling to get new Nehalem-based boxes to market as quickly as possible.

NAND flash memory and the associated Solid State Disk (SSD) products seemed much more visible at SC08 compared to years past. Increased NAND density and performance, along with dropping prices, are making NAND memory a very attractive storage layer between main DRAM and magnetic disks. On the show floor, Texas Memory Systems, Solid Access Technologies, Violin Memory, BiTMICRO Networks, and Fusion-io all had slightly different stories to tell about their offerings. And while I’m not yet conversant in NAND technology, in talking with Fusion-io CTO David Flynn it became apparent that using these devices just as turbo-charged disks is probably not the way to go. I plan to follow this space more closely in 2009 as more products come to market.

[*UPDATE — Nov. 24: PGI is already in the process of developing an NVIDIA Tesla implementation and was demonstrating a pre-release version in its booth at SC08.]
