As is usually the case with the annual Supercomputing Conference, the HPC community manages to generate about six months' worth of news in the span of five days. That makes for a fun-filled event, but since both journalists and readers have limited bandwidth, there's only so much real-time coverage that can be crammed into a week. Before SC09 recedes too far in the rear-view mirror, it's worth recapping some of the news connected to the big trends that emerged at the conference.
GPUs: Here, There, and Everywhere
Of the 60-odd press releases delivered at SC09, at least 15 of them were related to GPU computing, starting with NVIDIA’s announcement of its new Fermi-based Tesla-20 series products, which we covered in some depth last week. But there were plenty of other GPU developments at the show, too, including China’s GPU-CPU “Tianhe” supercomputer making it into the number 5 position on the TOP500; Japan’s 3 petaflop TSUBAME 2.0 Fermi-equipped super scheduled for deployment in October 2010; and the announcement of a new GPU computing collaboration network.
Besides those developments, Penguin Computing added GPU computing to its HPC on-demand service, while PGI and CAPS launched new GPU compiler offerings, and TotalView and Allinea promised GPU debugging support in future products. Microway, Velocity Micro, AccelerEyes, TeamHPC, and even Mellanox also had new GPU computing-related products to talk about.
And that’s really just a slice of the GPU stories at the show. The startling aspect of all this activity is that little actual production work takes place on GPU-accelerated clusters today; most current deployments are still in the experimental stage. But with the more HPC-capable Fermi GPUs from NVIDIA coming online next year, and with the software ecosystem maturing apace, expect to see production systems start to ramp up in 2010.
Speaking of maturing ecosystems, 10 Gigabit Ethernet seems to be picking up steam from both adapter and switch vendors. Despite that, in the HPC space InfiniBand continues to expand its footprint. On the latest TOP500 list, there is exactly one 10 GigE deployment, compared to 181 InfiniBand-connected systems. And although those systems are still in the minority — most TOP500 machines are still GigE-connected — that’s up from 141 systems just a year ago.
TOP500 systems are not really typical, though. Last year, InterSect360 Research reported that 60 percent of the systems it surveyed that were installed since the beginning of 2007 deployed with InfiniBand. A November 2008 survey by IDC found InfiniBand on 30 percent of the systems surveyed, ahead of GigE at 27 percent and 10 GigE at 14 percent. All indicators point to continued InfiniBand dominance in high performance computing.
At SC09, InfiniBand leader Mellanox announced its MPI offload technology, which allocates some of the interprocessor communication work to its ConnectX-2 HCA, leaving the CPU free to do more application work. The company also previewed its 120 Gbps per port InfiniBand switch hardware, scheduled for general release in the first half of next year. Meanwhile, rival QLogic announced new agreements with SGI, HP, and IBM as it tries to edge into the IB business of Voltaire and Mellanox. Not to be left out, Sun Microsystems also announced a couple of new QDR switches at SC09.
Now that there are four vendors in the IB space (two of them with their own ASICs), the switch and adapter offerings are more diverse than ever. With QLogic there to counterbalance Mellanox, expect to see even more innovation and more price competition in the months and years ahead.
The popularity of commodity hardware in HPC has encouraged a growing cadre of vendors to employ virtualization schemes to create big powerful machines from industry-standard building blocks. Unlike traditional virtualization, which splits a server for multiple OS environments, the model in HPC virtualization is to aggregate CPU, memory, and I/O across a cluster to create a unified resource under a single OS. The goal is to provide an alternative to the expense of the SMP mainframe and the complexity of a compute cluster, while at the same time offering the ability to reconfigure hardware dynamically.
The virtualization vendors were out in full force at SC09. ScaleMP, 3Leaf, RNA Networks, and NextIO were all displaying their wares on the exhibition floor. The first two, ScaleMP and 3Leaf, aggregate CPUs and memory for up to 16 cluster nodes, making them appear as an SMP machine to the application. RNA Networks and NextIO focus on virtualized memory and I/O, respectively.
ScaleMP, the most established of the bunch, was exhibiting its latest offerings: a virtual-SMP-in-the-cloud product as well as its new Direct Connect 2 technology that can turn a small (4-node) switchless cluster into a virtual SMP. Likewise, 3Leaf was showing off its new ASIC-enabled virtual SMP technology, which the company launched just prior to the conference.
Startup RNA Networks focuses solely on memory virtualization across a cluster, on the premise that access to a large global memory space, rather than processor or core count, is the biggest impediment for most applications. Like ScaleMP and 3Leaf, the RNA offering collects memory across a server farm to create one large memory pool. The company's technology was selected as a "Disruptive Technology" at SC09.
NextIO does I/O virtualization via the industry-standard PCI Express bus, allowing users to reconfigure I/O devices on the fly according to application workloads. At the conference, it was showing off its new GPU appliance that can house up to 8 double-wide (or 16 single-wide) GPUs. The company was also exhibiting a Texas Memory Systems’ RamSan-based PCIe flash memory appliance that delivers up to 1.2 million IOPS from 3U of rack space.
One virtualization vendor I didn’t mention is NumaScale, which was previewing its NumaConnect technology at SC09. Like 3Leaf, the solution employs its own custom ASIC on each server motherboard to create the virtual SMP environment. But in the case of NumaScale, the technology also comes with its own internal switch fabric, eliminating the need for InfiniBand and the associated network paraphernalia. We’ll provide more coverage as the company gets closer to launch, which is currently scheduled for the second quarter of 2010. In the meantime, it’s worth checking out its Web site.
Supercomputing: Beyond Algorithmic Trading and Oil Exploration
For a guy who squeezed the Supercomputing Conference in between appearances on Larry King Live and Saturday Night Live, Al Gore delivered a surprisingly effective keynote address at SC09. Gore, who characterizes himself as a “recovering politician,” is a techie at heart, having authored some of the original legislation that helped establish the supercomputing centers in the US.
His keynote speech centered on the notion that supercomputing has become one of the most powerful tools of civilization, and needs to be used as such to help solve the world’s environmental problems. Gore’s vision for the next decade involves using HPC resources not just to study the climate and environmental crisis, but to remedy it by employing supercomputing to help develop and design the next generation of renewable energy systems. It was a timely reminder that HPC can have nobler purposes than squeezing profits from stock trades or finding more oil to burn.
For additional roundup coverage, download our SC09 wrap-up podcast (MP3).