If you are familiar with current approaches to programming accelerators, you are either discomforted by the complexities, or excited at the levels of control you can get. Can we come up with a different model of GPU and accelerator programming — a model that allows HPC programmers to focus on domain science instead of on computer science?
The “cloud” model of exporting user workload and services to remote, distributed and virtual environments is emerging as a powerful computing paradigm. Yet, one domain that challenges this model in its characteristics and needs is high performance computing.
OpenCL (the Open Computing Language) is under development by the Khronos Group as an open, royalty-free standard for parallel programming of CPUs, GPUs, the Cell and other parallel processors. An update of the effort was presented at SC08 on Nov. 17.
A show the size of the Supercomputing Conference is difficult to swallow whole. With hundreds of exhibitors and conference activities, it’s virtually impossible to get a balanced perspective. That said, here are a few areas that caught my attention at SC08.
John West had a great conversation with Matt Reilly, chief engineer for SiCortex. Matt talked about what’s going on with the SiCortex’s low power, high density compute platform, and then he discussed the need for the computer science curriculum to include parallelism.
InfiniBand has been a comfort zone for those tightly-coupled HPC applications that can’t shake their addiction to low latency and high speed. If your application is a science experiment with good funding and no firm schedule, that’s OK. If your application involves business, deadlines, and ROI, it’s time to break out of that comfort zone and acquaint yourself with 10 Gigabit Ethernet.
John West talks with John Lee, vice president of advanced technology solutions for Appro; Steve Cumings, director of infrastructure for HP’s Scalable Computing and Infrastructure Group; Morgan Littlewood, vice president of Violin Memory; Jim Falgout, chief technologist for Pervasive’s DataRush; and Dave Ellis, director of HPC architecture for LSI, on the SC08 show floor. We also present our second Two-Option Audio Quiz.
At SC08 this week, Appro announced it had completed the final deployment of a 38-teraflop Xtreme-X supercomputer for the ING Renault F1 Team. The new system lives in a brand-new CFD facility built for environmentally-friendly computing.
Oak Ridge National Laboratory recently unveiled the first petascale system dedicated to scientific research, a Cray XT machine with a theoretical peak performance of 1.64 petaflops. We talked with Doug Kothe, director of science at ORNL’s National Center for Computational Sciences, about the scientific challenges and potential breakthroughs now within reach on this built-for-science petascale system.
Researchers at Tohoku University in Sendai, north-eastern Japan, announced on Wednesday that they had broken a batch of performance records on their NEC SX-9 supercomputer, as measured by the HPC Challenge benchmark. Hiroaki Kobayashi, director of the university’s Cyberscience Center, said the SX-9 had achieved the highest marks ever in 19 of the 28 areas the test evaluates across computer processing, memory bandwidth and network bandwidth.