June 16, 2011
Next week the HPC digerati will descend upon the International Supercomputing Conference (ISC) in Hamburg, Germany. As the European counterbalance to the larger Supercomputing Conference (SC) in the US, ISC strives for a more international flavor and a more intimate vibe -- although with 2,300-plus attendees and 152 exhibitors registered this year, it's definitely starting to feel like a smaller version of SC.
Occurring on the cusp of the spring-summer transition, ISC manages to fill a seasonal void for HPC vendors looking to introduce products or talk up new technologies and strategies in the middle of the year. While there will be a flurry of vendor announcements this year, I think a lot of the talk is going to be around technology.
HPC is in one of those fundamental transitions right now as it shifts from monolithic CPU-based systems to heterogeneous platforms. That's changing the hardware and software fundamentals of high performance computing, not to mention the vendor and geographic supercomputing landscape.
Terms that were alien to HPC a few short years ago -- GPU computing, CUDA, OpenCL, APUs, manycore processors, Chinese supercomputers -- are now part of the vernacular. With chip giants like Intel, AMD, and NVIDIA all offering (or soon offering) accelerator products for HPC, and software support racing to catch up, we are seeing an architectural shift as profound as the 1990s-era transition from proprietary vector and MPP supercomputers to commodity clusters.
It's now pretty much an accepted fact that the petascale age will be chock full of heterogeneous computer systems. It's also fairly safe to say that most, if not all, exascale architectures will have a significant heterogeneous component to them -- most likely with on-chip floating point accelerators. Whether these turn out to be integrated GPUs, Intel's Many Integrated Core (MIC) processors, or some other variation on a fat core-thin core design remains to be seen.
As you might imagine, there will be a number of sessions at ISC on this topic, including a panel that tackles the subject head-on: Heterogeneous Systems & Their Challenges to HPC Systems, which gets under way on Monday afternoon.
Berkeley Lab's John Shalf hosts the panel, which includes Cray CTO Steve Scott, Intel researcher Pradeep Dubey, the University of Heidelberg's Rainer Spurzem, and Kai Lu of China's National University of Defense Technology (NUDT).
With Cray's unveiling of its new XK6 GPU super fresh in his mind, Scott is liable to devote some attention to the joys and heartaches of CPU-GPU heterogeneity and the importance of offering a portable, productive, high-level software environment for such systems. Cray is pushing OpenMP accelerator extensions to fill the void here, so I'd expect that to be part of his spiel.
Intel's Pradeep Dubey was one of the authors of a 2010 paper that aimed to debunk the 100X to 1000X performance improvements claimed for GPUs (compared to CPUs). Given that Dubey is also familiar with Intel's MIC architecture, he's likely to draw some distinction between the two approaches, especially in regard to ease of programming.
Rainer Spurzem employs GPUs to accelerate astrophysics applications, so he'll offer the perspective of a heterogeneous computing user. Astrophysics, until recently a largely observational science, now routinely relies on compute-heavy simulation and signal processing. GPUs are well suited to such work, although some researchers have gone the FPGA route instead. Spurzem could offer some perspective on how best to exploit these accelerators.
Expect NUDT's Kai Lu to talk up China's push into heterogeneous computing via its embrace of GPU-boosted supercomputers. NUDT is the developer of the reigning TOP500 champ, Tianhe-1A, which clocks in at over 2.5 Linpack petaflops. Although there is plenty of skepticism to go around about the practical utility of such massive GPU-accelerated machines, the Tianhe super recently claimed 1.87 petaflops of performance on a real-world molecular simulation application.
Another session along the same lines is Tuesday's GPU debate between HPC icon Thomas Sterling and NVIDIA's David Kirk. Kirk, obviously, will be the advocate for GPU computing, with Sterling there to offer the rebuttal and some historical perspective.
Heterogeneous computing is likely to permeate a number of other ISC presentations, including Monday's session on Transpeta Flops Initiatives and Wednesday's session on Many-Core Computing. I can also promise you that there will be breaking vendor news around this topic at ISC, so be sure to catch our special coverage of ISC starting next week.
Posted by Michael Feldman - June 16, 2011 @ 6:01 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.