November 09, 2011
A recent report by analyst firm IDC, titled "Heterogeneous Computing: A New Paradigm for the Exascale Era," makes the case that heterogeneous computing is going mainstream and will be "indispensable for achieving exascale computing."
Setting aside for a moment that NVIDIA sponsored the report, those conclusions are well supported: heterogeneous computing in the form of GPGPUs has indeed enjoyed a relatively fast adoption cycle in the normally staid HPC community, and essentially every major HPC chip and system vendor now has some sort of roadmap that includes heterogeneous components.
While the report cites some of the usual adoption barriers for this relatively new paradigm (e.g., programming challenges, communication bottlenecks, and uncertainty about the advantages of accelerators versus future CPUs), it notes that system cost, energy efficiency, and space limitations are all driving users toward the more compute-efficient GPUs that have made their way into the HPC landscape over the last five years. Those same pressures, the report says, will make heterogeneous computing the basis of exascale systems by the end of the decade.
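To put those programming challenges and communication bottlenecks in concrete terms, the sketch below shows the basic GPU offload pattern in CUDA: data is copied across the bus to the accelerator, a kernel runs across thousands of lightweight threads, and results are copied back. It is an illustrative example only, not drawn from the IDC report; the vecAdd kernel and its sizes are hypothetical.

// Minimal, illustrative sketch of the GPU offload pattern (hypothetical example).
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];   // one lightweight thread per element
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);

    // Explicit host-to-device copies: the communication cost the report refers to
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);   // offload the computation

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);   // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

Even in this trivial case, the programmer must manage two memory spaces and reason about data movement explicitly, which is the heart of the programming-model shift the report describes.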
IDC backs this up with its own research. From the report:
IDC's 2008 worldwide study on HPC processors revealed that 9% of HPC sites were using some form of accelerator technology alongside CPUs in their installed systems. Fast-forward to the 2010 version of the same global study and the scene has changed considerably. Accelerator technology has gone forth and multiplied. By this time, 28% of the HPC sites were using accelerator technology — a threefold increase from two years earlier — and nearly all of these accelerators were GPUs. Although GPUs represent only about 5% of the processor counts in heterogeneous systems, their numbers are growing rapidly.
The report also notes that GPUs, and accelerator technology more generally (with a shout-out to the Intel MIC coprocessor), are moving from experimental use into more mainstream production work. Nowhere is this more apparent than among the top supercomputers, where three of the ten fastest machines in the world currently employ GPUs, a number expected to grow as more US supercomputers such as Titan (ORNL) and Stampede (TACC) come online over the next 12 to 18 months.
IDC's only caveat is that x86 technology is not standing still; the firm expects products based on that architecture to remain the revenue leader in HPC through 2015. The implication is that even in a world replete with exotic HPC accelerators, x86 is likely to survive as a complementary CPU technology or, in the case of Intel MIC, as an accelerator in its own right.