November 09, 2011
A recent report by analyst firm IDC, titled "Heterogeneous Computing: A New Paradigm for the Exascale Era," makes the case that heterogeneous computing is going mainstream and will be "indispensable for achieving exascale computing."
Setting aside for a moment that NVIDIA sponsored the report, its conclusions are well supported: heterogeneous computing in the form of GPGPUs has indeed enjoyed a relatively fast adoption cycle in the normally staid HPC community, and essentially every major HPC chip and system vendor now has some sort of roadmap that includes heterogeneous components.
While the report cites some of the usual adoption barriers for this relatively new paradigm (i.e., programming challenges, communication bottlenecks, uncertainty about advantages of accelerators versus future CPUs), it notes that system cost, energy efficiency, and space limitations are all driving users to adopt the more compute-efficient GPUs that have made their way into the HPC landscape over the last five years. Those same issues, the report says, will make heterogeneous computing the basis of exascale systems by the end of the decade.
IDC backs this up with its own research. From the report:
IDC's 2008 worldwide study on HPC processors revealed that 9% of HPC sites were using some form of accelerator technology alongside CPUs in their installed systems. Fast-forward to the 2010 version of the same global study and the scene has changed considerably. Accelerator technology has gone forth and multiplied. By this time, 28% of the HPC sites were using accelerator technology — a threefold increase from two years earlier — and nearly all of these accelerators were GPUs. Although GPUs represent only about 5% of the processor counts in heterogeneous systems, their numbers are growing rapidly.
The report also notes that GPUs, and accelerator technology more generally (with a shout-out to the Intel MIC coprocessor), are moving from experimental use into more mainstream production work. Nowhere is this more apparent than among the top supercomputers, where three of the top ten machines in the world currently employ GPUs, a number expected to grow as more US supercomputers like Titan (ORNL) and Stampede (TACC) come online over the next 12 to 18 months.
IDC's only caveat is that x86 technology is not standing still; the firm expects products based on that architecture to remain the revenue leader in HPC through 2015. The implication is that even in a world replete with exotic HPC accelerators, x86 is likely to survive as a complementary CPU technology or, in the case of Intel MIC, as an accelerator in its own right.