September 24, 2008
HPC hardware accelerators -- GPUs, FPGAs, the Cell processor, and custom ASICs like the ClearSpeed floating point device -- have captured the imagination of HPC users in search of higher performance and lower power consumption. While these offload engines continue to show impressive performance results for supercomputing workloads, Intel is sticking to its CPU guns to deliver HPC to the broader market. According to Richard Dracott, Intel's general manager of the company's High Performance Computing business unit, CPU multicore processors, and eventually manycore processors, will prevail over accelerator solutions in the financial services industry, as well as for HPC applications in general.
Dracott says he's seen this pattern before: people get attracted to specialized hardware for particular applications, but in the end, general-purpose CPUs turn out to deliver the best ROI. Dracott claims that to exploit acceleration in HPC, developers need to modify their software anyway, so they might as well modify it for multicore. "What we're finding is that if someone is going to go to the effort of optimizing an application to take advantage of an offload engine, whatever it may be, the first thing they have to do is parallelize their code," he told me.
To Intel's credit, the company has developed a full-featured set of tools and libraries to help mainstream developers parallelize their codes for x86 hardware. With the six-core Dunnington in the field today and eight-core Nehalem processors just around the corner, developers will need all the help they can get to fully utilize the additional processing power.
In fact, though, adding CPU-based multithreaded parallelism to an application tends to be more difficult than adding data parallelism, which is the only type of parallelism accelerators are any good at. If a workload can exploit data parallelism, it can usually do so rather straightforwardly. With the advent of NVIDIA's CUDA, AMD's Brook+, RapidMind's development platform, C-based FPGA frameworks, and SDKs from ClearSpeed and other vendors, programming these devices has become simpler.
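The distinction is easy to see in code. Below is an illustrative sketch (not from the article): a data-parallel loop in which every iteration is independent of every other, which is exactly the pattern that maps naturally onto the thousands of lightweight threads a GPU or similar accelerator provides. The function name and values are hypothetical.

```python
def axpy(a, xs, ys):
    """Compute y[i] = a*x[i] + y[i] for every i.

    Each output element depends only on its own input pair, never on
    another iteration, so the loop could be split across thousands of
    accelerator threads with no coordination between them.
    """
    return [a * x + y for x, y in zip(xs, ys)]

result = axpy(2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0])
# result == [12.0, 14.0, 16.0]
```

A multithreaded CPU port of a less regular algorithm, by contrast, typically involves locks, task scheduling, and shared-state reasoning, which is why it tends to be the harder kind of parallelization.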
And it may get simpler yet. PGI compiler developer Michael Wolfe thinks there is no reason why high-level language compilers can't take advantage of these offload engines. "We believe we can produce compilers that allow evolutionary migration from today's processors to accelerators, and that accelerators provide the most promising path to high performance in the future," he wrote recently in his HPCwire column.
Of course, CPUs are not standing still performance-wise. According to Dracott, when financial customers were asked how long a 10x performance advantage over a CPU-based solution would have to be maintained to make it worth their while, they told him anywhere from 2-3 years up to as much as 7 years. For production environments, the software investment required to bring accelerators into the mix needs to account for re-testing and re-certification. In the case of the financial services industry (because of regulatory and other legal requirements), this can be a significant part of the effort. "And by the time they actually make the investment in the software, the general-purpose [CPU] hardware has caught up," says Dracott.
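Dracott's catch-up argument can be checked with back-of-the-envelope arithmetic. Assuming CPU performance doubles roughly every two years (a common reading of Moore's Law), the time for CPUs to close a given performance gap is a simple logarithm; the figures below are illustrative, not from the article.

```python
import math

def years_to_close(gap, doubling_period_years=2.0):
    """Years for CPUs to close a `gap`-times performance deficit,
    assuming CPU performance doubles every `doubling_period_years`
    and the accelerator stands still."""
    return math.log2(gap) * doubling_period_years

print(round(years_to_close(10), 1))  # about 6.6 years
```

A 10x gap closing in roughly 6.6 years sits at the top of the 2-to-7-year window Dracott's financial customers cited, which is consistent with his claim, though only under the assumption that the accelerator itself does not improve in the meantime.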
Maybe. A lot of applications are already realizing much better than 10x performance improvements with hardware acceleration. SciComp, a company that offers derivatives pricing software, recently announced a "20-100X execution speed increase" for its pricing models. Other HPC workloads have done even better. And while CPU hardware will eventually catch up to current accelerators, all silicon is moving up the performance ladder, roughly according to Moore's Law. So the CPU-accelerator performance gap will in all likelihood remain.
Accelerators do have a steeper hill to climb in certain areas though. Except for the Cell processor, where a PowerPC core is built-in, all accelerators require a connection to a CPU host. Depending upon the nature of the connection (PCI, HyperTransport, QuickPath, etc.) the offload engine can become starved for data because of bandwidth limitations. In fact, the time spent talking to the host can eat up any performance gains realized through faster execution. More local store on the accelerator and careful programming can often mitigate this, but the general-purpose CPU has a built-in advantage here.
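The effect of that bus bottleneck is worth quantifying. Here is a minimal sketch of the standard offload-time model; the function and the PCIe bandwidth figure are illustrative assumptions, not numbers from the article.

```python
def effective_speedup(cpu_time, accel_factor, bytes_moved, bandwidth):
    """End-to-end speedup of offloading a kernel to an accelerator.

    cpu_time:     seconds the kernel takes on the host CPU
    accel_factor: raw speedup of the accelerator on the kernel alone
    bytes_moved:  total data shipped over the bus (both directions)
    bandwidth:    sustained bus bandwidth in bytes per second
    """
    transfer_time = bytes_moved / bandwidth
    accel_time = cpu_time / accel_factor
    return cpu_time / (accel_time + transfer_time)

# A kernel that is 20x faster on the accelerator, but must move 2 GB
# over a bus sustaining ~3 GB/s, nets far less than 20x end to end:
print(round(effective_speedup(1.0, 20.0, 2e9, 3e9), 2))  # about 1.4
```

In the worst case the transfer term dominates entirely and the offload is a net loss, which is why more on-board memory and careful staging of data matter so much for accelerator performance.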
Dracott points out that the lack of double precision floating point capabilities and error-correcting code (ECC) memory limits accelerator deployment in many HPC production environments. This is especially true in the financial space, where predictability and reliability of results are paramount. But the latest generation of offload engines all support double precision to some degree, and only GPUs have an ECC problem. ClearSpeed ASICs, in particular, have full-throttle 64-bit support plus enterprise-level ECC protection. GPUs, on the other hand, will have to deal with soft error protection in some systematic way to become a more widely deployed solution for technical computing. I've got to believe that NVIDIA and AMD will eventually add this capability to their GPU computing offerings.
The shortcomings of accelerator solutions have prevented much real-world deployment in production situations, according to Dracott. He thinks users will continue to experiment with offload engines for several more years, but with the exception of certain application niches, most will eventually end up back at the CPU. But interest in these more exotic solutions remains high in the HPC community. HPCwire's Dennis Barker, at this week's High Performance on Wall Street conference, reports that the hardware accelerator companies were drawing quite a crowd and a number of FPGA-accelerated products are already on the market. "Sellers of these products were all over the place, their booths were busy, and several sessions on the subject were standing-room only," he writes.
And despite Intel's commitment to the x86 CPU and Dracott's take on the future of accelerators, the company has been evolving its position on co-processor acceleration. Intel's (and IBM's) Geneseo initiative to extend PCI Express for offload engines and its plans to license the new QuickPath interconnect technology would seem to indicate that the company hasn't completely discounted acceleration. AMD, of course, has Torrenza, its own co-processor integration technology. Whether Intel is just hedging its bets to counter its rival or is genuinely committed to sharing the computing world with other architectures remains to be seen.
Posted by Michael Feldman - September 23, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.