February 19, 2009
As the first Intel Nehalem EP server chips get ready for their debut, HPC users in particular are anxious to get a taste of Intel's new high-performance design. The new architecture incorporates the QuickPath Interconnect (QPI) and integrated memory controllers, a setup that should be especially kind to memory-intensive applications.
As I've discussed before, the "memory wall" has become one of the most worrisome issues in HPC. Over the last two decades, memory performance has been steadily losing ground to CPU performance. From 1986 to 2000, CPU speed improved at an annual rate of 55 percent, while memory speed improved at only 10 percent. As clock speeds stalled, chipmakers resorted to multiple cores, but if anything, that only caused the CPU-memory performance gap to widen. A recent study by Sandia pointed to the futility of simply throwing more cores at the problem.
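To get a sense of how quickly those growth rates diverge, here is a back-of-the-envelope sketch (my own illustration, not something from the article or the Sandia study) that simply compounds the 55 percent and 10 percent annual rates over the 1986-2000 period:

# Illustration only: compound the 55% annual CPU improvement and the 10%
# annual memory improvement cited above over the 1986-2000 period.

cpu_rate, mem_rate = 0.55, 0.10
years = 2000 - 1986  # 14 years

cpu_gain = (1 + cpu_rate) ** years   # cumulative CPU speedup
mem_gain = (1 + mem_rate) ** years   # cumulative memory speedup

print(f"CPU speedup over {years} years:    {cpu_gain:7.0f}x")
print(f"Memory speedup over {years} years: {mem_gain:7.1f}x")
print(f"CPU-memory gap:                    {cpu_gain / mem_gain:7.0f}x")

Even over just 14 years, the compounding produces a CPU-memory gap of two orders of magnitude, which is the essence of the memory wall.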
The Nehalem processors, though, should provide some relief -- if only temporarily. The soon-to-be-released quad-core EP chips for two-socket servers will have integrated DDR3 memory controllers, which Intel claims will boost memory bandwidth by 300 to 400 percent compared to the current "Penryn" class Xeon processors. Exact performance has not been verified, but the new DDR3 controllers should yield memory bandwidth in the range of 32-35 GB/second per socket. That should be a big lift for many memory-bound applications.
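As a rough sanity check on that range, peak bandwidth can be estimated from channel count, bus width, and transfer rate. The sketch below is my own illustration; the three-channel configuration and DDR3-1333 transfer rate are assumptions, not figures quoted by Intel:

# Rough estimate of per-socket peak bandwidth for an integrated DDR3
# controller. The three-channel configuration and DDR3-1333 transfer rate
# are illustrative assumptions, not Intel's stated specifications.

channels = 3                   # assumed memory channels per socket
bytes_per_transfer = 8         # each 64-bit channel moves 8 bytes per transfer
transfers_per_second = 1333e6  # DDR3-1333: 1333 million transfers per second

peak_gb_per_s = channels * bytes_per_transfer * transfers_per_second / 1e9
print(f"Estimated peak bandwidth: {peak_gb_per_s:.0f} GB/s per socket")  # ~32 GB/s

That lands at the low end of the 32-35 GB/second range cited above.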
Unfortunately, after Nehalem, Intel probably won't be able to duplicate another memory performance increase of similar magnitude for some time. DDR4 will have perhaps twice the raw performance of DDR3, but is not expected to show up until 2012. More exotic memory architectures are on the drawing boards, but no manufacturers have committed to a roadmap.
GPUs are a different story, though. These chips are all about data parallelism, so the memory architecture was designed for parallel throughput from the get-go. For GPGPU computing platforms like NVIDIA Tesla and AMD FireStream, the hardware comes with a hefty amount of very fast memory so that large chunks of computations can take place locally, without having to continually tap into system memory.
Today, you can get an NVIDIA Tesla GPU with 4 GB of (GDDR3) memory at 102 GB/second of bandwidth. Granted, this is graphics memory, so you have to deal with the lack of error correction, but at roughly three times the memory performance available to a Nehalem processor, GPUs can offer some respite from the memory wall. The more favorable GPU-memory performance balance is one reason why users have been able to speed up their data-parallel apps by one or two orders of magnitude.
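The same bus-width-times-transfer-rate arithmetic explains where that 102 GB/second comes from. In the sketch below, the 512-bit interface and 800 MHz (1600 MT/s effective) GDDR3 clock are my assumptions based on published Tesla C1060 specifications, not numbers taken from this article:

# Rough derivation of the ~102 GB/s figure for a Tesla-class GPU. The
# 512-bit bus and 800 MHz (1600 MT/s effective) GDDR3 clock are assumptions
# drawn from published Tesla C1060 specifications.

bytes_per_transfer = 512 / 8   # 512-bit memory interface -> 64 bytes
transfers_per_second = 1600e6  # GDDR3 at 800 MHz, double data rate

gpu_bw_gb_per_s = bytes_per_transfer * transfers_per_second / 1e9
print(f"Estimated GPU memory bandwidth: {gpu_bw_gb_per_s:.1f} GB/s")

# Versus the ~32 GB/s per-socket estimate above, that's roughly a threefold
# advantage -- consistent with the comparison in the article.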
And yet the entry of Nehalem into the HPC server market is bound to be the big story this year. Despite the meteoric rise of GPUs in the general-purpose computing world over the last couple of years, most HPC users are still using x86-based clusters. According to IDC, less than 10 percent of the HPC user sites they surveyed were using alternative processors (most of which, I assume, were GPUs and Cell processors), and they didn't see those numbers changing dramatically in the near term.
But the memory wall will be unrelenting. The eight-core Nehalem EX chip is in the works and is expected to show up in the second half of 2009. At eight cores, memory-intensive apps might be a poor fit for this platform; it was at the eight-core mark that the Sandia study saw an actual decrease in performance. There's plenty of anecdotal evidence that a variety of HPC applications are seeing declining performance as they migrate from just two to four cores.
On top of that, considering the onerous software licensing model for multicore processors used by many ISVs and the well-known difficulties of multi-threaded software development, multicore CPUs may not be the path to HPC nirvana after all. Thinking optimistically, though, it's quite possible we'll find a path around the memory wall and all the other parallel computing roadblocks. But the solution is likely to come about by looking at the problem in an unconventional way.
As IT publisher Tim O'Reilly recently wrote: "The future isn't going to be like the past. What's more, it isn't going to be like any future we imagine. How wonderful that is, if only we are prepared to accept it."
Posted by Michael Feldman - February 19, 2009 @ 5:43 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.