October 19, 2007
Recently I participated on a panel to discuss what has changed in the parallel computing research community in the past 20 years, specifically dealing with languages and compilers. Many things look the same, but one thing that has changed a great deal is the presence, acceptance and use of standard benchmarks. Twenty years ago, we compared supercomputers and minisupercomputers using any number of microkernels and small programs (remember the Livermore Loops!). We still fall into this trap, ranking the top 500 supercomputers using a single benchmark program. However, even 20 years ago, there were several efforts to collect benchmark suites representing real applications, including the Perfect Club, RiCEPS, and Mendez suites. These were also used to compare the behavior and performance of compiler optimizations.
Processor, system and compiler performance are now frequently measured using the SPEC CPU benchmarks. The original suite was released as ten programs in 1989, each of which ran for about 5 minutes on then-current workstations, with performance normalized to that of a VAX 11/780 (that very VAX now sits in the Paul G. Allen Center, housing the Computer Science and Engineering department at the University of Washington). Good luck finding those programs today; SPEC seems to have disavowed all knowledge of them. If you can, it's fun to run them or the 1992 or 1995 versions; today's machines are so fast that the programs finish almost before the return key bounces back. The latest suite, SPEC CPU2006, comprises 29 programs written in C++, C and Fortran.
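As a reminder of the scoring arithmetic: each benchmark gets a ratio of the reference machine's run time to the measured run time, and the reported score is the geometric mean of those per-benchmark ratios. A minimal sketch, with invented times:

c     Sketch of the SPEC scoring arithmetic. The reference and
c     measured times below are invented for illustration; only the
c     ratio-and-geometric-mean structure is the point.
      program specmean
      implicit none
      integer n, i
      parameter (n = 4)
      double precision tref(n), trun(n), gmean
      data tref / 1400d0, 1800d0, 1100d0, 2100d0 /
      data trun /    5d0,    4d0,    6d0,    7d0 /
      gmean = 1d0
      do i = 1, n
         gmean = gmean * (tref(i) / trun(i))
      end do
      gmean = gmean ** (1d0 / dble(n))
      print *, 'geometric mean of ratios:', gmean
      end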
As originally intended, the SPEC suite was to be used by customers as a reasonably comprehensive and vendor-neutral benchmark for comparing system performance. It has since become much more important. For instance, the list price for a new computer system may be raised or lowered depending on whether the SPEC benchmark score is higher or lower than that of its competition, so SPEC performance can affect the profitability of a vendor. To be fair, SPEC is not the only benchmark that is so used, but it is one of the most visible.
The components that affect SPEC performance are the processor, cache, memory and memory bus, and the compiler. The cache, memory and bus are typically determined by the processor, so there are really only two variables. Processor designers add or optimize features to improve performance, often looking at instructions used in these benchmarks to decide what to optimize. The days of increasing speed by pumping up the clock rate seem to be gone, but there are still opportunities for improvement in implementation technology and microarchitecture.
It's illuminating to look at some historical SPEC CPU2000 results from its initial release in late 1999 to its retirement in late 2006. Dell published a run in November 1999 on a top-of-the-line Precision Workstation 420 which delivered a SPEC CINT2000 base ratio of 336 and a CFP2000 ratio of 242. Seven years later, Dell published a run on a then-current Precision Workstation 390 with a CINT2000 base ratio of 2829 and CFP2000 ratio of 2679, for a factor of 8 improvement in CINT and 11 in CFP performance. Most of this improvement was undoubtedly due to the move from the 733 MHz Intel Pentium III (with 256KB L2 cache) to the 2.66GHz Intel Core 2 Extreme (with 4MB L2 cache). We might predict a 3.5X improvement just from the clock rate, with additional speedup due to the larger cache, double precision SSE2 instructions, and aggressive superscalar instruction issue with out-of-order execution. Indeed, the lowest speedup was 4.5 (for 164.gzip).
However, the speedups are quite nonuniform. For instance, 171.swim was the best CFP performer in 1999, but was in the middle of the pack in 2006, with a speedup of 6.3; 179.art, the second best CFP performer in 1999, benefited from a speedup of almost 30 to become the best (by far) in 2006. On the CINT side, 181.mcf was the worst performer in 1999, but was second best in 2006 (speedup of 18), so something obviously changed for the better there, whereas 164.gzip, the third best CINT in 1999, was the worst performer in 2006.
I'd like to explore how much of the performance improvement is due to clock and implementation differences, how much is due to the instruction set changes with the addition of SSE2 and the move to x86-64, and in particular whether the compiler had any effect. To demonstrate this, I'm going to show some results for one particular SPEC CPU2000 benchmark, 172.mgrid, for reasons that I'll explain below. I ran mgrid with two different compilers on two different machines: a Pentium III and a more recent Intel Xeon. Note: These are not official SPEC runs, so the results are only estimates.
The two compilers are the PGI compiler suite from 1999 (3.2-4a, using -fast) and our most recent release, 7.0-7 (using -fast -Mipa=fast,inline). I estimate the total speedup from 1999 to 2006 by comparing runs on the Pentium III using the 3.2-4a compiler to runs on the Xeon using the 7.0-7 compiler. For mgrid, the speedup was 28.
My first experiment isolates the compiler, testing only the clock and implementation improvements from the Pentium III to the Xeon. I compiled and ran mgrid using the 3.2-4a compiler on the Pentium III, then ran the same binary on the Xeon machine; this gives a speedup of 12.5, not quite half the total. A factor of eight is easy to explain with the clock speed and microarchitectural improvements; the higher number is possibly due to more of the working set fitting in the much larger L2 cache.
My second experiment repeats the first, but using the newer compiler. I compiled and ran mgrid using the 7.0-7 compiler on the Pentium III, then ran the same binary on the Xeon machine. The newer compilers have some new features, but we should expect more or less the same improvement from the older machine to the Xeon; here I saw a speedup of 15. The new binary ran faster on both machines, but the improvement was more significant on the Xeon, even though the compiler was optimizing for the Pentium III. From these two experiments, I conclude that clock, microarchitecture and other implementation improvements delivered a speedup factor of between 12 and 15. Very impressive, but this is only about half the total speedup for mgrid between 1999 and 2006.
My third experiment tries to isolate the improvements due to the instruction set enhancements added since the Pentium III, such as SSE2 instructions. I compiled and ran mgrid using the 7.0-7 compiler, targeting the 32-bit instruction set of the Xeon, and compared it to the run targeting the Pentium III instruction set. We should expect some improvement here, since mgrid has lots of vectorizable double precision operations. However, my results show only about a 2 percent improvement, quite a surprise.
My final architectural experiment isolates the benefits of moving to the x86-64 instruction set. I compiled and ran mgrid with the 7.0-7 compiler targeting the full 64-bit instruction set, and compared this to the run using the 32-bit instruction set. For mgrid, this improvement is about 22 percent. So the total speedup due to hardware and instruction set is between 15 and 19.
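For reference, target selection in these experiments is controlled by the PGI -tp flag; the invocations would look something like the lines below. The -fast and -Mipa=fast,inline options are the ones quoted above, but the exact -tp spellings are from memory and vary by compiler release, so treat these as representative rather than exact:

      pgf77 -fast -Mipa=fast,inline -tp p6    mgrid.f    (Pentium III)
      pgf77 -fast -Mipa=fast,inline -tp p7    mgrid.f    (32-bit Xeon, SSE2)
      pgf77 -fast -Mipa=fast,inline -tp p7-64 mgrid.f    (64-bit Xeon, x86-64)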
But this is supposed to be a compiler column. Can we attribute the rest of the speedup to compiler improvements? Certainly, to take advantage of the new instructions (SSE2, 64-bit instructions), compilers had to be significantly enhanced. However, it's not fair to chalk up all those speedups in the compiler column.
Commercial compilers have advanced greatly since 1999; for instance, as I've mentioned in past columns, all current commercial compilers now use some form of interprocedural or whole program analysis and optimization. Some of this controls inlining, some propagates other information across subprogram and file boundaries.
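As a toy illustration of the kind of information such analysis can propagate (the routines here are invented, not taken from mgrid): if whole-program analysis discovers that the only call to smooth passes n = 64, it can substitute the constant into the callee, turning an unknown trip count into a known one and enabling unrolling or vectorization; the same machinery drives inlining decisions.

c     Invented example: with interprocedural analysis, the compiler
c     can see that every call site passes n = 64, propagate the
c     constant into smooth, and optimize the loop with a known
c     trip count.
      subroutine smooth(a, b, n)
      integer n, i
      double precision a(n), b(n)
      do i = 2, n-1
         a(i) = 0.25d0*(b(i-1) + 2.0d0*b(i) + b(i+1))
      end do
      end

      program demo
      integer i
      double precision x(64), y(64)
      do i = 1, 64
         y(i) = dble(i)
      end do
      call smooth(x, y, 64)
      print *, x(2), x(63)
      end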
I ran two more experiments to isolate the effects of the compiler. I took the 3.2-4a compiled code and the 7.0-7 compiled code, and ran both on the Pentium III. Here I saw a speedup of 1.5, a 50 percent improvement from the compiler alone. Not nearly the factor of 15-19 we get from hardware, but these are multiplicative improvements. I then took those same binaries and ran them on the Xeon. Note, these binaries were optimized for the Pentium III, not the Xeon. Nevertheless, the compiler speedup on the Xeon was even better, almost 1.8. (The numbers hang together nicely: going from the old compiler on the Pentium III to the new compiler on the Xeon by either path gives 1.5 times 15, or equivalently 12.5 times 1.8, about 22.5; folding in the SSE2 and x86-64 factors of 1.02 and 1.22 yields roughly the measured total speedup of 28.)
So, what is it about mgrid that lets the compiler deliver a 50-80 percent performance improvement over seven years? I chose mgrid for a reason: not because it benefits most from compiler improvements since 1999, but because there's one specific optimization that applies. Let's look at one of the key loops in mgrid:
      DO I3 = 2, N-1
      DO I2 = 2, N-1
      DO I1 = 2, N-1
      R(I1,I2,I3) = V(I1,I2,I3)
     >      -A(0)*( U(I1,  I2,  I3  ) )
     >      -A(1)*( U(I1-1,I2,  I3  ) + U(I1+1,I2,  I3  )
     >            + U(I1,  I2-1,I3  ) + U(I1,  I2+1,I3  )
     >            + U(I1,  I2,  I3-1) + U(I1,  I2,  I3+1) )
     >      -A(2)*( U(I1-1,I2-1,I3  ) + U(I1+1,I2-1,I3  )
     >            + U(I1-1,I2+1,I3  ) + U(I1+1,I2+1,I3  )
     >            + U(I1,  I2-1,I3-1) + U(I1,  I2+1,I3-1)
     >            + U(I1,  I2-1,I3+1) + U(I1,  I2+1,I3+1)
     >            + U(I1-1,I2,  I3-1) + U(I1-1,I2,  I3+1)
     >            + U(I1+1,I2,  I3-1) + U(I1+1,I2,  I3+1) )
     >      -A(3)*( U(I1-1,I2-1,I3-1) + U(I1+1,I2-1,I3-1)
     >            + U(I1-1,I2+1,I3-1) + U(I1+1,I2+1,I3-1)
     >            + U(I1-1,I2-1,I3+1) + U(I1+1,I2-1,I3+1)
     >            + U(I1-1,I2+1,I3+1) + U(I1+1,I2+1,I3+1) )
      ENDDO
      ENDDO
      ENDDO

Several loops in mgrid are similar to this one. This loop fetches 28 array elements and performs 27 double precision floating point additions per iteration. However, compilers now recognize that in the inner loop, the value computed as 'U(I1+1,I2,I3-1)+U(I1+1,I2,I3+1)' will be used again in the next iteration as 'U(I1,I2,I3-1)+U(I1,I2,I3+1)'. Saving that value in a register eliminates two array element fetches and one addition. In fact, this pattern occurs so often in this loop that the optimized inner loop loads only 12 array elements and performs 15 floating point additions, a savings of about half. Such optimizations have appeared in the academic literature with research languages, such as ZPL, but had not been implemented in production commercial compilers before SPEC CPU2000 and mgrid. At PGI, we designed our implementation as an enhancement of the Scalar Replacement optimization developed at Rice University. We call it Loop-Carried Redundancy Elimination, or LRE, and it is responsible for improving our compiler's performance on mgrid by 15-20 percent, depending on the target machine.
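To make the transformation concrete, here is a distilled, hand-written analogue of what LRE does. The arrays, the simplified stencil, and the scalar names sm1, s0 and sp1 are invented for illustration; the pattern mirrors the mgrid loop, where a pair sum such as U(j,I2,I3-1)+U(j,I2,I3+1) is needed at j = I1-1, I1 and I1+1, and the real optimized loop carries several such families of sums at once:

c     Distilled example of Loop-Carried Redundancy Elimination.
c     In the original loop, the sum u(j)+w(j) is recomputed (and
c     u(j), w(j) reloaded) at three consecutive iterations, as
c     j = i-1, i and i+1.
      program lre
      implicit none
      integer n, i
      parameter (n = 1000)
      double precision u(n), w(n), r(n), sm1, s0, sp1
      do i = 1, n
         u(i) = dble(i)
         w(i) = dble(2*i)
      end do
c     Original form: 6 loads and 5 additions per iteration.
      do i = 2, n-1
         r(i) = (u(i-1)+w(i-1)) + (u(i)+w(i)) + (u(i+1)+w(i+1))
      end do
c     After LRE: the two previous pair sums are carried in the
c     scalars sm1 and s0, leaving 2 loads and 3 additions per
c     iteration.
      sm1 = u(1) + w(1)
      s0  = u(2) + w(2)
      do i = 2, n-1
         sp1 = u(i+1) + w(i+1)
         r(i) = sm1 + s0 + sp1
         sm1 = s0
         s0  = sp1
      end do
      print *, r(2), r(n-1)
      end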