Compilers and More: Precision and Accuracy

By Michael Wolfe

May 11, 2007

According to Wikipedia, precision is defined in engineering and science as the degree of agreement among a series of individual measurements or results. For instance, if I measure my height (nonmetrically) five days in a row and come up with 73.8″, 73.5″, 73.75″, 73.7″ and 73.6″, I could state that I’m just over 6 feet tall. The precision of the measurements is about three tenths of an inch, the range of the measurements.

Accuracy is the degree of agreement of a measured value with the actual value. If you know the actual value, you hardly need to measure it, except perhaps to determine the accuracy of the measuring tool. If you don’t know the actual value, you might infer the accuracy of the measurement from the precision of your measurements, if you believe your measuring tool is reasonably accurate. When asked, I usually say I’m 6’2″ tall, which is reasonably accurate, though not exact.

In computing, we use precision in two ways. One is the number of bits in the representation of the mantissa of a floating point number; for instance, single-precision IEEE floating point gives 24 bits of precision (counting the hidden bit for normalized numbers), about 7 decimal digits; double-precision (or full precision, as numerical analysts used to call it) provides 53 bits of precision, about 15 decimal digits.
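
For reference, a couple of lines of C confirm these limits; the standard <float.h> constants report the decimal digits each format can guarantee and the spacing just above 1.0:

    /* Print the precision limits of single and double precision:
       FLT_DIG/DBL_DIG are the decimal digits guaranteed to survive a
       round trip through each format; the epsilons are the spacing at 1.0. */
    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        printf("single: %d guaranteed decimal digits, epsilon = %e\n",
               FLT_DIG, FLT_EPSILON);
        printf("double: %d guaranteed decimal digits, epsilon = %e\n",
               DBL_DIG, DBL_EPSILON);
        return 0;
    }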

We also use precision as a measure of how many of these bits are correct. For instance, if I represent a very precise measurement of 1.000100 as a single precision number, then subtract 1.0 from that, the floating point result is 1.000000E-4; since the original measurement was only precise to seven digits, this difference is only precise to three digits, despite being represented with seven digits. As another simple example, try summing 1/i in single precision where i ranges from 1 to 10,000,000; then sum the numbers in the other order, from 1/10,000,000 to 1/1; the answers differ in the second digit. The computations are mathematically identical, but computationally different, because of the limited precision of the computer floating point representation. The key is that the order of computation can change the floating point answers, due to differences in the order and magnitude of rounding, which in turn stem from the limited size of the representation. Numerical analysts earn a living determining how accurate and precise a computed answer is, and how to formulate the computation to improve accuracy and precision. The rest of us address the problem by going to double precision and hoping for the best.
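
The ordering experiment above is easy to try; here is a minimal single precision sketch of it in C:

    /* Sum 1/i in single precision forward and then backward; the two
       mathematically identical sums differ in the second digit. */
    #include <stdio.h>

    int main(void)
    {
        const int n = 10000000;
        float forward = 0.0f, backward = 0.0f;

        for (int i = 1; i <= n; i++)       /* 1/1 + 1/2 + ... + 1/n */
            forward += 1.0f / (float)i;
        for (int i = n; i >= 1; i--)       /* 1/n + ... + 1/2 + 1/1 */
            backward += 1.0f / (float)i;

        printf("forward  sum: %.7f\n", forward);
        printf("backward sum: %.7f\n", backward);
        return 0;
    }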

It’s well known that a compiler can change the order of computations, and yes, this can affect the answers delivered. This dates back at least to the early days of vectorizing compilers for the Texas Instruments ASC, Control Data Cyber 205, and Cray 1. The simplest example was the way a summation was accumulated in vector mode. In our simple example, we showed two orders (forward and backward). These vector computers accumulated some number of intermediate partial sums, then added up the partial sums for a final result. The Cray compiler, for instance, would accumulate a sum in groups of 64 partial sums, the length of the vector register. The Cyber 205 would accumulate eight partial sums, the pipeline depth of its floating point adder. In all cases, the compiler had a flag to preserve the original floating point computation order, but using it could be significantly slower.

This is not just historically interesting; many current processors have multimedia or SIMD instruction set extensions, such as the SSE and SSE2 instructions on the Intel and AMD x64 processors. Current compilers use the same code generation scheme to vectorize summations as compilers did 30 years ago for vector machines, though the ‘vector length’ may be only 2 or 4, the size of the multimedia registers.
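
In scalar C, the code such a compiler effectively generates for a summation looks something like the following sketch, with four partial sums standing in for a 4-wide SSE register; the rounding order differs from the simple loop, and so, in the low-order bits, can the result.

    /* A simple left-to-right sum versus a partial-sum reduction of the
       kind a vectorizing compiler generates (four interleaved partial
       sums, combined at the end). */
    #include <stdio.h>

    float sum_simple(const float *x, int n)
    {
        float s = 0.0f;
        for (int i = 0; i < n; i++)
            s += x[i];
        return s;
    }

    float sum_partial(const float *x, int n)
    {
        float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
        int i;
        for (i = 0; i + 3 < n; i += 4) {   /* four interleaved partial sums */
            s0 += x[i];
            s1 += x[i + 1];
            s2 += x[i + 2];
            s3 += x[i + 3];
        }
        for (; i < n; i++)                 /* leftover elements */
            s0 += x[i];
        return (s0 + s1) + (s2 + s3);      /* combine the partial sums */
    }

    int main(void)
    {
        enum { N = 1000000 };
        static float x[N];
        for (int i = 0; i < N; i++)
            x[i] = 1.0f / (float)(i + 1);
        printf("simple sum  : %.7f\n", sum_simple(x, N));
        printf("partial sums: %.7f\n", sum_partial(x, N));
        return 0;
    }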

In addition, a language definition may allow some freedom of evaluation order. Fortran allows a compiler to evaluate any mathematically equivalent expression, provided that parentheses are not violated. Classic (pre-ANSI) C did not even protect parenthesized subexpressions, explicitly allowing reassociation and even redistribution of multiplication over addition, and many compilers will still take those liberties under relaxed floating point options.
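
A two-line example shows why this freedom matters: the two groupings below are mathematically identical, but in single precision one produces 1 and the other 0.

    /* Reassociation changes the answer: (a + b) + c versus a + (b + c). */
    #include <stdio.h>

    int main(void)
    {
        float a = 1.0e8f, b = -1.0e8f, c = 1.0f;

        float left  = (a + b) + c;   /* (1e8 - 1e8) + 1 = 1 */
        float right = a + (b + c);   /* -1e8 + 1 rounds back to -1e8, so the sum is 0 */

        printf("(a + b) + c = %g\n", left);
        printf("a + (b + c) = %g\n", right);
        return 0;
    }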

Why this matters

It’s also well known that floating point operations, in spite of gigahertz clocks, functional unit pipelines and all the other architectural magic, are relatively slow. Multiplication is slower than addition, and division or square root is up to ten times slower than that. For those in the high performance world, reducing the multiply count, or replacing multiplication by addition, or, even better, replacing division by multiplication can give a big performance boost. It’s hard to overestimate the importance of benchmark performance in the world of compiler and processor vendors, where bragging rights, pricing, and sales can all depend on a particular benchmark or suite.

So let’s take the case of a particular benchmark, Gromacs, from the SPEC CPU2006 suite. This benchmark has a large number of single precision square root function calls, many of which appear in a denominator, as in 1.0/sqrt(rsq11). Several current microprocessors have an instruction to approximate a single-precision inverse square root, including the PowerPC (AltiVec), Intel (SSE), and AMD (3DNow! and SSE) processors. In all cases, the instruction gives a low precision approximation, correct to about half the number of mantissa bits. One Newton-Raphson iteration is enough to improve the result to full precision.

Such a technique is well known; the Cray 1 implemented a reciprocal approximation instruction instead of a divide instruction, requiring the compiler to generate a Newton-Raphson iteration to produce a full precision result. This allowed divides to be vector pipelined without undue hardware requirements.

The square root and divide units are so slow that issuing the reciprocal square root approximation instruction, followed by a Newton-Raphson iteration involving four multiplies and a subtract, is still quite a bit faster. In some applications (graphics, for example), full precision may be overkill anyway, so perhaps the approximation by itself is sufficient.
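
As a concrete sketch (assuming an x86 target with SSE; the intrinsics are from the standard xmmintrin.h header), the refinement looks like this:

    /* Reciprocal square root: a low-precision hardware estimate (rsqrtss,
       roughly half the mantissa bits) refined to nearly full single
       precision by one Newton-Raphson step -- four multiplies and a subtract. */
    #include <stdio.h>
    #include <math.h>
    #include <xmmintrin.h>               /* SSE intrinsics */

    float rsqrt_refined(float x)
    {
        /* low-precision estimate y ~ 1/sqrt(x) from the hardware */
        float y = _mm_cvtss_f32(_mm_rsqrt_ss(_mm_set_ss(x)));
        /* one Newton-Raphson iteration: y' = y * (1.5 - 0.5*x*y*y) */
        return y * (1.5f - 0.5f * x * y * y);
    }

    int main(void)
    {
        float x = 3.14159f;
        printf("refined estimate: %.9f\n", rsqrt_refined(x));
        printf("1.0f / sqrtf(x) : %.9f\n", 1.0f / sqrtf(x));
        return 0;
    }

On typical hardware the two printed values agree except in the last digit or so, which is exactly the ULP-level difference discussed next.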

So, all is well, right?

The problem is that one iteration of Newton-Raphson is enough to bring the approximation to full precision, except for the last ULP (unit in the last place) or two. This means that the optimized code in this benchmark generates slightly different results than the standard code, which uses a library routine or full-precision square root and divide instructions.

So, are the answers right or not?

This is a tough question, perhaps one that doesn’t make sense. Recall that there’s a certain amount of roundoff error in any computer floating point arithmetic, so the answer is “wrong” whichever way you compute this value. Moreover, the computation is probably based on some initial data that was measured or approximated and so doesn’t have that much precision to start with.

So, are the answers okay or not?

As long as the programmer and the user realize and agree with the limitations of the generated code, yes, they are okay. There are really only two problems. Over the history of floating point arithmetic, we’ve had a myriad of floating point formats, from base 16 to base 2, with or without rounding, with wildly varying dynamic range. In the 1980s, IEEE sponsored an effort to standardize floating point formats, and all current mainstream processors use the resulting IEEE 754 formats. This means that the answers computed on one brand of processor, be it AMD, IBM, or Intel, are going to be the same as on any other brand, such as Motorola, MIPS, or SPARC. But here, we’re changing the computation, so the first problem is that answers on different hardware will be close, but may no longer be exactly the same.

The other problem has to do with the programmer and user agreeing to those limitations. Compilers that implement this performance optimization will enable or disable it under programmer control, usually with a command-line option (for those of us still using command lines). The issue is what the default should be: would you rather specify an additional option to relax the precision of these operations, or an additional option to maintain their precision? This is largely a marketing question, though having an option to ‘maintain precision’ makes one wonder what the default behavior is.

As with all SPEC CPU2006 benchmarks, Gromacs checks that the computed answers are correct. However, the check is weak enough that the reciprocal square root approximation is good enough; the Newton-Raphson iteration isn’t even necessary. And yes, some of those SPEC submissions use this “optimization” in their peak numbers. Beware! Dragons lurk here!

Perhaps compilers could take another step and provide an option that would aggressively reorder and reassociate operations, and in general play heck with the floating point rounding properties of the program. If you run your program in standard and reordered mode and the answers are the same, you could have some degree of confidence that the language, the compiler, and the hardware did nothing bad to your computation (though the physics, of course, is up to you). However, if the answers differed wildly, then perhaps you need to bring in that numerical analyst to determine just what makes your program so sensitive.

As clock rates peak, we will explore ever more bizarre mechanisms to improve performance. There’s a lot of buzz about GPGPUs (General Purpose computing on Graphics Processing Units), which are cheap and fast, but which don’t (yet) implement all the IEEE rounding modes. We could quickly degrade into a world much like that of 20 years ago, where you had to consider the floating point representation and implementation of each machine before choosing your hardware.

Precision control can also affect performance. Recent numerical library work at Oak Ridge National Labs and the University of Tennessee uses the fact that single precision (32-bit) floating point operations run much faster than double precision on many current processors. The packed SSE instructions on the x64 processors, for instance, can do twice as many 32-bit operations per cycle. This encourages them to redesign their libraries to do the bulk of the work in single precision, then to use iterative refinement in double precision to provide a high precision result with low precision speed.
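
A toy sketch of the approach (a small dense solve with naive Gaussian elimination, not the actual ORNL/Tennessee library code): do the solve in single precision, compute the residual b - A*x in double precision, and apply single-precision correction solves to refine the result.

    /* Mixed-precision iterative refinement on a tiny dense system:
       the bulk of the arithmetic is single precision; only the residual
       and the running solution are kept in double precision. */
    #include <stdio.h>

    #define N 3

    /* naive single-precision Gaussian elimination solve of A*x = b
       (no pivoting; fine for this diagonally dominant example) */
    static void solve_single(const float A[N][N], const float b[N], float x[N])
    {
        float a[N][N], y[N];
        for (int i = 0; i < N; i++) {
            y[i] = b[i];
            for (int j = 0; j < N; j++)
                a[i][j] = A[i][j];
        }
        for (int k = 0; k < N; k++) {            /* forward elimination */
            for (int i = k + 1; i < N; i++) {
                float m = a[i][k] / a[k][k];
                for (int j = k; j < N; j++)
                    a[i][j] -= m * a[k][j];
                y[i] -= m * y[k];
            }
        }
        for (int i = N - 1; i >= 0; i--) {       /* back substitution */
            float s = y[i];
            for (int j = i + 1; j < N; j++)
                s -= a[i][j] * x[j];
            x[i] = s / a[i][i];
        }
    }

    int main(void)
    {
        const float  A[N][N] = {{4, 1, 2}, {1, 5, 1}, {2, 1, 6}};
        const double b[N]    = {7, 8, 9};

        float bf[N], xf[N];
        double x[N];
        for (int i = 0; i < N; i++)
            bf[i] = (float)b[i];

        solve_single(A, bf, xf);                 /* bulk of the work: single */
        for (int i = 0; i < N; i++)
            x[i] = xf[i];

        for (int iter = 0; iter < 3; iter++) {   /* refinement loop */
            double r[N];
            float rf[N], d[N];
            for (int i = 0; i < N; i++) {        /* residual in double */
                r[i] = b[i];
                for (int j = 0; j < N; j++)
                    r[i] -= (double)A[i][j] * x[j];
                rf[i] = (float)r[i];
            }
            solve_single(A, rf, d);              /* correction solve: single */
            for (int i = 0; i < N; i++)
                x[i] += d[i];
        }

        printf("refined solution: %.15f %.15f %.15f\n", x[0], x[1], x[2]);
        return 0;
    }

In a real library the expensive single-precision factorization would be computed once and reused for each correction solve, which is where the speed advantage comes from; this sketch simply re-eliminates each time to stay short.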

On the other hand, some scientists are concerned about how quickly roundoff error accumulates when performing peta-operations, perhaps reducing the number of significant digits to zero. Eighteen years ago, David Bailey and others predicted that IEEE double precision floating point would run out of bits as problem sizes increased, and suggested that hardware 128-bit floating point would be needed. Perhaps those SSE registers will soon be holding a single value.

—–

Michael Wolfe has developed compilers for over 30 years in both academia and industry, and is now a senior compiler engineer at The Portland Group, Inc. (www.pgroup.com), a wholly-owned subsidiary of STMicroelectronics, Inc. The opinions stated here are those of the author, and do not represent opinions of The Portland Group, Inc. or STMicroelectronics, Inc.
