Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

November 11, 2013

Programming for Unreliable Hardware

Tiffany Trader

As transistors approach atomic scales, reliability is increasingly jeopardized. Chipmakers keep finding technical workarounds to the miniaturization problem; however, prevailing wisdom maintains that current manufacturing techniques will sooner or later run out of steam, and Moore’s Law – the prediction that has yielded huge increases in semiconductor performance for nearly five decades – will be no more.

Computer engineers are scrambling to come up with alternatives to silicon-based CMOS, but some scientists are suggesting another possibility: let the computers make mistakes.

It’s a heresy to some. Aren’t computers supposed to be bastions of precision and accuracy? In a recently published paper, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory argue that many high-performance architectures already contain unreliable components that lead to soft errors, which silently corrupt computations. While full detection of these errors is costly in both time and energy, some applications can withstand a certain number of errors. Approximate computing applications, such as multimedia processing, machine learning, and big data analytics, are naturally tolerant to soft errors. If a few pixels in a high-definition video frame are improperly decoded, it won’t impact the viewing experience, and relaxing the requirement for perfect decoding allows the process to run faster and with less energy.

Under the direction of Martin C. Rinard, the research group developed a new programming language, called Rely, that allows developers to specify when errors may be tolerable. The system then calculates the probability that the software will produce the correct result when executed on unreliable hardware.

“If the hardware really is going to stop working, this is a pretty big deal for computer science,” says Rinard, a professor in the Department of Electrical Engineering and Computer Science. “Rather than making it a problem, we’d like to make it an opportunity. What we have here is a … system that lets you reason about the effect of this potential unreliability on your program.”

To exploit the benefits of unreliable hardware, the developer adds a “dot” to the appropriate line of code. For example, the instruction “total = total + new_value” becomes “total = total +. new_value.” That “dot” tells Rely to evaluate the program’s execution using the failure rates that have been specified. The default version of Rely (sans dot) assumes a failure-free mode of operation, which is likely to incur longer execution times and higher power consumption.
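The behavior of such a dotted operation can be illustrated with a toy model. The sketch below is not Rely’s actual syntax or semantics; it simply models, in Python, an addition that succeeds with a specified probability (the failure rate and the bit-flip corruption are illustrative assumptions, not details from the paper):

```python
import random

def unreliable_add(a, b, p_correct=0.9999, rng=random):
    """Toy model of a '+.'-style operation: returns the correct sum
    with probability p_correct, otherwise a corrupted value.
    (Illustrative only; not Rely's actual failure model.)"""
    if rng.random() < p_correct:
        return a + b
    # Model corruption as a low-order bit flip in the result.
    return (a + b) ^ 1

# With p_correct=1.0 this behaves like ordinary, failure-free addition.
print(unreliable_add(2, 3, p_correct=1.0))
```

Setting `p_correct=1.0` corresponds to Rely’s default, failure-free mode of operation.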

At the compilation stage, Rely turns the probability that each instruction will yield the right answer into an estimation of the overall variability of the program’s output.
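For a straight-line sequence of instructions, that composition is easy to picture: the program produces the right answer only if every instruction does, so the overall reliability is the product of the per-instruction success probabilities. The rates below are illustrative, not figures from the paper:

```python
def sequence_reliability(success_probs):
    """Probability that a straight-line instruction sequence yields
    the correct result: the product of per-instruction success
    probabilities (illustrative of the idea, not Rely's analysis)."""
    r = 1.0
    for p in success_probs:
        r *= p
    return r

# Three unreliable additions, each correct 99.99% of the time:
print(sequence_reliability([0.9999, 0.9999, 0.9999]))
```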

“One thing you can have in programs is different paths that are due to conditionals,” says Sasa Misailovic, a graduate student working on the project and co-author of the paper. “When we statically analyze the program, we want to make sure that we cover all the bases. When you get the variability for a function, this will be the variability of the least-reliable path.”
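That conservative treatment of branches can be sketched in one line: if each control-flow path through a function has its own reliability, the function’s reported reliability is the minimum over those paths. The path values here are made-up examples:

```python
def function_reliability(path_reliabilities):
    """Conservative bound for a function with branches: it is only
    as reliable as its least-reliable control-flow path
    (a sketch of the idea, not Rely's static analysis)."""
    return min(path_reliabilities)

# Two branches of an if/else with different reliabilities:
print(function_reliability([0.9997, 0.9990]))
```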

If the overall probability of success is unacceptably low, the programmer can retool the code, removing and adding dots until the desired reliability level is achieved. According to the researchers, this process generally takes no more than a few seconds.

The developers are working on another version of Rely that will be even easier to use. The programmer would just specify the accepted failure rate for whole blocks of code and the system would automatically determine how the code should be modified to both meet those requirements and maximize either power savings or speed of execution.

The authors have identified a trend in which emerging hardware architectures trade reliability for energy or performance savings. Efforts that researchers are pursuing include probabilistic CMOS chips, stochastic processors, error-resilient architectures, and unreliable memories. Certain classes of applications (multimedia processing, machine learning, etc.) will be well poised to exploit such components, and, in the authors’ words, “Rely aims to help developers better understand and control the behavior of their applications on such platforms.”

According to Dan Grossman, an associate professor of computer science and engineering at the University of Washington, “The increased efficiency in the hardware is very, very tempting. We need software work like this work in order to make that hardware usable for software developers.”
