October 2, 2013

Controlling Soft Errors at Scale

Tiffany Trader

There are many important issues to resolve in advancing HPC toward the exascale era, but among all these variables, roughly five sticking points really stand out. One of them is controlling soft errors.

As the number of cores per machine increases, transient hardware faults known as soft errors begin to threaten the validity of simulations. When you consider that exascale machines will employ billion-way parallelism, the need to address this problem is clear.

A team of scientists from PNNL performed experiments revealing the high risk that soft errors pose on large-scale computers. The researchers found that without intervention, soft errors invalidate simulations in a large fraction of cases, but they also developed techniques that correct more than 95 percent of them.

According to their paper in the Journal of Chemical Theory and Computation, the next generation of systems will combine millions of cores, which will increase the odds of soft errors and, with them, of unexpected results.

“Even if every core is highly reliable the sheer number of them will mean that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular soft errors caused by intermittent incorrect behavior of the hardware are a concern as they lead to silent data corruption,” note the authors.

The only way to deal with these errors is to identify and remedy them. The paper explores the impact of soft errors on optimization algorithms, which start with an initial guess and iteratively reduce the error until a correct solution is obtained. For a concrete example, the team used the Hartree–Fock method from quantum chemistry.
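The self-correcting character of such iterative schemes is easy to demonstrate on a toy problem. The sketch below is purely illustrative and is not the paper's Hartree–Fock code: it runs a simple Jacobi iteration in Python and injects a single perturbation into the iterate partway through the run (the function names, system, and error magnitudes are all made up for the example). A small perturbation costs only a few extra iterations, while a large one demands many more; in a nonlinear self-consistent-field calculation, a large corruption can instead drive the run to an outright wrong answer.

```python
# Illustrative sketch only (not the paper's method): Jacobi iteration on a
# small linear system, with a simulated soft error injected mid-run.
import numpy as np

def jacobi(A, b, x0, iters=200, tol=1e-10, corrupt_at=None, magnitude=0.0):
    """Solve A x = b by Jacobi iteration, optionally corrupting one entry."""
    D = np.diag(A)                      # diagonal of A
    R = A - np.diagflat(D)              # off-diagonal part
    x = x0.copy()
    for k in range(iters):
        if k == corrupt_at:
            x[0] += magnitude           # simulated soft error (silent data corruption)
        x = (b - R @ x) / D             # Jacobi update
        residual = np.linalg.norm(A @ x - b)
        if residual < tol:
            return x, k, residual
    return x, iters, residual

# Diagonally dominant system, so plain Jacobi converges.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x0 = np.zeros(3)

for mag in (0.0, 1e-3, 1e6):            # no error, small error, large error
    _, its, res = jacobi(A, b, x0, corrupt_at=10, magnitude=mag)
    print(f"perturbation {mag:>9}: converged after {its} iterations, residual {res:.2e}")
```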

The results indicate that the optimization algorithms coped well with soft errors of small magnitude but not with large errors; in other words, calculations still failed in a significant fraction of cases. The team suggests that detection and correction mechanisms tailored to different classes of data structures would allow large errors to be caught and repaired. They conclude that more than 95 percent of soft errors can be corrected with these techniques at only a modest increase in computational cost.
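One way to picture that kind of safeguard, without claiming it is the exact mechanism in the PNNL paper, is a data structure that carries a checksum and a physically motivated bound on its entries and is checked between iterations. The Python sketch below uses hypothetical names (GuardedArray, check_and_repair) to show the idea: a corrupted value is detected and the array is rolled back to the last known-good copy.

```python
# Hedged sketch of a generic protection scheme (not necessarily the paper's):
# guard a numerical array with a checksum and a physical bound, then detect
# silent corruption between iterations and roll back to a known-good copy.
import numpy as np
import zlib

class GuardedArray:
    """Array wrapper that detects and rolls back silent data corruption."""
    def __init__(self, values, max_abs):
        self.values = np.asarray(values, dtype=np.float64)
        self.max_abs = max_abs                    # physical bound on entries
        self._commit()

    def _checksum(self):
        return zlib.crc32(self.values.tobytes())

    def _commit(self):
        """Record a known-good copy and its checksum."""
        self._backup = self.values.copy()
        self._crc = self._checksum()

    def check_and_repair(self):
        """Return True if the data is valid; otherwise restore the backup."""
        if self._checksum() == self._crc and np.all(np.abs(self.values) <= self.max_abs):
            return True
        self.values = self._backup.copy()         # roll back corrupted data
        return False

    def update(self, new_values):
        """Accept a new iterate and commit it as the new known-good state."""
        self.values = np.asarray(new_values, dtype=np.float64)
        self._commit()

# Example: a corruption that breaks the checksum and the bound is undone.
density = GuardedArray(np.full(4, 0.5), max_abs=2.0)
density.values[2] = 1e9                           # simulated bit flip / soft error
print("valid?", density.check_and_repair())       # False: repaired from backup
print(density.values)                             # back to the committed state
```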


The work was supported by the eXtreme Scale Computing Initiative using resources from the Environmental Molecular Sciences Laboratory, located at PNNL, as well as the PNNL Institutional Computing Facility. The paper was authored by PNNL researchers Hubertus J. J. van Dam, Abhinav Vishnu, and Wibe A. de Jong.
