November 9, 2010

Looking to Fault-Tolerant Software

Tiffany Trader

If time marches on, computing marches up. In the current terascale and early petascale era, we are seeing thousands of processors on a given machine, and connecting all of those processors requires even more hardware. The more hardware there is, the greater the odds of component failure. Such is the subject of an article at Scientific Computing, in which author Doug Baxter urges his audience to think about accommodating hardware failure by redesigning the software.

Hardware fault-tolerance measures are in use today, but the drawbacks are many. Predicting when hardware is about to fail, making components hot swappable, and proactively rescheduling the software running on failing parts are all current ways of dealing with the problem, but they help only when the hardware is actively monitored. Another workaround is hardware redundancy, though the expense can make it impractical. There is also checkpoint restarting, but the cost and logistics of checkpointing massive volumes of distributed memory can cancel out the benefits.
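
To make the checkpoint/restart idea concrete, here is a minimal Python sketch, not drawn from Baxter's article: an application periodically writes its own state to disk and resumes from the last checkpoint after a crash. The file name, state layout, and checkpoint interval are all hypothetical, and a real HPC code would have to checkpoint distributed memory across many nodes, which is exactly where Baxter says the costs mount.

```python
import os
import pickle

CHECKPOINT = "state.ckpt"  # hypothetical checkpoint file name

def load_state():
    """Resume from the last checkpoint if one exists, else start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "accum": 0.0}

def save_state(state):
    """Write the checkpoint atomically so a crash mid-write leaves the old one intact."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)  # atomic rename

state = load_state()
for step in range(state["step"], 100_000):
    state["accum"] += step * 0.001   # stand-in for the real computation
    state["step"] = step + 1
    if state["step"] % 10_000 == 0:  # checkpoint interval is arbitrary here
        save_state(state)
print("done at step", state["step"])
```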

It is for these reasons that Baxter recommends looking to the software design community to achieve fault-tolerant computing. He reports that researchers have started working on this goal and sorts their efforts into two groups: data-centric software and process-centric software. Baxter proceeds to explore a process-centric strategy. For process-centric HPC codes to accommodate hardware failures, Baxter says, there must first be a shift in software design paradigms and a discarding of outmoded assumptions. Among the latter are the assumptions that input/output operations never fail and are relatively inexpensive, and that communications calls always succeed. The idea Baxter chiefly sets out to debunk, and the one he says is particularly entrenched, is that a consistent set of resources is available for the duration of a computation. He goes on to make his case in detail, including possible pitfalls with suggested solutions.
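
As an illustration of discarding the "communications calls always succeed" assumption, the following Python sketch wraps an unreliable send in retries and replica failover. The transport, node names, and failure rate are all invented for the example; the point is only that the calling code no longer treats a single communication call as infallible.

```python
import random
import time

class CommError(Exception):
    """Stand-in for a failed communication call."""

def unreliable_send(node, payload):
    """Hypothetical transport that drops ~20% of calls to mimic a flaky link."""
    if random.random() < 0.2:
        raise CommError(f"link to {node} dropped")
    return f"{node} acknowledged {len(payload)} bytes"

def send_with_failover(nodes, payload, retries=3, backoff=0.1):
    """Retry each replica in turn rather than assuming the call always succeeds."""
    for node in nodes:
        for attempt in range(retries):
            try:
                return unreliable_send(node, payload)
            except CommError:
                time.sleep(backoff * (attempt + 1))  # simple linear backoff
    raise CommError("all replicas unreachable")

print(send_with_failover(["node-a", "node-b", "node-c"], b"block-42"))
```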

In the end, Baxter calls for the software developer community to “design locally synchronized, dynamically scheduled, and hierarchically managed applications that can complete computations despite the expected modest number of hardware component failures.” Imagine an application that can sense a hardware failure and simply work around it, like a car swerving past a large pothole and continuing on to its destination.
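
A toy version of that behavior might look like the following Python sketch, an assumption-laden illustration rather than anything from the article: a dynamically scheduled task loop that drops a failed worker from its pool, requeues the interrupted task, and still completes the computation.

```python
import random
from queue import Queue

def flaky_worker(worker_id, task):
    """Hypothetical worker that fails mid-task about 10% of the time."""
    if random.random() < 0.1:
        raise RuntimeError(f"worker {worker_id} lost")
    return task * task

tasks = Queue()
for t in range(20):
    tasks.put(t)

workers = ["w0", "w1", "w2", "w3"]  # invented worker pool
results = {}

while not tasks.empty():
    task = tasks.get()
    wid = random.choice(workers)  # dynamic scheduling stand-in
    try:
        results[task] = flaky_worker(wid, task)
    except RuntimeError:
        if len(workers) > 1:
            workers.remove(wid)  # route around the failed component
        tasks.put(task)          # requeue so the computation still completes

print(sorted(results.items()))
```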

Full story at Scientific Computing
