Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

November 4, 2013

Reining in Restarts Through Selective Recovery

Tiffany Trader

Computational failures take a steep toll in the HPC sciences. Events such as broken node electronics, software bugs, insufficient hardware resources, and communication faults stymie work on expensive machines and bedevil computer scientists. An article at Deixis Magazine chronicles the work of a Pacific Northwest National Laboratory researcher who is developing load balancing techniques to keep calculations running as smoothly as possible even in the wake of unforeseen mishaps.

Addressing fault tolerance grows more urgent as core counts proliferate and machines cross over from petaflop to exaflop-class territory. Sriram Krishnamoorthy and his research team have developed a technique called selective recovery that aims to minimize the negative impact of faults.

“The basic idea of dynamic load balancing is you can react to things like faults online,” Krishnamoorthy reports. “When a fault happens, we showed you could actually find what went bad due to the fault, recover that portion and only that portion and then re-execute it” while everything else continues to execute.

Under the current paradigm, when a system failure occurs, the process rolls back to the last checkpoint and the tasks are re-executed. It’s a tried-and-true method, but it is time-consuming and resource-intensive.

“When one process goes bad and you take a million of them back to the last good checkpoint, it’s costly,” Krishnamoorthy says. “We showed that the cost of a failure is not proportional to the scale at which it runs.”
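The asymmetry Krishnamoorthy describes can be sketched with a toy cost model (the numbers and functions below are illustrative assumptions, not figures from the article): under a global rollback, every process redoes its work since the last checkpoint, so the re-executed work grows with machine scale, while selective recovery repeats only the failed portion.

```python
# Toy cost model (assumed for illustration): units of re-executed work
# after a single process failure, under two recovery strategies.

def global_rollback_cost(num_procs, work_since_checkpoint):
    # All processes roll back to the last checkpoint and redo their work.
    return num_procs * work_since_checkpoint

def selective_recovery_cost(num_failed, work_since_checkpoint):
    # Only the failed processes' portion of the work is re-executed.
    return num_failed * work_since_checkpoint

# One process fails on a million-process run:
print(global_rollback_cost(1_000_000, 10))   # scales with machine size
print(selective_recovery_cost(1, 10))        # scales with the failure, not the machine
```

In this sketch, the cost of global rollback is proportional to the scale of the run, whereas selective recovery keeps the cost proportional to what was actually lost.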

Krishnamoorthy and his colleagues proposed a new framework, the Task Scheduling Library (TASCEL), for load balancing and fault tolerance, which they described at the International Supercomputing Conference in June. In the event of a failure, only the problematic section is rerun while the computer continues without interruption. The method employs a system of checks that discards duplicate results and synchronizes the outcome of each task. The overall job is tracked via data structures that are globally accessible, instead of being stored in local memory, which reduces communication costs.

The framework was initially developed to enable computational chemistry codes to make the jump from smaller clusters to highly parallel many-core machines. In a recent success story, the research team ran a computationally intensive code on 210,000 processor cores of Titan at Oak Ridge's Leadership Computing Facility, achieving more than 80 percent parallel efficiency.

Now the team is working to broaden the framework so it can apply to any algorithm with load-imbalance issues. The goal of exascale is a major motivator, and it means that Krishnamoorthy must work with one foot in the present and the other in the future. If exascale machines are to arrive within the next eight to ten years, load imbalance and fault tolerance will require this kind of dedicated attention.

Judging from the community support that his research has garnered, it appears Krishnamoorthy is on the right track. The computer scientist was awarded a DOE Early Career Research Program award, which provides $2.5 million over five years to explore exascale computing strategies. And recently, Krishnamoorthy was also recognized with PNNL’s 2013 Ronald L. Brodzinski Award for Early Career Exceptional Achievement.