By Matthijs van Leeuwen, Founder and CEO, Bright Computing.
It all starts as a perfectly good day. Then a compute job crashes without warning, and seconds later the whole job queue flushes. You push aside the cup of coffee you just poured and let out a stream of expletives. Yes, the Black Hole Node Syndrome has struck again.
The dreaded black hole node syndrome, or BHNS, silently and randomly kills productivity in HPC clusters. The workload manager reports that all nodes are running, yet sometimes a job executes and sometimes it crashes, leaving even the most talented system administrators with few clues about how to fix the problem.
In the worst cases, all the jobs are flushed from the queue for no apparent reason. Valuable compute hours are lost, energy wasted, other priorities sidelined; frustrated users hound beleaguered system administrators. Return on investment is impacted, both directly through increased operating expense, and indirectly, through the opportunity cost of downtime and redirected manpower.
Black Hole Node Syndrome: How Bad Is It?
“The black hole node syndrome is a serious issue,” said Dr. Don Holmgren, Computer Services Architect at Fermilab. “There are many subtle problems from apparently healthy nodes that can create cascading job failures. And the bigger the job, the more nodes, the higher the probability of failure.”
According to Dr. Holmgren, Fermilab runs nearly two million jobs per annum on its HPC clusters, which comprise 21,000 cores in aggregate. Not all of them make it through to completion: in spite of herculean efforts to contain the problem, Holmgren estimates that 0.5% of jobs fail as a result of unhealthy nodes, netting out to about 9,000 jobs per year.
“It’s painful for users to resubmit their affected jobs, especially when the cluster continues to perform correctly for other jobs,” continued Holmgren. “And it’s not always evident that there is a problem at first. The scheduler continuously assigns new jobs to the unhealthy nodes. We don’t realize there may be a problem until we notice that an extremely high rate of job failure has occurred on part of the cluster.”
In surveys conducted by Bright Computing, 64% of respondents indicated they have been impacted by black hole node syndrome. Many have worked to prevent job crashes: 23% of respondents reported that they have written scripts to prevent the problem, while 14% have purchased software to address it. Another 27% report that the problem “still drives me nuts.” The remaining 36% are either not impacted or do not realize there is a name for what is killing their jobs.
Jesse Trucks, HPC Cyber Security Administrator at Oak Ridge National Laboratory, didn’t lose his sanity, but he may have lost his patience now and then. His experience with BHNS began when he was a systems engineer at D.E. Shaw Research, LLC, where he oversaw daily data center operations.
“When a job fails, the source of the problem is often difficult to identify,” said Trucks. “A job that crashes can run fine on another cluster, or even on the same cluster if you run it again. You can spend hours in the data center pulling your hair out, pulling up floor tiles, or worse. Is it the machine? Is it the middleware? The code itself? Or the data?”
“Initially, if a job crashed and we suspected BHNS,” continued Trucks, “we would re-run the same job on a similar system to compare outputs. If the outputs weren’t identical, we had a problem, at which point we would pull the cluster offline and perform advanced diagnostic testing.”
“This approach works, but it’s costly in terms of downtime. So I spent a hundred hours or more over five months writing a series of scripts to predict and isolate the problem, a significant time investment for one custom solution.”
“The funny thing is that sys admins across the globe are taking this same approach. I have extensive experience with Cray systems, for example. There are obviously lots of Cray systems globally, and all of them work on generally the same principles. But since the workflow and types of jobs are unique within each data center, most admins tackling BHNS end up writing their own code. This seems highly inefficient,” said Trucks.
At Fermilab, Dr. Holmgren and his colleagues have written a wide array of scripts to perform pre- and post-job health checks, often developed on an iterative basis following job losses. But less-experienced HPC users face a long learning curve, punctuated by a great deal of frustration. In either case, these scripts and workarounds are seldom documented, leaving HPC facilities at risk when the specialists move on to other roles or leave the organization.
“In the end, everyone wants to prevent BHNS and optimize their workflow, because it’s expensive to run even a small cluster, let alone a data center with thousands of nodes,” added Trucks. “There is an obvious need for a versatile, easy-to-use cluster management solution which is capable of doing more than simply detecting dead nodes.”
Black Hole Node Syndrome: Common Causes
Node failure in HPC systems is common, and normally does not pose a problem. Most workload managers recognize dead nodes and simply work around them by excluding them from jobs.
The bigger problem, and the cause of BHNS, comes from nodes that are unhealthy in subtle ways. These sick nodes crash jobs, sometimes on a seemingly random basis. Examples of node “illnesses” that often slip under the workload manager’s radar yet are capable of crashing jobs include the following (a minimal check script is sketched after the list):
- GPU driver failed to load
- Unmounted parallel file system
- Full scratch disk
- Malfunctioning InfiniBand adapter
- Irregular system clock
- SMART errors on the disk drive
- System services not running
- External user authentication not working properly
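To make this concrete, here is a minimal sketch of the kind of pre-job health check an administrator might script, assuming a Linux node. The mount point (/lustre), scratch path (/scratch), free-space threshold, and checked service (sshd) are illustrative assumptions, not prescriptions:

```python
#!/usr/bin/env python3
# Minimal pre-job health-check sketch covering a few of the "illnesses"
# listed above. A non-zero exit code signals the workload manager (via
# its prolog hook) to sideline the node instead of starting the job.
import os
import shutil
import subprocess
import sys

def gpu_driver_loaded():
    # A loaded NVIDIA driver appears as a kernel module on Linux.
    with open("/proc/modules") as f:
        return any(line.startswith("nvidia") for line in f)

def parallel_fs_mounted(mount_point="/lustre"):          # assumed mount point
    return os.path.ismount(mount_point)

def scratch_has_space(path="/scratch", min_free_gb=10):  # assumed threshold
    return shutil.disk_usage(path).free >= min_free_gb * 1e9

def service_running(unit="sshd"):
    # `systemctl is-active --quiet` exits 0 only when the unit is active.
    return subprocess.run(["systemctl", "is-active", "--quiet", unit]).returncode == 0

CHECKS = {
    "gpu_driver": gpu_driver_loaded,
    "parallel_fs": parallel_fs_mounted,
    "scratch_space": scratch_has_space,
    "sshd": service_running,
}

def main():
    failed = []
    for name, check in CHECKS.items():
        try:
            ok = check()
        except OSError:
            ok = False  # an error while checking is itself a failure
        if not ok:
            failed.append(name)
    if failed:
        print("UNHEALTHY: " + ", ".join(failed), file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    main()
```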
Unless unhealthy nodes are detected, workload managers will continue to include them in jobs, causing continual failures or even cascading job losses as described by Dr. Holmgren.
In addition to crashing jobs, there are a number of performance-reducing “ailments” that most workload management software overlooks:
- Rogue processes present on the node
- Degraded RAID array
- Swap memory is being used
- Network interfaces not up
These “ailments” may not crash jobs, but they can significantly impede system performance.
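Several of these ailments can likewise be caught by a short periodic check. The sketch below, again assuming a Linux node, flags swap usage, a downed network interface, and a degraded md RAID array; the interface name ib0 and the zero-swap threshold are illustrative assumptions:

```python
#!/usr/bin/env python3
# Periodic "ailment" check sketch: these conditions degrade performance
# rather than crash jobs outright, so they are worth flagging separately.
import sys

def swap_in_use(max_used_kb=0):                  # assumed threshold
    # /proc/meminfo reports SwapTotal and SwapFree in kB on Linux.
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key.strip()] = int(value.split()[0])
    return info["SwapTotal"] - info["SwapFree"] > max_used_kb

def interface_up(name="ib0"):                    # assumed interface name
    try:
        with open(f"/sys/class/net/{name}/operstate") as f:
            return f.read().strip() == "up"
    except OSError:
        return False

def raid_degraded():
    # In /proc/mdstat a degraded Linux md array shows a "_" in its
    # status string, e.g. [U_] instead of [UU].
    try:
        with open("/proc/mdstat") as f:
            return "_]" in f.read()
    except OSError:
        return False  # no md arrays on this node

warnings = []
if swap_in_use():
    warnings.append("node is swapping")
if not interface_up():
    warnings.append("interface ib0 is down")
if raid_degraded():
    warnings.append("RAID array is degraded")
if warnings:
    print("; ".join(warnings), file=sys.stderr)
    sys.exit(1)
```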
The good news is that there are ways to defeat BHNS that do not involve man-months of scripting, pulling up floor tiles, or tearing out hair.
Black Hole Node Syndrome: Prevention
There are three approaches to combating black hole node syndrome; unlike men, they are not created equal:
There is, of course, a fourth option: do nothing and accept the job losses.
1. Extensive scripting: Veteran HPC specialists typically work hard to prevent black hole node syndrome by writing a wide array of scripts to perform pre- and post-job health checks, usually developed iteratively following job losses. This approach can solve the problem, but it is costly in time, and jobs are still lost while each iteration is worked out.
2. Workload managers plus scripting: Workload managers can address part of the problem, but custom scripts must still be written to fill the gaps these products leave. This approach potentially reduces the scope of scripting, but comes with similar opportunity costs and organizational risks.
3. Select a cluster management solution that ties health checks into the workload manager: The ideal scenario is to ensure that unhealthy nodes are identified and sidelined before jobs are run (see the sketch below). These health checks must be extensible and easily adapted to the cluster and its operating environment, to avoid the risks inherent in custom (and undocumented) scripting.
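As one illustration of what option 3 looks like under the hood, the sketch below wires node health checks into Slurm, whose HealthCheckProgram option (set in slurm.conf alongside HealthCheckInterval) runs a script on every node at a fixed interval. The healthcheck module and its run_all() helper are hypothetical stand-ins for checks like those sketched earlier, and Slurm is just one example of a workload manager:

```python
#!/usr/bin/env python3
# Sketch of a script suitable for Slurm's HealthCheckProgram, e.g. in
# slurm.conf:
#   HealthCheckProgram=/usr/local/sbin/node_healthcheck.py
#   HealthCheckInterval=300
# When a check fails, the node is drained so the scheduler stops placing
# new jobs on it while running jobs are allowed to finish.
import socket
import subprocess
import sys

import healthcheck  # hypothetical module bundling the checks sketched above

failed = healthcheck.run_all()  # assumed to return a list of failed check names
if failed:
    node = socket.gethostname()
    subprocess.run(
        ["scontrol", "update", f"NodeName={node}", "State=DRAIN",
         f"Reason=healthcheck failed: {','.join(failed)}"],
        check=False,  # don't mask the health failure if scontrol itself errors
    )
    sys.exit(1)
```

The same pattern works with other workload managers; what matters is that the scheduler, not a human, is the one that sidelines the sick node.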
I recommend option 3, of course. To help you choose the best solution to prevent BHNS, I suggest you put the following questions to the software providers you are considering:
- Does the cluster management solution have extensive health-checking capabilities?
- If yes, can these capabilities be customized or extended?
- Can the health checks be associated with specific jobs?
- Can the workload manager schedule health checks?
- How much overhead is associated with the health checking?
- Does the cluster management solution offer you a choice of workload managers, enabling you to choose what is best for your needs? If yes, how much effort is required to integrate the workload manager into the overall solution?
Making the right cluster management choice up front can make the difference between fighting BHNS on a continual basis (and losing jobs, frustrating users and yourself) or preventing the problem from ruining a perfectly good day.
I leave you with the words of Steve Conway, IDC research vice president for HPC:
“With today’s biggest supercomputers featuring more than 200,000 cores and million-core machines just a few years away, parts failures are increasingly common. Sophisticated system health analyses, such as those Bright Cluster Manager is designed to perform, are crucial for maintaining productivity.”