September 24, 2012
By Matthijs van Leeuwen, Founder and CEO, Bright Computing.
It starts as a perfectly good day. Then a compute job crashes without warning, and seconds later the whole job queue flushes. You push aside the cup of coffee you just poured and let out a stream of expletives. Yes, the Black Hole Node Syndrome has struck again.
The dreaded black hole node syndrome, or BHNS, silently and randomly kills productivity in HPC clusters. The workload manager reports that all nodes are running, yet a job sometimes executes and sometimes crashes, leaving even the most talented system administrators with few clues for fixing the problem.
In the worst cases, all the jobs are flushed from the queue for no apparent reason. Valuable compute hours are lost, energy wasted, other priorities sidelined; frustrated users hound beleaguered system administrators. Return on investment is impacted, both directly through increased operating expense, and indirectly, through the opportunity cost of downtime and redirected manpower.
Black Hole Node Syndrome: How Bad is it?
“The black hole node syndrome is a serious issue,” said Dr. Don Holmgren, Computer Services Architect at Fermilab. “There are many subtle problems from apparently healthy nodes that can create cascading job failures. And the bigger the job, the more nodes, the higher the probability of failure.”
According to Dr. Holmgren, Fermilab runs nearly two million jobs per annum on its HPC clusters, comprising 21,000 cores in aggregate. Not all of them make it through to completion: in spite of herculean efforts to contain the problem, Holmgren estimates that 0.5% of jobs fail as a result of unhealthy nodes, netting out to about 9,000 jobs per year.
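Those figures are easy to sanity-check. The quick calculation below is a reader's back-of-the-envelope sketch, not Fermilab's accounting; it simply reproduces the roughly 9,000 failed jobs per year from the numbers Holmgren cites.

```python
# Sanity check on the Fermilab figures quoted above.
jobs_per_year = 1_800_000  # "nearly two million" jobs per annum
failure_rate = 0.005       # ~0.5% of jobs fail on unhealthy nodes

failed = jobs_per_year * failure_rate
print(f"Jobs lost to unhealthy nodes per year: {failed:,.0f}")  # -> 9,000
```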
“It’s painful for users to resubmit their affected jobs, especially when the cluster continues to perform correctly for other jobs,” continued Holmgren. “And it’s not always evident that there is a problem at first. The scheduler continuously assigns new jobs to the unhealthy nodes. We don’t realize there may be a problem until we notice that an extremely high rate of job failure has occurred on part of the cluster.”
In surveys conducted by Bright Computing, more than 64% of respondents indicated they have been impacted by black hole node syndrome. Many have worked to prevent job crashes: 23% of respondents reported that they have written scripts to prevent the problem, while 14% have purchased software to address it. Another 27% report that the problem “still drives me nuts.” The remaining 36% are either not impacted or do not realize there is a name for what is killing their jobs.
Jesse Trucks, HPC Cyber Security Administrator at Oak Ridge National Laboratory, didn’t lose his sanity, but he may have lost his patience now and then. His experience with BHNS began when he was a systems engineer at D.E. Shaw Research, LLC, where he oversaw daily data center operations.
“When a job fails, the source of the problem is often difficult to identify,” said Trucks. “A job that crashes can run fine on another cluster, or even on the same cluster if you run it again. You can spend hours in the data center pulling your hair out, pulling up floor tiles, or worse. Is it the machine? Is it the middleware? The code itself? Or the data?”
“Initially, if a job crashed and we suspected BHNS,” continued Trucks, “we would re-run the same job on a similar system to compare outputs. If the outputs weren’t identical, we had a problem, at which point we would pull the cluster offline and perform advanced diagnostic testing.”
“This approach works, but it’s costly in terms of downtime. So I spent a hundred hours or more over 5 months writing a series of scripts to predict and isolate the problem, which was a significant time investment to create one custom solution.”
“The funny thing is that sys admins across the globe are taking this same approach. I have extensive experience with Cray systems, for example. There are obviously lots of Cray systems globally, and all of them work on generally the same principles. But since the workflow and types of jobs are unique within each data center, most admins tackling BHNS end up writing their own code. This seems highly inefficient,” said Trucks.
At Fermilab, Dr. Holmgren and his colleagues have written a wide array of scripts to perform pre- and post-job health checks, often developed on an iterative basis following job losses. But less-experienced HPC users face a long learning curve, punctuated by a great deal of frustration. In either case, these scripts and workarounds are seldom documented, leaving HPC facilities at risk when the specialists move on to other roles or leave the organization.
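For readers who have never written such checks, the sketch below shows the general shape of a minimal pre-job health check. It is an illustration only, not Fermilab's code; the specific checks and thresholds are assumptions, and production scripts are far more extensive.

```python
#!/usr/bin/env python3
"""Minimal pre-job node health check: exit non-zero if the node is
unfit to run work. A simplified sketch; the checks and thresholds
below are illustrative assumptions, not any site's real battery."""
import os
import shutil
import sys

def checks():
    # Scratch disk must have headroom (threshold is illustrative).
    if shutil.disk_usage("/tmp").free < 5 * 1024**3:
        yield "less than 5 GB free in /tmp"
    # Shared filesystem must actually be mounted.
    if not os.path.ismount("/home"):
        yield "/home is not mounted"
    # Load average should not suggest a runaway process.
    if os.getloadavg()[0] > 2 * (os.cpu_count() or 1):
        yield "load average far exceeds core count"

problems = list(checks())
for p in problems:
    print(f"HEALTH CHECK FAILED: {p}", file=sys.stderr)
sys.exit(1 if problems else 0)
```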
“In the end, everyone wants to prevent BHNS and optimize their workflow, because it’s expensive to run even a small cluster, let alone a data center with thousands of nodes,” added Trucks. “There is an obvious need for a versatile, easy-to-use cluster management solution which is capable of doing more than simply detecting dead nodes.”
Black Hole Node Syndrome: Common Causes
Node failure in HPC systems is common, and normally does not pose a problem. Most workload managers recognize dead nodes and simply work around them by excluding them from jobs.
The bigger problem, and the cause of BHNS, is nodes that are unhealthy in a subtle way. These sick nodes crash jobs, sometimes on a seemingly random basis, because their “illnesses” slip under the workload manager’s radar while leaving the node looking perfectly able to accept work.
Unless unhealthy nodes are detected, workload managers will continue to include them in jobs, causing continual failures or even cascading job losses as described by Dr. Holmgren.
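The “black hole” label is apt because of a feedback loop: a node that crashes jobs in seconds frees up far sooner than healthy nodes that run jobs to completion, so the scheduler keeps feeding it work. The toy simulation below, with entirely invented timings, illustrates how a single sick node can swallow most of a queue.

```python
import heapq

# Toy model: 16 healthy nodes each take 60 minutes per job; one sick
# node crashes every job after 1 minute and immediately frees up again.
HEALTHY_NODES, RUN_MIN, CRASH_MIN, QUEUED_JOBS = 16, 60, 1, 1000

# Min-heap of (time the node next becomes free, node id); node 0 is sick.
free_at = [(0, node) for node in range(HEALTHY_NODES + 1)]
heapq.heapify(free_at)

crashed = 0
for _ in range(QUEUED_JOBS):
    now, node = heapq.heappop(free_at)  # scheduler picks the first idle node
    if node == 0:                       # the black hole node
        crashed += 1
        heapq.heappush(free_at, (now + CRASH_MIN, node))
    else:
        heapq.heappush(free_at, (now + RUN_MIN, node))

print(f"{crashed} of {QUEUED_JOBS} jobs crashed on the single sick node")
```

With these made-up numbers, roughly four out of five queued jobs end up crashing on the one sick node: exactly the cascading pattern Holmgren describes.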
In addition to the illnesses that crash jobs outright, there are a number of performance-reducing “ailments” that most workload management software overlooks. These may not kill jobs, but they can significantly impede system performance.
There is good news, however: there are ways to defeat BHNS that do not involve man-months of scripting, tearing up floor tiles, or ripping out hair.
Black Hole Node Syndrome: Prevention
There are three approaches to combating black hole node syndrome; unlike men, they are not created equal:
1. Extensive scripting: Veteran HPC specialists typically work hard to prevent black hole node syndrome by writing a wide array of scripts to perform pre- and post-job health checks, usually developed iteratively following job losses. This approach can solve the problem, but it is costly in time and in the jobs lost while each iteration is worked out.
2. Workload Managers plus scripting: Workload managers can address part of the problem, but again, custom scripts must be written to fill in the gaps of these products. This approach potentially reduces the scope of scripting, but comes with similar opportunity costs and organizational risks.
3. Select a cluster management solution that ties health checks into the workload manager: The ideal scenario is to identify and sideline unhealthy nodes before jobs are run (see the sketch after this list). These health checks must be extensible and easily adapted to the cluster and its operating environment, to avoid the risks inherent in custom (and undocumented) scripting.
There is, of course, a fourth option: do nothing and accept the job losses.
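As a concrete illustration of option 3, here is one way the pieces can fit together, sketched against Slurm's scontrol command. This is an assumed wiring, not any particular product's implementation; the health-check path is a placeholder, and other workload managers have equivalent commands.

```python
import socket
import subprocess
import sys

# Hypothetical site health-check script; substitute your own battery
# of checks (see the earlier sketch for the general shape).
HEALTH_CHECK = "/usr/local/sbin/node-health-check"

def node_is_healthy() -> bool:
    return subprocess.run([HEALTH_CHECK]).returncode == 0

if not node_is_healthy():
    node = socket.gethostname()
    # Drain the node so Slurm stops assigning new jobs to it.
    subprocess.run(
        ["scontrol", "update", f"NodeName={node}",
         "State=DRAIN", "Reason=failed pre-job health check"],
        check=True,
    )
    sys.exit(1)  # a non-zero prolog exit also keeps the job off this node
```

Run from a job prolog, a wrapper like this sidelines the sick node before the job starts, rather than after the queue has already been flushed.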
I recommend option 3, of course. To help you choose the best solution for preventing BHNS, press the software providers you are considering on how their health checks detect subtly unhealthy nodes, how easily those checks can be extended, and how tightly they tie into your workload manager.
Making the right cluster management choice up front can make the difference between fighting BHNS on a continual basis (and losing jobs, frustrating users and yourself) or preventing the problem from ruining a perfectly good day.
I leave you with the words of Steve Conway, IDC research vice president for HPC:
"With today's biggest supercomputers featuring more than 200,000 cores and million-core machines just a few years away, parts failures are increasingly common. Sophisticated system health analyses, such as Bright Cluster Manager is designed to perform, are crucial for maintaining productivity."