If you thought cluster management tools were too plentiful to count, they have nothing on system monitoring. From open source to commercial packages, the list goes on. The problem, however, is that too few tools bring together a unified, comprehensive view of what’s happening on large clusters: one that gathers data from the scheduler, the compute hardware, and the applications themselves.
Although there are no hard numbers on its use, Ganglia appears to be the clear leader in cluster monitoring. According to X-ISS President and CEO Deepak Khosla, around 90% of HPC shops of all sizes are using the framework, with a smaller subset using other mature HPC monitoring tools like Supermon. His company has seen its share of large and mid-sized HPC clusters in its decade-plus run in the systems business, but what it hadn’t been able to find until recently was a way to get a “single pane of glass” view into how clusters are operating holistically. In other words, there has been no way to take the strengths of Ganglia and similar tooling and mesh them with the wealth of other cluster monitoring and management data.
To be fair, the modernization and sophistication of tools like Ganglia have been happening at a quick clip, especially since the “wider world” is catching on to their value. Not to single out Ganglia (there are other apt examples), but its usage is surging beyond the halls of HPC. Cloud service providers, hyperscale datacenter operators, and a new crop of big data types are picking it out of the crowd. (On this note, for the love of god, do not type “Ganglia growth” into Google. It is not what you’re looking for. Ew.)
While the existing cadre of monitoring tools is perfect for understanding the nuts and bolts of what’s happening with a cluster from a hardware and general performance perspective, Khosla says they are unable to provide a more comprehensive view into other practical metrics, including those around broader application and project performance, job cost, and historical trends. Even when coupled with the analytics tools found in all of the popular schedulers, including LSF, Torque, PBS, and others, users are left with a scattered field of results that are too technical to chew through quickly and too distributed to mesh without significant effort.
This problem is compounded for centers with distributed HPC datacenters. For example, in the oil and gas industry, which was the impetus for X-ISS to build a broader view, clusters are scattered across different geographic areas, often with varied scheduler and system environments. Pulling together a single-pane view of these systems and their efficiency at the operational, application, cost, and performance levels is not a simple task, and involves that troublesome meshing of different tools.
For these users, pulling the data together is not the only practical challenge. “HPC users are by nature wary about anything that gets put into their stack,” says Khosla. “This means they aren’t going to want to add more monitoring or other tools when they’ve been using something like Ganglia and their regular scheduler tools.” So if this is the case, and the need for more comprehensive, meshed monitoring is clear, what are users to do?
The solution is to hook into existing monitoring and other tools and their collectors, and feed all of that data into one place. In the case of X-ISS and its cluster analytics, the data is fed via a secure tunnel to the company’s own servers, where it is processed into real-time or historical trend views for analysis via a web portal. This way, there’s no need for users to add more weight to their monitoring operations or to create a performance drag on the systems by adding yet another tool to manage.
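X-ISS hasn’t published its collector internals, but the general pattern is easy to picture: poll the agents a site already runs and ship the results upstream. Ganglia’s gmond daemon, for instance, dumps its full cluster state as XML to anything that connects to its TCP port (8649 by default). A minimal sketch of that forwarding loop follows; the analytics endpoint and token here are hypothetical stand-ins, not anything X-ISS has documented.

```python
# Minimal sketch of the "hook into existing collectors" pattern:
# poll Ganglia's gmond XML feed and forward it upstream over HTTPS.
# The analytics endpoint and API token below are hypothetical.
import socket
import time
import urllib.request

GMOND_HOST, GMOND_PORT = "localhost", 8649               # gmond's default XML port
ANALYTICS_URL = "https://analytics.example.com/ingest"   # hypothetical endpoint
API_TOKEN = "changeme"                                   # hypothetical credential

def read_gmond_xml(host, port):
    """gmond writes its full cluster state as XML, then closes the socket."""
    with socket.create_connection((host, port), timeout=10) as sock:
        chunks = []
        while True:
            data = sock.recv(65536)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

def forward(xml_payload):
    """Ship the raw XML to the central analytics service over TLS."""
    request = urllib.request.Request(
        ANALYTICS_URL,
        data=xml_payload,
        headers={"Content-Type": "application/xml",
                 "Authorization": f"Bearer {API_TOKEN}"},
    )
    urllib.request.urlopen(request, timeout=30)

if __name__ == "__main__":
    while True:                                          # one poll per cycle
        forward(read_gmond_xml(GMOND_HOST, GMOND_PORT))
        time.sleep(300)                                  # five-minute cadence
```

The appeal of this design is that nothing new runs on the compute nodes themselves; the only added process reads from agents that are already there.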
The analytics and monitoring tool X-ISS cooked up, called DecisionHPC, hooks into most of the common schedulers used in HPC environments (Torque, PBS Pro, LSF, CJM, and Grid Engine) and can snap in with Ganglia and other custom monitoring tools.
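The scheduler side works from similarly familiar raw material: every one of those schedulers keeps per-job accounting records. Torque and PBS Pro, for example, write semicolon-delimited accounting lines (a timestamp, a record type, a job ID, then space-separated key=value pairs, with “E” records marking job completion). The sketch below shows an assumed minimal parse of that format; it illustrates the kind of hook involved, not DecisionHPC’s actual code.

```python
# Hedged sketch: parse a Torque/PBS Pro-style accounting record into a dict.
# Record layout: timestamp;record_type;job_id;key=value key=value ...
# "E" (end) records carry the resources_used.* fields worth aggregating.

def parse_accounting_line(line):
    timestamp, record_type, job_id, message = line.rstrip("\n").split(";", 3)
    fields = {}
    for token in message.split():
        if "=" in token:
            key, value = token.split("=", 1)
            fields[key] = value
    return {"time": timestamp, "type": record_type, "job_id": job_id, **fields}

# Illustrative record, not taken from a real log:
sample = ("04/01/2016 10:15:02;E;1234.headnode;user=alice queue=batch "
          "resources_used.walltime=02:10:00 resources_used.mem=10240kb")
record = parse_accounting_line(sample)
print(record["user"], record["resources_used.walltime"])   # alice 02:10:00
```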
Users can access the web interface to view several aspects of the cluster’s overall operation, then refine the analysis to look at things from new vantage points, including cost analysis, performance details that help gauge what is failing or working well, and, of course, the insight needed to adjust in response to those findings.
An example of the dashboard is below, but what’s notable here, says Khosla, is that it offers a real-time view of what’s happening with the cluster(s) at any given moment. It’s possible to monitor clusters in different geographic locations, even those running different schedulers and monitoring agents on variable hardware configurations, which he argues is another unique element.
He agrees that it is indeed possible to do all of these things with existing tools, but they’re all separate and can each provide only part of the insight. For instance, he says, “what’s available in Linux tools are system-level metrics, but most HPC users don’t make use of those tools because you have to go to the node level. Other tools like Ganglia give you a more manageable view, but it’s technical and has to be done piece by piece, making it hard to get a global view.” He adds that while it’s possible to see what’s happening with the CPU, memory, I/O, and other elements, “it can’t answer questions like how busy a cluster was application-wise from month to month, for instance.” As it stands now, many sites are simply writing their own reporting tools, which also don’t offer the level of ease and insight needed.
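That month-to-month question is essentially a rollup over the per-job records above, which is exactly what sites writing their own reporting end up reimplementing. A hypothetical version, assuming job records that carry an end time, an application tag, a core count, and a walltime:

```python
# Hedged sketch: roll job records up into core-hours per month and per
# application -- the "how busy was the cluster, application-wise, month to
# month" question that node-level tools can't answer directly.
# The record fields and sample values here are assumptions for illustration.
from collections import defaultdict
from datetime import datetime

jobs = [  # stand-ins for records harvested from scheduler accounting logs
    {"end_time": "2016-03-04 11:00", "app": "reverse-time-migration",
     "cores": 512, "walltime_hours": 6.0},
    {"end_time": "2016-04-02 09:30", "app": "reservoir-sim",
     "cores": 128, "walltime_hours": 12.0},
]

usage = defaultdict(float)  # (month, application) -> core-hours
for job in jobs:
    month = datetime.strptime(job["end_time"], "%Y-%m-%d %H:%M").strftime("%Y-%m")
    usage[(month, job["app"])] += job["cores"] * job["walltime_hours"]

for (month, app), core_hours in sorted(usage.items()):
    print(f"{month}  {app:<24} {core_hours:>10.1f} core-hours")
```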
“Today our largest customer has about 15,000 objects we’re monitoring, with around 20-30 metrics collected every 5 minutes, and at any moment they can pull up a live view of all of them. The analytics tools in schedulers can’t report live like this. Part of the goal is application profiling and benchmarking, but CPU, memory, and network throughput alone are valuable.”
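Some back-of-the-envelope math puts that in perspective: 15,000 objects at 20 to 30 metrics each comes to roughly 300,000 to 450,000 data points per five-minute cycle, or on the order of 1,000 to 1,500 samples per second, sustained around the clock. That is the volume a live, cross-cluster view has to keep up with, and it is well beyond what the batch-oriented reporting built into schedulers was designed to handle.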