Care and Feeding of Your Cluster

By Deepak Khosla

February 23, 2015

In the previous Cluster Lifecycle Management column, I described the crucial steps that should be taken to deploy and validate your new cluster. In this column, I discuss how best to move the system into production, then configure and maintain it so that operations run smoothly and efficiently for the long term.

Once the deployment and validation of your new HPC cluster are completed, it is time for the HPC systems management functions to begin. I am assuming the advice from the previous columns was followed and that the primary HPC system administrator was identified and in place during the deployment phase. This is no time to discover you do not have an HPC expert on your staff or at your disposal. Just because the hardware and software are humming now doesn’t mean they will stay that way. Like any other complex system, the HPC cluster needs to be continuously monitored, analyzed, and maintained to keep it running efficiently.

The mistake I’ve seen made too often, especially by larger organizations, is the assumption that someone on the existing IT staff can probably figure out the HPC system, perhaps with some minor training. Unfortunately, this rarely works out. Although HPC is a niche within the larger Information Technology space, even the best IT generalist will have little or no experience in supercomputing. It is NOT just a collection of Linux or Windows servers stacked together. HPC is a specialization unto itself.

You must have HPC expertise available to you if you want the new system to perform as expected. There are two options: hire one or more full-time HPC administrators, or contract for ongoing HPC system support. Budget will likely dictate which works best for your organization. In many scenarios, contract support is the better option, either because intense market demand makes HPC experts difficult to find and retain on staff, or because you may not need a full-time person. Check with your system vendor or integrator to see if they offer contracted management services.

Now that your cluster is operational and you have one or more skilled HPC administrators on staff or under contract, the first job is to configure the cluster so that it works well operationally. This responsibility has two major aspects: the cluster must be configured to work optimally from an end-user usability perspective and from a systems operation perspective.

The administrator must first set up proper security access for the end users. A successful security design has two major components. The first is connectivity to the appropriate authentication system, which ensures users can log in securely. Often the cluster has to be configured to tie into an already established enterprise system such as LDAP or Active Directory. It is critical that this authentication performs with speed and reliability; HPC jobs running in parallel will often fail if the authentication system is unreliable. The second component addresses the authorization requirements. The administrator must validate that file system and directory permissions follow the authorization policies. This is critical so that users can work smoothly all the way from submitting jobs to reviewing the results from their workstations. These permissions must then be set up, configured, and tested across both the compute and storage components for each user group.
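To make this concrete, here is a minimal Python sketch that checks shared directories against an authorization policy, reporting any mismatch in group ownership or permissions. The paths, group names, and modes are hypothetical examples, not from any particular site.

```python
#!/usr/bin/env python3
"""A minimal sketch: verify that shared directories match the site's
authorization policy. The paths, groups, and modes are hypothetical."""
import grp
import os
import stat

# Hypothetical policy: directory -> (owning group, required mode).
# The setgid bit (2xxx) keeps new files in the project group.
POLICY = {
    "/shared/projects/cfd": ("cfd-users", 0o2770),
    "/shared/projects/bio": ("bio-users", 0o2770),
    "/scratch": ("hpc-users", 0o1777),
}

def check_path(path: str, group: str, mode: int) -> bool:
    try:
        st = os.stat(path)
    except FileNotFoundError:
        print(f"{path}: missing")
        return False
    actual_group = grp.getgrgid(st.st_gid).gr_name
    actual_mode = stat.S_IMODE(st.st_mode)
    if actual_group != group or actual_mode != mode:
        print(f"{path}: expected {group}/{oct(mode)}, "
              f"found {actual_group}/{oct(actual_mode)}")
        return False
    return True

if __name__ == "__main__":
    bad = [p for p, (g, m) in POLICY.items() if not check_path(p, g, m)]
    print("all OK" if not bad else f"{len(bad)} mismatch(es)")
```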

Additionally, policies may need to be set up on the scheduler to allocate resources among the various user groups and application profiles, as well as on storage to meet their varying space requirements. Once security, compute, and storage are configured, users can safely log into the system and know where to securely put their data.
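As one illustration, assuming a Slurm scheduler (other schedulers have equivalent mechanisms), a short script can generate the accounting commands that assign each user group a fair-share weight. The account names and weights below are invented for the example.

```python
#!/usr/bin/env python3
"""A minimal sketch, assuming a Slurm scheduler: print the sacctmgr
commands that give each user group a fair-share weight. The account
names and weights are hypothetical."""

SHARES = {"engineering": 40, "research": 40, "students": 20}

for account, share in SHARES.items():
    # sacctmgr's -i flag commits without prompting; run as a Slurm admin.
    print(f"sacctmgr -i add account {account} Fairshare={share}")
```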

If your cluster is brand new, the users are most likely first-time users of HPC technology. This means they will need training and instruction on how to run their applications on the system. The applications they ran on a desktop or mainframe will not perform the same way on the cluster. Users will likely need application-specific training. Depending on the scheduler, there will be different ways to submit jobs from various applications.
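One way to flatten that learning curve is to hide the scheduler flags behind a small site-specific wrapper. The sketch below assumes a Slurm scheduler and its sbatch command; the default core count and walltime are hypothetical site choices.

```python
#!/usr/bin/env python3
"""A minimal sketch, assuming a Slurm scheduler: wrap sbatch so new
users need not remember scheduler flags. Defaults are hypothetical."""
import subprocess

def submit(script: str, ntasks: int = 16, walltime: str = "01:00:00") -> str:
    """Submit a job script and return its job ID."""
    cmd = ["sbatch", "--parsable",           # --parsable prints only the ID
           f"--ntasks={ntasks}", f"--time={walltime}", script]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print("Submitted job", submit("run_model.sh"))
```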

It will be the administrator’s responsibility to begin building a written knowledge base pertaining to the cluster and each application. This hardcopy or web-based document will serve as a guide for users to understand how to submit and track jobs and what to do if a problem occurs. Depending on the size and sophistication of the user base, it may also make sense to look at portals that can make job management easier for the end users.

For the cluster itself, the administrator should set up monitoring and alerting tools as soon as the system becomes operational. Monitoring, reporting, and alerting of storage, network, and compute services on a continuous or periodic basis are critical to identify signs of trouble before they turn into major malfunctions. Minor usage problems could simply mean disk space is filling up, while soft memory errors could be signs of impending node failure.
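A minimal sketch of such a check, using only the Python standard library, flags file systems that have crossed a usage threshold. The mount points and thresholds are hypothetical, and a production site would feed these checks into a monitoring framework rather than run a standalone script.

```python
#!/usr/bin/env python3
"""A minimal sketch: flag file systems that have crossed a usage
threshold. Mount points and thresholds are hypothetical."""
import shutil

WATCHED = {"/home": 0.90, "/scratch": 0.85}  # alert above this fraction

def disk_alerts() -> list:
    alerts = []
    for mount, limit in WATCHED.items():
        usage = shutil.disk_usage(mount)
        frac = usage.used / usage.total
        if frac > limit:
            alerts.append(f"{mount} is {frac:.0%} full (limit {limit:.0%})")
    return alerts

if __name__ == "__main__":
    for alert in disk_alerts():
        print("ALERT:", alert)  # hook an email or pager call in here
```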

Such monitoring and analysis tools are readily available. Many HPC clusters come equipped with system-specific tools, and more robust technical and business analysis packages are commercially available. Whatever their source, these tools should be set up to identify and predict routine maintenance issues, such as disk cleanup and error log review, as well as actual malfunctions that must be repaired.

In my experience, however, pinpointing the cause of many problems in the HPC domain requires looking for clues in multiple components. When things are going wrong with an HPC cluster, alarms may be triggered in several places at once. The skilled administrator will review all of the flagged performance issues and figure out what the underlying cause actually is. Few software tools can take the place of a human in this regard.

Proper care of the cluster also requires the administrator to be proactive. Every three to six months, I recommend running a standard set of diagnostics and benchmarks to see if the cluster has developed systemic issues or has fallen below the baselines established during deployment. If so, further scrutiny is in order. Last, but not least, the HPC administrator must find the right way to make changes so that all applications keep working well on the cluster. Patches and changes to applications, libraries, the OS, or hardware must be carefully considered and, where possible, tested before implementation. I have seen quite a few expensive outages where a simple change for one application caused failures in other co-existing applications.
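A sketch of that baseline comparison might look like the following, assuming the deployment-time results were saved to a JSON file. The benchmark names, file name, and 10 percent tolerance are all hypothetical.

```python
#!/usr/bin/env python3
"""A minimal sketch: compare fresh benchmark results against baselines
recorded at deployment. Names, file, and tolerance are hypothetical."""
import json

TOLERANCE = 0.10  # flag results more than 10% below baseline

def regressions(baseline_file: str, current: dict) -> list:
    with open(baseline_file) as f:
        baseline = json.load(f)  # e.g. {"hpl_gflops": 8200.0, ...}
    flagged = []
    for name, base in baseline.items():
        now = current.get(name)
        if now is not None and now < base * (1 - TOLERANCE):
            flagged.append(f"{name}: {now} vs. baseline {base}")
    return flagged

if __name__ == "__main__":
    latest = {"hpl_gflops": 7100.0, "stream_mb_s": 41000.0}
    for line in regressions("baseline.json", latest):
        print("REGRESSION:", line)
```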

Finally, a viable backup plan must be in place so the system can be brought back online quickly in the event of a failure. The most important things to back up are the configurations of the scheduler, head node, and key software, along with applications and user data. While intermediate data often does not need to be backed up, user input and output data should be, especially if the time to regenerate results is high. The organization should also establish data retention policies determining when data should be moved from the cluster to offsite storage.
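As a simple illustration of the configuration half of such a plan, the sketch below writes a timestamped tarball of a hypothetical list of configuration directories. User data would normally go to a dedicated backup system rather than a local tarball.

```python
#!/usr/bin/env python3
"""A minimal sketch: write a timestamped tarball of key configuration
directories. The path list is hypothetical; user data belongs in a
dedicated backup system, not a local tarball."""
import tarfile
import time

CONFIG_PATHS = ["/etc/slurm", "/etc/sssd", "/opt/cluster/config"]

def backup_configs(dest_dir: str = "/backup") -> str:
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = f"{dest_dir}/cluster-config-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for path in CONFIG_PATHS:
            tar.add(path)  # raises if a path is missing; adjust per site
    return archive

if __name__ == "__main__":
    print("Wrote", backup_configs())
```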

An extension of caring for and feeding your new cluster is “Capacity Planning and Reporting,” which I will cover in the next column.

Deepak Khosla is president and CEO of X-ISS Inc.
