How Outdated Infrastructure Will Cripple HPC

By Dr. Craig Finch

May 20, 2014

The raw compute power of HPC clusters continues to grow, driven by new parallel technologies such as many-core CPUs, GPUs, and the Xeon Phi. It is well known that writing applications to exploit massive parallelism is a significant challenge to the growth of HPC. Another challenge, which is not as widely discussed, is the increasing difficulty of managing HPC clusters. The way we approach the management and administration of high performance computing clusters is slowly strangling the field of HPC.

The practices that many HPC administrators use to manage users, operating systems, applications, and workloads have not kept pace with the growth of compute power and the size of the HPC user base. A UNIX system administrator from 1985 could step out of a time machine and go right to work managing most HPC clusters today. Because many clusters are not designed for manageability, a significant amount of an HPC administrator’s time is spent doing things that could be automated.

Administrative processes are often automated with ad hoc collections of scripts and cron jobs instead of standard tools. Management tools are often overlooked when a new cluster is built or purchased, especially in organizations that are new to HPC. Many new HPC admins start from scratch and re-invent the wheel. Tools exist to solve these problems, but few people in HPC are even aware of them. My work in commercial enterprise IT has produced a paradigm shift in the way that I view system administration. The administration of HPC clusters can be transformed by applying the thought processes and tools that are currently used in cutting-edge enterprise information technology.
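To make concrete what "ad hoc" automation looks like, here is a minimal sketch (in Python, with hypothetical node names and file paths) of the kind of cron-driven script many clusters rely on: it pushes local account files to every compute node instead of consulting a central identity service. It works until the node list drifts, a push silently fails, or the one person who understands the script leaves.

```python
#!/usr/bin/env python3
"""Hypothetical example of ad hoc cluster automation: a cron-driven script
that copies local account files to every compute node instead of using a
central identity service. Node names and paths are illustrative only."""

import subprocess

COMPUTE_NODES = [f"node{i:03d}" for i in range(1, 65)]   # assumed node naming
FILES_TO_SYNC = ["/etc/passwd", "/etc/group", "/etc/shadow"]

def sync_accounts():
    """Push account files to each node with rsync; failures are only printed."""
    for node in COMPUTE_NODES:
        for path in FILES_TO_SYNC:
            result = subprocess.run(
                ["rsync", "-a", path, f"{node}:{path}"],
                capture_output=True, text=True,
            )
            if result.returncode != 0:
                # No alerting, no retry -- the admin finds out when a user
                # cannot log in to that node.
                print(f"sync to {node} failed: {result.stderr.strip()}")

if __name__ == "__main__":
    sync_accounts()
```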

The root of the problem is that we think about HPC clusters today largely in terms of raw performance (Tflops or core count), a one-dimensional metric that omits important information about a system. The Top500 ranking is an obvious example of our focus on performance. To move beyond the limits of this paradigm, a cluster should be evaluated on two dimensions: performance and systemic complexity. Factors that contribute to complexity include the diversity of hardware in the cluster, diversity of the user base, and diversity of applications that run on the cluster. I learned about these issues first-hand during the two years I spent as a system administrator of the STOKES HPC cluster at the University of Central Florida. I’ve talked to many HPC specialists from around the country, and I know these problems are not unique to my university.

In terms of raw power, STOKES is a modest HPC cluster with about 3400 compute cores. However, it is a very complex system that has grown and evolved since the first hardware was purchased in 2008. STOKES includes servers from two vendors with three generations of CPUs, GP-GPUs, Xeon Phis, three brands of InfiniBand hardware, and four brands of Ethernet switches. STOKES serves over 150 active users who run high-throughput and high-performance applications in a dozen different fields of physical and social sciences and engineering. In contrast, HPC systems that are more powerful than STOKES actually may be less complex. For example, the National Oceanic and Atmospheric Administration (NOAA) has two 10,000-core, 213-teraflop clusters that run "production" hurricane and weather models[1].

These twin systems, provided and managed by IBM, have homogeneous hardware, serve a single customer, and run a small collection of applications. Figure 1 shows how performance and complexity can be visualized on a two-dimensional plot.

Hardware is one fundamental source of complexity. An HPC system which grows over the years may have servers from different manufacturers with different generations of processors and interconnect hardware. Some servers may require specific versions of an operating system or different drivers to accommodate certain hardware. As hardware diversity increases, different types of errors can occur, and monitoring becomes more complicated. The trend to include accelerator hardware, such as Xeon Phi cards and GP-GPUs, means that modern clusters are often diverse by design.

The diversity of the cluster’s user base is another major source of complexity. As HPC becomes more widely used, the user base will grow and become more diverse. While this is a sign of success for a general-purpose cluster, it leads to administrative challenges. User accounts need to be created and managed more often. There will be more requests for support, which will require more time and/or better tools for monitoring the cluster and diagnosing problems.

The diversity of applications that run on the cluster is another aspect of complexity that is often correlated with the diversity of the user base. More applications are supporting parallel processing "out of the box," often in fields that have not traditionally used HPC. These applications often bring novice users to the cluster, who are accustomed to graphical desktop environments and unprepared for the command-line, script-based submission systems used on most clusters. "Legacy" users and applications pose a different challenge: they may depend upon specific versions of the operating system, compilers, and libraries. The user may be running a program built years ago by someone else who no longer works there, and the user may not know how to recompile it. The diversity of applications also complicates workload management. Some users run high-throughput computing applications with hundreds of single-core jobs, while others need to run a single massively parallel MPI job that consumes a significant fraction of the cluster.

Today, most clusters are administered with whatever tools the original cluster vendor provided. As the cluster grows, administration tools that were adequate in the beginning are no longer sufficient. System administrators gradually accumulate a collection of written procedures, scripts, and cron jobs to patch the gaps in the administrative framework. This approach has significant disadvantages. The amount of labor spent on administration increases as the system outgrows its management tools. This is bad news for organizations that depend on research funding, which tends to provide "up front" funds but limited or no funding for follow-on maintenance. Custom in-house solutions are only "free" if your time is worth nothing. The reliability of the system will degrade over time, as more manual input is required to keep it running. Effective operation of the system will increasingly depend on the skill and knowledge of the local sysadmin.

The ad hoc approach to system administration is also bad for the field of HPC. Significant amounts of time are spent “re-inventing the wheel” as each department, company, or university acquires its first HPC cluster. This time is wasted in the sense that it could have been better spent on advancing the field of HPC. It also increases the difficulty of attracting and retaining personnel in the HPC field.

Fortunately, there is a better way to approach the administration of an HPC cluster. The growth of cloud computing and hyperscale data centers has driven the development of practices and tools for managing computing systems that are simply too large and complex to be managed economically using methods from the 1980s. Corporate IT departments and providers of web applications and services now manage nationwide networks of servers that rival the complexity of the largest supercomputers. At this scale, a system must be designed for management. Significant amounts of time and money can be saved if these practices and tools are applied to HPC clusters.

We need to start thinking about HPC cluster management as a framework that is built from components. Every cluster has a set of management components; each component may be a software tool, or it may be a manual process. Every HPC sysadmin is familiar with workload/resource management software. Other components of the management framework may not be so obvious. For example, your cluster does have an alerting component; it may be a software tool such as Nagios, or it may be getting emails and phone calls from users when their jobs crash. You can monitor a cluster with Ganglia, or you can log in to each node and run top. Every cluster has an administrative framework, and we need to make conscious decisions about how we are going to implement that framework.
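As an illustration of a hand-rolled alerting component, the sketch below pings every node and emails the administrator a list of unreachable ones; the node names, admin address, local mail relay, and Linux ping flags are all assumptions for illustration. A standard tool such as Nagios replaces this script, and dozens like it, with a single configurable service.

```python
#!/usr/bin/env python3
"""A minimal sketch of a hand-rolled alerting component, the kind of thing
a cluster often relies on before a standard tool such as Nagios is adopted.
Node names, the admin address, and the mail relay are assumptions."""

import smtplib
import subprocess
from email.message import EmailMessage

NODES = [f"node{i:03d}" for i in range(1, 65)]   # hypothetical node names
ADMIN = "hpc-admin@example.edu"                  # hypothetical address

def node_is_up(node: str) -> bool:
    """Treat a node as healthy if it answers a single ping within 2 seconds."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", "2", node],    # Linux ping flags assumed
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0

def alert(down_nodes: list) -> None:
    """Email the administrator a list of unreachable nodes."""
    msg = EmailMessage()
    msg["Subject"] = f"{len(down_nodes)} compute nodes unreachable"
    msg["From"] = ADMIN
    msg["To"] = ADMIN
    msg.set_content("\n".join(down_nodes))
    with smtplib.SMTP("localhost") as smtp:      # assumes a local mail relay
        smtp.send_message(msg)

if __name__ == "__main__":
    down = [n for n in NODES if not node_is_up(n)]
    if down:
        alert(down)
```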

When we do choose to automate a component of the management framework, we should commit to using industry-standard system management tools wherever possible. The more “standard” a system is, the less it will cost in the long run. It is much easier to hire staff to run a system that is built with industry-standard software tools. Unfortunately, many HPC system administrators are not familiar with the standard tools that are widely used in the enterprise IT space. HPC centers are often operated as “silos” within an organization, staffed by graduate students and faculty with backgrounds in research. Enterprise IT personnel seldom cross over into HPC, since they often lack the academic qualifications for “research” positions, and the pay in research organizations is often significantly lower than in corporate IT.

There is no “one-size-fits-all” solution to the problem of cluster management. Rather, the HPC community can advance the state of cluster administration by changing the way that we approach the subject. At a high level, those who are responsible for specifying, designing, and purchasing clusters need to start prioritizing system administration. A simple calculation of Tflops per dollar is no longer sufficient. A smaller cluster with a high degree of complexity will require a larger budget for administrative systems and configuration. The alternative is to pay for these costs down the road, when the inadequacy of the administrative tools becomes clear and “unexpected” system administration costs arise.

It is difficult to justify spending more money up-front for better management tools unless there has been an honest assessment of the cost of the cluster over its lifetime. When building or purchasing a cluster, the designer or vendor must be required to specify how the proposed cluster will implement each management component. It is important to understand that the decision “we’re not going to implement this component” usually means, “we’re going to do it manually.” That can be a valid choice, but we have to budget for the long-term cost. How will the cost change if the user count or core count increases by a factor of five over the next five years? Another option is to outsource certain functions that are not core to your mission. For example, the security aspect of many clusters is implicitly outsourced to a campus or corporate IT department, which operates a border firewall that protects the cluster from outside attacks.

In order to ask the right questions, decision makers must know what components are required to manage an HPC cluster. The HPC community can help by defining a set of standard cluster management components that will form an open specification for an HPC cluster. The exact set of components, and which components should be automated first, is open to debate. As a starting point for a broader discussion, I propose that the minimum core components required for any HPC cluster are identity management, workload management, and security. Another tier of components may be implemented manually on “personal” clusters, but become increasingly time-consuming as the number of users increases beyond the size of a small research group. These components include monitoring, alerting/notification, and configuration management. Finally, designing systems for reliability becomes critical for clusters that serve large numbers of users.
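One way to make such a specification concrete is to record, for each proposed component, how it is actually implemented, with "manual" treated as a valid but budgeted choice. The sketch below expresses the component list from this article as data; the implementation choices shown are hypothetical examples, and only Nagios and Ganglia are named in the article itself.

```python
"""A sketch of the proposed cluster management components expressed as data.
Tool names and implementation choices are illustrative assumptions."""

CLUSTER_MANAGEMENT_SPEC = {
    "core": {
        "identity_management": "manual",         # e.g. an LDAP server instead
        "workload_management": "SLURM",          # any resource manager
        "security": "outsourced to campus IT",   # border firewall, per article
    },
    "second_tier": {                             # painful to do manually at scale
        "monitoring": "Ganglia",
        "alerting_notification": "Nagios",
        "configuration_management": "manual",    # e.g. Puppet, Ansible, Chef
    },
    "large_scale": {
        "reliability_engineering": "not implemented",
    },
}

def manual_components(spec: dict) -> list:
    """List components still handled by hand -- the long-term cost drivers."""
    return [
        name
        for tier in spec.values()
        for name, impl in tier.items()
        if impl in ("manual", "not implemented")
    ]

if __name__ == "__main__":
    print("Budget manual labor for:", manual_components(CLUSTER_MANAGEMENT_SPEC))
```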

The HPC community can also help cluster designers and administrators choose standard system management tools. In order to take advantage of the ecosystem of enterprise IT management tools, HPC sysadmins need to know which tools are available, and they need information to help them choose the best tool for their needs. The open cluster specification can enumerate the most widely used tools that can be used to automate each component of a cluster. To help choose the right tool for a particular situation, the HPC community needs to publish more information about how we manage our clusters. We need to report which management tools we are using, why we are using them, and how well those tools are working for us. We also need to increase our contributions to open source projects, documentation, and standards so that other HPC sysadmins can benefit from our experience.

Commercial software, whether provided by a cluster vendor or a third-party vendor, is also an important part of cluster administration. However, even commercial tools need to “play nicely” with other software to enable a healthy HPC ecosystem. HPC-specific management tools need to offer better support for modern management features. For example, any tool that depends upon user identities should be able to authenticate against an identity server instead of requiring an administrator to create and maintain another unique identity for every user. Software tools should also be able to exchange data in a standard format (SNMP, JSON, XML, etc.) to enable centralized services such as monitoring and logging.
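As a sketch of what "exchange data in a standard format" can mean in practice, the following hypothetical node agent reports a few basic metrics as JSON to a central collector; the endpoint URL and field names are assumptions, and any monitoring or logging service that accepts HTTP and JSON could consume the output.

```python
#!/usr/bin/env python3
"""A minimal sketch of interoperable monitoring: a node-side agent that
reports basic metrics as JSON to a central collector. The collector URL
and field names are assumptions for illustration."""

import json
import os
import socket
import time
from urllib.request import Request, urlopen

COLLECTOR_URL = "http://monitor.example.edu:8080/metrics"  # hypothetical endpoint

def collect_metrics() -> dict:
    """Gather a few portable metrics; a real agent would report much more."""
    load1, load5, load15 = os.getloadavg()
    return {
        "host": socket.gethostname(),
        "timestamp": time.time(),
        "load_1m": load1,
        "load_5m": load5,
        "load_15m": load15,
    }

def push_metrics(metrics: dict) -> None:
    """POST the metrics as JSON to the central collector."""
    req = Request(
        COLLECTOR_URL,
        data=json.dumps(metrics).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urlopen(req, timeout=5)

if __name__ == "__main__":
    push_metrics(collect_metrics())
```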

It’s time for the HPC community to start regarding system administration as a critical aspect of an HPC cluster. We can build better administrative frameworks by drawing on the strategies and tools developed for enterprise IT. Working together as a community, we can dramatically reduce the amount of time that is wasted on outdated, inefficient cluster management practices.

About the Author

Craig Finch is a Principal Consultant at Rootwork InfoTech LLC (http://www.rootwork.it/). Craig started his career as a design engineer in the wireless communications sector during the rapid growth period of the late 1990s. Growing bored with the evolutionary nature of wireless technology, the end of the tech bubble provided an occasion to take a break from industry and pursue a full-time PhD in Modeling and Simulation while performing research at the NanoScience Technology Center at the University of Central Florida (UCF). Craig developed predictive computational tools and used them to design optical biosensors, microfluidic devices, and functional tissue constructs. Following his PhD, he was responsible for STOKES, the core high performance computing cluster at UCF. Dr. Finch was a co-PI on several proposals, including a funded cyberinfrastructure grant from the National Science Foundation. On the side, Craig has worked as a concert lighting designer, written a technical book (Sage Beginner's Guide), and held leadership positions in volunteer organizations.

[1] http://www.ncep.noaa.gov/newsletter/october2012/printable.shtml

[2] http://www.noaanews.noaa.gov/stories2013/2013029_supercomputers.html

 

 

 
