The Impact of Cluster Virtualization on HPC

By Nicole Hemsoth

December 8, 2006

In part two of this interview, Don Becker, CTO of Penguin Computing and co-inventor of the Beowulf clustering model, and Pauline Nist, senior vice president of product development and management for Penguin Computing, describe how cluster virtualization changes the cost model of server resources and how virtualization and clustering will evolve in the marketplace. They also discuss Penguin’s role in this evolution.

Read part one of the “Impact of Cluster Virtualization on HPC” interview.

HPCwire: How does managing and using a cluster like a single machine help to increase productivity and lower your operational costs?

Becker: Let’s drill down into more detail on Scyld ClusterWare’s architecture. The essence of Scyld’s differentiation is that it is the only virtualization or system management solution that presents a fully functional, SMP-like usage and administration model. It is this unique architecture that enables customers to truly realize the potential of Linux clustering to drive productivity up and cost out of their organization. It offers the practicality and flexibility of ‘scale-out’ with the simplicity of ‘scale-up.’

The great thing about a scale-out architecture with commodity clusters is that the capital costs are tremendously lower. The flexibility to expand and upgrade it is really attractive, and it lowers your vulnerability by spreading the compute power across many servers. The downside is that the bigger the cluster, the more of an operational nightmare it is to provision, manage and keep consistent where consistency is crucial; that is, if you put it together in the traditional ad hoc configuration.

The whole idea behind cluster virtualization is to make large pools of servers as easy to provision, use and manage as a single server, no matter how many “extra processors” you put behind it. Instead of the traditional approach of a full, disk-based Linux install on each server and complex scripting to mask the complexity of setting up users, security, running jobs and monitoring what is happening, Scyld ClusterWare virtualizes the cluster into one single server, the Master. Everything is done in this one place.

This single point of command and control vastly simplifies DIY cluster management, which is time-, skill- and cost-intensive, and it eliminates multiple layers of administration, management and support, driving cost out. Software installs and updates are done on one machine. Users are set up, and workloads are run, on one machine. Statistics from the cluster are gathered, stored and graphically displayed on one machine. Even the Linux process space is virtualized across the cluster into one virtual process space on one machine, so that jobs can be monitored and managed in one place.
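
To make the unified process space concrete, here is a minimal sketch of what administration from the Master can look like, assuming BProc-style tools such as bpstat (node status) and bpsh (remote execution) of the kind Scyld ClusterWare shipped; the exact flags and output shown are assumptions, not verified Scyld syntax.

    # A minimal sketch, assuming BProc-style tools ("bpstat" for node status,
    # "bpsh" for remote execution) like those Scyld ClusterWare shipped.
    # Flags and output formats here are assumptions, not verified syntax.
    import subprocess

    def node_status():
        """List every compute node's state with one command on the Master."""
        return subprocess.run(["bpstat"], capture_output=True, text=True).stdout

    def run_on_all_nodes(cmd):
        """Launch a command cluster-wide from the Master ('-a' = all nodes)."""
        return subprocess.run(["bpsh", "-a", *cmd],
                              capture_output=True, text=True).stdout

    if __name__ == "__main__":
        print(node_status())
        # Jobs started this way appear in the Master's unified process table,
        # so ordinary tools like ps and kill manage them with no node logins.
        print(run_on_all_nodes(["uptime"]))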

The compute servers exist only to run applications specified by the Master node and are automatically provisioned with a lightweight, in-memory operating system from the single software installation on that Master. In this way, the compute servers are fully provisioned in under 20 seconds, and users can add, delete or repurpose nodes on demand in seconds, making the cluster extraordinarily scalable and resilient.

The compute servers are always consistent, which is critical in HPC, and are stripped of any unnecessary system services and their associated vulnerabilities, making the cluster inherently more reliable and secure.

On top of this fundamentally superior architecture for compute resource management, we offer tools for virtualizing the HPC workloads across the available resources, in time and according to business policies and priorities, thus maximizing resource utilization against real business objectives.

There is no doubt that the more servers you have to manage, the harder and more costly it becomes. Scyld ClusterWare reduces the entire pool to a logical extension of the single Master machine and makes that pool phenomenally easier and less expensive to work with.

The benefits of a commercially supported cluster virtualization solution are realized every day of the cluster life cycle and begin returning the initial investment immediately. First, clusters can be up and running applications on the first morning of software installation, instead of after the days or weeks typical of DIY “project” software. From there, updating software is a simple update on one machine that automatically and instantly updates compute nodes as they run new applications. Adding a new compute node is as effortless as plugging it in, and it can be ready to take jobs in under 20 seconds.

A critical point about Scyld provisioning is the intelligence of its booting/provisioning subsystem; very few players address this issue. Scyld not only auto-provisions the compute nodes but dynamically detects the hardware devices and loads the appropriate device drivers. A typical Scyld compute node uses about 8 MB for the OS and ClusterWare, as opposed to 400 MB with a traditional full install; that leaves 50 times more memory for applications and far less chance of applications swapping out to disk.
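
As an illustration of the kind of boot-time hardware detection described here, the sketch below uses a standard Linux mechanism: each PCI device exposes a ‘modalias’ string that modprobe can resolve to the matching driver module. This is a generic sketch of the idea, not Scyld’s actual implementation.

    # Generic illustration of boot-time driver autoloading on Linux; this is
    # not Scyld's code. Each PCI device exposes a 'modalias' string that
    # modprobe resolves against its alias database to load the right driver.
    import pathlib
    import subprocess

    def autoload_pci_drivers():
        for dev in pathlib.Path("/sys/bus/pci/devices").iterdir():
            modalias = (dev / "modalias").read_text().strip()
            # check=False: devices with no matching module are simply skipped.
            subprocess.run(["modprobe", modalias], check=False)

    if __name__ == "__main__":
        autoload_pci_drivers()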

Scyld provides instant cluster stats and job stats for the entire cluster at all times on a single machine, with no need to ever log into compute nodes, saving enormous time every single day. Admins and users write far fewer and vastly simpler scripts to automate tasks, since the single-system environment is so much more intuitive and seamless. This saves days and weeks over the course of a given year, especially when new people are coming up to speed on the system.
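
The scripting difference shows up in even a tiny example: the traditional approach loops over hosts with ssh, while the single-system model answers the same question with one local command on the Master. The node hostnames and the bpstat command below are illustrative assumptions.

    # Hedged comparison of per-node scripting versus single-system scripting.
    # The node hostnames and the bpstat command are illustrative assumptions.
    import subprocess

    NODES = ["n0", "n1", "n2"]  # hypothetical compute-node hostnames

    def load_traditional():
        """One ssh session per node: keys, timeouts and failures multiply."""
        return {n: subprocess.run(["ssh", n, "cat", "/proc/loadavg"],
                                  capture_output=True, text=True).stdout
                for n in NODES}

    def load_single_system():
        """One local query on the Master covers every node in the cluster."""
        return subprocess.run(["bpstat"], capture_output=True, text=True).stdout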

HPCwire: How do cluster virtualization and virtual machine technology play together and where is the market play for each?

Nist: What is interesting about virtual machine technology is that it can allow you to consolidate ten physical servers onto one box, but there are still ten virtual servers, each with its own OS and application stack, that need to be deployed, managed and monitored.

It’s almost ironic to think of an admin buying 50 real servers so that he can turn them into 500 virtual servers with different workloads and then use cluster virtualization software to make it all as easy to manage as one simple, powerful server with 1000 or 2000 processors. But that’s definitely our vision of the evolution of the computing infrastructure ecosystem.

Now, server consolidation using machine virtualization is pretty much an enterprise play, particularly at the application tier, where you otherwise have very low server utilization due to the siloed applications we spoke about. There is overhead associated with carving up the server into multiple virtual machines, and I/O bottlenecks are still a big issue. But the applications here are not so I/O-bound, and the net gain of server consolidation outweighs the general overhead in enterprise datacenters.

In HPC, every ounce of performance is crucial, and the fair number of I/O-bound applications makes machine virtualization less viable for production HPC environments. Virtual machines are great for test and prototyping usage, which we do every day, so it is just a matter of the technology evolving to overcome the performance issues before usage expands into production.

Ultimately, we see cluster virtualization developing as follows:

Today: A dedicated cluster of physical resources, which appears and acts as a single system, a virtual pool of resources that can expand and contract on demand.

Near future: Within the cluster, individual compute nodes are virtualized, which enables different applications to run on individual machines.

Longer term: Beyond the cluster, an ecosystem of virtual compute nodes, where nodes ‘borrowed’ from beyond the cluster for a transient period are used to maximize utilization of the entire infrastructure. VM nodes are provisioned on demand and wiped out when no longer needed. This yields dramatic scalability while retaining simplicity, as the sketch below illustrates.
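
A speculative sketch of that longer-term lifecycle follows; every class and method name is invented for illustration and does not correspond to any real Penguin or Scyld API.

    # Speculative sketch of the on-demand virtual-node lifecycle described
    # above. All names are hypothetical, not a real Penguin/Scyld API.
    class StubHypervisor:
        """Stand-in for a hypervisor host that can boot and destroy VM nodes."""
        def boot_vm(self, image):
            return StubNode(image)

    class StubNode:
        def __init__(self, image):
            self.image = image
        def idle(self):
            return True   # stand-in: report the node as idle
        def destroy(self):
            pass          # stand-in: a real node would be wiped here

    class VirtualNodePool:
        def __init__(self, hypervisor):
            self.hypervisor = hypervisor
            self.active = []

        def grow(self, count):
            """Borrow capacity: boot lightweight VM nodes into the cluster."""
            for _ in range(count):
                self.active.append(self.hypervisor.boot_vm("compute-node-initrd"))

        def shrink(self):
            """Wipe idle VM nodes so physical hosts return to other work."""
            for node in [n for n in self.active if n.idle()]:
                node.destroy()
                self.active.remove(node)

    if __name__ == "__main__":
        pool = VirtualNodePool(StubHypervisor())
        pool.grow(4)      # expand on demand
        pool.shrink()     # contract when the work is done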

Meanwhile, clustered and Grid computing are definitely crossing over into the enterprise datacenter in areas like stateless web farming, where large pools of servers need to be harnessed to provide significant, coordinated compute power for changing workloads. This is where we see demand converging on the simplicity of cluster virtualization to address the proliferation of virtual servers across a farm of physical servers. The most compelling feature is automating workflows against organizational policies and priorities to match workloads to the available resources on demand: adaptive and automated computing in the enterprise.

HPCwire: What’s next in the world of virtualization and what role will Penguin play?

Becker: We see three major areas of activity moving pretty rapidly right now.

First, there are intense efforts to address performance optimization for virtual machine technology. CPU vendors are rapidly rolling out hardware mechanisms to enhance support for virtual machines. Not all of the early work has been successful, but the key stakeholders continue to collaborate to optimize the solution. The I/O bottlenecks are the most crucial to solve.

There is also an interesting initiative surrounding the virtualization of USB ports on remote servers, which is a very tricky problem to solve but can address some annoying aspects of connecting to remote machines…

Second, the leading OS vendors are aggressively working to incorporate and standardize foundational hypervisor support for machine virtualization in the kernel. This seems the likely move on their part to maintain control of the software socket on the hardware.

Finally, the commoditization of the foundations of virtual machine capability will drive a shift in innovation up to the level of provisioning and monitoring virtual machines and automating workflows to map resources to the shifting demands of the application clients. This is the big payoff for the enterprise: business demand can automatically cull the needed compute resources on demand. We already see VMware, XenSource, and third parties emerging with early solutions for deploying and managing virtual machines across large pools of servers.

HPCwire: What role will Penguin play in this?

Becker: Penguin Computing can add tremendous value and real solutions to this emerging movement. The trend with virtual machine hypervisors is that they are effectively a specialized, lightweight “boot OS” sitting directly on the hardware, which then provisions virtual machines for launching full general-purpose OSes and the application stack.

Scyld ClusterWare can leverage this architecture in two ways.

First, a Scyld compute node can rapidly provision these lightweight OS platforms and then launch multiple virtual machines, or virtual compute nodes, out of a single physical machine. Scyld ClusterWare is provisioned to each virtual compute node to run different sets of applications that may have different OS environment requirements. One practical application could be a cluster that needs to run one set of applications requiring a RH ES 3 (2.4-based) kernel and others that must run on a RH ES 4 (2.6-based) kernel, and do so on demand during a given period.
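
A toy sketch of that scenario, in which a job’s application requirements select the virtual compute node image it lands on; the image names and mapping below are invented for illustration and are not Scyld configuration syntax.

    # Toy illustration of matching jobs to per-node OS environments; image
    # names and the mapping scheme are invented, not Scyld configuration.
    NODE_ENVIRONMENTS = {
        "legacy-app": {"kernel": "2.4", "image": "rhel3-compute.img"},
        "current-app": {"kernel": "2.6", "image": "rhel4-compute.img"},
    }

    def image_for(job_name):
        """Pick the virtual compute node image a job's app stack requires."""
        for app, env in NODE_ENVIRONMENTS.items():
            if job_name.startswith(app):
                return env["image"]
        return NODE_ENVIRONMENTS["current-app"]["image"]  # default: newer kernel

    if __name__ == "__main__":
        print(image_for("legacy-app-solver"))   # -> rhel3-compute.img
        print(image_for("current-app-mc"))      # -> rhel4-compute.img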

Second, Scyld ClusterWare excels at rapidly provisioning diskless operating environments on demand. Within a Scyld cluster we would, by default, provision a hypervisor OS to any compute nodes that require virtual machine capability. What might be more interesting is a more general play for Scyld in enterprises that adopt the hypervisor OS as their default host platform in order to launch VMs on demand to meet changing business needs. Rapid diskless provisioning is gaining mindshare as a general concept, and Scyld could offer general provisioning infrastructure in this environment.

Cluster virtualization is here today, already solving very real customer problems. As the technology around virtualization continues to evolve and advance, very powerful benefits will continue to be realized by organizations faced with the challenges of server proliferation and of matching business priorities, on demand, to the resources these servers bring to bear.

—–

Donald Becker is the CTO of Penguin Computing and co-inventor of the Beowulf clustering model, for which he is internationally recognized as an operating system developer. In 1999 he founded Scyld Computing and led the development of the next-generation Beowulf cluster operating system. Prior to founding Scyld, Donald started the Beowulf Parallel Workstation project at NASA Goddard Space Flight Center. He is the co-author of How to Build a Beowulf: A Guide to the Implementation and Application of PC Clusters. With colleagues from the California Institute of Technology and the Los Alamos National Laboratory, he was the recipient of the IEEE Computer Society 1997 Gordon Bell Prize for Price/Performance.

Pauline Nist is the SVP of Product Development and Management at Penguin Computing. Before joining Penguin Computing, Pauline served as vice president of Quality for HP’s Enterprise Storage and Servers Division and, immediately prior to that, as vice president and general manager of HP’s NonStop Enterprise Division, where she was responsible for the development, delivery and marketing of the NonStop family of servers, database and middleware software. Before the NonStop Enterprise Division (formerly Tandem Computers), Pauline served as vice president of the Alpha Servers business unit at Digital Equipment Corporation.
