The Impact of Cluster Virtualization on HPC

By Nicole Hemsoth

December 8, 2006

In part two of this interview, Don Becker, CTO of Penguin Computing and co-inventor of the Beowulf clustering model, and Pauline Nist, senior vice president of product development and management for Penguin Computing, describe how cluster virtualization changes the cost model of server resources and how virtualization and clustering will evolve in the marketplace. They also discuss Penguin’s role in this evolution.

Read part one of the “Impact of Cluster Virtualization on HPC” interview.

HPCwire: How does managing and using a cluster like a single machine help to increase productivity and lower your operational costs?

Becker: Let’s drill down into more detail on Scyld ClusterWare’s architecture. The essence of Scyld’s differentiation is that it is the only virtualization or system management solution that presents a fully functional, SMP-like usage and administration model. This unique architecture is what enables customers to truly realize the potential of Linux clustering to drive productivity up and cost out of their organization. It offers the practicality and flexibility of ‘scale-out’ with the simplicity of ‘scale-up.’

The great thing about a scale-out architecture with commodity clusters is that the capital costs are tremendously lower. The flexibility to expand and upgrade it is really attractive, and spreading the compute power across many servers lowers your vulnerability. The downside is that the bigger the cluster, the bigger the operational nightmare of provisioning it, managing it and keeping it consistent where consistency is crucial — that is, if you put it together in the traditional ad hoc configuration.

The whole idea behind cluster virtualization is to make large pools of servers as easy to provision, use and manage as a single server, no matter how many “extra processors” you put behind it. Instead of the traditional approach of a full, disk-based Linux install on each server and complex scripting to try to mask the complexity of setting up users, security, running jobs and monitoring what is happening, Scyld ClusterWare virtualizes the cluster into a single server — the Master. Everything is done in this one place.

This single point of command and control vastly simplifies DIY cluster management, which is time-, skill- and cost-intensive, and it eliminates multiple layers of administration, management and support, driving cost out. Software installs and updates are done on one machine. Users are set up and workloads are run on one machine. Statistics from the cluster are gathered, stored and graphically displayed on one machine. Even the Linux process space is virtualized across the cluster into one virtual process space on one machine, so jobs can be monitored and managed in one place.
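
As an illustration of that single process space, here is a minimal Python sketch of the concept. It is not Scyld’s actual interface; the data structures, helper names and sample data are assumptions made for the example. The point is simply that the Master can merge per-node process tables into one listing that reads like the process table of a single machine.

```python
# Illustrative only: a master-side view that merges per-node process tables
# into one "single system" listing. Names and data are hypothetical.

from dataclasses import dataclass


@dataclass
class Proc:
    pid: int       # process ID as reported by the compute node
    node: int      # compute node number
    command: str   # command line of the job


def unified_process_view(per_node_tables):
    """Merge per-node process tables into one master-side listing."""
    view = []
    for node, table in per_node_tables.items():
        for pid, command in table:
            view.append(Proc(pid=pid, node=node, command=command))
    # Sort by node, then pid, so the output reads like one machine's process table.
    return sorted(view, key=lambda p: (p.node, p.pid))


if __name__ == "__main__":
    # Dummy data standing in for what the Master would gather from its nodes.
    tables = {
        0: [(4301, "./solver --part 0"), (4302, "./solver --part 1")],
        1: [(2117, "./solver --part 2")],
    }
    for p in unified_process_view(tables):
        print(f"node {p.node:>2}  pid {p.pid:>6}  {p.command}")
```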

The compute servers exist only to run applications specified by the Master node and are automatically provisioned with a lightweight, in-memory operating system from the single software installation on that Master. In this way, the compute servers are fully provisioned in under 20 seconds and users can flexibly add or delete nodes, or repurpose them, on demand, in seconds, making the cluster extraordinarily scalable and resilient.

They are always consistent, which is critical in HPC, and stripped of any unnecessary system services and associated vulnerabilities, making the cluster inherently more reliable and secure.
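
The provisioning flow described above can be pictured with a small simulation. The following is a toy Python sketch with assumed class names, image contents and timings, not Scyld’s actual boot protocol: a bare compute node pulls a lightweight in-memory image from the Master’s single installation, loads it, and registers as ready to take jobs.

```python
# Toy simulation of diskless, master-driven provisioning. Everything here
# (class names, image contents, timings) is illustrative.

import time


class Master:
    def __init__(self):
        # The single software installation every node boots from.
        self.image = {"kernel": "vmlinuz", "ramdisk_mb": 8}
        self.ready_nodes = []

    def serve_image(self):
        # Every node receives the same image, which keeps the cluster consistent.
        return dict(self.image)

    def register(self, node_id):
        self.ready_nodes.append(node_id)


def boot_node(node_id, master):
    start = time.time()
    image = master.serve_image()   # pull the in-memory OS from the Master
    time.sleep(0.01)               # stand-in for the real transfer and boot work
    master.register(node_id)       # the node is now ready to accept jobs
    return time.time() - start


if __name__ == "__main__":
    master = Master()
    for n in range(4):
        elapsed = boot_node(n, master)
        print(f"node {n} ready in {elapsed:.2f}s (simulated)")
    print("ready nodes:", master.ready_nodes)
```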

On top of this fundamentally superior architecture for compute resource management, we offer tools for virtualizing the HPC workloads across the available resources, in time and according to business policies and priorities, thus maximizing resource utilization against real business objectives.

There is no doubt that the more servers you have to manage, the harder and more costly it becomes. Scyld ClusterWare reduces the entire pool to a logical extension of the single Master machine and makes that pool phenomenally easier and less expensive to work with.

The benefits of a commercially supported cluster virtualization solution are realized every day of the cluster life cycle and begin returning on the initial investment immediately. First, clusters can be up and running applications on the first morning of software installation, instead of days or weeks with DIY “project” software. From there, updating software is a simple update on one machine that automatically and instantly updates compute nodes as they run new applications. Adding a new compute node is as effortless as plugging it in, and it can be ready to take jobs in under 20 seconds.

A critical point about Scyld provisioning is the intelligence of its booting/provisioning subsystem. Very few players address this issue. Scyld not only auto-provisions the compute nodes but dynamically detects the hardware devices and loads the appropriate device drivers. A typical Scyld compute node uses about 8 MB for the OS and ClusterWare, as opposed to roughly 400 MB with a traditional full install — the OS footprint is about 50 times smaller, leaving far more memory for applications and far less chance of applications swapping out to disk.
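
A rough Python sketch of that idea follows. The driver table and device IDs here are examples only, not Scyld’s actual detection logic: the node reports the PCI devices it finds at boot, and only the matching driver modules are selected for its small in-memory image.

```python
# Illustrative only: choose the minimal set of driver modules for the devices
# a node reports at boot. The table entries are examples, not a real database.

DRIVER_TABLE = {
    "8086:100e": "e1000",     # Intel gigabit Ethernet controller
    "10ec:8139": "8139too",   # Realtek fast Ethernet controller
    "1000:0030": "mptspi",    # LSI SCSI controller
}


def drivers_for(detected_pci_ids):
    """Return the driver modules this node actually needs, in detection order."""
    needed = []
    for dev in detected_pci_ids:
        module = DRIVER_TABLE.get(dev)
        if module and module not in needed:
            needed.append(module)
    return needed


if __name__ == "__main__":
    # Device IDs a node might report during its lightweight boot.
    node_devices = ["8086:100e", "1000:0030", "ffff:0000"]  # last one is unknown
    print("modules to load:", drivers_for(node_devices))
```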

Scyld provides instant cluster stats and job stats for the entire cluster at all times on a single machine, with no need to ever log into compute nodes, saving enormous time every single day. Admins and users write far fewer and vastly simpler scripts to automate tasks, since the single-system environment is so much more intuitive and seamless. This saves days and weeks over the course of a given year, especially when new people are coming up to speed on the system.

HPCwire: How do cluster virtualization and virtual machine technology play together and where is the market play for each?

Nist: What is interesting about virtual machine technology is that it can allow you to consolidate ten physical servers onto one box, but there are still ten virtual servers, each with its own OS and application stack that needs to be deployed, managed and monitored.

It’s almost ironic to think of an admin buying 50 real servers so that he can turn them into 500 virtual servers with different workloads and then use cluster virtualization software to make it all as easy to manage as one simple, powerful server with 1000 or 2000 processors. But that’s definitely our vision of the evolution of the computing infrastructure ecosystem.

Now, server consolidation using machine virtualization is pretty much an enterprise play, particularly at the application tier, where you otherwise have very low server utilization due to the siloed applications we spoke about. There is overhead associated with carving up the server into multiple virtual machines, and I/O bottlenecks are still a big issue. But the applications here are not so I/O-bound, and the net gain of server consolidation outweighs the general overhead in enterprise datacenters.

In HPC, every ounce of performance is crucial, and the fair number of I/O-bound applications makes machine virtualization less viable for production HPC environments. Virtual machines are great for test and prototyping, and we use them that way every day, so it is just a matter of the technology evolving to overcome the performance issues before usage expands to production.

Ultimately, we see cluster virtualization developing as follows:

Today: A dedicated cluster with physical resources, which appears and acts as a single system — a virtual pool of resources that can expand/contract on demand.

Near future: Within the cluster, individual compute nodes are virtualized, which enables different applications to run on individual machines.

Longer term: Beyond the cluster, an ecosystem of virtual compute nodes, where nodes ‘borrowed’ from beyond the cluster for a transient period are used to maximize the entire infrastructure. VM nodes are provisioned on demand and wiped when no longer needed. This yields dramatic scalability while retaining simplicity.

Meanwhile, clustered and Grid computing are definitely crossing over into the enterprise datacenter in areas like stateless web farming, where large pools of servers need to be harnessed to provide significant, coordinated compute power for changing workloads. This is where we see demand converge for the simplicity of cluster virtualization to address the proliferation of virtual servers across a farm of physical servers. The most compelling feature is automating workflows against organizational policies and priorities to match workloads to the available resources on demand — adaptive and automated computing in the enterprise.
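
As a conceptual illustration of policy-driven placement, the short Python sketch below ranks queued jobs by an assumed business priority and hands each one the free nodes it asks for. The job names, priorities and node counts are invented for the example; this is not Penguin’s scheduler.

```python
# Conceptual sketch of priority-based workload placement. All inputs are
# invented; real policy engines weigh many more factors.

def schedule(jobs, free_nodes):
    """jobs: list of (name, priority, nodes_wanted); returns (placements, idle)."""
    placements = []
    for name, priority, wanted in sorted(jobs, key=lambda j: -j[1]):
        if wanted <= len(free_nodes):
            assigned, free_nodes = free_nodes[:wanted], free_nodes[wanted:]
            placements.append((name, assigned))
    return placements, free_nodes


if __name__ == "__main__":
    queue = [("risk-report", 10, 4), ("web-farm-burst", 30, 8), ("dev-test", 5, 2)]
    placed, idle = schedule(queue, free_nodes=list(range(12)))
    for name, nodes in placed:
        print(f"{name:>15} -> nodes {nodes}")
    print("idle nodes:", idle)
```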

HPCwire: What’s next in the world of virtualization and what role will Penguin play?

Becker: We see three major areas of activity moving pretty rapidly right now.

First, there are intense efforts to address performance optimization for virtual machine technology. CPU vendors are rapidly rolling out hardware mechanisms to enhance support for virtual machines. Not all of the early work has been successful, but the key stakeholders continue to collaborate to optimize the solution. The I/O bottlenecks are the most crucial to solve.

There is also an interesting initiative surrounding the virtualization of USB ports on remote servers, which is a very tricky problem to solve but can address some annoying aspects of connecting to remote machines…

Second, the leading OS vendors are aggressively working to incorporate and standardize foundational hypervisor support for machine virtualization in the kernel. This seems a likely move on their part to maintain control of the software socket on the hardware.

Finally, the commoditization of basic virtual machine capability will drive a shift in innovation up to the level of provisioning and monitoring virtual machines and automating workflows to map resources to the shifting demands of the application clients. This is the big payoff for the enterprise: business demand can automatically pull in the compute resources needed, on demand. We already see VMware, XenSource and third parties emerging with early solutions for deploying and managing virtual machines across large pools of servers.

HPCwire: What role will Penguin play in this?

Becker: Penguin Computing can add tremendous value and real solutions in this emerging movement. The trend with virtual machine hypervisors is that they are effectively a specialized, lightweight “boot OS” sitting directly on the hardware, which then provisions virtual machines for launching full general-purpose OSes and the application stack.

Scyld ClusterWare can leverage this architecture in two ways.

A Scyld compute node can rapidly provision these lightweight OS platforms and then launch multiple virtual machines, or virtual compute nodes, out of a single physical machine. Scyld ClusterWare is provisioned to each virtual compute node to run different sets of applications that may have different OS environment requirements. One practical application could be a cluster that needs to run one set of applications requiring an RH ES 3 (2.4-based) kernel and others that need an RH ES 4 (2.6-based) kernel, and to do so on demand during a given period.
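
A minimal sketch of that scenario, with hypothetical image names rather than a real Scyld interface, might map each application’s required kernel line to a compute-node image and group the applications accordingly, so the right virtual compute nodes can be booted on demand.

```python
# Illustrative only: group applications by the OS image their kernel
# requirement calls for. Image names and application names are hypothetical.

from collections import defaultdict

# Assumed mapping of kernel series to provisioning image.
IMAGES = {"2.4": "rhel3-compute", "2.6": "rhel4-compute"}


def plan_nodes(app_requirements):
    """app_requirements: dict of application name -> required kernel series."""
    plan = defaultdict(list)
    for app, kernel in app_requirements.items():
        plan[IMAGES[kernel]].append(app)
    return dict(plan)


if __name__ == "__main__":
    apps = {"legacy-crash-sim": "2.4", "new-cfd-solver": "2.6", "post-proc": "2.6"}
    for image, names in plan_nodes(apps).items():
        print(f"boot virtual compute nodes from '{image}' for: {', '.join(names)}")
```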

Scyld ClusterWare excels at rapidly provisioning diskless operating environments on demand. Within a Scyld cluster we would, by default, provision any hypervisor OS to compute nodes that require virtual machine capability. What might be more interesting is whether there is a more general play for Scyld in enterprises that adopt the hypervisor OS as their default provisioned host platform in order to launch VMs on demand to meet changing business needs. Rapid diskless provisioning is gaining mindshare as a general concept, and Scyld could offer general provisioning infrastructure in this environment.

Cluster virtualization is here today and is already solving very real customer problems. As the technology around virtualization continues to evolve and advance, very powerful benefits will continue to be realized by organizations faced with the challenges of server proliferation and of matching business priorities, on demand, to the resources these servers bring to bear.

—–

Donald Becker is the CTO of Penguin Computing and co-inventor of Beowulf clustering. He is an internationally recognized operating system developer. In 1999 he founded Scyld Computing and led the development of the next-generation Beowulf cluster operating system. Prior to founding Scyld, Donald started the Beowulf Parallel Workstation project at NASA Goddard Space Flight Center. He is the co-author of How to Build a Beowulf: A Guide to the Implementation and Application of PC Clusters. With colleagues from the California Institute of Technology and the Los Alamos National Laboratory, he was the recipient of the IEEE Computer Society 1997 Gordon Bell Prize for Price/Performance.

Pauline Nist is the SVP of Product Development and Management at Penguin Computing. Before joining Penguin Computing, Pauline served as vice president of Quality for HP’s Enterprise Storage and Servers Division and immediately prior to that, as vice president and general manager for HP’s NonStop Enterprise Division, where she was responsible for the development, delivery, and marketing of the NonStop family of servers, database, and middleware software. Prior to the NonStop Enterprise Division (formerly known as Tandem Computers), Pauline served as vice president of the Alpha Servers business unit at Digital Equipment Corporation.
