The Impact of Cluster Virtualization on HPC

By Nicole Hemsoth

December 8, 2006

In part two of this interview, Don Becker, CTO of Penguin Computing and co-inventor of the Beowulf clustering model, and Pauline Nist, senior vice president of product development and management for Penguin Computing, describe how cluster virtualization changes the cost model of server resources and how virtualization and clustering will evolve in the marketplace. They also discuss Penguin’s role in this evolution.

Read part one of the “Impact of Cluster Virtualization in HPC” interview.

HPCwire: How does managing and using a cluster like a single machine help to increase productivity and lower your operational costs?

Becker: Let’s drill down into more detail on Scyld ClusterWare’s architecture. The essence of Scyld’s differentiation is that it is the only virtualization or system management solution which presents a fully-functional, SMP-like usage and administration model. It is the unique architecture of Scyld that enables customers to truly realize the potential of Linux clustering to drive productivity up and cost out of their organization. It offers the practicality and flexibility of ‘scale-out’ with the simplicity of ‘scale-up.’

The great thing about a scale-out architecture with commodity clusters is that the capital costs are tremendously lower. The flexibility to expand and upgrade it is really attractive, and it lowers your vulnerability by spreading the compute power across many servers. The downside is that the bigger the cluster, the more of an operational nightmare it is to provision, manage and keep consistent where consistency is crucial — that is, if you put it together in the traditional ad hoc configuration.

The whole idea behind cluster virtualization is to make large pools of servers as easy to provision, use and manage as a single server, no matter how many “extra processors” you put behind it. Instead of the traditional approach of a full, disk-based Linux install on each server and complex scripting to try to mask the complexity of setting up users, security, running jobs and monitoring what is happening, Scyld ClusterWare virtualizes the cluster into one single server — the Master. Everything is done in this one place.

This single point of command and control vastly simplifies DIY cluster management, which is otherwise time-, skill- and cost-intensive, and it eliminates multiple layers of administration, management and support, driving cost out. Software installs and updates are done on one machine. Users are set up and workloads are run on one machine. Statistics from the cluster are gathered, stored and graphically displayed on one machine. Even the Linux process space is virtualized across the cluster into one virtual process space on one machine, so that jobs can be monitored and managed in one place.
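
To make that single-system view concrete, here is a minimal monitoring sketch in Python, assuming Scyld’s BProc-style utilities (for example, bpstat for node status) are available on the Master; the command names and flags shown are assumptions, not a transcription of Scyld’s actual interfaces.

```python
# Minimal sketch, intended to run on the Master only. Assumes BProc-style utilities
# such as bpstat are installed; the exact command names and flags are assumptions
# and may differ between ClusterWare versions.
import subprocess

def node_status() -> list[str]:
    """Ask the Master for the status of every compute node."""
    out = subprocess.run(["bpstat"], capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

def cluster_processes(keyword: str) -> list[str]:
    """Because the process space is unified on the Master, a plain `ps` here already
    lists jobs that are physically running on the compute nodes."""
    out = subprocess.run(["ps", "-ef"], capture_output=True, text=True, check=True)
    return [line for line in out.stdout.splitlines() if keyword in line]

if __name__ == "__main__":
    print("\n".join(node_status()))
    print("\n".join(cluster_processes("mpirun")))  # 'mpirun' is just an example filter
```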

The compute servers exist only to run applications specified by the Master node and are automatically provisioned with a lightweight, in-memory operating system from the single software installation on that Master. In this way, the compute servers are fully provisioned in under 20 seconds and users can flexibly add or delete nodes, or repurpose them, on demand, in seconds, making the cluster extraordinarily scalable and resilient.

They are always consistent, which is critical in HPC, and stripped of any unnecessary system services and associated vulnerabilities, making the cluster inherently more reliable and secure.

On top of this fundamentally superior architecture for compute resource management, we offer tools for virtualizing the HPC workloads across the available resources, in time and according to business policies and priorities, thus maximizing resource utilization against real business objectives.
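
As a purely illustrative sketch of what matching workloads to resources according to business policies and priorities can look like (every name below is hypothetical and not part of Scyld’s tooling):

```python
# Illustrative only: a toy priority-driven placement policy; all names are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class Job:
    name: str
    priority: int       # higher means more important to the business
    nodes_needed: int

@dataclass
class Pool:
    free_nodes: int

def schedule(jobs: List[Job], pool: Pool) -> List[str]:
    """Start jobs in business-priority order until the free nodes run out."""
    started = []
    for job in sorted(jobs, key=lambda j: j.priority, reverse=True):
        if job.nodes_needed <= pool.free_nodes:
            pool.free_nodes -= job.nodes_needed
            started.append(job.name)
    return started

jobs = [Job("nightly_risk_run", 10, 16), Job("dev_smoke_test", 2, 4), Job("design_sweep", 8, 8)]
print(schedule(jobs, Pool(free_nodes=24)))  # with 24 free nodes, only the two highest-priority jobs fit
```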

There is no doubt that the more servers you have to manage, the harder and more costly it becomes. Scyld ClusterWare reduces the entire pool to a logical extension of the single Master machine and makes that pool phenomenally easier and less expensive to work with.

The benefits of a commercially supported cluster virtualization solution are realized every day of the cluster life cycle and begin returning on the initial investment immediately. First, clusters can be up and running applications on the first morning of software installation, instead of in the days or weeks it takes with DIY “project” software. From there, updating software is a simple update on one machine that automatically and instantly updates compute nodes as they run new applications. Adding a new compute node is as effortless as plugging it in, and it can be ready to take jobs in under 20 seconds.

A critical point about Scyld provisioning is the intelligence of its booting/provisioning subsystem. Very few players address this issue. Scyld not only auto-provisions the compute nodes but dynamically detects the hardware devices and loads the appropriate hardware device drivers. A typical Scyld compute node uses about 8 MB for the OS and ClusterWare, as opposed to 400 MB with a traditional full install — that leaves roughly 50 times more memory for applications and far less chance of applications swapping out to disk.

Scyld provides instant cluster stats and job stats for the entire cluster at all times on a single machine, with no need to ever log into compute nodes, saving enormous time every single day. Admins and users write far fewer and vastly simpler scripts to automate tasks, since the single-system environment is so much more intuitive and seamless. This saves days and weeks over the course of a given year, especially when new people come up to speed on the system.
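
A rough sketch of the kind of one-machine reporting this enables, again assuming a BProc-style `bpsh <node> <command>` that runs a command on a compute node from the Master (the invocation details are an assumption):

```python
# Hypothetical stats roll-up run entirely on the Master; assumes `bpsh <node> <cmd>`
# executes a command on the named compute node (exact flags may differ).
import subprocess

NODES = range(8)  # example: an eight-node cluster numbered 0..7

def load_average(node: int) -> float:
    """Read a compute node's 1-minute load average without logging into it."""
    out = subprocess.run(["bpsh", str(node), "cat", "/proc/loadavg"],
                         capture_output=True, text=True, check=True)
    return float(out.stdout.split()[0])

def report() -> None:
    for node in NODES:
        print(f"node {node:2d}: load {load_average(node):.2f}")

if __name__ == "__main__":
    report()
```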

HPCwire: How do cluster virtualization and virtual machine technology play together and where is the market play for each?

Nist: What is interesting about virtual machine technology is that it can allow you to consolidate ten physical servers onto one box, but there are still ten virtual servers, each with its own OS and application stack, that need to be deployed, managed and monitored.

It’s almost ironic to think of an admin buying 50 real servers so that he can turn them into 500 virtual servers with different workloads and then use cluster virtualization software to make it all as easy to manage as one simple, powerful server with 1000 or 2000 processors. But that’s definitely our vision of the evolution of the computing infrastructure ecosystem.

Now, server consolidation using machine virtualization is pretty much an enterprise play, particularly at the application tier, where you otherwise have very low server utilization due to the siloed applications we spoke about. There is overhead associated with carving up the server into multiple virtual machines, and I/O bottlenecks are still a big issue. But the applications here are not so I/O-bound, and the net gain of server consolidation outweighs the general overhead in enterprise datacenters.

In HPC, every ounce of performance is crucial, and the fair number of I/O-bound applications makes machine virtualization less viable for production HPC environments. Virtual machines are great for test and prototyping use, and we do that every day, so it is just a matter of the technology evolving to overcome the performance issues before usage expands into production.

Ultimately, we see cluster virtualization developing as follows:

Today: A dedicated cluster with physical resources, which appears and acts as a single system — a virtual pool of resources that can expand/contract on demand.

Near future: Within the cluster, individual compute nodes are virtualized, which enables different applications to run on individual machines.

Longer term: Beyond the cluster, an ecosystem of virtual compute nodes in which nodes ‘borrowed’ from outside the cluster for a transient period are used to maximize the entire infrastructure. VM nodes are provisioned on demand and wiped out when not needed. This yields dramatic scalability while retaining simplicity.

Meanwhile, clustered and grid computing are definitely crossing over into the enterprise datacenter in areas like stateless web farming, where large pools of servers need to be harnessed to provide significant, coordinated compute power for changing workloads. This is where we see demand converge for the simplicity of cluster virtualization to address the proliferation of virtual servers across a farm of physical servers. The most compelling feature is in automating workflows against organizational policies and priorities to match workloads to the available resources on demand — adaptive and automated computing in the enterprise.

HPCwire: What’s next in the world of virtualization and what role will Penguin play?

Becker: We see three major areas of activity moving pretty rapidly right now.

First, there are intense efforts to address performance optimization for virtual machine technology. CPU vendors are rapidly rolling out hardware mechanisms to enhance support for virtual machines. Not all of the early work has been successful, but the key stakeholders continue to collaborate to optimize the solution. The I/O bottlenecks are the most crucial to solve.

There is also an interesting initiative surrounding the virtualization of USB ports on remote servers, which is a very tricky problem to solve but can address some annoying aspects of connecting to remote machines…

Second, the leading OS vendors are aggressively working to incorporate and standardize foundational hypervisor support for machine virtualization into the kernel. This seems a likely move on their part to maintain control of the software socket on the hardware.

Finally, the commoditization of the foundation of virtual machine capability will drive a shift in innovation up to the level of managing the provisioning and monitoring of virtual machines and automating workflows to map resources to the shifting demands of the application clients. This is the big payoff for the enterprise: business demand can automatically cull the compute resources needed, on demand. We already see VMware, XenSource, and third parties emerging with early solutions for deploying and managing virtual machines across large pools of servers.
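
To picture that shift in innovation, here is a deliberately generic sketch of such an automated workflow; the stub functions stand in for whatever VM-management API a site actually uses and are not VMware or XenSource calls.

```python
# Illustrative control logic; queue_depth/running_vms/provision_vm/destroy_vm are
# hypothetical stubs for a site's real VM-management API.
TARGET_JOBS_PER_VM = 4

def queue_depth() -> int:
    return 12                     # placeholder: pretend 12 jobs are waiting

def running_vms() -> int:
    return 1                      # placeholder: one virtual compute node is up

def provision_vm() -> None:
    print("provisioning a virtual compute node")

def destroy_vm() -> None:
    print("retiring an idle virtual compute node")

def reconcile() -> None:
    """Grow or shrink the virtual-machine pool so it tracks application demand."""
    wanted = max(1, queue_depth() // TARGET_JOBS_PER_VM)
    have = running_vms()
    for _ in range(wanted - have):
        provision_vm()
    for _ in range(have - wanted):
        destroy_vm()

if __name__ == "__main__":
    reconcile()                   # a real controller would run this periodically
```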

HPCwire: What role will Penguin play in this?

Becker: Penguin Computing can add tremendous value and real solutions in this emerging movement. The trend with virtual machine hypervisors is that they are effectively a specialized, lightweight “boot OS” sitting directly on the hardware that then provisions virtual machines for launching full general-purpose OSes and the application stack.

Scyld ClusterWare can leverage this architecture in two ways.

A Scyld compute node can rapidly provision these lightweight OS platforms and then launch multiple virtual machines, or virtual compute nodes, out of a single physical machine. Scyld ClusterWare is provisioned to each virtual compute node to run different sets of applications that may have different OS environment requirements. One practical application of this could be a cluster that needs to run one set of applications requiring a RH ES 3 (2.4-based) kernel and others that need a RH ES 4 (2.6-based) kernel, and to do so on demand during a given period.
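
As a hypothetical illustration of that on-demand matching (the image and application names are invented; Scyld’s actual configuration mechanism is not shown here):

```python
# Illustration only: pick which provisioned environment a job's virtual compute
# node should boot, based on the kernel generation the application requires.
KERNEL_IMAGES = {
    "2.4": "rhel3-compute-image",   # e.g., apps certified only on a RH ES 3 kernel
    "2.6": "rhel4-compute-image",   # e.g., apps that need a RH ES 4 kernel
}

APP_REQUIREMENTS = {
    "legacy_solver": "2.4",
    "new_cfd_code": "2.6",
}

def image_for(app: str) -> str:
    """Return the compute-node image a given application should be launched on."""
    kernel = APP_REQUIREMENTS[app]
    return KERNEL_IMAGES[kernel]

for app in APP_REQUIREMENTS:
    print(app, "->", image_for(app))
```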

Scyld ClusterWare excels at rapidly provisioning diskless operating environments on demand. Within a Scyld cluster we would, by default, provision any hypervisor OS to compute nodes that require virtual machine capability. What might be more interesting is if there is a more general play for Scyld in enterprises that adopt the hypervisor OS as their default provisioned host platform in order for them to launch VMs on demand to meet changing business needs. The concept of rapid diskless provisioning is gaining mindshare as a general concept. Scyld could offer general provisioning infrastructure in this environment.

Cluster virtualization is here today and is already solving very real customer problems. As the technology around virtualization continues to evolve and advance, very powerful benefits will continue to be realized by organizations faced with the challenges of server proliferation and of matching business priorities on demand to the resources brought to bear by these servers.

—–

Donald Becker is the CTO of Penguin Computing and co-inventor of Beowulf clusters. An internationally recognized operating system developer, he founded Scyld Computing in 1999 and led the development of the next-generation Beowulf cluster operating system. Prior to founding Scyld, Donald started the Beowulf Parallel Workstation project at NASA Goddard Space Flight Center. He is the co-author of How to Build a Beowulf: A Guide to the Implementation and Application of PC Clusters. With colleagues from the California Institute of Technology and the Los Alamos National Laboratory, he was the recipient of the IEEE Computer Society 1997 Gordon Bell Prize for Price/Performance.

Pauline Nist is the SVP of Product Development and Management at Penguin Computing. Before joining Penguin Computing, Pauline served as vice president of Quality for HP’s Enterprise Storage and Servers Division and immediately prior to that, as vice president and general manager for HP’s NonStop Enterprise Division, where she was responsible for the development, delivery, and marketing of the NonStop family of servers, database, and middleware software. Prior to the NonStop Enterprise Division (formerly known as Tandem Computers), Pauline served as vice president of the Alpha Servers business unit at Digital Equipment Corporation.
