Palm Trees, HPC and Virtualization

By Wolfgang Gentzsch

May 21, 2010

We were lounging in the paradise-like ambience of the beautiful conference hotel in Hammamet, Tunisia, earlier this week, under a verdant canopy of palm trees near the beach, not a cloud to be seen. The AICCSA International Conference on Computer Systems and Applications was in full swing, and Dr. Mazin Yousif had just presented the keynote on cloud computing.

Shortly after Mazin joined Intel in 2000 to work on InfiniBand, I remember, we worked together on a self-adaptable grid architecture. He then got into HPC as chair of the Management Working Group (MgtWG) of the InfiniBand Trade Association (IBTA), which defined the management architecture for the InfiniBand Architecture (IBA). For many of the TOP500 HPC systems today, InfiniBand is the underlying interconnect technology, optimized for high-bandwidth, low-latency communication.

Through InfiniBand, HPC applications, once the interconnect channel is established, have direct access to the hardware, bypassing the operating system and the device drivers and reducing latency to a few hundred nanoseconds. (Ethernet, on the other hand, where communication moves through the TCP transport layer, IP network layer, link layer and physical layer, is an order of magnitude slower.) I found that Mazin, equipped with this expertise, was the ideal person to answer my question about how virtualization in cloud computing really affects the performance of our HPC applications. The following is the result of our conversation about HPC and virtualization, under the palms.
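As a rough, do-it-yourself illustration of that gap, the small Python sketch below measures TCP round-trip time over the loopback interface using nothing but the standard library; the port number, message size and round count are arbitrary choices of mine. Even this best case for a kernel-mediated socket path lands in the tens of microseconds on typical hardware, well above the few hundred nanoseconds quoted for kernel-bypass InfiniBand.

```python
# Minimal sketch: time a TCP request/response round trip over loopback to get
# a feel for the cost of going through the kernel network stack.
import socket
import threading
import time

HOST, PORT, ROUNDS = "127.0.0.1", 50007, 10000   # arbitrary illustrative values

def echo_server():
    with socket.socket() as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while True:
                data = conn.recv(64)
                if not data:          # client closed the connection
                    break
                conn.sendall(data)    # echo the payload back

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)                       # give the server a moment to start listening

with socket.socket() as cli:
    cli.connect((HOST, PORT))
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    t0 = time.perf_counter()
    for _ in range(ROUNDS):
        cli.sendall(b"x" * 8)
        cli.recv(64)
    rtt_us = (time.perf_counter() - t0) / ROUNDS * 1e6
    print(f"average TCP round trip over loopback: {rtt_us:.1f} microseconds")
```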

Wolfgang Gentzsch: We hear a lot these days about the additional overhead caused by virtualization. How does virtualization really affect the performance of HPC applications?

Mazin Yousif: To answer this question, we first should look at the role of the VMM (the virtual machine monitor, also called the hypervisor). The VMM sits directly on top of the hardware, abstracting all the hardware resources into virtual resources that get aggregated and launched as virtual machines (VMs), the containers that run the whole software stack. Usually, the VMM also hosts the device drivers for accessing I/O resources, which adds extra overhead to I/O requests.
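To make the VMM/VM relationship concrete, here is a minimal sketch of defining and launching a VM through the libvirt Python bindings on a KVM host. Libvirt/KVM is my choice of stack for illustration (the interview does not name one), the domain name and sizes are hypothetical, and a real definition would also declare disks, network interfaces and other devices.

```python
# Minimal sketch: ask the VMM (here KVM via libvirt) to carve virtual CPUs and
# memory out of the physical host and launch them as a VM. Illustrative only;
# requires the libvirt-python package and a running libvirt daemon.
import libvirt

domain_xml = """
<domain type='kvm'>
  <name>hpc-vm0</name>                  <!-- hypothetical VM name -->
  <memory unit='GiB'>8</memory>         <!-- virtual memory backed by host RAM -->
  <vcpu>4</vcpu>                        <!-- virtual CPUs multiplexed onto cores -->
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the hypervisor
dom = conn.defineXML(domain_xml)        # register the VM definition
dom.create()                            # boot the VM
conn.close()
```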

Gentzsch: Does this mean that the performance of mainly compute-intensive applications wouldn’t be affected by the virtualization?

Yousif: Yes. If compute-intensive applications run completely within the VM, with very few entries into and exits from the VMM, the impact on overall performance is minimal.

Gentzsch: … and I/O-intensive applications?

Yousif: There, the overhead is going to be noticeable, because every I/O request inside the VM causes a jump to the VMM, which invokes the I/O device drivers to reach the physical I/O resource. This usually adds an overhead of a few microseconds per request. In a more realistic HPC scenario with a mix of compute- and I/O-intensive operations, the overhead is somewhere in between.
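A quick back-of-the-envelope calculation shows why the mix matters. The numbers below are illustrative assumptions of mine (a few microseconds per VMM transition, as Mazin indicates, and an assumed I/O request rate), not measurements:

```python
# Back-of-the-envelope sketch of the overhead described above: each I/O
# request that traps into the VMM costs on the order of a few microseconds.
exit_cost_us = 3.0          # assumed cost of one VM exit/entry pair (microseconds)
io_ops_per_sec = 50_000     # assumed I/O request rate of the workload

overhead_fraction = exit_cost_us * 1e-6 * io_ops_per_sec
print(f"time lost to VMM transitions: {overhead_fraction:.1%} of each second")
# With 50,000 I/O requests per second at ~3 us each, roughly 15% of the wall
# clock goes to VMM transitions; a purely compute-bound phase pays close to 0%.
```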

Gentzsch: Could I avoid this overhead at all?

Yousif: Maybe not completely, but in principle, yes. First, VMM vendors could further optimize the VMM, for example by shortening the critical path of an I/O operation within the VMM code. Second, instead of going through the VMM, an I/O device could be assigned directly to a VM, which would eliminate the overhead caused by the VMM. This can be achieved by configuring the VMM and results in much better I/O performance. The disadvantage, however, is that you then need an I/O device for every VM, instead of sharing that device among several VMs as is usually the case.
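On a KVM/libvirt stack, for example, such direct assignment might look like the sketch below, which hands a physical PCI device over to one VM. Libvirt/KVM, the VM name and the PCI address are my own illustrative assumptions; other VMMs expose an equivalent knob.

```python
# Minimal sketch of direct device assignment (PCI passthrough) with the
# libvirt Python bindings: the device bypasses the VMM's I/O path, but it can
# no longer be shared with other VMs. Names and addresses are hypothetical.
import libvirt

hostdev_xml = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("hpc-vm0")      # hypothetical VM
dom.attachDeviceFlags(hostdev_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()
```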

Gentzsch: … but isn’t it better to optimize the rate of completing HPC transactions, rather than focusing on latency alone?

Yousif: Indeed. I see rate as more important than latency alone, since rate involves both bandwidth and latency (= BW/latency). Virtualization impacts not only latency but also bandwidth. As before, a mainly compute-intensive workload that fits in the allocated VM memory will see no degradation in rate compared to running the same workload on physical resources. For a mixed-traffic workload, relying on a directly assigned I/O device helps considerably.
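One simple way to see how both factors feed into the completion rate is to model each transaction as paying a fixed latency plus a bandwidth-limited transfer time. This model and all numbers below are my illustrative assumptions, not figures from the interview:

```python
# Minimal model: time per transaction = latency + message_size / bandwidth,
# so the completion rate suffers when either latency or bandwidth degrades.
def transactions_per_sec(latency_s, bandwidth_Bps, msg_bytes):
    return 1.0 / (latency_s + msg_bytes / bandwidth_Bps)

msg = 1 << 20  # 1 MiB messages (illustrative)

native      = transactions_per_sec(2e-6,  3e9, msg)   # assumed bare-metal numbers
virtualized = transactions_per_sec(10e-6, 2e9, msg)   # assumed added VMM latency, reduced bandwidth

print(f"native:      {native:8.0f} transactions/s")
print(f"virtualized: {virtualized:8.0f} transactions/s")
```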

Gentzsch: When you assign a number of VMs to run an HPC workload, would it be better to keep the environment as is for the duration of the run, or should it be adapted to track changes in the workload's resource requirements?

Yousif: I see it as necessary to adapt the number and configuration of VMs based on the workload's resource requirements, as well as on the service-level agreements the owner of the workload signed with the cloud provider. To track workload changes, the VMM includes provisions to scale the resources assigned to a VM up or down based on that VM's needs. If the elasticity provided by the VMM is not sufficient, then other capabilities such as VMware's Distributed Resource Scheduler along with VMotion can do the trick.
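The kind of elasticity policy Mazin describes can be pictured as a small control loop that grows or shrinks a VM when its utilization crosses thresholds. The sketch below is purely conceptual: the monitoring and resize helpers are hypothetical stand-ins for whatever interface a particular VMM or cloud exposes, and the thresholds are arbitrary.

```python
# Conceptual sketch of an elasticity loop that tracks a VM's resource needs.
import time

SCALE_UP, SCALE_DOWN = 0.85, 0.30   # assumed utilization thresholds

def get_cpu_utilization(vm_id: str) -> float:
    raise NotImplementedError("hypothetical: query the VMM / monitoring stack")

def resize_vm(vm_id: str, vcpus: int) -> None:
    raise NotImplementedError("hypothetical: ask the VMM to hot-add/remove vCPUs")

def elasticity_loop(vm_id: str, vcpus: int, vcpu_min=2, vcpu_max=32, period_s=60):
    while True:
        util = get_cpu_utilization(vm_id)
        if util > SCALE_UP and vcpus < vcpu_max:
            vcpus += 2                      # workload is pressed for CPU: scale up
            resize_vm(vm_id, vcpus)
        elif util < SCALE_DOWN and vcpus > vcpu_min:
            vcpus -= 2                      # workload is idle: give resources back
            resize_vm(vm_id, vcpus)
        time.sleep(period_s)
```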

Gentzsch: So what would I have to do, as an HPC user?

Yousif: If you have a feel for the mix of compute versus I/O intensity in your HPC application, you can decide whether or not to assign an I/O device directly to a VM. If, for example, your working set fits completely into the main memory allocated to a VM, there is obviously no I/O, no page faults, no disk swaps, and thus no overhead.
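That rule of thumb can be written down as a tiny decision helper. The 10 percent I/O-time threshold below is my own assumption, not a figure from the interview:

```python
# Minimal decision sketch: if the working set fits in the VM's memory and the
# I/O share of runtime is small, plain virtual I/O is fine; otherwise consider
# asking for a directly assigned device.
def needs_direct_io_device(working_set_gb: float, vm_memory_gb: float,
                           io_time_fraction: float, threshold: float = 0.10) -> bool:
    if working_set_gb <= vm_memory_gb and io_time_fraction < threshold:
        return False   # mostly compute-bound: VMM overhead is negligible
    return True        # I/O-heavy or paging expected: passthrough pays off

print(needs_direct_io_device(working_set_gb=24, vm_memory_gb=32, io_time_fraction=0.02))  # False
print(needs_direct_io_device(working_set_gb=64, vm_memory_gb=32, io_time_fraction=0.25))  # True
```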

Gentzsch: But that means I need the ability to configure my VMM. I understand that this can be done in my private cloud, but how would I do this in IBM's public cloud, for example?

Yousif: Today you can’t. Public cloud service providers currently do not allow HPC end-users to decide whether to assign an I/O device per VM or to share it among several VMs. If there is a real need for this, the HPC community should request this feature from the public cloud service providers to enable HPC in the public clouds.

Gentzsch: So what would be your conclusion and recommendation?

Yousif: I do not see major obstacles to running HPC workloads in virtualized environments, as there are ways to mitigate the overhead incurred through the VMM. But to cater further to the HPC community, we urge cloud providers to support running IBA in a virtualized environment in their cloud deployments, which they currently do not offer. This could be one of the best choices for the HPC community because, first, IBA is much easier to virtualize than other I/O technologies and, second, it offers much better performance at the same time.

Addendum on Virtualization

When I checked the dictionary to learn the meaning of virtual, here is what I found: "Vir•tu•al (adjective): existing in essence or effect, though not in actual fact." Now, virtual systems are systems that (i) incorporate hardware-level abstraction of physical resources, including processors, memory, chipset, I/O devices and others; and (ii) encapsulate all OS and application state. This is done through the VMM virtualization software, which (i) provides an extra level of indirection and decouples the hardware and the OS; (ii) multiplexes the physical hardware across multiple guest VMs; (iii) provides strong isolation between VMs; and (iv) manages physical resources and improves utilization.

Virtualization provides many benefits, including but not limited to: (i) considerably increasing utilization, from below 15 percent to numbers that can reach 90 percent; (ii) isolation, which allows multiple VMs to run on a single physical host while malware or crashes in one VM do not affect the others; (iii) encapsulation, which captures the entire VM (including OS, applications, data, memory and device state) as a file, making it possible, for example, to take snapshots, clone, back up, capture a VM's state on the fly and restore to a point in time; (iv) a reduced total cost of ownership; and many more.
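As one concrete illustration of the encapsulation point (iii), the sketch below takes a point-in-time snapshot of a running VM through the libvirt Python bindings; libvirt/KVM is only one possible stack, and the domain name is hypothetical.

```python
# Minimal sketch: capture a VM's state as a named snapshot and (optionally)
# roll back to it later. Requires libvirt-python and a running libvirt daemon.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("hpc-vm0")             # hypothetical VM name

snapshot_xml = """
<domainsnapshot>
  <name>before-run</name>
  <description>state captured before the HPC job starts</description>
</domainsnapshot>
"""
snap = dom.snapshotCreateXML(snapshot_xml, 0)  # capture the VM state
# ... run the job, then restore the point-in-time state if needed:
# dom.revertToSnapshot(snap, 0)
conn.close()
```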

In terms of uses, examples include test and development; server consolidation and containment; and enterprise virtual desktops.
