Virtualization is Not Cloud…But Does Make It Shine

By Miha Ahronovitz

August 31, 2010

The reason clouds came into being, and why their functionality created such high demand, is that most large IT shops were asking themselves: “Why does Google make all the money; why does Yahoo make so much; and why don’t we? Can’t we have that same model in-house?”

After all, Google might have only JBOD (Just a Bunch of Datacenters), so what magic did they use to deliver to us, from behind an opaque wall that hides all the underlying complexity, a transparent, elastic service?

To get to the heart of those questions, Jonathan Lampe spends some time discussing the concept of elasticity. If you ask me, this is the single most important feature of a cloud and part of what separates it so distinctly from the grid. The grid did not have elasticity; all a grid did was implement policies for sharing a limited number of resources fairly.

This means that if I am a small-dog user, I am thrown out of the execution space into a holding queue every time a big-dog user needs the resources. It also means the quality of service on a grid is awful. It is like taking a shower in an old hotel, where the water is either boiling hot or freezing cold depending on how many other guests are showering in their rooms at the same time.

Let’s revisit Jonathan Lampe’s view on this in the context of elasticity. He states: “‘Elastic architecture’ is a concept you will read about more frequently as time goes on. It refers to computer architecture designed such that applications with different roles in different tiers of an application can each intelligently (and elastically) scale up or down to meet processing requirements.”
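To make the idea concrete, here is a minimal sketch (my own illustration, not taken from Lampe’s article) of the scale-up/scale-down decision an elastic tier might make; the target utilization and the server limits are hypothetical parameters:

import math

def desired_servers(current_servers, avg_utilization,
                    target_utilization=0.6, min_servers=1, max_servers=20):
    # avg_utilization is the mean load (0.0-1.0) across the current servers.
    # The tier grows when it runs hot and shrinks when it runs cold,
    # always staying within the configured bounds.
    needed = math.ceil(current_servers * avg_utilization / target_utilization)
    return max(min_servers, min(max_servers, needed))

# Example: 4 servers running at 90% load -> grow to 6.
print(desired_servers(4, 0.90))   # 6
# Example: 10 servers running at 12% load -> shrink to 2.
print(desired_servers(10, 0.12))  # 2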
As a reference, Jonathan points to this 2007 blog post, which looks back at Amazon’s EC2 announcement:

A few months ago Amazon announced their new web service called EC2, which stands for Elastic Compute Cloud. The idea is pretty simple and powerful. You use an API call to “create” a server and install your software on it. Everything works like a real server, and if you need more power, you call the API again and request another server. If you no longer need the extra power, you shut the extra servers down (with an API call).  You only pay for the actual time you used each “created” server.  Amazon did not invent the concept, but they did make it trivial to use, and with their reputation on the line, they are committed to make it a reliable and competitive platform.

Elastic computing is the result of recent improvements made in the area of virtualization, which is the execution of multiple operating system entities on a single hardware. Imagine your desktop at home running Windows at the same time it is running Linux. Desktop virtualization is done in the form of one operating system hosting another (something Mac users are very familiar with, running Windows inside OS X). Server virtualization is done by running a light virtualization operating system (usually Linux-based) which does not provide any other functionality besides hosting other platforms. Virtualization has reached a certain maturity lately thanks to significant improvements in hardware, mostly in built-in CPU support for sharing the same hardware between multiple operating systems.
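The create-a-server, shut-it-down model the blog describes is exactly the elasticity in question. As a rough illustration (my own sketch, not part of the quoted post), here is what those API calls look like with the boto3 Python SDK, which did not exist when the post was written; the AMI ID and instance type are placeholders, and real use requires AWS credentials:

import time
import boto3  # AWS SDK for Python

ec2 = boto3.resource("ec2", region_name="us-east-1")

# "Create" a server with one API call (the AMI ID below is a placeholder).
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
server = instances[0]
server.wait_until_running()
print("running:", server.id)

# ... install and run your software on the instance here ...
time.sleep(60)

# When the extra power is no longer needed, shut it down with another call.
# You pay only for the time the instance actually ran.
server.terminate()
server.wait_until_terminated()
print("terminated:", server.id)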

That 2007 post is the most lucid, simple story of what EC2 is. Amazon already operates the largest online store on earth, so elasticity was a big issue for them before it was an issue for everybody else. It was business first. But the genius of Jeff Bezos turned this into an exercise in lateral thinking: “Gee, if we created this, why not sell the cycles the same way we already sell books or TVs?” And it was this superb execution that led to Amazon EC2.

Virtualization was never an end in itself, just a means; it happened to be handy. At the beginning, virtualization was too weak for production use. This is when VMware and Xen opened their eyes and exclaimed, “Wow, the cloud needs us!”

And so we can conclude that the cloud business model does not necessarily need virtualization. Virtualization was an opportunistic tool. The software virtualization companies must continue to make themselves needed via constant improvement. The cloud needs elasticity, and virtualization is part of the game for now.

We have an analyst-supplied historical log of virtualization predictions from 2006 to 2010. In 2006 no one talked about clouds, only about “the IT consolidation market.” Gartner predicted that 50% of workloads would be virtualized by 2010, but also that “60% of virtualized servers will be less secure than the physical servers they replace.” Supposedly, the cloud infrastructure will have to compensate for this basic flaw that server virtualization has by definition.

No wonder TechTarget’s number one prediction for virtualization in 2010 is disaster recovery (DR).

“Although virtualization provides a backup of sorts, it is not a foolproof method. If one virtual server goes down, it can take hundreds of virtual machines (VMs) with it — bringing enterprise operations to a screeching halt. Having a solid DR plan in place and examining each aspect will make all the difference.”

The DR function is part of many cloud implementations. This is why software virtualization in the cloud needs assistance from the cloud itself.

What about hardware-based virtualization? The newest player is Intel, which offers very fast virtualization extensions in its processors. The military has already been using Intel CPUs with embedded virtualization since 2009.

The advances being made today by CPU makers and hypervisor developers are helping to define the way for future virtualization platforms. New CPU extensions are not only helping to meet the high-performance requirements of future systems, but they’re also making it easier to implement and support legacy operating systems.

In years to come, implementations such as VT-x and VT-d will play an increasingly important role in virtualized systems as industry adopts these types of implementations as effective hardware assistance standards for future CPU architectures.
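As a quick illustration (my own, assuming a Linux machine), you can check whether a CPU exposes these hardware-assist extensions by looking for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo; note that VT-d is a chipset/IOMMU feature and does not appear among these CPU flags:

# Minimal check for hardware virtualization support on Linux.
def cpu_virtualization_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"vmx": "vmx" in flags, "svm": "svm" in flags}
    return {"vmx": False, "svm": False}

if __name__ == "__main__":
    support = cpu_virtualization_flags()
    if support["vmx"]:
        print("Intel VT-x detected")
    elif support["svm"]:
        print("AMD-V detected")
    else:
        print("No hardware virtualization extensions reported")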

The original paper on Intel hardware virtualization was published in 2006. The most recent news on Intel virtualization is summarized here.

The Intel® Xeon® Processor 5000 Sequence has the virtualization technology in place. Here are the benefits, as specified by Intel:

• Enables more operating systems and software to run in today’s virtual environments.

• Developed with virtualization software providers to enable greater functionality and compatibility compared to non-hardware-assisted virtual environments.

• Get the performance and headroom to improve the average virtualization performance over previous generations of two-processor servers.

We have no official news from software virtualization ISVs on how their future releases will be optimized for Intel processors, yet the results will be spectacular for both enterprise and home users.

AMD Virtualization (AMD-V™) technology is also listed on AMD’s website.

On August 7, 2010, an article from Federal Circle made the following predictions for virtualization in 2010:

Hardware advancements will simplify and help increase penetration of virtualization. I/O Virtualization and direct device access will be focus areas for this year and specific hardware enhancements will remove storage and network bottlenecks. This will allow increased VM (virtual machine) density and better performance. The improvements will enable virtualization of critical workloads without compromising performance. This would enhance utilization and ensure increased RoI (return on investment) for virtualization investments.

The reason 3PAR is the object of an intense bidding war between Hewlett-Packard and Dell is elastic storage: virtual and scalable in a cloud. 3PAR is the first whose combination of storage hardware and software works these wonders.

A laptop from HP or Dell will need only minimal but extremely fast flash drives; large, heavy internal hard drives will be a thing of the past. Every user can simply have any storage capacity, virtual and scalable, in HP or Dell storage clouds based on 3PAR. HP and Dell can update all software on the laptop directly and make connections to any other cloud.

Once married with hardware, software virtualization will make itself part of the cloud’s building structures, paving the road to science-fiction-like technological products.
 
