Virtualization is Not Cloud…But Does Make It Shine

By Miha Ahronovitz

August 31, 2010

The reason clouds came into being, and why their functionality created such high demand, is that most large IT shops were asking themselves, “Why does Google make all the money; why does Yahoo make so much, and why don’t we? Can’t we have that same model in-house?”

After all, Google might have just a JBOD (Just a Bunch of Datacentres), so what magic do they use to deliver transparent service and elasticity to us from behind an opaque wall of abstraction?

To get to the heart of those questions, Jonathan Lampe spends some time discussing the concept of elasticity. If you ask me, this is the single most important feature of a cloud and part of what separates it so distinctly from the grid. The grids did not have elasticity; all a grid did was implement policies for sharing a limited number of resources in a fair way.

This means that if I am a small-dog user, I am thrown out of the execution space into a holding queue each time a very-big-dog user needs the resources. This also means the quality of service in the grid is awful. It is like taking a shower in an older hotel, where the water is either boiling hot or freezing cold depending on how many guests are showering in their rooms at the same time.
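To make the quality-of-service problem concrete, here is a toy sketch in Python of a priority scheduler with preemption. It is purely illustrative and does not model any particular grid middleware; the class name and job names are invented for the example.

```python
import heapq

class ToyGridScheduler:
    """A toy priority-with-preemption scheduler; illustrative only."""

    def __init__(self, slots):
        self.slots = slots    # the grid's limited pool of execution slots
        self.running = []     # min-heap of (priority, job); lowest priority on top
        self.holding = []     # the holding queue for displaced small dogs

    def submit(self, job, priority):
        if len(self.running) < self.slots:
            heapq.heappush(self.running, (priority, job))
        elif self.running[0][0] < priority:
            # A bigger dog arrived: evict the lowest-priority running job
            # back to the holding queue and take its slot.
            _, evicted = heapq.heappop(self.running)
            self.holding.append(evicted)
            heapq.heappush(self.running, (priority, job))
        else:
            self.holding.append(job)

sched = ToyGridScheduler(slots=2)
sched.submit("small-dog-job-A", priority=1)
sched.submit("small-dog-job-B", priority=1)
sched.submit("big-dog-job", priority=10)   # a small job is bumped mid-flight
print(sched.holding)                       # one of the small jobs is now waiting
```

Sharing is fair in aggregate, but from the small-dog user's point of view the service level swings unpredictably, which is exactly the hotel-shower experience described above.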

Let’s revisit Jonathan Lampe’s view on this in the context of elasticity, when he states: “‘Elastic architecture’ is a concept you will read about more frequently as time goes on. It refers to computer architecture designed such that applications with different roles in different tiers of an application can each intelligently (and elastically) scale up or down to meet processing requirements.”

Jonathan uses as a reference this 2007 blog, which looks back, stating:

A few months ago Amazon announced their new web service called EC2, which stands for Elastic Compute Cloud. The idea is pretty simple and powerful. You use an API call to “create” a server and install your software on it. Everything works like a real server, and if you need more power, you call the API again and request another server. If you no longer need the extra power, you shut the extra servers down (with an API call).  You only pay for the actual time you used each “created” server.  Amazon did not invent the concept, but they did make it trivial to use, and with their reputation on the line, they are committed to make it a reliable and competitive platform.

Elastic computing is the result of recent improvements made in the area of virtualization, which is the execution of multiple operating system entities on a single piece of hardware. Imagine your desktop at home running Windows at the same time it is running Linux. Desktop virtualization is done in the form of one operating system hosting another (something Mac users are very familiar with, running Windows inside OS X). Server virtualization is done by running a light virtualization operating system (usually Linux-based) which does not provide any other functionality besides hosting other platforms. Virtualization has reached a certain maturity lately thanks to significant improvements in hardware, mostly in built-in CPU support for sharing the same hardware between multiple operating systems.

This is the most lucid, simple account of what EC2 is. Amazon already operates the largest online store on earth, so elasticity was a big issue for them before it was an issue for everybody else. It was business first. But the genius of Jeff Bezos made this an exercise in lateral thinking: “Gee, if we created this, why not sell the cycles just as we already sell books or TVs?” And it was this superb execution that led to Amazon EC2.
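The quoted workflow really does map onto a handful of API calls. Here is a minimal sketch using the present-day boto3 Python client; the AMI ID, region, and instance type are placeholders for the example, not the specifics Amazon offered in 2007.

```python
# A minimal sketch of the "create a server with an API call" workflow the
# 2007 post describes, written against today's boto3 library. The AMI ID,
# region, and instance type below are placeholders, not recommendations.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "You use an API call to 'create' a server..."
resp = ec2.run_instances(
    ImageId="ami-12345678",      # placeholder image
    InstanceType="t2.micro",     # placeholder size
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]

# "...if you no longer need the extra power, you shut the extra servers down."
ec2.terminate_instances(InstanceIds=[instance_id])
```

Billing by actual usage of each “created” server is what turns this simple create/terminate loop into elasticity.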

Virtualization was never an end in itself, just the means. It so happened that it was handy. At the beginning, virtualization was too weak for production use. This is when VMware and Xen opened their eyes and exclaimed, “Wow, the cloud needs us!”

And so we can conclude that the cloud business model does not necessarily need virtualization. Virtualization was an opportunistic tool. The software virtualization companies must continue to make themselves needed via constant improvement. The cloud needs elasticity, and virtualization is part of the game for now.

We have an analyst-supplied historic log of virtualization predictions, summarized from 2006 to 2010. In 2006 no one talked about clouds, only about “the IT consolidation market.” Gartner predicted that 50% of workloads would be virtualized in 2010, but also that “60% of virtualized servers will be less secure than the physical servers they replace.” Supposedly, the cloud infrastructure will have to compensate for this basic flaw that server virtualization has by definition.

No wonder TechTarget’s number-one prediction for virtualization in 2010 is disaster recovery (DR).

“Although virtualization provides a backup of sorts, it is not a foolproof method. If one virtual server goes down, it can take hundreds of virtual machines (VMs) with it — bringing enterprise operations to a screeching halt. Having a solid DR plan in place and examining each aspect will make all the difference.”

The DR function is part of many cloud implementations. This is why software virtualization in the cloud needs assistance from the cloud itself.

What about hardware-based virtualization? The newest player is Intel, which offers very fast virtualization extensions directly in its processors. The military has already been using Intel CPUs with embedded virtualization since 2009.

The advances being made today by CPU makers and hypervisor developers are helping to define the way for future virtualization platforms. New CPU extensions are not only helping to meet the high-performance requirements of future systems, but they’re also making it easier to implement and support legacy operating systems.

In years to come, implementations such as VT-x and VT-d will play an increasingly important role in virtualized systems as industry adopts these types of implementations as effective hardware assistance standards for future CPU architectures.
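On a Linux host, the presence of these hardware extensions can be checked by looking for the `vmx` (Intel VT-x) or `svm` (AMD-V) flags in /proc/cpuinfo. A small sketch (the function name is just for the example):

```python
def hardware_virt_support(cpuinfo_path="/proc/cpuinfo"):
    """Report which hardware virtualization extension the CPU advertises, if any."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                if "vmx" in flags:
                    return "Intel VT-x"
                if "svm" in flags:
                    return "AMD-V"
                return None
    return None

print(hardware_virt_support() or "no hardware virtualization flags reported")
```

Note that VT-d (directed I/O) is a chipset feature and does not show up as a CPU flag; only the CPU-level extensions are visible this way.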

The original paper on Intel hardware virtualization was published in 2006. The most recent news on Intel virtualization is summarized here.

The Intel® Xeon® Processor 5000 Sequence has the virtualization technology in place. Here are the benefits, as specified by Intel:

• Enables more operating systems and software to run in today’s virtual environments.

• Developed with virtualization software providers to enable greater functionality and compatibility compared to non-hardware-assisted virtual environments.

• Get the performance and headroom to improve the average virtualization performance over previous generations of two-processor servers.

We have no official news from the software virtualization ISVs on how their future releases will be optimized for Intel processors, yet the results will be spectacular for both enterprise and home users.

AMD Virtualization (AMD-V™) Technology is also listed on AMD’s website.

On August 7, 2010, an article from Federal Circle made the following predictions for virtualization in 2010:

Hardware advancements will simplify and help increase penetration of virtualization. I/O Virtualization and direct device access will be focus areas for this year and specific hardware enhancements will remove storage and network bottlenecks. This will allow increased VM (virtual machine) density and better performance. The improvements will enable virtualization of critical workloads without compromising performance. This would enhance utilization and ensure increased RoI (return on investment) for virtualization investments.

The reason 3PAR is the object of an intense bidding war between Hewlett-Packard and Dell is elastic storage, virtual and scalable in a cloud. They are the first whose storage hardware-software combination works these wonders.

A laptop from HP or Dell needs only a minimal but extremely fast flash drive. Large, heavy internal hard drives will be a thing of the past. Every user can simply have any storage capacity, virtual and scalable, in HP or Dell storage clouds based on 3PAR. HP and Dell can directly update all the software on the laptop and make connections to any other cloud.

Once married with hardware, software virtualization will make itself part of the cloud’s building structures, paving the road to science-fiction-like technological products.
