Virtualization is Not Cloud…But Does Make It Shine

By Miha Ahronovitz

August 31, 2010

The reason clouds came into being, and why their functionality created such high demand, is that most large IT shops were asking themselves: “Why does Google make all the money, why does Yahoo make so much, and why don’t we? Can’t we have that same model in-house?”

After all, Google might have just JBOD (Just a Bunch of Datacentres), so what magic did they use to deliver, from behind an opaque wall of abstraction, transparent service and elasticity?

To get to the heart of those questions, Jonathan Lampe spends some time discussing the concept of elasticity. If you ask me, this is the single most important feature of a cloud and part of what separates it so distinctly from the grid. Grids did not have elasticity; all a grid did was implement policies for sharing a limited number of resources in a fair way.

This means that if I am a small-dog user, I am thrown out of the execution space into a holding queue every time a big-dog user needs the resources. It also means the quality of service on a grid is awful. It is like taking a shower in an old hotel, where the water is either boiling hot or freezing cold, depending on how many other guests are showering in their rooms at the same time.
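The article itself has no code, but the grid behavior described above, a fixed pool where a small job is preempted the moment a bigger one shows up, can be sketched in a few lines of Python. The slot count, job names and priorities below are made-up illustrations, not any real scheduler’s API:

```python
CLUSTER_SLOTS = 4  # a grid shares a fixed, inelastic pool of resources

def submit(job, running, holding):
    """Grid-style policy: preempt the lowest-priority job when the pool is full."""
    if len(running) < CLUSTER_SLOTS:
        running.append(job)
        return
    victim = min(running, key=lambda j: j["priority"])
    if job["priority"] > victim["priority"]:
        running.remove(victim)
        holding.append(victim)   # the small-dog job is thrown back into the queue
        running.append(job)
    else:
        holding.append(job)

# Made-up workload: small jobs fill the pool, then a big-dog job arrives.
running, holding = [], []
for i in range(CLUSTER_SLOTS):
    submit({"name": f"small-{i}", "priority": 1}, running, holding)
submit({"name": "big-dog", "priority": 10}, running, holding)
print("running:", [j["name"] for j in running])
print("holding queue:", [j["name"] for j in holding])
```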

Let’s revisit Jonathan Lampe’s view on this in the context of elasticity, when he states: “‘Elastic architecture’ is a concept you will read about more frequently as time goes on. It refers to computer architecture designed such that applications with different roles in different tiers of an application can each intelligently (and elastically) scale up or down to meet processing requirements.”
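Purely as an illustration (none of this comes from Lampe’s post), here is what an elastic tier looks like in miniature: instead of preempting anyone, the pool itself grows and shrinks with the load. The thresholds and the provision/release helpers are hypothetical placeholders:

```python
import random

# Hypothetical thresholds; a real system would tune these against measured load.
SCALE_OUT_AT, SCALE_IN_AT = 0.80, 0.30
MIN_SERVERS, MAX_SERVERS = 2, 20

def provision_server():
    """Stands in for an API call that creates a new server instance."""
    return {"id": random.randint(1000, 9999)}

def release_server(server):
    """Stands in for an API call that shuts a server down."""
    print(f"released server {server['id']}")

def autoscale_step(pool, average_utilization):
    """One elastic decision: grow, shrink, or leave the pool alone."""
    if average_utilization > SCALE_OUT_AT and len(pool) < MAX_SERVERS:
        pool.append(provision_server())
    elif average_utilization < SCALE_IN_AT and len(pool) > MIN_SERVERS:
        release_server(pool.pop())
    return pool

# A demand spike followed by a quiet period: the tier stretches, then shrinks back.
pool = [provision_server() for _ in range(MIN_SERVERS)]
for load in (0.90, 0.95, 0.85, 0.40, 0.20, 0.10):
    pool = autoscale_step(pool, load)
    print(f"load={load:.2f} -> {len(pool)} servers")
```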
Jonathan uses as a reference this 2007 blog post, which looks back at the launch of EC2:

A few months ago Amazon announced their new web service called EC2, which stands for Elastic Compute Cloud. The idea is pretty simple and powerful. You use an API call to “create” a server and install your software on it. Everything works like a real server, and if you need more power, you call the API again and request another server. If you no longer need the extra power, you shut the extra servers down (with an API call).  You only pay for the actual time you used each “created” server.  Amazon did not invent the concept, but they did make it trivial to use, and with their reputation on the line, they are committed to make it a reliable and competitive platform.

Elastic computing is the result of recent improvements made in the area of virtualization, which is the execution of multiple operating system entities on a single hardware. Imagine your desktop at home running Windows at the same time it is running Linux. Desktop virtualization is done in the form of one operating system hosting another (something Mac users are very familiar with, running Windows inside OS X). Server virtualization is done by running a light virtualization operating system (usually Linux-based) which does not provide any other functionality besides hosting other platforms. Virtualization has reached a certain maturity lately thanks to significant improvements in hardware, mostly in built-in CPU support for sharing the same hardware between multiple operating systems.
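The workflow the quote describes, an API call to create a server, another call to shut it down, and a bill only for the time in between, maps directly onto the EC2 SDKs available today. Here is a minimal sketch using Python’s boto3 library (my choice of SDK, not the blog’s; the AMI ID is a placeholder and AWS credentials are assumed to be configured):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "Create" a server with an API call; the image ID below is only a placeholder.
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",     # placeholder machine image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# ... install software and run the workload ...

# When the extra power is no longer needed, shut the server down with another
# call; you stop paying once the instance terminates.
ec2.terminate_instances(InstanceIds=[instance_id])
```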

This is the most lucid, simple account of what EC2 is. Amazon already operated the largest online store on earth, so elasticity was a big issue for them before it was an issue for everybody else. It was business first. But the genius of Jeff Bezos turned this into an exercise in lateral thinking: “Gee, if we built this for ourselves, why not sell the cycles the way we already sell books or TVs?” And it was this superb execution that led to Amazon EC2.

Virtualization was never an end in itself, just a means. It simply happened to be handy. In the beginning, virtualization was too weak for production use. This is when VMware and Xen opened their eyes and exclaimed, “Wow, the cloud needs us!”

And so we can conclude that the cloud business model does not necessarily need virtualization. Virtualization was an opportunistic tool. The software virtualization companies must continue to make themselves needed through constant improvement. The cloud needs elasticity, and virtualization is part of the game for now.

We have an analyst-supplied historical log of virtualization predictions covering 2006 to 2010. In 2006 no one talked about clouds, only about “the IT consolidation market.” Gartner predicted that 50 percent of workloads would be virtualized by 2010, but also that “60% of virtualized servers will be less secure than the physical servers they replace.” Supposedly, the cloud infrastructure will have to compensate for this basic flaw that server virtualization has by definition.

No wonder TechTarget’s number one prediction for virtualization in 2010 is disaster recovery (DR).

“Although virtualization provides a backup of sorts, it is not a foolproof method. If one virtual server goes down, it can take hundreds of virtual machines (VMs) with it — bringing enterprise operations to a screeching halt. Having a solid DR plan in place and examining each aspect will make all the difference.”

The DR function is part of many cloud implementations. This is why software virtualization in the cloud needs assistance from the cloud itself.

What about hardware-based virtualization? The newest player is Intel, which offers very fast virtualization extensions in its processors. The military has already been using Intel CPUs with embedded virtualization since 2009.

The advances being made today by CPU makers and hypervisor developers are helping to define the way for future virtualization platforms. New CPU extensions are not only helping to meet the high-performance requirements of future systems, but they’re also making it easier to implement and support legacy operating systems.

In years to come, implementations such as VT-x and VT-d will play an increasingly important role in virtualized systems as industry adopts these types of implementations as effective hardware assistance standards for future CPU architectures.
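On a Linux host you can already see whether that hardware assistance is present: Intel VT-x shows up as the vmx flag in /proc/cpuinfo, and AMD-V as svm. A small check, assuming an x86 Linux machine:

```python
def hardware_virt_support(cpuinfo_path="/proc/cpuinfo"):
    """Report which hardware virtualization extensions the CPU advertises."""
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return {"Intel VT-x": "vmx" in flags, "AMD-V": "svm" in flags}

if __name__ == "__main__":
    for name, present in hardware_virt_support().items():
        print(f"{name}: {'yes' if present else 'no'}")
```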

The original paper on Intel hardware virtualization was published in 2006. The most recent news on Intel virtualization is summarized here.

The Intel® Xeon® Processor 5000 Sequence has this virtualization technology in place. Here are the benefits, as specified by Intel:

• Enables more operating systems and software to run in today’s virtual environments.

• Developed with virtualization software providers to enable greater functionality and compatibility compared to non-hardware-assisted virtual environments.

• Get the performance and headroom to improve the average virtualization performance over previous generations of two-processor servers.

We have no official news from software virtualization ISVs on how their future releases will be optimized for Intel processors, yet the results will be spectacular for both enterprise and home users.

AMD Virtualization (AMD-V™) Technology is also described on AMD’s website.

On August 7, 2010, an article from Federal Circle made the following predictions for virtualization in 2010:

Hardware advancements will simplify and help increase penetration of virtualization. I/O Virtualization and direct device access will be focus areas for this year and specific hardware enhancements will remove storage and network bottlenecks. This will allow increased VM (virtual machine) density and better performance. The improvements will enable virtualization of critical workloads without compromising performance. This would enhance utilization and ensure increased RoI (return on investment) for virtualization investments.

The reason 3PAR is the object of an intense bidding war between Hewlett-Packard and Dell is elastic storage: virtual and scalable in a cloud. They are the first whose combination of storage hardware and software works these wonders.

A laptop from HP or Dell will need only a minimal amount of extremely fast flash storage. Large, heavy internal hard drives will be a thing of the past. Every user can have any storage capacity, virtual and scalable, in the HP or Dell storage clouds based on 3PAR. HP and Dell can update all the software on the laptop directly and make connections to any other cloud.

Once married with hardware, software virtualization will make itself part of the cloud’s building structures, paving the road to science-fiction-like technological products.
