Virtual Management, Virtual Mess

By Kurt Westerfield, CTO, Managed Objects

March 10, 2008

Despite hand-wringing over the economy, budget forecasts for virtualization tools are on the rise. A recent UBS poll found that in 2008, a majority of CIOs plan to increase spending on virtualization by between 6 percent and 14 percent, in contrast with overall IT budgets that respondents said would increase anywhere from zero to 5 percent. A CIO Insight survey recently went a step further and found that spending on virtualization for servers and storage is expected to grow more than 20 percent in 2008. What’s more, the data isn’t limited to forecasts: at the Fall 2007 Gartner Symposium & ITXpo, an informal session poll of perhaps several hundred audience members found that upward of 60 percent of IT organizations have virtualized servers in production.

What’s Driving Growth?

Server consolidation is often cited as one key driver for virtualization growth. Given IT budget woes, space issues, power consumption and green IT initiatives, consolidation sounds like a logical solution — and virtualization a means to achieve that end. Still, opportunities rarely come without costs and, in one sense, the idea that virtualization leads to consolidation might be misleading.

Forrester analyst Galen Schreck pointed out in a research note last fall that virtual servers “act just like their physical counterparts, except that they exist only in software. Because they exist in software they can be created at the click of a button.” This is cause for concern: while it’s true that virtualization enables physical servers to be consolidated, the speed and ease with which new virtual machines can be created also means they can multiply — like Gremlins splashed with water. In fact, one social networking company indicated it already had 15,000 servers and was growing at 6 percent a week, growth it attributed in part to virtualization.

Despite these concerns, there are many advantages that make virtualization attractive. A well-managed virtualization strategy provides a high degree of flexibility to those managing the infrastructure in areas such as capacity management and disaster recovery. Absent a service-based model of the enterprise, however, virtualization can also prove an IT management nightmare, sprouting components like weeds and buttressing the very silos of data that IT operations endeavors to integrate.

Physical devices such as servers provide a natural and unavoidable system of checks and balances for restraining the growth of IT infrastructure: a budget. New hardware purchases must be justified, which might entail an inventory of existing assets, the manner in which those assets are deployed and the capacity at which they are utilized. That business case, required for physical infrastructure components, may not apply in a virtual environment. As Forrester’s Schreck notes, virtualization enables IT “to build ever more complex systems and applications with a minimum of new effort.”

Virtual Mess

Virtualization might be compared to a desert mirage, one that has confused even some experts. For example, one industry pundit stated that “the layers of abstraction virtualization enables” would help eliminate the “dense spiderwebs” of dependencies between components within the IT infrastructure. The analogy is accurate insofar as the interdependencies are concerned, but the IT infrastructure is not composed of multiple webs. Rather, it is composed of multiple spiders spinning simultaneous changes on a single — and perhaps modular and asymmetrical — web. It is the latter that presents the management challenge.

Why is this problematic? Because enterprises struggle to understand how IT components map to the applications and services they support, which makes problem diagnosis difficult and labor-intensive. Consider the company that finds it takes a 35-person conference call to resolve an IT outage, or the IT operations staffer whose console fills with dozens of severity-1 alerts, leaving him to guess which one should be addressed first. For most, the task is as painful as isolating the lone burned-out bulb on a string of old-fashioned Christmas tree lights. If it seems daunting to isolate the root cause of issues amid 5,000 physical servers, then doing so amid 10,000 virtual machines — which virtualization can provide at the click of a mouse — might be insurmountable.
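
To make the stakes concrete, here is a minimal sketch, in Python, of the kind of service dependency model the article argues is missing. All component and service names are hypothetical; the point is only that walking such a model upward from an alerting component answers the question the operations staffer cannot: which business services does a given severity-1 alert actually threaten?

# Hypothetical service dependency model: each component maps to the
# components or business services that depend on it.
DEPENDS_ON_ME = {
    "vm-web-042":  ["online-ordering"],         # virtual web server
    "vm-db-007":   ["vm-web-042", "billing"],   # virtual database server
    "san-array-3": ["vm-db-007"],               # shared storage behind the VMs
    "esx-host-12": ["vm-web-042", "vm-db-007"], # hypervisor hosting both VMs
}
BUSINESS_SERVICES = {"online-ordering", "billing"}

def impacted_services(component):
    """Business services that directly or indirectly depend on a component."""
    impacted, stack, seen = set(), [component], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        for dependent in DEPENDS_ON_ME.get(node, []):
            if dependent in BUSINESS_SERVICES:
                impacted.add(dependent)
            else:
                stack.append(dependent)
    return impacted

# An alert on the shared storage array threatens both services, while an alert
# on the web VM threatens only online ordering -- a simple basis for deciding
# which of two simultaneous severity-1 alerts to work first.
print(impacted_services("san-array-3"))  # -> billing and online-ordering
print(impacted_services("vm-web-042"))   # -> online-ordering only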

While virtualization will simplify some aspects of IT management, it may also add to the complexity. Therefore, it represents a double-edged sword. As Gartner analyst Milind Govekar said in a Financial Times article, “[I]f you virtualise [sic] a mess you’ll get a bigger mess. The overriding need is to cut complexity first.”

The Risk of Change

Complexity complicates change, so it is easy to see why an easier way to build more complex systems will inevitably lead to more complex changes. This matters because change is both a requirement and a risk that IT organizations must reconcile. It’s a risk because, as market researchers note, upward of 80 percent of downtime is caused by human error stemming from both planned and unplanned IT changes. Yet change is not optional: software will need patches, servers will need upgrades and capacity will need to be reallocated. The advent of virtualization in a distributed environment means these changes will be easier to make and therefore likely to occur more frequently — all the more reason IT must control change.

Some virtualization tools come with management modules that offer limited management capability — ostensibly intended to help IT control change. However, such tools lack service context: an accurate end-to-end model of the IT enterprise and the relationships among its components, applications and services. Without a service-based model delineating how services depend on IT components, IT is flying blind when it makes changes to the infrastructure. This has been the inherent problem with traditional component-based IT management tools for networks, systems and applications, and it seems to have persisted in virtualization management. For example, when IT moves or retires a server — physical or virtual — it generally does not have a good handle on the role that box plays in the infrastructure. What is the impact of making that change?

Consider the case of a virtual box sliced three ways, with two of the three virtual servers allocated to middleware and a Web application, respectively. A database administrator could easily move a database application from a physical server onto that third slice in support of a server consolidation project. However, while the DBA might understand what is currently linked to the database, he or she is unlikely to understand the downstream impact of the move. That consequence may, quite literally, be what keeps CIOs up at night.
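
A brief sketch may help illustrate the gap. Assuming the same kind of hypothetical service model as above, the question the DBA cannot answer by looking at the database alone is which business services would come to share the fate of that one physical box after the move:

# Hypothetical model: which VMs run on the shared host, and which business
# services each VM supports. None of these names come from a real product.
HOSTED_ON = {
    "esx-host-12": ["vm-middleware-01", "vm-webapp-01"],  # the box sliced three ways
}
SERVES = {
    "vm-middleware-01": ["billing"],
    "vm-webapp-01":     ["online-ordering"],
    "vm-database-01":   ["billing", "order-history"],     # the candidate for the move
}

def impact_of_move(vm, target_host):
    """Services that would depend on the target host if the VM moves there."""
    co_tenants = HOSTED_ON.get(target_host, []) + [vm]
    return sorted({svc for tenant in co_tenants for svc in SERVES.get(tenant, [])})

# Before the move, a failure of esx-host-12 touches billing and online ordering;
# after the move it would also take down order history. That list is the answer
# to "what is the impact of making that change?" -- available before the change.
print(impact_of_move("vm-database-01", "esx-host-12"))
# ['billing', 'online-ordering', 'order-history']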

Service Perspective

The good news for virtualization advocates is that the market will not be alone in its growth. AMR Research said in January that IT departments plan to spend upward of 9 percent more on performance improvement technology than in the previous year, against plans for an overall IT spending increase of 5 percent. Though this may seem a contradiction, industry experts explain that “IT departments believe they can deliver almost twice as much bang this year for each new IT buck compared with their colleagues in the wider business.”

These conclusions correspond with market research firm Enterprise Management Associates’ (EMA) assessment that the business service management (BSM) market grew by 50 percent over the last two years and is poised for continued growth. One of the most important trends in aligning IT with the business, BSM dynamically links IT components to the applications that enable business processes. This is a fundamental shift in both the thinking and the method for managing technology infrastructure. Instead of managing IT as individual components, such as routers, servers or applications, BSM views these components collectively according to the business service being delivered. In other words, BSM provides a platform — an end-to-end model — that illustrates the impact of IT on the business.

What’s the connection with virtualization? Performance improvement tools such as BSM, which enable IT departments to manage infrastructure more effectively, are growing in tandem with tools that provide extraordinary flexibility and capacity. IT should plan for both at the same time — that is, IT should plan for virtualization within the service perspective that BSM delivers. Virtual components should be linked to services so that the impact of a change can be understood before the change is implemented in a production environment.
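
As a rough illustration of what understanding the impact of a change before implementing it might look like in practice, the following sketch gates a change request on the service model: if the target is unknown to the model, or if the owner of an impacted service has not signed off, the change is not approved. The function, field names and data are illustrative assumptions, not the API of any particular BSM or virtualization product.

def approve_change(change, service_model, signoffs):
    """Return (approved, impacted_services) for a proposed infrastructure change."""
    impacted = service_model.get(change["target"])
    if impacted is None:
        return False, []  # target unknown to the service model: flying blind, reject
    missing = [svc for svc in impacted if svc not in signoffs]
    return (not missing), impacted

service_model = {"vm-db-007": ["billing", "online-ordering"]}
ok, impacted = approve_change(
    {"target": "vm-db-007", "action": "migrate"},
    service_model,
    signoffs={"billing"},  # the online-ordering owner has not yet signed off
)
print(ok, impacted)  # False ['billing', 'online-ordering']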

Conclusion

The value proposition of virtualization is unprecedented flexibility in capacity, but that flexibility does not come without risk. As with most cutting-edge technologies, both the risks and the advantages have not been fully defined, and some have perhaps yet to be conceived. That risk can be mitigated, however, so long as virtualization implementations are managed from a service perspective that incorporates an accurate end-to-end model of the IT infrastructure. Through this model, virtual components, like their physical counterparts, are linked to the services they support. In essence, BSM provides the mapping and modeling capabilities needed to understand IT and service relationships and dependencies, and thus to control change more effectively — especially in virtualized environments.
