Virtualization for Business Continuity

By Kelly Vizzini, CMO, DataSynapse

June 25, 2007

Organizations today must not only focus on their day-to-day operations but also be prepared, 24/7/365, for any “what if” scenario that might occur. Organizations working with the federal government to provide key data on the general public, assisting during disasters like Rita, Katrina and 9/11, must ensure that the data collected (names, addresses, Social Security numbers, etc.) is protected, guaranteeing that no person is “lost” during a disaster, especially when funds are needed to survive.

While this can be effectively achieved, today’s enterprise faces a new problem. To assure optimal processing and efficiency with no downtime or service delay, the infrastructure running its business applications must be able to shift resources adaptively to a given application when disaster strikes. If a disaster happened this moment, most organizations would experience crippling interruptions, as application servers would need to be manually reconfigured and provisioned to handle the peak in capacity. Companies need a solution that offers capacity on demand, assuring seamless mobility across compute resources for optimal service quality while reducing the staff needed to manage the environment.
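
To make the idea concrete, here is a minimal sketch of such a capacity-on-demand policy loop. All names here (App, rebalance, the server labels) are hypothetical illustrations, not any vendor’s actual API; demand is measured in work units, and each server is assumed to absorb one unit.

```python
# A minimal sketch of policy-driven, on-demand capacity shifting.
# Hypothetical names throughout; not any vendor's actual API.

from dataclasses import dataclass, field

@dataclass
class App:
    name: str
    demand: int                          # work units needing service now
    servers: list = field(default_factory=list)

    @property
    def shortfall(self) -> int:
        return self.demand - len(self.servers)

def rebalance(apps: list, free_pool: list) -> None:
    """Move servers toward applications that are short on capacity."""
    for app in sorted(apps, key=lambda a: a.shortfall, reverse=True):
        while app.shortfall > 0:
            if free_pool:                              # idle capacity first
                app.servers.append(free_pool.pop())
                continue
            donors = [a for a in apps if a.shortfall < 0 and a.servers]
            if not donors:                             # pool exhausted
                return
            donor = min(donors, key=lambda a: a.shortfall)
            app.servers.append(donor.servers.pop())    # borrow surplus

# Example: a disaster quadruples demand on the claims application.
web = App("web", demand=2, servers=["s1", "s2", "s3"])
claims = App("claims", demand=4, servers=["s4"])
rebalance([web, claims], free_pool=["s5", "s6"])
print(len(claims.servers))   # -> 4: claims drew both idle servers and web's spare
```

The point of the sketch is the control model: capacity moves to applications automatically, under policy, rather than through manual reconfiguration of individual servers.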

Does this problem sound familiar? It should, as this scenario, unfortunately, is not an isolated case. More and more companies are striving to move away from time-intensive, manual and error-prone provisioning of resources toward a more dynamic IT infrastructure able to cope with the demands of today’s business environment.

Our world has changed in many ways. Customer demand is increasingly dynamic, and IT must be just as dynamic, able to shift resources and applications adaptively to meet it. Given this growth in demand, and given how heavily business processes depend on their underlying applications, many companies need protection from loss, waste and downtime. As a result, disaster recovery and business continuity strategies have become a large part of IT planning, and achieving high availability through reduced planned and unplanned downtime has become an IT imperative.

However, imagine a world where every processor could back up every other; where processing power was a single pool from which the business could draw on demand; and where the compartmentalization between service levels, unplanned downtime, geographic processing windows and disaster recovery disappeared.

It wasn’t long ago that disaster recovery and business continuity technologies focused mostly on backup and off-site standby. At the time, business processes did not depend on technology to the degree they do now. If access to applications was lost, most business units could revert to manual processes while data was restored from tape or standby hardware and applications were rebuilt and redeployed by hand. Most organizations had neither the need nor the budget for costly business continuity technologies such as long-distance replication and application failover.

While we must plan for recovering from disaster, we must also plan for day-to-day disruptions. Prevalent business strategies such as online trading, online purchasing, customer support and just-in-time inventory are not possible without technology, and are key to maintaining a competitive advantage. In addition, new government regulations mandate advanced levels of protection for businesses of all sizes. Consequently, for more and more business units, functions and applications, even a minimal service interruption has a dramatic financial impact. Manual processes simply are not an option anymore.

In global financial enterprises, M&A activity is just a part of everyday business. Consolidating servers and applications to reduce duplication of effort and contain costs is a priority. As companies grow, IT organizations must leverage existing resources more efficiently and at times rapidly add capacity, often in the form of new applications running across a variety of operating systems. The resulting application and server sprawl is costly in terms of technical, financial and human capital. If these costs can be offset by improvements in business continuity, the exercise is more worthwhile.

Virtualization technology is enjoying a period of explosive growth, and an increasing number of enterprises are becoming virtualization converts. Research firm IDC estimates about 750,000 virtual servers were in operation in 2004, and it expects this to rise to more than 5 million by 2009 — a compound annual growth rate of almost 50 percent. Why the surge of interest? Virtualization as a concept has been around for years, if not decades, but only recently has its potential for business continuity been truly understood.
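
As a quick check of the growth rate implied by the IDC figures cited above (using only the 2004 and 2009 estimates):

```python
# Verify the compound annual growth rate implied by IDC's estimates.
servers_2004 = 750_000
servers_2009 = 5_000_000
years = 2009 - 2004

cagr = (servers_2009 / servers_2004) ** (1 / years) - 1
print(f"{cagr:.0%}")   # -> 46%, i.e. "almost 50 percent"
```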

By virtualizing application platforms and services (based on business-driven policies and real-time service levels), it is now possible to centralize the command and control of application deployment and execution, thereby guaranteeing that capacity is available on demand. Virtualization can eliminate downtime and service interruption by providing application failover both locally and remotely, while also enabling organizations to run production applications at hot-sites during non-emergency times. For risk managers, this capability is powerful and compelling: it provides high availability for optimal SLA management, increased operational efficiency and flexibility, and higher application and server utilization, while lowering the cost and reducing the complexity of IT.
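
A minimal sketch of what such failover logic might look like, with hypothetical names, sites and thresholds (not a description of any actual product): a controller probes the active site and, after repeated failed health checks, redeploys the application to a standby site, one that may well be running other production work in non-emergency times.

```python
# A minimal application-failover sketch. All names, sites and thresholds
# are hypothetical illustrations, not any vendor's actual behavior.

import time

FAIL_LIMIT = 3        # consecutive failed probes before failing over
PROBE_INTERVAL = 0.1  # seconds between health checks (short for the demo)

site_up = {"ny-primary": True, "nj-standby": True}   # simulated site state

def healthy(site: str) -> bool:
    # A real probe might open a socket or call an application heartbeat
    # endpoint; here we just consult the simulated state table.
    return site_up[site]

def deploy(app: str, site: str) -> None:
    # Stand-in for provisioning and starting the application at `site`.
    print(f"failing {app} over to {site}")

def supervise(app: str, primary: str, standby: str, cycles: int = 20) -> str:
    failures, active = 0, primary
    for _ in range(cycles):
        if healthy(active):
            failures = 0
        else:
            failures += 1
            if failures >= FAIL_LIMIT and active == primary:
                deploy(app, standby)        # local or remote hot-site
                active, failures = standby, 0
        time.sleep(PROBE_INTERVAL)
    return active

site_up["ny-primary"] = False               # simulate a primary-site outage
print(supervise("trading", "ny-primary", "nj-standby"))   # -> nj-standby
```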

With a potentially worldwide processing pool under the control of business performance policies, a whole new approach to business continuity is possible. A recent Gartner report, which asked CIOs about their biggest datacenter concerns for the year ahead, found “Business Continuity” and “Disaster Recovery” topping the list, followed closely by “Virtualization Directions” and “Technology.” Fittingly, in 2007, it can be argued that these concerns will together give rise to a new approach for dealing with today’s unpredictable marketplace. Even as demand for IT services remains unpredictable, application virtualization points the way to effective business continuity planning by driving dynamic, automatic allocation and optimization of IT resources so that service levels are predictable and consistent.
