Five Steps to Cloud Computing Nirvana

By Dennis Barker

November 7, 2008

Once you’ve started down the path of virtualization, you’re on the road to nirvana. Or at least cloud computing nirvana, according to rPath. Under the “cloud computing adoption model” the company recently made public, virtualization is the first step for organizations seeking to one day take advantage of cloud computing.

It’s not a big surprise that rPath would focus on this as the first step. The company provides tools for turning applications into virtual appliances, a process it calls application virtualization. Conceptually, however, it makes sense as a starting point for any development team or IT manager sketching out a cloud scenario, and it’s something with which many companies are familiar.

“Virtualization is critical because it allows you to decouple the application from the infrastructure,” said rPath chief strategy officer Billy Marshall during a presentation of the adoption model. “You have to do this to get the benefits of cloud computing, but you have to think about your infrastructure in a way to avoid locking yourself into one hypervisor platform.”

Besides the well-known benefits of virtualization (e.g., hardware consolidation and lower power bills), it can deliver faster application deployment and, if done right, the ability to scale up rapidly to meet business demands. “Once you’ve decoupled app from infrastructure, and it can be seamlessly deployed across any environment, you’ve completely simplified the process of spinning up a new app,” says Jake Sorofman, rPath’s vice president of marketing. “You’re closing the gap between development and production. It usually takes four to six months to deploy a new app; it’s an iterative, manual process because of the variability between application and underlying operating bits. With our virtualization technology, you can compress that process down to days instead of months.”

Because you can run into challenges when decoupling app from infrastructure, rPath says Level 2 should be “cloud experimentation.” The best place to do that, the company says, is Amazon’s Elastic Compute Cloud. “EC2 is a great way to get your feet wet and hands dirty,” Sorofman says. “We recommend companies get experience with EC2 as a way to become familiar with the cloud. The barrier of entry is so low you can afford to try things out.”
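For readers who want a concrete picture of that first experiment, here is a minimal sketch of launching a single instance from a prebuilt appliance image on EC2. It uses today’s boto3 Python SDK rather than the EC2 API tools of the article’s era, and the image ID, region, and tags are placeholders rather than anything rPath-specific.

    # Minimal EC2 experiment: launch one instance from an appliance image.
    # Assumptions: boto3 SDK, AWS credentials already configured, and a
    # placeholder image ID standing in for your virtual-appliance AMI.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-placeholder",    # your appliance image (hypothetical ID)
        InstanceType="t3.micro",      # small and cheap, fine for a first test
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "cloud-experiment"}],
        }],
    )

    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched experimental instance:", instance_id)

Terminating the instance when the test is done (ec2.terminate_instances) keeps the experiment as cheap as the low barrier to entry Sorofman describes.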

This phase is the time to build up knowledge about cloud computing, and Marshall believes it is important to have both the IT department and the line-of-business people involved in parallel. It’s also a time to start gathering metrics, identifying bottlenecks and putting issues on the table. Application architects should be thinking down the line about how to design software for this environment.

The heavy lifting begins with Level 3. This is where you lay the foundation for a scalable application architecture, Sorofman says. “Taking what you’ve learned from experimentation, start deploying real applications in the cloud,” he explains. “Work through provisioning apps on demand. Take your reference architecture and turn it into working processes.” This also is the time to come up with policies and best practices.

“Try to assess demand for an application, because that should determine how much infrastructure you’ll need,” Marshall says. Also, “operationalize an approach that lets you work with any hypervisor.”
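Marshall’s sizing point reduces to simple arithmetic. The sketch below, with hypothetical throughput numbers, shows the back-of-the-envelope calculation he is describing: estimate peak demand, measure what one appliance instance can handle, and add headroom.

    # Back-of-the-envelope capacity sizing (all numbers hypothetical).
    import math

    peak_requests_per_sec = 1200   # estimated peak demand for the application
    requests_per_instance = 150    # measured throughput of one appliance instance
    headroom = 1.25                # 25% safety margin for spikes and failures

    instances_needed = math.ceil(
        peak_requests_per_sec / requests_per_instance * headroom
    )
    print(f"Provision {instances_needed} instances")   # -> Provision 10 instances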

Lifecycle management becomes crucial at this point, Sorofman says, because you want consistency, repeatability and maintainability of your virtualized application images. Otherwise, scalability is not going to work so well. “Adopt a lifecycle management system and become familiar with configuring and maintaining images. You also want a platform that guarantees that image updates can be pushed out to all cloud units simultaneously,” he says.
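rPath sells a platform for this, but the underlying mechanic can be illustrated generically. The following sketch (boto3 again, with hypothetical tags and image IDs, and making no claim to represent rPath’s Lifecycle Management Platform) rolls every running copy of an application onto a new image by launching replacements and retiring the old units.

    # Generic image-update roll: replace all running instances of an app
    # with instances built from a new appliance image.
    # Hypothetical tag names and image ID; error handling omitted for brevity.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    NEW_IMAGE = "ami-new-placeholder"
    APP_TAG = "document-app"

    # 1. Find every running instance of the application.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:app", "Values": [APP_TAG]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    old_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

    # 2. Launch the same number of replacements from the updated image.
    if old_ids:
        ec2.run_instances(
            ImageId=NEW_IMAGE,
            InstanceType="t3.micro",
            MinCount=len(old_ids),
            MaxCount=len(old_ids),
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "app", "Value": APP_TAG}],
            }],
        )

        # 3. Retire the old units (a real rollout would wait for the
        #    replacements to pass health checks before doing this).
        ec2.terminate_instances(InstanceIds=old_ids)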

At Level 4, you really get down to it. This “cloud exploitation” stage means full-scale, broad-based deployment of applications in either an internal or an external cloud. At this point, you’re operating in the cloud as a production environment, Marshall says. This is the time to tweak apps and revisit metrics. One warning from Marshall: “Keep in mind that not all apps are suited to run in the cloud, but being able to take advantage of cloud computing can deliver real ROI, savings in capital expenditures and so on.”

Then, in some undetermined future, following further technological developments, those who have mastered the four levels will be prepared for the “hypercloud” that dynamically shares workload and offers self-service provisioning. “Nirvana,” Marshall calls it. “You’ll have the capability to dynamically select a target environment at runtime based on the needs of the application.” Load-balancing tools will determine the best environment to dispatch operations to, based on available compute power. Applications can automatically be sent to a cloud that isn’t busy at night. Tools will be able to do instant cost comparisons and deploy applications on that basis, too.
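That runtime selection step is easy to sketch even though the surrounding tooling is, by Marshall’s own account, still in the future. Below is a toy dispatcher, with invented prices and capacity figures, that picks a target cloud the way he describes: filter out environments that cannot absorb the workload, then send it to the cheapest of what remains.

    # Toy runtime dispatcher for the "hypercloud" idea (all figures invented).
    clouds = {
        "internal":   {"cost_per_hour": 0.00, "free_capacity": 4},
        "provider-a": {"cost_per_hour": 0.10, "free_capacity": 200},
        "provider-b": {"cost_per_hour": 0.08, "free_capacity": 50},
    }

    def pick_target(clouds, instances_needed):
        # Keep only clouds with enough spare capacity for this workload...
        candidates = {name: c for name, c in clouds.items()
                      if c["free_capacity"] >= instances_needed}
        if not candidates:
            raise RuntimeError("no cloud can absorb this workload right now")
        # ...then dispatch to the cheapest of them.
        return min(candidates, key=lambda name: candidates[name]["cost_per_hour"])

    print(pick_target(clouds, instances_needed=10))   # -> provider-b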

“At this point, clouds will become a commodity,” Sorofman says. “You’ll be able to share workloads across clouds. Applications will move seamlessly. You’ll be able to pour across clouds, take advantage of cost differentials, or move according to business rules. You could also build composite apps that draw on different services from different clouds.” When this will happen “is hard to say,” Sorofman says, “but by the time most companies get to Level 4, the technologies to enable Level 5 could be mature enough.”

Seeding Clouds

The value that rPath brings to cloud computing is “foundational,” Sorofman says, in that the company provides tools to turn applications into virtual appliances that can run in EC2 or any other cloud environment. “We want to make it easy for companies to move apps to an on-demand environment.”

rBuilder simplifies the process of creating application images, he says. “We let you combine your application with any other components you need and just enough operating system to create an application image, an application appliance, that will run optimally on any virtualized infrastructure,” he explains.

rPath’s Lifecycle Management Platform automates configuration and maintenance, backup, and other image-admin tasks. “We provide policy-based definition to make sure images are exactly reproduced,” says Sorofman, “and being able to reproduce multiple images from a single image greatly reduces errors.”

rPath also offers rBuilder Online, which “walks developers through the process of packaging a Linux application as an Amazon Machine Image,” Sorofman says. The related rBuilder Catalog provides a simple interface for launching, managing and retiring EC2 application images.

KnowledgeTree, an rPath customer, provides document management systems for small and medium businesses. Its application was being downloaded about 15,000 times a month, but the company got the notion it could reach more customers if it could sell its software as a service, an on-demand offering. To save time and money, the company wanted to use as much of its current application as possible and avoid extensive recoding. The company says turning its application into a virtual appliance running on Amazon EC2 took much less time than building from scratch would have, and it also saw development and maintenance costs drop by 40 percent. KnowledgeTree says the combination of rPath’s tools and Amazon Web Services allowed it to quickly deliver an application that scales transparently to meet user demand.

Of course, not everyone will have the same results as easily, but rPath and other software developers — including companies with vastly different approaches like 3Tera and Elastra — are paving the on-ramp for applications to run in the cloud. With its adoption model, rPath also is trying to offer a “graduated” approach to attaining the full benefits of cloud computing, with users advancing along with the technology.

“I don’t know if this hypercloud is three years, five years or ten years away, but it’s not going to take a miracle to make it happen,” Marshall says. “It’s going to take people working together.”
