Hybrid Multi-Cloud Enablement — the Next Wave for Enterprise?

By Nicole Hemsoth

April 23, 2010

This week at Cloud Expo, Oracle and Microsoft discussed their cloud offerings, with Microsoft emphasizing public clouds and Oracle steering the discussion toward hybrid clouds.

In his talk at Cloud Expo in New York this week, Oracle President Hal Stern stated, “If you look at every one of the cases that has been held up as a great case of public cloud, they ran for a period of time and then put the resources back. That’s what made them cost effective.” The former Sun Microsystems CTO reminded attendees that the cloud is most useful for dramatic, temporary scaling, not for routine operations moved wholesale out of the data center. In fact, moving general functions such as payroll processing and inventory management into a cloud might end up costing more than running them in one’s own data center.
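
Stern’s argument is, at bottom, arithmetic: owned capacity is paid for around the clock whether or not it is used, while on-demand capacity is paid for only while a workload runs. A minimal sketch of that break-even logic, with entirely hypothetical prices:

```python
# Back-of-the-envelope comparison of always-on vs. pay-per-use capacity.
# Every figure here is invented for illustration.

ONPREM_PER_NODE_HOUR = 0.25  # amortized hardware, power, admin (assumed)
CLOUD_PER_NODE_HOUR = 0.85   # on-demand instance price (assumed)
HOURS_PER_YEAR = 8760

def yearly_costs(busy_hours: int) -> tuple[float, float]:
    """Cost of one node's workload on-prem vs. in the cloud for a year."""
    onprem = ONPREM_PER_NODE_HOUR * HOURS_PER_YEAR  # paid whether used or not
    cloud = CLOUD_PER_NODE_HOUR * busy_hours        # paid only while running
    return onprem, cloud

for busy in (500, 2500, 8760):  # rare bursts ... constant load
    onprem, cloud = yearly_costs(busy)
    winner = "cloud" if cloud < onprem else "on-prem"
    print(f"{busy:>4} busy hrs/yr: on-prem ${onprem:,.0f} vs cloud ${cloud:,.0f} -> {winner}")
```

At 500 busy hours a year the cloud wins easily ($425 vs. $2,190); run the same node flat out and the picture reverses, which is exactly the payroll-and-inventory scenario Stern warns about.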

This confirms something many enterprises know already, if only from hard experience: the cloud is useful only when it’s cost-effective, and sometimes it’s not. Accordingly, the cloud-bursting hybrid model is becoming the primary choice, and it grows even more attractive when it’s possible to move between cloud providers.

Unlike smaller-scale operations, enterprise and scientific computing demand more from the cloud than the occasional ability to push resource-heavy applications out on an as-needed basis. When the cloud-bursting model is implemented, it can often only be accomplished through one of the major players in the public cloud space, such as EC2. Accordingly, demand is growing for cloud-bursting capability on both a seasonal and an on-demand basis. Beyond that, there is demand to extend applications into multiple clouds, depending on which providers offer better functionality or price and which are best suited to particular workloads. In the wake of numerous discussions about the fear of “cloud lock-in,” hybrid multi-cloud solutions are a natural step in the right direction.
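
In its simplest form, cloud bursting is just an overflow rule: keep work on the local cluster while it has headroom and spill the remainder to a public provider. A toy sketch of that rule (the capacity figure and job names are invented):

```python
# Minimal cloud-bursting placement: local first, overflow to the cloud.

LOCAL_CAPACITY = 128  # local cluster slots (assumed)

def place_jobs(pending_jobs: list[str], local_busy: int) -> dict[str, list[str]]:
    """Assign each pending job to 'local' or 'cloud' by a simple overflow rule."""
    placement: dict[str, list[str]] = {"local": [], "cloud": []}
    free = max(0, LOCAL_CAPACITY - local_busy)
    for i, job in enumerate(pending_jobs):
        placement["local" if i < free else "cloud"].append(job)
    return placement

print(place_jobs([f"job-{n}" for n in range(6)], local_busy=125))
# -> three jobs stay local, three burst to the cloud
```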

In an interview yesterday with Gary Tyreman, senior vice president of products and alliances at Univa UD, the discussion hinged on the hybrid multi-cloud environment and how cloud enablers like Univa UD are working to make the migration and policy engines hum along to permit seamless transitions among cloud providers.

Tyreman states, “As an enterprise, if I have multiple target service providers I will want a common management metaphor and capability for any resource across any provider. In a compute-intensive environment I should be using the same technology and platform for all of my applications. I don’t want to or need to add two or three different capabilities — that’s cumbersome for a company. I will want to take advantage of a policy engine to help me maintain a sense of control over what goes into a particular cloud under what terms of conditions.”
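
The “common management metaphor” Tyreman describes is essentially one interface fronting several provider back ends, so the scheduler never changes when the provider does. A sketch of the shape such a layer might take (class and method names are illustrative, not any vendor’s actual API):

```python
# One management interface, several provider back ends (illustrative only).
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    @abstractmethod
    def provision(self, nodes: int) -> list[str]: ...
    @abstractmethod
    def terminate(self, node_ids: list[str]) -> None: ...

class AmazonEC2(CloudProvider):
    def provision(self, nodes):
        return [f"ec2-node-{i}" for i in range(nodes)]  # stand-in for real API calls
    def terminate(self, node_ids):
        print(f"EC2: released {len(node_ids)} nodes")

class Rackspace(CloudProvider):
    def provision(self, nodes):
        return [f"rs-node-{i}" for i in range(nodes)]  # stand-in for real API calls
    def terminate(self, node_ids):
        print(f"Rackspace: released {len(node_ids)} nodes")

def burst(provider: CloudProvider, nodes: int) -> list[str]:
    """The scheduler calls this one function regardless of provider."""
    return provider.provision(nodes)

for backend in (AmazonEC2(), Rackspace()):
    ids = burst(backend, 2)
    backend.terminate(ids)
```

The point of the abstraction is that adding a fourth provider means writing a new subclass, not a new scheduler.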

Why Hybrid Multi-Cloud?

Over the last several months, a number of companies have been forming strategic alliances to make hybrid multi-cloud deployment a swift reality, with minimal hassle, time, and cost. Cloud providers and “cloud enablers” such as Univa UD and RightScale see hybrid clouds as the wave of the future and are scrambling to make the option easily accessible. Just this week, Univa UD announced integration with Rackspace, adding to its existing providers Amazon and GoGrid, and also released news on the data migration front via a partnership with Aspera.

Tyreman notes, “The cloud burst model is the most popular choice and while there’s always been public and private debates, my personal view — is that it’s all about hybrid. It is going to be about taking advantage of different infrastructure for capacity, cost, and for many different reasons but customers need to be able to have the independence and mobility.”

Those drawn to the hybrid model want to maintain control over their data and to make the most of the cloud when they do use it by selecting a provider that can be ready for them on demand. While the choice of provider turns on a host of variables, including capacity, cost, permissions, policy, and functionality, that choice is now genuinely available as more “cloud enablers” jump on board to make the transition possible.

Some organizations have requirements mandating that they designate an alternate cloud provider, but often the need for a multi-cloud environment emerges in the course of general operations. As Tyreman notes, “We’ve been working with the U.S. Space Agency and their hope was to first, take advantage of peak capacity requirements; second to do so only under strict control — meaning that there’s time it will make sense because the application and data have met certain requirements to determine what can go to the cloud and what can’t; and third, they wanted to put the right work in the right location — so if Amazon’s spot instance is at a good price or if Rackspace is better suited to the work, they could have the mobility to move across these providers. This is the hybrid use case and it’s also the multi-cloud case. They can access Rackspace, Amazon, or GoGrid on demand, all driven by policies that allow them to make these decisions in a way that can help them take greatest advantage of the cloud’s offerings and to provide them with more control.”
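
The three requirements Tyreman lists (peak capacity, strict control over what may leave the building, and putting the right work in the right place) reduce to a filter-then-rank policy. A toy version of such an engine (the provider names are real, but the prices and policy rules are invented):

```python
# Toy policy engine: a job goes to the cheapest location among those its
# data-sensitivity policy permits. All prices and rules are hypothetical.

PROVIDERS = {
    "amazon-spot": {"price": 0.12, "approved_for": {"public"}},
    "gogrid":      {"price": 0.25, "approved_for": {"public", "internal"}},
    "rackspace":   {"price": 0.30, "approved_for": {"public", "internal"}},
    "in-house":    {"price": 0.40, "approved_for": {"public", "internal", "restricted"}},
}

def place(job_classification: str) -> str:
    """Pick the cheapest location allowed to hold this job's data."""
    allowed = {name: p for name, p in PROVIDERS.items()
               if job_classification in p["approved_for"]}
    return min(allowed, key=lambda name: allowed[name]["price"])

print(place("public"))      # -> amazon-spot (cheapest, and permitted)
print(place("restricted"))  # -> in-house (policy bars external clouds)
```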

For organizations to take advantage of cloud functionality and improvements across different cloud providers, there needs to be a software layer that automates a common interface over them. In short, companies like Univa UD need to integrate with as many service providers as possible. As Tyreman discusses:

We are cloud enablers. In order to enable hybrid computing we have to consider data and data movement, so whether it’s database applications, transcoding or electronic design, we need to be able to move data in and out efficiently. This is where Aspera comes into the picture; the integration allows us to embed policy and workload management (gauging data in the compute cloud so that if we want to shift to Amazon or GoGrid, this can be done smoothly). If you have bulk data transfers (genome sequencing, for instance), you used to use FedEx or FTP to upload it, but for mission-critical projects there needs to be a quick, seamless way to transfer data. Our decision to partner with Aspera is all about transport—if you want to migrate a live virtual machine, they can do it faster. Ultimately, this helps us innovate and improve performance on a basic capability. As customers deploy hybrid clouds, they need that independence and mobility to shift between providers as their offerings change, improve, or are otherwise abandoned for particular workloads. Integrating with Rackspace is an obvious step in expanding our backend, and Aspera is the key element in getting the data out and moving it between providers. In the end, we have more fine-grained control and can improve performance of the transfer.
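
The transport argument here boils down to parallelism: one serial FTP session leaves bandwidth on the table, while splitting a bulk payload across concurrent streams fills the pipe. A minimal illustration of the idea, with a stand-in send_chunk() in place of any real transfer protocol (this is not Aspera’s API):

```python
# Chunked, concurrent upload sketch; send_chunk() is a hypothetical stand-in.
from concurrent.futures import ThreadPoolExecutor

CHUNK = 64 * 1024 * 1024  # 64 MB per stream (assumed)

def send_chunk(data: bytes, offset: int) -> int:
    # Placeholder for a real transfer call (e.g., an HTTP range upload).
    return len(data)

def parallel_upload(blob: bytes, streams: int = 8) -> int:
    """Split a payload into chunks and push them concurrently."""
    chunks = [(blob[i:i + CHUNK], i) for i in range(0, len(blob), CHUNK)]
    with ThreadPoolExecutor(max_workers=streams) as pool:
        total = sum(pool.map(lambda c: send_chunk(*c), chunks))
    return total

print(parallel_upload(b"x" * (200 * 1024 * 1024)))  # -> 209715200 bytes sent
```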

Extending the Wave: Seasonal Hybrid Multi-Clouds

The hybrid model for enterprise can also be described as “seasonal” when the need for public cloud capacity to absorb temporary gluts in data can be anticipated on a schedule. Using this seasonal-needs approach, enterprises can vastly improve efficiency. Tyreman put it into context with the example of pharmaceutical companies, many of which have complex in-house infrastructure but occasional, anticipated spikes in compute requirements. At mostly predictable points throughout the year, the FDA would rope off part of a firm’s infrastructure, drill through the reports, and request multiple runs, producing demand far beyond typical capacity. If an organization can anticipate these peaks, it can adapt its compute environment to these occasional “cloud bursting” needs without a great deal of effort or time.

As Tyreman states, “In order to take advantage of cloud cluster, there has to be something replicable but it’s not something that will be done daily so every part of the process needs to be automated as much as possible. With cloud cluster, we can provision everything to a service provider and that infrastructure becomes push-button fully configured.” Tyreman goes on to note that in the past, the procurement process could take many months from the initial request, but with ready-made configuration from Univa UD and others, this is no longer the case. Enterprises can now use the cloud only during periods of peak need, and the process is smoother when it is scheduled to some degree, or at least expected.
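
The “push-button fully configured” idea amounts to saving a cluster definition once and replaying it inside known peak windows. A sketch of what that might look like (all names and the schedule are hypothetical):

```python
# Seasonal "replay a saved cluster template" sketch; everything is invented.
from dataclasses import dataclass
import datetime

@dataclass
class ClusterTemplate:
    provider: str
    nodes: int
    image: str  # pre-built machine image carrying the full software stack

def provision_if_due(template: ClusterTemplate, peak_months: set[int]) -> bool:
    """Bring the cluster up only inside scheduled peak windows."""
    if datetime.date.today().month in peak_months:
        print(f"Provisioning {template.nodes} nodes on {template.provider} "
              f"from image {template.image}")
        return True
    return False

fda_review = ClusterTemplate(provider="rackspace", nodes=256, image="pharma-stack-v3")
provision_if_due(fda_review, peak_months={3, 9})  # two anticipated windows a year
```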

Tyreman says that over the coming months we will likely see more additions to the list of Univa UD service providers as the company forms strategic partnerships with other cloud vendors. These partnerships benefit cloud enablers and providers alike. As Tyreman notes, “The thing I like about Rackspace is they have a large enterprise business and the customers that they can expose us to and we can in return is very synergistic.”

It seems that companies like Univa, in partnership with cloud providers, are onto something that could mean big things for enterprise: the ability to choose, on demand, among cloud providers with easy transfer between them, based on whichever best fits the workload at hand.
