Hybrid Multi-Cloud Enablement — the Next Wave for Enterprise?

By Nicole Hemsoth

April 23, 2010

This week at Cloud Expo, Oracle and Microsoft discussed their cloud offerings, with Microsoft emphasizing public clouds and Oracle steering the conversation toward hybrid clouds.

In his talk at Cloud Expo in New York this week, Oracle President Hal Stern stated, “If you look at every one of the cases that has been held up as a great case of public cloud, they ran for a period of time and then put the resources back. That’s what made them cost effective.” The former Sun Microsystems CTO reminded attendees that the cloud pays off in dramatic, temporary scaling scenarios, not usually when routine operations are moved wholesale into the cloud. Moving general operations such as payroll processing and inventory management into a cloud might end up being more expensive than continuing those functions in one’s own data center.

This confirms something many enterprises already know, often from hard experience: the cloud is useful only when it is cost-effective, and sometimes it is not. Accordingly, the cloud-bursting hybrid model is becoming the primary choice, one that is even more attractive when it is possible to move between cloud providers.

The demands of large-scale enterprise and scientific computing go beyond occasionally pushing a few resource-heavy applications into the cloud on an as-needed basis. As the “cloud bursting” model is implemented today, it is often tied to a single major public cloud provider, such as Amazon EC2. Alongside the growing demand for cloud bursting on a seasonal or on-demand basis, there is additional demand to extend applications into multiple clouds, depending on which clouds are designated for which applications, which providers offer better functionality or price, and which are assigned to certain workloads. In the wake of numerous discussions about the fear of “cloud lock-in,” hybrid multi-cloud solutions are a natural step in the right direction.

In an interview yesterday with Gary Tyreman, senior vice president of products and alliances at Univa UD, the discussion hinged on the hybrid multi-cloud environment and how cloud enablers like Univa UD and others are working to make the migration and policy engines hum along to permit seamless transitions among cloud providers.

Tyreman states, “As an enterprise, if I have multiple target service providers I will want a common management metaphor and capability for any resource across any provider. In a compute-intensive environment I should be using the same technology and platform for all of my applications. I don’t want to or need to add two or three different capabilities — that’s cumbersome for a company. I will want to take advantage of a policy engine to help me maintain a sense of control over what goes into a particular cloud under what terms and conditions.”
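
A policy engine of the kind Tyreman describes reduces to rules that decide whether a given workload may leave the data center for a given provider, and under what terms. The following is a minimal sketch of that idea; the class names, fields, and thresholds are hypothetical illustrations, not a description of Univa UD's actual software.

```python
# Hypothetical placement-policy check; names and thresholds are illustrative only.
from dataclasses import dataclass
from typing import Set

@dataclass
class Workload:
    name: str
    data_classification: str   # e.g. "public", "internal", "restricted"
    core_hours: int

@dataclass
class CloudPolicy:
    provider: str
    allowed_classifications: Set[str]
    max_core_hours: int
    max_price_per_core_hour: float

def may_burst(workload: Workload, policy: CloudPolicy, spot_price: float) -> bool:
    """True only if the workload satisfies the terms set for this provider."""
    return (
        workload.data_classification in policy.allowed_classifications
        and workload.core_hours <= policy.max_core_hours
        and spot_price <= policy.max_price_per_core_hour
    )

ec2_policy = CloudPolicy("amazon-ec2", {"public", "internal"}, 50_000, 0.12)
job = Workload("risk-simulation", "internal", 20_000)
print(may_burst(job, ec2_policy, spot_price=0.08))   # True under these illustrative terms
```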

Why Hybrid Multi-Cloud?

Over the last several months, a number of companies have been forming strategic alliances to make hybrid multi-cloud deployment a swift reality, with a minimum of hassle, fear, time, and cost. Cloud providers and “cloud enablers” such as Univa UD and RightScale see hybrid clouds as the wave of the future and are scrambling to make the option easily accessible. Just this week, Univa UD announced integration with Rackspace, adding to existing providers including Amazon and GoGrid, and also released news on the data migration front through a partnership with Aspera.

Tyreman notes, “The cloud burst model is the most popular choice, and while there have always been public and private debates, my personal view is that it’s all about hybrid. It is going to be about taking advantage of different infrastructure for capacity, cost, and for many different reasons, but customers need to be able to have independence and mobility.”

Those who find the hybrid model appealing want to maintain control over their data and to get the most out of the cloud when they do use it by selecting a provider that can be ready for them on demand. While the choice of provider is governed by a host of variables, including capacity, cost, permissions, policy, and functionality, that choice is now a real one as more “cloud enablers” jump on board to make the transition possible.

Some organizations have requirements stating that they must designate an alternate cloud provider, but often the need for a multi-cloud environment emerges in the course of general operations. As Tyreman notes, “We’ve been working with the U.S. Space Agency and their hope was to first, take advantage of peak capacity requirements; second, to do so only under strict control — meaning that there are times it will make sense because the application and data have met certain requirements that determine what can go to the cloud and what can’t; and third, they wanted to put the right work in the right location — so if Amazon’s spot instance is at a good price or if Rackspace is better suited to the work, they could have the mobility to move across these providers. This is the hybrid use case and it’s also the multi-cloud case. They can access Rackspace, Amazon, or GoGrid on demand, all driven by policies that allow them to make these decisions in a way that takes greatest advantage of the cloud’s offerings and gives them more control.”
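
The placement decision Tyreman describes, routing a job to whichever permitted provider is cheapest or best suited at that moment, can be pictured as a small selection routine. In the sketch below the provider names are real companies, but the prices and suitability flags are invented purely for illustration.

```python
# Hypothetical multi-cloud placement: choose the cheapest permitted provider
# suited to the workload. Prices and suitability sets are illustrative only.
from typing import Optional, Set

providers = {
    "amazon-ec2": {"spot_price": 0.08, "suited_for": {"batch", "hpc"}},
    "rackspace":  {"spot_price": 0.10, "suited_for": {"web", "batch"}},
    "gogrid":     {"spot_price": 0.11, "suited_for": {"batch"}},
}

def place(workload_type: str, permitted: Set[str]) -> Optional[str]:
    """Return the cheapest permitted provider suited to this workload type."""
    candidates = [
        (info["spot_price"], name)
        for name, info in providers.items()
        if name in permitted and workload_type in info["suited_for"]
    ]
    return min(candidates)[1] if candidates else None

# A batch job allowed on all three providers lands on the cheapest one.
print(place("batch", permitted={"amazon-ec2", "rackspace", "gogrid"}))  # amazon-ec2
```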

For organizations to take advantage of cloud functionality and improvements across different cloud providers, there needs to be a software layer that automates a common interface to all of them. In short, companies like Univa UD and others need to integrate with as many service providers as possible. As Tyreman explains:

We are cloud enablers. In order to enable hybrid computing we have to consider data and data movement, so whether it’s database applications, transcoding, or electronic design, we need to be able to move data in and out efficiently. This is where Aspera comes into the picture; the integration allows us to embed policy and workload management (staging data in the compute cloud so that if you want to shift to Amazon or GoGrid, this can be done smoothly). If you have bulk data transfers (genome sequencing, for instance) you used to use FedEx or FTP to upload it, but for mission-critical projects there needs to be a quick, seamless way to transfer data. Our decision to partner with Aspera is all about transport: if you want to migrate a live virtual machine, they can do it faster. Ultimately, this helps us innovate and improve performance on a basic capability. As customers deploy hybrid clouds, they need that independence and mobility to shift between providers as their offerings change, improve, or are otherwise abandoned for particular workloads. Integrating with Rackspace is an obvious step in expanding our backend, and Aspera is the key element in getting the data out and moving it between providers. In the end, we have more fine-grained control and can improve performance of the transfer.
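
The “commonality” Tyreman refers to is essentially one management interface sitting in front of several provider backends. A bare-bones sketch of such a layer might look like the following; the class and method names are invented for illustration and do not reflect Univa UD’s or Aspera’s actual APIs.

```python
# Sketch of a provider-abstraction ("commonality") layer: one interface, many
# backends. Method names and the adapter below are hypothetical illustrations.
from abc import ABC, abstractmethod
from typing import List

class CloudProvider(ABC):
    @abstractmethod
    def provision_nodes(self, count: int, image: str) -> List[str]:
        """Start `count` compute nodes from a machine image."""

    @abstractmethod
    def transfer_data(self, source: str, destination: str) -> None:
        """Move a dataset in or out of this provider (the bulk-transport step)."""

    @abstractmethod
    def release_nodes(self, node_ids: List[str]) -> None:
        """Give burst capacity back when the job finishes."""

class EC2Adapter(CloudProvider):
    """Placeholder backend; a real adapter would call the provider's API."""
    def provision_nodes(self, count, image):
        return [f"ec2-node-{i}" for i in range(count)]
    def transfer_data(self, source, destination):
        print(f"high-speed transfer {source} -> {destination}")
    def release_nodes(self, node_ids):
        print(f"released {len(node_ids)} nodes")

# Orchestration code above this layer never needs to know which backend it holds.
cloud: CloudProvider = EC2Adapter()
cloud.transfer_data("/data/genomes", "cloud://staging/genomes")
```

A Rackspace or GoGrid adapter would implement the same three methods, which is what would let a policy engine move work between providers without changing the workload itself.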

Extending the Wave: Seasonal Hybrid Multi-Clouds

The hybrid model can also be described as “seasonal” when the need for public cloud capacity to handle temporary gluts in demand can be roughly scheduled in advance. With this seasonal approach, enterprises can vastly improve efficiency. Tyreman put it into context using the example of pharmaceutical companies, many of which run complex in-house infrastructure but face occasional, anticipated spikes in compute needs. At mostly predictable points throughout the year, the FDA will rope off part of a firm’s infrastructure, drill through the reports, and request multiple runs, producing demand well beyond typical capacity. If an organization can plan on these peaks, it can adapt its compute environment to these occasional “cloud bursting” needs without a great deal of effort or time.

As Tyreman states, “In order to take advantage of a cloud cluster, there has to be something replicable, but it’s not something that will be done daily, so every part of the process needs to be automated as much as possible. With a cloud cluster, we can provision everything to a service provider and that infrastructure becomes push-button, fully configured.” Tyreman goes on to note that in the past the procurement process could take many months from the time someone requested capacity, but with ready-made configurations from Univa UD and others, this is no longer the case. It is now possible for enterprises to take advantage of the cloud only during times of peak need, and the process is smoother when it is scheduled to some degree, or at least expected.
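
In practice, “push-button” seasonal bursting amounts to provisioning a pre-defined cluster for a scheduled peak window, running the work, and handing the capacity back, which is also what keeps it cost-effective by Stern’s measure. The sketch below illustrates that flow under assumed names; the stub provider and its methods are hypothetical, not Univa UD’s tooling.

```python
# Hypothetical "push-button" seasonal burst: provision for a scheduled peak,
# run the job, and always release the capacity afterwards.
from datetime import date

class StubProvider:
    """Stand-in for a real provider backend; methods only print what would happen."""
    def provision_nodes(self, count, image):
        return [f"node-{i}" for i in range(count)]
    def release_nodes(self, node_ids):
        print(f"released {len(node_ids)} nodes back to the provider")

def seasonal_burst(provider, node_count, image, run_job, peak_start, peak_end):
    """Burst only inside the expected peak window, then give capacity back."""
    if not (peak_start <= date.today() <= peak_end):
        return "outside peak window; running in-house"
    nodes = provider.provision_nodes(node_count, image)
    try:
        run_job(nodes)
    finally:
        provider.release_nodes(nodes)   # returning resources is what keeps bursting cheap
    return f"burst completed on {len(nodes)} nodes"

# Example: a validation run expected during a two-week regulatory review window.
print(seasonal_burst(
    StubProvider(), 128, "analysis-image",
    run_job=lambda nodes: print(f"running validation on {len(nodes)} nodes"),
    peak_start=date(2010, 3, 1), peak_end=date(2010, 3, 15),
))
```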

Tyreman says that over the coming months we will likely see more additions to the list of Univa UD service providers as the company forms strategic partnerships with other cloud vendors. These partnerships are positive for cloud enablers and providers alike. As Tyreman notes, “The thing I like about Rackspace is that they have a large enterprise business; the customers they can expose us to, and that we can expose to them in return, make it very synergistic.”

It seems that companies like Univa UD, in partnership with cloud providers, are onto something that could mean big things for enterprise: the ability to choose among cloud providers on demand, with easy transfer between them, based on which provider can put the resources to best use.
