Hybrid Multi-Cloud Enablement — the Next Wave for Enterprise?

By Nicole Hemsoth

April 23, 2010

This week at Cloud Expo, Oracle and Microsoft discussed their cloud offerings, with Microsoft emphasizing public clouds and Oracle steering the conversation toward hybrid clouds.

In his talk at Cloud Expo in New York this week, Oracle President Hal Stern stated, “If you look at every one of the cases that has been held up as a great case of public cloud, they ran for a period of time and then put the resources back. That’s what made them cost effective.” The former Sun Microsystems CTO reminded attendees that the cloud pays off for dramatic, temporary scaling needs, not usually for routine operations moved wholesale into the cloud. Moving general operations such as payroll processing or inventory management into a cloud, for example, might end up being more expensive than keeping those functions in one’s own data center.

This confirms something many enterprises already know, often from hard experience: the cloud is useful only when it’s cost-effective, and sometimes it’s not. Accordingly, the cloud-bursting hybrid model is becoming the primary choice, an option that grows even more attractive when it’s possible to move between cloud providers.

Unlike smaller-scale operations, enterprise and scientific computing demand more than the ability to push a few resource-heavy applications into the cloud on an occasional, as-needed basis. When the “cloud bursting” model is implemented today, it can often only be accomplished using one of the major players in the public cloud space, such as EC2. There is thus ever-increasing demand not only for cloud bursting on a seasonal or on-demand basis, but also for the ability to extend applications into multiple clouds, depending on which clouds are designated for which applications, which providers offer better functionality or price, and which are assigned to certain workloads. In the wake of numerous discussions about the fear of “cloud lock-in,” this provision of hybrid multi-cloud solutions is a natural step in the right direction.

In an interview yesterday, Gary Tyreman, senior vice president of products and alliances at Univa UD, discussed the hybrid multi-cloud environment and how cloud enablers like Univa UD and others are working to make the migration and policy engines hum along to permit seamless transitions among cloud providers.

Tyreman states, “As an enterprise, if I have multiple target service providers I will want a common management metaphor and capability for any resource across any provider. In a compute-intensive environment I should be using the same technology and platform for all of my applications. I don’t want to or need to add two or three different capabilities — that’s cumbersome for a company. I will want to take advantage of a policy engine to help me maintain a sense of control over what goes into a particular cloud under what terms and conditions.”
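The policy engine Tyreman describes can be pictured as a set of rules evaluated against each workload before placement. The sketch below is a minimal illustration of that idea; the providers, rule set, and workload attributes are assumptions made for this example, not Univa UD’s actual product or API.

```python
# Hypothetical sketch of a placement policy engine in the spirit Tyreman
# describes: rules decide which providers a workload may be sent to and
# under what conditions. All names and prices here are illustrative.

from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    data_sensitivity: str      # e.g. "public", "internal", "regulated"
    cores_needed: int
    max_hourly_budget: float

@dataclass
class Provider:
    name: str
    price_per_core_hour: float
    approved_for: set = field(default_factory=set)  # allowed sensitivity levels

def eligible_providers(workload, providers):
    """Apply policy rules; return the providers allowed for this workload."""
    allowed = []
    for p in providers:
        # Rule 1: the provider must be approved for the data's sensitivity.
        if workload.data_sensitivity not in p.approved_for:
            continue
        # Rule 2: projected hourly cost must stay within budget.
        if p.price_per_core_hour * workload.cores_needed > workload.max_hourly_budget:
            continue
        allowed.append(p)
    # Prefer the cheapest provider that satisfies every rule.
    return sorted(allowed, key=lambda p: p.price_per_core_hour)

providers = [
    Provider("ec2", 0.085, {"public", "internal"}),
    Provider("rackspace", 0.090, {"public", "internal", "regulated"}),
    Provider("gogrid", 0.095, {"public"}),
]
job = Workload("risk-simulation", "internal", 64, 6.50)
print([p.name for p in eligible_providers(job, providers)])  # ['ec2', 'rackspace']
```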

Why Hybrid Multi-Cloud?

Over the last several months, a number of companies have been forming strategic alliances to make hybrid multi-cloud deployment a swift reality, with minimal hassle, fear, time, and cost. Cloud providers and “cloud enablers” like Univa UD and RightScale see hybrid clouds as the wave of the future and are scrambling to make the option easily accessible. Just this week, Univa UD announced integration with Rackspace, adding to its existing providers Amazon and GoGrid, and also announced a partnership with Aspera on the data migration front.

Tyreman notes, “The cloud burst model is the most popular choice and while there have always been public versus private debates, my personal view is that it’s all about hybrid. It is going to be about taking advantage of different infrastructure for capacity, cost, and many different reasons, but customers need to be able to have the independence and mobility.”

Those drawn to the hybrid model want to maintain control over their data and get the most out of the cloud when they do use it by selecting a provider that can be ready for them on demand. While the choice of provider is governed by a host of variables, including capacity, cost, permissions, policy, and functionality, that choice is now a real one as more “cloud enablers” jump on board to make the transition possible.

Some organizations have requirements mandating that they designate an alternate cloud provider, but often the need for a multi-cloud environment emerges in the course of general operations. As Tyreman notes, “We’ve been working with the U.S. Space Agency and their hope was to first, take advantage of peak capacity requirements; second, to do so only under strict control — meaning that there are times it will make sense because the application and data have met certain requirements to determine what can go to the cloud and what can’t; and third, they wanted to put the right work in the right location — so if Amazon’s spot instance is at a good price or if Rackspace is better suited to the work, they could have the mobility to move across these providers. This is the hybrid use case and it’s also the multi-cloud case. They can access Rackspace, Amazon, or GoGrid on demand, all driven by policies that allow them to make these decisions in a way that can help them take greatest advantage of the cloud’s offerings and provide them with more control.”
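The spot-price mobility in that use case comes down to a simple comparison at dispatch time: chase the cheaper spot market only when the work can tolerate interruption. The following sketch is purely illustrative; the prices and the get_spot_quote helper are hypothetical stand-ins, not a real Amazon API.

```python
# Illustrative only: choose between a spot-priced provider and a
# fixed-price one for a burst. Prices are made up for the example.

FIXED_PRICE_PER_CORE_HOUR = {"rackspace": 0.090, "gogrid": 0.095}

def get_spot_quote() -> float:
    """Stand-in for querying Amazon's spot market; returns $/core-hour."""
    return 0.041  # hypothetical current quote

def pick_burst_target(interruption_tolerant: bool) -> str:
    spot = get_spot_quote()
    cheapest_fixed = min(FIXED_PRICE_PER_CORE_HOUR, key=FIXED_PRICE_PER_CORE_HOUR.get)
    # Spot capacity can be reclaimed at any time, so only
    # interruption-tolerant work is allowed to chase the lower price.
    if interruption_tolerant and spot < FIXED_PRICE_PER_CORE_HOUR[cheapest_fixed]:
        return "amazon-spot"
    return cheapest_fixed

print(pick_burst_target(interruption_tolerant=True))   # amazon-spot
print(pick_burst_target(interruption_tolerant=False))  # rackspace
```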

For organizations to take advantage of cloud functionality and improvements across different cloud providers, there needs to be a software layer that automates a common interface across them. In short, companies like Univa UD and others need to integrate with as many service providers as possible; a minimal sketch of such a commonality layer follows Tyreman’s comments below. As Tyreman discusses:

We are cloud enablers. In order to enable hybrid computing we have to consider data and data movement, so whether it’s database applications, transcoding, or electronic design, we need to be able to move data in and out efficiently. This is where Aspera comes into the picture; the integration allows us to embed policy and workload management (staging data in the compute cloud so that if we want to shift to Amazon or GoGrid this can be done smoothly). If you have bulk data transfers (genome sequencing, for instance) you used to use FedEx or FTP to upload it, but for mission-critical projects, there needs to be a quick, seamless way to transfer data. Our decision to partner with Aspera is all about transport — if you want to migrate a live virtual machine, they can do it faster. Ultimately, this helps us innovate and improve performance on a basic capability. As customers deploy hybrid clouds, they need that independence and mobility to shift between providers as their offerings change, improve, or are otherwise abandoned for particular workloads. Integrating with Rackspace is an obvious step in expanding our backend and Aspera is the key element in getting the data out and moving it between providers. In the end, we have more fine-grained control and can improve performance of the transfer.
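In software terms, the commonality layer described above is an adapter pattern: one interface that the management tooling programs against, a backend per provider, and the transport (Aspera’s role here) swappable underneath. Below is a minimal sketch under those assumptions; the class and method names are hypothetical, not Univa UD’s API.

```python
# A minimal sketch of a provider-commonality layer: one interface, many
# backends. Names are assumptions made for illustration.

from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Uniform interface the management layer programs against."""

    @abstractmethod
    def provision(self, nodes: int) -> str: ...

    @abstractmethod
    def upload(self, local_path: str, remote_path: str) -> None: ...

class RackspaceProvider(CloudProvider):
    def provision(self, nodes: int) -> str:
        return f"rackspace-cluster-{nodes}"
    def upload(self, local_path: str, remote_path: str) -> None:
        # A high-speed transport (Aspera's fasp, say) would slot in here
        # in place of plain FTP/HTTP for bulk data like genome sequences.
        print(f"fast transfer {local_path} -> rackspace:{remote_path}")

class EC2Provider(CloudProvider):
    def provision(self, nodes: int) -> str:
        return f"ec2-cluster-{nodes}"
    def upload(self, local_path: str, remote_path: str) -> None:
        print(f"fast transfer {local_path} -> s3:{remote_path}")

def burst(provider: CloudProvider, nodes: int, dataset: str) -> str:
    """The calling code never changes when the backend provider does."""
    cluster = provider.provision(nodes)
    provider.upload(dataset, f"{cluster}/input/")
    return cluster

burst(RackspaceProvider(), 32, "/data/genomes/run42.fastq")
```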

Extending the Wave: Seasonal Hybrid Multi-Clouds

The hybrid model for enterprise can also be described as “seasonal” when the need for public cloud capacity to absorb temporary gluts can be scheduled in advance, an approach that lets enterprises vastly improve efficiency. Tyreman put it into context with the example of pharmaceutical companies, many of which have complex in-house infrastructure but occasional, anticipated spikes in compute needs. At mostly predictable points throughout the year, the FDA would rope off part of a firm’s infrastructure, drill through the reports, and request multiple runs, producing demand well beyond typical capacity. If an organization can anticipate these peaks, it can adapt its compute environment to these occasional “cloud bursting” needs without a great degree of effort or time.

As Tyreman states, “In order to take advantage of cloud cluster, there has to be something replicable, but it’s not something that will be done daily, so every part of the process needs to be automated as much as possible. With cloud cluster, we can provision everything to a service provider and that infrastructure becomes push-button, fully configured.” Tyreman goes on to note that in the past, procurement could take many months from the time a resource was requested, but with advancements in this ready-made configuration from Univa UD and others, this is no longer the case. Enterprises can now take advantage of the cloud only during times of peak need, and the process is smoother when it is scheduled to some degree, or at least expected.
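Such push-button, scheduled bursting can be reduced to a small reconciliation check run daily: provision the pre-configured cluster when a known peak window opens, tear it down when it closes. The window dates and function names below are hypothetical, a sketch of the pattern rather than any vendor’s implementation.

```python
# Sketch of "seasonal" bursting: stand up a pre-built cluster ahead of a
# known peak (an FDA review window, say), then release the resources.
# Dates and names are illustrative assumptions.

import datetime

SCHEDULED_PEAKS = [  # demand windows known in advance
    (datetime.date(2010, 6, 1), datetime.date(2010, 6, 14)),
    (datetime.date(2010, 11, 1), datetime.date(2010, 11, 21)),
]

def in_peak_window(today: datetime.date) -> bool:
    return any(start <= today <= end for start, end in SCHEDULED_PEAKS)

def reconcile(today: datetime.date, cluster_active: bool) -> str:
    """Run daily (e.g. from cron): burst before a peak, release after."""
    if in_peak_window(today) and not cluster_active:
        return "provision"   # push-button: the image is pre-configured
    if not in_peak_window(today) and cluster_active:
        return "teardown"    # give the resources back, stop paying
    return "no-op"

print(reconcile(datetime.date(2010, 6, 3), cluster_active=False))  # provision
```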

Tyreman says that over the coming months we will likely see more additions to the list of Univa UD service providers as the company forms strategic partnerships with other cloud vendors. These partnerships are positive for cloud enablers and providers alike. As Tyreman notes, “The thing I like about Rackspace is they have a large enterprise business, and the customers that they can expose us to, and we can in return, is very synergistic.”

It seems that companies like Univa, in partnership with cloud providers, are onto something that could mean big things for the enterprise: the on-demand ability to choose among cloud providers, with easy transfer between them, based on which can make the best use of resources.
