“Intelligent” Cloud Automation Gets Substantial Push From Investors

By Nicole Hemsoth

September 15, 2010

Adaptive Computing, known for its Moab automation technology, announced today that it was one of four companies selected by Intel Capital for a round of Series A funding. The company is set to receive $14 million, combining Intel Capital’s investment with further resources from two other investment firms that saw promise in the company and its nine-year track record of growth and profitability.

Steve Eichenlaub, managing director of Intel Capital, stated that “Adaptive Computing’s solutions are well-positioned to play an important role upgrading enterprise data centers to intelligent self-optimizing cloud environments” and that this is in line with Intel Capital’s view that “intelligent policy management will play a critical role in the next phase of cloud automation.”

Adaptive COO and president Michael Jackson told HPC in the Cloud on Monday that some had wondered why the company made the external funding decision. In his view, “it came down to the fact that we were seeing an inflection in demand for our cloud products and the amount of incoming demand was greater than our organic revenues would allow us to service. We really had the choice of rejecting business, deferring business or accessing capital to build up to service that demand properly.”

Jackson noted that, first and foremost, this “will enable the company to increase headcount and expand operations to meet the growing global demand” from customers with complex management and policy-driven needs, needs he feels go unmet by virtualization-centric or provisioning-related technologies that simply give users basic “yes and no” answers to their provisioning and virtualization requests. Adaptive is seeing a growing number of customers looking for more “intelligent” automation to enable more efficient resource management and, ultimately, greater cost savings, and it seems investors have been watching this demand play out as well.

Placing Value on Intelligent Cloud Management

While there are a number of solutions that promise simple, intuitive, policy-based automation, Jackson argues that their sophistication is often not enough to manage the needs of some of the largest enterprise data centers, particularly in financial services, mega e-commerce and web application providers, telcos and, increasingly, government, all of which together account for a significant majority of Adaptive’s business.

According to Adaptive’s COO, one of the reasons the company was singled out for this round of funding (outside of a clearly stated need to expand) is that there are no other comprehensive “intelligent” cloud management solutions that do what Moab, the core of Adaptive’s business, can do.

As Jackson put it, “Where many others focus just on the mechanism (provisioning and virtualization management type technologies), ours comes in at the decision-making layer. We act as a service governor to manage the space, to manage the decisions that are made in the cloud, and then we connect to a customer’s pre-existing investment in provisioning, virtualization, network, and storage technologies. We help their existing IT become cloud, as opposed to a ‘rip and replace’ that requires them to shift their investments over to different technologies.”
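To make the distinction concrete, here is a minimal sketch in Python of what such a decision-making layer might look like. All names and interfaces below are hypothetical illustrations, not Moab’s actual API: the governor decides where a workload should run, then delegates the mechanics to whatever provisioning or virtualization tooling the site already owns.

```python
# Hypothetical sketch of a "service governor" decision layer.
# Names and interfaces are illustrative, not Moab's actual API.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Workload:
    name: str
    cpus: int
    needs_virtualization: bool


@dataclass
class Platform:
    name: str                               # e.g. "vmware", "kvm", "bare-metal"
    is_virtual: bool
    cost_per_cpu_hour: float
    provision: Callable[[Workload], None]   # the site's pre-existing tooling


def govern(workload: Workload, platforms: List[Platform]) -> str:
    """Pick the cheapest platform that satisfies the workload's policy,
    then hand off to the existing provisioning mechanism (no rip and replace)."""
    candidates = [p for p in platforms
                  if p.is_virtual or not workload.needs_virtualization]
    if not candidates:
        raise RuntimeError(f"no platform satisfies policy for {workload.name}")
    chosen = min(candidates, key=lambda p: p.cost_per_cpu_hour)
    chosen.provision(workload)              # delegate the mechanics
    return chosen.name
```

The point of the sketch is the division of labor: the decision (which platform, at what cost, under which policy) lives in the governor, while the “yes or no” mechanics stay in the tools the customer already bought.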

Looking at what might set Adaptive apart is a challenging task given the number of vendors competing for share in the cloud management free-for-all, which has led to some confusion about just how provisioning, policy-driven and virtualization management issues are handled from one solution to the next. Jackson admitted that many are competing in the cloud space, but argued that their focus stops short of “intelligent” automation, instead opting for simple answers to complex questions for any given workload.

“You have those that are coming in from the mechanism standpoint, those trying to provide provisioning and virtualization management, but the challenge they have is that they are a mechanism without a ‘brain’—without a toolset to optimally apply resources to meet SLAs or objectives,” said Jackson. “They’re typically something you can go to and say, can I have it, and it will either give it to you or not, but that’s the extent of its intelligence; this is more like a workflow connected with provisioning management.”

He cites provisioning management technologies from CA, BMC and Eucalyptus to illustrate, suggesting that these are “packaging of workflows and provisioning or virtualization management technologies” and that even what VMware just rolled out is a virtualization management-specific technology that focuses only on the question of “how do I move, lift and place resources together to create a new environment?”

Adaptive’s response is that it sits on the opposite side of the spectrum: the company recognizes that customers have already made investments in many of these provisioning, storage and network technologies, so it becomes Adaptive’s task “to take what’s there, add on this decision-making layer called Moab, which then makes optimal decisions.”

To highlight this point, he presented an unnamed case study of one of Adaptive’s “large enterprise customers”:

“They had KVM, they had VMware, they had physical provisioning technologies and stateless provisioning technologies because 75% of workloads are not virtualized—so to have everything under KVM or Xen is not a reality today; most everything is in a physical provisioning space. So then we layer above that and we’re able to drive their server provisioning and virtualization management (even though it’s two different classes of virtualization).

“So they wanted to optimize; if they could get something in KVM at a lower cost and still be virtual rather than VMware, they can cost-optimize that within Moab and apply those that need to be in VMware there; those that don’t go to KVM, and those workloads that were not virtualized could then go through stateless provisioning because that’s faster. And then if we can pack things into fewer servers, we can use our green compute capabilities to power down servers. We were able to intermix all of those together. You just won’t find this in a virtualization-centric technology or a provisioning management-centric technology; it really takes an organizational-level decision maker to look across all you have and optimize it.”
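As a rough illustration of the optimization described in the case study, the sketch below routes each workload to the cheapest acceptable tier (KVM over VMware when policy allows, stateless physical provisioning for the non-virtualized majority) and then packs placements onto as few servers as possible so idle machines can be powered down. The tier names, costs and first-fit-decreasing packing heuristic are all assumptions for illustration, not Moab’s actual algorithm.

```python
# Illustrative cost-aware placement and packing; names and costs are
# assumed for the example, not Moab's actual algorithm.
from dataclasses import dataclass, field
from typing import List

TIER_COST = {"vmware": 3.0, "kvm": 1.0, "stateless": 0.5}  # assumed $/CPU-hour


@dataclass
class Workload:
    name: str
    cpus: int
    virtualizable: bool
    requires_vmware: bool = False


@dataclass
class Server:
    capacity: int
    used: int = 0
    placed: List[str] = field(default_factory=list)


def choose_tier(w: Workload) -> str:
    """Cheapest tier that satisfies the workload's policy."""
    if not w.virtualizable:
        acceptable = ["stateless"]      # faster path for non-virtualized work
    elif w.requires_vmware:
        acceptable = ["vmware"]
    else:
        acceptable = ["kvm", "vmware"]  # virtual either way; KVM costs less
    return min(acceptable, key=lambda t: TIER_COST[t])


def pack(workloads: List[Workload], servers: List[Server]) -> List[Server]:
    """First-fit-decreasing: fill servers densely so the rest can power down."""
    for w in sorted(workloads, key=lambda w: w.cpus, reverse=True):
        target = next((s for s in servers if s.used + w.cpus <= s.capacity), None)
        if target is None:
            raise RuntimeError(f"insufficient capacity for {w.name}")
        target.used += w.cpus
        target.placed.append(f"{w.name}@{choose_tier(w)}")
    return [s for s in servers if s.placed]  # empty servers can be powered down
```

In this toy model, a virtualizable workload with no VMware requirement lands on KVM simply because it is the cheaper virtual tier, mirroring the cost optimization Jackson describes.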

It’s not difficult to see the value of more intelligent cloud automation, and the same companies he called out earlier are working furiously to produce similar capabilities. It will be up to Adaptive to continue enhancing and expanding Moab in order to stay ahead of the curve, since some of the other vendors entering its “intelligent cloud” territory have the advantage of name recognition.

Life After Investment

The words of Intel Capital’s managing director Eichenlaub are worth remembering: there is an increasing awareness of the business value of cloud automation products for enterprises that are virtualizing some or most of their applications and will likely continue to do so if Gartner and IDC (and any number of other analyst firms) are correct.

Intelligent clouds are a requirement for large-scale enterprises that have taken steps down the road to virtualization, as more companies realize they must either have what Jackson calls a “cloud infrastructure that is highly agile or over-buy and have a lot of excess capacity in several areas in order to service customers.” In his view, they want to see “the benefits of having that agile cloud infrastructure and our technology allows them to take advantage of that rapid delivery of resources. But they also want something that is intelligent and watches things like, what are the implications for my applications, what are the implications for my SLAs—how do I optimize an SLA in that context?”

The funding will be concentrated mostly on the sales and delivery side, or, as Jackson put it, “on those who will take the technology and deliver cloud services to customers,” but it will also be distributed to other areas the company plans to emphasize, including end-to-end solutions for its partners. Currently, the company relies heavily on its partnerships, which already include the likes of IBM, SGI and HP, among others, but Adaptive also handles direct customers, something that may happen more frequently given the added dash of resources to continue expanding its reach.

