“Intelligent” Cloud Automation Gets Substantial Push From Investors

By Nicole Hemsoth

September 15, 2010

Adaptive Computing, known for its Moab automation technology, announced today that it was one of four companies selected by Intel Capital for a round of Series A funding. The company is set to receive $14 million, with Intel Capital’s investment combined with further resources from two other investment firms that saw promise in the company and its nine-year track record of growth and profitability.

Steve Eichenlaub, managing director of Intel Capital, stated that “Adaptive Computing’s solutions are well-positioned to play an important role upgrading enterprise data centers to intelligent self-optimizing cloud environments” and that this is in line with Intel Capital’s view that “intelligent policy management will play a critical role in the next phase of cloud automation.”

Adaptive COO and President Michael Jackson told HPC in the Cloud on Monday that some have wondered why the company decided to seek external funding. In his view, “it came down to the fact that we were seeing an inflection in demand for our cloud products and the amount of incoming demand was greater than our organic revenues would allow us to service. We really had the choice of rejecting business, deferring business or accessing capital to build up to service that demand properly.”

Jackson noted that first and foremost, however, this “will enable the company to increase headcount and expand operations to meet the growing global demand” from customers with complex management and policy-driven needs, needs he feels are not met by virtualization-centric or provisioning-related technologies that simply give users basic “yes and no” answers to their provisioning and virtualization requests. Adaptive is seeing a growing number of customers looking for more “intelligent” automation to enable more efficient resource management and, ultimately, greater cost savings, and it seems investors have been watching this demand play out as well.

Placing Value on Intelligent Cloud Management

While there are a number of solutions that promise simple, intuitive, policy-based automation, Jackson argues that they often lack the sophistication needed to manage some of the largest enterprise data centers, particularly in financial services, mega e-commerce and web application providers, telcos, and, increasingly, government, all of which together account for a significant majority of Adaptive’s business.

According to Adaptive’s COO, one of the reasons the company was singled out for this round of funding (beyond a clearly stated need to expand) is that there are no other comprehensive “intelligent” cloud management solutions that do what Moab, the core of Adaptive’s business, can do.

As Jackson put it, “where many others focus just on the mechanism (provisioning and virtualization management-type technologies), ours comes in at the decision-making layer. We act as a service governor to manage the space, to manage the decisions that are made in the cloud, and then we connect to a customer’s pre-existing investment in provisioning, virtualization, network, and storage technologies so we help their existing IT become cloud as opposed to a ‘rip and replace’ that requires them to shift their investments over to different technologies.”
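
To make the “service governor” idea concrete, here is a minimal, hypothetical sketch of a decision layer that sits above a site’s existing provisioning and virtualization tooling and only issues instructions to those systems rather than replacing them. None of the class or method names below come from Moab’s actual API; they are assumptions made purely for illustration.

```python
# Hypothetical sketch of a "service governor" decision layer. These names are
# illustrative assumptions, not Moab's API; the point is that the governor
# decides *where* work should run and delegates the mechanics to whatever
# provisioning/virtualization tools the site already owns.

from dataclasses import dataclass
from typing import Protocol


@dataclass
class Workload:
    name: str
    cores: int
    must_be_virtual: bool = False


class ProvisioningBackend(Protocol):
    """Adapter wrapping an existing tool (VM manager, bare-metal provisioner, ...)."""
    name: str
    virtual: bool

    def free_cores(self) -> int: ...
    def deploy(self, workload: Workload) -> None: ...


class ServiceGovernor:
    """Decision layer: chooses where work runs, then hands execution to existing tools."""

    def __init__(self, backends: list[ProvisioningBackend]) -> None:
        self.backends = backends

    def place(self, workload: Workload) -> str:
        # Policy: among backends that satisfy the workload's constraints,
        # pick the one with the most spare capacity. The chosen backend's
        # own tooling performs the actual provisioning work.
        candidates = [
            b for b in self.backends
            if b.free_cores() >= workload.cores
            and (b.virtual or not workload.must_be_virtual)
        ]
        if not candidates:
            raise RuntimeError(f"no capacity available for {workload.name}")
        chosen = max(candidates, key=lambda b: b.free_cores())
        chosen.deploy(workload)
        return chosen.name
```

In practice a governor of this kind would weigh SLAs, costs, and live utilization rather than a single capacity number, but the separation of concerns is the point: the decision is made once, centrally, and the customer’s existing tools carry it out.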

Identifying what sets Adaptive apart is a challenging task given the number of vendors competing for share in the cloud management free-for-all, which has led to some confusion about just how provisioning, policy-driven, and virtualization management issues are handled from one solution to the next. Jackson admitted that there are many competitors in the cloud space, but said they have a distinct focus that does not offer “intelligent” automation, instead opting for simple answers to complex questions for any given workload.

“You have those that are coming in from the mechanism standpoint, those trying to provide provisioning and virtualization management, but the challenge they have is that they are a mechanism without a ‘brain,’ without a toolset to optimally apply resources to meet SLAs or objectives,” said Jackson. “They’re typically something you can go to and say, ‘Can I have it?’ and it will either give it to you or not, but that’s the extent of its intelligence; this is more like a workflow connected with provisioning management.”

He points to provisioning management technologies from CA, BMC and Eucalyptus to highlight his point, suggesting that these are “packaging of workflows and provisioning or virtualization management technologies” and that even what VMware just rolled out is a virtualization management-specific technology that focuses only on the question of “how do I move, lift and place resources together to create a new environment?”

Adaptive’s response is that it sits on the opposite side of the spectrum: the company recognizes that customers have already made investments in many of these provisioning, storage and network technologies, so it becomes Adaptive’s task “to take what’s there, add on this decision-making layer called Moab, which then makes optimal decisions.”

To highlight this point, he presented an unnamed case study of one of Adaptive’s “large enterprise customers”:

“They had KVM, they had VMware, they had physical provisioning technologies and stateless provisioning technologies, because 75% of workloads are not virtualized; to have everything under KVM or Xen is not a reality today; most everything is in a physical provisioning space. So then we layer above that and we’re able to drive their server provisioning and virtualization management (even though it’s two different classes of virtualization).

“So they wanted to optimize: if they could get something in KVM at a lower cost and still be virtual rather than on VMware, they can cost-optimize that within Moab and apply those workloads that need to be in VMware there; those that don’t go to KVM, and those workloads that were not virtualized could then go through stateless provisioning because that’s faster. And then, if we can pack things into fewer servers, we can use our green compute capabilities to power down servers. We were able to intermix all of those together. You just won’t find this in a virtualization-centric technology or a provisioning management-centric technology; it really takes an organizational-level decision maker to look across all you have and optimize it.”
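
The placement logic in that anecdote can be sketched in a few lines. The sketch below is purely illustrative and is not Moab’s algorithm; the tier names, cost figures, and routing rules are all assumptions made for the example: workloads tied to VMware stay on VMware, other virtualizable workloads go to the cheaper KVM tier, non-virtualized workloads use fast stateless physical provisioning, and emptied servers become candidates for power-down.

```python
# Illustrative only (not Moab's real algorithm): route each workload to the
# cheapest tier that satisfies its constraints, then note that empty servers
# could be powered down under a "green compute" policy.

from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    virtualizable: bool            # can this run inside a hypervisor at all?
    requires_vmware: bool = False  # e.g., tied to VMware-specific tooling


# Assumed relative cost per placement tier (illustrative numbers only).
TIER_COST = {"kvm": 1.0, "stateless_physical": 1.5, "vmware": 2.0}


def choose_tier(w: Workload) -> str:
    """Pick the cheapest tier that still satisfies the workload's constraints."""
    if w.requires_vmware:
        return "vmware"
    if w.virtualizable:
        return "kvm"               # still virtual, but cheaper than VMware
    return "stateless_physical"    # not virtualized: fast stateless provisioning


def place_all(workloads: list[Workload]) -> dict[str, list[str]]:
    plan: dict[str, list[str]] = {tier: [] for tier in TIER_COST}
    for w in workloads:
        plan[choose_tier(w)].append(w.name)
    return plan


if __name__ == "__main__":
    demo = [
        Workload("erp-db", virtualizable=True, requires_vmware=True),
        Workload("web-frontend", virtualizable=True),
        Workload("batch-analytics", virtualizable=False),
    ]
    for tier, names in place_all(demo).items():
        print(f"{tier:>20}: {names}")
    # Servers left with no assigned workloads would then be candidates for
    # power-down under a green-compute policy.
```

A production policy engine would weigh live utilization, SLAs, and licensing costs rather than a static table, but the shape of the decision is the same: one layer looks across all of the available mechanisms and chooses among them.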

It’s not difficult to see the value of more intelligent cloud automation, and the same companies Jackson called out earlier are working furiously to produce similar capabilities. It will be up to Adaptive to keep enhancing and expanding Moab in order to stay ahead of the curve, since some of the vendors entering its “intelligent cloud” territory have the advantage of name recognition.

Life After Investment

The words of Intel Capital’s managing director Eichenlaub are worth remembering: there is increasing awareness of the business value of cloud automation products for enterprises that are virtualizing some or most of their applications, and that trend will likely continue if Gartner, IDC, and any number of other analyst firms are correct.

Intelligent clouds are becoming a requirement for large-scale enterprises that have taken steps down the road to virtualization, as more companies realize they want what Jackson describes as a “cloud infrastructure that is highly agile” rather than having to “over-buy and have a lot of excess capacity in several areas in order to service customers.” In his view, they want to see “the benefits of having that agile cloud infrastructure, and our technology allows them to take advantage of that rapid delivery of resources. But they also want something that is intelligent and watches things like, what are the implications for my applications, what are the implications for my SLAs, how do I optimize an SLA in that context?”

The funding will be concentrated mostly on the sales and delivery side, or, as Jackson put it, “on those who will take the technology and deliver cloud services to customers,” but it will also be distributed to other areas the company plans to emphasize, including end-to-end solutions for its partners. Currently, the company relies heavily on its partnerships, which already include the likes of IBM, SGI and HP, but Adaptive also handles direct customers, which may happen more frequently given the added resources to continue expanding its reach.

 
