Eucalyptus Chief Talks Future After $20 Million Infusion

By Nicole Hemsoth

July 6, 2010

The holiday weekend that just ended was probably just a little peppier for private cloud software vendor Eucalyptus Systems, following a fresh infusion of cash to the tune of $20 million. This dwarfs the first round of funding the company received, which was $5.5 million, most of which sits untouched to date.

Eucalyptus Systems CEO Marten Mickos said during a phone interview on Friday: “We had several venture capitalists knocking on our door every day, but we only spoke with a small group of them — mostly VCs who came recommended or whom we knew. We very quickly settled on New Enterprise Associates (NEA), but I will say, I’ve never raised capital this quickly and efficiently before — it was a breeze.”

Not all companies can call the funding process a breeze, but not all are as uniquely positioned as Eucalyptus Systems, whose software has been gaining popularity in its widely used open source incarnation. Eucalyptus has its roots directly in HPC, where it began as a research project led by Rich Wolski, then at UC Santa Barbara. The open source project grew wings in mid-2009 following its first round of capital from Benchmark Capital, which allowed Wolski to take his project from open source to the enterprise. Far from being a reference to a tree the founder particularly liked, Eucalyptus is an acronym for “Elastic Utility Computing Architecture for Linking Your Programs to Useful Systems,” which is, well, pretty much exactly what it does — hence its adoption in both its open source and commercial forms.

While it might seem that this kind of injection of funding would spur a new mission or vision statement on the part of a CEO with a high profile, this is not necessarily the case — at least not to the degree one might think. Trying to talk about a company’s vision with Mickos is complicated, since he does not believe that vision is what makes a company succeed — especially since, as he noted, he already agreed with the trajectory the company was taking when he took the position in March. According to Mickos, “A CEO doesn’t need a vision, a CEO needs ears to listen to customers and markets. That’s much more important than vision. If you look at some of the most spectacular failures in IT they were led by very visionary people — but those people didn’t listen. That’s my philosophy.”

So with the vision statement all taken care of, we were left to focus on more important matters: the influx of funding and the roadmap for the next two years, which the Eucalyptus CEO discusses in some detail.

During the course of our 45-minute discussion, Mickos was asked several specific questions about the current state and future of the company now that it is in a better competitive position. We also spent time toward the end of our discussion talking about some of the lesser-known concepts that form the backbone of Eucalyptus, and addressed what Mickos calls “misconceptions” about Eucalyptus, especially in terms of a few of its more recent decisions.

HPCc: The tenure of your involvement with Eucalyptus is relatively short; how did you come to the company — what foundational ideas did you bring to the table and how might these be shaping Eucalyptus, especially now?

Mickos: So when I left Sun over a year ago, I asked everyone, “so what’s bigger than open source?” Some were just joking and said “closed source” (laughs), but the two serious answers were the mobile internet and cloud computing. At the time, I didn’t pay much attention, but over successive months I came to realize these are really two massive shifts in IT. At one point, I realized I needed to be involved with the cloud, but on the infrastructure side, which is close to my heart. I visited the Eucalyptus team and fell in love with them, and knew the market opportunity and original DNA of the team were wonderful. I applied and begged for them to bring me on, and they did.

I think the client-server paradigm has been one of the biggest historical IT shifts up until now, one which happened in the late 80s — it was a great paradigm that worked for a long time. Then came the web, which replaced the client part of client-server: we stopped having thick clients and instead used web browsers. But on the server side we stuck to the same architecture. Now with cloud computing, we are replacing and shifting away from the server part and building a new infrastructure for running applications or services that are scalable in a way never seen before. It is a major, major shift. It could take a long time to fulfill, maybe five to ten years, but it will be massive.

HPCc: If we are to look at your involvement with the business model, how have you shifted the vision or focus of the company since you took over in March, if at all?

Mickos: You know, many people think that CEOs in general come in with some brilliant vision to change the course or save a company, but I can’t say that’s the case. One reason I joined is because I felt they had a very good strategy, so with me coming on board, I am just reinforcing the good strategic parts and getting rid of those things that are weak or not so good — it’s not like any wholesale changes have been made. We’re just becoming more specific and determined about elements that have always been part of the strategy.

HPCc: What are some of those weak areas you just referred to?

Mickos: We have great partners, but we didn’t have an official program, so we quickly created one for them. We realized that to be successful we needed a strong partner program. The other thing is the hard work of assembling a management team — the managers there when I arrived were great, but it was an incomplete team, so I’m now engaged in recruiting outstanding talent to build it out. When you have those kinds of people on board, things start happening because you have more hands and more intelligence on board. There are no real major shifts — maybe at some point, but for now, since the company was on such a good trajectory when I joined, I am just adding more fuel to the fire. We are hiring more rapidly and bringing in capital more rapidly.

HPCc: Before we went on record you were talking about the roadmap for Eucalyptus Systems — this is a two-year projection but where does it lead? It’s difficult to tell when there is very little published about your strategy in the industry directly from the source.

Mickos: I agree we haven’t been good at communicating our strategy, but our marketing efforts are growing; we just haven’t published these things. The nature of a private cloud solution is that it needs to operate in a heterogeneous environment, so on our roadmap we have things that make us fit in with that. We just added support for Windows images, we just added support for VMware and integration for storage devices — these integration aspects are keys to our roadmap.

We have a design that lends itself to massive scalability, but for each component we need to tune and perfect the code so it scales. We’re doing that with the current release, where we were able to add some smart improvements to scalability.

The other thing we’re working on is HA — high availability. You need a cloud system that can detect failures, see when nodes go down or respond too slowly, and assign other nodes to take over their work. HA is technically a tricky thing — you need constant readiness to take over, and you must know when there’s a failure. You have the specific mathematical challenge of having a watchdog that watches over the process, and then something to watch over the watchdog; it’s a difficult and interesting engineering task to bring to reality. HA concerns have existed for a long time, but they are difficult and time-consuming to work through.
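The heartbeat-and-takeover scheme Mickos describes can be sketched in a few lines. The following is a hypothetical illustration, not Eucalyptus code: the node names, the timeout value, and the reassignment policy are all invented for the example.

```python
TIMEOUT = 5.0  # seconds without a heartbeat before a node is presumed dead

def detect_failures(heartbeats, now, timeout=TIMEOUT):
    """Return the set of nodes whose last heartbeat is older than timeout."""
    return {node for node, last in heartbeats.items() if now - last > timeout}

def reassign(assignments, failed, healthy):
    """Move work owned by failed nodes onto the first available healthy node."""
    for task, owner in assignments.items():
        if owner in failed and healthy:
            assignments[task] = healthy[0]
    return assignments

# Each node periodically reports a last-seen timestamp (here, fake values).
heartbeats = {"node-a": 100.0, "node-b": 93.0, "node-c": 99.5}
failed = detect_failures(heartbeats, now=100.0)   # node-b: silent for 7s
healthy = sorted(set(heartbeats) - failed)
work = reassign({"job-1": "node-b", "job-2": "node-c"}, failed, healthy)
print(failed)  # {'node-b'}
print(work)    # {'job-1': 'node-a', 'job-2': 'node-c'}
```

The “watchdog watching the watchdog” problem Mickos mentions is what this sketch leaves out: in a real HA system the failure detector itself must be monitored and replaceable, which is where most of the engineering difficulty lives.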

HPCc: What are some of the other more practical uses of this funding and where do they fit into your roadmap for Eucalyptus? I can see how you will want to amp up development and refinement of the existing model but how will this funding be put to use in the near and long term?

Mickos: We are ramping up development according to our two-year roadmap. We know what needs to be done, but we need to add more people to accomplish it. Each time we get a new customer or partner, they put pressure on the product, which is good — can you make a change, open up an API, make other alterations? It is in our interest to do what they need because they are driving us in the right direction. To manage that sort of development model, where we are going in many directions, we need more people — more engineers. Remember, the company was started by PhDs who were core engineers, so we have a strong team building the core functionality.

HPCc: Where do you see Eucalyptus when you come to the end of the road — at least according to your map — in two years?

Mickos: Two years from now we will be ubiquitous across the planet with our open source version, and we will serve some significant online services under a commercial arrangement — by that I mean web and mobile internet companies, and also Fortune 100 companies running customer-facing applications, data warehousing and computation. That’s where I think cloud will be in two years. Give us another year or two and we will be running mission-critical ERP applications. In five years — that’s too far away for me to know (laughs).

HPCc: Where do HPC users, particularly those looking to deploy clouds for scientific computing or research and development purposes, fit into your roadmap? I know that there are institutions making use of Eucalyptus in its open source version, but how does this fit into the business strategy?

Mickos: So we started as an NSF-funded academic project without any commercial intentions at the time — it might sound like a crazy start. The benefit is that the software was designed from the start for HPC and scientific users. We have built into the abstraction layer the design elements that make Eucalyptus very suitable for computational simulations, for example, where suddenly you need to employ hundreds of nodes and the next moment you’re done and don’t need them anymore. There we see a specific benefit for HPC. NASA used it for the Nebula cloud, for instance.

Certainly some of the largest enterprises in the world are interested, but HPC shops have been spending so much money on tuning for so long that they’re not thinking of shifting HPC workloads yet. Many others are thinking of shifting into the cloud, and all of this is driven by developments in the large organizations.

HPCc: Public, private, hybrid aside for a moment, do you feel that this is a major shift for HPC, or is it something that is going to take a long time to come to fruition?

Mickos: It might be, but it is not happening rapidly. I am an optimist, so it seems to happen fast, but behind every HPC center sits a budget manager or CIO who doesn’t want changes to happen. It could even be the opposite: HPC compute centers have received so much attention and investment over the last few years that they are too ashamed to move it right over to the cloud. The market is so massive that we see more growth than we can deal with, but in the grand scheme of things, not all changes happen fast and not all people are ready for change.

The thing is that you have to decompose everything into small pieces so you can distribute them to different nodes and VMs and so on. Some of those HPC systems are built in such a monolithic fashion that it can be difficult to shake them loose to fit them to a cloud.

HPCc: What is your personal stance on all of the conversations revolving around private versus public versus hybrid clouds, by the way?

Mickos: The thing is, the future must be hybrid — everyone agrees — but it will take some time to realize it, and even then there will be lots of non-hybrid clouds. Today the state of the industry is such that there are separate public and private cloud industries. This will continue for another year or two, until the industry has really figured out how to build reliable and successful hybrid clouds.

HPCc: What are people missing when they talk about Eucalyptus? Where are some of the sources of confusion, from your view, and how can you address them?

Mickos: I think our founders had great insight when they chose the same API Amazon uses for its cloud, and I think people haven’t seen the value of that — customers do, but many wonder why we did it. One reason is that we believe Amazon’s API is becoming an industry standard — it’s not just a question of linking the two clouds, which is the vision of the hybrid cloud. There is also the ease of learning: engineers who have developed applications for Eucalyptus or Amazon can apply the same knowledge to one just as to the other. Our engineers and founders also concluded that the Amazon API, in addition to being very widely used, is well designed for massive scalability. They brought it down to the most primitive level, where you can get that valuable scalability, and haven’t messed it up with too much complexity, which is a respectable accomplishment. That’s why we implemented their API — and we know we can implement other APIs on top of it. Who knows, it may be an API that many other vendors ultimately support, and if so, it will make it easier for customers to recognize and use. Ease of use is the important element here.
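The practical consequence of that API choice can be illustrated with a sketch. This is not Eucalyptus or Amazon client code — the private-cloud endpoint URL and the simplified request builder are hypothetical, and real EC2 query-API requests also carry authentication signatures — but it shows the idea: the same client logic targets either cloud, and only the endpoint changes.

```python
def describe_instances_request(endpoint):
    """Build a (simplified) EC2 query-API request for DescribeInstances.

    Because Eucalyptus implements the same API as Amazon EC2, the same
    request shape works against either cloud; only the endpoint differs.
    (Real requests also include signed authentication parameters.)
    """
    return f"{endpoint}/?Action=DescribeInstances&Version=2010-06-15"

# Same client code, two clouds -- only the base URL changes.
amazon = describe_instances_request("https://ec2.amazonaws.com")
private = describe_instances_request(
    "http://cloud.example.com:8773/services/Eucalyptus"  # hypothetical endpoint
)
print(amazon)
print(private)
```

This endpoint-swapping pattern is exactly what makes the hybrid-cloud vision Mickos mentions tractable: tooling written against the Amazon API needs no rewrite to manage a compatible private cloud.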

To get to the heart of the technical underpinnings and history of Eucalyptus Systems, tune in later this month for a detailed interview with founder Rich Wolski, based on a conversation he had with HPC in the Cloud’s editor on Friday, not long after the funding announcement.
