Catching Up on Cloud: An Interview with IBM

By Nicole Hemsoth

May 19, 2008

If there’s one company trying to scatter cloud computing across the planet — like some kind of big, blue cloud, you might say — it’s IBM. And one of the top guys behind IBM’s cloud initiatives is Dennis Quan, chief technology officer for High-Performance On-Demand Solutions. The HiPODS team is charged with helping customers smartly grow and manage their datacenters, accelerate time to market and reduce IT complexity, among other things. In this Q&A with GRIDtoday, Quan gave us 29 minutes to recap some of the high points of IBM’s busy past six months or so in cloud computing. Then he had to catch a plane.

GRIDtoday: When customers ask you to explain cloud computing, what do you tell them?

DENNIS QUAN: Cloud computing is about providing applications to large numbers of users via the network and their connected mobile devices, their laptop computers or whatever, and having an IT infrastructure, a cloud computing center, that’s capable of supporting large numbers of applications and massively scaling to meet growing user demand. We’ve actually been able to prove out these concepts in projects running within IBM. And we’ve found that one of the key characteristics is not only scaling to demand, but getting applications on board and running as quickly as possible. This is critical for this generation of applications because the innovation cycle is so fast today. You really need to get innovators the compute resources they need, and that’s one of our goals with our on-demand solutions.

One message that resonates well with customers is being able to increase the speed at which they can prototype their apps and get feedback. Because we’re using virtualization and provisioning automation, we’re able to let people go to a self-service portal and say, “I need these Linux boxes with an app server,” and so on, and get that done within minutes, as opposed to taking potentially weeks to acquire the machines, rack them, and install the software themselves. That is extremely appealing to our customers. They also like having the freedom to run any kind of workload and put any software they like on the cloud. It’s not limited to back-end, batch-processing tasks; it could be user-facing applications, Web servers, or database servers.

Gt: What’s one of your most compelling examples of cloud computing?

QUAN: About two and a half years ago we launched a cloud within IBM, the Innovation Portal. When individual users at IBM have a new idea — instead of having to hunt down permission to buy a new machine, find a place to host it, install it, and so on, which is not only time-consuming but also resource-consuming because they have to handle all the system admin work — they can go to this self-service portal and request resources. It can be 20 virtual machines running Linux and WebSphere and DB2, for example, or any number of combinations. It’s a lot like booking a hotel room on a Web site. You’re able to get access to a certain number of resources for a period of time, the system goes off and provisions that for you, and you’re given access to those machines with all the software, middleware, and so on set up for you.
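
As a hypothetical illustration of what such a reservation might look like, here is a minimal sketch of a self-service provisioning request. The portal endpoint, field names, and software images are invented for this example; they are not IBM’s actual interfaces.

```python
# Hypothetical sketch of a self-service provisioning request in the style
# Quan describes. The portal endpoint, field names, and software images
# are invented for illustration; they are not IBM's actual interfaces.
import json
import urllib.request

reservation = {
    "project": "innovation-1234",
    "duration_days": 90,  # like booking a hotel room for a fixed stay
    "machines": [{
        "count": 20,
        "os": "linux",
        "software": ["websphere", "db2"],  # middleware pre-installed
    }],
}

def submit(portal_url: str, request: dict) -> dict:
    """POST the reservation to the (hypothetical) portal and return the
    list of provisioned machines."""
    body = json.dumps(request).encode("utf-8")
    http_req = urllib.request.Request(
        portal_url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(http_req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    result = submit("https://portal.example.com/reservations", reservation)
    for vm in result.get("machines", []):
        print(vm["hostname"], vm["software"])
```

The hotel-booking analogy shows up in the duration field: resources are reserved for a fixed window and reclaimed by the provisioning system afterward.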

Since launching this cloud within IBM, we’ve had over 100 projects run on it, and about 20 percent have contributed to technology used in shipping products.

The applications and projects we’ve run in the cloud have ranged from collaboration tools to social networking tools to development tools — and even a game. What we’ve found is that people are able to access compute resources very quickly, which benefits not just individual innovators with a brand new idea but also design teams who want to test their new product on the greater IBM population. A software development company could use this kind of cloud to do in-house testing and quality control.

Gt: What software is at the heart of this cloud?

QUAN: We put together this cloud solution based on our Tivoli products — Tivoli Provision Manager, Tivoli Monitoring — and that’s really been the foundation architecture that we’ve been using in all our explorations of cloud computing.

Gt: What is the goal of the joint venture with Google announced last fall?

QUAN: It’s a partnership to promote research into cloud computing, especially to promote the parallel programming models we think are going to be important for future applications to take advantage of these large cloud centers. We’ve built out three clouds for this project: one at the Almaden Research Center in San Jose, California; one at the University of Washington; and one at a Google datacenter. We’ve been able to get six universities involved with this project: MIT, the University of Maryland, Carnegie Mellon University, Stanford University, the University of Washington, and the University of California, Berkeley. Overall it’s about a thousand machines across the three sites, using the Tivoli architecture mentioned earlier.
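
The best-known parallel programming model in this space is MapReduce: computation is expressed as a map step over shards of data followed by a reduce step that merges the partial results, which lets a framework spread the work across a large cluster. As a rough illustration, not code from the initiative itself, here is a minimal word count in that style, with Python’s multiprocessing pool standing in for the cluster.

```python
# A minimal illustration of the MapReduce style of parallel programming
# (word count), not code from the IBM/Google initiative itself. In a real
# cloud deployment a framework would shard the input across many machines;
# here a local process pool stands in for the cluster.
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_phase(shard: str) -> Counter:
    """Map: turn one shard of text into partial word counts."""
    return Counter(shard.split())

def reduce_phase(a: Counter, b: Counter) -> Counter:
    """Reduce: merge two sets of partial counts."""
    return a + b

if __name__ == "__main__":
    shards = ["the cloud scales", "the cloud computes", "scales and scales"]
    with Pool() as pool:
        partials = pool.map(map_phase, shards)   # map runs in parallel
    totals = reduce(reduce_phase, partials, Counter())
    print(totals.most_common(3))
```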

Gt: November’s Blue Cloud announcement of “ready-to-use cloud computing”: What does it mean for businesses, and what’s happened with the initiative since then?

QUAN: Blue Cloud is really a statement about everything we’ve learned from running clouds for innovation enablement inside IBM and from supporting these new parallel programming models. It’s about how we’re going to apply these technologies to solve some of the nagging pain points of many of our customers, who are struggling to grow their datacenters in the face of rising energy costs while running out of space.

Blue Cloud is about having a broad spectrum of products across our systems and software technologies, and our services, to support a cloud computing style of datacenter management. It’s really about having a massively scalable datacenter model that lets you support large numbers of users and a very diverse range of applications and workloads.

What we’ve done is put together an offering — which we project will come out this spring — that will allow our customers to start up a cloud center of their own within their datacenters, within their own four walls. We’ve found that a number of our customers are very interested in this kind of highly scalable, manageable form of large-scale computing, but want to maintain control of their own datacenters. So we are commercializing what we’ve learned in the clouds we’ve built so far.

We want to be the arms dealers, as it were, of these cloud computing components that will enable our customers to build up datacenters that have these capabilities.

Gt: In February, IBM announced it will build the first cloud computing center in China, at the new Wuxi Tai Hu New Town Science and Education Industrial Park. Who will be using the Wuxi cloud, what for, and why?

QUAN: We’ve engaged with the municipal government of Wuxi, a city north of Shanghai, to build them a cloud for software development. Cities all over China want to create software parks, and entrepreneurs there want to do enterprise software development for multinationals or work in areas like animation — things that involve large amounts of compute resources. They can really benefit from access to scalable resources on demand. So we’re building them a cloud center that includes a wide range of our Rational tools for developing enterprise applications.

An entrepreneur making use of this cloud can go to the self-service portal and say, “I’m going to be doing this project for 10 months and I’m going to need resources for my 20 or 40 developers to do source control and project management,” and the appropriate products will be provisioned for them. The government will be able to bill them on a monthly basis, or whatever the schedule happens to be. The big benefit to these small entrepreneurs is that the upfront costs of buying hardware and software licenses, as well as the ongoing maintenance, have all been centralized and borne by the government, so the software company can pay as it goes.
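
As a back-of-the-envelope sketch of that pay-as-you-go arrangement, the toy calculation below shows how an operator might recover centralized hardware and license costs through monthly usage charges. The rates and formula are invented for illustration and are not drawn from the Wuxi deployment.

```python
# A back-of-the-envelope sketch of the pay-as-you-go model described above.
# The government, as cloud operator, bears hardware and license costs up
# front and recovers them through monthly charges. All rates here are
# invented for illustration, not actual Wuxi pricing.
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_bill(vm_count: int, rate_per_vm_hour: float,
                 license_fee_per_vm: float) -> float:
    """One tenant's monthly charge for a block of provisioned VMs."""
    compute = vm_count * HOURS_PER_MONTH * rate_per_vm_hour
    licenses = vm_count * license_fee_per_vm
    return compute + licenses

# A 20-developer team on 20 VMs, at a hypothetical $0.10/VM-hour plus a
# $15/VM monthly tooling surcharge:
print(f"${monthly_bill(20, 0.10, 15.0):,.2f} per month")
```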

We’ve seen that become a very popular model with governments, not just in China but around the world, that are interested in promoting economic development, entrepreneurship, and innovation.

Gt: Then a month later, IBM opens a new cloud center in Dublin.

QUAN: We partnered with the local government industrial development authority. This particular cloud is run out of an IBM center. We’re able to use this center to demonstrate to clients the benefits of cloud computing, especially for enterprises.

One of the highlights of the Dublin cloud is a solution we call the Idea Factory for Cloud Computing. It’s a Web 2.0 application that lets you exchange ideas using collaboration tools like blogs and wikis. One customer’s consultants held an idea-exchange session a couple of weeks ago with thousands of participants, and they’ve been happy with what they can do with an application delivered from a cloud computing center. We’ve had similar experiences with a wide range of institutions, including governments and financial services firms.

Gt: What does cloud technology mean for tomorrow’s datacenter?

QUAN: We see cloud computing as a broadly applicable technology platform for enabling the next generation of datacenter, what we call the new enterprise datacenter. [This type of datacenter will be] able to combine what you see in the Web-centered cloud platforms out there — the MySpaces, the Flickrs, the YouTubes — which deliver applications at scale to huge numbers of users, with the characteristics of traditional enterprise datacenters, which large companies depend on for mission-critical applications, secure transaction processing, and the security and isolation of data. The new enterprise datacenter model is inspired by the Web-centered cloud concept and also inherits the enterprise characteristics that our clients find absolutely critical.

Gt: What kind of response do you get from management and IT people when talking about bringing a cloud into their business?

QUAN: I think the way we’ve talked about cloud computing tends to resonate very well. They might have concerns based on things they read in the press and hear from other folks in the industry, but at the end of the day they care most about solving the problems that hamper them. How are they going to get higher utilization out of datacenters that are running out of space? How are they going to lower the labor and maintenance costs of running large-scale systems? That’s really where we’ve targeted our solutions; they’re designed to help along exactly these axes. So, using technologies like virtualization and provisioning automation, it’s really about taking the benefits customers have seen from Web-centric clouds and applying them directly to their own pain points.

Gt: What are some of those pain points?

QUAN: One of the biggest is machine utilization. We’ve probably all seen the statistics about x86 datacenters running at about 5 to 10 percent utilization across their systems. And when you need to grow those datacenters but are running out of space, one of the things you want to do is make better use of the machines you already have. By using things like virtualization, we’re able to improve utilization significantly. Virtualization has been an IBM specialty for ages.
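
To see why those utilization figures matter, consider the rough consolidation arithmetic below. The numbers are illustrative, not IBM data: the point is that the aggregate load of many underused servers can, in principle, be carried by far fewer virtualized hosts.

```python
# Rough consolidation arithmetic behind the utilization point. Servers
# averaging 5-10% utilization can, in principle, be packed onto far fewer
# physical hosts. Figures are illustrative, not IBM data.
import math

def hosts_needed(n_servers: int, avg_util: float, target_util: float) -> int:
    """Hosts required to carry the same aggregate load at a higher target
    utilization (ignoring peaks, failover headroom, and memory limits)."""
    return math.ceil(n_servers * avg_util / target_util)

for util in (0.05, 0.10):
    print(f"{util:.0%} avg util: 100 servers -> "
          f"{hosts_needed(100, util, 0.60)} hosts at 60% target")
```

Real-world ratios are lower once peak loads, failover headroom, and memory footprints are accounted for, but this arithmetic is what drives the consolidation story.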

Gt: Just a couple weeks ago, IBM introduced a new line of servers. Tell us about the iDataPlex systems and how they fit into the cloud environment.

QUAN: They’re a great example of a hardware platform to support cloud systems. The iDataPlex systems allow for an extremely dense configuration of compute power in the rack. We think these Linux servers can be used to support an extremely large cloud datacenter. They double the number of systems that can fit in a rack, but use about 40 percent less power. We actually have some of these systems running within the cloud that we have in our laboratories, and they’ll be used in our other cloud installations. They’re a key part of the portfolio we’re offering to companies to build out their clouds. The systems support all the characteristics needed for cloud computing: extremely dense, vast pools of compute power, virtualization capabilities, and so on.

Gt: We will refrain from asking you if the future of cloud computing is sunny … so, how is it?

QUAN: The future is pretty bright, because what we’re seeing right now is such growth in mobile technologies and the need for data access anywhere. More and more users are going to be signing on from more and more locations. In developing economies they’ll be signing on mostly from mobile devices because of the lack of traditional infrastructure. That’s going to put extreme demands on datacenters to scale and to process the large amounts of video, audio, and text these users are contributing and sending to each other. You’re going to need a cloud computing-style datacenter model to support these kinds of applications, and these phenomena are not restricted to consumer applications. You see the same things happening within enterprises in the types of collaboration and interaction people have within a business, for anything from basic e-mail to sales processes, CRM, and line-of-business applications. All of these are going to undergo a transformation toward being mobile, supporting richer forms of Web 2.0 interaction, and sustaining lots of concurrent users.

These are all driving the need for scalable datacenter models, such as the ones we’ve been building with our cloud computing initiatives. And finally, with the rising cost of energy, clouds will support green initiatives by getting the most out of the compute cycles you have in these datacenters. We made an announcement about a month ago, in collaboration with Ohio State and Georgia Tech, about research we’re doing with them on autonomic computing as applied to clouds. There are several areas we’re looking at, like automating workload scheduling, more intelligent balancing of resources across a virtualized datacenter, and workload movement. These features would let a company shut down a portion of its datacenter when it’s not in use in order to save electricity.
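
A toy sketch of that last idea, draining lightly loaded hosts and powering them off, might look like the following. The data structures, threshold, and placement rule are invented for illustration; a real autonomic controller would live-migrate running VMs and respect many more constraints (memory, affinity, failover).

```python
# Toy sketch of the autonomic policy Quan outlines: migrate workloads off
# lightly loaded hosts, then power the emptied hosts down to save energy.
# Structures and thresholds are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    vms: list = field(default_factory=list)  # (vm_name, cpu_fraction) pairs

    @property
    def load(self) -> float:
        return sum(cpu for _, cpu in self.vms)

def consolidate(hosts: list, idle_threshold: float = 0.2) -> list:
    """Drain hosts below the idle threshold onto busier hosts with spare
    capacity, then report which hosts can be powered off."""
    powered_off = []
    for donor in sorted(hosts, key=lambda h: h.load):
        if donor.load >= idle_threshold:
            continue
        busy = [h for h in hosts if h is not donor and h.load >= idle_threshold]
        for vm in list(donor.vms):
            fits = [h for h in busy if h.load + vm[1] <= 1.0]
            if fits:
                target = min(fits, key=lambda h: h.load)  # least-loaded fit
                target.vms.append(vm)
                donor.vms.remove(vm)
        if not donor.vms:
            powered_off.append(donor.name)
    return powered_off

hosts = [Host("h1", [("web", 0.5)]), Host("h2", [("batch", 0.1)]), Host("h3")]
print("power off:", consolidate(hosts))  # -> power off: ['h3', 'h2']
```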

We’ve been showing these technologies to customers for a couple of years now, and they’re able to see how these things apply to them. It’s basically been taking off.
