IBM Computing on Demand Evolves Toward Cloud Computing Service

By Michael Feldman

August 19, 2009

As IT budgets have gotten squeezed, more customers are looking at cloud computing as a way to avoid up-front capital costs, while getting access to as many CPU cycles as they need. In response, all the big IT firms are scrambling to develop a cloud computing product and services strategy, and IBM is no exception.

IBM has actually enjoyed a bit of a head start here. The company’s Deep Computing on Demand offering was launched back in June 2003, when everyone thought clouds were just fluffy white things in the sky. The original offering allowed HPC customers to rent remote access to supercomputer-type systems maintained by IBM. The initial infrastructure consisted of a Linux cluster of xSeries servers housed at the company’s Poughkeepsie, New York plant.

One of the first users of the service was GX Technology Corporation, a company that does seismic data imaging for the oil & gas industry. Besides letting the company dodge the expense of a cluster build-out, the on demand service delivered much quicker turnaround on image processing, since IBM could provision up to a thousand servers at a time, depending on job size.

In general, the original Deep Computing on Demand service was designed for HPC applications across government, academia and industry. Over the next six years, IBM’s on demand offering evolved into a more general-purpose service, broadening its scope beyond traditional HPC while keeping its computationally intensive theme. Today it’s just called Computing on Demand and is run more like a cloud, with the ability to create virtual images within individual servers.

David Gelardi, IBM’s vice president of Systems and Technology Group for Worldwide Client Centers, sees their current on demand offering as one of the ways in which a client can take advantage of cloud computing today. “In some sense you could think of Computing on Demand as almost a dress rehearsal for cloud,” he says. “We just didn’t know it.”

Currently, there are six IBM on demand centers strung across the US, Europe and Asia. In most cases customer data is stored locally, so bandwidth and latency dictate that the remote servers not be too remote. Because of that, the centers have tended to migrate to “centers of opportunity.” For example, when the oil & gas industry was booming, IBM maintained a center in Houston. As financial services got hot, the company expanded into London and New York. The newest center is in Japan.

Today the two most active sectors of IBM’s on demand service are the financial services industry and industrial design/automation. In the financial space, the applications that support risk compliance plus the creation and management of new types of financial instruments are the two big drivers right now. In the design space, one of the biggest clients is IBM itself, which periodically rents cycles to do large verification runs on its in-house integrated circuit designs.

The six centers currently house a total of 13,000 processors and 54 terabytes of storage. Customers are offered a choice of hardware: IBM Power CPUs (System p servers) or x86 CPUs (BladeCenter and System x servers) using either Intel Xeon or AMD Opteron processors. On the x86 side, both Linux and Microsoft Windows are supported, while System p users get their choice of Linux or AIX. IBM-built management software, like xCAT (the Extreme Cluster Administration Toolkit), is layered on top for extra functionality.

At one time, IBM offered remote access to Blue Gene technology, but that’s no longer the case. Gelardi says they couldn’t find a broad enough market for the type of specialized technology and support inherent in a rent-a-Blue-Gene offering. The same goes for the Cell processor. He does, however, see the possibility of incorporating IBM mainframes into the on demand model since these represent fairly dear cycles when customers are getting ready to deploy a mainframe application into production.

As far as pricing goes, there are a number of factors that determine cost, including service commitment, technology requirements, and the number of compute cycles. It’s actually quite similar to renting other types of infrastructure, like hotel rooms or cars. If you rent for a day, you get one price; if you rent for a week, you get a better deal; and so on. Similarly, you get charged a premium if you rent the compute equivalent of a Ferrari versus a Ford. Customer flexibility related to the Service Level Agreement (SLA) is also a consideration. For example, if a customer needs 24/7 uptime, that’s going to drive the price up, since spare servers have to be set aside to account for the inevitable hardware failures.
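The article doesn’t disclose IBM’s actual rate card, but the way those three factors combine can be sketched in a few lines of Python. Every rate, tier name and multiplier below is hypothetical, invented purely to illustrate the commitment/hardware/SLA trade-offs described above.

    # Hypothetical illustration of the pricing factors described above. The rates,
    # tiers, and multipliers are invented for this sketch; they are not IBM's
    # actual Computing on Demand prices.
    HOURLY_RATE = {"x86_standard": 0.60, "power_premium": 1.50}       # $/CPU-hour (made up)
    COMMITMENT_DISCOUNT = {"day": 1.00, "week": 0.90, "month": 0.75}  # longer rental, better deal
    SLA_PREMIUM = {"best_effort": 1.00, "24x7": 1.25}                 # spare capacity costs extra

    def estimate_cost(cpus, hours, tier, commitment, sla):
        """Rough cost estimate combining hardware tier, commitment length and SLA."""
        rate = HOURLY_RATE[tier] * COMMITMENT_DISCOUNT[commitment] * SLA_PREMIUM[sla]
        return cpus * hours * rate

    # Example: 500 x86 CPUs for a one-week run (168 hours) with 24/7 uptime.
    print(f"${estimate_cost(500, 168, 'x86_standard', 'week', '24x7'):,.2f}")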

Gelardi noted that the $1 per CPU-hour for Sun Microsystems’ now defunct Network.com utility computing service might have sounded good, but was an unworkable business model. At some level, he probably wishes the Sun model had succeeded, since it would have kept prices up for all the players. “If I could get a dollar per CPU-hour, I could pave the roads with gold bullion,” he jokes.

Although the IBM compute service has grown beyond its rent-a-supercomputer roots, it still represents a fairly typical compute utility service. The plan, though, is to evolve into a more complex model, where customers will be offered four different types of cloud infrastructure: compute clouds, development clouds, test clouds, and storage clouds. The current offering will naturally evolve into the compute cloud, but IBM’s intention is to develop purpose-built infrastructure aimed at the other three functions.

IBM is already working on a proof-of-concept project with a large financial institution that is looking to give up to 10,000 programmers the ability to independently develop a database plus application service engine in the cloud. The idea is for developers to be able to attach their workstations to a virtual machine that represents a much larger system. They will also be able to do a refresh, which resets the virtual machine back to its initial state.
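The article doesn’t describe how IBM implements that refresh, but the behavior amounts to a snapshot-and-revert operation on the developer’s virtual machine. As a rough sketch of the same idea using the libvirt Python bindings, where the connection URI, domain name and snapshot name are placeholders rather than anything from IBM’s service:

    # Illustrative only: a snapshot/revert pattern similar to the "refresh"
    # described above. The URI, domain name and snapshot name are placeholders.
    import libvirt

    conn = libvirt.open("qemu:///system")       # connect to the local hypervisor
    dom = conn.lookupByName("dev-sandbox")      # hypothetical developer VM

    # Capture the VM's initial state once, when the environment is provisioned.
    dom.snapshotCreateXML("<domainsnapshot><name>baseline</name></domainsnapshot>", 0)

    # Later, a "refresh" simply reverts the VM to that baseline snapshot.
    baseline = dom.snapshotLookupByName("baseline", 0)
    dom.revertToSnapshot(baseline, 0)
    conn.close()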

Although IBM doesn’t supply hard numbers about the size of its computing on demand business, Gelardi says they have hundreds of clients that are currently active or have been active through the course of the program. When it started out in 2003, he says the service was generating revenue on the order of millions of dollars per year. At this point, he says, that has risen to tens of millions of dollars. “As we start to bring in the other types of clouds — the test clouds, development clouds, storage clouds — we’ll blow through the next level very quickly.”
