IBM Computing on Demand Evolves Toward Cloud Computing Service

By Michael Feldman

August 19, 2009

As IT budgets have gotten squeezed, more customers are looking at cloud computing as a way to avoid up-front capital costs, while getting access to as many CPU cycles as they need. In response, all the big IT firms are scrambling to develop a cloud computing product and services strategy, and IBM is no exception.

IBM has actually enjoyed a bit of a head start here. The company’s Deep Computing on Demand offering was launched back in June 2003, when everyone thought clouds were just fluffy white things in the sky. The original offering allowed HPC customers to rent remote access to supercomputer-type systems maintained by IBM. The initial infrastructure consisted of a Linux cluster of xSeries servers housed at the company’s Poughkeepsie, New York plant.

One of the first users of the service was GX Technology Corporation, a company that does seismic data imaging for the oil & gas industry. Besides dodging the expense of a cluster build-out, the big advantage of the on demand service was much quicker image processing turnaround, since IBM could provision up to a thousand servers at a time, depending upon job size.

In general, the original Deep Computing on Demand service was designed for HPC applications across government, academia and industry. Over the next six years, IBM’s on demand offering evolved into a more general-purpose service, broadening its scope beyond traditional HPC, but keeping its computationally intensive theme. Today it’s just called Computing on Demand and is run more like a cloud, with the ability to create virtual images within individual servers.

David Gelardi, IBM’s vice president of Systems and Technology Group for Worldwide Client Centers, sees their current on demand offering as one of the ways in which a client can take advantage of cloud computing today. “In some sense you could think of Computing on Demand as almost a dress rehearsal for cloud,” he says. “We just didn’t know it.”

Currently, there are six IBM on demand centers strung across the US, Europe and Asia. In most cases customer data is stored locally, so bandwidth and latency dictate that the remote servers not be too remote. Because of that, the centers have tended to migrate to “centers of opportunity.” For example, when the oil & gas industry was booming, IBM maintained a center in Houston. As financial services got hot, they expanded into London and New York. Their newest center is in Japan.

Today the two most active sectors of IBM’s on demand service are the financial services industry and industrial design/automation. In the financial space, the applications that support risk compliance plus the creation and management of new types of financial instruments are the two big drivers right now. In the design space, one of the biggest clients is IBM itself, which periodically rents cycles to do large verification runs on its in-house integrated circuit designs.

The six centers currently house a total of 13,000 processors and 54 terabytes of storage. Customers are offered a choice of hardware: IBM Power CPUs (System p servers) or x86 CPUs (BladeCenter and System x servers) with either Intel Xeon or AMD Opteron processors. On the x86 side, both Linux and Microsoft Windows are supported, while System p users get their choice of Linux or AIX. IBM-built management software, like xCAT (the Extreme Cluster Administration Toolkit), is layered on top for extra functionality.
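
To make that management layer concrete, here is a minimal sketch of how a provisioning step might be scripted against xCAT. The rpower and nodeset commands are standard xCAT tools, but the node range, image name, and the Python wrapper itself are hypothetical illustrations, not anything IBM has described:

    # Sketch only: assumes xCAT is installed and on the PATH; node and
    # image names below are invented for the example.
    import subprocess

    def provision(noderange, osimage):
        """Point a group of nodes at an OS image, then reboot them."""
        # Associate the nodes with an install image (e.g., a Linux build).
        subprocess.run(["nodeset", noderange, f"osimage={osimage}"], check=True)
        # Power-cycle the nodes so they boot into the new image.
        subprocess.run(["rpower", noderange, "boot"], check=True)

    provision("blade01-blade16", "rhel-compute")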

At one time, IBM offered remote access to Blue Gene technology, but that’s no longer the case. Gelardi says they couldn’t find a broad enough market for the type of specialized technology and support inherent in a rent-a-Blue-Gene offering. The same goes for the Cell processor. He does, however, see the possibility of incorporating IBM mainframes into the on demand model since these represent fairly dear cycles when customers are getting ready to deploy a mainframe application into production.

As far as pricing goes, there are a number of factors that determine cost, including service commitment, technology requirements, and number of compute cycles. It’s actually quite similar to renting other types of infrastructure, like hotel rooms or cars. If you rent for a day, you get one price; for a week, you get a better deal; and so on. Similarly, you get charged a premium if you rent the compute equivalent of a Ferrari versus a Ford. Customer flexibility related to the Service Level Agreement (SLA) is also a consideration. For example, if a customer needs 24/7 uptime, that’s going to drive the price up, since spare servers have to be set aside to account for the inevitable hardware failures.
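
As a rough illustration of how those factors might combine, here is a toy pricing calculator. Every rate, discount and multiplier below is invented for the example; IBM does not publish its rate card:

    # Back-of-the-envelope sketch; all numbers are hypothetical.
    TIER_RATES = {"x86": 0.50, "power": 0.80}   # illustrative $/CPU-hour

    def quote(tier, cpus, hours, sla_247=False):
        rate = TIER_RATES[tier]
        # Longer commitments earn a discount, like weekly car-rental rates.
        if hours >= 24 * 30:
            rate *= 0.80
        elif hours >= 24 * 7:
            rate *= 0.90
        # Guaranteed 24/7 uptime means spares on standby, hence a premium.
        if sla_247:
            rate *= 1.25
        return cpus * hours * rate

    print(quote("power", 256, 24 * 7, sla_247=True))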

Gelardi noted that the $1 per CPU-hour price for Sun Microsystems’ now defunct Network.com utility computing service might have sounded good, but it was an unworkable business model. At some level, he probably wishes the Sun model had succeeded, since it would have kept prices up for all the players. “If I could get a dollar per CPU-hour, I could pave the roads with gold bullion,” he jokes.

Although the IBM compute service has grown beyond its rent-a-supercomputer roots, it still represents a fairly typical compute utility service. The plan, though, is to evolve into a more complex model, where customers will be offered four different types of cloud infrastructure: compute clouds, development clouds, test clouds, and storage clouds. The current offering will naturally evolve into the compute cloud, but IBM’s intention is to develop purpose-built infrastructure aimed at the other three functions.

IBM is already working on a proof-of-concept project with a large financial institution that is looking to give up to 10,000 programmers the ability to independently develop a database plus application service engine in the cloud. The idea is for each developer to attach their workstation to a virtual machine that represents a much larger system. They will also be able to do a refresh, which resets the virtual machine to its initial state.
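
A minimal sketch of that refresh semantics, assuming each developer VM keeps a baseline snapshot taken at provisioning time that a refresh simply reverts to. The class and its state model are hypothetical, meant only to show the reset-to-initial-state behavior:

    # Illustrative only; names and state model are invented.
    import copy

    class DevVM:
        def __init__(self, image_state):
            self._baseline = copy.deepcopy(image_state)  # snapshot at provision time
            self.state = copy.deepcopy(image_state)      # live, mutable state

        def refresh(self):
            """Discard all changes and return to the initial image."""
            self.state = copy.deepcopy(self._baseline)

    vm = DevVM({"db": "empty", "app_engine": "installed"})
    vm.state["db"] = "populated with test data"
    vm.refresh()                        # back to the pristine image
    assert vm.state["db"] == "empty"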

Although IBM doesn’t supply hard numbers about the size of its computing on demand business, Gelardi says they have hundreds of clients that are currently active or have been active through the course of the program. When it started out in 2003, he says the service was generating revenue on the order of millions of dollars per year. At this point, he says, that has risen to tens of millions of dollars. “As we start to bring in the other types of clouds — the test clouds, development clouds, storage clouds — we’ll blow through the next level very quickly.”
