Are Reports of Mainframe Death Greatly Exaggerated?

By Nicole Hemsoth

April 1, 2011

If you look back at the commentary that began when the possibilities of clouds were first becoming clear, one of the first alarm bells sounded was the question of what this would mean for mainframes. While there is still no telling what the future holds for the data center, some organizations are trying to put their finger on the pulse of computing to see what IT managers are planning.

AFCOM, an association for data center management professionals, released a report this week entitled “The State of the Data Center” to better understand how data centers are adapting to a number of changes in their industry, including the growing rates of cloud adoption.

In addition to providing insights about disaster recovery, space, energy, and security, the report, which is based on survey results from 358 data center managers, concluded that there are threats on the horizon for the trusty mainframe. While the mainframe isn't likely to go down without a long fight, and for some uses may never be displaced at all, those in the mainframe business might find work a little harder to come by in the next several years if AFCOM's crystal ball is correct.

We asked Jill Yaoz, CEO of AFCOM, how the cloud computing movement is shaping this shift away from mainframes, and to what extent it is really happening versus merely being noted as a possibility. Based on the survey results, she says that "last year only 14.9 percent of data centers had implemented the technology but today that percentage has grown to 36.6 percent, with another 35.1 percent seriously considering it."

As the AFCOM report indicates, "While historically one of the most critical elements of any data center, today mainframe usage continues to shrink. While we predict mainframes will exist forever in some capacity, their prevalence has been severely diminished."

In the organization's view, "cloud computing will continue on this trajectory for the next five years, with 80 to 90 percent of all data centers adopting some form of the cloud during that period."

In some cases, cloud computing is replacing the mainframe because of price concerns. As Yaoz stated, "companies are starting to move certain applications off the mainframe and onto servers, especially because of server virtualization that can save companies significant money."

She notes, however, that there are "other applications that absolutely require the capability of a mainframe and its high level of processing and computing power. So in that regard, cloud computing is not affecting the decline of mainframe usage because the applications that run on the cloud are more server-based."

In her opinion, in order to move high performance computing applications to the cloud, "the cloud provider would have to have a mainframe with that level of processing power, which is not really possible to do effectively or efficiently."

The AFCOM figures differ from a report from CA Technologies last year, which suggested that 79 percent of IT organizations considered mainframes to be a key part of their cloud computing strategy. In that survey, 82 percent of respondents said they planned to use their mainframe in the future either as much as or more than they currently do.

In the CA survey, 55 percent of respondents said they kept mission-critical systems on the mainframe for reliability reasons. Additionally, just under half of those surveyed felt staying on the legacy platform was the most cost-effective option. Remember, however, that this survey was published by CA Technologies, which only a couple of years earlier had set forth a major push for its Mainframe 2.0 strategy to modernize mainframes.

The debate about mainframes and the role of cloud computing extends to questions about what the real difference is and what makes each attractive. Many of those in the mainframe game might contend that there is nothing new about clouds, and really nothing clouds are capable of that mainframes can't do.

Jon Toigo, CEO of Toigo Partners International, a mainframe consulting company, told Computerworld this week that "a mainframe is a cloud" because it's "allocated and de-allocated on demand and made available within a company with security and management controls…all of that already exists in a mainframe."

However, this brings us back to the question of definitions. If we consider cloud computing's value proposition to lie in dynamic self-service provisioning and easy on-and-off at the end user's whim, then mainframes really don't have the advantage, at least for users who can make good, quick use of the resources for their particular applications.

Most mainframe systems are kept behind lock and key, with dedicated guardians keeping track of their operations. While self-provisioning is absolutely possible with some custom tweaks, it is not something that generally happens.
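The self-service model at the heart of that value proposition can be sketched in miniature: users grant themselves capacity from a shared pool and hand it back when done, with no operator in the loop. The following is a hypothetical toy model, not any vendor's API:

```python
# Toy model of cloud-style self-service provisioning: the user allocates
# and releases capacity directly, with no administrator or ticket queue.
class ResourcePool:
    def __init__(self, capacity):
        self.capacity = capacity   # total units available in the pool
        self.in_use = {}           # maps lease id -> units currently held

    def provision(self, user, units):
        """Grant capacity immediately if it is free; fail fast otherwise."""
        if units > self.capacity - sum(self.in_use.values()):
            raise RuntimeError("pool exhausted")
        lease = f"{user}-{len(self.in_use)}"
        self.in_use[lease] = units
        return lease

    def release(self, lease):
        """The user hands capacity back as soon as the job finishes."""
        self.in_use.pop(lease)


pool = ResourcePool(capacity=100)
lease = pool.provision("analytics-team", 40)   # on, at the user's whim
pool.release(lease)                            # and off again just as easily
```

The contrast with the lock-and-key mainframe model is the absence of a gatekeeper: nothing above requires an operator's approval between request and grant.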

While some companies are still pushing their mainframe strategies forward to include cloud computing (IBM and its zEnterprise, for instance, which allows for a "hybrid" approach to mainframes and can also be configured via Tivoli to allow user self-provisioning), there could be other barriers that go beyond hardware or software functionality.

For instance, the mainframe (and computing in general, until recently) has carried licensing costs bound to the physical hardware for its duration. Additionally, distributed software licensing costs can be very high, especially for companies whose IT policy is to "bring on capacity to ensure peak needs are met" rather than to scale resources dynamically based on actual demand.
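The arithmetic behind that policy difference is straightforward to sketch. With entirely hypothetical figures, suppose demand sits at 30 units for most of the day but peaks at 100, and each provisioned unit carries a license fee:

```python
# Hypothetical figures: compare licensing for peak-provisioned capacity
# against capacity scaled (pro rata) to actual demand.
license_fee_per_unit = 500               # fee per provisioned unit
hourly_demand = [30] * 20 + [100] * 4    # one day: 20 quiet hours, 4 peak hours

# "Bring on capacity to ensure peak needs are met": license for the peak,
# and pay for that capacity around the clock.
peak_provisioned_cost = max(hourly_demand) * license_fee_per_unit

# Dynamically scalable: effective cost tracks average utilization instead.
average_demand = sum(hourly_demand) / len(hourly_demand)
demand_scaled_cost = average_demand * license_fee_per_unit

print(peak_provisioned_cost)   # 50000
```

Under these made-up numbers, peak provisioning costs more than twice as much as demand-scaled licensing, which is the gap the quoted IT policy leaves on the table.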

The release of the CA survey caused a stir and reawakened the debate about mainframe health, just as the AFCOM survey did this week. Surveys like these tend to put folks on edge on either side and invigorate fresh questions about true capabilities.
