As of 2017, 92 of the world’s top 100 banks used mainframes. Today, mainframes handle 87% of all credit card transactions, and 71% of Fortune 500 companies use mainframes for at least some of their mission-critical application workloads.1
Clearly, mainframe servers remain one of the most popular high-performance computing (HPC) platforms available. The question is: why?
Though the answer involves many attributes and advantages, the mainframe remains the foundation of modern business in banking, finance, healthcare, insurance, utilities, government, and a multitude of other public and private enterprises for three overarching reasons: familiarity, efficiency, and, perhaps most importantly, adaptability.
When it comes to familiarity, mainframes have a leg – or processor – up on other HPC architectures. Until the mid-1990s, mainframes provided essentially the only acceptable means of handling the data processing requirements of large organizations. These requirements were then (and are often now) based on running massive and complex applications such as payroll and general ledger processing.
The mainframe owes much of its popularity and longevity to its inherent reliability and stability, a result of careful and steady technological advances since the introduction of the System/360 in 1964. No other computer architecture can claim as much continuous evolutionary improvement, while maintaining compatibility with previous releases.2
So, mainframes were here first, and they have been reluctant to give up their privileged position. But being first, and most familiar, is only part of the story. Mainframes are also extremely efficient at what they do. Consider this:
Mainframes handle 68% of the world’s production application workloads, yet they account for only 6% of IT costs.3
Mainframes, which settled in at roughly the size of a refrigerator, come with a substantial initial capital expenditure (CapEx). In a fascinating twist of irony, this sizeable CapEx hurdle, rather than leading to the mainframe's extinction as some predicted, may actually have driven its survival. Over the years, organizations for which the CapEx hurdle was too high, or that lacked enough of a certain type of application workload, turned to other emerging compute architectures.
But consider an electric utility, for example. A city without power for even a minute is a calamity. Managing a portion of the U.S. power grid involves processing thousands of concurrent data streams at the highest possible speed, with absolutely no margin for downtime. This is not an environment where initial CapEx is an insurmountable deterrent.
Think about how important the effective processing of billions of credit card transactions is to the global economy. In some cases, security breaches involving the loss of sensitive customer data have crippled entire corporations. Certain workloads truly are mission critical. They demand the highest performance, security, and reliability available. Once the initial CapEx hurdle is no longer an issue, mainframes suddenly offer many advantages. Without the added enclosures, networking, and software layered between distributed processors, mainframes can be extremely fast, secure, and efficient.
So, for users, the high CapEx hurdle pushed mainframes in a certain direction, and mainframe vendors responded. For this particular compute platform, the demand was for the lowest latency, most powerful data protection, and greatest resiliency possible. And the basic system architecture lent itself very well to satisfying these requirements.
This natural interplay between market and vendor brings us to the third leg of our answer to the question of why mainframes have remained attractive: adaptability. The basic mainframe compute niche has remained fairly stable for decades, but the surrounding technology, business, and marketplace environments have changed dramatically, and continue to do so. These changes have fueled ongoing development in the mainframe platform itself, and in complementary systems and software. These days software comes in containers, and clouds do much more than rain on parades. Thanks to their familiarity, their extraordinary efficiency, and the high cost of replacing them as each new wave of change washed by, mainframes have had very strong motivations to adapt.
Take, for example, the concept of "pervasive encryption," which is essentially the goal of encrypting data along its entire journey, both in flight and at rest. Achieving it, however, requires help and cooperation from surrounding systems such as storage and networking. Thus the new IBM DS8900F storage arrays responded to the requirement and now help enable cyber resiliency, all the way into the cloud and back.
IBM Spectrum Computing solutions such as Symphony and LSF, market-leading HPC resource management and job scheduling tools, responded and adapted too. They now help enable the multicloud environments within which modern mainframes operate. Once accounts and policies are established, these tools can orchestrate off-premises, cloud-based resources as needed. The mainframe becomes the local center of what is essentially an unlimited web of compute capabilities and data flows.
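To make that concrete: from a user's perspective, submitting work through LSF looks the same whether the hosts behind a queue are on premises or provisioned in a cloud. The sketch below is a minimal, hypothetical LSF job script; the queue name, slot count, and workload script are illustrative placeholders, not a definitive configuration.

```shell
#!/bin/sh
# Hypothetical LSF job script; names below are illustrative placeholders.
#BSUB -J risk_model          # job name
#BSUB -q cloud_burst         # hypothetical queue that site policy maps to cloud hosts
#BSUB -n 16                  # number of job slots requested
#BSUB -o risk_model.%J.out   # output file; %J expands to the job ID
./run_risk_model             # the actual workload (placeholder)
```

Submitted with `bsub < jobscript.sh`, the scheduler decides where the job runs; if the site's policies direct that queue to off-premises capacity, the cloud resources are used transparently.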
This is the mainframe in the 21st century. Familiar, extremely efficient, and highly adaptable. Wherever mission-critical workloads demand the highest performance, security, and availability, that’s where you’ll find mainframes.
1 Syncsort: 9 mainframe statistics that may surprise you, June 2018 https://blog.syncsort.com/2018/06/mainframe/9-mainframe-statistics/
2 IBM Knowledge Center: Who Uses Mainframes and Why Do They Do It? https://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zmainframe/zconc_whousesmf.htm
3 Syncsort: 9 mainframe statistics that may surprise you, June 2018 https://blog.syncsort.com/2018/06/mainframe/9-mainframe-statistics/