HPE’s Formula for HPC Market Leadership

By Steve Conway

July 18, 2016


IDC OPINION

Hewlett Packard Enterprise (HPE) emerged from 2015 as the clear revenue leader in the expanding worldwide market for high-performance computing (HPC) servers. The company captured a 35.9% share of this $11.4 billion market and an even larger 41.1% share of its $3.3 billion supercomputers segment (HPC server systems selling for more than $500,000 each), including the largest share (31%) of the systems on the November 2015 Top500 list. In both cases, HPE’s share was more than double that of the company’s nearest rival. HPE captured the leadership position with a strategy combining a long-term corporate commitment to HPC, adequate scale to address all segments of the market, a strong embrace of open standards, expanded HPC hardware and software ecosystem innovation (thanks in no small part to HP Labs), and strong, deepening expertise in HPC-reliant domains. Now, HPE is upping the ante with a comprehensive portfolio of purpose-built HPE Apollo platforms and related solutions aimed squarely at the rapid convergence of Big Compute, Big Data, and the Internet of Things (IoT), including industry-specific requirements. Initial turnkey solutions target the economically important financial services, oil and gas (energy), healthcare, life science, and manufacturing sectors. This white paper examines HPE’s rise to HPC market leadership against a background of escalating user requirements that present new opportunities and challenges for vendors.

HPC/HPDA MARKET DYNAMICS: RAPID GROWTH AND CHANGE

The global market for HPC server systems has been one of the fastest-growing IT markets. HPC server revenue more than doubled from $4.8 billion in 2001 to $11.4 billion in 2015 and is en route to an IDC-predicted value of $15.1 billion in 2020 (a CAGR of 5.9% versus 3.6% for the worldwide server market as a whole). When software, storage, and service are added to the mix, the 2020 forecast for the HPC market effectively doubles.
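As a quick check on the arithmetic, the cited growth rate follows directly from the endpoint revenues. A minimal Python sketch (the figures are the IDC values quoted above; the helper function is defined here purely for illustration):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two revenue figures."""
    return (end_value / start_value) ** (1 / years) - 1

# IDC figures cited above: $11.4B in 2015, forecast to reach $15.1B in 2020.
print(f"2015-2020 forecast: {cagr(11.4, 15.1, 5):.1%}")   # ~5.8%, matching the ~5.9% cited
# The 2001-2015 run, $4.8B to $11.4B, works out to ~6.4% a year,
# i.e., revenue more than doubled over the period.
print(f"2001-2015 history:  {cagr(4.8, 11.4, 14):.1%}")
```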

During the past 15 years, clusters and other server systems based on x86 architecture processors have been primarily responsible for propelling rapid growth of the global HPC server market. In the five-year period from 2009 to 2014, the number of x86 processors sold annually into the HPC market grew 78.7%, from 1.7 million to 3.0 million. In 2014, x86-based systems accounted for 93.1% of worldwide HPC server revenue. The majority of these x86 processors have come from Intel. Further:

  • Historically, HPC market growth has been driven by expanding use at established sites, targeting higher performance and density. This has been heavily augmented by successive waves of new users, each presenting vendors with new requirements and challenges.
  • The new edition of “A Strategy for American Innovation,” made public by President Obama on October 21, 2015, named HPC as one of the top investment priorities for growing the U.S. economy. A key role for HPC cited in the strategy is to address “the rise of extremely large data sets and attendant computational challenges.”

Big Compute

HPC began in the 1960s as a niche market for government- and university-based researchers, primarily for compute-intensive floating point modeling and simulation (M&S) of physical and quasi-physical phenomena. But in less than a decade, HPC M&S began penetrating tier 1 commercial firms as a game changer for accelerating product development and competitiveness. In a pioneering IDC study for the Washington, D.C.–based Council on Competitiveness, 97% of companies that had adopted HPC said they could no longer compete or survive without it.

The arrival of standards-based, commercial-grade clusters in 2001–2002 made HPC M&S affordable for many small and medium-sized enterprises (SMEs) and start-ups. In 2015, 113,000 HPC systems were sold at prices below $100,000.

The spread of HPC into private sector firms of all sizes has brought with it a need for vendors to begin developing industry-specific, and even workload-specific, solutions. Today, the ability to create purpose-built solutions for economically important HPC domains and use cases is becoming crucial for large-scale vendors that want to address all segments of the HPC market.

Big Data

Historically, the HPC market has included Big Data workloads of two main types. First, some M&S jobs have been data intensive (i.e., they have involved much more data processing than computation). Second, a few HPC domains — notably the intelligence community and the financial services industry (FSI) — have long relied heavily on integer-based analytics (as opposed to floating point–based M&S). The back offices (“quants”) of large investment banks began using HPC in the late 1980s, especially for pricing exotic instruments, portfolio optimization, and firmwide risk management (high-frequency trading [HFT] was recently added to this mix). Today, a growing number of HPC sites have both M&S and analytics workloads.
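To give a flavor of these quant workloads, the sketch below prices one simple “exotic,” an arithmetic-average Asian call, by Monte Carlo simulation in Python. The instrument choice and all parameters are illustrative assumptions, not drawn from any particular bank’s practice:

```python
import numpy as np

def price_asian_call(s0, strike, rate, vol, horizon, n_steps, n_paths, seed=0):
    """Monte Carlo price of an arithmetic-average Asian call option."""
    rng = np.random.default_rng(seed)
    dt = horizon / n_steps
    # Simulate geometric Brownian motion paths of the underlying price.
    shocks = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((rate - 0.5 * vol**2) * dt
                          + vol * np.sqrt(dt) * shocks, axis=1)
    paths = s0 * np.exp(log_paths)
    # The payoff depends on the average price along each path, not just the endpoint.
    payoff = np.maximum(paths.mean(axis=1) - strike, 0.0)
    return np.exp(-rate * horizon) * payoff.mean()

print(price_asian_call(s0=100, strike=100, rate=0.02, vol=0.3,
                       horizon=1.0, n_steps=252, n_paths=200_000))
```

Because every simulated path is independent, pricing and firmwide risk runs of this kind parallelize almost perfectly across the nodes of an HPC cluster, which is one reason these desks adopted HPC so early.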

Much newer is the trend for large commercial firms to move up to HPC to tackle mission-critical analytics challenges that are too complex and time critical for the firms’ enterprise server technology to handle alone. In these cases, HPC servers are typically inserted directly into the live data pipeline. The drivers here are competitive forces and the opportunity to save money (PayPal has saved over $700 million by migrating to HPC).

In 2012, IDC launched the High-Performance Data Analysis (HPDA) service, which tracks both historical data-intensive computing and newer advanced analytics in the commercial sector. Economically important use cases include fraud and anomaly detection, business intelligence, affinity marketing, and personalized (“precision”) medicine. Arguably, no IT market has seen a more powerful Big Data explosion than the HPC/HPDA market. An important consequence of this explosion is the need for users to adopt advanced data analytics methods (Hadoop, Spark, etc.). Even more important is the need to elevate storage capacities and capabilities, such as by adding object storage and software-defined storage, to enable effective scaling.
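To make the analytics methods concrete, the following minimal PySpark sketch implements a simplified version of the fraud and anomaly detection use case, flagging transactions that deviate sharply from an account’s history. The input path, column names, and three-sigma threshold are illustrative assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fraud-screen").getOrCreate()

# Hypothetical transaction feed with account_id and amount columns.
txns = spark.read.parquet("s3://example-bucket/transactions/")

# Per-account mean and standard deviation of historical transaction amounts.
stats = txns.groupBy("account_id").agg(
    F.mean("amount").alias("mu"),
    F.stddev("amount").alias("sigma"),
)

# Flag anything more than three standard deviations above the account's norm.
flagged = (
    txns.join(stats, "account_id")
        .where(F.col("amount") > F.col("mu") + 3 * F.col("sigma"))
)
flagged.write.mode("overwrite").parquet("s3://example-bucket/flagged/")
```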

Internet of Things

HPC will almost certainly perform two key functions in the nascent Internet of Things market:

  • HPC systems will provide dense nodes needed for local, exceedingly data-intensive, real-time use cases such as managing urban traffic with a mix of human-driven, driverless, and semi-driverless vehicles. Urban traffic management is already an important HPC application around the world, and major automakers are using HPC heavily to develop tomorrow’s driverless vehicles, along with the real-time IoT infrastructure that will be needed to support their use.
  • HPC will be needed for the functional and wellness management of large portions of the global IoT network, such as national-level IoT networks (China’s national IoT plan already calls for HPC management). HPC will also be needed for IoT data tracking and aggregation, especially in network-edge environments.

TECHNICAL AND OTHER CHALLENGES

Today, HPC system developers and users face an array of interrelated challenges, including:

  • Developing software capable of efficiently exploiting HPC hardware systems
  • Burgeoning system sizes and complexity
  • Heterogeneity (CPUs, accelerators)
  • New environments (e.g., public clouds)
  • Reliability/resiliency requirements
  • A mix of compute- and data-intensive workloads
  • Energy efficiency
  • An influx of small and medium-sized businesses (SMBs) and other commercial users that want “ease of everything”
  • The movement from synchronous applications to asynchronous workflows (see the sketch following this list)
  • A serious shortage of qualified job applicants, especially programmers, systems administrators, and people able to bridge the gap between HPC and domain science/engineering/analytics

Together, these challenges represent a daunting agenda for future system development.
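One of these challenges, the shift from synchronous applications to asynchronous workflows, is worth a concrete illustration: in a synchronous application, each simulation step blocks until its analysis and I/O complete, whereas an asynchronous workflow overlaps the two. A minimal Python sketch of the pattern, with placeholder functions standing in for real application code:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_step(step: int) -> bytes:
    """Placeholder for one compute-intensive simulation step."""
    return step.to_bytes(4, "little")

def analyze_and_store(result: bytes) -> None:
    """Placeholder for data-intensive analysis and I/O."""
    pass

# Synchronous version: each step waits for analysis/I/O before the next compute step.
#   for step in range(100):
#       analyze_and_store(simulate_step(step))

# Asynchronous version: analysis/I/O for step N overlaps with computing step N+1.
with ThreadPoolExecutor(max_workers=4) as pool:
    for step in range(100):
        result = simulate_step(step)
        pool.submit(analyze_and_store, result)  # hand off without blocking the loop
```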

HPE’S STRATEGY FOR MARKET LEADERSHIP

As noted previously, HPE exited 2015 as the clear revenue leader in the expanding worldwide market for all HPC servers, with a 35.9% share of this $11.4 billion market. Also in 2015, HPE captured an even larger share of 41.1% of the $3.3 billion supercomputers segment for HPC server systems selling for more than $500,000 each.

HPE has been ramping up efforts to tackle HPC system and storage challenges head-on. For example, HPE is addressing the Big Compute–Big Data convergence with an expanding lineup of server and storage offerings for the related HPC, deep/machine learning, and nascent Internet of Things markets. The offerings include tailored solutions for financial services, oil and gas, life sciences, and manufacturing.

Concerted R&D Initiatives for HPC

HPE leverages the company’s overall R&D on behalf of the HPC market and also invests substantial R&D funding specifically in HPC. The overarching goals are to democratize HPC and to make it easier to access, use, and maintain.

Domain Expertise and Centers of Excellence

  • A prerequisite for these purpose-built solutions is HPE’s deep and expanding domain expertise. The company has been investing in industry consortia and collaborating with leading academic institutions and other vendors to develop benchmarks and best practices at the intersection of HPC and Big Data.
  • HPE and Intel jointly created two HPC centers of excellence (CoEs), one in Grenoble, France, and the other in Houston, Texas. The companies have staffed the centers with deep engineering expertise, vertical industry knowledge, and expertise in performance optimization and code modernization. The centers focus on benchmarking and proof-of-concept work.
  • HPE and Intel created another CoE at Teratec in Bruyères-le-Châtel, France. Teratec, “a European pole of competence in high-performance digital simulation,” brings together over 80 companies, laboratories and research centers, universities, and engineering schools. The CoE’s roles are to showcase new technologies, conduct proofs of concept and performance benchmarks, and develop educational white papers and reference architectures for HPC and Big Data analytics solutions.
  • HPE is actively engaged in the National Strategic Computing Initiative (NSCI) and is helping to bring exascale computing capabilities to the United States, shaping the future of the country’s technology direction. To date, HPE has participated in NSCI activities including a congressional panel sponsored by ITIF, OSTP workshops, and meetings with NSCI stakeholders.

Expanding Product Portfolio for Big Compute, Big Data, and IoT

HPE’s new HPC solutions feature innovations in systems design, workload optimization, density optimization, open source software, and visualization (including software-defined visualization). Together, the innovations aim to accelerate time to value in machine learning/deep learning, energy exploration, mechanical design, financial trading and regulatory compliance, and other major HPC domains.

HPE’s HPC solutions are designed to be customer centric and technology agnostic. The limited scope of this white paper does not permit fuller descriptions, but even a simple list should give a feel for the breadth and depth of HPE’s product set, which consists of hardware and software platforms, horizontal solutions, and more. The product portfolio includes the following offerings:

  • The HPE Apollo 6500 deep learning platform, which has a 4U chassis with two server nodes that can hold up to 8 NVIDIA GPU cards or Intel Xeon Phi cards per node
  • The HPE Apollo 4520 System, a high-density, high-scalability, high-resiliency storage server that can be outfitted with Intel Enterprise Edition for Lustre software or open source Lustre, as well as other parallel file systems
  • The HPE Trade and Match Server, optimized for superior HFT performance
  • The HPE Risk Compliant Archive, for regulatory compliance management in financial services
  • The HPE Moonshot Trader Workstation, designed to maximize trader experience and productivity while lowering TCO
  • The HPE Apollo 2000 System, which is highly scalable for HPC workloads, such as ANSYS for CAE, or for traditional IT workloads
  • The HPE Apollo 6000 System, designed for HPC workloads at rack scale
  • The new HPE Edgeline IoT Systems, resulting from an HPE-Intel partnership to help deliver proven open solutions for the IoT market (the HPE Edgeline IoT Systems 10 and 20, as well as the HPE Edgeline 1000 and 4000 systems, sit at the network edge and are designed to enable customers to securely aggregate and analyze data in real time and to control devices and things)
  • The HPE Apollo 8000, a warm water–cooled supercomputer that delivers over 250 peak teraflops per rack while targeting high efficiency and minimal energy consumption
  • The HPE Big Data portfolio, which includes the purpose-built HPE Apollo 4510 System, tailored for object storage at petabyte scale; the HPE Apollo 4530 and 4200 Systems, aimed at Hadoop and other Big Data analytics; and the HPE Integrity Superdome X System, designed for workloads benefiting from in-memory computing and real-time analytics

In addition to these purpose-built platforms, HPE has a comprehensive portfolio of general-purpose compute platforms in the company’s HPE ProLiant racks and towers and HPE BladeSystem series. HPE’s comprehensive software portfolio includes HPE’s Core HPC Software Stack, Insight CMU, Cluster Test, and HPE OneView.

HPE’s overall market leadership is fueled by close collaboration and deep relationships with the company’s technology partners. The HPE portfolio builds on industry-leading technologies from partners including (but not limited to) AMD, Intel, Mellanox, NVIDIA, Seagate, ISVs, and the open source community.

OPPORTUNITIES AND CHALLENGES

Opportunities

  • Exploit IDC’s forecast market growth, HPE’s market leader position, and the escalating convergence of Big Compute, Big Data, and IoT. Being the OEM revenue leader in a robustly growing market presents opportunities for further growth, especially as HPC competencies increasingly drive the convergence of advanced computation and advanced analytics and become indispensable for more large commercial firms and the nascent IoT market.
  • Maintain HPE’s exceptional customer loyalty. IDC studies have shown that with regard to customer loyalty, most HPC vendors’ ratings are bunched closely together but HPE rises above the crowd to form a class of its own. HPE has an opportunity to bank on this valuable asset as the company works to expand its HPC market leadership.
  • Promote HPE’s R&D innovation for HPC more assertively. In IDC’s opinion, HPE has been overly modest about its R&D investments and innovations benefiting the HPC community. The company has an opportunity to tell this story more assertively in order to receive proper recognition for these contributions.

Challenges

  • Ensure that HPE’s senior management publicly and consistently expresses the company’s long-term commitment to the HPC market. HPE’s senior management has taken important steps to demonstrate this commitment, such as creating a dedicated business unit focused on HPC, Big Data, and IoT with a strong leadership team; forging an HPC alliance with Intel; engaging deeply in the NSCI; and funding several new, long-term R&D initiatives and go-to-market programs.
  • Extend HPE’s HPC customer base beyond the current large, loyal contingent. Note that HPE has picked up a lot of former IBM customers during the IBM-Lenovo transition.
  • Grasp and exploit the dynamics of the convergence of Big Compute, Big Data, and IoT.

CONCLUSION

HPE is now the clear revenue leader in the fast-growing worldwide HPC server market, valued at $11.4 billion in 2015, and in the $3.3 billion supercomputers segment of the market. The company’s winning strategy started with a long-term commitment to the HPC market, both in its own right and as an effective lever for opening up major new opportunities for Big Compute, Big Data, and IoT, all of which face crucial challenges that need to be addressed by the global HPC community. HPE’s product portfolio is comprehensive enough to cover the vast majority of HPC user/buyer requirements, and the newly announced products move that collection substantially forward. HPE has also been growing its domain-specific solutions for economically important vertical segments, along with the domain expertise that enables peer-to-peer sales and support.

Finally, HPE has been ramping up R&D for its HPC strategy, from new technology initiatives to centers of excellence where promising new technologies and ideas can be evaluated and benchmarked.

IDC predicts that by 2020, the global market for HPC servers will exceed $15 billion and that the whole HPC market (servers, storage, software, and service) will be worth about double that amount.

We believe that HPE’s existing leadership status, combined with the company’s long-term commitment and strategy for this market, positions HPE well to exploit our forecast growth for the interrelated Big Compute, Big Data, and IoT markets. For HPE, as for any major vendor, it will be challenging to anticipate and respond effectively to the increasingly complex dynamics of the HPC market, but the company has so far shown an impressive ability to do so.

About IDC

International Data Corporation (IDC) is the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets. IDC helps IT professionals, business executives, and the investment community make fact-based decisions on technology purchases and business strategy. More than 1,100 IDC analysts provide global, regional, and local expertise on technology and industry opportunities and trends in over 110 countries worldwide. For 50 years, IDC has provided strategic insights to help our clients achieve their key business objectives. IDC is a subsidiary of IDG, the world’s leading technology media, research, and events company.

Global Headquarters
5 Speen Street
Framingham, MA 01701
USA
508.872.8200
Twitter: @IDC
idc-community.com
www.idc.com

Copyright Notice

External Publication of IDC Information and Data — Any IDC information that is to be used in advertising, press releases, or promotional materials requires prior written approval from the appropriate IDC Vice President or Country Manager. A draft of the proposed document should accompany any such request. IDC reserves the right to deny approval of external usage for any reason.

Copyright 2016 IDC. Reproduction without written permission is completely forbidden.
