How Big is the HPC Market, Really?

By Addison Snell

August 17, 2007

For a research analyst in any industry, there is one question that is fundamental: How big is the market? More specific questions support the same theme: How fast is the market growing? Who are the important players? What are the breakout opportunities? How big will the market be tomorrow?

Fast growth within HPC has been widely reported for the last several years. Falling hardware prices seem to have caused an increase in spending, among both traditional and non-traditional HPC users.

Just how big has the market gotten? Bigger than you think.

Before the cluster revolution, it was relatively easy to assess the size of the HPC industry. Analysts would simply do what analysts do in any industry: call the suppliers, and add up what was sold. Servers and supercomputers were self-contained items that you could point to and say, “That’s the HPC system I got from [insert vendor here].” Storage and applications were often purchased separately, but those accounted for a smallish portion of the solution.

Today the industry is trickier to size. Interconnects, accelerators, operating systems, and middleware might all come from different sources than the nodes themselves. As node prices fall, software and storage take up more of the budget. Furthermore, there is a proliferation of sales that are not easily tracked, as users buy either through smaller vendors or through non-traditional channels.

To understand the size and growth trends in HPC, we need to look at more than servers. We need to understand a user’s complete HPC ecosystem.

The Role of Demand-Side Research

The straightforward way around this dilemma is to supplement supply-side (vendor) surveys with research into the demand-side (buyer) spending and usage models. Tabor Research is addressing this with two alternating surveys of the HPC user community: a site budget allocation map and an installation census. Each survey has only a few questions, but together they will provide a comprehensive view of how users purchase and configure their systems.

The site budget allocation survey, currently underway, asks users how their budgets are divided, on an approximate percentage basis, between and within categories such as hardware, software, staff, facilities, and services. For example, users will specify what percent of their budget goes to hardware, and within the hardware, what percent goes to servers, storage, networks, and so on.

With this data, plus some set points from supply-side research, we will be able to model that most fundamental piece of information: the size of the overall HPC market. We'll also learn the extent to which facility issues, like space and power consumption, can influence hardware and software budgets, and we can compare the amount spent externally on third-party software to the amount paid internally to staff members to write and maintain internally developed codes.
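The arithmetic behind this kind of bottom-up model is straightforward to sketch: if supply-side research can count revenue for one segment directly (say, servers), and the demand-side survey reveals what fraction of users' external spend that segment represents, dividing the one by the other yields an estimate of the whole market. The sketch below uses made-up numbers; neither the figures nor the function come from the article or from Tabor Research.

```python
# Sketch of a bottom-up market-sizing model. All numbers are
# illustrative placeholders, not actual survey or vendor data.

def total_market_from_anchor(anchor_revenue, anchor_share):
    """Scale a directly counted segment up to the whole market.

    anchor_revenue: revenue of a segment vendors can report directly ($B)
    anchor_share:   that segment's fraction of users' external spend (0-1)
    """
    return anchor_revenue / anchor_share

# Hypothetical supply-side set point: $10B of server revenue counted
# by calling vendors and adding up what was sold.
server_revenue = 10.0

# Hypothetical demand-side finding: servers are 40% of external spend.
server_share = 0.40

market = total_market_from_anchor(server_revenue, server_share)
print(f"Estimated total external HPC spend: ${market:.1f}B")
```

Note how the result moves with the survey data: if servers turn out to be less than half of external spend, the total market is more than double the server number alone, which is exactly the possibility the early survey returns raise.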

The HPC user installation census, which will begin in September and run for two months, will ask users what they currently have installed and how it is configured. This simple information will allow us to understand average configurations, upgrade patterns, and typical system life spans.

Sizing the New HPC

Early returns to the budget map survey indicate that users might spend more externally on software, service, and non-server hardware than they spend on the servers themselves. If that data holds true for the remaining surveys, it would mean the true size of the HPC industry could be more than double what was previously counted!

With our shift to High Productivity Computing, Tabor Research is aiming to size the entire HPC ecosystem. The external spending – money that users spend outside the organization – constitutes the HPC market. This will include servers, storage, interconnects, applications, middleware, and services of all types. In addition, we will examine portions of internal spending in our total available market analysis. For example, spending on staff for internally developed applications could help determine the total potential market for ISVs in that space.

The Challenges of Demand-Side Analysis

The reason analysts start with supplier data is simple: there are few suppliers and many users. No matter how many surveys we run, we won't reach everyone who has bought an HPC system. The key to good market sizing, therefore, is to reach enough users that we can confidently model the market against the data we get from suppliers. The participation level in the various surveys is critical.

The budget map survey is still open, and it is important to capture as many user data points as possible, from a diverse group of users. Contact information is required, but individual responses remain anonymous. All respondents will receive a summary report, which will allow them to compare themselves to their industry peers. Results from our research will also be returned to the user community in the form of articles in HPCwire, public reports posted on the Tabor Research web site, and insights on the Tabor research blog.

Tabor Research is also seeking users of all types and sizes to form an HPC User Views Advisory Council. In exchange for regular participation in demand-side research (about one survey per month), Advisory Council members gain free access to Tabor Research data and reports, inquiry time with analysts, and invitations to exclusive events.

The key to good HPC market intelligence is demand-side research. And the key to the demand-side research is user participation. Tabor Research was founded to give a voice to the user community. Your participation in our surveys and HPC User Views Advisory Council will shape the course of development in the HPC industry.

There are a lot of you out there making important spending decisions. More than people might think. And the industry wants to hear from you.

—–

Addison Snell is the VP/GM of Tabor Research. To participate in the site budget allocation survey or the HPC User Views Advisory Council, or to get more information, visit www.taborresearch.com or email Addison at Addison@TaborResearch.com.
