IDC: Searching for Dark Energy in the HPC Universe

By Bob Sorensen

October 20, 2016

Editor’s Note: In this guest commentary, Bob Sorensen, research vice president in IDC’s High Performance Computing group, argues that high performance computing is undergoing basic changes in how we should think about it and define it. Advanced scale computing was once mostly the domain of government labs and large academic research centers. Today, the HPC universe is expanding, to use Sorensen’s metaphor, and many more forces are at play, becoming visible, and demanding to be taken into account. This isn’t a new idea (see IDC: The Changing Face of HPC, HPCwire) but it is one that is increasingly crystallizing and being embraced. No doubt we will hear more at the annual IDC HPC Update Breakfast held at SC16 in November. – John Russell

The latest scientific evidence indicates that the universe is expanding at an accelerating rate and that so-called dark energy is the driver behind this growth. Even though it comprises roughly two-thirds of the universe, not much is known about dark energy because it cannot be directly observed. The same idea applies to HPC. Simply put, the HPC universe is expanding in ways that are not directly observed under traditional HPC definitions, and new definitions may be needed to accurately capture this phenomenon.

Potential dark energy in the HPC universe encompasses a number of distinct emerging elements, each of which in its own way adds to the collective technology and market dynamics of the HPC sector. They include:

  • New hardware to support deep learning applications that, with their emphasis on high computational capability, large memory capacity, and strong interconnect schemes, can rightly be called HPC systems. Examples here include the NVIDIA DGX-1 supercomputer in a box, the Facebook Big Sur rack, and the Google Tensor Processing Unit. Even Intel is moving into the field with its recent acquisition of Nervana, a cloud-based deep learning provider that will demonstrate next year its custom-designed ASIC, which includes 32 GB of on-chip storage and six bi-directional high-bandwidth links. IDC projects that global spending on cognitive systems – of which deep learning is an integral component – will reach nearly $31.3 billion in 2019 with a five-year compound annual growth rate (CAGR) of 55% (see the arithmetic sketch following this list). For perspective, IDC estimates that the total HPC server market that same year will be about $14 billion.
  • HPC-in-the-cloud offerings that increasingly provide HPC capabilities outside traditional HPC vendor/user relationships, such as those from AWS, Google, and Microsoft Azure. These HPC-in-the-cloud providers offer both the hardware and software needed to attract traditional HPC users to their services, and many expect that once the pricing models for these services settle down, more and more traditional HPC workloads will be pushed out into a cloud environment. Many see this not as a zero-sum game but as a way to grow the total HPC market. In addition, as many traditional HPC users look to cloud-based computation as a way to complement their in-house capabilities, vendors will need to offer seamless application migration between cloud and on-prem hardware or risk finding themselves locked out of the market. Some project that cloud-based HPC could grow to over $10 billion by 2020.
  • New big data applications that run in non-traditional HPC environments but use HPC hardware, such as in the finance or cyber security sectors. For example, Cray and Deloitte recently announced the first commercially available supercomputing-based threat analytics service offered on a subscription basis. Across the board, commercial firms currently engaged in traditional enterprise business analytics are increasingly turning to HPC systems to address some of their more complex, time-sensitive, or data-rich problems. Despite this, many of these users likely will not strongly identify with, or be strongly identified by, the traditional HPC sector as part of the HPC universe. The process whereby these ‘new’ users enter the HPC universe will be an interesting one to watch, as they will bring their own unique experiences, expectations, and requirements into the mix.
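
As a quick check on growth figures like the CAGR cited in the first bullet, here is a minimal sketch of the underlying arithmetic in Python. The only inputs taken from the article are the $31.3 billion 2019 projection and the 55% five-year CAGR; the implied base-year figure is derived here purely for illustration and is not a number quoted by IDC.

```python
# A minimal sketch of compound annual growth rate (CAGR) arithmetic.
# Inputs from the article: the 2019 projection ($31.3B) and the stated
# five-year CAGR (55%). The implied base-year value is derived, not
# a figure quoted by IDC.

final_value = 31.3   # projected 2019 cognitive systems spending, $B
cagr = 0.55          # five-year compound annual growth rate
years = 5

# CAGR definition: final = base * (1 + cagr) ** years
implied_base = final_value / (1 + cagr) ** years
print(f"Implied base-year spending: ${implied_base:.1f}B")  # ~ $3.5B

# Conversely, verify the projection from the implied base:
projected = implied_base * (1 + cagr) ** years
print(f"Projected year-5 spending:  ${projected:.1f}B")     # ~ $31.3B
```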

As no credible theory can go forward without some way to identify validating experiments, it is instructive to look at what is already happening in the sector, as seen in the Top 500 HPC list. For example, in the most recent Top 500 list, there were 138 entries that simply did not fit into the traditional HPC categories. Here is how those sites self-identify instead:

  • 68 Internet Companies
  • 39 IT Service Providers
  • 14 Telecommunications Companies
  • 12 Hosting Companies
  • 5 Cloud Companies

Although one could argue that many of these HPCs are being used for traditional HPC workloads, it is clear that something interesting is going on in the sector. Does the ability of these systems to qualify for the Top 500 list – a list that does not expressly claim to be a measure of technical HPC computing, but does use a traditional scientific calculation as its performance gatekeeper – mean that they are running scientific workloads? Or is it more likely that, increasingly, systems that can qualify for the Top 500 are not being used in traditional HPC environments, but are instead finding use in a broader range of applications?
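
For readers unfamiliar with that gatekeeper: Top 500 rankings are determined by the High Performance Linpack (HPL) benchmark, which times the solution of a large dense system of linear equations. The toy Python sketch below is meant only to convey what is being measured; the real HPL is a heavily tuned distributed-memory code, and the problem size and the standard 2/3·n³ operation-count model here are illustrative.

```python
# A toy illustration of what the Linpack benchmark measures: time the
# solution of a dense linear system Ax = b and convert the elapsed time
# into a floating-point operation rate. This is NOT the actual HPL
# benchmark, which is a heavily tuned distributed-memory code.
import time
import numpy as np

n = 4000                                  # illustrative problem size
rng = np.random.default_rng(seed=0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                 # LU factorization + solve
elapsed = time.perf_counter() - start

# Standard HPL operation-count model for an LU-based dense solve:
flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
print(f"~{flops / elapsed / 1e9:.1f} GFLOP/s on an n={n} dense solve")
```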

Ultimately, if it is important to identify the dark energy in HPC, the sector needs to consider what exactly an HPC system is. Can systems that run CFD calculations for an automaker, drive real-time decision making for credit card fraud detection, and self-learn to do highly accurate photo image identification all be considered HPC? If one defines HPC as the embodiment of some of the most advanced developments in hardware and software, ones that enable new scientific discoveries, underwrite innovation in engineering and manufacturing, and create significant economic return, then the answer is a clear yes. And maybe little else matters.

Perhaps it’s time for the HPC sector to expand its perspective and embrace the dark energy out there that offers significant promise for a renaissance of the HPC sector writ large. It’s either that or get left behind by these new fields that look to be key drivers of HPC-related technologies – as well as a source of financial growth – for the foreseeable future. Are we looking at a missing 68 percent, like the one we see at the cosmic level? It’s hard to say right now, but it is clear that as time passes these new HPC use cases will only grow more prevalent.

Author Bio:

Bob Sorensen, IDC

Bob Sorensen, Research Vice President in IDC’s High Performance Computing group, is part of the HPC technical computing team, driving research and consulting efforts in the U.S., European, and Asia-Pacific markets for technical servers, supercomputers, clouds, and high performance data analysis. Prior to joining IDC, Mr. Sorensen worked 33 years for the U.S. Federal Government, where he served as a senior science and technology analyst covering global competitive and technical developments in HPC and related advanced computing to support senior-level U.S. policy makers, including those in the White House, Department of Defense, and Treasury. Mr. Sorensen holds a bachelor’s degree in electrical engineering from the University of Rochester and a master’s degree in computer science from the George Washington University.
