Engaging the Missing Middle in HPC

By Nicole Hemsoth

June 7, 2010

At its core, HPC’s missing middle comprises those who could benefit from HPC but are blocked by high barriers to entry: cost, difficulty, and programming challenges. Those active in the HPC space have been hearing about this concept for years, but for relative newcomers, revisiting the missing middle — especially in the context of the imminent arrival of HPC as a Service — is a must.

When Microsoft predicts this missing middle of HPC users to be close to 10 million technical computing users; when SGI’s CEO remarks on the “enormous potential for growth” in the same area; and when a large handful of traditional HPC companies (and even non-HPC-focused ones) are discussing ways to extend their reach and deliver HPC to a much broader audience — an audience that includes many who don’t even realize that (a) what they are doing is actually HPC and (b) they might gain market share by implementing it for competitive advantage — it is time to reevaluate the importance of the missing middle concept once again.

As one might expect, at the International Supercomputing Conference last week, topics of conversation revolved around engaging users, but the TOP500 took center stage. While this is not to say that such an elite list is unimportant, it’s time to make the argument (or almost time — there’s room to debate whether this is all too bleeding edge at present) that HPC is swiftly moving away from a reliance on the TOP500 as the core measure of might. Companies at all ends of the spectrum were talking about bringing HPC to the masses and, furthermore, about demonstrating to users of vanilla workstations what they’re missing and how what they’re doing already can be enhanced. There were real, vivid conversations about how bringing supercomputing to the masses is already happening.

This is, of course, done in part via the cloud. By removing the roadblocks, the barriers to entry in the elite space of HPC, the missing middle is engaged. In theory, at least. After all, if this were true now (and of course the technical challenges behind delivering this vision cannot be minimized — this is some ways off for the mainstream technical computing world), the show floor at ISC would have been packed with customers evaluating solutions. As it stands, it was difficult to run into anyone at the show with the specific purpose of buying — it was a show. Might this be different if there were more widely-available solutions for a drastically-increased number of technical computing users?

Found: One Missing Middle

To back up for a moment, the longer, original version of the concept — a huge group of untouched users who either needed HPC or didn’t at first see the use for it — was defined back in 2008 by the Council on Competitiveness. The group asked us to imagine a set of firms that have become experts in applying HPC-based modeling and simulation. Because of their technical fluency, they now have HPC at the center of their dominance, whereas “a much larger group of companies has not advanced beyond using entry-level HPC systems. The gap between these two extremes, sometimes referred to as the ‘missing middle’ represents an enormous productivity loss for the nation.”

It is helpful to replace the word “nation” with more industry-specific terms, since we’re not talking world domination here — at least not at this juncture. What we are discussing is the leveling of a playing field that was once so uneven it was impossible to even glimpse those above. That great leveler is, of course, the cloud. Not as a concept, but rather as an early-phase experiment in making HPC broadly available.

It’s nearly impossible to argue with the idea of a missing middle given that HPC, while playing a dominant role in competitiveness at the national and industry level, is certainly not for everyone. This isn’t because only a few could make tremendous use of vast compute resources — it’s that setting up a cluster and making sure applications actually function is, well…hard. Accordingly, use of actual HPC is limited by barriers to entry that have been impossible to upend barring supersized grants and skilled IT teams specializing in parallel programming, MPI, and a host of other rare talents.

HPC has been the distinct domain of the high end — the crème de la crème. Major corporations and institutions. Not Bob’s Super-Deluxe Engineering Feats, LLC. But Microsoft, SGI, Platform and others are counting on the needs of Bob. Because they believe that there are millions of Bobs out there, all of whom — if coaxed and addressed properly, and thoroughly convinced that they have nothing to lose and market share to gain via first-time HPC entry — will line up, nay, snuggle up to the idea.

Okay, the Bob metaphor is a little facetious. But you see how the Council on Competitiveness viewed HPC’s potential a few short years ago — it wanted the community to find a way to make HPC the defining differentiator in a country or industry’s ability to thrive.

And if the age-old rule — that he with the most start-up capital to invest in a horde of clusters wins — is turned on its head, then the shop, institution or facility with the best innovation practices wins instead. Do you see the beauty of this? It means that creativity, invention, and inspired progress are once again the defining factors in the success or viability of any user.

This is your economic stimulus. And it’s even being delivered as a package — even if that package consists of offerings already introduced and now being reformed into something more cohesive: Microsoft’s technical computing initiative, recent efforts by SGI to extend its reach into this critical entry-level HPC user base, and the various complexity-reducing enhancements from Platform, Adaptive, Univa, and others, some of whom are new names in this space altogether.

When Bob’s Super-Deluxe Engineering Feats, LLC beats out DEKA and long-standing engineering giants because the HPC playing field was leveled and the competitive element was mere innovation, call me. I’ll be nodding, smiling, and having a glass of chardonnay to toast the new, inspired world I live in.

If this was a refresher course for any of you, apologies, but after ISC and extensive discussion about what lies ahead — a future that is still difficult to grasp and in flux — this statement about the engagement of this vital missing middle seemed like a critical reiteration.
