The Six Personalities of Supercomputing

By Andrew Jones

November 16, 2012

There are two types of people in supercomputing – people who have a top 10 supercomputer and people who don’t. Or people who understand the exascale problem and people who understand the missing middle problem. Or people who have scalable applications and people who don’t.

Or people who claim just two types of person and then list several non-exclusive options.

In my usual part serious, part provocative style, here is my light-hearted look at the different personality stereotypes involved in high performance computing. This is by no means an exhaustive list, but it does illustrate the range of people who contribute to the flavor of the world of supercomputing.

1. The Great Wall of China type. “Our supercomputer is so big you can see it from outer space!” To these people, the size of the supercomputer is the primary factor determining standing in the supercomputing world. They’ll talk to you if your supercomputer can be seen from low orbit, will feel sorry for you if your machine is only visible from low flying aircraft, and refuse to acknowledge your relevance if your system is only visible from the ground.

Now, in no way am I suggesting that the size of a supercomputer is not important. Almost self-evidently it is. A more powerful supercomputer can enable more realistic simulations, new kinds of science enquiry, or more comprehensive data analytics.

And sometimes real breakthroughs do occur as a direct result of the scale of the supercomputer used. But smaller supercomputers, even ones not visible from altitude, also deliver cutting edge science, engineering and data analytics.

2. The Apocalypse type. These people are convinced the end of supercomputing is at hand. Most of them focus on the technology challenges that seemingly erect impassable barriers to our progress. Exascale can’t happen because of power. And even if we could afford the power, we wouldn’t be able to program it. And even then, the system/application would collapse in a statistically inevitable heap of errors after a few minutes. Not to mention the skills shortage. We may as well give up now and just keep deploying the systems we already have.

Then there are the ones who proclaim that doom is not technological; it is political and financial. They argue that we cannot sustain the increasing budgets at the national lab scale, nor will senior managers in research-led businesses fund the increasing demand for supercomputer technology to enable higher fidelity simulation and deeper analytics. Quite rightly, they preach that simply quoting wonderful science is not enough justification (to the world outside of HPC) for these investments. This is where the Fort Knox types come in.

3. The Fort Knox type. It’s about the money to these people. They are driven by the big dollar deals, ideally the high profit margin ones. They are reluctant to invest time in meetings, travel, projects, or acquisitions that don’t provide a substantial financial return this year. They often inhabit the parts of the ecosystem with better margins or with commercial applicability outside HPC (e.g., storage, networking, and so on), or sometimes can be found in those HPC vendors that keep trying to discover a profitable business in selling solutions to buyers of technology with aggressive cost ambitions.

They are among the sharpest dressers, the most ardent advocates of their piece of the ecosystem, and get the least passionate interest from the buyers and users of supercomputers. Money and supercomputing have always been strange partners. Clearly, money, and lots of it, is required to fund the development, deployment, and operation of supercomputers and their enabling technologies (e.g., software).

But with such a technically dominated population of inhabitants, the HPC space often struggles to focus on this harsh reality. If companies can’t make money, or we can’t persuade politicians (and, by extension, the general public) to invest, then the R&D that is the lifeblood feeding the future of our world will weaken.

Likewise, the need to prove the economic return on investments in HPC services (both hardware and software) in commercial and academic/national lab spaces cannot be forgotten. The Fort Knox types have a crucial role to play in ensuring HPC continues to sustainably deliver its great potential. But the focus on money must balance with the pursuit of the technical race without which supercomputing would be meaningless.

4. The Art Gallery type. Keen to assure you as early as possible in the conversation that they are not technical – they leave technical detail to other people. Presenting a front of pride in their non-technical status, these people are usually, but not always, sales or business management people. However, a chink in their psyche appears when they almost immediately follow up to make sure you know they once wrote some code.

Thus, they probably do recognize the integral value of technical understanding in the HPC world. But, either they are nervous about their lack of understanding (don’t be – not everyone is an expert – just be willing!) or they are hoping their stance will come across as above the detail (not good – this is often a detail game). And, in reality, the world of HPC is marked by significant technical expertise in so many of the people in sales positions, senior management or other traditionally business focused roles.

But there are also many who are not experts (or maybe aren’t anymore) but who have enough technology or science understanding to play their part in the ecosystem. Be proud of the technical knowledge you do have, honestly admit its limits, and be keen to learn more as needed. But then, you could say that about any skill, not just HPC.

5. The Horse-drawn Cart type. “I remember when …” This person is able to turn any conversation about next year’s technology or this week’s implementation issue into a prolonged reminiscence of their distant childhood making supercomputers out of wooden sticks and spittle. Filled with “we tried that years ago,” “we had it much harder,” or “we should go back to the way we used to do it,” these monologues eventually stall as the polite but glazed expressions cemented on the faces around the room slowly reveal the audience has departed to mind-wandering land.

Occasionally, these reminiscences take on a life of their own as dialogue springs forth – yes, those dreaded occasions when there is more than one Horse-drawn Cart type in the room. Sometimes, though, there are gems of insight relevant to the present or future to be found in these experiences. The key to finding the gems is distinguishing the Horse-drawn Cart types from the Concorde types.

6. The Concorde type. Now that it is no longer flying the world with brutal performance and elegant class, this marvel of engineering brilliance and commercial application is all too easily seeping from our memory. I’m writing this article sat in one of the other flagships of aviation, the Boeing 747, as I cross the Atlantic on my way to Salt Lake City for SC12 (there are about a dozen other HPC people just within a few rows of me).

But, much as I appreciate the 747 as probably the elder statesman of the skies, I wish the Concorde were still flying. Not that I’d expect to ride in it; it’s out of my league. But it is a shame that we have thrown away such a monumental capability: the three-hour transatlantic crossing. That is a different class of interchange between London and New York than the current journey of a whole day’s flying.

More amazing still is that it was essentially 1960s technology. And this is my link to supercomputing. Some great technological achievements have peppered the history of supercomputing – processors, systems, algorithms, software implementations, etc. Many of them have been overtaken by subsequent products or technology shifts, but many we still rely on directly or via their evolutionary successors. And supercomputing, too, is judged on the capability it enables, not merely the engineering brilliance of the technology implementation.

And the people part? Well, people made those great supercomputing technology advances. Some are sadly gone, many are still with us. In the fullness of the modern HPC ecosystem it is easy to let the impact of those technological leaps and their creators seep from our memory. Don’t.

So there you go – a selective stereotyping of the people who make supercomputing the marvel that we know – so powerful in its impact, often frustrating in its reality, usually addictive to those who encounter it, but always special. And hopefully a few serious points about our community have been highlighted along the way. How many of these types did you see at SC12 this week? What types have I missed? Which, if any, are you?
