2009-2019: A Look Back on a Decade of Supercomputing

By Andrew Jones

December 15, 2009

As the decade turns into the 2020s, we take a nostalgic look back at the last ten years of supercomputing. It’s amazing to think how much has changed in that time. Many of our older readers will recall how things were before the official Planetary Supercomputing Facilities at Shanghai, Oak Ridge and Saclay were established. Strange as it may seem now, each country — in fact, each university or company — had its own supercomputer!

Hindsight is easy, of course, but it is interesting to review how this major change in supercomputing came about over the last few years.

At the start of the decade, each major university, research centre or company using simulation & modelling had its own HPC resources — it owned or leased them, operated them, housed them, and so on. In addition, some countries (US, UK, Germany, etc.) operated their own national resources for open research. The national facilities were larger than individual institutions could afford, and access to them was usually granted through a mechanism known as “peer review” — the prospective user would write a short case describing how their science would benefit from using the facility, and a group of fellow scientists would judge whether the science was worthy. (Note: they rated the science, almost never the quality of the computing implementation!) Very often these national supercomputers were reserved for capability computations, similar to today’s Strategic Simulation category at Shanghai.

The highest-profile facilities were those in major research centres (e.g., universities, US DOE labs, etc.), but many commercial organisations had very large facilities too, although these weren’t as well publicised since companies had begun to recognise their use of HPC as a strategic competitive asset. The world’s fastest supercomputers were ranked twice yearly on the TOP500 list. One of the key uses of the TOP500 was tracking the growth of supercomputing performance, usually through a plot of performance on a vertical logarithmic axis against years on a horizontal axis, and especially two trends on that plot: the reasonably linear growth (on the log scale) of the performance of the fastest machine at any one time; and the similarly smooth, log-linear increase in the summed performance of all 500 systems on the list. The first spark towards the Planetary Supercomputing Facilities came when someone asked “what if we could actually use the compute power of that sum line all at once?”
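
As an aside on how those trend lines were read: the extrapolation amounts to fitting a straight line to log10(performance) against year and projecting it forward. The sketch below (not from the original article) shows the idea in Python; the two data points are approximate, well-known milestones (ASCI Red at roughly 1 Tflop/s in 1997, Roadrunner at roughly 1 Pflop/s in 2008), and real TOP500 figures would be substituted for any serious analysis.

```python
# Minimal sketch of the TOP500-style trend extrapolation: fit a straight line
# to log10(performance) versus year and project it forward. The data points
# are approximate milestones, not actual TOP500 list entries.
import numpy as np

years = np.array([1997.5, 2008.5])       # approximate dates of the milestones
rmax_flops = np.array([1.0e12, 1.0e15])  # ~1 Tflop/s (ASCI Red), ~1 Pflop/s (Roadrunner)

# Linear fit in log space: log10(perf) = slope * year + intercept
slope, intercept = np.polyfit(years, np.log10(rmax_flops), 1)

def projected_performance(year):
    """Extrapolate the fitted log-linear trend to a given year (flop/s)."""
    return 10.0 ** (slope * year + intercept)

for y in (2015, 2019):
    print(f"{y}: roughly {projected_performance(y):.1e} flop/s on the fitted trend")
```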

Another factor was the increasing cost of providing these facilities — from computer acquisition (capital) to power (both capital for infrastructure and recurrent for operations) to site management (capital and recurrent costs, project management, etc.).

Based on this, a number of collaborations started to form. In Europe, over 20 countries joined together for the two-year PRACE initiative to explore how a pan-European supercomputer service could work in practice. Much was learned from that project, and its influence can be seen in all three Planetary Supercomputing Facilities. In the US, ORNL, originally a DOE open science national supercomputing centre, started to host other national facilities (initially for NSF, NOAA and DoD). In fact, ORNL was probably the first planetary supercomputing facility in practice, even though, as we know, Shanghai was the first official Planetary Supercomputing Facility.

People started to realise that operating these large supercomputers was not the interesting part of HPC, and was in fact a very specialist job. As more and more aggregation between national operating sites occurred, and as sheer scale limited the number of viable sites (due to power constraints, etc.), it became apparent that there would only be a few sites worldwide capable of fulfilling the growth predicted by the original TOP500 trends.

Then of course came what I call “the public realisation”. Politicians, the public, and boards finally got it. Supercomputing made a difference. It wasn’t just big rooms of computers costing lots of tax dollars. It was a tool to underpin science, and often to propel it forward. It was a tool for accelerating any properly formulated computational task, many with a direct impact on daily life. Better weather predictions. Better design and safety testing of household products. Consumer video/image processing (I remember trying to do early video processing on my own PC!). Speech processing — think how that has revolutionised mobile communications since the early days of typing email messages on BlackBerrys and the like.

And then came the critical step — businesses and researchers finally understood that their competitive asset was the capability of their modelling software and user expertise — not the hardware itself. Successful businesses rushed to establish a lead over their competitors by investing in their modelling capability — especially robustness (getting trustworthy predictions and analysis), scalability (being able to process much larger datasets than before) and performance (driving down time to solution).

As this “software arms race” was put into practice (led by the commercial users) — slowly at first but then with a surge of investment in robust scalable high performance software — money spent on hardware ceased to be the competitive difference. Coupled with the massive increase in demand for HPC resources following the public realisation, and the challenges of managing large facilities, this led to the announcement of the first Planetary Supercomputer Facility in Shanghai. Whilst there was initially preferential access for Chinese domestic users, anyone in the world could use the facility — from consumers to researchers to businesses. After years of trying to exploit commodity components, HPC itself became a commodity service. And this was true HPC, supporting tightly-coupled large simulations, not the earlier attempts at something daftly called “cloud computing,” which only really supported large numbers of very small jobs. The facility shocked the world with its scale — being larger not only than the then top machine on the TOP500, but also larger than the sum of the 500 systems.

The business case for individual ownership of HPC facilities worldwide suddenly became dramatically tougher to justify, with Shanghai providing all classes of computing resources at scale, including the various specialist processing types. Everyone got better HPC, whether capacity or capability, and cheaper HPC than they could ever provide locally. Consumer demand drove innovations in ease of use and accounting that had previously been only ambitions of seemingly perpetual academic research.

International agreements soon followed, from research funding agencies acting on behalf of their user communities and from consumer HPC brokers, confirming Shanghai’s official Planetary Supercomputing Facility status. Within a year, the US had followed suit, securing global agreement for Oak Ridge as the second official Planetary Supercomputing Facility, and of course deploying even more powerful resources than Shanghai.

Soon, the main security concerns were solved. The network bandwidth problems that plagued earlier global collaborations went away, as data rarely needed to leave the facilities (or, if it did, only to transfer between Oak Ridge and Shanghai, which now had massive dedicated bandwidth between them). Anything that might be done with the data could be done at Oak Ridge or Shanghai — the data never needed to go anywhere else.

With the opening last year of the third and final Planetary Supercomputing Facility at Saclay, the world’s HPC is now ready to sprint into the next decade. We have now left the housing and daily care of the hardware to the specialists. The volume of public and private demand has set the scene for strong HPC provision into the future. We have the three official global providers to ensure consumer choice, with its competitive benefits, but few enough providers to underpin their business cases for the most capable possible HPC infrastructure.

With the pervasiveness of HPC in the consumer, business and research arenas, and the long overdue acceptance of the truth that software capability and performance at scale are the competitive asset, “can program HPC at scale” is now more than ever a valuable item for your CV.

For all this astounding progress, I wonder how quaint today’s world will seem when we look back from 2030. After all, just imagine someone reading this in 2009!

2009 Author’s Note: This is not intended to be a prediction nor vision for the next decade, merely some seasonal fun looking at some unlikely extremes of how our community might develop. After all, we’ve had reports saying “it’s the software” for years — so are the chances of us finally doing anything about it more or less likely than the Planetary Supercomputing Facilities?
