2009-2019: A Look Back on a Decade of Supercomputing

By Andrew Jones

December 15, 2009

As the decade turns and the 2020s begin, we take a nostalgic look back at the last ten years of supercomputing. It’s amazing to think how much has changed in that time. Many of our older readers will recall how things were before the official Planetary Supercomputing Facilities at Shanghai, Oak Ridge and Saclay were established. Strange as it may seem now, each country — in fact, each university or company — had its own supercomputer!

Hindsight is easier than foresight, of course, but it is interesting to review how this major change in supercomputing came about over the last few years.

At the start of the decade, each major university, research centre or company using simulation & modelling had its own HPC resources — it owned or leased them, operated them, housed them, and so on. In addition, some countries (US, UK, Germany, etc.) operated their own national resources for open research. The national facilities were larger than individual institutions could afford, and access to them was usually by a mechanism known as “peer review” — the prospective user would write a short case describing how their science would benefit from using the facility, and a group of fellow scientists would judge whether the science was worthy. (Note: they rated the science, almost never the quality of the computing implementation!) Very often these national supercomputers were reserved for capability computations, similar to today’s Strategic Simulation category at Shanghai.

The highest profile facilities were those in major research centres (e.g., universities, US DOE labs, etc.), but many commercial organisations had very large facilities too, although these weren’t as well publicised, since companies had begun to recognise their use of HPC as a strategic competitive asset. The world’s fastest supercomputers were ranked twice yearly on the TOP500 list. One of the key uses of the TOP500 was tracking the growth of supercomputing power, usually through a plot of performance on a vertical logarithmic axis against years on a horizontal axis, and especially two trends on that plot: the reasonably linear growth (on the log scale) of the performance of the fastest machine at any one time, and the similarly smooth growth of the summed performance of all 500 systems on the list. The first spark towards the Planetary Supercomputing Facilities came when someone asked “what if we could actually use the compute power of that sum line at once?”
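To make those two trend lines concrete, here is a minimal, purely illustrative sketch of how such a semi-log plot is drawn (Python with numpy and matplotlib; the starting values and the assumed roughly tenfold growth every 3.6 years are approximations for illustration, not actual TOP500 data):

    # Purely illustrative: synthetic numbers roughly mimicking the classic TOP500
    # trend plot (performance on a log axis against years). Starting values and the
    # assumed ~10x-every-3.6-years growth are approximations, not real list data.
    import numpy as np
    import matplotlib.pyplot as plt

    years = np.arange(1993, 2010)
    growth_per_year = 10 ** (1 / 3.6)  # assumed growth factor per year

    top1 = 0.06 * growth_per_year ** (years - 1993)    # fastest system, TFlop/s
    sum500 = 1.2 * growth_per_year ** (years - 1993)   # sum of all 500 systems, TFlop/s

    plt.semilogy(years, top1, "o-", label="#1 system")
    plt.semilogy(years, sum500, "s-", label="Sum of all 500 systems")
    plt.xlabel("Year")
    plt.ylabel("Performance (TFlop/s, log scale)")
    plt.legend()
    plt.show()

On such a plot both curves appear as nearly straight lines, which is what made the “what if we used the whole sum line at once?” question so tempting.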

Another factor was the increasing cost of providing these facilities — from computer acquisition (capital) to power (both capital for infrastructure and recurrent for operations) to site management (recurrent and capital costs, project management, etc.).

Based on this, a number of collaborations started to occur. In Europe, over 20 countries joined together for the two-year PRACE initiative to explore how a pan-European supercomputer service could work in practice. Much was learned from that project and the influences can be seen in the three Planetary Supercomputing Facilities. In the US, ORNL, originally a DOE open science national supercomputing centre, started to host other national facilities (initially for NSF, NOAA and DoD). In fact, ORNL was probably the first planetary supercomputing facility in practice, even though, as we know, Shanghai was the first official Planetary Supercomputing Facility.

People started to realise that operating these large supercomputers was not the interesting part of HPC, and was in fact a very specialist job. As more and more aggregation between national operating sites occurred, and as the scale limited the potential sites (due to power constraints, etc.), it became apparent that there would only be a few sites worldwide capable of fulfilling the growth predicted by the original TOP500 trends.

Then of course came what I call “the public realisation”. Politicians, the public, and Boards finally got it. Supercomputing made a difference. It wasn’t just big rooms of computers costing lots of tax dollars. It was a tool to underpin science, and often to propel it forward. It was a tool for accelerating any properly formulated computational task, many with a direct impact on daily life. Better weather predictions. Better design and safety testing of household products. Consumer video/image processing (I remember trying to do early video processing on my own PC!). Speech processing — think how that has revolutionised mobile communications since the early days of typing email messages on BlackBerrys and the like.

And then the critical step — businesses and researchers finally understood that their competitive assets were the capabilities of their modelling software and the expertise of their users — not the hardware itself. Successful businesses rushed to establish a lead over their competitors by investing in their modelling capability — especially robustness (getting trustable predictions/analysis), scalability (being able to process much larger datasets than before) and performance (driving down time to solution).

As this “software arms race” was put into practice (led by the commercial users) — slowly at first but then with a surge of investment in robust scalable high performance software — money spent on hardware ceased to be the competitive difference. Coupled with the massive increase in demand for HPC resources following the public realisation, and the challenges of managing large facilities, this led to the announcement of the first Planetary Supercomputer Facility in Shanghai. Whilst there was initially preferential access for Chinese domestic users, anyone in the world could use the facility — from consumers to researchers to businesses. After years of trying to exploit commodity components, HPC itself became a commodity service. And this was true HPC, supporting tightly-coupled large simulations, not the earlier attempts at something daftly called “cloud computing,” which only really supported large numbers of very small jobs. The facility shocked the world with its scale — being larger not only than the then top machine on the TOP500, but also larger than the sum of the 500 systems.

The business case for individual ownership of HPC facilities worldwide suddenly became dramatically tougher to justify, with Shanghai providing all classes of computer resources at scale, including the various specialist processing types. Everyone got better HPC, whether capacity or capability, and cheaper HPC than they could ever provide locally. The consumer demand drove innovations in ease-of-use and accounting that previously were only ambitions of seemingly-perpetual academic research.

The international agreements from research funding agencies on behalf of their user communities and from consumer HPC brokers soon followed, confirming the official Planetary Supercomputing Facility status. Within a year, the US had followed suit, securing global agreement for Oak Ridge as the second official Planetary Supercomputing Facility, and of course deployed even more powerful resources than Shanghai.

Soon, the main security concerns were solved. The network bandwidth limitations that had plagued earlier global collaborations went away, as data rarely needed to leave the facilities (and when it did, it was usually only to transfer between Oak Ridge and Shanghai, which now had massive dedicated bandwidth between them). Anything that might be done with the data could be done at Oak Ridge or Shanghai — the data never needed to go anywhere else.

With the opening last year of the third and final Planetary Supercomputing Facility at Saclay, the world’s HPC is now ready to sprint into the next decade. We have now left the housing and daily care of the hardware to the specialists. The volume of public and private demand has set the scene for strong HPC provision into the future. We have the three official global providers to ensure consumer choice, with its competitive benefits, but few enough providers to underpin their business cases for the most capable possible HPC infrastructure.

With the pervasiveness of HPC in the consumer, business and research arenas, and the long overdue acceptance of the truth that software capability and performance at scale are the real competitive assets, “can program HPC at scale” is now more than ever a valuable item for your CV.

For all this astounding progress, I wonder how quaint today’s world will seem when we look back from 2030. After all, just imagine someone reading this in 2009!

2009 Author’s Note: This is not intended to be a prediction nor vision for the next decade, merely some seasonal fun looking at some unlikely extremes of how our community might develop. After all, we’ve had reports saying “it’s the software” for years — so are the chances of us finally doing anything about it more or less likely than the Planetary Supercomputing Facilities?
