2009-2019: A Look Back on a Decade of Supercomputing

By Andrew Jones

December 15, 2009

As the decade turns into the 2020s, we take a nostalgic look back at the last ten years of supercomputing. It’s amazing to think how much has changed in that time. Many of our older readers will recall how things were before the official Planetary Supercomputing Facilities at Shanghai, Oak Ridge and Saclay were established. Strange as it may seem now, each country — in fact, each university or company — had its own supercomputer!

Hindsight is easier than foresight, of course, but it is interesting to review how this major change in supercomputing came about over the last few years.

At the start of the decade, each major university, research centre or company using simulation & modelling had its own HPC resources, owned or leased, operated and housed by the institution itself. In addition, some countries (the US, UK, Germany, etc.) operated their own national resources for open research. The national facilities were larger than individual institutions could afford, and access was usually granted through a mechanism known as “peer review”: the prospective user would write a short case describing how their science would benefit from using the facility, and a panel of fellow scientists would judge whether the science was worthy. (Note: they rated the science, almost never the quality of the computing implementation!) Very often these national supercomputers were reserved for capability computations, similar to today’s Strategic Simulation category at Shanghai.

The highest-profile facilities were those in major research centres (e.g., universities, US DOE labs, etc.), but many commercial organisations had very large facilities too, although these weren’t as well publicised since companies had begun to recognise their use of HPC as a strategic competitive asset. The world’s fastest supercomputers were ranked twice yearly on the TOP500 list. One of the key uses of the TOP500 was tracking the growth of supercomputing performance, usually through a plot of performance on a vertical logarithmic axis against years on a horizontal axis, and especially two trends on that plot: the reasonably linear growth (on the log scale) of the performance of the fastest machine at any one time, and the similarly smooth log-linear growth of the summed performance of all 500 systems on the list. The first spark towards the Planetary Supercomputing Facilities came when someone asked “what if we could actually use the compute power of that sum line all at once?”
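For readers who like to play with those trend lines themselves, here is a minimal sketch of the extrapolation behind that question: fit log10(performance) linearly against year and project each trend forward. The performance figures below are illustrative placeholders, not real TOP500 data.

```python
# Hypothetical sketch: log-linear extrapolation of the two TOP500 trend
# lines (fastest system, and the sum of all 500 systems).
# The numbers are illustrative placeholders, not actual TOP500 figures.
import numpy as np

years = np.array([2005, 2006, 2007, 2008, 2009])
top1_pflops = np.array([0.14, 0.28, 0.48, 1.1, 1.8])  # fastest system each year
sum_pflops = np.array([1.7, 2.9, 5.0, 15.0, 22.6])    # sum of all 500 systems

def extrapolate(y, perf, target_year):
    """Fit log10(perf) linearly against year, then project to target_year."""
    slope, intercept = np.polyfit(y, np.log10(perf), 1)
    return 10 ** (slope * target_year + intercept)

print(f"Projected fastest system in 2019: {extrapolate(years, top1_pflops, 2019):.0f} PF")
print(f"Projected sum of all 500 in 2019: {extrapolate(years, sum_pflops, 2019):.0f} PF")
```

On the log-scale plot both fitted lines are straight, which is exactly why the question of using the whole sum line at once was so tempting.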

Another factor was the increasing cost of providing these facilities, from computer acquisition (capital) to power (capital for the infrastructure, recurrent for operations) to site and project management (both recurrent and capital).

Against this backdrop, a number of collaborations began to form. In Europe, over 20 countries joined together for the two-year PRACE initiative to explore how a pan-European supercomputer service could work in practice. Much was learned from that project, and its influence can be seen in all three Planetary Supercomputing Facilities. In the US, ORNL, originally a DOE open-science national supercomputing centre, started to host other national facilities (initially for NSF, NOAA and DoD). In fact, ORNL was probably the first planetary supercomputing facility in practice, even though, as we know, Shanghai was the first official Planetary Supercomputing Facility.

People started to realise that operating these large supercomputers was not the interesting part of HPC; it was, in fact, a very specialist job. As more and more aggregation between national operating sites occurred, and as sheer scale limited the number of viable sites (power constraints, etc.), it became apparent that only a few sites worldwide would be capable of sustaining the growth predicted by the original TOP500 trends.

Then of course came what I call “the public realisation”. Politicians, the public and company boards finally got it: supercomputing made a difference. It wasn’t just big rooms of computers costing lots of tax dollars. It was a tool to underpin science, and often to propel it forward. It was a tool for accelerating any properly formulated computational task, many with an impact on daily life. Better weather predictions. Better design and safety testing of household products. Consumer video and image processing (I remember trying to do early video processing on my own PC!). Speech processing — think how that has revolutionised mobile communications since the early days of typing email messages on BlackBerrys and the like.

And then came the critical step: businesses and researchers finally understood that their competitive assets were the capabilities of their modelling software and the expertise of their users, not the hardware itself. Successful businesses rushed to establish a lead over their competitors by investing in their modelling capability, especially robustness (getting trustworthy predictions and analyses), scalability (being able to process much larger datasets than before) and performance (driving down time to solution).

As this “software arms race” was put into practice (led by the commercial users), slowly at first but then with a surge of investment in robust, scalable, high-performance software, money spent on hardware ceased to be the competitive differentiator. Coupled with the massive increase in demand for HPC resources following the public realisation, and the challenges of managing large facilities, this led to the announcement of the first Planetary Supercomputing Facility in Shanghai. Whilst there was initially preferential access for Chinese domestic users, anyone in the world could use the facility, from consumers to researchers to businesses. After years of trying to exploit commodity components, HPC itself became a commodity service. And this was true HPC, supporting tightly coupled large simulations, not the earlier attempts at something daftly called “cloud computing”, which only really supported large numbers of very small jobs. The facility shocked the world with its scale: it was larger not only than the then top machine on the TOP500, but also than the sum of all 500 systems on the list.

The business case for individual ownership of HPC facilities worldwide suddenly became dramatically harder to justify, with Shanghai providing all classes of computing resources at scale, including the various specialist processing types. Everyone got better HPC, whether capacity or capability, and cheaper HPC than they could ever provide locally. Consumer demand drove innovations in ease of use and accounting that had previously been only the ambitions of seemingly perpetual academic research.

International agreements soon followed, from research funding agencies on behalf of their user communities and from consumer HPC brokers, confirming Shanghai’s official Planetary Supercomputing Facility status. Within a year, the US had followed suit, securing global agreement for Oak Ridge as the second official Planetary Supercomputing Facility, and of course deploying even more powerful resources than Shanghai.

Soon the main security concerns were solved. The network bandwidth problems that plagued earlier global collaborations went away, as data rarely needed to leave the facilities (or, if it did, only to transfer between Oak Ridge and Shanghai, which by then enjoyed massive dedicated bandwidth between them). Anything that might be done with the data could be done at Oak Ridge or Shanghai; the data never needed to go anywhere else.

With the opening last year of the third and final Planetary Supercomputing Facility at Saclay, the world’s HPC is now ready to sprint into the next decade. We have now left the housing and daily care of the hardware to the specialists. The volume of public and private demand has set the scene for strong HPC provision into the future. We have the three official global providers to ensure consumer choice, with its competitive benefits, but few enough providers to underpin their business cases for the most capable possible HPC infrastructure.

With the pervasiveness of HPC in the consumer, business and research arenas, and the long-overdue acceptance that software capabilities and performance at scale were the real competitive assets, “can program HPC at scale” is now more than ever a valuable item on your CV.

For all this astounding progress, I wonder how quaint today’s world will seem when we look back from 2030. After all, just imagine someone reading this in 2009!

2009 Author’s Note: This is not intended to be a prediction nor a vision for the next decade, merely some seasonal fun looking at unlikely extremes of how our community might develop. After all, we’ve had reports saying “it’s the software” for years — so are the chances of us finally doing anything about it more or less likely than the Planetary Supercomputing Facilities?
