Reflecting on the 25th Anniversary of ASCI Red and Continuing Themes for Our Heterogenous Future

By James Reinders

April 26, 2022

In the third of a series of guest posts on heterogeneous computing, James Reinders shares experiences surrounding the creation of ASCI Red and ties that system’s quadranscentennial anniversary to predictions about the heterogeneous future being ushered in by exaflops machines.

In 1997, ASCI Red appeared on the Top500 as the first teraflops machine in history. It held that spot for seven lists, a record that remains unbroken decades later. Using thousands of Intel microprocessors, it offered additional evidence that massively parallel machines based on “off the shelf” technology would dominate the supercomputing of the future – a view that was not universally endorsed in 1997. It was also not hard to find skeptics who claimed we would never need a petaflops of computing power, and many saw teraflops-scale performance as needed only for military purposes.

Twenty-five years later, exaflops machines offer evidence of trends that will dominate supercomputing of the future. Before I share my predictions of our future, I’ll reflect on how ASCI Red came to be.

ASCI Red

In December 1996, while the machine was still at Intel in Oregon and only three-fourths built, ASCI Red ran for the first time above the one-trillion-operations-per-second rate.

The full system featured 1.2 TB of memory and 9,298 processors (200 MHz Intel Pentium Pro processors, later boosted with specially packaged 333 MHz Pentium II Xeon processors) in 104 cabinets. Not including cooling, the system consumed 850 kW of power.

People speak of the ASCI Red supercomputer, operated at Sandia for nine years, with well-deserved reverence. Sandia director Bill Camp said in 2006 that ASCI Red had the best reliability of any supercomputer ever built.

Why ASCI Red?

The Accelerated Strategic Computing Initiative (ASCI) was a ten-year program designed to move nuclear weapons design and maintenance from a test-based approach (underground explosions) to a simulation-based approach (no more underground testing).

By developing reliable computational models for the processes involved over the whole life of nuclear weapons, the U.S. could comfortably live with a Comprehensive Nuclear-Test-Ban Treaty. DOE scientists estimated they would need 100 teraflops by the early 2000s.

Convex, Tombstones, and Execution of Strategies

It was initially believed that building a teraflops machine would require a non-Intel processor; the floating-point performance of the Pentium processor was clearly insufficient.

In 1994, I visited Convex Computer Corporation to consider whether we should use HP processors. Convex pushed HP designs to their limits, including overclocking (long before gamers made it popular). On the patio just outside the Convex cafeteria, more than twenty names were etched in the cement, including Chopp Computer, ETA Systems, and Multiflow – all companies that had started alongside Convex in supercomputing and failed as businesses.

They explained that these were reminders that a great strategy and smart people are not enough; you have to actually execute successfully. Convex cofounder Bob Paluck was quoted in Bloomberg as saying, “You’ve got to have a brilliant strategy, and you have to actually execute it. Otherwise, you become a tombstone.”

It fit perfectly with the Andy Grove philosophy drilled into us at Intel that “only the paranoid survive.”

Convex survived and was eventually acquired by HP. While we didn’t select HP parts, I never forgot that Convex graveyard.

Krazy Glew on comp.arch

Intel was the first company to ship hardware (the 8087 in 1980) supporting the (then draft) IEEE floating-point standard. The Intel i860 used a VLIW design to power the #1 supercomputer in 1994, but x86 floating-point remained disappointing for HPC. As a frequent reader of comp.arch on Usenet, I was intrigued when Andy “Krazy” Glew of Intel’s P6 team wrote “Don’t count Intel out on floating-point” in response to a flame about Intel floating point being noncompetitive. Andy and I hit it off.

I became the first champion on the architecture study team for using the P6 design. Interest grew much stronger when the first P6 parts – now called the Intel Pentium Pro – came back and it became apparent we could have 200 MHz parts under 40 watts, including an on-package L2 cache. The power efficiency, compute density, and cost quickly made it the obvious choice for the entire architecture team.

Comet Shoemaker–Levy 9 and C++

For the most part, C++ had no following in HPC; it was not yet an ANSI or ISO standard (that came in 1998). A notable exception was a group at Sandia, the destination for ASCI Red.

The discovery of Comet Shoemaker–Levy 9, and the realization that it was likely to collide with Jupiter, caused great excitement – a never-before-seen opportunity to observe two significant Solar System bodies collide.

Astronomers and astrophysicists, with scant data to guide them, did not believe the effects of the collision would be visible from Earth. Sandia researchers, experts on high-energy impacts, offered a different perspective. Computational simulations by Dave Crawford and Mark Boslough at Sandia, using C++ on an Intel Paragon supercomputer (#1 on the Top500 list at the time), predicted a visible plume rising above the rim of Jupiter. This public disagreement was carried by the media, notably CNN. In the end, the close correspondence between the predicted plume and the plume actually observed by astronomers lent even more confidence to the accuracy of the Sandia simulation codes. What an awesome validation!

In a recent book “Impactful Times: Memories of 60 Years of Shock Wave Research at Sandia National Laboratories,” J. Michael McGlaun of Sandia related “We decided to write [c.1990] PCTH[1] in C++ rather than FORTRAN. We hoped to eliminate some coding errors using C++ features.” The results were “a version of PCTH working that demonstrated excellent parallel speedup” that also “demonstrated that we could eliminate many software defects in a carefully written C++ program.”
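
McGlaun’s point about eliminating defects is easy to picture. Purely as an illustration (the article shows no actual PCTH code), here is a minimal C++ sketch of the kind of features – templates plus RAII ownership – that remove whole classes of errors, such as leaks and silent out-of-bounds writes, that plague hand-managed arrays in the FORTRAN 77 or C of that era:

```cpp
// Illustrative only -- not actual PCTH code. A minimal sketch of how C++
// features (RAII plus templates) can eliminate classes of defects such as
// memory leaks and silent out-of-bounds writes in hand-managed arrays.
#include <cstddef>
#include <stdexcept>
#include <vector>

// A small field container: memory is owned and released automatically (RAII),
// and every access can be bounds-checked during development.
template <typename T>
class Field2D {
public:
    Field2D(std::size_t nx, std::size_t ny) : nx_(nx), ny_(ny), data_(nx * ny) {}

    T& at(std::size_t i, std::size_t j) {
        if (i >= nx_ || j >= ny_) throw std::out_of_range("Field2D index");
        return data_[i * ny_ + j];
    }

    std::size_t nx() const { return nx_; }
    std::size_t ny() const { return ny_; }

private:
    std::size_t nx_, ny_;
    std::vector<T> data_;  // no manual new/delete: no leaks, no double frees
};

int main() {
    Field2D<double> pressure(64, 64);  // allocated once, freed automatically
    pressure.at(10, 20) = 101.325;     // a bad index fails loudly, not silently
    return 0;
}
```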

Sandia and comets helped fuel the interest that set the stage for C++ to be taken seriously as a language on ASCI Red, alongside the dominant FORTRAN and the lesser-used C.

Trends for the Future

In retrospect, most of the trends that would expand over the next twenty-five years were quite evident if you looked at the needs of 1997 and the results coming out of groundbreaking work.

Those trends became even more evident during the life of ASCI Red. The spectacular comet simulations with C++ code were strong evidence of future directions (I can’t imagine writing an adaptive mesh code in Fortran, no matter how much I love Fortran). While only defense users were willing to pay for a teraflops machine, there were plenty of hints that this would change, including dual-use[2] work at Sandia. The insatiable appetite for performance drove the importance of standardizing message passing (MPI), and then the fattening of nodes with more and more computation at the node level, which in turn fueled the need for node-level standards (e.g., OpenMP). Security has also grown as a topic as the scope of usage has grown dramatically. Arguably, the least predictable trend was the giant leap in AI usefulness thanks to deep learning algorithms.
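
To make the MPI-plus-OpenMP point concrete, here is a minimal hybrid sketch (my illustration, not code from ASCI Red): MPI passes results between nodes while OpenMP spreads the arithmetic across the cores of each increasingly fat node.

```cpp
// A minimal hybrid MPI + OpenMP sketch illustrating the two standards the text
// mentions: MPI for message passing between nodes, OpenMP within a fat node.
// Typical build (varies by system): mpicxx -fopenmp hybrid.cpp -o hybrid
#include <mpi.h>
#include <omp.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, nranks = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    // Each rank computes a partial sum over its slice using OpenMP threads.
    const long n_per_rank = 1000000;
    std::vector<double> x(n_per_rank, 1.0);

    double local = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (long i = 0; i < n_per_rank; ++i)
        local += x[i];

    // MPI combines the per-node results across the whole machine.
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("ranks=%d threads/rank=%d sum=%.1f\n",
                    nranks, omp_get_max_threads(), global);

    MPI_Finalize();
    return 0;
}
```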

In brief, nine notable changes that went from small to big over the past twenty-five years are:

  1. The rise in importance of C++.
  2. The use of supercomputers for far more than military purposes.
  3. Standardization of MPI.
  4. Enormous growth of computational power at a node level (fat nodes).
  5. Standardization of OpenMP to help with fat node programming.
  6. Emergence of AI as an important programming technique.
  7. Floating point accelerators (most notably GPUs) to boost performance/watt and density.
  8. Open source grew from occasional to ubiquitous.
  9. Security has grown from a local concern to one with many surfaces to worry about.

What would our next list look like twenty-five years from now, after exaflops systems appear? We should already know that the following nine are in our future:

  1. More abstract programming – the rise in importance of Python, frameworks, and more and more abstractions to the point of inspiring thinking such as “No Code.”
  2. More uses – supercomputers democratized even more, especially as cloud vendors offer supercomputing for all – another vote for abstractions and “No Code”?
  3. More programming attention for distributed computing – energized by higher performance interconnects.
  4. Heterogeneous computing – fatter nodes get even more diverse thanks to many solutions from multiple vendors plus the mix-and-match that will happen with open chiplet interconnect standards (“new golden age for computer architecture”).
  5. Multivendor, multiarchitecture fat node programming – requires more open and performance-portable solutions.
  6. Algorithms matter – emergence of more AI techniques (not just deep learning) as important.
  7. Multivendor heterogeneous capabilities to boost performance/watt and density – made more prevalent thanks to open chiplet interconnect standards.
  8. Open – continues to expand to support more competition in everything.
  9. Security – bigger machines, more simultaneous users, and wide availability make this an ever-growing topic of concern.

Unlike ASCI Red, our heterogeneous future will be multivendor and multiarchitecture, because competition is only growing in this “new golden age for computer architecture.”

Additionally, diversity in hardware means performance portability will be critical to the future. When systems were CPU-only, performance portability came about because each generation of CPUs sought to be uniformly better than the ones before; every CPU tried to be general purpose. In a heterogeneous world, where specialization is needed for lower power and higher density, non-CPU compute devices are no longer trying to be general purpose. Any rush to standardize in order to lock in the architectures of today will only undermine the credibility of such a standard.
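
As one concrete illustration of what such a performance-portable approach can look like (a sketch of my own, not code from the article), a SYCL kernel is written once and dispatched at runtime to whatever CPU, GPU, or other accelerator the system offers:

```cpp
// A minimal SYCL (C++) sketch of performance-portable heterogeneous code:
// the same kernel source runs on whichever device the runtime selects.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    // default_selector_v picks whatever device the runtime considers best;
    // the kernel below does not change across vendors or architectures.
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    {
        sycl::buffer<float> A(a.data(), sycl::range<1>{n});
        sycl::buffer<float> B(b.data(), sycl::range<1>{n});
        sycl::buffer<float> C(c.data(), sycl::range<1>{n});
        q.submit([&](sycl::handler& h) {
            sycl::accessor ra(A, h, sycl::read_only);
            sycl::accessor rb(B, h, sycl::read_only);
            sycl::accessor wc(C, h, sycl::write_only);
            h.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
                wc[i] = ra[i] + rb[i];   // simple vector add
            });
        });
    }   // buffer destruction waits for the kernel and copies results back to c

    std::cout << "c[0] = " << c[0] << "\n";   // expect 3
    return 0;
}
```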

These nine trends demand we support more variety in hardware and applications, while making it more approachable, faster, and better.

And, unlike in 1997, we need to do it in far more than just Fortran (formerly known as FORTRAN).

“No code” is sounding better and better all the time. Dream on.

[1] PCTH stands for Parallel CTH (CTH stands for CSQ to the Three Halves, CSQ stands for CHARTD Squared, and CHARTD stands for Coupled Hydrodynamics And Radiation Transport Diffusion). Learn more about CTH at https://www.sandia.gov/cth/.

[2] Dual-use technologies refer to technologies with both military utility and commercial potential.

About the Author

James Reinders believes the full benefits of the evolution to full heterogeneous computing will be best realized with an open, multivendor, multiarchitecture approach. Reinders rejoined Intel a year ago, specifically because he believes Intel can meaningfully help realize this open future. Reinders is an author (or co-author and/or editor) of ten technical books related to parallel programming; his latest book is about SYCL (it can be freely downloaded here). 


Other articles in this series

Solving Heterogeneous Programming Challenges with SYCL

Why SYCL: Elephants in the SYCL Room
