Tensors Come of Age: Why the AI Revolution Will Help HPC

By John Gustafson & Lenore Mullin

November 13, 2017

A Quick Retrospect

Thirty years ago, parallel computing was coming of age. A bitter battle began between stalwart supporters of vector computing and advocates of various approaches to parallel computing. Alan Karp, a parallelism skeptic then at IBM, reacting to announcements of nCUBE’s 1024-microprocessor system and Thinking Machines’ 65,536-element array, made a public $100 wager that no one could get a parallel speedup of over 200 on real HPC workloads. Gordon Bell softened that to an annual award for the best speedup, which we now know as the Gordon Bell Prize.

This year also marks the 30th Supercomputing Conference. At the first SC in 1988, Seymour Cray gave the keynote, and said he might consider combining up to 16 processors. Just weeks before that event, Sandia researchers had managed to get thousand-fold speedups on the 1024-processor nCUBE for several DOE workloads, but those results were awaiting publication.

The magazine Supercomputing Review was following the battle with interest, publishing a piece by a defender of the old way of doing things, Jack Worlton, titled “The Parallel Processing Bandwagon.” It declared parallelism a nutty idea that would never be the right way to build a supercomputer. Amdahl’s law and all that. A rebuttal by Gustafson titled “The Vector Gravy Train” was to appear in the next issue… but there was no next issue of Supercomputing Review. SR had taken the bold step of turning into the first online magazine, back in 1987, with a new name.

Happy 30th Anniversary, HPCwire!

What better occasion than to write about another technology that is coming of age, one we will look back on as a watershed? That technology is tensor computing: optimized multidimensional array processing using novel arithmetic[1].

Thank you, AI

You can hardly throw a tchotchke on the trade show floor of SC17 without hitting a vendor talking about artificial intelligence (AI), deep learning, and neural nets. Google recently open-sourced its TensorFlow AI library and unveiled its Tensor Processing Unit. Intel bought Nervana. Micron, AMD, ARM, Nvidia, and a raft of startups are suddenly pursuing an AI strategy. Two key ideas keep appearing:

  • An architecture optimized for tensors
  • Departure from 32-bit and 64-bit IEEE 754 floating-point arithmetic

What’s going on? And is this relevant to HPC, or is it unrelated? Why are we seeing convergent evolution to the use of tensor processors, optimized tensor algebras in languages, and nontraditional arithmetic formats?

What’s going on is that computing is bandwidth-bound, so we need to make much better use of the bits we slosh around a system. Tensor architectures place data closer to where it is needed. New arithmetic represents the needed numerical values using fewer bits. This AI-driven revolution will have a huge benefit for HPC workloads. Even if Moore’s law stopped dead in its tracks, these approaches would still increase computing speed and cut space and energy consumption.

Tensor languages have actually been around for years. Remember APL and Fortran 90, all you old-timers? However, now we are within reach of techniques that can automatically optimize arbitrary tensor operations on tensor architectures, using an augmented compilation environment that minimizes clunky indexing and unnecessary scratch storage[2]. That’s crucial for portability.
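
To make the array-level idea concrete, here is a minimal sketch (Python with NumPy, purely illustrative; the array names are made up) of the style APL and Fortran 90 pioneered: the programmer states what to compute over whole arrays, and the indexing, loop order, and layout are left to the system.

```python
import numpy as np

# Whole-array style: say what to compute; leave indexing and layout to the system.
a = np.arange(12.0).reshape(3, 4)
b = np.ones((3, 4))

c = 2.0 * a + b            # elementwise, no explicit subscripts
row_sums = c.sum(axis=1)   # reduction along one axis

# The index-by-index version commits to a traversal order the algorithm never required.
c_loops = np.empty_like(a)
for i in range(a.shape[0]):
    for j in range(a.shape[1]):
        c_loops[i, j] = 2.0 * a[i, j] + b[i, j]

assert np.allclose(c, c_loops)
```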

Portability suffers, temporarily, as we break free from standard numerical formats. You can turn float precision down to 16 bits, but then the shortcomings of the IEEE format really become apparent, like wasting over 2,000 possible bit patterns on “Not a Number” instead of using them for numerical values. AI is providing the impetus to ask what comes after floats, which are awfully long in the tooth and have never followed algebraic laws. HPC people will someday be grateful that AI researchers helped fix this long-standing problem.
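
The NaN waste is easy to check for yourself. A quick sketch (Python/NumPy, not from the original article): IEEE binary16 has 5 exponent bits and 10 fraction bits, and every pattern with an all-ones exponent and a nonzero fraction encodes “Not a Number.”

```python
import numpy as np

# Enumerate all 65,536 16-bit patterns and count how many decode as NaN.
all_bits = np.arange(65536).astype(np.uint16)
as_half = all_bits.view(np.float16)
print(int(np.isnan(as_half).sum()))   # 2046 patterns spent on "Not a Number"
```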

The Most Over-Discovered Trick in HPC

As early as the 1950s, according to the late numerical analyst Herb Keller, programmers discovered they could make linear algebra go faster by blocking the data to fit the architecture. Matrix-matrix operations in particular run best when the matrices are tiled into submatrices, and even sub-submatrices. That was the beginning of dimension lifting, an approach that seems to get re-discovered by every generation of HPC programmers. It’s time for a “grand unification” of the technique.
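
For readers who have not run into blocking, here is a minimal sketch in Python/NumPy (the tile size is an arbitrary placeholder): the matrices are processed one submatrix at a time, so each tile is reused many times while it sits in a fast level of memory.

```python
import numpy as np

def blocked_matmul(A, B, tile=64):
    """Multiply A (m x k) by B (k x n) one tile at a time.

    Each (tile x tile) submatrix is reused many times while it is "hot,"
    which is the whole point of blocking for a memory hierarchy.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n), dtype=A.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                C[i:i+tile, j:j+tile] += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
    return C

A = np.random.rand(256, 192)
B = np.random.rand(192, 320)
assert np.allclose(blocked_matmul(A, B), A @ B)
```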

Level N BLAS

The BLAS developers started in the 1970s with loops over vectors (level 1), then realized that doubly nested loops were needed (level 2), then triply nested loops (level 3); after that, LAPACK and ScaLAPACK introduced blocking to better fit computer architectures. In other words, we’ve been computing with tensors for a long time, but not admitting it! Kudos to Google for naming their TPU the way they did. What we need now is “level N BLAS.”

Consider this abstract way of thinking about a dot product of four-element vectors:

Notice the vector components are not numbered; think of them as a set, not a list, because that allows us to rearrange them to fit any memory architecture. Each component is used once in this case, multiplied, and the products are summed to some level (here, all the way down to a single number). The multiplications can be completely parallel if the hardware allows, and the summation can be as parallel as binary sum reduction allows.
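
As a tiny illustration (Python, with made-up values), the four multiplications are independent, and the summation can proceed as a binary tree rather than a serial left-to-right accumulation:

```python
# Dot product of two 4-element "sets": the four multiplications are
# independent, and the summation can be a binary tree (two adds in
# parallel, then one more) instead of a serial left-to-right sum.
x = [2.0, 3.0, 5.0, 7.0]
y = [1.0, 4.0, 6.0, 8.0]

products = [a * b for a, b in zip(x, y)]   # all four multiplies in parallel
level1 = [products[0] + products[1],       # pairwise reduction
          products[2] + products[3]]
dot = level1[0] + level1[1]                # all the way down to a single number

assert dot == sum(p * q for p, q in zip(x, y))
```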

Now consider the same inputs, but used for 2-by-2 matrix-matrix multiplication:

Each input is used twice, either by broadcasting or by re-use, depending on what the hardware supports. The summation is only one level deep this time.

Finally, use the sets for an outer product, where each input is used four times to create 16 parallel multiplications, which are not summed at all.
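
One way to see the three patterns side by side is index notation. The sketch below (Python/NumPy, illustrative only, not the “level N BLAS” machinery itself) feeds the same eight inputs into a dot product, a 2-by-2 matrix product, and an outer product just by changing the index pattern.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0, 4.0])
v = np.array([5.0, 6.0, 7.0, 8.0])

# Same eight input values, three different reuse/summation patterns.
dot = np.einsum('i,i->', u, v)                                    # each input used once, fully summed
mm = np.einsum('ik,kj->ij', u.reshape(2, 2), v.reshape(2, 2))     # each input used twice, one level of summation
outer = np.einsum('i,j->ij', u, v)                                # each input used four times, no summation

print(dot)    # scalar
print(mm)     # 2 x 2 matrix
print(outer)  # 4 x 4 matrix of the 16 parallel multiplications
```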

All these operations can be captured in a single unified framework, and that is what we mean by “Level N BLAS.” The sets of numbers are best organized as tensors that fit the target architecture and its cost functions. A matrix really isn’t two-dimensional in concept; the two-dimensional layout is just a human convenience, and only the semantics need treat it that way. An algebra exists for index manipulation that can be part of the compiler smarts, freeing the programmer from having to worry about details like “Is this row-major or column-major order[4]?” Tensors free you from imposing a linear ordering that isn’t required by the algorithm and that impedes optimal data placement.

Besides linear algebra, tensors are what you need for Fast Fourier Transforms (FFTs), convolutions for signal and image processing, and yes, neural networks. Knowledge representation models like PARAFAC or CANDECOMP use tensors. Most people aren’t taught tensors in college math, and tensors admittedly look pretty scary with all those subscripts. One of Einstein’s best inventions was a shorthand notation that gets rid of a lot of the subscripts (because General Relativity requires tensor math), but it still takes a lot of practice to get a “feel” for how tensors work. The good news is, computer users don’t have to learn that skill, and only a few computer programmers have to. There now exists a theory[4], and many prototypes[5], for handling tensors automatically. We just need a few programmers to make use of the existing theory of array indexing to build and maintain those tools for distribution to all[6]. Imagine being able to automatically generate an FFT without having to worry about the indexing! That’s already been prototyped[7].

Which leads us to another HPC trend that we need for architecture portability…

The Rise of the Installer Program

In the old days, code development meant edit, compile, link, and load. Nowadays, people rarely talk about “linkers” and “loaders,” but we certainly talk about precompilers, makefiles, and installer programs. We’ve also seen the rise of just-in-time compilation in languages like Java, where portable byte codes are compiled for the specific system to get both portability and, sometimes, surprisingly high performance. The nature of who-does-what has changed quite a bit over the last few decades. Now, for example, HPC software vendors cannot ship a binary for a cluster supercomputer because they cannot know which MPI library is in use; the installer links that in.

The compiler, or preprocessor, doesn’t have to guess what the target architecture is; it can instead specify what needs to be done, but not how, stopping at an intermediate language level. The installer knows what the costs are of all the data motions in the example diagrams above, and can predict precisely what the cost of a particular memory layout is. What you can predict, you can optimize. The installer takes care of the how.

James Demmel has often described the terrible challenge of building a ScaLAPACK-like library that gets high performance for all possible situations. Call it “The Demmel Dilemma.” It appears we are about to resolve that dilemma. With tensor-friendly architectures, and proper division of labor between the human programmer and the preprocessor, compiler, and installer, we can look forward to a day when we don’t need 50 pages of compiler flag documentation, or endless trial-and-error experimentation with ways to lay out arrays in storage that is hierarchical, parallel, and complicated. Automation is feasible, and essential.

The Return of the Exact Dot Product

There is one thing we’ve left out though, and it is one of the most exciting developments that will enable all this to work. You’ve probably never heard of it. It’s the exact dot product approach invented by Ulrich Kulisch, back in the late 1960s, but made eminently practical by some folks at Berkeley just this year[8].

With floats, because of rounding errors, you will typically get a different result when you change the way a sum is grouped. Floats disobey the associative law: (a + b) + c, rounded, is not the same as a + (b + c). That’s particularly hazardous when accumulating a lot of small quantities into a single sum, as in Monte Carlo methods or a dot product. Just think of how often a scientific code needs to do a sum of products, even if it doesn’t do linear algebra. Graphics codes are full of three-dimensional and two-dimensional dot products. What if you could calculate sums of products exactly, rounding only when converting back to the working real number format?
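
A two-line Python example (with values chosen to make the effect obvious) shows the associative law failing:

```python
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c    # 1.0
right = a + (b + c)   # 0.0: the 1.0 is lost when it is rounded into 1e16 first
print(left, right, left == right)
```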

You might think that would take a huge, arbitrary-precision library. It doesn’t. Kulisch noticed that for floating-point numbers, a fixed-size register with a few hundred bits suffices as scratch space for perfectly accurate results, even for vectors that are billions of floats long. You might think it would run too slowly, because of the usual speed-accuracy tradeoff. Surprise: It runs 3–6 times faster than a dot product with rounding after every multiply-add. Berkeley hardware engineers discovered this and published their result just this summer. In fact, the exact dot product is an excellent way to get over 90 percent of the peak multiply-add speed of a system, because the operations pipeline.
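
Here is a toy sketch of the idea in Python, using exact rationals as a stand-in for the wide fixed-point register real hardware would use; it is meant only to show where the single rounding happens, not how a Kulisch accumulator is actually implemented.

```python
from fractions import Fraction

def exact_dot(x, y):
    """Toy stand-in for a Kulisch accumulator: each float converts to an exact
    rational, every product and partial sum is kept exactly, and rounding back
    to a float happens only once, at the very end."""
    acc = Fraction(0)
    for a, b in zip(x, y):
        acc += Fraction(a) * Fraction(b)   # exact product, exact accumulation
    return float(acc)                      # the single rounding step

# Rounding after every multiply-add can lose small terms entirely;
# exact accumulation cannot.
x = [1e16, 1.0, -1e16]
y = [1.0, 1.0, 1.0]
print(sum(a * b for a, b in zip(x, y)))   # naive result depends on ordering (here 0.0)
print(exact_dot(x, y))                    # 1.0, independent of ordering
```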

Unfortunately, the exact dot product idea has been repeatedly and firmly rejected by the IEEE 754 committee that defines how floats work. Fortunately, it is an absolute requirement in posit arithmetic[9] and can greatly reduce the need for double precision quantities in HPC programs. Imagine doing a structural analysis program with 32-bit variables throughout, yet getting 7 correct decimals of accuracy in the result, guaranteed. That’s effectively like doubling bandwidth and storage compared to the 64-bits-everywhere approach typically used for structural analysis.

A Scary-Looking Math Example

If you don’t like formulas, just skip this. Suppose you’re using a conjugate gradient solver, and you want to evaluate its kernel as fast as possible:

A theory exists to mechanically transform these formulas to a “normal form” that looks like this:

That, plus hardware-specific information, allows automatic data layout that minimizes indexing and temporary storage, and maximizes locality of access for any architecture. And with novel arithmetic like posits that supports the exact dot product, you get a bitwise identical result no matter how the task is organized to run in parallel, and at near-peak speed. Programmers won’t have to wrestle with data placement, nor will they have to waste hours trying to figure out if the parallel answer is different because of a bug or because of rounding errors.
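
To give a flavor of the kind of kernel involved, here is a minimal sketch (Python/NumPy, with a placeholder matrix and vectors, not the normal form itself): a conjugate gradient step is built almost entirely out of sums of products, one matrix-vector product plus a couple of dot products, which is exactly what exact accumulation makes reproducible no matter how the work is split across processors.

```python
import numpy as np

# Placeholder data for one conjugate-gradient step.
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small symmetric positive-definite matrix
p = np.array([1.0, 2.0])                  # current search direction
r = np.array([0.5, -0.25])                # current residual

Ap = A @ p                  # matrix-vector product: one dot product per row
alpha = (r @ r) / (p @ Ap)  # two more dot products feed the scalar update
print(Ap, alpha)
```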

What People Will Remember, 30 Years from Now

By 2047, people may look back on the era of IEEE floating-point arithmetic the way we now regard the EBCDIC character set used on IBM mainframes (which many readers may never have heard of, but it predates ASCII). They’ll wonder how people ever tolerated the lack of repeatability and portability and the rounding errors that were indistinguishable from programming bugs, and they may reminisce about how people wasted 15-decimal accuracy on every variable as insurance, when they only needed four decimals in the result. Not unlike the way some of us old-timers remember “vectorizing” code in 1987 to get it to run faster, or “unrolling” loops to help out the compiler.

Thirty years from now, the burden of code tuning and portability for arrays will be back where it belongs: on the computer itself. Programmers will have long forgotten how to tile matrices into submatrices because the compiler-installer combination will do that for tensors for any architecture, and will produce bitwise-identical results on all systems.

The big changes that are permitting this watershed are all happening now. This year. These are exciting times!


[1] A. Acar et al., “Tensor Computing for Internet of Things,” Dagstuhl Reports, Vol. 6, No. 4, 2016, pp. 57–79, Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, doi:10.4230/DagRep.6.4.57, http://drops.dagstuhl.de/opus/volltexte/2016/6691.

[2] Rosencrantz et al., “On Minimizing Materializations of Array-Valued Temporaries,” ACM Trans. Program. Lang. Syst., Vol. 28, No. 6, 2006, pp. 1145–1177, http://doi.acm.org/10.1145/118663.

[3] L. Mullin and S. Thibault, “Reduction Semantics for Array Expressions: The Psi Compiler,” Technical Report, University of Missouri-Rolla Computer Science Dept., 1994.

[4] K. Berkling, Arrays and the Lambda Calculus, Technical Report SU-CIS-90-22, CASE Center and School of CIS, Syracuse University, May 1990.

[5] S. Thibault et al., “Generating Indexing Functions of Regularly Sparse Arrays for Array Compilers,” Technical Report CSC-94-08, University of Missouri-Rolla, 1994.

[6] L. Mullin and J. Raynolds, Conformal Computing: Algebraically Connecting the Hardware/Software Boundary using a Uniform Approach to High-Performance Computation for Software and Hardware Applications, arXiv:0803.2386, 2008.

[7] H. Hunt et al., “A Transformation-Based Approach for the Design of Parallel/Distributed Scientific Software: The FFT,” CoRR, 2008, http://dblp.uni-trier.de/rec/bib/journals/corr/abs-0811-2535.

[8] http://arith24.arithsymposium.org/slides/s7-koenig.pdf.

[9] http://www.posithub.org.


About the Authors

John L. Gustafson
[email protected]

John L. Gustafson, Ph.D., is currently Visiting Scientist at A*STAR and Professor of Computer Science at the National University of Singapore. He is a former Senior Fellow and Chief Product Architect at AMD, and a former Director at Intel Labs. His work showing practical speedups for distributed-memory parallel computing in 1988 led to his receipt of the inaugural Gordon Bell Prize, and his formulation of the underlying principle of “weak scaling” is now referred to as Gustafson’s law. His 2015 book, “The End of Error: Unum Computing,” has been an Amazon best-seller in its category. He is a Golden Core member of IEEE. He is also an “SC Perennial” who has been to every Supercomputing conference since the first one in 1988. He is an honors graduate of Caltech and received his MS and PhD from Iowa State University.

Lenore Mullin
[email protected]

Lenore M. Mullin, Ph.D., is an Emeritus Professor of Computer Science at the University at Albany, SUNY, a Research Software Consultant to REX Computing, Inc., and Senior Computational Mathematician at Etaphase, Inc. Dr. Mullin invented a new theory of n-dimensional tensors/arrays in her 1988 dissertation, A Mathematics of Arrays (MoA), which includes an indexing calculus, the Psi Calculus. This theory built on her tenure at IBM Research working with Turing Award winner Kenneth Iverson. She has built numerous software and hardware prototypes illustrating both the power and mechanization of MoA and the Psi Calculus. MoA was recognized by NSF with the 1992 Presidential Faculty Fellowship, entitled “Intermediate Languages for Enhanced Parallel Performance,” awarded to only 30 researchers nationally. Her binary transpose was accepted and incorporated into Fortran 90. On sabbatical at MIT Lincoln Laboratory, she worked to improve the standard missile software through MoA design. As an IPA, she ran the Algorithms, Numerical and Symbolic Computation program in NSF’s CISE CCF. While on another leave she was a Program Director in DOE’s ASCR program. She lives in Arlington, Va.
