Tensors Come of Age: Why the AI Revolution Will Help HPC

By John Gustafson & Lenore Mullin

November 13, 2017

A Quick Retrospect

Thirty years ago, parallel computing was coming of age. A bitter battle began between stalwart vector computing supporters and advocates of various approaches to parallel computing. IBM skeptic Alan Karp, reacting to announcements of nCUBE’s 1024-microprocessor system and Thinking Machines’ 65,536-element array, made a public $100 wager that no one could get a parallel speedup of over 200 on real HPC workloads. Gordon Bell softened that to an annual award for the best speedup, what we now know as the Gordon Bell Prize.

This year also marks the 30th Supercomputing Conference. At the first SC in 1988, Seymour Cray gave the keynote, and said he might consider combining up to 16 processors. Just weeks before that event, Sandia researchers had managed to get thousand-fold speedups on the 1024-processor nCUBE for several DOE workloads, but those results were awaiting publication.

The magazine Supercomputing Review was following the battle with interest, publishing a piece by a defender of the old way of doing things, Jack Worlton, titled “The Parallel Processing Bandwagon.” It declared parallelism a nutty idea that would never be the right way to build a supercomputer. Amdahl’s law and all that. A rebuttal by Gustafson titled “The Vector Gravy Train” was to appear in the next issue… but there was no next issue of Supercomputing Review. SR had taken the bold step of becoming the first online magazine, back in 1987, under a new name.

Happy 30th Anniversary, HPCwire!

What better occasion than to write about another technology that is coming of age, one we will look back on as a watershed? That technology is tensor computing: Optimized multidimensional array processing using novel arithmetic[1].

Thank you, AI

You can hardly throw a tchotchke on the trade show floor of SC17 without hitting a vendor talking about artificial intelligence (AI), deep learning, and neural nets. Google recently open-sourced its TensorFlow AI library and unveiled its Tensor Processing Unit. Intel bought Nervana. Micron, AMD, ARM, Nvidia, and a raft of startups are suddenly pursuing an AI strategy. Two key ideas keep appearing:

  • An architecture optimized for tensors
  • Departure from 32-bit and 64-bit IEEE 754 floating-point arithmetic

What’s going on? And is this relevant to HPC, or is it unrelated? Why are we seeing convergent evolution to the use of tensor processors, optimized tensor algebras in languages, and nontraditional arithmetic formats?

What’s going on is that computing is bandwidth-bound, so we need to make much better use of the bits we slosh around a system. Tensor architectures place data closer to where it is needed. New arithmetic represents the needed numerical values using fewer bits. This AI-driven revolution will have a huge benefit for HPC workloads. Even if Moore’s law stopped dead in its tracks, these approaches increase computing speed and cut space and energy consumption.

Tensor languages have actually been around for years. Remember APL and Fortran 90, all you old-timers? However, now we are within reach of techniques that can automatically optimize arbitrary tensor operations on tensor architectures, using an augmented compilation environment that minimizes clunky indexing and unnecessary scratch storage[2]. That’s crucial for portability.

Portability suffers, temporarily, as we break free from standard numerical formats. You can turn float precision down to 16 bits, but then the shortcomings of the IEEE format really become apparent, like wasting over 2,000 of the possible bit patterns on “Not a Number” instead of using them for numerical values. AI is providing the impetus to ask what comes after floats, which are awfully long in the tooth and have never followed algebraic laws. HPC people will someday be grateful that AI researchers helped fix this long-standing problem.
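If you want to check that claim yourself, here is a minimal sketch (plain NumPy, nothing specific to any vendor’s hardware) that enumerates every 16-bit pattern and counts how many of them IEEE half precision spends on NaN:

    import numpy as np

    # Reinterpret all 65,536 possible 16-bit patterns as IEEE 754 half-precision floats.
    bits = np.arange(2**16, dtype=np.uint16)
    vals = bits.view(np.float16)

    # Exponent all ones with a nonzero fraction encodes NaN:
    # 2 signs x (2^10 - 1) fractions = 2,046 patterns spent on "Not a Number."
    print(int(np.isnan(vals).sum()))   # -> 2046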

The Most Over-Discovered Trick in HPC

As early as the 1950s, according to the late numerical analyst Herb Keller, programmers discovered they could make linear algebra go faster by blocking the data to fit the architecture. Matrix-matrix operations in particular run best when the matrices are tiled into submatrices, and even sub-submatrices. That was the beginning of dimension lifting, an approach that seems to get re-discovered by every generation of HPC programmers. It’s time for a “grand unification” of the technique.
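For readers who have not run into it, here is a minimal sketch of the blocking trick in plain Python with NumPy (the tile size of 64 is an arbitrary placeholder; the point of the rest of this article is that the right value depends on the target machine):

    import numpy as np

    def tiled_matmul(A, B, tile=64):
        """Blocked matrix multiply: C = A @ B computed tile by tile, so each
        trio of submatrices fits in a fast level of the memory hierarchy."""
        n, k = A.shape
        _, m = B.shape
        C = np.zeros((n, m), dtype=A.dtype)
        for i in range(0, n, tile):
            for j in range(0, m, tile):
                for p in range(0, k, tile):
                    # NumPy slicing handles the ragged edges automatically.
                    C[i:i+tile, j:j+tile] += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
        return C

    A, B = np.random.rand(300, 200), np.random.rand(200, 250)
    assert np.allclose(tiled_matmul(A, B), A @ B)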

Level N BLAS

The BLAS developers started in the 1970s with loops on lists (level 1), then realized doubly nested loops were needed (level 2), then triply nested loops (level 3); LAPACK and ScaLAPACK later introduced blocking to better fit computer architectures. In other words, we’ve been computing with tensors for a long time, but not admitting it! Kudos to Google for naming their TPU the way they did. What we need now is “level N BLAS.”

Consider this abstract way of thinking about a dot product of four-element vectors:

Notice the vector components are not numbered; think of them as a set, not a list, because that allows us to rearrange them to fit any memory architecture. Each component is used once in this case, the pairs are multiplied, and the products are summed to some level (here, all the way down to a single number). The multiplications can be completely parallel if the hardware allows, and the summation can be as parallel as a binary sum reduction allows.
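A minimal sketch of that structure (ordinary Python, not the authors’ notation): all the multiplications happen independently, and the additions form a binary reduction tree of depth log2(n) rather than a serial chain.

    import numpy as np

    def dot_tree(x, y):
        """Dot product as independent multiplies followed by a pairwise
        (binary-tree) sum reduction."""
        level = list(x * y)                 # n independent multiplications
        while len(level) > 1:
            if len(level) % 2:              # pad odd-length levels with a zero
                level.append(level[0] * 0)
            level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
        return level[0]

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([5.0, 6.0, 7.0, 8.0])
    print(dot_tree(x, y), np.dot(x, y))     # 70.0 70.0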

Now consider the same inputs, but used for 2-by-2 matrix-matrix multiplication:

 

Each input is used twice, either by a broadcast method or re-use, depending on what the hardware supports. The summation is only one level deep this time.

Finally, use the sets for an outer product, where each input is used four times to create 16 parallel multiplications, which are not summed at all.

All these operations can be captured in a single unified framework, and that is what we mean by “Level N BLAS.” The sets of numbers are best organized as tensors that fit the target architecture and its cost functions. A matrix really isn’t two-dimensional in concept; that’s just a convenience for humans, and our notation treats it that way. An algebra exists for index manipulation that can be part of the compiler smarts, freeing the programmer from having to worry about details like “Is this row-major or column-major order[4]?” Tensors free you from imposing a linear ordering that isn’t required by the algorithm and that impedes optimal data placement.
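NumPy’s einsum gives a small taste of such a unified contraction framework (used here purely as an illustration; it is not the Level N BLAS we have in mind): the same eight input values feed a dot product, a matrix product, or an outer product, and only the index expression changes.

    import numpy as np

    a = np.array([1.0, 2.0, 3.0, 4.0])
    b = np.array([5.0, 6.0, 7.0, 8.0])
    A, B = a.reshape(2, 2), b.reshape(2, 2)    # same values, lifted to 2-by-2

    dot   = np.einsum('i,i->',     a, b)   # each input used once, summed all the way down
    mat   = np.einsum('ik,kj->ij', A, B)   # each input used twice, one level of summation
    outer = np.einsum('i,j->ij',   a, b)   # each input used four times, 16 products, no sums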

Besides linear algebra, tensors are what you need for Fast Fourier Transforms (FFTs), convolutions for signal and image processing, and yes, neural networks. Knowledge representation models like PARAFAC or CANDECOMP use tensors. Most people aren’t taught tensors in college math, and tensors admittedly look pretty scary with all those subscripts. One of Einstein’s best inventions was a shorthand notation that gets rid of a lot of the subscripts (because General Relativity requires tensor math), but it still takes a lot of practice to get a “feel” for how tensors work. The good news is, computer users don’t have to learn that skill, and only a few computer programmers have to. There now exists a theory[4], and many prototypes[5], for handling tensors automatically. We just need a few programmers to make use of the existing theory of array indexing to build and maintain those tools for distribution to all[6]. Imagine being able to automatically generate an FFT without having to worry about the indexing! That’s already been prototyped[7].
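To give a flavor of what “an FFT without the indexing” looks like, here is a minimal sketch (ordinary NumPy, not the prototype cited above) of one Cooley-Tukey step written entirely as tensor operations: reshape, transform along one axis, pointwise twiddle, transform along the other axis. No butterfly index arithmetic appears anywhere.

    import numpy as np

    def fft_as_tensor(x, N1, N2):
        """Length-(N1*N2) DFT computed by lifting x to an N1-by-N2 tensor."""
        A = x.reshape(N2, N1).T                            # A[n1, n2] = x[n1 + N1*n2]
        B = np.fft.fft(A, axis=1)                          # length-N2 DFTs along n2
        n1 = np.arange(N1).reshape(N1, 1)
        k2 = np.arange(N2).reshape(1, N2)
        B = B * np.exp(-2j * np.pi * n1 * k2 / (N1 * N2))  # twiddle factors
        X = np.fft.fft(B, axis=0)                          # length-N1 DFTs along n1
        return X.reshape(-1)                               # X[N2*k1 + k2]

    x = np.random.rand(12)
    print(np.allclose(fft_as_tensor(x, 3, 4), np.fft.fft(x)))   # True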

Which leads us to another HPC trend that we need for architecture portability…

The Rise of the Installer Program

In the old days, code development meant edit, compile, link, and load. Nowadays, people never talk about “linkers” and “loaders.” But we certainly talk about precompilers, makefiles, and installer programs. We’ve also seen the rise of just-in-time compilation in languages like Java, where portable byte code is translated into system-specific machine code at run time, giving portability and, sometimes, surprisingly high performance. The nature of who-does-what has changed quite a bit over the last few decades. Now, for example, HPC software vendors cannot ship a binary for a cluster supercomputer because they cannot know which MPI library is in use; the installer links that in.

The compiler, or preprocessor, doesn’t have to guess what the target architecture is; it can instead specify what needs to be done, but not how, stopping at an intermediate language level. The installer knows the costs of all the data motions in the example diagrams above, and can predict precisely what a particular memory layout will cost. What you can predict, you can optimize. The installer takes care of the how.
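As a toy illustration of that division of labor (the cost model and machine numbers below are invented for the example, not taken from any real installer), the installer’s job reduces to evaluating a cost function over candidate layouts and keeping the cheapest:

    def traffic_bytes(n, tile, word=8):
        """Rough main-memory traffic for an n x n blocked matrix multiply:
        (n/tile)^3 block multiplies, each streaming three tile x tile blocks."""
        return (n // tile) ** 3 * 3 * tile * tile * word

    def pick_tile(n, cache_bytes, candidates=(16, 32, 64, 128, 256), word=8):
        """Installer step: keep only tile sizes whose working set fits in cache,
        then choose the one the cost model says moves the least data."""
        fits = [t for t in candidates if 3 * t * t * word <= cache_bytes]
        return min(fits, key=lambda t: traffic_bytes(n, t, word))

    print(pick_tile(n=4096, cache_bytes=256 * 1024))   # 64 on this toy model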

James Demmel has often described the terrible challenge of building a ScaLAPACK-like library that gets high performance for all possible situations. Call it “The Demmel Dilemma.” It appears we are about to resolve that dilemma. With tensor-friendly architectures, and proper division of labor between the human programmer and the preprocessor, compiler, and installer, we can look forward to a day when we don’t need 50 pages of compiler flag documentation, or endless trial-and-error experimentation with ways to lay out arrays in storage that is hierarchical, parallel, and complicated. Automation is feasible, and essential.

The Return of the Exact Dot Product

There is one thing we’ve left out though, and it is one of the most exciting developments that will enable all this to work. You’ve probably never heard of it. It’s the exact dot product approach invented by Ulrich Kulisch, back in the late 1960s, but made eminently practical by some folks at Berkeley just this year[8].

With floats, because of rounding errors, you will typically get a different result when you change the way a sum is grouped. Floats disobey the associative law: (a + b) + c, rounded, is not the same as a + (b + c). That’s particularly hazardous when accumulating a lot of small quantities into a single sum, as in Monte Carlo methods or a dot product. Just think of how often a scientific code needs to do a sum of products, even if it doesn’t do linear algebra. Graphics codes are full of three-dimensional and two-dimensional dot products. What if you could calculate sums of products exactly, rounding only when converting back to the working real number format?
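The broken associative law takes only three lines of Python to demonstrate with ordinary 64-bit floats:

    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)   # 1.0
    print(a + (b + c))   # 0.0, because b + c rounds back to -1e16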

You might think that would take a huge, arbitrary precision library. It doesn’t. Kulisch noticed that for floating-point numbers, a fixed-size register with a few hundred bits suffices as scratch space for perfect accuracy results even for vectors that are billions of floats long. You might think it would run too slowly, because of the usual speed-accuracy tradeoff. Surprise: It runs 3–6 times faster than a dot product with rounding after every multiply-add. Berkeley hardware engineers discovered this and published their result just this summer. In fact, the exact dot product is an excellent way to get over 90 percent of the peak multiply-add speed of a system, because the operations pipeline.
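Here is a tiny software emulation of the idea (Python’s exact rational arithmetic standing in for Kulisch’s wide fixed-point accumulator; real implementations use a single long register in hardware, which is why they are fast rather than slow):

    from fractions import Fraction
    import random

    x = [random.uniform(-1, 1) * 10.0 ** random.randint(-8, 8) for _ in range(10000)]
    y = [random.uniform(-1, 1) * 10.0 ** random.randint(-8, 8) for _ in range(10000)]

    # Ordinary dot product: every multiply-add rounds, so the answer depends on order.
    naive = sum(xi * yi for xi, yi in zip(x, y))

    # Exact dot product: each float converts to a rational exactly, all products and
    # sums are exact, and rounding happens once, at the final conversion back to float.
    exact = float(sum(Fraction(xi) * Fraction(yi) for xi, yi in zip(x, y)))

    print(naive, exact, naive == exact)   # the two typically differ in the low-order bits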

Unfortunately, the exact dot product idea has been repeatedly and firmly rejected by the IEEE 754 committee that defines how floats work. Fortunately, it is an absolute requirement in posit arithmetic[9] and can greatly reduce the need for double precision quantities in HPC programs. Imagine doing a structural analysis program with 32-bit variables throughout, yet getting 7 correct decimals of accuracy in the result, guaranteed. That’s effectively like doubling bandwidth and storage compared to the 64-bits-everywhere approach typically used for structural analysis.

A Scary-Looking Math Example

If you don’t like formulas, just skip this. Suppose you’re using a conjugate gradient solver, and you want to evaluate its kernel as fast as possible:

A theory exists to mechanically transform these formulas to a “normal form” that looks like this:

That, plus hardware-specific information, allows automatic data layout that minimizes indexing and temporary storage, and maximizes locality of access for any architecture. And with novel arithmetic like posits that supports the exact dot product, you get a bitwise identical result no matter how the task is organized to run in parallel, and at near-peak speed. Programmers won’t have to wrestle with data placement, nor will they have to waste hours trying to figure out if the parallel answer is different because of a bug or because of rounding errors.
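For readers who want something concrete to picture, a bare-bones conjugate gradient iteration looks like the following (a generic textbook sketch, not the specific kernel transformed above); note how much of it is matrix-vector products and dot products, exactly the operations that the exact dot product and automatic data layout help with:

    import numpy as np

    def cg(A, b, iters=50):
        """Textbook conjugate gradient for a symmetric positive definite A."""
        x = np.zeros_like(b)
        r = b - A @ x                 # residual
        p = r.copy()                  # search direction
        rs = r @ r                    # dot product
        for _ in range(iters):
            Ap = A @ p                # matrix-vector product
            alpha = rs / (p @ Ap)     # dot product
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r            # dot product
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x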

What People Will Remember, 30 Years from Now

By 2047, people may look back on the era of IEEE floating-point arithmetic the way we now regard the EBCDIC character set used on IBM mainframes (which many readers may never have heard of, but it predates ASCII). They’ll wonder how people ever tolerated the lack of repeatability and portability and the rounding errors that were indistinguishable from programming bugs, and they may reminisce about how people wasted 15-decimal accuracy on every variable as insurance, when they only needed four decimals in the result. Not unlike the way some of us old-timers remember “vectorizing” code in 1987 to get it to run faster, or “unrolling” loops to help out the compiler.

Thirty years from now, the burden of code tuning and portability for arrays will be back where it belongs: on the computer itself. Programmers will have long forgotten how to tile matrices into submatrices because the compiler-installer combination will do that for tensors for any architecture, and will produce bitwise-identical results on all systems.

The big changes that are permitting this watershed are all happening now. This year. These are exciting times! □


[1] A. Acar et al., “Tensor Computing for Internet of Things,” Dagstuhl Reports, Vol. 6, No. 4, 2016, pp. 57–79, Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, doi:10.4230/DagRep.6.4.57, http://drops.dagstuhl.de/opus/volltexte/2016/6691.

[2] Rosencrantz et al., “On Minimizing Materializations of Array-Valued Temporaries,” ACM Trans. Program. Lang. Syst., Vol. 28, No. 6, 2006, pp. 1145–1177, http://doi.acm.org/10.1145/118663.

[3] L. Mullin and S. Thibault, “Reduction Semantics for Array Expressions: The Psi Compiler,” Technical Report, University of Missouri-Rolla Computer Science Dept., 1994.

[4] K. Berkling, Arrays and the Lambda Calculus, SU-CIS-90-22, CASE Center and School of CIS, Syracuse University, May 1990.

[5] S. Thibault et al., “Generating Indexing Functions of Regularly Sparse Arrays for Array Compilers,” Technical Report CSC-94-08, University of Missouri-Rolla, 1994.

[6] L. Mullin and J. Raynolds, Conformal Computing: Algebraically Connecting the Hardware/Software Boundary using a Uniform Approach to High-Performance Computation for Software and Hardware Applications, arXiv:0803.2386, 2008.

[7] H. Hunt et al., “A Transformation-Based Approach for the Design of Parallel/Distributed Scientific Software: The FFT,” CoRR, 2008, http://dblp.uni-trier.de/rec/bib/journals/corr/abs-0811-2535.

[8] http://arith24.arithsymposium.org/slides/s7-koenig.pdf.

[9] http://www.posithub.org.


About the Authors

John L. Gustafson
john.gustafson@nus.edu.sg

John L. Gustafson, Ph.D., is currently Visiting Scientist at A*STAR and Professor of Computer Science at National University of Singapore. He is a former Senior Fellow and Chief Product Architect at AMD, and a former Director at Intel Labs. His work showing practical speedups for distributed memory parallel computing in 1988 led to his receipt of the inaugural Gordon Bell Prize, and his formulation of the underlying principle of “weak scaling” is now referred to as Gustafson’s law. His 2015 book, “The End of Error: Unum Computing,” has been an Amazon best-seller in its category. He is a Golden Core member of IEEE. He is also an “SC Perennial” who has been to every Supercomputing conference since the first one in 1988. He is an honors graduate of Caltech and received his MS and PhD from Iowa State University.

Lenore Mullin
lenore@albany.edu

Lenore M. Mullin, Ph.D., is an Emeritus Professor of Computer Science at the University at Albany, SUNY, a Research Software Consultant to REX Computing, Inc., and Senior Computational Mathematician at Etaphase, Inc. Dr. Mullin invented a new theory of n-dimensional tensors/arrays in her 1988 dissertation, A Mathematics of Arrays (MoA), which includes an indexing calculus, the Psi Calculus. This theory built on her tenure at IBM Research working with Turing Award winner Kenneth Iverson. She has built numerous software and hardware prototypes illustrating both the power and mechanization of MoA and the Psi Calculus. MoA was recognized by NSF with a 1992 Presidential Faculty Fellowship, entitled “Intermediate Languages for Enhanced Parallel Performance,” awarded to only 30 recipients nationally. Her binary transpose was accepted and incorporated into Fortran 90. On sabbatical at MIT Lincoln Laboratory, she worked to improve the standard missile software through MoA design. As an IPA, she ran the Algorithms, Numerical and Symbolic Computation program in NSF’s CISE CCF. While on another leave she was a Program Director in DOE’s ASCR program. She lives in Arlington, Va.
