“Hardware-based improvements are going to get more and more difficult,” said Neil Thompson, an innovation scholar at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL). “I think that’s something that this crowd will probably, actually, be already familiar with.” Thompson, speaking at Supercomputing Frontiers Europe 2021, likely wasn’t wrong: the proximate death of Moore’s law has been a hot topic in the HPC community for a long time. But Thompson wasn’t just there to sound the death knell – he was there to discuss the future of computing, which, in his terms, was an approximate one.
Replacing Moore’s law
Thompson opened with a graph of computing power utilized by the National Oceanic and Atmospheric Administration (NOAA) over time. “Since the 1950s, there has been about a one trillion-fold increase in the amount of computing power being used in these models,” he said. But there was a problem: tracking a weather forecasting metric called mean absolute error (“When you make a prediction, how far off are you on that prediction?”), Thompson pointed out that “you actually need exponentially more computing power to get that [improved] performance.” Without those exponential gains in computing power, the steady gains in accuracy would slow, as well.
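Mean absolute error itself is a simple metric; a minimal sketch of how it might be computed (illustrative only – not NOAA’s actual verification code) looks like this:

```python
def mean_absolute_error(predictions, observations):
    """Average of |prediction - observation| across all forecast points."""
    assert len(predictions) == len(observations)
    return sum(abs(p - o) for p, o in zip(predictions, observations)) / len(predictions)

# Hypothetical forecast highs (deg C) vs. what was actually measured.
print(mean_absolute_error([21.0, 17.0, 26.0], [20.0, 19.0, 23.0]))  # 2.0
```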
Enter, of course, the slowing of Moore’s law and the flattening of CPU clock frequencies in the mid-2000s. “But then we have this division, right?” Thompson said. “We start getting into multicore chips, and we’re starting to get computing power in that very specific way, which is not as useful unless you have that amount of parallelism.” Once parallelism was separated out, he explained, progress had slowed dramatically. “This might worry us if we want to, say, improve weather prediction at the same speed going forward,” he said.
So in 2020, Thompson and others wrote a paper examining ways to improve performance over time in a post-Moore’s law world. The authors landed on three main categories of promise: software-level improvements; algorithmic improvements; and new hardware architectures.
This third category, Thompson said, is experiencing the biggest moment right now, with GPUs and FPGAs exploding onto the HPC scene and ever more tailor-made chips emerging. Five years ago, only four percent of advanced computing users relied on specialized chips; now, Thompson said, the figure was 11 percent, and in five more years it would be 17 percent. But over time, he cautioned, gains from specialized hardware would run into problems similar to those currently faced by traditional hardware, leaving researchers looking for yet more avenues to improve performance.
Zooming out, Thompson asked the audience to consider performance improvements in terms of a simple identity: (tasks ÷ computations) × (computations ÷ time). Thompson said the latter factor – computations per unit time – represented hardware; the former – tasks per unit of computation – represented algorithms.
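Read as an identity, the two factors multiply out to overall throughput (tasks per unit time), so improving either one improves the whole. A toy calculation – the numbers below are made up for illustration, not taken from the talk – makes the point:

```python
# Throughput = (tasks per computation) * (computations per second).
# Algorithms improve the first factor; hardware improves the second.
computations_per_task = 5e9        # a hypothetical task costs 5 billion operations
computations_per_second = 1e12     # a hypothetical machine sustains 1e12 ops/s

tasks_per_second = (1 / computations_per_task) * computations_per_second
print(tasks_per_second)  # 200.0 -- halving the cost per task doubles it, as does doubling the hardware
```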
“The question, of course, is: how big are the benefits that we can get from algorithms?”
Algorithmic progress
Thompson referenced a report from a White House advisory council, which read, in part: “In many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed.”
That report, Thompson cautioned, was citing a “pretty limited” study on linear solvers – and that wasn’t enough proof for him. So along with one of his students, Yash Sherry, Thompson went through 57 textbooks covering different areas of computer science across the decades – everything from operating systems and numerical analysis to statistics and cryptography. Through this work, which Thompson called “the first large census of important algorithms,” they identified around 100 algorithm “families” supported by around 1,100 research papers. This, he said, allowed them to graph those algorithms in terms of performance over time.
Thompson showed a few of those graphs as an example. Singling out one, he pointed out the enormous strides that algorithm improvements alone were able to achieve. “For this problem overall, there has been an enormous gain – in fact, a trillion-fold improvement in performance,” he said. “Now, compare that with the gray line here – that’s the hardware performance from SPEC [benchmarks].” The gray line, of course, was dwarfed.
That isn’t always the case, though. Some of the other algorithms progressed about on par with Moore’s law; others had almost no improvement over time. On average, Thompson said, “if your problem size is small [around n=1000], the gains are not that big – about six percent per year.” Bump it up a few orders of magnitude, though, and the gains were more like 15 percent per year; a few more, 28 percent.
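That size dependence is what you would expect: an improvement in an algorithm’s asymptotic complexity pays off more the larger the input. A rough illustration with hypothetical complexities (not drawn from Thompson’s dataset):

```python
import math

# Speedup from replacing a hypothetical O(n^2) algorithm with an O(n log n) one:
# modest at n = 1,000, overwhelming at n = 1,000,000,000.
for n in (1_000, 1_000_000, 1_000_000_000):
    speedup = n**2 / (n * math.log2(n))
    print(f"n = {n:>13,}: roughly {speedup:,.0f}x faster")
```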
These rates, Thompson said, compared favorably to the current state of Moore’s law. “In the 1990s, Moore’s law was improving very, very rapidly,” he said. “The gains were actually more than 52 percent per year. And so … gains from algorithms are not that high. But Moore’s law has slowed down a lot, right?” In fact, he showed that the current gains from Moore’s law were hovering around six percent — about the same as algorithmic improvements for small problem sizes.
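Because these are annual rates, the gap compounds. A quick back-of-the-envelope comparison of the figures Thompson quoted:

```python
# Compounding the quoted annual improvement rates over a decade:
# ~6% per year (Moore's law today, or algorithms at small n) vs.
# ~52% per year (Moore's law in the 1990s, per Thompson).
years = 10
for rate in (0.06, 0.52):
    print(f"{rate:.0%}/year for {years} years -> {(1 + rate) ** years:.1f}x total improvement")
```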
“Can we continue to get these gains in algorithms?” Thompson asked – or would performance improvements from algorithms, too, suffer the same fate as Moore’s law? By way of example, he presented a sequence alignment algorithm used to establish how many edits “apart” two texts were. The algorithm, Thompson said, had experienced steady improvement until about 2015, and now, “this algorithm is as good as we can mathematically make it.”
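The problem Thompson described is edit distance. A minimal sketch of the textbook quadratic dynamic-programming solution – the baseline formulation, not necessarily the specific variant in his data – is below; the 2015-era results he alluded to indicate that, under standard complexity assumptions, no exact algorithm can do fundamentally better than this quadratic behavior.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming (Wagner-Fischer) edit distance, O(len(a) * len(b))."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))              # distances for the empty prefix of a
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution or match
        prev = curr
    return prev[n]

print(edit_distance("kitten", "sitting"))  # 3
```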
But, Thompson said, “what if we were willing to accept a little bit of error? What if we’re willing to get the answer wrong – but just a bit?”
The approximate future of computing
The way past these mathematical limits in algorithm optimization, Thompson explained, was through approximation. He brought back the graph of algorithm improvement over time, adding in approximate algorithms – one allowing answers up to 100 percent off, the other up to ten percent off. “If you are willing to accept a ten percent approximation to this problem,” he said, you could get enormous jumps, improving performance by a factor of 32. “We are in the process of analyzing this data right now, but I think what you can already see here is that these approximate algorithms are in fact giving us very, very substantial gains.”
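Thompson’s numbers come from his dataset, but the underlying trade is easy to demonstrate in miniature. The sketch below is a generic example of the idea – a sampling estimator that accepts a small, probabilistic error in exchange for reading only a fraction of the data – and is not one of the algorithms from his study:

```python
import random

# A generic accuracy-for-speed trade: estimate the mean of a large dataset from
# a small random sample instead of touching every element.
data = [random.random() for _ in range(10_000_000)]

exact = sum(data) / len(data)            # reads all 10,000,000 values

sample = random.sample(data, 100_000)    # reads only 1% of them
approx = sum(sample) / len(sample)

print(f"exact={exact:.4f}  approx={approx:.4f}  "
      f"relative error={abs(approx - exact) / exact:.3%}")
```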
Thompson presented another graph, this time charting the balance of approximate versus exact improvements in algorithms over time. “In the 1940s,” he said, “almost all of the improvements that people are making are exact improvements – meaning they’re solving the exact problem. … But you can see that as we approach these later decades, and many of the exact algorithms are starting to become already completely solved in an optimal way … approximate algorithms are becoming more and more important as the way that we are advancing algorithms.”
“This gives me hope that, indeed, this approximate future of computing will still allow us to have very large gains coming from algorithms,” he concluded.