David Patterson Kicks Off AI Hardware Summit Championing Domain Specific Chips

By John Russell

September 30, 2020

The 2020 AI Hardware Summit kicked off yesterday with long-time computing luminary David Patterson digging into all things TPU and extolling how they outrun GPUs for AI workloads. After presenting data in which the TPUv3 bested Nvidia’s V100, he was asked how Google’s forthcoming TPUv4 would fare against Nvidia’s A100. Expect the same kind of advantage for TPUv4, he suggested.

With that, the AI Hardware Summit was off and running.

It’s a virtual conference this year, with two days this week and two more next week (conference link below). Other highlights on opening day included seasoned AI watcher Karl Freund’s (senior analyst, Moor Insights & Strategy) spotlight on 2019 and 2020 accelerator trends; startups SambaNova and Groq providing glimpses into their systems; and a pair of fascinating panels, one on AI use for chip design and another on AI compiler development. There was a good deal more going on, and it’s best to check out the agenda.

Patterson, of course, is a familiar name in computing. He’s a UC Berkeley professor, a Google distinguished engineer, and the RISC-V Foundation Vice-Chair. His work at Google on TPU development is well-known.


As he recalled, “Google was one of the first companies to get excited about both deep neural networks and then domain specific architectures. In 2013, they calculated that if 100 million users started doing deep neural networks three minutes a day on CPUs, they would have to double the size of the data center. Not only would that be very expensive, it would take forever to build twice as many data centers in the cloud. So they set up an emergency project whose goal was to make a factor of 10 improvement over existing CPUs and GPUs.”

To some extent the rest is history: Google developed its tensor processing unit (TPU) focused on the AI needs of Google’s workloads.
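Patterson’s 2013 back-of-envelope figure is easy to reproduce. Here is a minimal sketch, assuming, purely for illustration, that each user’s three minutes of daily DNN work occupies one CPU core for those three minutes; the assumptions are ours, not Google’s:

```python
# Back-of-envelope check of the 2013 estimate Patterson describes,
# assuming (roughly) that each user's 3 minutes/day of deep-learning
# work occupies one CPU core for those 3 minutes. The assumptions are
# illustrative, not Google's actual accounting.

users = 100_000_000          # hypothetical daily users
minutes_per_user = 3         # CPU-minutes of DNN work per user per day

cpu_minutes_per_day = users * minutes_per_user    # 3.0e8 CPU-minutes
cpu_hours_per_day = cpu_minutes_per_day / 60      # 5.0e6 CPU-hours

# Number of cores that would have to run flat out, 24 hours a day:
always_on_cores = cpu_hours_per_day / 24          # ~208,000 cores

print(f"{cpu_hours_per_day:,.0f} CPU-hours/day -> "
      f"~{always_on_cores:,.0f} always-on cores")
```

Whatever the exact fleet size, the point stands: serving DNNs on CPUs at that scale would have required a step change in datacenter capacity, which is what motivated the TPU project.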

“Why was it successful? First of all, it has an amazing number of arithmetic units: 256 by 256, that is, 65,536 multiply-accumulators. Secondly, it was doing work on eight-bit integer data rather than 32-bit floating-point data, so it can be more energy efficient, take less memory capacity, and be faster. And because it was domain specific, it dropped a lot of the general-purpose features that dominate CPUs and GPUs, like caches and branch predictors. This saves area and energy and lets the transistors get reused. The legacy of TPU v1 is not only its technical excellence, but the impact it made,” said Patterson.
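The arithmetic Patterson describes is a dense grid of 8-bit multiply-accumulates feeding wider accumulators. A minimal functional sketch of that idea, using NumPy to emulate int8 multiplies accumulated into 32-bit results (the 256-by-256 size and the dtypes follow the talk; everything else is illustrative, and this does not model the systolic dataflow itself):

```python
import numpy as np

# Emulate TPUv1-style arithmetic: 8-bit integer multiplies accumulated
# into wider (32-bit) registers, as in a 256x256 multiply-accumulate
# array. A functional sketch only, not a model of the actual hardware.

N = 256
rng = np.random.default_rng(0)

a = rng.integers(-128, 128, size=(N, N), dtype=np.int8)   # activations
w = rng.integers(-128, 128, size=(N, N), dtype=np.int8)   # weights

# Widen before multiplying so products don't overflow 8 bits, then
# accumulate in int32, matching typical int8 MAC hardware behavior.
acc = a.astype(np.int32) @ w.astype(np.int32)

print(acc.dtype, acc.shape)   # int32 (256, 256): 65,536 MACs per step
```

The int8 operands are a quarter the width of fp32, which is where the memory-capacity and energy savings Patterson cites come from.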

Lots of interesting choices were made along the way, for example how many cores the new device should have. “Where we went [for] advice is Seymour Cray…and when we asked him, he said, ‘If you’re plowing a field, what would you rather use: two strong oxen or 1,024 chickens?’ So we went with two strong oxen; the TPUv2 has two cores per chip so it wouldn’t have a slower clock cycle.”

In addition to presenting more detail around the TPUv1-through-TPUv3 architectures, Patterson’s talk reinforced the idea that designing domain-specific chips (and tools) for AI is an increasingly formidable approach, likening the TPU’s success to a galvanizing proof point that’s now launching “1,000 chips.”

“Let me conclude: the slowing of Moore’s law means AI needs tailored machines to be able to continue to make improvements in training and inference. [A]ll the decisions you want to make are easier when it’s just for one domain rather than for general purpose. Despite using older technology and smaller chips, Google’s TPU v2 and v3 demonstrated a 50x performance-per-watt improvement versus general-purpose supercomputers. I think the 2020s is a Cambrian era, with all kinds of innovation and exotic species, but which ones are going to flourish?”

Two such companies hoping to flourish are SambaNova and Groq.

SambaNova cofounder and CTO Kunle Olukotun walked briefly through the company’s reconfigurable dataflow architecture. Here’s a brief excerpt from Olukotun’s remarks:

“We define a reconfigurable dataflow architecture that’s optimized for dataflow problems. So it takes these hierarchical parallel patterns and maps them to an architecture so they can be executed very efficiently. This is a reconfigurable architecture composed of reconfigurable compute, reconfigurable memory, and communication primitives that make it very efficient to execute these sorts of dataflow problems.

“The first incarnation of this reconfigurable dataflow architecture is the Cardinal SN10 reconfigurable dataflow unit (RDU). This is implemented in TSMC seven-nanometer technology with 40 billion transistors. Over 50 kilometers of wire provide all the interconnect between the different components on the chip. It provides hundreds of teraflops of compute capability and hundreds of megabytes of memory on chip. Just as importantly, it has direct interfaces to terabytes of memory off chip. We’ve combined these RDU chips into systems that provide scalable performance for both training and inference. We call them DataScale systems,” said Olukotun.

“When mapping dataflow applications to the DataScale system, a critical thing is to delicately balance computation and communication. If you look at conventional architectures, they allow you to program the computation, but they don’t allow you to program the communication, and this is critical for getting efficient dataflow. However, with reconfigurable dataflow, we are able to program the communication and the dataflow, so that we can get a 10x improvement in performance on some applications. And we can enable applications that are not possible with current accelerator technology available in the form of GPUs.

“We don’t expect the programmer to do this manually; we have a set of software called SambaFlow, which provides the capability to map these models very efficiently to our architecture. The idea is that the programmer can start either in one of the frameworks, PyTorch or TensorFlow, or they can provide their own graph of custom operations. If you start in one of the frameworks, then you’ll use a standard set of ML operations, and here we want to optimize the graph so that we can take advantage of both model parallelism and data parallelism. Then, given a graph of operators, either custom operators or standard ML operators, we want to optimize the dataflow in the graph. And this is done by a number of different optimizations, such as tiling to improve the memory performance, exploiting parallelism within the operators, and then some optimizations that are specific to our architecture, such as streaming and nested pipelining.”
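SambaFlow itself is proprietary, but the tiling-and-streaming idea Olukotun describes can be sketched generically. In the toy illustration below (none of these names are SambaNova APIs), operators become stages in a pipeline, and tiling lets data stream between stages instead of materializing whole intermediate tensors:

```python
import numpy as np

# Toy illustration of dataflow-style execution: rather than running each
# operator over the whole tensor (materializing large intermediates), we
# tile the input and stream tiles through a pipeline of chained stages.
# Names and structure are illustrative only, not SambaFlow's API.

def tiles(x, tile_rows):
    """Yield row tiles of x: the 'tiling' optimization."""
    for i in range(0, x.shape[0], tile_rows):
        yield x[i:i + tile_rows]

def stage_matmul(tile_stream, w):
    for t in tile_stream:
        yield t @ w                    # operator 1, applied per tile

def stage_relu(tile_stream):
    for t in tile_stream:
        yield np.maximum(t, 0.0)       # operator 2, fused into the stream

x = np.random.rand(1024, 512).astype(np.float32)
w = np.random.rand(512, 256).astype(np.float32)

# Stages are chained generators, so each tile flows matmul -> relu
# without the full matmul result ever existing in memory at once
# (nested pipelining in spirit).
out = np.vstack(list(stage_relu(stage_matmul(tiles(x, 128), w))))
print(out.shape)   # (1024, 256)
```

On an RDU the stages would be spatially configured compute and memory units with programmed communication between them; the generator chaining here only mimics that producer-consumer streaming in software.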

Groq cofounder and CEO Jonathan Ross gave a somewhat less technical presentation, noting recent key funding milestones, the company’s expanding portfolio, and use cases. Its Tensor Streaming Processor (TSP) is another AI chip that seeks to reduce some of the overhead (instructions) required by general-purpose microprocessors by physically reorganizing functional elements (e.g., with needed memory and support logic located nearby).

Groq says its TSP is capable of 18,900 IPS (inferences per second) on ResNet-50 v2 at batch size one, and that it is the fastest commercially available AI/ML accelerator, with responsiveness measured in hundredths of a millisecond.
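Those two numbers are consistent with each other: at batch size one, per-inference latency is simply the reciprocal of throughput. A quick check (the 18,900 figure is Groq’s claim; the conversion is ours):

```python
# Sanity-check Groq's claim: 18,900 inferences/second at batch size 1
# implies the per-inference latency below. The figure is Groq's; the
# arithmetic is just the reciprocal.

ips = 18_900                 # claimed ResNet-50 v2 inferences per second
latency_ms = 1000 / ips      # milliseconds per inference

print(f"{latency_ms:.3f} ms per inference")   # ~0.053 ms
```

Roughly 0.053 ms per inference, i.e., about five hundredths of a millisecond, matching the stated responsiveness.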

Here’s a brief portion of the description of the architecture excerpted from a paper presented at IEEE’s 2020 International Symposium on Computer Architecture (link to paper):

“To understand the novelty of our approach, consider the chip organization shown in Figure 1(a). In a conventional chip multiprocessor (CMP) each “tile” is an independent core which is interconnected using the on-chip network to exchange data between cores. Instruction execution is carried out over several stages: 1) instruction fetch (IF), 2) instruction decode (ID), 3) execution on ALUs (EX), 4) memory access (MEM), and 5) writeback (WB) to update the results in the GPRs. In contrast to a conventional multicore, where each tile is a heterogeneous collection of functional units but the chip is globally homogeneous, the TSP inverts that: we have local functional homogeneity but chip-wide (global) heterogeneity.

“The TSP reorganizes the homogeneous two-dimensional mesh of cores in Figure 1(a) into the functionally sliced microarchitecture shown in Figure 1(b). In this approach, each tile implements a specific function and is stacked vertically into a “slice” in the Y-dimension of the 2D on-chip mesh. We disaggregate the basic elements of a core in Figure 1(a) per their respective functions: instruction control and dispatch (ICU), memory (MEM), integer (INT) arithmetic, floating-point (FPU) arithmetic, and network (NET) interface, as shown by the slice labels at the top of Figure 1(b).

“In this organization, each functional slice is independently controlled by a sequence of instructions specific to its on-chip role. For instance, the MEM slices support Read but not Add or Multiply, which are only in arithmetic functional slices (the VXM and MXM slices).”
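The contrast the paper draws, many identical cores each containing all functions versus chip-wide slices each containing one function, can be sketched as data flowing across function-specific units. A toy model (the slice names follow the paper; the simplified behavior is ours):

```python
# Toy model of the TSP's functionally sliced layout: instead of N
# complete cores each with its own IF/ID/EX/MEM/WB pipeline, the chip
# is organized as function-specific slices (MEM, VXM/MXM arithmetic),
# and operands stream between slices. Slice names follow the ISCA
# paper; the behavior here is purely illustrative.

memory = {"w": 3, "x": 4}

def mem_slice(op, name):
    assert op == "Read"              # MEM slices support Read, not Add/Multiply
    return memory[name]

def vxm_slice(op, a, b):
    if op == "Add":                  # arithmetic lives only in VXM/MXM slices
        return a + b
    if op == "Multiply":
        return a * b
    raise ValueError(op)

# A tiny "program": each functional slice executes its own instruction
# stream, and results stream between slices rather than through a
# general-purpose per-core pipeline.
w = mem_slice("Read", "w")
x = mem_slice("Read", "x")
y = vxm_slice("Multiply", w, x)
print(y)   # 12
```

The payoff of this organization is that each slice needs only the control logic for its one function, which is the instruction-overhead reduction Ross alluded to.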

Ross said the company is now shipping its latest GroqCard, GroqNode, and GroqWare SDK solutions to customers worldwide. “We’re shipping to our customers both as individual PCIe cards and systems with eight cards each, and there’s even more on the roadmap to come,” said Ross.

As noted earlier, there was much more going on during the first day. Here’s a link to coverage of the panel on AI use in chip design appearing in HPCwire’s sister publication, EnterpriseAI.

Link to AI Hardware Summit: https://www.aihardwaresummit.com/events/ai-hardware-summit-2020
