IBM at Hot Chips: What’s Next for Power

By Tiffany Trader

August 23, 2018

With processor, memory and networking technologies all racing to fill in for an ailing Moore’s law, the era of the heterogeneous datacenter is well underway, and IBM is positioning its chips to be the air traffic controller at the center of it all. That was the high-level takeaway of our interview with IBM Power architects Jeff Stuecheli and Bill Starke at Hot Chips this week.

The accomplished engineers were at the 30th iteration of Hot Chips to focus on the Power9 scale-up chips and servers, but they also provided details on upcoming developments in the roadmap, including a new buffered memory system suitable for scale-out processors.

Source: IBM slide (Hot Chips 30)

Having launched both the scale-out and scale-up Power9s, IBM is now working on a third P9 variant with “advanced I/O,” featuring IBM’s 25 GT/s PowerAXON signaling technology with upgraded OpenCAPI and NVLink protocols, and a new open standard for buffered memory.

AXON is an inspired appellation that the Power engineering team came up with and the IBM marketing team signed off on. The A and X designate IBM’s SMP buses – X links are on-module and A links are off-module – while the O and N stand for OpenCAPI and NVLink, respectively. The acronym would be apt enough on its own, but, in keeping with IBM’s penchant for cognitive computing, axons are also the brain’s signaling devices, the fibers that let neurons communicate – so you could say, as Witnix founder and former Harvard HPC guy James Cuff did, that AI is literally “built right into the wire.”

You can see in these annotated Power9 die shots how swapping the DDR memory controller for a redrive memory controller, with its smaller PHYs, enabled IBM to double the number of AXON lanes.

“The PowerAXON concept gives us a lot of flexibility,” said Stuecheli. “One chip can be deployed to be a big SMP, it can be deployed to talk to lots of GPUs, it can talk to a mix of FPGAs and GPUs – that’s really our goal here is to build a processor that can then be customized toward these domain specific applications.”

For its future products, IBM is focusing on lots of lanes and lots of frequency. Its Power10 roadmap incorporates 32 GT/s signaling technology that will be able to run in 50 GT/s mode.

The idea that I/O is composable is what OpenCAPI and PowerAXON are all about – and now IBM is bringing the same ethos to memory through the development of an open standard for buffered memory, appropriately called OpenCAPI memory.

With both Power8 and Power9, the chips made for scale-out boxes support direct-attached memory, while the scale-up variants, intended for machines with more than two sockets, employ buffered memory. The buffered memory system puts DRAM chips right next to IBM’s Centaur buffer chip (see figure below-right), enabling a large number of DDR channels to be funneled into one processor over SERDES. The agnostic interface hides the exact memory technology that’s on the DIMM from the processor, so the processor can work with different kinds of memory. This decoupling of memory technology from the processor technology means that, for example, enterprise customers upgrading from Power8 to Power9 can keep their existing DDR4 DRAM DIMMs.

Power9 Scale Up chipset block diagram
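As a way to picture that decoupling, here is a minimal, purely illustrative Python sketch; the class and method names are invented for this article and do not correspond to IBM firmware or to the OpenCAPI specification. The host-side code talks only to a generic buffer interface, and the media behind the buffer can be swapped without touching the host.

```python
from abc import ABC, abstractmethod

class MemoryBuffer(ABC):
    """Hypothetical host-facing buffer interface: the processor sees only this."""

    @abstractmethod
    def read(self, addr: int, length: int) -> bytes: ...

    @abstractmethod
    def write(self, addr: int, data: bytes) -> None: ...

class DDR4Buffer(MemoryBuffer):
    """DDR4 DIMMs hidden behind the buffer chip (toy model)."""
    def __init__(self, capacity: int):
        self._mem = bytearray(capacity)
    def read(self, addr, length):
        return bytes(self._mem[addr:addr + length])
    def write(self, addr, data):
        self._mem[addr:addr + len(data)] = data

class StorageClassBuffer(DDR4Buffer):
    """Storage-class memory behind the same interface; host code is unchanged."""
    pass

def host_store_and_load(port: MemoryBuffer) -> bytes:
    # Identical host-side access path regardless of the media behind the buffer.
    port.write(0x100, b"\xde\xad\xbe\xef")
    return port.read(0x100, 4)

print(host_store_and_load(DDR4Buffer(1 << 20)))          # DDR4 behind the buffer
print(host_store_and_load(StorageClassBuffer(1 << 20)))  # different media, same host code
```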

Stuecheli shared that the current buffered memory system (on Power8 and Power9 SU chips) adds a latency of approximately 10 nanoseconds compared to direct-attached memory. This minimal overhead was accomplished “through careful framing of the packets as they go across the SERDES and bypasses in the DDR scheduling,” said Stuecheli.
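For a rough sense of scale (the direct-attached baseline below is an assumed round number, not an IBM figure; only the ~10 ns adder comes from Stuecheli), the overhead works out to roughly a tenth of a typical idle DRAM access:

```python
# Back-of-envelope only: the 90 ns direct-attached idle latency is an assumption;
# the ~10 ns buffered-memory adder is the figure Stuecheli quoted.
direct_attached_ns = 90.0
buffer_adder_ns = 10.0

buffered_ns = direct_attached_ns + buffer_adder_ns
print(f"buffered idle latency: {buffered_ns:.0f} ns "
      f"(+{buffer_adder_ns / direct_attached_ns:.0%} over direct-attached)")
```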

While the Centaur-based approach is enterprise-focused, IBM wanted to offer the same buffered memory in its scale-out products. It plans to introduce this capability as an open standard in the third (and presumably final) Power9 variant, due out in 2019. “We’ve been working through JEDEC to build memory DIMMs based around a thin buffer design,” said Stuecheli. “If you have an accelerator and you don’t like having that big expensive DDR PHY on it and you want to use just traditional SERDES to talk to memory, you can do so with the new standardized memory interface we’re building,” he told the audience at Hot Chips. The interface spans everything from small 1U memory form factors up to big, tall DIMMs. The aim is an agnostic interface to which a variety of memory types can attach, whether that’s storage-class memory or very high-bandwidth, low-capacity memory.

While the latency add was 10 nanoseconds on the proprietary design (where one port fans out to four DDR ports and a 16MB cache lookup), the new buffer IBM is building is a single-port design with a single interface. It’s a much smaller chip without the cache, and IBM thinks it can reduce the latency adder to 5 nanoseconds. Stuecheli said that company-run loaded-latency simulations showed it doesn’t take much load at all before the buffered design delivers much lower latency than a direct-attached solution.
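Stuecheli’s loaded-latency point can be illustrated with a toy queueing sketch. Every number below is an assumption chosen for illustration, not an IBM measurement: each memory channel is modeled as a simple M/M/1-style server, so queueing delay grows with utilization, and a buffered socket with more channels behind it spreads the same demand more thinly, repaying its fixed ~5 ns adder once the load picks up.

```python
# Toy loaded-latency comparison. All parameters are illustrative assumptions,
# not IBM data: idle latencies, channel counts, and per-line service time.

def loaded_latency_ns(idle_ns, service_ns, channels, demand_gbs):
    """Idle latency plus a simple M/M/1 queueing delay per channel."""
    line_bytes = 64.0
    lines_per_ns = demand_gbs / line_bytes / channels   # 1 GB/s ~= 1 byte/ns
    utilization = lines_per_ns * service_ns
    if utilization >= 1.0:
        return float("inf")                             # channel saturated
    queue_ns = utilization / (1.0 - utilization) * service_ns
    return idle_ns + queue_ns

for demand in (40, 80, 140):                            # aggregate GB/s of demand
    direct = loaded_latency_ns(idle_ns=90.0, service_ns=3.0, channels=4, demand_gbs=demand)
    buffered = loaded_latency_ns(idle_ns=95.0, service_ns=3.0, channels=8, demand_gbs=demand)
    print(f"{demand:>3} GB/s   direct-attached ~{direct:6.1f} ns   buffered ~{buffered:6.1f} ns")
```

Under this crude model, the direct-attached path wins only when the socket is nearly idle; as bandwidth demand grows, the extra channels behind the buffer dominate, which is the behavior Stuecheli described.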

The roadmap shows the anticipated increase in memory bandwidth owing to the new memory system. Where the Power9 SU chip offers 210 GB/s of memory bandwidth (and Stuecheli says it’s actually closer to 230 GB/s), the next Power9 derivative chip, with the new memory technology, will be capable of delivering 350 GB/s of memory bandwidth per socket, according to Stuecheli.

“If you’re in HPC and disappointed in your bytes-per-flop ratio, that’s a pretty big improvement,” he said, adding “we’re taking what was essentially the Power10 memory subsystem and implementing that in Power9.” With Power10 bringing in DDR5, IBM expects to surpass 435 GB/s sustained memory bandwidth.
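The bandwidth figures translate into bytes-per-flop roughly as follows; the per-socket flops number in this sketch is a hypothetical placeholder to make the ratio concrete, while the bandwidth figures are the ones quoted above.

```python
# Bytes-per-flop back-of-envelope. Bandwidth numbers come from the article;
# the 15 TF/s double-precision per-socket figure is a hypothetical placeholder.
flops_per_socket = 15e12

for label, bw_gbs in (("Power9 SU (measured)", 230),
                      ("P9 + OpenCAPI memory", 350),
                      ("Power10 (DDR5)", 435)):
    bytes_per_flop = bw_gbs * 1e9 / flops_per_socket
    print(f"{label:<22} {bw_gbs:>3} GB/s  ->  {bytes_per_flop:.3f} bytes/flop")

print(f"near-term improvement: {350 / 230:.2f}x")
```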

IBM believes that it has the right approach to push past DDR limitations. “When you think of Moore’s law kind of winding down, slowing down, you think of single-ended signaling with DDR memory slowing down,” Bill Starke said in a pre-briefing. “This composable system construct [that IBM is architecting] is enabling a proliferation of more heterogeneity in compute technology, along with a wider variation of memory technologies, all in this composable plug-and-play, put-it-together-how-you-want way where it’s all about a big high-bandwidth low-latency switching infrastructure.”

“With the flexibility of the attach on the memory side and on the compute acceleration side, it really boils down to thinking of the CPU chip as this big switch,” Stuecheli followed, “this big data switch that’s just one big pile of bandwidth connectivity that’s enabling any kind of memory to talk to any kind of acceleration, and it all plumbs right past the powerful general-purpose processor cores, so you’re pulling that whole compute estate together.”

HPC analyst Addison Snell (CEO of Intersect360 Research) came away from Tuesday’s Hot Chips talk with a favorable impression of the Power play. “IBM’s presentation at Hot Chips underscored two major themes,” Snell commented by email. “One, Power9 has excellent memory bandwidth and performance. Two, it is a great platform for attaching accelerators or co-processors. It’s an odd statement of direction, but maybe a visionary one, essentially saying a processor isn’t about computation per se, but rather it’s about feeding data to other computational elements.”
