IBM Introduces its First Power10-based Server, the Power E1080; Targets Hybrid Cloud

By John Russell

September 8, 2021

IBM today introduced the Power E1080 server, its first system powered by the IBM Power10 microprocessor. The new system reinforces IBM’s emphasis on hybrid cloud markets, and the new chip beefs up its inference capabilities. IBM – like other CPU makers – is hoping to make inferencing a core capability of host CPUs and diminish the need for separate AI accelerators. IBM’s Power8 and Power9 were usually paired with Nvidia GPUs to deliver AI (and HPC) capabilities.

“When we were designing the E1080, we had to be cognizant of how the pandemic was changing not only consumer behavior, but also our customers’ behavior and needs from their IT infrastructure,” said Dylan Boday, vice president of product management for AI and hybrid cloud, in the official announcement. “The E1080 is IBM’s first system designed from the silicon up for hybrid cloud environments, a system tailor-built to serve as the foundation for our vision of a dynamic and secure, frictionless hybrid cloud experience.”

Few details about the Power10 chip were discussed at an analyst/press pre-briefing last week, nor was a more detailed spec sheet for the Power E1080 presented. IBM instead chose to cite new key functional capabilities that blur the boundary between system and chip and to highlight favorable benchmarks. General availability for the E1080 is scheduled for later this month. No timetable was given for direct sales (if any) of Power10 chips.

Here are the highlights as reported by IBM:

  • Enhancements for hybrid cloud, such as by-the-minute metering of Red Hat software including Red Hat OpenShift and Red Hat Enterprise Linux, 4.1x greater OpenShift containerized throughput per core vs. x86-based servers, and “architectural consistency and cloud-like flexibility across the entire hybrid cloud environment to drive agility and improve costs without application refactoring.”
  • New hardware-driven performance improvements that deliver up to 50 percent more performance and scalability than its predecessor, the [Power9-based] IBM Power E980, while also reducing energy use and carbon footprint compared to the E980. The E1080 also features four matrix math accelerators per core, enabling 5x faster inference performance than the E980.
  • New security tools designed for hybrid cloud environments including transparent memory encryption “so there is no additional management setup,” 4x the encryption engines per core, allowing for 2.5x faster AES encryption as compared to the IBM Power E980, and “security software for every level of the system stack.”
  • Robust ecosystem of ISVs, Business Partners, and support to broaden the capabilities of the IBM Power E1080 and how customers can build their hybrid cloud environment, including record-setting performance for SAP applications in an 8-socket system. IBM is also launching a new tiered Power Expert Care service to help clients protect their systems against the latest cybersecurity threats while also providing hardware and software coherence and higher systems availability.

In recent years, IBM’s positioning of its Power platforms and Power CPU line has shifted significantly from HPC-centricity to enterprise-centricity with a distinct hybrid cloud focus. Introduction of the E1080 server seems to complete the journey. Standing up the Summit supercomputer at ORNL in 2018, based on IBM’s AC922 nodes with Power9 CPUs, was probably the high-water mark for IBM HPC. Summit was the fastest supercomputer in the world for a couple of cycles of the Top500.

However, Power9-based IBM systems achieved lackluster traction in the broader HPC market, and IBM shifted gears a few times trying to find the right fit. IBM’s $34 billion purchase of Red Hat in 2019 marked a massive shift in IBM strategy toward the cloud and stirred uncertainty about IBM’s plans for Power-based platforms and its role in the OpenPOWER Foundation. The integration of the E1080 server line into IBM’s hybrid-cloud strategy now seems to remove the ambiguity surrounding IBM’s plans for the Power product line.

Patrick Moorhead, founder and president of Moor Insights & Strategy, noted, “IBM has changed its focus on Power over the past few years. Power10 is focused on enterprise big data and AI inference workloads delivered in a secure, hybrid cloud model. It looks to really scream on SAP, Oracle, and OpenShift environments when compared to Cascade Lake. The performance numbers IBM touted make sense given the chip’s architecture.” 

“On-chip ML inference makes lots of sense when latency is of the utmost importance, and being on-chip versus going through PCIe delivers just that in an open (ONNX supported) way. Some enterprises will even train models on these systems if they’re underutilized,” added Moorhead, who said he thought IBM could gain traction, “if it aggressively markets and sells these systems against x86-systems. … I’d say the past few generations were marketed and sold to current clients as replacements for older IBM systems versus ‘going after’ Intel.”

Analyst Peter Rutten of IDC also thinks IBM’s E1080 is a good move. “Keep in mind that this is the 8- or 16-socket enterprise class system that runs AIX first and foremost, as well as IBM i and Linux. This is IBM’s transactional/analytics processing system that offers 99.999% availability, high security, and a lot of performance for such traditional workloads as database. The new chip offers several benefits for this system – higher performance (versus Power9), less energy, more bandwidth, lower latency, greater security with baked-in encryption, and the MMA for AI inferencing on the chip, which is something enterprises increasingly want to be able to do on their traditional workloads. The way I see it, IBM hit sort of a sweet spot with this system.”

Rutten also doesn’t think IBM is easing out of HPC and AI. “I don’t see this as IBM meandering, but as parallel tracks. There is the scale-out portfolio that is all Linux and that’s focused on AI training, HPC, big data analytics. These are the one- and two-socket systems that include the AC922 which was used for Summit. IBM didn’t win the latest supercomputer RFPs but they revealed some very interesting features with Power10 for those workloads. The E1080 is based on a single chip module. But forthcoming is a Dual-Chip Module (DCM), which takes two Power10 chips and puts them (1200 mm2 combined) into the same form factor where there used to be just one Power9 processor. This DCM is targeting compute-dense, energy-dense, volumetric space-dense cloud-type configurations with systems ranging from 1 to 4 sockets. I think we’re going to see some screaming performance from these systems when they arrive.”

Top-down view of the IBM Power E1080

Not a lot was said about the specific Power10 chip inside the E1080 at IBM’s pre-launch briefing.

Responding to a question about the physical components of the system and new chip during Q&A at the briefing, Boday said, “The E1080 will scale to 240 cores in the entire system itself. The Power10 processor will have 15 cores; the prior generation (the Power9-based E980) maxed out at 12. That allows us to scale up to the 240 cores (16 Power10s). We’re also improving the overall number of DIMM slots to where we can actually do 256 DDR4 DIMMs in the system. The overall memory bandwidth [is increased] to over 400 gigs per second, per socket. We’ve introduced Gen5 PCIe slots, and we’re allowing you to connect all of this together, [the] individual drawers and nodes of the system, through a faster fabric that we call our SMP fabric.”
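As a quick sanity check, the figures Boday cited hang together arithmetically. The short sketch below simply recomputes the aggregates from his numbers; the constant names are illustrative, not IBM nomenclature, and the aggregate bandwidth line is just a naive product of his per-socket figure and the socket count.

```python
# Back-of-envelope check of the E1080 configuration figures quoted above.
# All inputs come from Boday's remarks; this is not an official spec sheet.

CORES_PER_POWER10 = 15        # Power10 cores per socket (the Power9-based E980 maxed out at 12)
MAX_SOCKETS = 16              # maximum sockets in a fully built-out E1080
DDR4_DIMMS = 256              # DDR4 DIMM slots across the system
MEM_BW_PER_SOCKET_GBS = 400   # "over 400 gigs per second, per socket" (approximate)

total_cores = CORES_PER_POWER10 * MAX_SOCKETS
naive_aggregate_bw = MEM_BW_PER_SOCKET_GBS * MAX_SOCKETS

print(f"Maximum cores: {total_cores}")                 # 240, matching the quoted figure
print(f"Naive aggregate memory bandwidth: ~{naive_aggregate_bw} GB/s across {MAX_SOCKETS} sockets")
print(f"DDR4 DIMM slots: {DDR4_DIMMS}")
```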

Ken King, general manager of IBM Power, said, “You will see more announcements coming later this year for more of our Power10 family coming to the market, and we’ll be rolling additional ones into early 2022 as well.” IBM disclosed last year that Power10 would be its first 7nm process part and was being fabbed by Samsung.

A fuller picture of the Power10 chip lineup and associated systems will emerge over the next few months. One interesting point is the inclusion of on-chip inference capability. At the briefing, Satya Sharma, IBM fellow and CTO of IBM Power, emphasized that “not requiring exotic accelerators” is a growing trend in the market. Indeed, IBM showcased such capabilities in its new Z series chip (Telum) at the recent Hot Chips conference. Intel has also announced plans to incorporate similar capabilities in its Sapphire Rapids CPU (Intel’s next-generation “Intel 7” processor).
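Moorhead’s earlier remark about ONNX support suggests how developers would actually tap CPU-resident inference: the model runs through a standard runtime’s CPU backend, and any on-chip matrix acceleration is exploited by the backend’s optimized kernels rather than by application code. Below is a minimal sketch using ONNX Runtime’s generic CPU execution provider; the model file name is a placeholder, and nothing here is specific to Power10.

```python
# Minimal ONNX Runtime inference on the CPU execution provider.
# Any hardware-level matrix acceleration would be exercised inside the
# backend's kernels, not by application code.
import numpy as np
import onnxruntime as ort

# "model.onnx" is a placeholder for any exported model (e.g., an image classifier).
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy NCHW image batch

outputs = session.run(None, {input_name: batch})
print("output shape:", outputs[0].shape)
```

The point of the sketch is that the application sees an ordinary CPU inference call; whether the cycles land on plain vector units or on per-core matrix math accelerators is the runtime’s concern.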

Given IBM’s new focus on adding inferencing capabilities to Power10, it would be interesting to see how the E1080 fares in the MLPerf inferencing competition. Boday was non-committal, saying, “We’re excited about the number of MM (matrix multiply) engines per core that Power10 delivers and how those are going to be very advantageous. As we continue to build out those benchmarks, such as MLPerf, those are things that will be on the radar for us to deliver.” (See HPCwire’s article on IBM’s Power10 presentation at Hot Chips 2020.)

Mostly, IBM stuck tightly to a script touting the new system’s functionality and favorable benchmarks rather than physical specs. IBM is strongly promoting the E1080’s security features and tools. The entire memory is encrypted, with no performance penalty or management set-up, said Sharma.

As an example, he said, “We are providing four crypto engines in every core. As a result, customers can get 2.5x more crypto performance. [Using this engine], you can do either end-to-end encryption or decryption. Or you can do [this for] file systems or databases or applications. You can go from the server all the way to the network to the storage. With this crypto engine capability, you can implement full-stack and end-to-end encryption.”
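The workload Sharma describes is bulk AES of the kind per-core crypto engines are built to speed up. The sketch below shows what that work looks like from application code using Python’s cryptography package; any hardware acceleration (Power10’s engines, or AES-NI on x86) is applied beneath the library, invisibly to the caller. It illustrates the workload, not IBM’s implementation.

```python
# Authenticated AES-GCM encryption of a data buffer -- the kind of bulk crypto
# work that on-core encryption engines accelerate beneath the library layer.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

plaintext = os.urandom(1024 * 1024)      # 1 MiB buffer standing in for file/database pages
nonce = os.urandom(12)                   # 96-bit nonce, unique per message

ciphertext = aesgcm.encrypt(nonce, plaintext, None)   # encrypt and authenticate
recovered = aesgcm.decrypt(nonce, ciphertext, None)   # verify and decrypt
assert recovered == plaintext
```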

There is also a centralized dashboard for managing security on the E1080. “Customers can implement a number of different compliance automation tools [including] PCI, HIPAA-readiness, GDPR, and we would ensure that all of the servers in the server farm comply with these security compliance profiles. At the same time, we are monitoring the entire server farm [to see] if any of these servers go out of compliance,” added Sharma. He ticked through several other security elements, such as libraries of algorithms for so-called “post-quantum security” as well as isolation measures taken at the CPU and system level.
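Sharma did not describe how the dashboard works internally. Purely as a hypothetical illustration of the kind of sweep he describes, the sketch below checks a made-up server inventory against required profiles; only the profile names (PCI, HIPAA, GDPR) come from his remarks, and every check and host name is invented.

```python
# Hypothetical sketch of a compliance sweep over a server farm -- not IBM's tooling.
# Each server reports which hardening checks it currently passes; the sweep
# flags any machine that has drifted out of a required profile.

REQUIRED_PROFILES = {
    "PCI":   {"disk_encryption", "audit_logging", "tls_1_2_min"},
    "HIPAA": {"disk_encryption", "access_controls"},
    "GDPR":  {"data_retention_policy", "disk_encryption"},
}

# Invented inventory data standing in for whatever a real dashboard would collect.
server_farm = {
    "e1080-node-01": {"disk_encryption", "audit_logging", "tls_1_2_min",
                      "access_controls", "data_retention_policy"},
    "e1080-node-02": {"disk_encryption", "audit_logging"},   # drifted out of compliance
}

for host, passed_checks in server_farm.items():
    for profile, required in REQUIRED_PROFILES.items():
        missing = required - passed_checks
        if missing:
            print(f"{host}: out of {profile} compliance, missing {sorted(missing)}")
```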

Certainly, SAP is a big factor in the enterprise market, and IBM’s reported results against its own prior-generation Power9 as well as x86 rivals will draw attention. It will be interesting to keep watching the Power10 family’s development and how many Power10 SKUs IBM ends up offering.

Analyst Shahin Khan of OrionX noted, “AI inference will be the tail that is wagging the Deep Learning dog. It is about infusing apps with AI models and feeding new data back to AI learning. So AI inference is a very large market attracting many new chip and system players. While increased focus on AI is to be expected, IBM’s innovations with memory really also stand out: Open Memory Interface, shared memory, large address space, memory bandwidth, memory clustering, and memory encryption are all very cool and very useful. In an interesting twist, Arm’s success helps expand the market for Power10 since developers who have already re-targeted their app once will find it a lot easier to do so a second or third time.”

Addison Snell, CEO of Intersect360 Research, thought IBM’s latest system and chip fit well into IBM’s expanding enterprise AI focus. “The Power E1080 is an interesting step in IBM’s continued focus on enterprise services and hybrid cloud. Power10 has features that would be useful in HPC, such as its Matrix Math Accelerator (MMA) engines, but IBM is focusing these exclusively on AI inference now—a whiplash-inducing abandonment of HPC since the installations of Summit and Sierra, which are still among the most powerful supercomputers in the world. For enterprise AI, it makes sense to move inferencing capabilities onto the CPU, and this will be part of a general trend among CPU providers,” said Snell.

Stay tuned.

Link to IBM announcement: https://www.hpcwire.com/off-the-wire/ibm-unveils-new-generation-of-ibm-power-servers-for-frictionless-scalable-hybrid-cloud/
