UCIe 2.0 for 3D Chip Structures Offers up to 75 Times More Bandwidth Than Predecessor Spec

By Agam Shah

August 16, 2024

Advanced chips coming out of factories in the future will become significantly faster with a new interconnect specification that could provide up to 75 times more bandwidth than its predecessor. The Universal Chiplet Interconnect Express 2.0 (UCIe 2.0) is the latest spec for next-generation chips packed tightly in 3D structures.

The tighter designs will provide unprecedented improvements in speed and power efficiency.

“This is all about keeping things simple — deliver a ton load of bandwidth, but very little power and, … we’re all going to come out way ahead,” said Debendra Das Sharma, chair of the UCIe Consortium, which develops the UCIe specifications.

Facilitates the Move to 3D Designs

The chip-making industry is moving to 3D designs, in which chips are vertically stacked. The 3D structures have mini-chips that perform different functions – called chiplets – which will communicate using the UCIe 2.0 protocol.

“By 2028, chiplets – and systems of chips – will surpass the monolithic die,” said Kevin O’Buckley, senior vice president at Intel Foundry, in a canned public-relations statement published on Intel’s website. Intel didn’t cite the source for the numbers.

The UCIe 1.1 spec was designed for chips in 2D structures, but the 2.0 spec is the first for 3D structures, in which chiplets are stacked next to and on top of each other.

The three-dimensional structure will facilitate more communication channels between chiplets, whereas, in 2D structures, chiplets had to communicate linearly.

3D packaging can pack more compute elements inside a chip. Today’s PCs and servers already integrate a mixture of memory, CPUs, GPUs, AI cores, and power-management controls.

“Two main things where you’re going to see a huge impact are bandwidth and power efficiency,” Das Sharma said.

The UCIe 2.0 spec, an open standard, is significantly faster and more power-efficient than the UCIe 1.1 spec, released exactly a year ago.

The UCIe 2.0 spec also makes it viable for chip manufacturers to adopt 3D packaging. TSMC, Samsung, and Intel each have their own packaging technologies but are also working to support each other’s.

The new spec also opens the door to putting connectors directly into the substrate. For example, many companies plan to implement newer optical interconnects into the substrate so chiplets can communicate at much faster speeds.

UCIe Consortium members include a who’s who of device and chip makers, including Nvidia, Intel, AMD, Google, and TSMC. Apple isn’t a member but is expected to adopt 3D structures via TSMC packaging. The consortium was established in 2022.

Faster and More Power Efficient

Chiplets in 3D structures will have bump pitches as small as 1 micron, much tighter than the 25 to 55 microns typical of 2.5D structures.

Smaller bump pitches are critical to creating smaller chip packages and allow higher bandwidth because more wires can connect the chiplets.

“If I have a bump pitch of five microns and I go down to one micron, I have 25 times as many wires in a given area,” Das Sharma said.
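The math behind that quote is straightforward: the number of bumps that fit in a given area scales with the inverse square of the bump pitch. The short Python sketch below is illustrative only (the 5-micron and 1-micron pitches come from the quote, and the regular-grid assumption is a simplification), but it reproduces the 25x figure.

```python
# Illustrative sketch: wire (bump) count scales with the inverse square of bump pitch.
# The 5-micron and 1-micron pitches come from Das Sharma's example above.

def wires_per_mm2(bump_pitch_um: float) -> float:
    """Approximate number of bumps that fit in one square millimeter
    for a regular grid with the given pitch (in microns)."""
    bumps_per_mm = 1000.0 / bump_pitch_um   # 1 mm = 1000 microns
    return bumps_per_mm ** 2

old = wires_per_mm2(5.0)   # 5-micron pitch
new = wires_per_mm2(1.0)   # 1-micron pitch
print(f"5 um pitch: {old:,.0f} bumps/mm^2")
print(f"1 um pitch: {new:,.0f} bumps/mm^2")
print(f"Scaling factor: {new / old:.0f}x")   # -> 25x, matching the quote
```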

The UCIe 2.0 protocol will support a transfer speed of up to 4 GT/s per channel, the same as the UCIe 1.1 spec. But chiplets will have more wires connecting them – much like adding more memory channels – and will sit closer to each other.

That increases the bandwidth density and cuts the amount of power required to transfer data.

Each chiplet has its own communications component – a NOC (network on chip) – that speeds up communication between chiplets.

“We start at 4000 gigabytes per second per square millimeter, and we go all the way up to 300,000 gigabytes per second — or 300 terabytes per second — per square millimeter once we hit one micron, a huge amount of bandwidth,” said Das Sharma.

The UCIe 1.1 spec topped out at a bandwidth density of 165 to 1,317 GB/s per square millimeter, but with UCIe 2.0 there is no fixed ceiling, since bandwidth grows with the number of wires connecting chiplets.
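Those density figures follow from the wire count and the per-wire data rate. A rough back-of-envelope sketch appears below; it assumes every bump carries a 4 GT/s data signal at one bit per transfer, which overstates the achievable figure somewhat, since real designs reserve bumps for power, ground, and clocking.

```python
# Rough back-of-envelope: bandwidth density as a function of bump pitch.
# Assumes every bump is a 4 GT/s data signal (1 bit per transfer), which is
# an upper bound -- real designs reserve bumps for power, ground, and clock.

GT_PER_WIRE = 4.0                # gigatransfers per second per wire (4 GT/s)
BYTES_PER_TRANSFER = 1.0 / 8.0   # one bit per transfer

def bandwidth_density_gbps_mm2(bump_pitch_um: float) -> float:
    """Approximate GB/s per mm^2 for a grid of bumps at the given pitch (microns)."""
    wires = (1000.0 / bump_pitch_um) ** 2             # bumps per mm^2
    return wires * GT_PER_WIRE * BYTES_PER_TRANSFER   # GB/s per mm^2

for pitch in (9.0, 5.0, 1.0):
    print(f"{pitch:>4} um pitch: ~{bandwidth_density_gbps_mm2(pitch):,.0f} GB/s per mm^2")
# At 1 um pitch this crude ceiling lands around 500,000 GB/s per mm^2, in the same
# ballpark as the 300,000 GB/s per mm^2 Das Sharma cites once overheads are removed.
```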

The shorter distance between chiplets also makes it orders of magnitude more power efficient than UCIe 1.1 or other industry-standard interconnects.

“This thing helps us with power efficiency because my distance is smaller… and I don’t have much by way of circuitry,” said Das Sharma.

UCIe 2.0 is expected to draw 0.05 picojoules per bit, dropping to 0.01 picojoules per bit at a 1-micron bump pitch.

“If you look into PCI Express or Ethernet, it is 5 to 10 picojoules per bit depending on who is doing the design,” Das Sharma said.
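To put those per-bit figures in perspective, the small sketch below (a simple illustration using the energies quoted above, not numbers from the spec itself) compares the energy needed to push one terabyte of data across each type of link.

```python
# Illustration: energy to move 1 TB of data at the per-bit figures quoted above.

BITS_PER_TB = 8 * 10**12   # 1 terabyte = 8 trillion bits

links_pj_per_bit = {
    "UCIe 2.0 at 1-um pitch": 0.01,
    "UCIe 2.0 (typical)":     0.05,
    "PCIe / Ethernet (low)":  5.0,
    "PCIe / Ethernet (high)": 10.0,
}

for name, pj_per_bit in links_pj_per_bit.items():
    joules = pj_per_bit * 1e-12 * BITS_PER_TB
    print(f"{name:<24} {joules:8.2f} J per TB moved")
# Moving 1 TB costs roughly 0.08 J over a 0.01 pJ/bit link versus 80 J at 10 pJ/bit,
# a factor of 1,000 difference in interconnect energy.
```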

UCIe 2.0 adds new tools to manage, discover, and test chiplets throughout their lifecycle, helping with validation, deployment, and upgrades. That will allow chip makers to get a handle on manufacturing and performance issues.

“DFX features in UCIe 2.0 provide a standardized approach to improving testability, manufacturability, and reliability across different chiplet designs and manufacturers,” the UCIe Consortium said in a specification document.

The spec will support CXL, PCIe, and other established protocols. However, many companies, including Nvidia and Ayar Labs, are developing their own interconnects, which they can layer on top of UCIe 2.0.

“You can also map your own proprietary protocol on top of this — some people want to use it for their own scale-up kind of connectivity,” Das Sharma said.

Timeline

There’s no clear timeline for when chips based on UCIe 2.0 will reach the market, but it will take time. The UCIe 1.0 interconnect is still far from widespread implementation, though Intel showed off a test chip built on its Intel 3 process last year.

“Individual member companies decide in terms of their own leads … if there is a well-defined spec, they can implement it, and then they will have the products out,” Das Sharma said.

The UCIe Consortium is releasing new specs at a one-year cadence, but there’s no clear timeline for when the follow-up spec to 2.0 will be released.

Das Sharma said there was enough demand for UCIe 2.0, so the spec was released.

The consortium has also established working groups to extend the interconnect to automotive companies looking for faster in-vehicle connections.
