10nm, 7nm, 5nm… Should the Chip Nanometer Metric Be Replaced?

By Doug Black

June 1, 2020

The biggest cool factor in server chips is the nanometer. AMD beating Intel to a CPU built on a 7nm process node* – with 5nm and 3nm on the way – has been instrumental to AMD’s datacenter market resurgence. Nanometer cachet is incalculable: on it, companies, careers and fortunes are made and lost.

But that cachet is coming into question. As with the Linpack benchmark for ranking HPC system performance, a growing chorus of voices is calling for other ways to assess and characterize chips.

Before further discussion, let’s define our terms. A nanometer is one billionth of a meter, also expressed as 0.000000001 or 10⁻⁹ meters (for perspective, hair grows at roughly 1 nm per second**). In chip design, “nm” refers to the length of a transistor gate – the smaller the gate, the more processing power that can be packed into a given space.

Some chip technologists argue that the nanometer is too narrow a measure of chip advancement. Writing in an IEEE journal last month, nine computer scientists from MIT, Stanford, the University of California, Berkeley and Taiwan-based chip manufacturer TSMC put forward a new “density metric” designed to be a more holistic gauge. The nanometer metric “is all but obsolete today,” they said, because it does not simultaneously account for logic, memory and packaging/integration technologies. What is needed, they argue, is a metric that captures a broader set of system-level performance indicators, connecting “the device technology advances to system-level benefits in a comprehensive fashion while acknowledging the synergy between various components.”

Available at https://purl.stanford.edu/jj585np1768, the data “suggest that a balanced growth between logic, memory and connectivity has been an implicit guide for computing system optimization,” according to the authors.

Classifying chips by transistor gate length – the “node number” – has been around since the 1960s. But over the past decade, “driven by competitive marketing,” the nanometer metric has been pushed, pulled and distorted in several ways, the authors stated. For one, the node number “has become decoupled from, and can be several times smaller than, the actual minimum gate length.” For another, different semiconductor manufacturers brand similar logic technologies with “different node labels, thus creating further confusion.”

They also point out that while 5nm chips are slated to go into production next year, the next-generation node will be 3nm, so “we will soon run out of nanometers for naming future generations of technologies.” More importantly, 3nm is only about 12 atoms across, so small that it could raise doubts about continued progress by suggesting that semiconductor technology is nearing its physical limits.

“Yet, it is a foregone conclusion that the semiconductor industry will continue to make progress,” the authors asserted, “because there are still many ways to advance semiconductor technology beyond 2-D miniaturization and also because societal demand for more capable electronic systems is insatiable.”

In place of the nm node number, the authors’ proposed “LMC density metric” would be a three-part number reflecting the relationship between density and “benefits for more advanced computing systems — the primary driver for progress in semiconductor technology.”

The three numbers:

  • DL: density of logic transistors, in #/mm²
  • DM: bit density of main memory (currently the off-chip DRAM density), in #/mm²
  • DC: density of connections between the main memory and logic, in #/mm²

The authors said that, based on design specifics, today’s leading-edge technologies can be characterized as [DL, DM, DC] = [38M, 383M, 12K].
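To make the shape of the metric concrete, here is a minimal Python sketch (not from the paper; the class and helper names are invented for illustration) that bundles the three densities and renders them in the bracketed [DL, DM, DC] style quoted above, using the leading-edge figures as an example.

    from dataclasses import dataclass

    @dataclass
    class LMCDensity:
        """Hypothetical container for the three-part LMC density metric.

        d_logic   (DL): logic transistor density, per mm^2
        d_memory  (DM): main-memory bit density, per mm^2
        d_connect (DC): logic-to-memory connection density, per mm^2
        """
        d_logic: float
        d_memory: float
        d_connect: float

        def label(self) -> str:
            """Render the metric in the bracketed [DL, DM, DC] form."""
            parts = (self.d_logic, self.d_memory, self.d_connect)
            return "[" + ", ".join(_abbrev(p) for p in parts) + "]"

    def _abbrev(value: float) -> str:
        """Shorten a density using the K/M suffixes seen in the article."""
        for suffix, scale in (("G", 1e9), ("M", 1e6), ("K", 1e3)):
            if value >= scale:
                return f"{value / scale:.0f}{suffix}"
        return f"{value:.0f}"

    # The leading-edge figures quoted above: [38M, 383M, 12K]
    leading_edge = LMCDensity(d_logic=38e6, d_memory=383e6, d_connect=12e3)
    print(leading_edge.label())  # -> [38M, 383M, 12K]

Any real comparison would, of course, depend on how each manufacturer measures and reports the three densities; the sketch only illustrates the form of the label, not a standardized measurement procedure.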

“These three components of the system metric contribute to the overall speed and energy efficiency of computing systems,” the authors said. “This balance is implicit in computer architectures and allows the improvement of overall system performance in an optimal fashion.” They also noted that historical data show a “correlated growth” in logic, memory and connectivity, suggesting “a balanced increase of DL, DM, and DC for the decades to come.”

The metric places particular focus on the integration of logic, memory and connectivity, the authors said. “In addition to being consistent with historical trends and our intuition about computing systems, the LMC density metric is applicable and extensible to future logic, memory, and packaging/integration technologies.”

Acknowledging that chip vendors may “continue to use their preferred labels to market their technologies,” the authors said the LMC density metric could foster “clear communications” by serving “as a common language to gauge technology advances among semiconductor manufacturers.”

Above all, the authors said, the LMC density metric “takes the semiconductor industry out of the quandary of using the vanishing nanometer as a label to describe advancements in semiconductor technology that will remain very important to society for a very long time to come.”

*  Roughly corresponding to Intel’s (delayed) 10nm process node

**  See comments from Phil Moriarty, professor of physics, School of Physics and Astronomy, University of Nottingham
