DOE Awards Argonne Lab $4M for Energy-Efficient Microchip Research

February 23, 2024

Feb. 23, 2024 — The microchips inside electronic devices like cell phones and computers are incredibly small, and the transistors — the tiny electrical switches inside microchips — are approaching the atomic scale. Today’s microchips pack over 100 million transistors into an area the size of a pinhead.

Argonne Senior Scientist Anil Mane holds a 300 mm silicon wafer coated by atomic layer deposition using the instrument in the background. Image: Argonne National Laboratory.

Despite their almost unimaginably small size, the world’s microelectronic devices collectively consume an enormous amount of energy, and that demand is growing exponentially. Predictions indicate that microelectronics could consume 20% of the world’s energy by 2030.

Averting this crisis hinges on developing new transistors, materials and manufacturing processes to create ultra-low-energy microchips. Recently, the U.S. Department of Energy (DOE) awarded DOE’s Argonne National Laboratory $4 million to fund research that will use atomic layer deposition (ALD) to advance new materials and devices for creating microchips that use up to 50 times less energy than current chips.

Set to launch in early 2024, the project — which will last two and a half years — is funded by the Energy Efficient Scaling for Two Decades (EES2) program of the DOE’s Advanced Materials and Manufacturing Technologies Office. Argonne will partner with Stanford University, Northwestern University and Boise State University on the project. Argonne Distinguished Fellow Jeffrey Elam, who founded and directs Argonne’s groundbreaking ALD research program, will lead the research team.

“It is only recently that microelectronics started using a large fraction of the Earth’s electricity,” said Elam. “This is an urgent problem. DOE is committed to finding energy-efficient solutions that will flatten the demand curve for electricity use by microelectronics.”

Advanced technology, including the explosion of artificial intelligence (AI), is accelerating the rate at which computing consumes energy. AI applications analyze massive amounts of data and consume large amounts of electricity. As AI becomes widespread, the enormous data centers that power those applications will see significant increases in energy demand. The proliferation of “smart” devices, along with their data requirements, also increases electricity use.

“Computers today spend over 90% of their energy shuttling data back and forth between the memory and logic functions, which exist on separate chips,” Elam said. “This limitation is known as the ‘von Neumann bottleneck.’ Energy used to move the data is wasted as heat. As computing demand grows, we must develop low-power transistors and microchips to overcome this bottleneck and prevent an energy crisis.”
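
To see why data movement dominates, a rough back-of-the-envelope calculation helps. The short Python sketch below uses illustrative, assumed energy figures (not measurements from the Argonne project) to show how moving bits between separate memory and logic chips can swamp the energy of the computation itself.

# Back-of-the-envelope illustration of the von Neumann bottleneck described above.
# Both per-operation energies are assumed, illustrative values only.
PJ_PER_LOGIC_OP = 1.0      # assumed energy of one on-chip arithmetic operation, in picojoules
PJ_PER_BIT_OFFCHIP = 10.0  # assumed energy to move one bit between separate logic and memory chips

def movement_share(bits_per_op):
    """Fraction of energy spent moving data if each operation fetches bits_per_op bits off-chip."""
    compute = PJ_PER_LOGIC_OP
    movement = bits_per_op * PJ_PER_BIT_OFFCHIP
    return movement / (compute + movement)

print(f"{movement_share(32):.1%}")  # one 32-bit word per operation -> ~99.7% of energy is data movement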

The project grew from Argonne’s Laboratory Directed Research and Development Program activities and from Threadwork, a project funded by the DOE’s Office of Science. Threadwork applies co-design to develop neuromorphic devices and terahertz interconnects that will enable high-performance detectors for high energy physics and nuclear physics.

Using Atomic Layer Deposition to Redesign the Microchip

Argonne is a pioneer in ALD, a thin-film deposition technique used extensively in microelectronics manufacturing. ALD produces extremely thin layers — only one atom thick — allowing microelectronics to be made with great precision. These films are considered 2D since they have length and width but essentially no thickness, and a wide variety of them can be deposited by ALD on complex, 3D substrates.
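
A simple way to see where ALD’s precision comes from: each ALD cycle is self-limiting and adds roughly one atomic layer, so film thickness is controlled simply by counting cycles. The Python sketch below illustrates this idealized picture; the growth-per-cycle value is an assumed, representative number, not a figure from the Argonne work.

# Idealized ALD growth model: thickness scales linearly with the number of cycles.
import math

GROWTH_PER_CYCLE_NM = 0.1  # assumed growth per cycle, in nanometers (roughly one atomic layer)

def thickness_nm(cycles):
    """Film thickness after a given number of self-limiting ALD cycles."""
    return cycles * GROWTH_PER_CYCLE_NM

def cycles_for(target_nm):
    """Cycles needed to reach a target thickness, rounded up."""
    return math.ceil(target_nm / GROWTH_PER_CYCLE_NM)

print(round(thickness_nm(30), 2))  # 30 cycles -> about a 3 nm film
print(cycles_for(1.0))             # about 10 cycles for a 1 nm film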

“Atomic layer deposition is an ideal technology for fabricating ultra-low power electronics,” said Elam, an ALD researcher for more than 20 years. The technique’s precision and versatility also make it attractive for applications including lithium-ion batteries, solar cells, catalysts and detectors.

In this project, Argonne scientists will use ALD to redesign the microchip and eliminate the back-and-forth shuffling of data. Scientists want to close the gap between the microprocessor, or “brain,” and the memory chips. 3D integrated circuits can stack the memory and logic layers on top of each other, pancake-style. This could potentially reduce energy usage by 90%.
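
The energy benefit of stacking comes largely from distance: data that once traveled over long connections between separate chips instead moves through short vertical links between stacked layers, and the energy of moving a bit scales roughly with the length (and capacitance) of the wire it crosses. The Python sketch below illustrates the idea with assumed per-bit energies chosen only for illustration; they are not results from this project.

# Illustrative comparison of data-movement energy: separate chips vs. 3D-stacked layers.
# Both per-bit figures are assumptions for illustration only.
PJ_PER_BIT_OFFCHIP = 10.0  # assumed cost to move a bit between separate logic and memory chips
PJ_PER_BIT_STACKED = 1.0   # assumed cost to move a bit through a short vertical link in a 3D stack

def saving(off_chip_pj, stacked_pj):
    """Fractional reduction in data-movement energy from stacking."""
    return 1.0 - stacked_pj / off_chip_pj

print(f"{saving(PJ_PER_BIT_OFFCHIP, PJ_PER_BIT_STACKED):.0%}")  # 90% with these assumed numbers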

Currently, silicon is the semiconducting material used to make both memory chips and microprocessors; a semiconductor is a material that controls the flow of electric current. But the 3D integration necessary to stack the layers is extremely difficult to achieve with silicon.

To overcome this limitation, researchers are developing an alternative, 2D semiconducting material, molybdenum disulfide (MoS2), to replace silicon. Building on previous research, Argonne scientists are using ALD to create atomically precise MoS2 films. “We can create extremely thin, 2D MoS2 sheets. These sheets will replace the bulky, 3D silicon thin films used in today’s transistors. This leaves more room on the microchip to effectively stack the memory and logic together, dramatically reducing energy,” Elam said.

New Electronic Devices Increase Energy Efficiency

Argonne, in collaboration with Boise State University, developed ALD methods for creating 2D MoS2 films. The team will demonstrate the use of MoS2 to create 2D semiconductor field-effect transistors (2D-FETs) that can be stacked in 3D. 2D-FETs operate like conventional transistors but are built from 2D rather than 3D materials. This approach allows the integration of memory and logic functions in ways not possible with silicon.

Simultaneously, Argonne scientists are demonstrating the use of ALD MoS2 in memtransistors, electronic components used to build neuromorphic circuits. Neuromorphic circuits mimic the connections between neurons in the brain to create microchips that use significantly less energy. Although the technology is relatively new, neuromorphic circuits have the potential to use a million times less energy than conventional silicon devices.
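
As a rough intuition for how neuromorphic circuits save energy, consider the spiking neurons they emulate: a neuron integrates its inputs and only fires (triggering downstream activity, and downstream energy use) when a threshold is crossed. The Python sketch below is a generic leaky integrate-and-fire model from the neuromorphic-computing literature, not the Argonne memtransistor design.

# Generic leaky integrate-and-fire neuron: activity, and hence energy use, is event-driven.
def leaky_integrate_and_fire(inputs, leak=0.9, threshold=1.0):
    """Return a 0/1 spike train produced by a stream of input currents."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate the input, with leakage
        if potential >= threshold:              # fire only when the threshold is crossed
            spikes.append(1)
            potential = 0.0                     # reset after a spike
        else:
            spikes.append(0)
    return spikes

print(leaky_integrate_and_fire([0.3, 0.3, 0.6, 0.0, 0.2, 0.9]))  # -> [0, 0, 1, 0, 0, 1]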

Both 2D-FETs and memtransistors have been successfully demonstrated at the lab scale by growing MoS2 at high temperatures. Argonne scientists want to take the technology to the next level. Commercial manufacturing will require MoS2 to be deposited on large, pizza-sized wafers at low temperatures. In this DOE project, the research team will develop these capabilities to ensure that MoS2 ALD is compatible with current semiconductor manufacturing processes, a crucial step toward integrating the technology into future semiconductors.

Scientists at the partner institutions will use their unique expertise to advance specific areas of the project. Professor Eric Pop at Stanford University will develop 2D-FET devices, Professor Mark Hersam at Northwestern University will develop memtransistors that utilize the ALD MoS2, and Professor Elton Graugnard at Boise State University will perform advanced characterization of the ALD MoS2 coatings to evaluate the quality of materials.

In parallel with the experimental work, Argonne is using modeling and simulation to design energy-efficient devices that incorporate ALD MoS2. This work will leverage high-performance computers at the Argonne Leadership Computing Facility, a DOE Office of Science user facility at Argonne, to model and simulate circuits integrating 2D materials. The simulations will quantify energy savings and benchmark the devices’ performance against current silicon technologies.

Researchers seek to advance the stacked devices toward a pilot-scale demonstration, with the goal of marketing them for commercial use by the microelectronics industry. The project is a new facet of Argonne’s growing portfolio of research and development using ALD technology to address a wide variety of energy challenges.

The Argonne team also includes Physicist Moinuddin Ahmed, Principal Materials Scientist Angel Yanguas-Gil, Computer Scientist Xingfu Wu, Assistant Computer Scientist Sandeep Madireddy and Senior Materials Scientist Anil Mane. The project builds on Argonne’s extensive work advancing the science and technology needed to create the next generation of microelectronics. Along with innovations in energy-efficient microelectronics and architectures, scientists are developing new approaches to energy-efficient and environmentally friendly manufacturing for microelectronics.

About ALCF and Argonne National Laboratory

The Argonne Leadership Computing Facility (ALCF) provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines. Supported by the U.S. Department of Energy’s (DOE’s) Office of Science, Advanced Scientific Computing Research (ASCR) program, the ALCF is one of two DOE Leadership Computing Facilities in the nation dedicated to open science.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.


Source: Beth Burmahl, Argonne Lab
