Intel Teams with National Labs for Major Leaps in Semiconductors, Neuromorphics

By Oliver Peckham

October 2, 2020

After facing setbacks in its semiconductor execution, including a delay in its 7nm node, Intel is announcing two partnerships with U.S. national labs that reaffirm its commitment to advanced computing: first, a partnership with the DOE and the national labs that focuses on developing next-generation semiconductors and manufacturing techniques; and second, a partnership with Sandia National Laboratories to test the scale-up potential of neuromorphic computing.

Creating the next generations of semiconductors, manufacturing and computing

Intel’s new general partnership with the DOE and U.S. national labs is a “long-term agreement” – 10+ years – that will aim to “further support the United States’ leadership in advanced computing systems, including exascale, neuromorphic and quantum computing.” The practicalities of the partnership are far more specific, with Intel and the national labs working to create “next-generation semiconductor technologies, manufacturing processes, advanced system design and software enablement.”

The deal is still in its early stages; at the moment, task forces composed of researchers from Intel, the DOE and Argonne National Laboratory are working out the particulars of the program’s initiatives. Even in these early stages, however, the task forces are guided by three core focus areas: first, R&D for future silicon development, including materials science and system-level modeling techniques; second, collecting architecture co-design requirements that will enable researchers to build the “next several generations” of HPC and AI architectures; and finally, software ecosystem development for exascale computing through the Aurora Center of Excellence. This final focus area will involve expanding open standards to “enable the broad use of Intel CPUs and GPUs” (along with other accelerators) in exascale applications.

Scaling up neuromorphic computing

The Loihi neuromorphic chip. Image courtesy of Intel.

Intel’s neuromorphic chip, which aims to directly mimic the behavior of the human brain, has already learned to smell, to touch, and even to assist children who use wheelchairs. Intel is currently on the fifth generation of its neuromorphic efforts – a chip named Loihi, introduced in 2017. Earlier this year, Intel scaled Loihi up into a system called Pohoiki Springs, a behemoth containing 768 Loihi chips, each with 128 cores and around 131,000 simulated computational “neurons” (totaling some 100 million digital neurons system-wide). Pohoiki Springs is very much a trial balloon, if a large one – it was initially made available only to members of the Intel Neuromorphic Research Community (INRC) via the cloud.

The Pohoiki Springs system. Image courtesy of Intel.

Intel’s latest scaled-up neuromorphic system deployment, however, will be another story entirely. Through a three-year agreement with Sandia National Laboratories, Intel will supply a Loihi-based system to “lay the foundation for the later phase of the collaboration”: large-scale research on Intel’s forthcoming neuromorphic architecture and the delivery of Intel’s largest neuromorphic system to date. While the first system will amount to some 50 million computational neurons (and presumably contain around 384 Loihi chips), the second “could exceed … one billion neurons in computational capacity” – the equivalent of over 7,600 Loihi chips – “if research progresses as expected.”
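The chip and neuron counts above can be sanity-checked with quick arithmetic, assuming the widely cited figure of 131,072 neurons per Loihi chip (128 cores × 1,024 neurons per core) – an illustrative back-of-the-envelope sketch, not official Intel sizing:

```python
# Back-of-the-envelope check of the neuron and chip counts cited above,
# assuming ~131,072 neurons per Loihi chip (128 cores x 1,024 neurons each).
NEURONS_PER_CHIP = 128 * 1024  # 131,072

pohoiki_springs = 768 * NEURONS_PER_CHIP                   # ~100.7 million neurons
chips_for_sandia = 50_000_000 / NEURONS_PER_CHIP           # ~381 chips for 50M neurons
chips_for_billion = 1_000_000_000 / NEURONS_PER_CHIP       # ~7,630 chips for 1B neurons

print(f"Pohoiki Springs: {pohoiki_springs:,} neurons")
print(f"50M-neuron system: ~{chips_for_sandia:.0f} chips")
print(f"1B-neuron system: ~{chips_for_billion:.0f} chips")
```

These figures line up with the article’s numbers: roughly 100 million neurons for Pohoiki Springs, a bit under 400 chips for the 50-million-neuron system, and over 7,600 chips for the billion-neuron target.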

Intel’s rapid scale-up of neuromorphic computing in the past several years signals confidence in the novel technology – confidence that Intel believes is well-earned, given early results that it says demonstrate energy efficiency on Pohoiki Springs that is four orders of magnitude better than state-of-the-art CPUs. Sandia, for its part, aims to identify the areas where neuromorphic computing can best be applied to help address some of the most pressing issues in the U.S., such as energy and national security.

“By applying the high-speed, high-efficiency and adaptive capabilities of neuromorphic computing architecture, Sandia National Labs will explore the acceleration of high-demand and frequently evolving workloads that are increasingly important for our national security,” said Mike Davies, director of Intel’s Neuromorphic Computing Lab. “We look forward to a productive collaboration leading to the next generation of neuromorphic tools, algorithms, and systems that can scale to the billion neuron level and beyond.”

To put Intel’s neuromorphic computing through its paces, Sandia will evaluate the scaling of a variety of spiking neural network workloads, ranging from physics modeling to large-scale deep networks, which serve as good indicators of the chips’ suitability for particle interaction simulations. Sandia National Laboratories is one of the three national laboratories serving the National Nuclear Security Administration (NNSA), which – as the steward of the nation’s nuclear weapon stockpile – is particularly interested in particle and fluid simulations, and just announced another major supercomputer from HPE (powered by forthcoming Sapphire Rapids Xeons).
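Unlike conventional neural networks, the spiking networks these workloads use communicate through discrete spikes over time rather than continuous activations. The basic unit in many such models is a leaky integrate-and-fire neuron, sketched below as a generic illustration (this is textbook SNN behavior, not Intel’s Loihi programming interface):

```python
def lif_neuron(input_currents, threshold=1.0, leak=0.9):
    """Simulate one leaky integrate-and-fire neuron over discrete time steps.

    The membrane potential decays by a leak factor each step, accumulates
    the incoming current, and emits a spike (then resets to zero) whenever
    it crosses the firing threshold.
    """
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)   # fire a spike
            potential = 0.0    # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.5, 0.5, 0.5, 0.0, 0.6, 0.6]))  # → [0, 0, 1, 0, 0, 1]
```

Because neurons only do work when spikes arrive, hardware like Loihi can stay largely idle between events – the source of the energy-efficiency claims cited above.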

“Sandia National Labs has long been at the leading edge of large-scale computing, using some of the country’s most advanced high-performance computers to further national security. As the need for real-time, dynamic data processing becomes more pressing for this mission, we are exploring entirely new computing paradigms, such as neuromorphic architectures,” said Craig Vineyard, principal member of the technical staff at Sandia. “Our work has helped keep Sandia National Labs on the forefront of computing, and this new endeavor with Intel’s Neuromorphic Research Group will continue this legacy into the future.”

These two Intel/government partnerships were accompanied by a third: Intel has also secured its second military chip contract, through which it will use its packaging technology to help the U.S. military develop chip prototypes.

To read more about recent developments in Intel’s exascale system development and neuromorphic computing advances, visit one of the articles below:

Aurora’s Troubles Move Frontier into Pole Exascale Position

Intel’s Neuromorphic Chip Scales Up (and It Smells)

Get a Grip: Intel Neuromorphic Chip Used to Give Robotics Arm a Sense of Touch

Neuromorphic Computing to Assist Children Who Use Wheelchairs