Europe’s Chip Sovereignty Altering US Chip Companies’ Exascale Approach

By Agam Shah

November 16, 2022

Europe’s sovereign approach to exascale computing is complicating plans for U.S. chipmakers to break through in the market — and, in the process, empowering local chipmakers.

For one, European chip startup SiPearl is emerging as an early beneficiary amid efforts by the U.S. and EU to weaponize semiconductors and create the world’s fastest computers.

SiPearl, which is based in France, is becoming a go-to company for the world’s top chipmakers looking to pack their proprietary accelerators into Europe’s upcoming top-end systems. The company is spearheading Europe’s plans to develop primary processors for exascale, with SiPearl’s made-in-Europe Rhea CPUs on the EU’s roadmap for future exascale computers.

Like the U.S. and China, Europe is seeking chip independence as governments turn semiconductors into political bargaining chips. The EU is intensifying efforts to make supercomputers and exascale systems with homegrown processors and components, while cutting reliance on foreign technology.

Two of the fastest European supercomputers on the Top500 list – Lumi and Leonardo – are based on proprietary x86 chips from U.S. companies AMD and Intel. As Europe shifts to Rhea, Intel, AMD and Nvidia see SiPearl’s chip as a gateway to put their GPUs and other accelerators into the EU’s exascale systems.

SiPearl already has partnerships with Intel and Nvidia, and this week announced a partnership with AMD, which wants to expand the market for its Instinct GPUs to European supercomputers.

AMD’s Instinct GPUs already power the world’s first exascale system, Frontier, at Oak Ridge National Laboratory. The SiPearl-AMD partnership revolves around making Rhea compatible with AMD’s Instinct accelerators by improving the ROCm parallel programming framework. AMD’s enterprise GPUs already work with x86 chips, and the focus will be on adding Arm compatibility to the GPUs.
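To make that concrete, ROCm exposes AMD’s GPUs through the HIP programming model, and the host-side code is ordinary C++ that simply has to compile for whatever CPU sits in the node – x86 today, an Arm part such as Rhea once the toolchain and libraries are ported. The sketch below is not SiPearl or AMD code; it is a minimal, generic HIP vector-add shown purely to illustrate the host-plus-kernel split that the porting work targets.

```cpp
// Minimal HIP vector-add sketch (illustrative only, not SiPearl/AMD code).
// The host code is plain C++ compiled for the node's CPU (x86 or Arm);
// only the kernel runs on the Instinct GPU.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // Allocate device buffers and copy inputs to the GPU.
    float *da, *db, *dc;
    hipMalloc(&da, n * sizeof(float));
    hipMalloc(&db, n * sizeof(float));
    hipMalloc(&dc, n * sizeof(float));
    hipMemcpy(da, a.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, b.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Launch the kernel across the whole vector.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    hipLaunchKernelGGL(vector_add, dim3(blocks), dim3(threads), 0, 0, da, db, dc, n);
    hipDeviceSynchronize();

    hipMemcpy(c.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    std::printf("c[0] = %.1f\n", c[0]);  // expect 3.0

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```

The porting effort Notton describes is largely about making this kind of code – and the compilers, runtimes and math libraries underneath it – build and perform well when the host side is compiled for Arm rather than x86.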

“You have to spend lots of time in terms of integration testing and optimization. That’s what we do now with OneAPI [with Intel] and Nvidia with CUDA,” said Philippe Notton, CEO of SiPearl, in an interview with HPCwire at SC22 in Dallas. He added that because Nvidia has already done some GPU porting work on Arm, making Nvidia’s GPUs compatible with the Arm-based Rhea should be considerably easier.

The partnership with AMD will help SiPearl provide a wider range of GPUs along with Rhea to its high-performance computing customers, Notton said, adding that the AMD partnership is very similar to what SiPearl did with Intel on OneAPI.

“We have a dedicated team on both ends to ensure that basically our chip and Intel’s chip can work [with] OneAPI. And that is what we have just announced for AMD,” Notton said.
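OneAPI’s core programming model is SYCL, where the same single-source C++ runs on whatever device the runtime selects, so in principle only the host compilation target changes between an x86 and an Arm node. The snippet below is a generic, minimal SYCL example included purely as an assumed illustration of that model – it is not code from SiPearl or Intel.

```cpp
// Minimal SYCL vector-add sketch (illustrative only, not SiPearl/Intel code).
// The same single-source C++ targets whichever device the runtime picks;
// the host side just needs to be compiled for the node's CPU (x86 or Arm).
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    const size_t n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q;  // default device: a GPU if present, otherwise the CPU
    {
        // Buffers wrap the host data; results copy back when they go out of scope.
        sycl::buffer<float> ba(a.data(), sycl::range<1>(n));
        sycl::buffer<float> bb(b.data(), sycl::range<1>(n));
        sycl::buffer<float> bc(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor xa(ba, h, sycl::read_only);
            sycl::accessor xb(bb, h, sycl::read_only);
            sycl::accessor xc(bc, h, sycl::write_only);
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                xc[i] = xa[i] + xb[i];
            });
        });
    }

    std::cout << "c[0] = " << c[0] << "\n";  // expect 3
    return 0;
}
```

The “dedicated team on both ends” work Notton mentions would sit below this layer – drivers, runtimes, compilers and libraries – rather than in application source like the example above.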

Beyond hardware, the EU’s road to exascale includes compilers, runtimes and system integration tools for chips and accelerators. The EU is funding multiple efforts, including the EPI (European Processor Initiative), EUPEX (European Pilot for Exascale) and EuroHPC. Participants in these efforts include academics, researchers and European commercial organizations such as Atos, which plans to build exascale systems under its BullSequana line.

The Rhea processor is at the center of the EU’s blueprints to create exascale systems, and software compatibility with Rhea is important for Nvidia, AMD and Intel if they are to win more European exascale business. The first exascale system based on the Rhea CPU could go live in 2023 or 2024, according to a roadmap published by EPI.

EUPEX has built reference systems around Rhea processors, Atos’ BXI (BullSequana eXascale Interconnect) switches and OpenSequana racks. Each rack holds up to 96 Rhea CPUs and 32 GPUs, though there is no clarity yet on which GPUs will be used in the EUPEX systems.

The first European exascale supercomputer is expected to be JUPITER (Joint Undertaking Pioneer for Innovative and Transformative Exascale Research), which will be installed at the campus of Forschungszentrum Jülich (FZJ) and will be operated by the Jülich Supercomputing Centre. The system is expected to go live in 2023, and Jülich is sending out RFPs as part of its hardware procurement program.

JUPITER’s hardware specifications are not yet clear, and it is not certain whether the system will be built using sovereign European technology. The system is being built ahead of the EU’s plans to develop an all-European supercomputer.

There is a desire to shift to made-in-Europe technologies in JUPITER, but its final configuration depends on the kinds of proposals the hardware makers submit, Estela Suarez, head of the next-generation architectures and prototypes research group at Forschungszentrum Jülich, told HPCwire on the conference floor.

“This system is half financed by EuroHPC, and that’s definitely on the agenda to try to have as much European technology as possible on the system. At the end, it also depends on what is offered within the procurement, what the vendor brings… and how things turn out. But yes, we would definitely like to have some European technology features,” Suarez said.

The Jülich Supercomputing Centre is already testing a range of chips, including processors from Intel and Nvidia, as well as systems such as quantum annealers from D-Wave.

SiPearl was founded in 2019 with funding from the European Union. Today, the company has offices in six locations, and though it started with EU funding, it is a for-profit organization.

“The way you manage a chip company is you need to go quite high in terms of value. If you just sell a chip without the software, you’re dead. The more you do, the better it is,” Notton said.

The EPI’s general-purpose processor roadmap includes Rhea2, to be made on a 5nm process in 2024, and a third-generation chip beyond 2025. The Rhea chip is based on Arm – previously European-owned technology – and is also viewed as a stopgap until European companies can realistically switch over to RISC-V, an instruction set architecture that is free to license. The Rhea chips mix Arm CPUs with RISC-V controllers.

The RISC-V architecture is not yet ready for high-performance computing due to software, compatibility and other issues, said Krste Asanović, a professor at the University of California, Berkeley, and one of the developers of RISC-V, during a talk at the Supercomputing 2022 conference.

SiPearl’s Notton said that licensing an Arm design was the quickest way to stand up an architecture to help Europe meet its goals to develop a sovereign chip.

“Arm is the only core that is competitive, beyond x86, if you want to have a quick time to market to be able to develop a chip in three years and not in 10 years,” Notton said.

While the EU has its own CPU, it may need to rely on U.S. companies for accelerators to reach its exascale goals. The EPI has a parallel effort underway to develop high-performance accelerators, though it may be years until it develops a competitive product.

EPI’s high-performance accelerator, called EPAC, is based on RISC-V, which meets the European goal of being non-proprietary and easy to replicate. EPI also wants to make sure that its homegrown accelerators can easily connect to other chips developed by EPI.

“We should not transmit the message that EPI is getting the next Nvidia or whatever. This requires a lot of time, a lot of technology. Maybe we will arrive there or not,” Filippo Mantovani, senior researcher at Barcelona Supercomputing Center, told HPCwire on the show floor.

The bigger goal is to build a strong base of knowledge in designing chips and accelerators, which is important for developing European expertise and a thriving ecosystem in the region, said Mantovani, who also leads EPI’s effort to develop accelerators.

“What is really important is that we have European support for building the whole chain of knowledge that you need to make good chips. It is not enough to have somebody with a good architectural idea on a blackboard. You need the whole chain of knowledge – from the architecture, to the mapping, to the technology, to the tape out, to the compiler, to the software and so on,” Mantovani said.
