Anders Dam Jensen on HPC Sovereignty, Sustainability, and JU Progress

By Steve Conway

April 23, 2024

The 2024 EuroHPC Summit took place recently in Antwerp, with attendance up substantially from 2023, to 750 participants. HPCwire asked Intersect360 Research senior analyst Steve Conway, who closely tracks HPC, AI, and related developments in Europe and attended the event, to interview Anders Dam Jensen, Executive Director of the EuroHPC Joint Undertaking (JU), on the organization’s latest activities.

HPCwire: Since I began working on studies for the Commission in 2010, Europe has advanced strongly in HPC to become one of the global leaders, not just in funding and deploying leadership-class supercomputers such as LUMI and others, but increasingly also in producing technologies used in the supercomputers. How did this major progress happen?

Anders Jensen:  HPC has been identified by European policymakers as a game-changing technology, enabling groundbreaking research and innovation across various scientific disciplines. As the performance of HPC technologies has improved, their use has drastically increased in many scientific areas, and therefore, their strategic potential has also increased. HPC is now recognized as a powerful tool that is essential to pushing back the boundaries of science and as a highly strategic resource that is critical in guaranteeing Europe’s digital sovereignty.

Consequently, the European Union established the European High-Performance Computing Joint Undertaking (EuroHPC JU) in 2018. As a legal and financial entity with a formidable budget of around 7 billion euros for the period 2021-2027, provided by the European Union and national contributions, the EuroHPC JU has since been able to procure its own infrastructure and invest sustainably in developing a full European supercomputing ecosystem. Thanks to substantial financial resources and appropriate legal frameworks, the JU began investing in a complete, ‘made in Europe’ supercomputing supply chain: from processors, hardware, and software to the applications that run on these supercomputers, along with the know-how needed to build strong European expertise.

Anders Jensen, executive director of the EuroHPC Joint Undertaking

HPCwire: What’s been the impact of these investments?

Anders Jensen: The creation of the JU has undoubtedly been a major turning point in the European HPC strategy. Now, we can observe the effects of such an ambitious strategy. Europe is now officially considered a world leader in the field, with three EuroHPC supercomputers ranked among the world’s top 10 most powerful supercomputers. In parallel, the JU has significantly invested in a whole European stack of technologies: creating European CPUs for future systems with projects like the European Processor Initiative, developing European interconnect technologies, and developing software systems and applications. It is incredibly rewarding to see the strong and ambitious European political vision becoming a reality.

HPCwire: Speaking of producing HPC technology, where does the goal of HPC sovereignty stand today?

Anders Jensen: As I just highlighted, the primary raison d’être of the EuroHPC JU is to increase the digital autonomy and sovereignty of the European Union. Building a European sovereign supercomputing ecosystem is critical to ensure that research and science can remain and be done in Europe and, most importantly, to bolster Europe’s competitiveness and resilience toward foreign technologies and imports.

HPCwire: HPC sovereignty is a major goal, but I assume it will take some time to achieve fully.

Anders Jensen: HPC sovereignty remains more than ever a topical issue for everyone, not just Europe. As an example, I would point to what is happening in the chips race and how technology companies are currently hunting for alternative sources of the GPUs needed for AI.

A strategy like this cannot be delivered overnight, and in Europe we are still dependent on foreign technologies, especially for processors. But a paradigm shift is taking place. JUPITER’s general-purpose cluster module will rely on the first generation of the SiPearl Rhea1 processor, a European-developed technology supported by the EuroHPC JU. And that’s not all: our first exascale system will also rely on software stacks and European-developed technologies created by the Jülich Supercomputing Centre within the framework of the DEEP projects, which are funded by the EuroHPC JU. In a nutshell, HPC sovereignty remains paramount for Europe; it will not be achieved fully anytime soon, but the first tangible achievements are now at hand.

HPCwire: Historically, there has been a large divide between HPC capabilities in the so-called Big 6 national economies and the other EU member states. Can you talk about how the JU and the member states have been collaborating to address this with the consortium idea and initiatives such as EuroCC and CASTIEL?

Anders Jensen: Indeed, some European countries have been working at the forefront of HPC since the start of the supercomputer era and have developed important HPC capabilities, while others have not fully developed theirs due to factors such as limited resources, national priorities, differing research focuses, or investment patterns. One of the JU’s objectives is to even out these differences in HPC expertise and experience, which can affect a country’s ability to engage in the field.

One of the founding principles of our JU is to pool together the resources of many countries to offer every participating country more opportunities than it would otherwise have. This means, for instance, that all European users can benefit from our supercomputers, no matter where they are located in Europe. It also means that smaller countries can purchase and own HPC systems. In our ecosystem, a number of world-class systems, especially the petascale and mid-range ones, are procured, co-owned, and managed by smaller countries. Smaller countries with less HPC background can also collaborate with other countries in consortia to host our biggest systems. LUMI is a very enlightening example, with 11 European countries taking part in this true European success story!

HPCwire: What about EuroCC and CASTIEL?

Anders Jensen: In parallel, we have strategic initiatives like EuroCC and its Coordination and Support Action (CSA), CASTIEL, which are addressing the expertise gap in the European HPC ecosystem and coordinating cooperation across Europe to ensure a consistent skills base. EuroCC has built a European network of more than 30 HPC competence centers across Europe. These competence centers act as hubs to promote and facilitate HPC and related technologies for a range of users from academia, industry, especially SMEs (Small and Medium-sized Enterprises), and public administration. The aim is to increase access to HPC opportunities and offer tailored solutions adapted to the local environment and delivered in national languages, in both highly experienced and less experienced HPC countries. And we have just launched a call that will further increase the number of countries participating in this network.

HPCwire: Another important goal is sustainability, especially as the HPC community begins thinking about the next major milestone, zettascale computing. Does the JU have specific goals? Are they related to Europe’s decarbonization targets for 2030 and 2050?

Anders Jensen: Since the EuroHPC JU’s inception, and in line with the EU’s ambitious plans to achieve carbon neutrality by 2050, sustainability has been enshrined in our founding regulation and has been a cornerstone of our activities. I would go even further and say that it is perhaps our trademark in the community, because speed of calculation isn’t everything!

Our commitment is to build supercomputers that are big on capability and speed and low on environmental impact, drawing on technologies such as water cooling, waste heat recycling, dynamic power saving, and next-generation energy-efficient microprocessors. All EuroHPC JU supercomputers are water-cooled, for instance, avoiding the high operational costs of air-cooled systems while also reducing the energy footprint of our systems.

HPCwire: How large a role does sustainability play in EuroHPC JU procurements?

Anders Jensen: Sustainability is taken into consideration not only when we design our procurements but also when we select entities that will host our supercomputers. It also makes financial sense, as it can help to bring the costs of electricity down. Much care has, for instance, been taken to ensure that these extremely powerful machines can operate in the most sustainable way possible. LUMI is fully powered using renewable energy. It uses natural cooling systems to cool down its processors, and all the waste heat it produces is reused for local district heating in the city of Kajaani, Finland. The same applies to MareNostrum 5, which will be fully powered with sustainable energy and whose waste heat will be recycled to heat the building where it is located in Barcelona. Another good example is MeluXina in Luxembourg, which is entirely powered by green energy from a cogeneration plant powered by waste wood.

In addition, the EuroHPC JU is also helping to redesign supercomputers from the inside out. To do so, we are, for instance, investing in the development of next-generation microprocessors that will rely on energy-efficient architectures such as the Rhea1 processor, which, as I mentioned, will power JUPITER. 

It’s gratifying to see these efforts being rewarded, with all our operational systems ranked among the greenest worldwide, with special mentions for MareNostrum 5, the sixth-greenest supercomputer in the world, just ahead of LUMI, seventh worldwide.

HPCwire: The Fortissimo initiative morphed into other programs aimed at providing HPC access and support to SMEs in Europe. Where does support for SMEs stand today, and why is it important?

Anders Jensen: Supporting European SMEs remains a strategic priority for us, as they account for 99% of all European businesses, forming the backbone of our economy and representing a huge pool of potential HPC users! Stimulating the HPC innovation potential of SMEs is essential to position SMEs as technology leaders, fuel their success, and contribute to accelerating innovation and driving Europe’s economy. Supporting and educating SMEs on how to access and exploit HPC resources is also a way to expand the HPC user base and attract new users of HPC in different application domains.

HPCwire: HPC has increasingly been moving beyond science, engineering, and other established domains to support research in the liberal arts and other areas. Is that also happening in the JU?

Anders Jensen: Widening the use of HPC is part of the EuroHPC JU mission. Just a few weeks ago, the EuroHPC JU collaborated with the EuroCC network to organize an event focused on the social sciences and humanities and how they can exploit HPC capabilities. These fields have witnessed a profound shift towards computational methodologies in recent years. From analyzing large datasets to modeling human behavior or studying social patterns and population movements, HPC has become increasingly indispensable in addressing the multifaceted challenges faced by researchers in these domains. Our role is to reach and support these non-traditional HPC users who are nevertheless curious about harnessing the power of HPC. In another vein, I really enjoyed the presentation by Holger Weiss, CEO of M.A.R.K.13 GROUP, at the EuroHPC Summit 2024, in which he explained how HPC was helping his animation studio create high-end images for animated movies. The fact that HPC can also be a cultural enabler is a very refreshing perspective for our community.

The new development is that HPC and associated technologies, such as High-Performance Data Analytics (HPDA) and AI, are increasingly becoming general-purpose, widely deployed tools for both academia and large industrial organizations. There is still work to be done, however, to ensure widespread uptake of these novel technologies by SMEs. There are a few reasons for this, such as SMEs not having the relevant HPC expertise and often lacking the necessary computational resources. Additionally, SMEs all face the same essential challenge: the high risk of failure attached to the substantial investment needed to test these new technologies.

HPCwire: That brings me back to the question about a successor for Fortissimo.

Anders Jensen: Following the success of the Fortissimo initiative and, more recently, FF4EuroHPC, I am very happy to announce that the successor of these initiatives, Fortissimo Plus (FFplus), is about to start for a four-year duration with funding from the EuroHPC JU. The consortium behind the project will consist of known partners already involved in the previous initiatives and a new beneficiary. The main objective remains unchanged: to support the uptake of HPC by SMEs by offering them the right knowledge and financial and technical support. The novelty of FFplus will be a special focus on adoption of HPC by SMEs working on large AI models, such as large language models.

HPCwire: The HPC and much larger AI markets have been converging. HPC has contributed key things to the AI market, such as 40 years of experience with parallelism and MPI communications. On the other hand, 11 large hyperscale companies control the global market for GPUs that the HPC community also needs. How do you see the HPC and mainstream AI communities co-existing?

Anders Jensen: It is clear that AI is disrupting the traditional HPC ecosystem. AI applications have different requirements in terms of architectures, access policies, and runtime environments. For example, the AI community seems to struggle with multi-tenancy and bare-metal access to supercomputers, which is very different from the way traditional HPC communities have been using them.

Traditional HPC communities have adapted their tools, workflows, and mindset to exploiting HPC resources in a direct manner, i.e., without relying on any virtualization or containerized solutions. AI users, on the contrary, are more accustomed to cloud computing environments, which offer a sense of isolation and of resources dedicated to specific users. The small but notable loss of performance coming from the virtualization layers does not bother AI users, who usually put more emphasis on usability than on squeezing 100% out of the underlying hardware capabilities.

Therefore, initiatives to train the AI community on how to optimally use supercomputing infrastructures might be very relevant and beneficial for all. Additionally, traditional supercomputing centers also need to adapt their modus operandi by providing tools and policies that better match the needs of AI users.

HPCwire: Is the JU involved in promoting responsible AI? In our early AI era, we’re using learning models that aren’t transparent and trustworthy, along with generative AI that can’t tell fact from fiction. That can lead to errors and bias if AI isn’t used appropriately.

Anders Jensen: Used correctly and combined with HPC, AI technology is certainly a powerful tool which can contribute to a more innovative, sustainable, and competitive economy while also improving safety, education, and healthcare for citizens. It is a brand-new process for us, but indeed, since last month the EuroHPC JU has been very much involved in promoting responsible AI.

At the beginning of March, we launched a new type of access call specifically to support ethical artificial intelligence, machine learning, and data-intensive applications in general, with a particular focus on foundation models and generative AI. The call is designed to serve industry organizations, SMEs, startups, and public sector entities requiring access to supercomputing resources to perform artificial intelligence and data-intensive activities. When submitting their proposals, applicants are asked to provide ethical self-assessments, which are then carefully reviewed by the EuroHPC JU’s access resource committee. This is, for us, a first step in promoting responsible and trustworthy AI and ensuring the technology is used in the most appropriate manner. This is a brand-new aspect of our activities, and I am sure that we will continue to improve our processes for assessing responsible AI as we gain more experience.

HPCwire: I assume the JU’s responsible AI initiatives are aligned with European Commission goals?

Anders Jensen: Yes, we are part of a bigger strategy at the European level. In January 2024, the European Commission launched its AI innovation package to support European startups and SMEs in the development of trustworthy AI that respects EU values and rules. As part of this new proposal, there is an ongoing legislative process to extend the mission of the EuroHPC JU and include the deployment of an AI supercomputing infrastructure to support Europe’s AI startups and researchers. Once the legislative process is completed, our regulation will provide us with more clarity and the means to best tackle such a new and ambitious task.

HPCwire: What’s next for the JU? I believe the next funding cycle isn’t far away.

Anders Jensen: As always, many things are in preparation at the JU, and I could spend hours telling you about what is coming up next. I will do my best to stay concise. On top of what we already discussed, we are about to finalize a number of quantum computer procurements to be deployed across Europe. In parallel, our call to set up European Quantum Excellence Centers is currently open and will close in mid-May. Our ambition is to offer the widest possible variety of European quantum computing platforms and hybrid classical-quantum architectures, giving Europe the opportunity to be at the forefront of this emerging field and providing European users with access to diverse and complementary quantum technologies.

By the end of the month, our call to select entities to host our first industrial-grade supercomputers will close, and the coming months will be dedicated to evaluating the received proposals and preparing the hosting agreements. By adding these supercomputers specifically tailored to the needs of industrial users to the EuroHPC fleet, we will address the growing demand from industries for computing resources. This EuroHPC industrial-grade infrastructure will be instrumental in further boosting the innovation potential of enterprises in Europe.

Skills and competencies are always high on our list of priorities. Alongside EUMaster4HPC, our pan-European master’s program to train the next generation of European supercomputing experts, we recently launched HPC Spectra, a new project focused on skills. This project aims to improve the EuroHPC JU’s training strategy by developing an innovative EuroHPC Training Platform and co-organizing the 2024 and 2025 International HPC Summer Schools. More projects supporting the development of skills in Europe are also in the pipeline. As rightly highlighted by Rafał Duczmal, the chair of the EuroHPC JU Governing Board, at the opening plenary of the EuroHPC Summit 2024, “talented and well-educated people are the greatest European asset,” and the EuroHPC JU’s job is to preserve, protect, and empower this most precious resource.

Our current budget of 7 billion euros will take us up to 2027. From my perspective, we have just reached the halfway point of this funding cycle. However, the reality is that preparations for the next funding cycle are already well underway on the European Commission’s side. The EuroHPC JU is not involved in these strategic discussions, but I am nevertheless confident that HPC and quantum computing will remain high priorities for the European Union after 2027.


Steve Conway, senior analyst at Intersect360 Research, has closely tracked AI progress for more than a decade. Conway has spoken and published widely on this topic, including an AI primer for senior U.S. military leaders co-authored with Johns Hopkins University Applied Physics Laboratory.
