PRACEdays Reflects Europe’s HPC Commitment

By Tiffany Trader

May 25, 2017

More than 250 attendees came together for PRACEdays17 in Barcelona last week, part of the European HPC Summit Week 2017, held May 15-19 at the Polytechnic University of Catalonia. The program was packed with high-level international keynote speakers covering the European HPC strategy as well as scientific and industrial achievements in HPC. A diverse mix of engaging sessions showcased the latest advances across the array of computational sciences within academia and industry.

What began as mainly an internal PRACE conference now boasts an impressive scientific program. Chair of the PRACE Scientific Steering Committee Erik Lindahl is one of the people spearheading the program’s growth and success. At PRACEdays, HPCwire spoke with the Stockholm University biophysics professor (and GROMACS project lead) about the goals of PRACE, the evolution of PRACEdays, and the latest bioscience and computing trends. So much interesting ground was covered that we’re presenting the interview in two parts, with part one focusing on PRACE activities and part two showcasing Lindahl’s research interests and his perspective on where HPC is heading with regard to artificial intelligence and mixed-precision arithmetic.

HPCwire: Tell us about your role as Chair of the PRACE Scientific Steering Committee.

Erik Lindahl

Erik Lindahl: The scientific steering committee is really the scientific oversight body, and our job is to do the scientific prioritization in PRACE. The reason I engaged in PRACE was very much about creating a European network of science. Rather than being happy just competing in Sweden – Sweden is a nice country, but it’s a very small part of Europe – what I really love about PRACE is that we are getting researchers throughout Europe to have a common community of computing. And I think this is a more important goal of PRACE than we realize. Machines are nice, but machines come and go, and four years later we’ve used that money; building this network of human infrastructure, that is something that is lasting.

HPCwire: How is PRACEdays helping accomplish that goal?

Lindahl: We have all of these Centers of Excellence that we are bringing together here. Europe now has eight Centers of Excellence that provide joint training, tutorials, and tools to improve application performance. These are very young; they’ve been around for roughly 18 months. So right now we don’t have all students going to PRACEdays – we can’t handle a conference that large – but we have all these Centers of Excellence and the various organizations and EU projects getting together, and they in turn go out and spread the knowledge in their networks. In a couple of years we might very well have a PRACEdays that’s 500 people, and then I hope we have all the students here. From the start this was mostly a PRACE-internal conference, and the part that I’m very happy about is that we are increasing the scientific content, and that’s what it’s going to take for the scientists to come.

HPCwire: PRACEdays is the central event of the European HPC Summit Week 2017, now in its second year.

Lindahl: That’s also something I’m very happy to see co-organized. It comes back to the same thing: Europe has a very strong computational landscape, but we sometimes forget that because we don’t collaborate enough.

HPCwire: What is the mission of PRACE?

Lindahl: The important thing with PRACE – not just PRACEdays but PRACE as a whole project – is that we are really establishing a European organization for computing. This is partly more of a challenge in Europe because, in contrast with the U.S., where you have your 50 states but it is clearly one country with one grant organization sponsoring computing, the national organizations in Europe are far stronger than the states in the US, while the equivalent of the federal level, the European Union, has historically been much weaker. What PRACE has established is that we finally have an organization that is not just providing computing cycles on the European arena, but also helping establish the vision for computing: how should scientists in Europe – not just Europe as a region – push computing, and what are the really big grand challenges that people should start approaching? And the challenge here is that no matter how good individual groups are, these problems are really hard, just as you are seeing in the States – as nice as California is, if California tried to go it alone it would find it pretty difficult to compete with China and Japan.

HPCwire: How does PRACE serve European researchers?

Lindahl: The main role of PRACE is to provision resources, and PRACE makes it possible for researchers to get what we call Tier 0 resources for the very largest problems – the problems that are so large that it gets difficult to allocate them in a single country. In particular, most of these national systems tend to have, I wouldn’t say conservative programs, but kind of continuous allocations. What PRACE tries to push is these really grand challenge ideas: risky research where it’s perfectly okay to fail. You can spend one hundred million core hours to possibly solve a really difficult problem. I think in large part we are starting to achieve that. As always, of course, scientists want more resources. I’m very happy with the way that PRACE 2 has gotten countries to sign on and significantly increase the resources compared with what we had a few years ago.

The other part that I personally really like about PRACE is the software values, and part of it of course has to do with establishing a vision and making sure there is really good education, because no matter how good our universities are, when people are sitting in Stockholm, Barcelona or Frankfurt, there might be only a handful of students in their area. PRACE makes it possible to provide training at a much more advanced level than we normally can in our national systems. Cost-wise it is not as large a part of the budget, but when it comes to competing and [facilitating] advanced computing, it is probably just as important as buying these machines.

The third part of this has to do with our researchers, and this is where my role comes in as chair of the scientific steering committee. As researchers, we are a bit of a split personality. On the one hand we don’t like to apply for resources; writing research grants takes time away from the research you would like to be doing. On the other hand, a very important aspect of having to compete for resources is that when we are writing these grant applications, that’s also when we need to formulate our ideas – that’s when I need to be better than I was two or three years ago. Can I identify the really important problems to solve here, what I would like to do the next few years? I think a danger lies here, surprisingly, in our national systems, in particular the ones that are fairly generously funded, because in a generously funded system you become complacent and you get used to getting your resources. What I like with PRACE is you get a challenge: what if you had a factor of ten more resources than you do now? But you can’t just say that you would like to have it; you need to have a really good idea to get that. It starts to challenge our best researchers, who in essence compete against each other in Europe and become better than they were last year, and I think that’s a very important driving factor for science.

HPCwire: What is the vision for the PRACEdays conference?

Lindahl: PRACEdays is fairly young as a conference and we are still trying to help it find its form. It’s not really an industry conference in the sense of having vendors here – there are other great venues, both ISC and Supercomputing, and we see no point in trying to compete with them – but we are increasingly trying to move PRACEdays to become the venue where the scientists meet. Not necessarily by discipline, because as a biophysicist I tend to go to a biophysical society meeting, but there are lots of people working on computational aspects that are interdisciplinary, or they might very well be using similar types of molecular simulation models in materials science. [At PRACEdays] we really focus on computational techniques. We get to see what people are doing in other domains. We are going to start having computers with one million processors, and I think as scientists it’s very easy to just become incrementally better – we all do that all the time; my code scales better this year than it did last year – but we have colleagues who already scale to a quarter million processors. That’s a challenge; we need to become 100 times better than we are, which is of course difficult, but if we don’t even think about it, we don’t start to do the work. I like these challenges because I’m seeing what people can do in other areas that I don’t get at my disciplinary conferences.

PRACEdays is also a venue where we get to meet all the different groups – the Centers of Excellence that the European Commission has started to fund – so I think all of this is part of a budding computational infrastructure that is really shared in Europe. It’s certainly not without friction; if there wasn’t any friction, it would be because we weren’t approaching hard problems. But I think things are really moving in the right direction, and we are starting to establish a scheme where, if you are like me, a biophysicist, you should not just go to your national organization; the best help, the best resources, the best training is at the European [level] today, and that I’m very happy with.

HPCwire: Is it fair to think of PRACE as parallel to XSEDE in the US?

Lindahl: Yes and no; they have slightly different roles. PRACE works very closely together with XSEDE, we are doing wonderful things together in training, and we’re very happy to have them there. When it comes to the provisioning of resources, PRACE is more similar to the INCITE program, and this is intentional.

I think XSEDE does a wonderful thing in the US. The main thing that XSEDE managed to change in the US was to put the focus on the users – not just on buying sexier machines, or how many boxes you have or how many FLOPS you have, but what are you really doing for science and what does the scientist need. That was sorely needed, not just in the US but throughout the world.

This development has happened in Europe too, but the challenge with Europe is that we have lots of countries with very strong existing organizations, and if PRACE went in and started to take over the normal computing, I think you would suddenly alienate all these national organizations that PRACE still very much depends on having good relations with. That’s also why we’ve said that PRACE will engage at all these levels when it comes to training and organization.

We have what we call a Tier 1 program, where it’s possible for researchers to get access to a large resource, say Knights Landing. In general, a researcher in Europe who needs access to a special computer that isn’t available in their own country can get access to it through these collaborative programs.

Then PRACE itself provides hardware access through a program that’s much more similar to INCITE: the very largest programs, the ones that are really too large for any of the national systems. I think overall that works well, because at this level most countries see it as a complement to, rather than a competitor with, their existing organizations.

HPCwire: The theme of this year’s PRACEdays is “HPC for Innovation: When Science Meets Industry.” Science and industry sometimes have split incentives. How much involvement should science have with industry and what’s your perspective on how public private partnerships and similar arrangements should work?

Lindahl: This is a difficult question, and it comes down to the question of what HPC is. The traditional view that we’ve taken, particularly in academia, is to focus on all of these very high-end machines – whether it’s a petaflop, exaflop, or yottaflop – the very extreme moonshot programs. That is of course important to large fields of science, or I actually would say the reason academia stresses this is because academia’s role is to push the boundaries, and industry normally shouldn’t be at the boundary, with a couple of exceptions today.

I think the joint role we have, both in academia and industry, is understanding this whole spectrum of approaches. Scientists might be thinking of running MPI over millions of processors, but the very same techniques – if we can improve scaling, if we can make computers work faster – are used in machine learning too. In machine learning you might only run over four nodes, but they too are just as interested in making this run faster; it’s just that the problems they apply them to might be slightly different.
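[Editor’s note: as a hypothetical illustration of this point (not code Lindahl presented), the short Python sketch below uses mpi4py and NumPy to show that the same MPI Allreduce collective that sums partial results across ranks in a simulation is also the core step of averaging gradients in synchronous data-parallel machine learning; only the scale and the payload differ. Run it with, for example, mpirun -n 4 python allreduce_sketch.py.]

# Illustrative sketch only: the same collective serves simulation and ML.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# "Simulation" use: each rank contributes a partial energy, summed across ranks.
local_energy = np.array([0.5 * rank], dtype=np.float64)
total_energy = np.empty_like(local_energy)
comm.Allreduce(local_energy, total_energy, op=MPI.SUM)

# "Machine learning" use: each rank holds a local gradient; summing and dividing
# by the number of ranks is the heart of synchronous data-parallel training.
local_grad = np.full(8, float(rank), dtype=np.float32)
avg_grad = np.empty_like(local_grad)
comm.Allreduce(local_grad, avg_grad, op=MPI.SUM)
avg_grad /= size

if rank == 0:
    print("total energy:", total_energy[0])
    print("averaged gradient:", avg_grad)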

The other part that I think has changed completely in the last few years is this whole approach with artificial intelligence and machine learning, which is now so extremely dependent on floating point performance in general. What we today call graphics processors, accelerators – they are now everywhere; it’s probably just a matter of time before you have a petaflop in your car. And it was less than ten years ago that a petaflop was the sexiest machine we had in the world. At that level, even in your car, you are going to run parallel computations over maybe 20,000 cores. When I was a student, we didn’t dream of that level of parallelism. Somewhere there, I think, you are going to run on different machines, because you wouldn’t buy a car if it cost you a billion dollars. The goals and the applications are different, but the fundamental problems we work on are absolutely the same.

That was a bit of a detour, but when it comes to public-private partnerships and the challenges here, there are certainly lots of areas where we are all starting to use commodity technology – accelerators might very well be one of them – so by the time industry has caught on, by the time there is a market, we can just go out and procure things on the open market. But there are of course other areas where we are not quite sure where we are going to end up yet, and industry might not be at the point where it turns this into a product; if we’re talking about chip development or networking technology, these things can also be very expensive. I certainly see a role for some of these projects where we might very well have to engage together, because there is no way that an academic lab can develop a competitive microprocessor – we simply don’t have those resources. On the other hand, there is no way a company would do it alone, because they are afraid they can’t market this and can’t get their money back. So at some point, starting to collaborate on this is not just okay; I think we have to do it.

The difficult part is that we have to steer very carefully along this balance. This can’t turn into industry subsidies, and similarly it can’t turn into industry subsidizing academia either, because then it’s pointless. It’s a very difficult problem, but I don’t think we have any choice; we have to collaborate. If you look at machine learning nowadays – not just the most advanced hardware technology but in many cases even the software – suddenly we have commercial companies hiring academics, not because they are tier 2, but because they are the very best academics. So in artificial intelligence, some of the best research environments are actually in industry, not in academia. I think it’s a new world, but one we will gradually have to adapt to.


Stay tuned for part two, where Dr. Lindahl highlights his research passions and champions a promising future for HPC-AI synergies. We also couldn’t pass up the opportunity to ask about his pioneering work retooling the molecular dynamics code GROMACS to take advantage of single-precision arithmetic. It’s a fascinating story that takes on new relevance as AI algorithms push hardware vendors to optimize for single and even half precision instructions.
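[Editor’s note: as a rough, hypothetical illustration of the precision point above (it is not GROMACS code), the short Python sketch below uses NumPy to compare memory footprint and a dense matrix multiply in double precision (float64) versus single precision (float32). Exact timings vary by machine, but single precision halves the data moved and usually runs noticeably faster; that is the trade-off GROMACS exploits with single precision and that AI hardware now pushes further with half precision.]

# Illustrative sketch only: single precision halves memory and typically speeds up compute.
import time
import numpy as np

n = 2000
a64 = np.random.rand(n, n)                      # NumPy defaults to float64
b64 = np.random.rand(n, n)
a32, b32 = a64.astype(np.float32), b64.astype(np.float32)

print("float64 bytes:", a64.nbytes, "  float32 bytes:", a32.nbytes)

for name, x, y in [("float64", a64, b64), ("float32", a32, b32)]:
    t0 = time.perf_counter()
    _ = x @ y                                   # dense matrix multiply at that precision
    print(f"{name} matmul: {time.perf_counter() - t0:.3f} s")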
