PRACEdays Reflects Europe’s HPC Commitment

By Tiffany Trader

May 25, 2017

More than 250 attendees and participants came together for PRACEdays17 in Barcelona last week, part of the European HPC Summit Week 2017, held May 15-19 at the Polytechnic University of Catalonia. The program was packed with high-level international keynote speakers covering the European HPC strategy and science and industrial achievements in HPC. A diverse mix of engaging sessions showcased the latest advances across the array of computational sciences within academia and industry.

What began as mainly an internal PRACE conference now boasts an impressive scientific program. Chair of the PRACE Scientific Steering Committee Erik Lindahl is one of the people spearheading the program’s growth and success. At PRACEdays, HPCwire spoke with the Stockholm University biophysics professor (and GROMACS project lead) about the goals of PRACE, the evolution of PRACEdays, and the latest bioscience and computing trends. So much interesting ground was covered that we’re presenting the interview in two parts, with part one focusing on PRACE activities and part two showcasing Lindahl’s research interests and his perspective on where HPC is heading with regard to artificial intelligence and mixed-precision arithmetic.

HPCwire: Tell us about your role as Chair of the PRACE Scientific Steering Committee.

Erik Lindahl

Erik Lindahl: The scientific steering committee is really the scientific oversight body, and our job is to do the scientific prioritization in PRACE. The reason I engaged in PRACE was very much about creating a European network of science and making sure that, rather than being happy just competing in Sweden – Sweden is a nice country but it’s a very small part of Europe – we are getting researchers throughout Europe to have a common community of computing. That is what I really love about PRACE. And I think this is a more important goal of PRACE than we realize. Machines are nice, but machines come and go, and four years later we’ve used that money; building this network of human infrastructure, that is something that is lasting.

HPCwire: How is PRACEdays helping accomplish that goal?

Lindahl: We have all of these Centers of Excellence that we are bringing together here. Europe now has eight Centers of Excellence that provide joint training, tutorials, and tools to improve application performance. These are very young; they’ve been around for roughly 18 months. So right now we don’t have all students going to PRACEdays – we can’t handle a conference that large – but we have all these Centers of Excellence and the various organizations and EU projects getting together, and they in turn go out and spread the knowledge in their networks. In a couple of years we might very well have a PRACEdays that’s 500 people, and then I hope we have all the students here. From the start this was mostly a PRACE internal conference, and the part that I’m very happy about is that we are increasing the scientific content, and that’s what it’s going to take for the scientists to come.

HPCwire: PRACEdays is the central event of the European HPC Summit Week 2017, now in its second year.

Lindahl: That’s something I’m also very happy to see co-organized. It comes back to the same thing: Europe has a very strong computational landscape, but we sometimes forget that because we don’t collaborate enough.

HPCwire: What is the mission of PRACE?

Lindahl: The important thing with PRACE – not just PRACEdays but PRACE as a whole project – is that we are really establishing a European organization for computing. This is partly more of a challenge in Europe because, in contrast with the U.S., where despite your 50 states it is clearly one country with one grant organization sponsoring computing, the national organizations of Europe are, I would argue, far stronger than the states in the U.S., while at the equivalent of the federal level, the European Union, the system has historically been much weaker. What PRACE has established is that we finally have an organization that is not just providing computing cycles in the European arena, but also helping establish the vision for computing: how should scientists in Europe – not just Europe as a region – push computing, and what are the really big grand challenges that people should start approaching? And the challenge here is that no matter how good individual groups are, these problems are really hard, just as you are seeing in the States – as nice as California is, if California tried to go it alone it would find it pretty difficult to compete with China and Japan.

HPCwire: How does PRACE serve European researchers?

Lindahl: The main role of PRACE is to provision resources, and PRACE makes it possible for researchers to get what we call Tier 0 resources for the very largest problems – the problems that are so large that it gets difficult to allocate them in a single country – and in particular most of these national systems tend to have, I wouldn’t say conservative programs, but kind of continuous allocations. What PRACE tries to push is these really grand challenge ideas: risky research; it’s perfectly okay to fail. You can spend one hundred million core hours to possibly solve a really difficult problem. I think in large part we are starting to achieve that. As always, of course, scientists want more resources. I’m very happy with the way that PRACE 2 has gotten countries to sign on and significantly increase the resources compared with what we had a few years ago.

The other part that I personally really like about PRACE is the software values, and part of it of course has to do with establishing a vision and making sure there is really good education, because for all of these students, no matter how good our universities are – whether people are sitting in Stockholm, Barcelona or Frankfurt – there might only be a handful of students in their area. PRACE makes it possible to provide training at a much more advanced level than we normally can in our national systems. Cost-wise it is not as large a part of the budget, but when it comes to competing and [facilitating] advanced computing, it is probably just as important as buying these machines.

The third part of this has to do with our researchers, and this is where my role comes in as chair of the scientific steering committee. Researchers, we are a bit of a split personality. On the one hand we don’t like to apply for resources; writing research grants takes time away from the research you would like to be doing. On the other hand, a very important factor of having to compete for resources is that when we are writing these grant applications, that’s also when we need to formulate our ideas – that’s when I need to be better than I was two or three years ago. Can I identify the really important problems to solve here, what I would like to do the next few years? I think here, surprisingly, lies a danger in our national systems, in particular the ones that are fairly generously funded, because in a generously funded system you become complacent and you are kind of used to getting your resources. What I like with PRACE is you get a challenge: what if you had a factor of ten more resources than you do now? But you can’t just say that you would like to have it; you need to have a really good idea to get that, and it starts to challenge our best researchers, who in essence compete against each other in Europe and become better than they were last year, and I think that’s a very important driving factor for science.

HPCwire: What is the vision for the PRACEdays conference?

Lindahl: PRACEdays is fairly young as a conference and we are still trying to get it to find its form. It’s not really an industry conference in the sense of having vendors here – there are other great venues, both ISC and Supercomputing, and we see no point in trying to compete with them – but we are increasingly trying to move PRACEdays to become the venue where the scientists meet. Not necessarily by discipline, because as a biophysicist I tend to go to a biophysical society, but of course there are lots of people working on computational aspects that are interdisciplinary, or they might very well be using similar types of molecular simulation models in materials sciences. [At PRACEdays] we really focus on computational techniques. We get to see what people are doing in other domains. We are going to start having computers with one million processors, and I think as scientists it’s very easy to try to become incrementally better – we all do that all the time; my code scales better this year than it did last year – but we have colleagues that already scale to a quarter million processors. That’s a challenge; we need to get 100 times better than we are, which is of course difficult, but if we don’t even think about it, we don’t start to do the work. I like these challenges because I’m seeing what people can do in other areas that I don’t get in my disciplinary conferences.

PRACEdays is also a venue where we get to meet all the different groups – the Centers of Excellence that the European commission has started to fund, so I think all of this is part of a budding computational infrastructure that is really shared in Europe. It’s certainly not without friction. If there wasn’t any friction it would be because we weren’t approaching hard problems. But I think things are really moving in the right direction and we are starting to establish a scheme where if you are like me, if you are a biophysicist, you should not just go to your national organization; the best help, the best resources, the best training is on the European [level] today and that I’m very happy with.

HPCwire: Is it fair to think of PRACE as parallel to XSEDE in the US?

Lindahl: Yes and no; they have slightly different roles. PRACE works very closely together with XSEDE, we are doing wonderful things together in training, and we’re very happy to have them there. When it comes to the provisioning of resources, PRACE is more similar to the INCITE program, and this is intentional.

I think XSEDE does a wonderful thing in the US. The main thing that XSEDE managed to change in the US was to put the focus on the users – not just on buying sexier machines, or how many boxes or how many FLOPS you have, but what you are really doing for science and what the scientist needs – and that was sorely needed, not just in the US but throughout the world.

This is a development that has happened in Europe too, but the challenge with Europe is that we have lots of countries with very strong existing organizations, and if PRACE went in and started to take over the normal computing, I think you would suddenly alienate all these national organizations that PRACE still very much depends on having good relations with. That’s also why we’ve said that PRACE will engage on all these levels when it comes to training and when it comes to organization.

We have what we call a Tier 1 program, where it’s possible for researchers to get access to a large resource, say Knights Landing. A researcher in Europe who needs access to a special computer that’s not available in their own country can get access to it through these collaborative programs.

Then PRACE itself has hardware access through a program that’s much more similar to INCITE: the very largest programs, the programs that are really too large for any of the national systems. I think overall that works well, because on this level most countries see it as a complement to rather than competition with their existing organizations.

HPCwire: The theme of this year’s PRACEdays is “HPC for Innovation: When Science Meets Industry.” Science and industry sometimes have split incentives. How much involvement should science have with industry and what’s your perspective on how public private partnerships and similar arrangements should work?

Lindahl: This is a difficult question, and it comes down to the question of what HPC is. The traditional view that we’ve taken, particularly in academia, is to focus on all of these very high-end machines – whether it’s a petaflop, exaflop, yottaflop – the very extreme moonshot programs. That is of course important to large fields of science, or I actually would say the reason academia stresses this is because academia’s role is to push the boundaries, and industry normally shouldn’t be at the boundary, with a couple of exceptions today.

I think the joint role we have, both in academia and industry, is understanding this whole spectrum of approaches. Scientists might be thinking of running MPI over millions of processors, but the very same techniques – if we can improve scaling, if we can make computers work faster – are used in machine learning too. In machine learning you might only run over four nodes, but they too are just as interested in making things run faster; it’s just that the problems they apply them to might be slightly different.

The other part that I think has changed completely in the last few years is this whole approach with artificial intelligence and machine learning, which is now so extremely dependent on floating point performance in general. What we today call graphics processors, accelerators – they are now everywhere; it’s probably just a matter of time before you have a petaflop in your car. And it was less than ten years ago that a petaflop was the sexiest machine we had in the world. At that level, even in your car, you are going to run parallel computations over maybe 20,000 cores. When I was a student, we didn’t dream of that level of parallelism. Somewhere there, I think you are going to run on different machines, because you wouldn’t buy a car if it cost you a billion dollars. The goals and the applications are different, but the fundamental problems we work on are absolutely the same.

That was a bit of a detour, but when it comes to the public-private partnerships and the challenges here, there are certainly lots of areas where we are all starting to use commodity technology, and accelerators might very well be one of them. By the time that industry has caught on, by the time there is a market, we can just go out and procure things on the open market. But then there are of course other areas where we are not quite sure where we are going to end up yet, and industry might not be at the point where it turns this into a product; if we’re talking about chip development or networking technology, these things can also be very expensive. I certainly see a role for some of these projects where we might very well have to engage together, because there is no way that an academic lab can develop a competitive microprocessor – we simply don’t have those resources; on the other hand, there is no way a company would do it alone, because they are afraid they can’t market it and can’t get their money back. So at some point starting to collaborate on this is not just okay, I think we have to do it.

The difficult part is that we have to steer very carefully along this balance. This can’t turn into industry subsidies, and similarly it can’t turn into industry subsidizing academia either, because then it’s pointless. It’s a very difficult problem, but I don’t think we have any choice; we have to collaborate. If you start looking at machine learning nowadays – not just the most advanced hardware technology but in many cases even the software – suddenly we have commercial companies hiring academics, not because they are tier 2, but because they are the very best academics. So in artificial intelligence, some of the best research environments are actually in industry, not in academia. I think it’s a new world, but one we will gradually have to adapt to.


Stay tuned for part two, where Dr. Lindahl highlights his research passions and champions a promising future for HPC-AI synergies. We also couldn’t pass up the opportunity to ask about his pioneering work retooling the molecular dynamics code GROMACS to take advantage of single-precision arithmetic. It’s a fascinating story that takes on new relevance as AI algorithms push hardware vendors to optimize for single and even half precision instructions.
