Bernd Mohr began designing and developing tools for performance analysis of parallel programs with his diploma thesis (1987) at the University of Erlangen in Germany, and continued this work during his Ph.D. (1987 to 1992). During a three-year postdoc position at the University of Oregon in Eugene, he designed and implemented the original TAU performance analysis framework. Since 1996 he has been a senior scientist at Forschungszentrum Jülich, Germany’s largest multidisciplinary research center and home of Europe’s most parallel HPC system, a 28-rack BlueGene/Q. Since 2000 he has led the group “Programming Environments and Performance Optimization”. Besides being responsible for user support and training with regard to performance tools at the Jülich Supercomputing Centre (JSC), he leads the Scalasca and Score-P performance tools efforts in Jülich. Since 2007 he has also served as deputy head of the JSC division “Application Support”. He is an active member of the International Exascale Software Project (IESP/BDEC) and a work package leader in the European (EESI2) and Jülich (EIC, ECL) Exascale efforts. He serves on the Steering Committees of the SC and ISC conference series, and is the author of several dozen conference and journal articles on performance analysis and tuning of parallel programs.
HPCwire: Congratulations on being named General Chair for SC17! As the first SC General Chair from outside the US, what perspective will you be bringing to the role?
Bernd Mohr: Thank you! Since people travel from all over the globe to attend an SC conference, one of our goals is to make certain that SC17 feels truly international. While SC is organized in a different US city every year, from the start it has been THE event where people from around the globe come together to network and to learn about the latest advancements in HPC, networking, data analytics, and storage. The percentage of international attendees has steadily grown over the years and is now more than 25%. Within the SC committees, the percentage is even higher; over a third of our volunteers are from outside the US.
We also want to showcase the many interesting HPC projects and research being done all over the world. While the community is following the Exascale efforts in the US, Japan, China, and Europe, there is also amazing HPC work and research happening in places like South Africa, India, Saudi Arabia, Chile, Argentina, Brazil, and Mexico. Not to mention that our exhibit floor is generally sold out, with companies coming from as far away as Australia to participate in the SC experience.
We will also continue to promote diversity in HPC which has always been important to the organization, but was officially “started” at SC16 by General Chair John West from the Texas Advanced Computing Center. For SC17, we will work on extending this beyond the issue of gender and consider other factors like ethnicity and age.
This is an exciting time in the world of high performance computing – it truly has never been more important globally and I am proud to be so involved with such brilliant industry leaders and colleagues. We intend to make SC17 in Denver an international conference that is not to be missed.
HPCwire: You co-chaired the Workshops program at ISC 2016; can you tell us about the program and the reception for it?
It actually started in 2014, when I was working for ISC Events as a program consultant and convinced them that having a separate Workshop Day was a good idea. Prior to that, a few workshops organized by the community were integrated into the regular conference program, but either the workshop organizers were frustrated that they did not attract high attendance (because of the strong conference program running in parallel), or we were unhappy when the workshops drew too many people away from the conference program.
So, when ISC moved to Frankfurt in 2015, we introduced the separate Workshop Day on Thursday after the conference. Workshop registration fees were intentionally kept low to attract as many people as possible. I headed the effort as Workshop Chair. It was an immediate success: we had 17 workshops and nearly 600 attendees. The Workshop Day is hosted in the Marriott, the main ISC conference hotel, and everyone liked the atmosphere and hospitality there.
In 2016 I co-chaired Workshops together with Michela Taufer so she would become familiar with the event, as she is Workshop Chair for 2017. The program was enhanced by providing the option to publish workshop papers in the ISC Workshop Proceedings (published by Springer), similar to the main conference proceedings.
As expected, the interest in the community was higher than in 2015 and we ended up with 21 workshops (9 full-day and 12 half-day) and more than 600 attendees. We had to reject quite a few proposals as there were not enough rooms and time available for them. The workshop program covered a wide range of topics in hardware, software, and applications for extreme scale computing, data analytics, and cloud but also workshops on international collaboration, Women in HPC, and HPC training.
The 2016 Workshop Day featured about 170(!) expert presentations as well as over a dozen smaller panel discussions. This is a good indication that the community has embraced the idea of a separate Workshop Day at the ISC HPC conference.
HPCwire: What are some of the major trends today in HPC that you’ll be looking out for in 2017? You are active in several exascale research efforts, including the International Exascale Software Project as well as European and Jülich Exascale efforts. What developments do you see as the most promising and what are the thorniest challenges?
The hot topics currently are Deep Learning and other (big) data analytics methods and tools, and how they can be integrated into workflows to analyze large data sets coming from scientific instruments or HPC simulations. In this regard, I just want to point out that since 2009(!) SC has been called the International Conference for High Performance Computing, Networking, Storage and Analysis. So SC has had these topics on its radar for quite some time, and it is nice to see that the HPC community is finally realizing that there are other important issues beyond flops and vectorization.
Currently people are using different hardware configurations and software stacks for compute and data-driven workflows. It will be interesting to see whether the technology will advance to provide the capability to architect a common hardware and software platform to handle both sorts of workflows and to combine them, at least in the context of scientific data and HPC.
Finally, we currently see a variety of processor types (multicore, manycore, GPGPU) and cluster architectures (homogeneous and heterogeneous), and with them a plethora of programming models and tools. While this makes a great topic for panel discussions, research projects, and hundreds of research papers for HPC experts, it is a nightmare for application programmers and users who struggle to adapt and optimize their codes for all these architectures. While with some effort it might be possible to make a code run on many platforms, it is very difficult, if not impossible, to make the code execute efficiently and with comparable performance on all systems.
HPCwire: Outside of the professional sphere, what can you tell us about yourself – personal life, family, background, hobbies, etc.?
To earn money for my studies at the university, I worked every weekend as a disc jockey. This was the ’80s and it was the era of disco music. “Gimme that night fever, night fever …”
I also very much enjoy hiking and other outdoor activities. During my various trips to the U.S., I visited 35 National Parks and many more National Monuments. At one point, back in the late ’90s, I had visited all the National Parks in the continental U.S. west of Denver. However, the National Park Service keeps “upgrading” National Monuments to Parks (e.g., Pinnacles and Channel Islands), so I do not know whether I’ll ever reach that goal again.