April 2, 2021 — Women are severely underrepresented in the field of HPC.
While they comprise about 51% of the general population, women account for only about 17% of the HPC workforce. Those numbers are slowly improving, thanks to the contributions of numerous female engineers, scientists and researchers.
Discovering the efficacy of new drugs has historically been a painstaking process, fraught with high failure rates. That’s why Hiranmayi is using HPC to make drug discovery faster and – ultimately – better.
A machine learning specialist at LLNL, she is part of the Accelerating Therapeutics for Opportunities in Medicine (ATOM) consortium’s data modeling team. By building deep learning models of secondary pharmacology, she hopes to give drug researchers the tools to predict adverse effects of drug candidates before they advance to animal and human trials.
“My goal is to release top-performing models to the research community and make the results reproducible,” Hiranmayi explains. “This will help ATOM’s vision of transforming drug discovery from a slow, sequential, high-failure process into a rapid, integrated, patient-centric model.”
For ATOM, Hiranmayi uses LLNL’s best-in-class supercomputers, including its next-generation Sierra system, for machine learning and algorithm development. She plans to release top-performing machine learning models for 11 disease-related protein targets, with more expected in the future as additional protein targets become available.
A vital aspect of research is developing research software, and Marion Weinzierl’s work focuses on improving that process for the research community that relies on it.
As the research software engineer (RSE) theme leader of the N8 Centre of Excellence for Computationally Intensive Research (N8 CIR), she helps researchers use HPC through training, consultancy and hands-on support. She is also involved in RSE training that enables the exchange of supercomputing knowledge and skills.
“I believe that targeting RSEs is particularly beneficial in bringing HPC forward,” she says. “If we train a researcher or an academic, it will help them in their work. But if we train an RSE, it will potentially help a lot of researchers that they work with.”
As an RSE and computational scientist, she works on several projects: one, ExaClaw, involves code coupling and visualization for tsunami simulations; another seeks to add to the Met Office’s space weather prediction suite by using coupled codes for simulations.
“Raising the profile of RSEs, and working with, for and as an RSE, helps lots of use cases,” she adds. “I really like acting as an interface and facilitator between researchers, research teams and research fields.”
Emma Barnes believes supercomputing shouldn’t be limited to supersized organizations.
As HPC and research computing team leader at the University of York, her philosophy is to make high performance computing accessible to everyone, no matter what subject they study or background they have in computing.
A major step in that direction was the installation of the university’s first major HPC cluster in 2018. The £2.5 million project offers researchers and academics free access to the technology, and has been a huge success with users from a range of disciplines and backgrounds. “The HPC cluster allows academics to realize their full research potential,” she says.
While Emma has a doctorate in astroparticle physics from the University of Edinburgh, she redirected her career path due to her love for programming and computing. Now, she builds and runs HPC infrastructure used for research, educates future users and explores new technologies that could benefit both teaching and research.
Supercomputing continues to advance at staggering rates, and Linda Dewar is at the forefront of that growth in the United Kingdom.
In her role as program manager on the HPC Systems team at EPCC, the supercomputing center at the University of Edinburgh, Linda is overseeing the installation, commissioning, testing and onboarding of the £79 million ARCHER2 (Advanced Research Computing High End Resource) Tier-1 HPC service, which is due to begin operation later in 2021.
According to Linda, the implementation – a significant upgrade from the initial ARCHER HPC system – “will increase the capacity of our network from 40 Gb/s to 200 Gb/s, which we expect to provide significant improvement in the transfer of data between our services and beyond.”
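To put that fivefold bandwidth increase in perspective, here is a back-of-envelope sketch (not from the article) of the theoretical time to move a research dataset over the old and new links. The 100 TB dataset size is an illustrative assumption, and the calculation ignores protocol overhead and real-world contention.

```python
# Rough sketch: ideal line-rate transfer times before and after the upgrade.
# Assumes no protocol overhead; dataset size is an illustrative figure.

def transfer_time_hours(dataset_tb: float, link_gbps: float) -> float:
    """Hours to move `dataset_tb` terabytes at `link_gbps` gigabits per second."""
    bits = dataset_tb * 1e12 * 8          # terabytes -> bits
    seconds = bits / (link_gbps * 1e9)    # bits / (bits per second)
    return seconds / 3600

dataset_tb = 100  # hypothetical 100 TB research dataset
old = transfer_time_hours(dataset_tb, 40)   # original ARCHER link
new = transfer_time_hours(dataset_tb, 200)  # upgraded ARCHER2 link
print(f"40 Gb/s:  ~{old:.1f} hours")   # roughly 5.6 hours
print(f"200 Gb/s: ~{new:.1f} hours")   # roughly 1.1 hours
```

Even under these idealized assumptions, the upgrade turns a multi-hour bulk transfer into roughly an hour-long one.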
Linda says that ARCHER2 and related supercomputing resources support the work of more than 1,000 UK scientific researchers in such areas as engineering, climate modelling, particle physics and astronomy.
What’s next for HPC? Fernanda Foertter is discovering new dimensions of supercomputing as director of applications at Next Silicon, a startup intent on pioneering a radically new approach to HPC architecture.
“We hope to disrupt the current architectures dominating HPC, not only for apps that are compute bound but also the sorts of apps that have been left behind by systems chasing FLOP count,” explains Fernanda, who in a previous role was HPC data scientist at Oak Ridge National Laboratory. “And we are doing this in the same open source ecosystem that HPC developers love.”
Currently, the team is in the midst of testing applications on its architecture and Fernanda is excited by their progress: “From someone who has a lot of experience helping people port to GPUs from CPUs, this architecture is looking really promising.”
While Next Silicon is currently in stealth mode, Fernanda hopes the company will have preliminary results to share publicly by the time of SC21.
Natasha’s work is about as “macro” as you can get: She uses a radio telescope located in Western Australia, called the Murchison Widefield Array (MWA), to observe the universe.
For the past five years, Natasha – a senior lecturer and Australian Research Council Future Fellow at Curtin – has been working on GLEAM, which is the GaLactic and Extragalactic All-Sky MWA survey. And HPC has played a critical role in her research.
“I used several million CPU hours on Australia’s Tier 1 supercomputing centers, NCI and Pawsey, to process the original survey, transforming about half a petabyte of data into 20 gigapixel images of the sky,” says Natasha. “I’m now scaling up for GLEAM-X, where the computing and storage challenges are about 10 times larger.” She has also been allocated nearly 10M CPU hours on Magnus, Pawsey’s flagship supercomputer, and uses the new MWA cluster, Garrawarla.
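The scale of that data reduction can be sketched with some rough arithmetic (illustrative only, not from the survey pipeline): relating the roughly half petabyte of raw input data to the roughly 20 gigapixels of output imagery. The bytes-per-pixel figure is an assumption; 32-bit floating-point pixels are common for astronomical images.

```python
# Back-of-envelope sketch of the GLEAM data reduction factor.
# Input and output sizes are the approximate figures quoted in the article;
# bytes-per-pixel is an assumption (32-bit float per pixel).

raw_bytes = 0.5e15          # ~half a petabyte of raw survey data
pixels = 20e9               # ~20 gigapixels of output sky images
bytes_per_pixel = 4         # assumed: one 32-bit float per pixel

image_bytes = pixels * bytes_per_pixel
reduction = raw_bytes / image_bytes

print(f"Output images: ~{image_bytes / 1e9:.0f} GB")
print(f"Data reduction: ~{reduction:.0f}x")
```

Under these assumptions, hundreds of terabytes of raw data boil down to tens of gigabytes of images, a reduction of several thousandfold, which helps explain why millions of CPU hours go into the processing.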
Through her work, Natasha is able to view the entire sky at low frequencies, revealing a universe full of high-energy phenomena, like distant clusters of galaxies, ancient supernova remnants, cosmic magnetic fields, and burping baby black holes.
Interested in seeing what Natasha is seeing? Visit the original survey at GLEAMoscope, or use the GLEAM and GLEAM VR apps available on Google Play.
Learn more about the conference’s Women in IT Networking at SC (WINS) program. Applications for SC21 are closed, but consider applying for SC22!
Source: Cristin Merritt, SC21