Associate Laboratory Director for Computing Sciences
Lawrence Berkeley National Laboratory
And Professor of Computer Science
University of California at Berkeley
Katherine Yelick is a Professor of Electrical Engineering and Computer Sciences at the University of California at Berkeley and is also the Associate Laboratory Director for Computing Sciences at Lawrence Berkeley National Laboratory. She is the co-author of two books and more than 100 refereed technical papers on parallel languages, compilers, algorithms, libraries, architecture, and storage. She co-invented the UPC and Titanium languages and demonstrated their applicability across architectures through the use of novel runtime and compilation methods. She also co-developed techniques for self-tuning numerical libraries, including the first self-tuned library for sparse matrix kernels which automatically adapts the code to properties of the matrix structure and machine.
Her work includes performance analysis and modeling as well as optimization techniques for memory hierarchies, multicore processors, communication libraries, and processor accelerators. She has worked with interdisciplinary teams on application scaling, and her own applications work includes parallelization of a model for blood flow in the heart. She earned her Ph.D. in Electrical Engineering and Computer Science from MIT and has been a professor of Electrical Engineering and Computer Sciences at UC Berkeley since 1991 with a joint research appointment at Berkeley Lab since 1996. She has received multiple research and teaching awards and is a member of the California Council on Science and Technology and a member of the National Academies committee on Sustaining Growth in Computing Performance.
HPCwire: Hi Kathy. Congratulations on being selected as an HPCwire 2016 Person to Watch and for receiving the 2015 ACM/IEEE Computer Society Ken Kennedy Award for your prominent leadership and your important contributions to parallel computing languages! How would you characterize the current era of HPC programming and what are the biggest challenges?
Kathy Yelick: Thank you. I think HPC programming is at a crossroads. We need to determine whether we stick with the standard models and force them to work on new architectures, or start from the ground up to find a way to program a system that is heterogeneous and has explicitly managed memory. The popular models, like MPI and OpenMP, are trying to adapt to these changes, and some people talk about a “plus,” meaning using both MPI and OpenMP. But the interface between them is a big obstacle. For example, if you’re running OpenMP on 100 cores, only one core can do communication. But if you’re using 10,000 cores on 100 nodes, you need to get the parallelism on the node to work with the parallelism between the nodes.
HPCwire: What is your approach to teaching the skills required for a career in HPC? What advice do you offer young HPC professionals?
Kathy Yelick: What I try to teach my students, even in the programming models class, is that it’s important to have experience working on a real scientific problem so they understand the issues that computational scientists face.
I think that young HPC professionals need a broad background in hardware, programming models and applications. It’s difficult to be an expert in these areas, but you need to know enough to communicate with experts across these fields. You can be the world’s expert on the topic of your thesis, but that doesn’t necessarily give you the skills to communicate with others.
HPCwire: What trends in high performance computing do you see as particularly relevant as you look forward to the year ahead?
Kathy Yelick: I think one trend will be the merging of HPC and high-performance data analytics. People who do data analytics will gain a deeper understanding of how to get performance on problems at scale by understanding system architecture, bottlenecks, and other sources of performance problems.
Another trend we’ll see is the blurring of the line between modeling and simulation and data-intensive science. This will include the use of observational data to drive simulation and the use of simulations to interpret data. This will lead to the creation of large community data sets from simulations and observations, which will enable a new kind of scientist – someone who doesn’t run simulations, but analyzes the data from others. This will allow all kinds of people to do science, not just those who are using experimental facilities or supercomputers.
I’d also look past the coming year – out to 2031 – to predict what the life of a scientist will be like:
- No personal/departmental computers
- Users don’t log in to HPC facilities
- Travel replaced by telepresence
- Lecturers teach millions of students
- Theorems proven by online communities
- Laboratory work is outsourced
- Experimental facilities are used remotely
- All scientific data is open
- Big science and team science democratized
HPCwire: Outside of the professional sphere, what can you tell us about yourself – personal life, family, background, hobbies, etc.?
Kathy Yelick: Well, we are a dual-career HPC family. My athletic goal is to keep up with my kids, whether it’s skiing or bicycling. I used to be on the women’s rowing team at MIT, and I still have a rowing machine at home. I try to work out 40 minutes a day – the time it takes to watch one TV show.
HPCwire: Final question: What can you share about yourself that you think your colleagues would be surprised to learn?
Kathy Yelick: Some of them know this, but my first job was making pizza at a Shakey’s Pizza Parlor. I got this job after I was rejected by the local hardware store – they didn’t think I could handle the cash register.