Berkeley Lab’s Kathy Yelick Looks Back and Ahead as She Contemplates Next Career Stage

By Jon Bashor

July 25, 2019

In mid-April, Kathy Yelick announced she would step down as Associate Laboratory Director (ALD) for the Computing Sciences organization at Lawrence Berkeley National Laboratory, a position she has held since September 2010. Yelick was also Director of the National Energy Research Scientific Computing Center (NERSC) from 2008 through 2012 and has been on the Electrical Engineering and Computer Sciences faculty at UC Berkeley since 1991. She will return to campus in January 2020 while continuing to serve as a strategic advisor to Berkeley Lab Director Mike Witherell on lab-wide initiatives.


“Kathy has played a central role in leading the transformation that computing has had across scientific inquiry, not only at our Lab but across the country,” said Berkeley Lab Director Mike Witherell in announcing Yelick’s decision. “We look forward to working with her on our Lab-wide strategic initiatives and on executing the new Strategic Plan.”

Berkeley Lab has posted the position and launched an international search for candidates to become the next ALD. Yelick recently sat down with retired Computing Sciences Communications Manager Jon Bashor to talk about the position and get her perspective on the organization and what the future holds.

A number of people in the HPC community were surprised by your announcement that you had decided to step down as the Computing Sciences ALD. What made you decide to step down at this time?

Katherine Yelick: First, I want to say that I have thoroughly enjoyed this job and it’s been an honor to lead such a remarkable organization. I think it’s the best computing position anywhere in the Department of Energy national labs, with two of the highest-impact national facilities and the premier computational research organization. ESnet, DOE’s dedicated network for science, is essential to DOE researchers and collaborators around the world, providing critical support for data-intensive science. NERSC is a leader in pushing technology forward and pulling the broad science community along to achieve scientific breakthroughs on the latest HPC systems.

I’m obviously biased, but I think we have the best computational research program among the labs. We have leading researchers in applied mathematics, computer and data science, and a strong culture of using our core strengths and technologies to deliver breakthrough science capabilities in other areas. Researchers from postdocs to senior scientists are committed to cross-disciplinary collaborations, and they team with software engineers to build software solutions that are typically not possible in a university setting.

So why now? This is a very exciting time in high performance computing, which is broadening from the traditional focus on modeling and simulation applications to those involving high performance data analytics and machine learning. At the same time there are significant technology disruptions from the end of transistor scaling and the emergence of quantum computation becoming viable for scientific applications in the foreseeable future. These will be transformative in science, and in my role as ALD I have helped to shape a strategic vision of a more productive, automated and performant set of computational methods and systems for scientific inquiry. It’s very rewarding to support this kind of high-impact work, but I do miss the more hands-on aspects of research – finding clever parallel algorithms or implementation techniques, working with students, and participating more directly in the educational mission of the university.

I’m especially excited about the ExaBiome project, part of DOE’s Exascale Computing Project, or ECP, which is developing scalable high performance methods for genome analysis. This is an example of using more computing to find more signals in current data sets, in addition to supporting the growth in sequencing data and interest in analyzing across data sets. But for me, personally, it’s also about using the class of programming techniques that I’ve worked on throughout my career to take algorithms that are predominantly written for shared memory machines and run them on HPC systems.
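The programming techniques Yelick refers to are PGAS (Partitioned Global Address Space) models such as UPC and UPC++: a logically shared array is physically distributed across processes, and any process can read or write any element by global index. The pure-Python sketch below is only an illustration of the addressing idea, not a real PGAS runtime; all class and method names here are invented for the example.

```python
# Toy sketch of the PGAS idea: a logically shared, block-distributed
# array with one-sided access by global index. (Illustrative only --
# real PGAS runtimes like UPC++ do this across distributed memory.)

class PGASArray:
    def __init__(self, n, nranks):
        self.n, self.nranks = n, nranks
        self.block = -(-n // nranks)          # ceil(n / nranks): block size
        # Each "rank" owns one contiguous block of the global array.
        self.partitions = [[0] * min(self.block, n - r * self.block)
                           for r in range(nranks)]

    def owner(self, i):
        return i // self.block                # which rank holds element i

    def get(self, i):                         # one-sided remote read
        return self.partitions[self.owner(i)][i % self.block]

    def put(self, i, v):                      # one-sided remote write
        self.partitions[self.owner(i)][i % self.block] = v

a = PGASArray(10, nranks=4)   # blocks of 3: ranks own 3, 3, 3, 1 elements
a.put(7, 42)
print(a.owner(7), a.get(7))   # rank 2 owns global index 7
```

The appeal for porting shared-memory algorithms is that the algorithm keeps its global-index view while the runtime handles data placement and remote access.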

What about your tenure as ALD? What do you see as the most important accomplishments during that time?

Probably the most visible accomplishment was the 2015 opening of the new home for Computing Sciences: Shyh Wang Hall [shown in feature image above]. Our new building, which you can see from across the bay in San Francisco, allowed us to bring together staff from ESnet, NERSC, and the Computational Research Division, along with the Area staff who support our operational activities. The environment has proven great for fostering collaboration as people meet and talk in the hallways, meeting rooms and break rooms. As NERSC Director, I had three offices – one at UC Berkeley, one in Oakland for NERSC, and one at LBNL for my lab management role. That meant a lot of driving around for me, but more importantly it really kept the NERSC staff separated from the rest of Computing Sciences and the lab. We now have the NERSC staff and machine room, and much cheaper power by the way, back at the main lab site.

On a larger scale, we collaborated with both the DOE Office of Science labs and those of the National Nuclear Security Administration to launch the Exascale Computing Project, the largest DOE project of its kind. It required a lot of travel and a lot of negotiation for me and others on the leadership team, including John Shalf and Jonathan Carter, as well as researchers taking the initiative to extend their current efforts and map them to exascale applications, software, and hardware projects.

Some of the large initiatives we’ve launched include Berkeley Quantum, the lab’s cross-disciplinary leadership in quantum information sciences, an effort that grew out of our Laboratory Directed Research and Development (LDRD) program. Similarly, the Center for Advanced Mathematics for Energy Research Applications (CAMERA) is a successful program funded by the Offices of Advanced Scientific Computing Research and Basic Energy Sciences that began as three linked LDRD projects. More recently, the Machine Learning for Science initiative has helped to energize researchers across the Lab in developing new methods and applications of learning methods applied to important science problems.

We have also seen the lab’s overall budget grow to more than $1 billion per year. In particular, our program office, the Office of Advanced Scientific Computing Research, is now one of the largest funders of the lab. This has allowed us to hire more staff as we take on more operational and research challenges. Today, the Computing Sciences organization employs about 400 full-time career staff and postdocs. To help manage this workforce, we’ve hired new division directors for NERSC, ESnet and the Computational Research Division during that time.

Okay. Before being named ALD in 2010, you had served as NERSC Division Director since 2008 and, in fact, held both positions simultaneously for two years. Many people in the community equate NERSC with the entire computing landscape at Berkeley Lab, but there’s a lot more going on. Can you give some other examples and tell us how it all fits together to make up the Computing Sciences Area?

NERSC does have exceptional brand recognition and has had an enormous impact, including providing computational support for six Nobel Laureates and their teams: two projects each in chemistry and cosmology, along with climate modeling and neutrino science. NERSC users publish more than 2,000 peer-reviewed papers each year. NERSC is often the first HPC experience for students and postdocs, and the center provides pre-installed software for common science packages in addition to supporting a vast array of programming tools for users who want to build their own applications. NERSC has a long history of data-intensive science, including support for some of the major experiments at the Large Hadron Collider. NERSC systems are used to analyze data from major telescopes and genome sequencers (including a partnership with the Joint Genome Institute), and the center is working closely with light sources and other major experimental facilities.

The roles of simulation and observation are increasingly blurred as scientists look to simulations to interpret and explain observational data, or use measurements to augment first-principles simulation models. And high-throughput simulations, such as the highly successful Materials Project, use massive amounts of computing for simulation and then create an interesting data analysis problem for both machine learning and traditional analysis techniques. NERSC has a fantastic team led by Sudip Dosanjh, and we’re all very excited about the upcoming delivery of NERSC-9, which will support simulation, data, and learning applications. It will deploy some of the early exascale technology, including Cray’s Slingshot network and a mixture of CPU (AMD) and GPU (Nvidia) nodes.

There is also equally important work being done in ESnet, which under Inder Monga’s leadership is laying the foundation for DOE’s next-generation science network using software-defined networking and high-level services tailored to science. ESnet is critical to the idea of connecting DOE’s experimental facilities to the HPC facilities like NERSC for real-time analysis, as well as archiving, serving, and the reanalysis of experimental data. We call this the Superfacility model, because it combines DOE hallmark facilities into a single integrated system.

ESnet has pioneered some of the networking tools for science, including OSCARS, the On-Demand Secure Circuit and Advance Reservation System, which allows researchers to set up end-to-end dynamic circuits across multiple networks when and where they need them — and do it in just minutes rather than months. They also developed the Science DMZ concept, which provides a secure, high-speed architecture for science data transfers for research organizations. The Science DMZ has been adopted by other DOE labs, NSF-funded universities, and networks in other countries.

Our Computational Research Division (CRD) under David Brown is paving the way for the future of science, building methods and tools that automate high-throughput experiments, discover signals in noisy data, support programming of increasingly complex hardware, and use sophisticated mathematical and learning models for predictive simulation. Within DOE’s Exascale Computing Project (ECP), our goal is to produce the next-generation scientific applications, for new problems and for new features of existing applications, that will enable breakthrough discoveries by combining the best math, computer science, and exascale systems. The AMReX Co-Design Center led by John Bell is putting Adaptive Mesh Refinement methods into several ECP applications, so that exascale systems run codes that are algorithmically efficient.

While the bulk of ECP’s portfolio and DOE’s computing investments more broadly have historically focused on modeling and simulation, there is increasing interest in collaborations with experimentalists. In the past, the algorithms and software for major experiments had largely been viewed as the purview of those science programs. The CAMERA Center led by James Sethian is a great example of the value of bringing advanced mathematics to DOE’s light sources and other facilities. CAMERA is funded by DOE but was established years ago through a strategic investment by the Lab, and it has proven to be a very successful model for collaboration. Another example is FastBit, an indexing technology that allows users to search massive datasets up to 40 times faster and was recognized with a 2008 R&D 100 award. Led by John Wu, the project was originally designed to meet the needs of particle physicists, who must sort through billions of data records just to find 100 key pieces of information, but it translates to other applications too, including biology and cybersecurity.
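The core idea behind an index like FastBit can be sketched simply: precompute, for each distinct value, the set of rows containing it, so queries become set operations instead of full scans. The toy below is an uncompressed illustration of that bitmap-index idea, not FastBit’s actual implementation (which uses compressed bitmaps to scale to billions of records); the data and names are invented for the example.

```python
from collections import defaultdict

def build_bitmap_index(values):
    """Map each distinct value to the set of row ids containing it.

    A toy, uncompressed bitmap index; FastBit compresses these
    bitmaps so they stay practical at billions of rows.
    """
    index = defaultdict(set)
    for row, v in enumerate(values):
        index[v].add(row)
    return index

# Hypothetical event records tagged by an "energy" attribute.
energies = ["low", "high", "low", "mid", "high", "high"]
idx = build_bitmap_index(energies)

# Query: rows where energy is "high" OR "mid" -- a set union, no scan.
hits = idx["high"] | idx["mid"]
print(sorted(hits))  # [1, 3, 4, 5]
```

Answering the query touches only the precomputed sets for the values of interest, which is why such indexes excel at finding a handful of records in an enormous dataset.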

Berkeley Lab has an enormous opportunity to address the research and facility issues here with NERSC, ESnet, CRD, and its own experimental facilities as well as strong collaborations with facilities at other Labs. And we’re looking at the data issues that go beyond the big experiments to embedded sensors in the environment and supporting the entire lifecycle for scientific data.

So it sounds like there is a lot going on to prepare for exascale and the experimental data challenges. Are there other big changes you see in the future of HPC? 

Well, I think we’ve only started to scratch the surface of machine learning techniques in science. It’s a huge area of interest across the Lab – over 100 projects are using or developing machine learning techniques for everything from understanding the universe to improving the energy efficiency of buildings. Deb Agarwal has been spearheading a cross-lab coordination of machine learning, with highlights on our ml4sci.lbl.gov website. There are interesting research issues in bias, robustness, and interpretability of the methods, and these take on particular importance in science. After all, as scientists our job is to ask why something is true, not just whether things are correlated, and the models need to be consistent with known physical laws and be simple enough to be believable. And then there are issues of data size, model size, and how the various algorithms map onto HPC systems at scale. In addition to science applications, we’re looking at machine learning to improve facility operations, manage experiments, design hardware, write software, and generally help automate certain aspects of what we do.

But, of course, the big challenge in HPC, and computing more broadly, is the looming end of transistor density scaling and the related benefits in size, power, cost, and performance of computing systems. I think we’ve done a great job in the HPC community of getting the most out of the systems we have today, using tools like Sam Williams’ Roofline model to assess the performance of various applications relative to the peak possible when running on multicore, manycore, or accelerator processor architectures. But things are going to get a lot harder. It’s interesting to me that, as hard as exponential growth is to reason about in general, exponential improvement in computing is so ingrained in our field, and in everyone who uses a computing device, that it will be hard for people to comprehend the impact of this change.
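The Roofline model mentioned above bounds attainable performance by the lesser of a machine’s peak compute rate and its memory bandwidth times the code’s arithmetic intensity (flops per byte moved). A minimal sketch of that bound, with hypothetical machine numbers rather than any specific system:

```python
def roofline(peak_gflops, bandwidth_gbs, arithmetic_intensity):
    """Attainable GFLOP/s under the Roofline model.

    arithmetic_intensity: flops performed per byte moved from memory.
    Below the "ridge point" (peak / bandwidth) a kernel is
    bandwidth-bound; above it, compute-bound.
    """
    return min(peak_gflops, bandwidth_gbs * arithmetic_intensity)

# Hypothetical machine: 1000 GFLOP/s peak, 100 GB/s memory bandwidth,
# so the ridge point sits at 10 flops/byte.
for ai in (0.25, 1.0, 10.0, 40.0):
    print(f"AI {ai:5.2f} flops/byte -> {roofline(1000.0, 100.0, ai):7.1f} GFLOP/s")
```

Plotting kernels against these two ceilings is what makes the model useful: it shows at a glance whether optimizing data movement or raw compute will pay off.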

We are taking two approaches to this “beyond Moore” computing problem. The first, and more immediate, is based on the traditional digital model of computing.

We’re looking at purpose-built architectures, already being used for machine learning, as a potential future for other scientific applications in the absence of Moore’s Law. In one project the team is reformulating the LS3DF algorithm to make it amenable to specialized hardware and to develop a custom accelerator for Density Functional Theory, a very popular method used at NERSC for materials and chemistry applications. The initial design/prototype will target an FPGA, and results will also be projected to an ASIC. We’re also looking at specialized edge devices for high-speed data rates coming from microscopes and other scientific instruments. Later, we intend to generalize our results to broader implications for the DOE HPC workload. The goal of this project is to determine the feasibility and benefit of specialized architectures for future science problems and explore various technology and business models for the future of HPC.

We also have an ambitious cross-laboratory effort in quantum information science, looking at technology, methods, software and systems for applying near-term computing devices to simulate DOE mission problems. Berkeley Lab is receiving $30 million over five years to build and operate an Advanced Quantum Testbed. Researchers will use this testbed to explore superconducting quantum processors and evaluate how these emerging quantum devices can be utilized to advance scientific research. As part of this effort, Berkeley Lab will collaborate with the MIT Lincoln Laboratory to deploy different quantum processor architectures.

It sounds like both an interesting and, well, challenging organization to lead, and Berkeley Lab is mounting an extensive recruiting campaign to fill your job. What skills and experience do you think would best equip someone to succeed in the position?

Of course this requires strong leadership abilities: rallying people around common problems, both scientific and operational, and managing a diverse set of individuals and activities. It’s almost impossible to understand all of the science being carried out with support from Computing Sciences, but you need the curiosity to learn about that science, which ranges from cosmology to biology, so that you are comfortable talking about it. And it’s not just representing projects, it’s being able to figure out how the pieces fit together.

One last question: How did you decide that now was the right time to step down?

For the past 10 years, my research has taken a back seat to my lab management responsibilities. And now there is a great need for high-end computing, for new applications in genomics, data analysis, machine learning and other areas. I’m excited about pursuing those opportunities directly. There are a lot of really great problems to work on, but I haven’t been able to as I’ve been problem-solving on a different level.

And this brings me back to why I first got interested in computer science, because I loved solving problems. I developed software and algorithms and really liked the challenge of getting the software to work as intended. This seems like a good point in my career to rediscover that feeling.
