Berkeley Lab’s Kathy Yelick Looks Back and Ahead as She Contemplates Next Career Stage

By Jon Bashor

July 25, 2019

In mid-April, Kathy Yelick announced she would step down as Associate Laboratory Director (ALD) for the Computing Sciences organization at Lawrence Berkeley National Laboratory, a position she has held since September 2010. Yelick was also Director of the National Energy Research Scientific Computing Center (NERSC) from 2008 through 2012 and has been on the faculty of Electrical Engineering and Computer Sciences at UC Berkeley since 1991. She will return to campus in January 2020 while continuing to serve as a strategic advisor to Berkeley Lab Director Mike Witherell on lab-wide initiatives.

Kathy Yelick

“Kathy has played a central role in leading the transformation that computing has had across scientific inquiry, not only at our Lab but across the country,” said Berkeley Lab Director Mike Witherell in announcing Yelick’s decision. “We look forward to working with her on our Lab-wide strategic initiatives and on executing the new Strategic Plan.”

Berkeley Lab has posted the position and launched an international search for candidates to become the next ALD. Yelick recently sat down with retired Computing Sciences Communications Manager Jon Bashor to talk about the position and get her perspective on the organization and what the future holds.

A number of people in the HPC community were surprised by your announcement that you had decided to step down as the Computing Sciences ALD. What made you decide to step down at this time?

Katherine Yelick: First I want to say that I have thoroughly enjoyed this job and it’s been an honor to lead such a remarkable organization. I think it’s the best computing position anywhere in the Department of Energy national labs, with two of the highest-impact national facilities and the premier computational research organization. ESnet, DOE’s dedicated network for science, is essential to DOE researchers and collaborators around the world, providing critical support for data-intensive science. NERSC is a leader in pushing technology forward and pulling the broad science community along to achieve scientific breakthroughs on the latest HPC systems.

I’m obviously biased, but I think we have the best computational research program among the labs. We have leading researchers in applied mathematics, computer and data science, and a strong culture of using our core strengths and technologies to deliver breakthrough science capabilities in other areas. Researchers from postdocs to senior scientists are committed to cross-disciplinary collaborations, and they team with software engineers to build software solutions that are typically not possible in a university setting.

So why now? This is a very exciting time in high performance computing, which is broadening from the traditional focus on modeling and simulation applications to those involving high performance data analytics and machine learning. At the same time there are significant technology disruptions from the end of transistor scaling, and quantum computation is emerging as potentially viable for scientific applications in the foreseeable future. These will be transformative in science, and in my role as ALD I have helped to shape a strategic vision of a more productive, automated and performant set of computational methods and systems for scientific inquiry. It’s very rewarding to support this kind of high-impact work, but I do miss the more hands-on aspects of research – finding clever parallel algorithms or implementation techniques, working with students, and participating more directly in the educational mission of the university.

I’m especially excited about the ExaBiome project, part of DOE’s Exascale Computing Project, or ECP, which is developing scalable high performance methods for genome analysis. This is an example of using more computing to find more signals in current data sets, in addition to supporting the growth in sequencing data and interest in analyzing across data sets. But for me, personally, it’s also about using the class of programming techniques that I’ve worked on throughout my career to take algorithms that are predominantly written for shared memory machines and run them on HPC systems.
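The pattern Yelick describes – taking a data structure that is natural on a shared-memory machine, such as a k-mer hash table used in genome assembly, and partitioning it across the nodes of an HPC system – can be sketched in miniature. The sketch below is purely illustrative: it simulates "ranks" in plain Python rather than using UPC++ or MPI, and the function names are hypothetical, not ExaBiome code.

```python
from collections import Counter

def kmers(seq, k=3):
    """Yield all overlapping k-mers of a DNA sequence."""
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]

def owner(kmer, nranks):
    """Hash-partition: each k-mer is owned by exactly one rank."""
    return hash(kmer) % nranks

def distributed_kmer_count(sequences, nranks=4, k=3):
    # Each "rank" holds only its partition of the global hash table,
    # mimicking a partitioned global address space (PGAS) layout in
    # which updates are sent to the owning node.
    tables = [Counter() for _ in range(nranks)]
    for seq in sequences:
        for km in kmers(seq, k):
            tables[owner(km, nranks)][km] += 1
    # The union of the per-rank tables is the global k-mer histogram.
    total = Counter()
    for t in tables:
        total.update(t)
    return total

counts = distributed_kmer_count(["ACGTACGT", "CGTACG"])
print(counts["ACG"])  # "ACG" appears 3 times across both reads
```

The key property is that any k-mer has a single owner, so no two ranks ever hold conflicting entries; in a real distributed setting the increment becomes a one-sided remote update rather than a local one.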

What about your tenure as ALD? What do you see as the most important accomplishments during that time?

Probably the most visible accomplishment was the 2015 opening of the new home for Computing Sciences: Shyh Wang Hall [shown in feature image above]. Our new building, which you can see from across the bay in San Francisco, allowed us to bring together staff from ESnet, NERSC, and the Computational Research Division, along with the Area staff who support our operational activities. The environment has proven great for fostering collaboration as people meet and talk in the hallways, meeting rooms and break rooms. As NERSC Director, I had three offices – one at UC Berkeley, one in Oakland for NERSC, and one at LBNL for my lab management role. That meant a lot of driving around for me, but more importantly it really kept the NERSC staff separated from the rest of Computing Sciences and the lab. We now have the NERSC staff and machine room, and much cheaper power by the way, back at the main lab site.

On a larger scale, we collaborated with both the DOE Office of Science labs and those of the National Nuclear Security Administration to launch the Exascale Computing Project, the largest DOE project of its kind. It required a lot of travel and a lot of negotiations for me and others on the leadership team, including John Shalf and Jonathan Carter, as well as researchers taking the initiative to extend their current efforts and map them to exascale applications, software and hardware projects.

Some of the large initiatives we’ve launched include Berkeley Quantum, the lab’s cross-disciplinary leadership in quantum information sciences, an effort that grew out of our Laboratory Directed Research and Development (LDRD) program. Similarly, the Center for Advanced Mathematics for Energy Research Applications (CAMERA) is a successful program funded by the Offices of Advanced Scientific Computing Research and Basic Energy Sciences that began as three linked LDRD projects. More recently, the Machine Learning for Science initiative has helped to energize researchers across the Lab in developing new methods and applications of learning methods applied to important science problems.

We have also seen the lab’s overall budget grow to more than $1 billion per year. In particular, our program office, the Office of Advanced Scientific Computing Research, is now one of the largest funders of the lab. This has allowed us to hire more staff as we take on more operational and research challenges. Today, the Computing Sciences organization employs about 400 full-time career staff and postdocs. To help manage this workforce, we’ve hired new division directors for NERSC, ESnet and the Computational Research Division during that time.

Okay. Before being named ALD in 2010, you had served as NERSC Division Director since 2008, and in fact held both positions simultaneously for two years. Many people in the community equate NERSC with the entire computing landscape at Berkeley Lab, but there’s a lot more going on. Can you give some other examples and tell us how it all fits together to make up the Computing Sciences Area?

NERSC does have exceptional brand recognition and has had an enormous impact, including providing computational support for six Nobel Laureates and their teams. This includes two projects each in chemistry and cosmology, along with climate modeling and neutrino science. NERSC users publish over 2,000 peer-reviewed papers each year. NERSC is often the first HPC experience for students and postdocs, and the center provides pre-installed software for common science packages, in addition to supporting a vast array of programming tools for users who want to build their own applications. NERSC has a long history of data-intensive science, including support for some of the major experiments at the Large Hadron Collider. NERSC systems are used to analyze data from major telescopes and genome sequencers (including a partnership with the Joint Genome Institute), and the team is working closely with light sources and other major experimental facilities.

The role of simulation and observation is increasingly blurred as scientists look to simulations to interpret and explain observational data, or to use measurements to augment first-principles simulation models. And high-throughput simulations, such as the highly successful Materials Project, use massive amounts of computing for simulation, and then create an interesting data analysis problem for both machine learning and traditional analysis techniques. NERSC has a fantastic team led by Sudip Dosanjh, and we’re all very excited about the upcoming delivery of NERSC-9, which will support simulation, data and learning applications. It will deploy some of the early exascale technology, including Cray’s Slingshot network and a mixture of CPU (AMD) and GPU (Nvidia) nodes.

There is also equally important work being done in ESnet, which under Inder Monga’s leadership is laying the foundation for DOE’s next-generation science network using software-defined networking and high-level services tailored to science. ESnet is critical to the idea of connecting DOE’s experimental facilities to the HPC facilities like NERSC for real-time analysis, as well as archiving, serving, and the reanalysis of experimental data. We call this the Superfacility model, because it combines DOE hallmark facilities into a single integrated system.

ESnet has pioneered some of the networking tools for science, including OSCARS, the On-Demand Secure Circuits and Advance Reservation System, which allows researchers to set up end-to-end dynamic circuits across multiple networks when and where they need them — and do it in just minutes rather than months. They also developed the Science DMZ concept, which provides a secure, high-speed architecture for science data transfers at research organizations. The Science DMZ has been adopted by other DOE labs, NSF-funded universities, and networks in other countries.

Our Computational Research Division (CRD) under David Brown is paving the way for the future of science, building methods and tools that automate high-throughput experiments, discover signals in noisy data, support programming of increasingly complex hardware, and use sophisticated mathematical and learning models for predictive simulation. Within DOE’s Exascale Computing Project (ECP), our goal is to produce the next-generation scientific applications for new problems and features of existing applications that will enable breakthrough discoveries by combining the best math, computer science, and exascale systems. The AMReX Co-Design Center led by John Bell is putting Adaptive Mesh Refinement methods into several ECP applications, so we use exascale systems for codes that are algorithmically efficient.
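The idea behind Adaptive Mesh Refinement – spend resolution only where the solution varies rapidly – can be illustrated with a toy one-dimensional flag-and-refine step. This is a simplified sketch of the concept, not AMReX code; real AMR frameworks manage hierarchies of patches in multiple dimensions with time subcycling.

```python
import math

def flag_cells(u, dx, threshold):
    """Flag cells whose local gradient magnitude exceeds a threshold."""
    flags = [False] * len(u)
    for i in range(1, len(u) - 1):
        grad = abs(u[i + 1] - u[i - 1]) / (2 * dx)
        if grad > threshold:
            flags[i] = True
    return flags

def refine(x, u, flags):
    """Insert a midpoint after each flagged cell, interpolating the data."""
    new_x, new_u = [], []
    for i in range(len(u)):
        new_x.append(x[i])
        new_u.append(u[i])
        if flags[i] and i + 1 < len(u):
            new_x.append(0.5 * (x[i] + x[i + 1]))
            new_u.append(0.5 * (u[i] + u[i + 1]))
    return new_x, new_u

# Coarse grid sampling a steep tanh front centered at x = 0.5.
dx = 0.1
x = [i * dx for i in range(11)]
u = [math.tanh(20 * (xi - 0.5)) for xi in x]
fx, fu = refine(x, u, flag_cells(u, dx, threshold=2.0))
print(len(x), "->", len(fx))  # extra points appear only near the front
```

Only the three cells straddling the front are flagged, so the refined grid adds resolution exactly where the gradient lives – the algorithmic efficiency Bell’s co-design work exploits at scale.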

While the bulk of ECP’s portfolio and DOE’s computing investments more broadly have historically focused on modeling and simulation, there is increasing interest in collaborations with experimentalists. In the past, the algorithms and software for major experiments had largely been viewed as the purview of those science programs. The CAMERA Center led by James Sethian is a great example of the value of bringing advanced mathematics to DOE’s light sources and other facilities. CAMERA is funded by DOE but was established years ago through a strategic investment by the Lab and has proven to be a very successful model for collaboration. Another example is FastBit, an indexing technology that allows users to search massive datasets up to 40 times faster and was recognized with a 2008 R&D 100 award. Led by John Wu, this project was originally designed to meet the needs of particle physicists who must sort through billions of data records just to find 100 key pieces of information, but it translates to other applications, too, including biology and cybersecurity.
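FastBit’s speedups come from bitmap indexing: for each distinct value in a column, precompute a bitmask recording which rows carry it, so a selection becomes a few bitwise operations instead of a full scan. Here is a minimal, uncompressed sketch of that idea (the real FastBit uses compressed bitmaps such as Word-Aligned Hybrid encoding, which this toy version omits):

```python
def build_bitmap_index(column):
    """Map each distinct value to an integer bitmask over row numbers."""
    index = {}
    for row, value in enumerate(column):
        index[value] = index.get(value, 0) | (1 << row)
    return index

def select(index, values):
    """OR the bitmaps for the requested values; set bits mark matching rows."""
    mask = 0
    for v in values:
        mask |= index.get(v, 0)
    return [row for row in range(mask.bit_length()) if (mask >> row) & 1]

# Toy particle-physics column: find every muon or photon event.
particles = ["muon", "electron", "muon", "photon", "electron", "muon"]
idx = build_bitmap_index(particles)
print(select(idx, ["muon", "photon"]))  # rows 0, 2, 3, 5
```

Because the query reduces to hardware-friendly OR/AND operations on precomputed bitmaps, its cost scales with the number of distinct values queried rather than the number of records scanned.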

Berkeley Lab has an enormous opportunity to address the research and facility issues here with NERSC, ESnet, CRD, and its own experimental facilities as well as strong collaborations with facilities at other Labs. And we’re looking at the data issues that go beyond the big experiments to embedded sensors in the environment and supporting the entire lifecycle for scientific data.

So it sounds like there is a lot going on to prepare for exascale and the experimental data challenges. Are there other big changes you see in the future of HPC? 

Well, I think we’ve only started to scratch the surface of machine learning techniques in science. It’s a huge area of interest across the Lab – over 100 projects are using or developing machine learning techniques for everything from understanding the universe to improving the energy efficiency of buildings. Deb Agarwal has been spearheading a cross-lab coordination of machine learning, with highlights on our ml4sci.lbl.gov website. There are interesting research issues in bias, robustness, and interpretability of the methods, issues that take on particular urgency when the methods are applied in science. After all, as scientists our job is to ask why something is true, not just that things are correlated, and the models need to be consistent with known physical laws and be simple enough to be believable. And then there are issues of data size, model size, and how the various algorithms map onto HPC systems at scale. In addition to science applications, we’re looking at machine learning to improve facility operations, manage experiments, design hardware, write software, and generally help automate certain aspects of what we do.

But, of course, the big challenge in HPC, and computing more broadly, is the looming end of transistor density scaling and the related benefits in size, power, cost, and performance of computing systems. I think we’ve done a great job in the HPC community of getting the most out of the systems we have today, using tools like Sam Williams’ Roofline model to assess the performance of various applications relative to the peak possible when running on multicore, manycore, or accelerator processor architectures. But things are going to get a lot harder. It’s interesting to me that as hard as it is for people to think about exponential growth in general, the exponential improvements in computing are so ingrained in our field and everyone who uses a computing device that I think it’s hard for people to comprehend the impact that this change will have.
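The Roofline model Yelick mentions bounds a kernel’s attainable performance by min(peak compute, arithmetic intensity × peak memory bandwidth), where arithmetic intensity is FLOPs performed per byte moved. A worked sketch, using illustrative machine numbers rather than any vendor’s published specs:

```python
def roofline(ai, peak_flops, peak_bw):
    """Attainable GFLOP/s for a kernel with arithmetic intensity `ai`
    (FLOPs per byte), on a machine with the given peak GFLOP/s and GB/s."""
    return min(peak_flops, ai * peak_bw)

# Illustrative machine: 3000 GFLOP/s peak compute, 200 GB/s memory bandwidth.
PEAK, BW = 3000.0, 200.0
ridge = PEAK / BW  # above this AI (15 FLOPs/byte) a kernel can be compute-bound

# STREAM-like triad a[i] = b[i] + s*c[i]: 2 FLOPs per 24 bytes of double traffic.
triad_ai = 2 / 24
print(roofline(triad_ai, PEAK, BW))  # memory-bound: ~16.7 GFLOP/s

# Well-blocked dense matrix multiply reaches high AI and hits the compute peak.
print(roofline(64.0, PEAK, BW))  # compute-bound: 3000.0 GFLOP/s
```

The model makes the diagnosis Yelick describes immediate: a kernel sitting far below the memory-bandwidth slope has room for implementation tuning, while one pinned against the bandwidth roof can only improve by raising its arithmetic intensity.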

We are taking two approaches to this “beyond Moore” computing problem. The first, and more immediate, is based on the traditional digital model of computing.

We’re looking at purpose-built architectures, already being used for machine learning, as a potential future for other scientific applications in the absence of Moore’s Law. In one project the team is reformulating the LS3DF algorithm to make it amenable to specialized hardware and to develop a custom accelerator for Density Functional Theory, a very popular method used at NERSC for materials and chemistry applications. The initial design/prototype will target an FPGA, and results will also be projected to an ASIC. We’re also looking at specialized edge devices for high-speed data rates coming from microscopes and other scientific instruments. Later, we intend to generalize our results to broader implications for the DOE HPC workload. The goal of this project is to determine the feasibility and benefit of specialized architectures for future science problems and explore various technology and business models for the future of HPC.

We also have an ambitious cross-laboratory effort in quantum information science, looking at technology, methods, software and systems for applying near-term computing devices to simulate DOE mission problems. Berkeley Lab is receiving $30 million over five years to build and operate an Advanced Quantum Testbed. Researchers will use this testbed to explore superconducting quantum processors and evaluate how these emerging quantum devices can be utilized to advance scientific research. As part of this effort, Berkeley Lab will collaborate with MIT Lincoln Laboratory to deploy different quantum processor architectures.

It sounds like both an interesting and, well, challenging organization to lead and Berkeley Lab is mounting an extensive recruiting campaign to fill your job. What skills and experience do you think would best equip someone to succeed in the position?

Of course this requires strong leadership abilities, to rally people around common problems, both scientific and operational, and to manage a diverse set of individuals and activities. It’s almost impossible to understand all of the science being carried out with support from Computing Sciences, but you need curiosity to learn about that science, which ranges from cosmology to biology, so that you are comfortable talking about it. And it’s not just about representing projects; it’s being able to figure out how the pieces fit together.

One last question: How did you decide that now was the right time to step down?

For the past 10 years, my research has taken a back seat to my lab management responsibilities. And now there is a great need for high-end computing, for new applications in genomics, data analysis, machine learning and other areas. I’m excited about pursuing those opportunities directly. There are a lot of really great problems to work on, but I haven’t been able to as I’ve been problem-solving on a different level.

And this brings me back to why I first got interested in computer science, because I loved solving problems. I developed software and algorithms and really liked the challenge of getting the software to work as intended. This seems like a good point in my career to rediscover that feeling.
