Berkeley Lab’s Kathy Yelick Looks Back and Ahead as She Contemplates Next Career Stage

By Jon Bashor

July 25, 2019

In mid-April, Kathy Yelick announced she would step down as Associate Laboratory Director (ALD) for the Computing Sciences organization at Lawrence Berkeley National Laboratory, a position she has held since September 2010. Yelick was also Director of the National Energy Research Scientific Computing Center (NERSC) from 2008 through 2012 and has been on the faculty of Electrical Engineering and Computer Sciences at UC Berkeley since 1991. She will return to campus in January 2020 while continuing to serve as a strategic advisor to Berkeley Lab Director Mike Witherell on lab-wide initiatives.


“Kathy has played a central role in leading the transformation that computing has had across scientific inquiry, not only at our Lab but across the country,” said Berkeley Lab Director Mike Witherell in announcing Yelick’s decision. “We look forward to working with her on our Lab-wide strategic initiatives and on executing the new Strategic Plan.”

Berkeley Lab has posted the position and launched an international search for candidates to become the next ALD. Yelick recently sat down with retired Computing Sciences Communications Manager Jon Bashor to talk about the position and get her perspective on the organization and what the future holds.

A number of people in the HPC community were surprised by your announcement that you had decided to step down as the Computing Sciences ALD. What made you decide to step down at this time?

Katherine Yelick: First I want to say that I have thoroughly enjoyed this job and it’s been an honor to lead such a remarkable organization. I think it’s the best computing position anywhere in the Department of Energy national labs, with two of the highest-impact national facilities and the premier computational research organization. ESnet, DOE’s dedicated network for science, is essential to DOE researchers and collaborators around the world, providing critical support for data-intensive science. NERSC is a leader in pushing technology forward and pulling the broad science community along to achieve scientific breakthroughs on the latest HPC systems.

I’m obviously biased, but I think we have the best computational research program among the labs. We have leading researchers in applied mathematics, computer and data science, and a strong culture of using our core strengths and technologies to deliver breakthrough science capabilities in other areas. Researchers from postdocs to senior scientists are committed to cross-disciplinary collaborations, and they team with software engineers to build software solutions that are typically not possible in a university setting.

So why now? This is a very exciting time in high performance computing, which is broadening from the traditional focus on modeling and simulation applications to those involving high performance data analytics and machine learning. At the same time, there are significant technology disruptions, from the end of transistor scaling to the prospect of quantum computation becoming viable for scientific applications in the foreseeable future. These will be transformative in science, and in my role as ALD I have helped to shape a strategic vision of a more productive, automated and performant set of computational methods and systems for scientific inquiry. It’s very rewarding to support this kind of high-impact work, but I do miss the more hands-on aspects of research – finding clever parallel algorithms or implementation techniques, working with students, and participating more directly in the educational mission of the university.

I’m especially excited about the ExaBiome project, part of DOE’s Exascale Computing Project, or ECP, which is developing scalable high performance methods for genome analysis. This is an example of using more computing to find more signals in current data sets, in addition to supporting the growth in sequencing data and the interest in analyzing across data sets. But for me, personally, it’s also about using the class of programming techniques that I’ve worked on throughout my career to take algorithms that are predominantly written for shared memory machines and run them on distributed-memory HPC systems.
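To make that concrete, here is a minimal sketch of the PGAS (partitioned global address space) style in UPC++, the Berkeley Lab-developed library used by ExaBiome: each rank owns one shard of a distributed hash table (say, of k-mers), and an insert into a remote shard is expressed as a one-sided RPC to the owning rank rather than hand-coded message passing. The shard layout, function names, and the sample k-mer here are illustrative, not ExaBiome’s actual code.

```cpp
#include <upcxx/upcxx.hpp>

#include <string>
#include <unordered_map>

// Each rank owns one shard; dist_object gives every rank a handle to the
// others' shards, so an insert can be shipped to the owning rank as an RPC.
using Shard = upcxx::dist_object<std::unordered_map<std::string, long>>;

int owner_of(const std::string &key) {
  return std::hash<std::string>{}(key) % upcxx::rank_n();
}

upcxx::future<> insert(Shard &shard, const std::string &key, long count) {
  return upcxx::rpc(owner_of(key),
      // Runs on the owning rank; 'shard' resolves to that rank's local map.
      [](Shard &s, const std::string &k, long c) { (*s)[k] += c; },
      shard, key, count);
}

int main() {
  upcxx::init();
  Shard shard(std::unordered_map<std::string, long>{});
  // Every rank records one (illustrative) k-mer; its owner may be remote.
  insert(shard, "ACGTACGTA", 1).wait();
  upcxx::barrier();  // make sure all inserts have landed before any reads
  upcxx::finalize();
}
```

A production code layers message aggregation and communication overlap on top of this pattern, but the shared-memory-style expression of the algorithm is the point.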

What about your tenure as ALD? What do you see as the most important accomplishments during that time?

Probably the most visible accomplishment was the 2015 opening of the new home for Computing Sciences: Shyh Wang Hall [shown in feature image above]. Our new building, which you can see from across the bay in San Francisco, allowed us to bring together staff from ESnet, NERSC, and the Computational Research Division, along with the Area staff who support our operational activities. The environment has proven great for fostering collaboration as people meet and talk in the hallways, meeting rooms and break rooms. As NERSC Director, I had three offices – one at UC Berkeley, one in Oakland for NERSC, and one at LBNL for my lab management role. That meant a lot of driving around for me, but more importantly it really kept the NERSC staff separated from the rest of Computing Sciences and the lab. We now have the NERSC staff and machine room, and much cheaper power by the way, back at the main lab site.

On a larger scale, we collaborated with both the DOE Office of Science labs and those of the National Nuclear Security Administration to launch the Exascale Computing Project, the largest DOE project of its kind. It required a lot of travel and a lot of negotiation for me and others on the leadership team – including John Shalf and Jonathan Carter – as well as researchers taking the initiative to extend their current efforts and map them to exascale applications, software and hardware projects.

Some of the large initiatives we’ve launched include Berkeley Quantum, the lab’s cross-disciplinary leadership in quantum information sciences, an effort that grew out of our Laboratory Directed Research and Development (LDRD) program. Similarly, the Center for Advanced Mathematics for Energy Research Applications (CAMERA) is a successful program funded by the Offices of Advanced Scientific Computing Research and Basic Energy Sciences that began as three linked LDRD projects. More recently, the Machine Learning for Science initiative has helped to energize researchers across the Lab in developing new learning methods and applying them to important science problems.

We have also seen the lab’s overall budget grow to more than $1 billion per year. In particular, our program office, the Office of Advanced Scientific Computing Research, is now one of the largest funders of the lab. This has allowed us to hire more staff as we take on more operational and research challenges. Today, the Computing Sciences organization employs about 400 full-time career staff and postdocs. To help manage this workforce, we’ve hired new division directors for NERSC, ESnet and the Computational Research Division during that time.

Okay. Before being named ALD in 2010, you had served as NERSC Division Director since 2008 and, in fact, held both positions simultaneously for two years. Many people in the community equate NERSC with the entire computing landscape at Berkeley Lab, but there’s a lot more going on. Can you give some other examples and tell us how it all fits together to make up the Computing Sciences Area?

NERSC does have exceptional brand recognition and has had an enormous impact, including providing computational support for six Nobel Laureates and their teams. This includes two projects each in chemistry and cosmology, along with climate modeling and neutrino science. NERSC users publish over 2,000 peer-reviewed papers each year. NERSC is often the first HPC experience for students and postdocs, and the center provides pre-installed software for common science packages, in addition to supporting a vast array of programming tools for users who want to build their own applications. NERSC has a long history of data-intensive science, including support for some of the major experiments at the Large Hadron Collider. NERSC systems are used to analyze data from major telescopes and genome sequencers (including a partnership with the Joint Genome Institute), and the team is working closely with light sources and other major experimental facilities.

The line between simulation and observation is increasingly blurred as scientists look to simulations to interpret and explain observational data, or use measurements to augment first-principles simulation models. And high-throughput simulations, such as the highly successful Materials Project, use massive amounts of computing for simulation and then create an interesting data analysis problem for both machine learning and traditional analysis techniques. NERSC has a fantastic team led by Sudip Dosanjh, and we’re all very excited about the upcoming delivery of NERSC-9, which will support simulation, data and learning applications. It will deploy some of the early exascale technology, including Cray’s Slingshot network and a mixture of CPU (AMD) and GPU (Nvidia) nodes.

There is equally important work being done in ESnet, which under Inder Monga’s leadership is laying the foundation for DOE’s next-generation science network using software-defined networking and high-level services tailored to science. ESnet is critical to the idea of connecting DOE’s experimental facilities to HPC facilities like NERSC for real-time analysis, as well as for archiving, serving, and reanalyzing experimental data. We call this the Superfacility model, because it combines DOE’s hallmark facilities into a single integrated system.

ESnet has pioneered some of the networking tools for science, including OSCARS, the On-Demand Secure Circuit and Advance Reservation System, which allows researchers to set up end-to-end dynamic circuits across multiple networks when and where they need them — and do it in just minutes rather than months. They also developed the Science DMZ concept, which provides a secure, high-speed architecture for science data transfers for research organizations. The Science DMZ has been adopted by other DOE labs, NSF-funded universities, and networks in other countries.

Our Computational Research Division (CRD) under David Brown is paving the way for the future of science, building methods and tools that automate high-throughput experiments, discover signals in noisy data, support programming of increasingly complex hardware, and use sophisticated mathematical and learning models for predictive simulation. Within DOE’s Exascale Computing Project (ECP), our goal is to produce next-generation scientific applications – both for new problems and for new capabilities in existing applications – that will enable breakthrough discoveries by combining the best math, computer science, and exascale systems. The AMReX Co-Design Center led by John Bell is putting Adaptive Mesh Refinement methods into several ECP applications, so that exascale systems run codes that are also algorithmically efficient.

While the bulk of ECP’s portfolio, and DOE’s computing investments more broadly, has historically focused on modeling and simulation, there is increasing interest in collaborations with experimentalists. In the past, the algorithms and software for major experiments were largely viewed as the purview of those science programs. The CAMERA Center led by James Sethian is a great example of the value of bringing advanced mathematics to DOE’s light sources and other facilities. CAMERA is funded by DOE but was established years ago through a strategic investment by the Lab, and it has proven to be a very successful model for collaboration. Another example is FastBit, an indexing technology that allows users to search massive datasets up to 40 times faster and was recognized with a 2008 R&D 100 Award. Led by John Wu, the project was originally designed to meet the needs of particle physicists who must sort through billions of data records just to find 100 key pieces of information, but the technology translates to other applications as well, including biology and cybersecurity.
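The core idea behind bitmap indexing can be sketched in a few lines: each distinct value of a column gets a bitmap with one bit per record, so a query predicate becomes a bitmap scan and a conjunction of predicates reduces to bitwise ANDs. The toy version below (hypothetical columns, no compression) omits the Word-Aligned Hybrid compression and the binning of high-cardinality data that make FastBit practical at scale.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Toy bitmap index over one integer column with values in [0, cardinality).
// Each distinct value gets a bitmap with one bit per record.
struct BitmapIndex {
  std::vector<std::vector<std::uint64_t>> bitmaps;  // one bitmap per value

  BitmapIndex(const std::vector<int> &column, int cardinality)
      : bitmaps(cardinality,
                std::vector<std::uint64_t>((column.size() + 63) / 64, 0)) {
    for (std::size_t i = 0; i < column.size(); ++i)
      bitmaps[column[i]][i / 64] |= std::uint64_t(1) << (i % 64);
  }
};

// AND two bitmaps word by word: the records satisfying both predicates.
std::vector<std::uint64_t> conj(const std::vector<std::uint64_t> &x,
                                const std::vector<std::uint64_t> &y) {
  std::vector<std::uint64_t> out(x.size());
  for (std::size_t w = 0; w < x.size(); ++w) out[w] = x[w] & y[w];
  return out;
}

int main() {
  // Two hypothetical columns over five records.
  BitmapIndex type({0, 1, 0, 2, 1}, 3);  // e.g., an event-type code
  BitmapIndex flag({1, 1, 0, 1, 0}, 2);  // e.g., a trigger flag
  // Query: type == 0 AND flag == 1, answered without touching the records.
  auto hits = conj(type.bitmaps[0], flag.bitmaps[1]);
  for (std::size_t i = 0; i < 5; ++i)
    if (hits[i / 64] >> (i % 64) & 1) std::cout << "record " << i << " matches\n";
}
```

Because the index, not the raw records, is scanned, selecting the handful of interesting events out of billions touches only a few compressed bitmaps.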

Berkeley Lab has an enormous opportunity to address these research and facility issues with NERSC, ESnet, and CRD, with its own experimental facilities, and through strong collaborations with facilities at other labs. And we’re looking at data issues that go beyond the big experiments – to embedded sensors in the environment – and at supporting the entire lifecycle of scientific data.

So it sounds like there is a lot going on to prepare for exascale and the experimental data challenges. Are there other big changes you see in the future of HPC? 

Well, I think we’ve only started to scratch the surface of machine learning techniques in science. It’s a huge area of interest across the Lab – over 100 projects are using or developing machine learning techniques for everything from understanding the universe to improving the energy efficiency of buildings. Deb Agarwal has been spearheading a cross-lab coordination of machine learning, with highlights on our ml4sci.lbl.gov website. There are interesting research issues in bias, robustness, and interpretability of these methods, issues that take on particular importance when the methods are applied in science. After all, as scientists our job is to ask why something is true, not just to observe that things are correlated, and the models need to be consistent with known physical laws and simple enough to be believable. And then there are issues of data size, model size, and how the various algorithms map onto HPC systems at scale. In addition to science applications, we’re looking at machine learning to improve facility operations, manage experiments, design hardware, write software, and generally help automate certain aspects of what we do.

But, of course, the big challenge in HPC, and computing more broadly, is the looming end of transistor density scaling and of the related benefits in size, power, cost, and performance of computing systems. I think we’ve done a great job in the HPC community of getting the most out of the systems we have today, using tools like Sam Williams’ Roofline model to assess the performance of various applications relative to the peak possible when running on multicore, manycore, or accelerator processor architectures. But things are going to get a lot harder. Exponential growth is hard for people to reason about in general, yet the exponential improvements in computing are so ingrained in our field, and in the expectations of everyone who uses a computing device, that I think it’s hard for people to comprehend the impact this change will have.
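The Roofline model fits in one line: attainable performance is the lesser of the machine’s peak compute rate and its memory bandwidth multiplied by the kernel’s arithmetic intensity (flops per byte moved). A small sketch of that bound, using hypothetical machine numbers rather than any particular NERSC system:

```cpp
#include <algorithm>
#include <cstdio>

int main() {
  // Hypothetical machine, for illustration only.
  const double peak_gflops = 3000.0;  // peak compute, GF/s
  const double peak_bw = 100.0;       // memory bandwidth, GB/s

  // Attainable GF/s = min(peak compute, arithmetic intensity x bandwidth).
  for (double ai : {0.25, 1.0, 30.0, 100.0}) {  // flops per byte
    double roof = std::min(peak_gflops, ai * peak_bw);
    std::printf("AI = %6.2f flop/byte -> bound = %7.1f GF/s (%s-limited)\n",
                ai, roof, ai * peak_bw < peak_gflops ? "bandwidth" : "compute");
  }
}
```

Kernels to the left of the ridge point (30 flops per byte for these numbers) are bandwidth-limited no matter how well they vectorize, which is why the model is such a useful first diagnostic.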

We are taking two approaches to this “beyond Moore” computing problem. The first, and more immediate, is based on the traditional digital model of computing.

We’re looking at purpose-built architectures, already being used for machine learning, as a potential future for other scientific applications in the absence of Moore’s Law. In one project the team is reformulating the LS3DF algorithm to make it amenable to specialized hardware and to develop a custom accelerator for Density Functional Theory, a very popular method used at NERSC for materials and chemistry applications. The initial design/prototype will target an FPGA, and the results will also be projected onto an ASIC. We’re also looking at specialized edge devices for the high data rates coming from microscopes and other scientific instruments. Later, we intend to generalize our results to draw broader implications for the DOE HPC workload. The goal of this project is to determine the feasibility and benefit of specialized architectures for future science problems and to explore various technology and business models for the future of HPC.

We also have an ambitious cross-laboratory effort in quantum information science, looking at technology, methods, software and systems for applying near-term computing devices to simulate DOE mission problems. Berkeley Lab is receiving $30 million over five years to build and operate an Advanced Quantum Testbed. Researchers will use this testbed to explore superconducting quantum processors and evaluate how these emerging quantum devices can be utilized to advance scientific research. As part of this effort, Berkeley Lab will collaborate with the MIT Lincoln Laboratory to deploy different quantum processor architectures.

It sounds like both an interesting and, well, challenging organization to lead and Berkeley Lab is mounting an extensive recruiting campaign to fill your job. What skills and experience do you think would best equip someone to succeed in the position?

Of course this requires strong leadership abilities, to rally people around common problems, both scientific and operational, and to manage a diverse set of individuals and activities. It’s almost impossible to understand all of the science being carried out with support from Computing Sciences, but you need the curiosity to learn about that science, which ranges from cosmology to biology, so that you are comfortable talking about it. And it’s not just representing projects; it’s being able to figure out how the pieces fit together.

One last question: How did you decide that now was the right time to step down?

For the past 10 years, my research has taken a back seat to my lab management responsibilities. And now there is a great need for high-end computing, for new applications in genomics, data analysis, machine learning and other areas. I’m excited about pursuing those opportunities directly. There are a lot of really great problems to work on, but I haven’t been able to as I’ve been problem-solving on a different level.

And this brings me back to why I first got interested in computer science, because I loved solving problems. I developed software and algorithms and really liked the challenge of getting the software to work as intended. This seems like a good point in my career to rediscover that feeling.
