Berkeley Lab’s Kathy Yelick Looks Back and Ahead as She Contemplates Next Career Stage

By Jon Bashor

July 25, 2019

In mid-April, Kathy Yelick announced she would step down as Associate Laboratory Director (ALD) for the Computing Sciences organization at Lawrence Berkeley National Laboratory, a position she has held since September 2010. Yelick was also Director of the National Energy Research Scientific Computing Center (NERSC) from 2008 through 2012 and has been on the faculty of Electrical Engineering and Computer Sciences at UC Berkeley since 1991. She will return to campus in January 2020 while continuing to serve as a strategic advisor to Berkeley Lab Director Mike Witherell on lab-wide initiatives.

Kathy Yelick

“Kathy has played a central role in leading the transformation that computing has had across scientific inquiry, not only at our Lab but across the country,” said Berkeley Lab Director Mike Witherell in announcing Yelick’s decision. “We look forward to working with her on our Lab-wide strategic initiatives and on executing the new Strategic Plan.”

Berkeley Lab has posted the position and launched an international search for candidates to become the next ALD. Yelick recently sat down with retired Computing Sciences Communications Manager Jon Bashor to talk about the position and get her perspective on the organization and what the future holds.

A number of people in the HPC community were surprised by your announcement that you had decided to step down as the Computing Sciences ALD. What made you decide to step down at this time?

Katherine Yelick: First I want to say that I have thoroughly enjoyed this job and it’s been an honor to lead such a remarkable organization. I think it’s the best computing position anywhere in the Department of Energy national labs, with two of the highest-impact national facilities and the premier computational research organization. ESnet, DOE’s dedicated network for science, is essential to DOE researchers and collaborators around the world, providing critical support for data-intensive science. NERSC is a leader in pushing technology forward and pulling the broad science community along to achieve scientific breakthroughs on the latest HPC systems.

I’m obviously biased, but I think we have the best computational research program among the labs. We have leading researchers in applied mathematics, computer and data science, and a strong culture of using our core strengths and technologies to deliver breakthrough science capabilities in other areas. Researchers from postdocs to senior scientists are committed to cross-disciplinary collaborations, and they team with software engineers to build software solutions that are typically not possible in a university setting.

So why now? This is a very exciting time in high performance computing, which is broadening from the traditional focus on modeling and simulation to applications involving high performance data analytics and machine learning. At the same time, there are significant technology disruptions from the end of transistor scaling and from the prospect of quantum computation becoming viable for scientific applications in the foreseeable future. These will be transformative in science, and in my role as ALD I have helped shape a strategic vision of a more productive, automated and performant set of computational methods and systems for scientific inquiry. It's very rewarding to support this kind of high-impact work, but I do miss the more hands-on aspects of research – finding clever parallel algorithms or implementation techniques, working with students, and participating more directly in the educational mission of the university.

I’m especially excited about the ExaBiome project, part of DOE’s Exascale Computing Project, or ECP, which is developing scalable high performance methods for genome analysis. This is an example of using more computing to find more signals in current data sets, in addition to supporting the growth in sequencing data and interest in analyzing across data sets. But for me, personally, it’s also about using the class of programming techniques that I’ve worked on throughout my career to take algorithms that are predominantly written for shared memory machines and run them on HPC systems.
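One widely used example of that class of techniques is the partitioned global address space (PGAS) model in Berkeley Lab's UPC++ library, which the ExaBiome assembly tools build on. Below is a minimal, illustrative sketch of a hash-distributed k-mer count table in that style; it is not ExaBiome's code, and the input k-mers and final reduction are placeholders:

```cpp
#include <upcxx/upcxx.hpp>

#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// Each rank owns one shard of a global k-mer count table.
using kmer_map_t = upcxx::dist_object<std::unordered_map<std::string, long>>;

// A k-mer is owned by exactly one rank, chosen by hashing its string.
int owner_of(const std::string &kmer) {
  return std::hash<std::string>{}(kmer) % upcxx::rank_n();
}

int main() {
  upcxx::init();
  kmer_map_t local_counts({});

  // Placeholder input: in a real pipeline these would be k-mers extracted
  // from this rank's share of the sequencing reads.
  std::vector<std::string> kmers = {"ACGTA", "GGTAC", "ACGTA"};

  for (const auto &kmer : kmers) {
    // A one-sided RPC increments the count on whichever rank owns the k-mer;
    // no matching receive code is needed on the remote side.
    upcxx::rpc(owner_of(kmer),
               [](kmer_map_t &counts, const std::string &k) { (*counts)[k]++; },
               local_counts, kmer)
        .wait();
  }

  upcxx::barrier();
  long local_distinct = local_counts->size();
  long total = upcxx::reduce_all(local_distinct, upcxx::op_fast_add).wait();
  if (upcxx::rank_me() == 0)
    std::cout << "distinct k-mers across all ranks: " << total << "\n";
  upcxx::finalize();
  return 0;
}
```

The appeal of the idiom is that each insertion is a one-sided operation on the owning rank, so a hash-table algorithm written for shared memory carries over to distributed memory with relatively little restructuring.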

What about your tenure as ALD? What do you see as the most important accomplishments during that time?

Probably the most visible accomplishment was the 2015 opening of the new home for Computing Sciences: Shyh Wang Hall [shown in feature image above]. Our new building, which you can see from across the bay in San Francisco, allowed us to bring together staff from ESnet, NERSC, and the Computational Research Division, along with the Area staff who support our operational activities. The environment has proven great for fostering collaboration as people meet and talk in the hallways, meeting rooms and break rooms. As NERSC Director, I had three offices – one at UC Berkeley, one in Oakland for NERSC, and one at LBNL for my lab management role. That meant a lot of driving around for me, but more importantly it really kept the NERSC staff separated from the rest of Computing Sciences and the lab. We now have the NERSC staff and machine room, and much cheaper power by the way, back at the main lab site.

On a larger scale, we collaborated with both the DOE Office of Science labs and those of the National Nuclear Security Administration to launch the Exascale Computing Project, the largest DOE project of its kind. It required a lot of travel and a lot of negotiation for me and others on the leadership team, including John Shalf and Jonathan Carter, as well as researchers taking the initiative to extend their current efforts and map them to exascale applications, software and hardware projects.

Some of the large initiatives we’ve launched include Berkeley Quantum, the lab’s cross-disciplinary leadership in quantum information sciences, an effort that grew out of our Laboratory Directed Research and Development (LDRD) program. Similarly, the Center for Advanced Mathematics for Energy Research Applications (CAMERA) is a successful program funded by the Offices of Advanced Scientific Computing Research and Basic Energy Sciences that began as three linked LDRD projects. More recently, the Machine Learning for Science initiative has helped to energize researchers across the Lab in developing new methods and applications of learning methods applied to important science problems.

We have also seen the lab's overall budget grow to more than $1 billion per year. In particular, our program office, the Office of Advanced Scientific Computing Research, is now one of the largest funders of the lab. This has allowed us to hire more staff as we take on more operational and research challenges. Today, the Computing Sciences organization employs about 400 full-time career staff and postdocs. To help manage this workforce, we've hired new division directors for NERSC, ESnet and the Computational Research Division during that time.

Okay. Before being named ALD in 2010, you had served as NERSC Division Director since 2008 and, in fact, held both positions simultaneously for two years. Many people in the community equate NERSC with the entire computing landscape at Berkeley Lab, but there's a lot more going on. Can you give some other examples and tell us how it all fits together to make up the Computing Sciences Area?

NERSC does have exceptional brand recognition and has had an enormous impact, including providing computational support for six Nobel Laureates and their teams. This includes two projects each in chemistry and cosmology, along with climate modeling and neutrino science. NERSC users publish more than 2,000 peer-reviewed papers each year. NERSC is often the first HPC experience for students and postdocs, and the center provides pre-installed software for common science packages, in addition to supporting a vast array of programming tools for users who want to build their own applications. NERSC has a long history of data-intensive science, including support for some of the major experiments at the Large Hadron Collider. NERSC systems are used to analyze data from major telescopes and genome sequencers (including a partnership with the Joint Genome Institute), and the team is working closely with light sources and other major experimental facilities.

The line between simulation and observation is increasingly blurred as scientists look to simulations to interpret and explain observational data, or use measurements to augment first-principles simulation models. And high-throughput simulations, such as the highly successful Materials Project, use massive amounts of computing for simulation and then create an interesting data analysis problem for both machine learning and traditional analysis techniques. NERSC has a fantastic team led by Sudip Dosanjh, and we're all very excited about the upcoming delivery of NERSC-9, which will support simulation, data and learning applications. It will deploy some of the early exascale technology, including Cray's Slingshot network and a mixture of CPU (AMD) and GPU (Nvidia) nodes.

There is equally important work being done in ESnet, which under Inder Monga's leadership is laying the foundation for DOE's next-generation science network using software-defined networking and high-level services tailored to science. ESnet is critical to the idea of connecting DOE's experimental facilities to HPC facilities like NERSC for real-time analysis, as well as for archiving, serving, and reanalyzing experimental data. We call this the Superfacility model, because it combines DOE's hallmark facilities into a single integrated system.

ESnet has pioneered some of the networking tools for science, including OSCARS, the On-Demand Secure Circuits and Advance Reservation System, which allows researchers to set up end-to-end dynamic circuits across multiple networks when and where they need them, and to do it in minutes rather than months. They also developed the Science DMZ concept, which provides a secure, high-speed architecture for science data transfers at research organizations. The Science DMZ has been adopted by other DOE labs, NSF-funded universities, and networks in other countries.

Our Computational Research Division (CRD) under David Brown is paving the way for the future of science, building methods and tools that automate high-throughput experiments, discover signals in noisy data, support programming of increasingly complex hardware, and use sophisticated mathematical and learning models for predictive simulation. Within DOE's Exascale Computing Project (ECP), our goal is to produce next-generation scientific applications, both for new problems and as new capabilities in existing applications, that will enable breakthrough discoveries by combining the best math, computer science, and exascale systems. The AMReX Co-Design Center led by John Bell is putting adaptive mesh refinement methods into several ECP applications, so that exascale systems run codes that are also algorithmically efficient.
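To make the adaptive mesh refinement idea concrete, here is a deliberately tiny one-dimensional sketch of the flagging step, marking cells where the solution changes sharply so that only those regions would be refined. It is a conceptual illustration, not AMReX's block-structured API, and the test function and threshold are invented:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Conceptual sketch of the flagging step in adaptive mesh refinement:
// cells where the solution changes rapidly are marked for refinement so
// that compute effort concentrates where the physics demands it.
int main() {
  const int n = 32;
  std::vector<double> x(n), u(n);
  for (int i = 0; i < n; ++i) {
    x[i] = static_cast<double>(i) / (n - 1);
    u[i] = std::tanh(20.0 * (x[i] - 0.5));  // sharp front near x = 0.5
  }

  const double threshold = 0.5;  // refine where the jump between neighbors is large
  std::vector<bool> refine(n, false);
  for (int i = 1; i < n - 1; ++i)
    refine[i] = std::fabs(u[i + 1] - u[i - 1]) > threshold;

  for (int i = 0; i < n; ++i)
    if (refine[i]) std::printf("refine cell %2d near x = %.3f\n", i, x[i]);
  return 0;
}
```

In a production AMR code the flagged cells are grouped into patches and re-gridded at finer resolution, with the hierarchy updated as the solution evolves.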

While the bulk of ECP's portfolio and DOE's computing investments more broadly have historically focused on modeling and simulation, there is increasing interest in collaborations with experimentalists. In the past, the algorithms and software for major experiments were largely viewed as the purview of those science programs. The CAMERA Center led by James Sethian is a great example of the value of bringing advanced mathematics to DOE's light sources and other facilities. CAMERA is funded by DOE but was established years ago through a strategic investment by the Lab and has proven to be a very successful model for collaboration. Another example is FastBit, an indexing technology that allows users to search massive datasets up to 40 times faster and that was recognized with a 2008 R&D 100 award. Led by John Wu, the project was originally designed to meet the needs of particle physicists who must sort through billions of data records just to find 100 key pieces of information, but the technology translates to other applications, too, including biology and cybersecurity.
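FastBit's speed comes from bitmap indexes (compressed with its word-aligned hybrid scheme) rather than record-by-record scans. The toy example below shows the underlying idea for a low-cardinality column: each distinct value owns a bit vector, and a multi-condition query becomes bitwise AND/OR. The column names and data are invented, and real FastBit indexes are compressed and far more capable:

```cpp
#include <cstddef>
#include <cstdio>
#include <map>
#include <vector>

// Toy bitmap index over one low-cardinality column. Each distinct value
// owns a bit vector with one bit per record, so a selection such as
// "type == 1 AND detector == 7" becomes a bitwise AND of precomputed
// vectors instead of a scan over every record.
class BitmapIndex {
  std::map<int, std::vector<bool>> bitmaps_;  // value -> one bit per record
  std::size_t nrecords_ = 0;

 public:
  void append(int value) {
    if (!bitmaps_.count(value))
      bitmaps_[value] = std::vector<bool>(nrecords_, false);  // pad a new value
    for (auto &kv : bitmaps_) kv.second.push_back(kv.first == value);
    ++nrecords_;
  }

  // Bit vector of the records whose column equals `value`.
  std::vector<bool> equals(int value) const {
    auto it = bitmaps_.find(value);
    if (it == bitmaps_.end()) return std::vector<bool>(nrecords_, false);
    return it->second;
  }
};

int main() {
  BitmapIndex type, detector;  // two indexed columns (names are invented)
  const int types[]     = {1, 2, 1, 3, 1, 2};
  const int detectors[] = {7, 7, 9, 7, 7, 9};
  for (int i = 0; i < 6; ++i) {
    type.append(types[i]);
    detector.append(detectors[i]);
  }

  // Evaluate "type == 1 AND detector == 7" as a bitwise AND of two bit vectors.
  std::vector<bool> a = type.equals(1), b = detector.equals(7);
  for (std::size_t i = 0; i < a.size(); ++i)
    if (a[i] && b[i]) std::printf("record %zu matches\n", i);
  return 0;
}
```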

Berkeley Lab has an enormous opportunity to address these research and facility challenges with NERSC, ESnet, CRD, and its own experimental facilities, as well as strong collaborations with facilities at other labs. And we're looking at data issues that go beyond the big experiments, from embedded sensors in the environment to supporting the entire lifecycle of scientific data.

So it sounds like there is a lot going on to prepare for exascale and the experimental data challenges. Are there other big changes you see in the future of HPC? 

Well, I think we've only started to scratch the surface of machine learning techniques in science. It's a huge area of interest across the Lab – over 100 projects are using or developing machine learning techniques for everything from understanding the universe to improving the energy efficiency of buildings. Deb Agarwal has been spearheading cross-lab coordination of machine learning, with highlights on our ml4sci.lbl.gov website. There are interesting research issues in bias, robustness, and interpretability of these methods, issues that take on particular weight when the methods are applied in science. After all, as scientists our job is to ask why something is true, not just to note that things are correlated, and the models need to be consistent with known physical laws and be simple enough to be believable. And then there are issues of data size, model size, and how the various algorithms map onto HPC systems at scale. In addition to science applications, we're looking at machine learning to improve facility operations, manage experiments, design hardware, write software, and generally help automate certain aspects of what we do.

But, of course, the big challenge in HPC, and in computing more broadly, is the looming end of transistor density scaling and, with it, of the related gains in the size, power, cost, and performance of computing systems. I think we've done a great job in the HPC community of getting the most out of the systems we have today, using tools like Sam Williams' Roofline model to assess the performance of applications relative to the peak possible on multicore, manycore, or accelerator processor architectures. But things are going to get a lot harder. Exponential growth is hard for anyone to reason about, yet exponential improvement in computing is so ingrained in our field, and in everyone who uses a computing device, that I think it's hard for people to comprehend the impact this change will have.
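For reference, the Roofline model bounds a kernel's attainable performance by the lesser of the machine's peak compute rate and its memory bandwidth multiplied by the kernel's arithmetic intensity (flops per byte moved). A minimal sketch of that bound, with purely hypothetical machine numbers, looks like this:

```cpp
#include <algorithm>
#include <cstdio>

// Roofline bound: performance is limited either by peak compute or by
// memory bandwidth times arithmetic intensity, whichever is smaller.
double roofline_gflops(double peak_gflops, double bandwidth_gbs,
                       double arithmetic_intensity) {
  return std::min(peak_gflops, bandwidth_gbs * arithmetic_intensity);
}

int main() {
  const double peak = 3000.0;  // hypothetical peak compute, GFLOP/s
  const double bw   = 900.0;   // hypothetical memory bandwidth, GB/s
  for (double ai : {0.25, 1.0, 4.0, 16.0})
    std::printf("AI %5.2f flop/byte -> bound %7.1f GFLOP/s\n",
                ai, roofline_gflops(peak, bw, ai));
  return 0;
}
```

Kernels whose intensity puts them left of the ridge point are bandwidth-bound, which is why so much optimization effort goes into data movement rather than raw flops.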

We are taking two approaches to this “beyond Moore” computing problem; the first, and more immediate, is based on the traditional digital model of computing.

We're looking at purpose-built architectures, already being used for machine learning, as a potential future for other scientific applications in the absence of Moore's Law. In one project the team is reformulating the LS3DF algorithm to make it amenable to specialized hardware and developing a custom accelerator for Density Functional Theory, a very popular method used at NERSC for materials and chemistry applications. The initial design and prototype will target an FPGA, and the results will also be projected to an ASIC. We're also looking at specialized edge devices for the high data rates coming from microscopes and other scientific instruments. Later, we intend to generalize our results to draw broader implications for the DOE HPC workload. The goal of this project is to determine the feasibility and benefit of specialized architectures for future science problems and to explore various technology and business models for the future of HPC.

We also have an ambitious cross-laboratory effort in quantum information science, looking at technology, methods, software and systems for applying near-term computing devices to simulate DOE mission problems. Berkeley Lab is receiving $30 million over five years to build and operate an Advanced Quantum Testbed. Researchers will use this testbed to explore superconducting quantum processors and evaluate how these emerging quantum devices can be utilized to advance scientific research. As part of this effort, Berkeley Lab will collaborate with the MIT Lincoln Laboratory to deploy different quantum processor architectures.

It sounds like both an interesting and, well, challenging organization to lead, and Berkeley Lab is mounting an extensive recruiting campaign to fill your job. What skills and experience do you think would best equip someone to succeed in the position?

Of course this requires strong leadership abilities, to rally people around common problems, both scientific and operational, and to manage a diverse set of individuals and activities. It's almost impossible to understand all of the science being carried out with support from Computing Sciences, but you need the curiosity to learn about that science, which ranges from cosmology to biology, so that you are comfortable talking about it. And it's not just about representing projects; it's about being able to figure out how the pieces fit together.

One last question: How did you decide that now was the right time to step down?

For the past 10 years, my research has taken a back seat to my lab management responsibilities. And now there is a great need for high-end computing in new applications in genomics, data analysis, machine learning and other areas, and I'm excited about pursuing those opportunities directly. There are a lot of really great problems to work on, but I haven't been able to work on them because I've been problem-solving at a different level.

And this brings me back to why I first got interested in computer science, because I loved solving problems. I developed software and algorithms and really liked the challenge of getting the software to work as intended. This seems like a good point in my career to rediscover that feeling.
