Berkeley Lab’s Kathy Yelick Looks Back and Ahead as She Contemplates Next Career Stage

By Jon Bashor

July 25, 2019

In mid-April, Kathy Yelick announced she would step down as Associate Laboratory Director (ALD) for the Computing Sciences organization at Lawrence Berkeley National Laboratory, a position she has held since September 2010. Yelick was also Director of the National Energy Research Scientific Computing Center (NERSC) from 2008 through 2012 and has been on the faculty of Electrical Engineering and Computer Sciences at UC Berkeley since 1991. She will return to campus in January 2020 while continuing to serve as a strategic advisor to Berkeley Lab Director Mike Witherell on lab-wide initiatives.

Kathy Yelick

“Kathy has played a central role in leading the transformation that computing has had across scientific inquiry, not only at our Lab but across the country,” said Berkeley Lab Director Mike Witherell in announcing Yelick’s decision. “We look forward to working with her on our Lab-wide strategic initiatives and on executing the new Strategic Plan.”

Berkeley Lab has posted the position and launched an international search for candidates to become the next ALD. Yelick recently sat down with retired Computing Sciences Communications Manager Jon Bashor to talk about the position and get her perspective on the organization and what the future holds.

A number of people in the HPC community were surprised by your announcement that you had decided to step down as the Computing Sciences ALD. What made you decide to step down at this time?

Katherine Yelick: First I want to say that I have thoroughly enjoyed this job and it’s been an honor to lead such a remarkable organization. I think it’s the best computing position anywhere in the Department of Energy national labs, with two of the highest-impact national facilities and the premier computational research organization. ESnet, DOE’s dedicated network for science, is essential to DOE researchers and collaborators around the world, providing critical support for data-intensive science. NERSC is a leader in pushing technology forward and pulling the broad science community along to achieve scientific breakthroughs on the latest HPC systems.

I’m obviously biased, but I think we have the best computational research program among the labs. We have leading researchers in applied mathematics, computer and data science, and a strong culture of using our core strengths and technologies to deliver breakthrough science capabilities in other areas. Researchers from postdocs to senior scientists are committed to cross-disciplinary collaborations, and they team with software engineers to build software solutions that are typically not possible in a university setting.

So why now? This is a very exciting time in high performance computing, which is broadening from the traditional focus on modeling and simulation applications to those involving high performance data analytics and machine learning. At the same time there are significant technology disruptions from the end of transistor scaling and the emergence of quantum computation, which may become viable for scientific applications in the foreseeable future. These shifts will be transformative for science, and in my role as ALD I have helped shape a strategic vision of a more productive, automated and performant set of computational methods and systems for scientific inquiry. It’s very rewarding to support this kind of high-impact work, but I do miss the more hands-on aspects of research: finding clever parallel algorithms or implementation techniques, working with students, and participating more directly in the educational mission of the university.

I’m especially excited about the ExaBiome project, part of DOE’s Exascale Computing Project, or ECP, which is developing scalable high performance methods for genome analysis. This is an example of using more computing to find more signals in current data sets, in addition to supporting the growth in sequencing data and interest in analyzing across data sets. But for me, personally, it’s also about using the class of programming techniques that I’ve worked on throughout my career to take algorithms that are predominantly written for shared memory machines and run them on HPC systems.
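To make that concrete, here is a minimal sketch in the spirit of UPC++, the partitioned global address space (PGAS) library that grew out of this line of work; the k-mer workload and all names below are illustrative, not ExaBiome’s actual code. Each rank owns a shard of a distributed hash table, and an insert from any node becomes a remote procedure call to the owning rank, which is what lets a shared-memory-style algorithm run across distributed memory.

    #include <upcxx/upcxx.hpp>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Sketch of a PGAS-style distributed hash table: each rank owns one
    // shard, and inserts are routed to the owner as remote procedure calls.
    // The k-mer strings here are stand-in input, not real sequencing data.
    int main() {
        upcxx::init();
        using shard_t = std::unordered_map<std::string, long>;
        upcxx::dist_object<shard_t> counts{shard_t{}};

        std::vector<std::string> local_kmers = {"ACGT", "TTGA", "ACGT"};
        for (const std::string& kmer : local_kmers) {
            // The hash chooses the owning rank, exactly as a shared-memory
            // hash table would choose a bucket.
            int owner = std::hash<std::string>{}(kmer) % upcxx::rank_n();
            upcxx::rpc(owner,
                       [](upcxx::dist_object<shard_t>& c, const std::string& k) {
                           ++(*c)[k];  // runs on the owning rank; no locks needed
                       },
                       counts, kmer).wait();
        }
        upcxx::barrier();  // all inserts complete before anyone reads
        upcxx::finalize();
    }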

What about your tenure as ALD? What do you see as the most important accomplishments during that time?

Probably the most visible accomplishment was the 2015 opening of the new home for Computing Sciences: Shyh Wang Hall [shown in feature image above]. Our new building, which you can see from across the bay in San Francisco, allowed us to bring together staff from ESnet, NERSC, and the Computational Research Division, along with the Area staff who support our operational activities. The environment has proven great for fostering collaboration as people meet and talk in the hallways, meeting rooms and break rooms. As NERSC Director, I had three offices – one at UC Berkeley, one in Oakland for NERSC, and one at LBNL for my lab management role. That meant a lot of driving around for me, but more importantly it really kept the NERSC staff separated from the rest of Computing Sciences and the lab. We now have the NERSC staff and machine room, and much cheaper power by the way, back at the main lab site.

On a larger scale, we collaborated with both the DOE Office of Science labs and those of the National Nuclear Security Administration to launch the Exascale Computing Project, the largest DOE project of its kind. It required a lot of travel and a lot of negotiations for me and others on the leadership team, including John Shalf and Jonathan Carter, as well as researchers taking the initiative to extend their current efforts and map them to exascale applications, software and hardware projects.

Some of the large initiatives we’ve launched include Berkeley Quantum, the lab’s cross-disciplinary leadership in quantum information sciences, an effort that grew out of our Laboratory Directed Research and Development (LDRD) program. Similarly, the Center for Advanced Mathematics for Energy Research Applications (CAMERA) is a successful program funded by the Offices of Advanced Scientific Computing Research and Basic Energy Sciences that began as three linked LDRD projects. More recently, the Machine Learning for Science initiative has helped energize researchers across the Lab in developing new methods and applying machine learning to important science problems.

We have also seen the lab’s overall budget grow to more than $1 billion per year. In particular, our program office, the Office of Advanced Scientific Computing Research, is now one of the largest funders of the lab. This has allowed us to hire more staff as we take on more operational and research challenges. Today, the Computing Sciences organization employs about 400 full-time career staff and postdocs. To help manage this workforce, we’ve hired new division directors for NERSC, ESnet and the Computational Research Division during that time.

Okay. Before being named ALD in 2010, you had served as NERSC Division Director since 2008 and, in fact, held both positions simultaneously for two years. Many people in the community equate NERSC with the entire computing landscape at Berkeley Lab, but there’s a lot more going on. Can you give some other examples and tell us how it all fits together to make up the Computing Sciences Area?

NERSC does have exceptional brand recognition and has had an enormous impact, including providing computational support for six Nobel Laureates and their teams. This includes two projects each in chemistry and cosmology, along with climate modeling and neutrino science. NERSC users publish over 2,000 peer-reviewed papers each year. NERSC is often the first HPC experience for students and postdocs, and the center provides pre-installed software for common science packages, in addition to supporting a vast array of programming tools for users who want to build their own applications. NERSC has a long history of data-intensive science, including support for some of the major experiments at the Large Hadron Collider. NERSC systems are used to analyze data from major telescopes and genome sequencers (including a partnership with the Joint Genome Institute), and the team is working closely with light sources and other major experimental facilities.

The line between simulation and observation is increasingly blurred as scientists look to simulations to interpret and explain observational data, or use measurements to augment first-principles simulation models. And high-throughput simulations, such as the highly successful Materials Project, use massive amounts of computing for simulation and then create an interesting data analysis problem for both machine learning and traditional analysis techniques. NERSC has a fantastic team led by Sudip Dosanjh, and we’re all very excited about the upcoming delivery of NERSC-9, which will support simulation, data and learning applications. It will deploy some of the early exascale technology, including Cray’s Slingshot network and a mixture of CPU (AMD) and GPU (Nvidia) nodes.

There is also equally important work being done in ESnet, which under Inder Monga’s leadership is laying the foundation for DOE’s next-generation science network using software-defined networking and high-level services tailored to science. ESnet is critical to the idea of connecting DOE’s experimental facilities to HPC facilities like NERSC for real-time analysis, as well as for archiving, serving, and reanalyzing experimental data. We call this the Superfacility model because it combines DOE’s hallmark facilities into a single integrated system.

ESnet has pioneered some of the networking tools for science, including OSCARS, the On-Demand Secure Circuits and Advance Reservation System, which allows researchers to set up end-to-end dynamic circuits across multiple networks when and where they need them, in minutes rather than months. They also developed the Science DMZ concept, a secure, high-speed network architecture for science data transfers at research organizations. The Science DMZ has been adopted by other DOE labs, NSF-funded universities, and networks in other countries.

Our Computational Research Division (CRD) under David Brown is paving the way for the future of science, building methods and tools that automate high-throughput experiments, discover signals in noisy data, support programming of increasingly complex hardware, and use sophisticated mathematical and learning models for predictive simulation. Within DOE’s Exascale Computing Project (ECP), our goal is to produce next-generation scientific applications, both for new problems and for new capabilities in existing applications, that will enable breakthrough discoveries by combining the best math, computer science, and exascale systems. The AMReX Co-Design Center led by John Bell is putting Adaptive Mesh Refinement methods into several ECP applications, so that exascale systems run codes that are algorithmically efficient.

While the bulk of ECP’s portfolio, and DOE’s computing investments more broadly, have historically focused on modeling and simulation, there is increasing interest in collaborations with experimentalists. In the past, the algorithms and software for major experiments had largely been viewed as the purview of those science programs. The CAMERA Center led by James Sethian is a great example of the value of bringing advanced mathematics to DOE’s light sources and other facilities. CAMERA is funded by DOE but was established years ago through a strategic investment by the Lab, and it has proven to be a very successful model for collaboration. Another example is FastBit, an indexing technology that allows users to search massive datasets up to 40 times faster and was recognized with a 2008 R&D 100 award. Led by John Wu, the project was originally designed to meet the needs of particle physicists who must sort through billions of data records just to find 100 key pieces of information, but the technology translates to other applications, too, including biology and cybersecurity.
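The core idea behind that kind of bitmap indexing fits in a few lines. Below is a toy, equality-encoded sketch of the technique (illustrative only, not FastBit’s actual API): one bit-vector per distinct value turns a selection such as “rows where value equals 7” into a cheap bitmap lookup, and conjunctions of conditions into bitwise ANDs, instead of a scan over every record.

    #include <cstdint>
    #include <cstdio>
    #include <map>
    #include <vector>

    // Toy equality-encoded bitmap index: one packed bit-vector per distinct
    // value, with row r of value v recorded as bit r of v's bitmap.
    class BitmapIndex {
        std::map<int, std::vector<uint64_t>> bitmaps_;  // value -> row bits
        size_t rows_ = 0;
    public:
        void append(int value) {
            std::vector<uint64_t>& bv = bitmaps_[value];
            bv.resize(rows_ / 64 + 1, 0);   // absent words are implicitly zero
            bv[rows_ / 64] |= uint64_t{1} << (rows_ % 64);
            ++rows_;
        }
        // Row ids matching value == v: a bitmap scan, not a table scan.
        std::vector<size_t> where_equals(int v) const {
            std::vector<size_t> hits;
            auto it = bitmaps_.find(v);
            if (it == bitmaps_.end()) return hits;
            for (size_t w = 0; w < it->second.size(); ++w)
                for (uint64_t bits = it->second[w]; bits; bits &= bits - 1)
                    hits.push_back(w * 64 + __builtin_ctzll(bits));  // GCC/Clang builtin
            return hits;
        }
    };

    int main() {
        BitmapIndex idx;
        for (int v : {3, 7, 7, 1, 7, 3}) idx.append(v);
        for (size_t r : idx.where_equals(7)) std::printf("row %zu\n", r);  // rows 1, 2, 4
    }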

Berkeley Lab has an enormous opportunity to address these research and facility issues with NERSC, ESnet, CRD, and its own experimental facilities, as well as strong collaborations with facilities at other labs. And we’re looking at data issues that go beyond the big experiments, from embedded sensors in the environment to supporting the entire lifecycle of scientific data.

So it sounds like there is a lot going on to prepare for exascale and the experimental data challenges. Are there other big changes you see in the future of HPC? 

Well, I think we’ve only started to scratch the surface of machine learning techniques in science. It’s a huge area of interest across the Lab: over 100 projects are using or developing machine learning techniques for everything from understanding the universe to improving the energy efficiency of buildings. Deb Agarwal has been spearheading cross-lab coordination of machine learning, with highlights on our ml4sci.lbl.gov website. There are interesting research issues in bias, robustness, and interpretability of these methods, with particular considerations when they are applied in science. After all, as scientists our job is to ask why something is true, not just observe that things are correlated, and the models need to be consistent with known physical laws and be simple enough to be believable. And then there are issues of data size, model size, and how the various algorithms map onto HPC systems at scale. In addition to science applications, we’re looking at machine learning to improve facility operations, manage experiments, design hardware, write software, and generally help automate certain aspects of what we do.
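As a toy illustration of what consistency with physical laws can mean in practice (a made-up example, not any specific Lab project), a training loss can add a penalty term whenever predictions violate a known conservation constraint, here that predicted mass fractions must sum to one.

    #include <cstdio>
    #include <vector>

    // Illustrative physics-constrained loss: mean squared error on the data
    // plus a penalty when predictions break a conservation law (here, that
    // predicted mass fractions must sum to 1). The weight lambda trades off
    // data fit against physical consistency.
    double loss(const std::vector<double>& pred,
                const std::vector<double>& truth, double lambda) {
        double mse = 0.0, total = 0.0;
        for (size_t i = 0; i < pred.size(); ++i) {
            mse += (pred[i] - truth[i]) * (pred[i] - truth[i]);
            total += pred[i];
        }
        double physics = (total - 1.0) * (total - 1.0);  // conservation residual
        return mse / pred.size() + lambda * physics;
    }

    int main() {
        std::vector<double> pred{0.5, 0.3, 0.3}, truth{0.5, 0.3, 0.2};
        std::printf("loss = %.4f\n", loss(pred, truth, 10.0));  // data fit + penalty
    }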

But, of course, the big challenge in HPC, and computing more broadly, is the looming end of transistor density scaling and the related benefits in size, power, cost, and performance of computing systems. I think we’ve done a great job in the HPC community of getting the most out of the systems we have today, using tools like Sam Williams’ Roofline model to assess the performance of various applications relative to the peak possible when running on multicore, manycore, or accelerator processor architectures. But things are going to get a lot harder. It’s interesting to me that even though exponential growth is hard for people to reason about in general, the exponential improvements in computing are so ingrained in our field, and in everyone who uses a computing device, that I think it’s hard for people to comprehend the impact this change will have.
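For reference, the Roofline bound itself is one line of arithmetic: attainable performance is the lesser of the machine’s peak compute rate and the product of its memory bandwidth with the code’s arithmetic intensity. Here is a minimal sketch, with hypothetical machine numbers rather than any system mentioned in this article.

    #include <algorithm>
    #include <cstdio>

    // Roofline bound: attainable GFLOP/s = min(peak compute,
    // arithmetic intensity * peak memory bandwidth).
    double roofline_gflops(double ai_flops_per_byte,
                           double peak_gflops, double peak_gbs) {
        return std::min(peak_gflops, ai_flops_per_byte * peak_gbs);
    }

    int main() {
        const double peak_gflops = 3000.0;  // hypothetical node peak, GFLOP/s
        const double peak_gbs    = 200.0;   // hypothetical DRAM bandwidth, GB/s
        for (double ai : {0.25, 1.0, 15.0, 60.0})
            std::printf("AI = %5.2f flop/byte -> %6.0f GFLOP/s (%s-bound)\n",
                        ai, roofline_gflops(ai, peak_gflops, peak_gbs),
                        ai * peak_gbs < peak_gflops ? "memory" : "compute");
    }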

We are taking two approaches to this “beyond Moore” computing problem. The first, and more immediate, is based on the traditional digital model of computing.

We’re looking at purpose-built architectures, already being used for machine learning, as a potential future for other scientific applications in the absence of Moore’s Law. In one project the team is reformulating the LS3DF algorithm to make it amenable to specialized hardware and to develop a custom accelerator for Density Functional Theory, a very popular method used at NERSC for materials and chemistry applications. The initial design/prototype will target an FPGA, and results will also be projected to an ASIC. We’re also looking at specialized edge devices for high-speed data rates coming from microscopes and other scientific instruments. Later, we intend to generalize our results to broader implications for the DOE HPC workload. The goal of this project is to determine the feasibility and benefit of specialized architectures for future science problems and explore various technology and business models for the future of HPC.

We also have an ambitious cross-laboratory effort in quantum information science, looking at technology, methods, software and systems for applying near-term computing devices to simulate DOE mission problems. Berkeley Lab is receiving $30 million over five years to build and operate an Advanced Quantum Testbed. Researchers will use this testbed to explore superconducting quantum processors and evaluate how these emerging quantum devices can be utilized to advance scientific research. As part of this effort, Berkeley Lab will collaborate with MIT Lincoln Laboratory to deploy different quantum processor architectures.

It sounds like both an interesting and, well, challenging organization to lead, and Berkeley Lab is mounting an extensive recruiting campaign to fill your job. What skills and experience do you think would best equip someone to succeed in the position?

Of course this requires strong leadership abilities, to rally people around common problems, both scientific and operational, and to manage a diverse set of individuals and activities. It’s almost impossible to understand all of the science being carried out with support from Computing Sciences, but you need the curiosity to learn about that science, which ranges from cosmology to biology, so that you are comfortable talking about it. And it’s not just about representing projects; it’s about being able to figure out how the pieces fit together.

One last question: How did you decide that now was the right time to step down?

For the past 10 years, my research has taken a back seat to my lab management responsibilities. And now there is a great need for high-end computing in new applications in genomics, data analysis, machine learning and other areas. I’m excited about pursuing those opportunities directly. There are a lot of really great problems to work on, but I haven’t been able to get to them while problem-solving at a different level.

And this brings me back to why I first got interested in computer science, because I loved solving problems. I developed software and algorithms and really liked the challenge of getting the software to work as intended. This seems like a good point in my career to rediscover that feeling.
