Q&A with ORNL’s Bronson Messer, an HPCwire Person to Watch in 2022

By HPCwire Editorial Team

August 12, 2022

HPCwire presents our interview with Bronson Messer, distinguished scientist and director of science at the Oak Ridge Leadership Computing Facility (OLCF), ORNL, and an HPCwire 2022 Person to Watch. Messer recaps ORNL’s journey to exascale and sheds light on how all the pieces line up to support the all-important science. Also covered are the role of the Exascale Computing Project, insights into architectural directions, and evolving HPC-AI synergies. This interview was conducted by email earlier this year.

Bronson, congratulations on being named a 2022 HPCwire Person to Watch! Can you give us a summary overview of your responsibilities at Oak Ridge Leadership Computing Facility and what your position entails?

Bronson Messer

As the director of science for the OLCF, I’m responsible for marshalling all our resources toward making sure the science that only leadership computing can enable gets done. That job starts before an allocation is made on the machines, continues through the computational campaigns, and really doesn’t have a formal end, as I continue to communicate the impact made by those projects to a wide variety of audiences even years after they are over. It’s a great job for a science junkie like me: I get to develop a more-than-pedestrian understanding of the full gamut of science we support at OLCF (i.e., almost all scientific disciplines) while “living close” to some of the world’s most powerful computers. The little Appalachian boy programming a TRS-80 Model I that was me in the early ’80s would be very jealous.

Please highlight some of the successes that Oak Ridge has had on the path to exascale. (HW, SW, applications, people – anything!)

I think our biggest successes on the road to exascale are wrapped up in the chance we took with Titan back at the beginning of the last decade. There was considerable skepticism when we first adopted hybrid CPU-GPU computing, going all-in with Titan. We have continued along that path with Summit, a path that has proven fruitful as we now stand on the precipice of exascale.

That journey is as much about the people we have deployed around the machines and their expertise as it is about the hardware. I have been especially fortunate to work alongside some of the most skilled and experienced folks in HPC over the past decade and a half, across all the various aspects of endeavor that are necessary to deploy resources at the scale we have. In particular, our liaison model – pairing domain scientists who have top-notch HPC skills with individual projects – is a methodology that has enabled us to arrive at exascale along the road of hybrid-node computing in a real way.

How has your team interfaced with the Exascale Computing Project (ECP)? What can you share about the ECP’s role in supporting exascale-readiness from your perspective?

We are close partners with ECP. There is hardly a facet of the project that OLCF is not deeply involved in, from application development to hardware and integration. We have provided the primary development and testing platform for all the ECP application development and software technology teams in the form of Summit, to the tune of a few million node-hours per year over the past few years. We also understand the ECP teams to be part of our traditional early-science teams. We have instantiated the third version of our Center for Accelerated Application Readiness (CAAR) to prepare a group of applications for Frontier, and we consider the ECP development teams to be a part of that. Indeed, many of the same OLCF folks working with our CAAR teams are also working on ECP apps and other software. The ECP teams are also part of the first group of users on our test and development system for Frontier. I anticipate the ECP apps will deliver some of our earliest scientific results on Frontier.

Milestones are inspiring and exciting. What excites you most about entering the exascale era? What are some examples of the science and hopefully breakthroughs that will be unlocked? In what ways will having exascale systems – and I mean the entire ecosystem not just the hardware – be game-changing?

The great thing (to me, anyway) about supercomputing is that there is no one “killer app.” Supercomputing is useful across the entire scientific enterprise, so the list of new insights and questions that will be gleaned from exascale computing is … countably infinite. But I do have a couple of places where I think the effect will be especially sharp and profound. The first is in the design cycle for engineering in aerospace, CFD, and related fields. The ability to do design simulations with the requisite physical fidelity to deploy real machines, and to do that on human time scales (i.e., about a day or overnight), is a real game-changer for a lot of researchers, in academia and in industry. Related to that is the continuing quest to understand turbulence, the last great classical physics problem. Resolution – meaning memory – is required to make progress on this front, and Frontier will provide a significant jump. The ability to resolve convection in the atmosphere on roughly kilometer scales is a place where this additional resolution isn’t just gratuitous. Rather, it leads to new physics and new understanding.

In addition, we are fielding huge storage systems as part of Frontier. The ability to quickly query very large collections of data and do non-trivial amounts of compute on those data will lead to insights in a number of fields, with drug discovery being a very important example.

Heterogeneous computing architecture, largely dependent upon accelerators (GPUs mostly), has become the dominant approach to supercomputing (with the notable exception of Top500 leader Fugaku) and is the backbone of the U.S. exascale program. Where do you see computer architecture headed? What will be the follow-on to today’s dominant heterogeneous (CPU plus accelerator) landscape?

I think the general outline of CPU+accelerator computing probably has quite a bit of gas left in it. More important to developers is the abstraction of the memory hierarchy into “close and fast” and “far and slow” memory spaces. That model has been with us for a while; it’s just made more obvious and, maybe, more important, with hybrid-node computing. The compute engines might change a bit, but having that kind of structure and having heterogeneity on the node are likely going to persist for a while. That doesn’t mean we might not field multiple partitions of differing hardware in the future (i.e., push some heterogeneity up from the node level), but I think that might be more a matter of expedience for getting science done: making sure all the steps in the process of actually getting insight out of a computational experiment, data analysis, or inference are done as efficiently as possible.

What is the opportunity for bringing HPC and AI capabilities together in one architecture? I have heard it said (I forget by whom!) that Summit is (already) the world’s first big HPC-AI supercomputer. What is the state of adoption/implementation for converged AI-HPC workflows? Do you also see a need for purpose-built AI architectures (like Cerebras, SambaNova, Groq, etc.)?

We have recently looked at this idea that HPC and AI are coming together, based on what we see in our user programs. That confluence is already here. A large fraction of the projects we support on Summit make use of both “traditional” (I really hate that moniker for this) simulation and AI and ML techniques. These projects use AI/ML in a number of steps in their computational campaigns as well, from training surrogate models before the first simulation run, to the design of experiments, to the analysis after the data are generated.

If the purpose-built architectures can be made amenable to joining in on all these steps – through policy or software or both – then I think the acceleration they hope to achieve can be as impactful as, for example, the Tensor Cores on Summit proved to be.

It has been proposed that in the not-so-distant future, quantum accelerators will be integrated into either an HPC architecture or workflow. How do you see these technologies coming together? Is this something OLCF is preparing for?

OLCF has an active Quantum Computing User Program where we manage access to a number of commercial quantum computing providers. We are also actively soliciting proposals to our Director’s Discretionary allocation program for “hybrid” proposals that want to take advantage of these resources coupled with an allocation on Summit.

I’m most excited for the promise of quantum computing to help solve problems that are already “quantum.” Some of these problems are treated classically now because we can’t figure out how to write software to solve the “real” quantum equations fast enough. One that is particularly interesting to me is the idea of quantum kinetics for neutrinos in dense astrophysical environments like neutron stars and core-collapse supernovae. I think we are years away from having “quantum accelerators” hanging off HPC nodes, solving the quantum kinetic equations that will tell us how neutrinos change flavor in these explosive environments, but maybe a student I help to train will see that happen.

Are there any other computing trends you would like to comment on? Any areas you are concerned about, or identify as in need of more attention/investment?

Moving numbers to and from memory is the single most important bottleneck for scientific computing. This has been known by practitioners in HPC for a long time, and our partners in AI and ML are now quickly pushing right up against this reality as well. There are no easy technical answers for increasing memory bandwidth and limiting the amount of energy it takes to move those bits, but it should perhaps be the single most motivating notion as we go forward.

Outside of the professional sphere, what can you tell us about yourself – unique hobbies, favorite places, etc.? Is there anything about you your colleagues might be surprised to learn?

I wear my Appalachian origins on my sleeve, so most people who know me know I grew up in the Great Smoky Mountains. A bit of an obsession with fly fishing goes along with that origin story. But not everyone knows that I am an avid lacrosse player and coach, that I finally got my (honorary) high school diploma this past year, or that I’m a multi-day Jeopardy! champion.

Messer is one of 12 HPCwire People to Watch for 2022. You can read the interviews with the other honorees at this link.
