Pfizer Discusses Use of Supercomputing and AI for Covid Drug Development

By Oliver Peckham

March 24, 2022

Over 16 months ago, Pfizer achieved a historic scientific moonshot: the unprecedentedly swift development and authorization of a novel vaccine for a novel virus, using methods that had never before been deployed at scale in approved drugs. Throughout the pandemic, nearly every public research supercomputer pivoted to some form of Covid research, but the pharmaceutical giants were characteristically cagey about their use of advanced technologies for vaccine and therapeutic development. At a session held during Nvidia’s GTC22 this week, Joe Ucuzoglu, CEO of Deloitte, spoke with Lidia Fonseca, executive vice president and chief digital and technology officer for Pfizer, about the company’s use of HPC and AI in the development of its groundbreaking vaccine and therapeutics for Covid-19.

Ucuzoglu opened the session — a fireside chat titled “Pfizer’s AI-enabled transformation” — by lauding the “fastest development of a novel vaccine in history” and calling Pfizer a “poster child for the full promise of AI to society.” He then continued by asking Fonseca how Pfizer is driving technology innovation in its value chain.

“Pfizer is applying digital data and AI across the entire value chain,” Fonseca said, “making our work faster and easier and enhancing every aspect of our business. We’re driving this end-to-end innovation with three strategic priorities in mind: first, to improve patient health outcomes; second, to bring medicines to patients faster; and third, to fuel tomorrow’s breakthrough therapies.”

Joe Ucuzoglu (left) and Lidia Fonseca (right). Image courtesy of Nvidia.

On supercomputing

“In research and discovery, we leveraged supercomputing, AI and machine learning to accelerate the identification of the most promising target compounds,” Fonseca said, though she did not, of course, disclose any details of the hardware Pfizer operates beyond a cursory reference to help from the event’s host, Nvidia.

“Nvidia has been a key partner in helping advance Pfizer’s supercomputing and AI capabilities,” she said. “Supercomputing helped us to fast-track the progression from discovery to development for Paxlovid, our oral treatment. Using sophisticated computational modeling and simulation techniques, we can now test molecular compounds in a virtual rather than physical lab environment. In the case of Paxlovid, this enabled us to test a fraction of the millions of known compounds that might have worked to treat Covid-19 so that we could quickly narrow down to just those compounds that had the highest chance of becoming medicines.”
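
As a rough illustration of the in-silico screening approach Fonseca described (not Pfizer’s actual pipeline, whose tools and scoring criteria were not disclosed), the sketch below ranks a large virtual compound library by a computed score and keeps only a small top fraction for closer study. The scoring here is a random stand-in for the docking and affinity calculations that would actually run on a supercomputer.

```python
# Minimal, illustrative sketch of virtual screening: score a large library of
# candidate compounds in silico and keep only the most promising. The random
# score is a placeholder for real docking/affinity calculations.
import random
from dataclasses import dataclass

@dataclass
class Compound:
    name: str
    score: float  # e.g., a predicted binding affinity for the viral protease

def shortlist(compounds: list[Compound], keep_fraction: float = 0.001) -> list[Compound]:
    """Rank compounds by predicted score and return the top fraction."""
    ranked = sorted(compounds, key=lambda c: c.score, reverse=True)
    n_keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:n_keep]

# From a million virtual candidates, keep the top 0.1 percent for closer study.
library = [Compound(f"cmpd-{i}", random.random()) for i in range(1_000_000)]
hits = shortlist(library)
print(f"{len(hits)} compounds advance to closer study")
```

The point of the exercise is the funnel: the expensive physical experiments are reserved for the small set of compounds that survive the computational cut.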

(Paxlovid is one of the few therapeutics that has demonstrated consistent effectiveness in reducing the risk of death from a Covid infection. To learn more about how Paxlovid works to disable SARS-CoV-2, read further reporting from HPCwire on supercomputer simulations of the antiviral here.)

“Supercomputing and advanced analytics also helped us hone Comirnaty, our Covid vaccine,” Fonseca continued. “Many of the allergic reactions that clinical trial participants reported while testing our vaccine resulted from certain lipid nanoparticles in the vaccine itself. Using supercomputing, we ran molecular dynamics simulations to find the right combination of lipid nanoparticle properties that reduce allergic reactions, thereby creating as safe and effective a vaccine as possible.”
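
Conceptually, the lipid nanoparticle work Fonseca mentioned amounts to a search over formulation parameters, with molecular dynamics supplying a score for each candidate. The toy sketch below conveys the shape of that search; the parameters, ranges and scoring function are invented for illustration and are not Pfizer’s actual criteria.

```python
# Hedged sketch of a formulation parameter sweep. The scoring function is a
# placeholder for an MD-derived estimate; Pfizer's real simulations and
# criteria were not disclosed.
import itertools

def simulated_reaction_score(pka: float, tail_length: int, pct_peg: float) -> float:
    """Placeholder score; lower means fewer predicted allergic reactions."""
    return abs(pka - 6.5) + 0.1 * abs(tail_length - 14) + 0.2 * abs(pct_peg - 1.5)

candidates = itertools.product(
    [6.0, 6.2, 6.5, 6.8],  # ionizable-lipid pKa values (illustrative)
    [12, 14, 16],          # hydrocarbon tail lengths (illustrative)
    [1.0, 1.5, 2.0],       # PEG-lipid percentages (illustrative)
)

best = min(candidates, key=lambda c: simulated_reaction_score(*c))
print("Lowest-scoring formulation (pKa, tail length, %PEG):", best)
```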


A visualization of Paxlovid’s inhibition mechanism, produced on the MareNostrum 4 supercomputer.

On AI and machine learning

Fonseca repeatedly touched on how advanced technologies had transformed Pfizer’s clinical trial processes, which can often take years under normal circumstances.

“To set up our clinical trial for Covid, we used real-time predictive models to forecast the virus’ prevalence at a county level, identifying where the next big wave of infection would hit,” Fonseca explained. “This helped our development team optimize their selection of clinical trial sites based on where we anticipated recruitment being strongest. That’s how, in just four months, we were able to launch our clinical trial with 46,000 participants at 150 sites in six countries.”
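
Stripped to its essentials, that site-selection step reduces to ranking candidate locations by forecast case prevalence. The snippet below is a deliberately simplified illustration; the counties and numbers are invented, and Pfizer’s real models operated on far richer epidemiological data.

```python
# Toy illustration of ranking trial sites by forecast prevalence.
forecast_per_100k = {   # predicted weekly cases per 100,000 residents (invented)
    "Maricopa, AZ": 410,
    "Harris, TX": 530,
    "Miami-Dade, FL": 620,
    "Cook, IL": 280,
}

def rank_sites(forecast: dict[str, float], top_n: int = 2) -> list[str]:
    """Prioritize counties where the next wave is expected to hit hardest."""
    return sorted(forecast, key=forecast.get, reverse=True)[:top_n]

print(rank_sites(forecast_per_100k))  # ['Miami-Dade, FL', 'Harris, TX']
```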

“For patients, we launched an enhanced adverse event portal with AI capabilities to manage patient reporting more efficiently during the clinical trials,” she continued. “We also leveraged AI and machine learning to identify discrepancies in how clinical trial participants reported their symptoms in response to the vaccine, which was critical to our study timelines and to maintaining data quality and integrity.”

“During the vaccine clinical trials, we aggregated and refreshed the trial data every four hours. This meant that we could get the latest data to our clinicians and scientists with greater speed and frequency than before Covid, when it could take a few weeks after each participant visit to aggregate the data.”

Once the trials were done and Comirnaty was approved, Pfizer’s eye turned to using ML and AI to optimize shipping and distribution. “To support the manufacturing and distribution of more than three billion doses of our vaccine in 2021, we deployed several important data and AI capabilities,” Fonseca said. “We implemented AI and machine learning to predict product throughput and yield; this supports more consistent production and allows our manufacturing to be more predictable — an important consideration, given the urgency of scaling up our vaccine production.”

“Additionally, we used both AI and machine learning to predict product temperatures and enable preventative maintenance for the more than 3,000 freezers that house our vaccine doses, and we also leverage IoT and sensors to monitor and track vaccine shipments and temperatures at close to 100 percent accuracy — pretty important, as you can imagine.”
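
At its core, the cold-chain monitoring Fonseca described boils down to comparing streamed sensor readings against an allowed temperature window and flagging excursions for follow-up. The following sketch shows that basic check; the threshold and readings are illustrative rather than Pfizer’s actual specifications.

```python
# Illustrative cold-chain check: flag sensor readings outside an allowed window.
from typing import Iterable, List, Tuple

ULTRACOLD_RANGE_C = (-90.0, -60.0)  # illustrative ultra-cold storage window

def find_excursions(readings: Iterable[Tuple[str, float]],
                    allowed: Tuple[float, float] = ULTRACOLD_RANGE_C) -> List[Tuple[str, float]]:
    """Return (timestamp, temperature) pairs that fall outside the allowed range."""
    low, high = allowed
    return [(ts, t) for ts, t in readings if not (low <= t <= high)]

shipment = [("08:00", -72.4), ("09:00", -71.9), ("10:00", -58.3), ("11:00", -70.1)]
for ts, temp in find_excursions(shipment):
    print(f"Temperature excursion at {ts}: {temp} °C")  # flags the 10:00 reading
```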

On the future

Ucuzoglu also asked Fonseca to speak to the future of advanced technologies in healthcare and at Pfizer, and she agreed to gaze into her “crystal ball” for the next five to ten years.

“The growing application of quantum computing will drive speed in discovery and development that we cannot imagine today,” she said, “[and] the landscape of AI companies will continue to proliferate, with new AI players that specialize in various areas, including data generation, data aggregation, advanced analytics and AI value generators that create algorithms.”

There would, she said, be more applied algorithms; more use of predictive technologies in drug discovery and clinical trials; and more decentralization of these trials. “There will be a substantial number of AI-discovered molecules,” she added.

“We’re seeing the healthcare industry being rewired across the entire patient journey,” Fonseca had said earlier in the session. “The pandemic actually served as a catalyst. … I believe Covid-19 accelerated these trends by as much as five years.”
