Protein Dynamics on the Supercomputer Big Screen

By Dan Krotz, LBNL Communications

June 3, 2005

Now playing at a supercomputer near you: proteins in action. Scientists from Berkeley Lab and UC Berkeley are using one of the world's most powerful computers to simulate how protein molecules move, rotate, and fold as they carry out life's most fundamental tasks.

This simulation of a tyrosine kinase reveals how the protein changes shape.

Although they only approximate real-life phenomena, the increasingly realistic movies are becoming useful complements to real-world experiments in helping scientists determine how proteins function. Using them, biologists can gain a better understanding of how incorrectly folded proteins lead to a range of diseases, or how other proteins synthesize adenosine triphosphate (ATP), the fuel that powers many biomolecular motors.

“Proteins are very complex molecules with thousands of atoms, but they don't come with a user's manual,” says John Kuriyan of Berkeley Lab's Physical Biosciences Division. “Fortunately, over the past few years, rapid increases in computing power and better simulation programs have made it possible to visualize protein dynamics like never before.”

The simulations are created at the National Energy Research Scientific Computing Center (NERSC), which is located at Berkeley Lab and is the flagship scientific computing facility for DOE's Office of Science. NERSC boasts the raw power needed to develop simulations that are detailed enough to capture a protein's fastest movements and long enough to portray its relatively infrequent but biologically important changes. In some cases, this means stringing together femtosecond-length, atom-scale snapshots of a 50,000-atom protein, frame by frame, into movies that span several nanoseconds. (A femtosecond is one-millionth of a nanosecond, and a nanosecond is one-billionth of a second.)
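
To get a feel for the numbers involved, here is a rough back-of-the-envelope sketch in Python. The atom count and femtosecond frame spacing come from the article; the 5-nanosecond span and 8-byte coordinate format are illustrative assumptions:

    # Back-of-the-envelope: how many femtosecond snapshots span a
    # nanosecond-scale movie of a 50,000-atom protein?
    # (The 5 ns span and 8-byte coordinate format are assumptions;
    # the atom count and frame spacing come from the article.)

    FS_PER_NS = 1_000_000   # 1 nanosecond = 10^6 femtoseconds

    n_atoms = 50_000        # protein size cited above
    span_ns = 5             # "several nanoseconds" (assumed value)

    n_frames = span_ns * FS_PER_NS          # one frame per femtosecond
    bytes_per_frame = n_atoms * 3 * 8       # x, y, z as 8-byte floats

    print(f"{n_frames:,} frames")                         # 5,000,000 frames
    print(f"{n_frames * bytes_per_frame / 1e12:.1f} TB")  # ~6.0 TB of coordinates

Even a "short" movie of a modest protein, in other words, means millions of frames and terabytes of raw coordinates.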

Although these high-resolution simulations take days to prepare even on a supercomputer, they enable Kuriyan and colleagues to test-drive proteins under a variety of conditions. They can see what happens when a protein is given just enough energy to teeter on the edge of a conformational change. Or they can prod a protein to change shape, and gauge how forcefully it resists or how readily it gives in.

“The simulations allow us to push here and there and determine how the protein responds,” says Kuriyan, who is also a Howard Hughes Medical Institute investigator and a Chancellor's Professor in UC Berkeley's Department of Molecular and Cell Biology and Department of Chemistry. “This is important because it isn't always obvious which experiments will address a protein's mechanistic properties.”

Berkeley Lab researchers aren't the only scientists spearheading the development of these virtual protein movies, but they're uniquely suited to lead the way. Along with NERSC, Berkeley Lab and UC Berkeley have joint appointees like theoretical chemists David Chandler and Phillip Geissler, who are constantly refining the fundamental molecular theories on which the simulations are based. Add the expertise of experimental biologists like Kuriyan, who put real proteins through the wringer to learn how they work, and the Lab has an ideal blend of theory, practical know-how, and computing power to create almost lifelike movies.

So far, Kuriyan and colleagues have used NERSC simulations to learn how certain proteins, called Src tyrosine kinases, transmit signals initiated by growth factor receptors in human cells. Mutant forms of these proteins can trigger cancer. They've also simulated the conformational and energy changes that proteins involved in DNA replication must undergo in order to rapidly copy DNA strands.

In each case, the simulations furthered their understanding of protein dynamics and helped guide real-world experiments. They also underscored the need for powerful computers. The quickest motion in a protein is the stretch of the carbon-hydrogen bond, which occurs in about one femtosecond. This means that each frame of a simulation must depict a protein's movement femtosecond by femtosecond. If it doesn't, the simulation will skip over these carbon-hydrogen stretches and be no more true to life than a jerky 1920s movie. But interesting changes in proteins, such as the rotation of a portion of the ATP-making enzyme, often occur on the microsecond-to-millisecond timescale, nine to twelve orders of magnitude slower than a femtosecond. In other words, a simulation must weave together billions of femtosecond-length snapshots in order to capture one or two rare but important changes.
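
The arithmetic behind that gap is easy to check. A minimal sketch, using the round-number timescales discussed in the text (the mapping of events to exact durations is approximate):

    # Count the 1-femtosecond frames needed to reach each timescale.
    # (Durations are the round numbers discussed in the text.)

    FRAME_S = 1e-15  # one frame per femtosecond

    timescales_s = {
        "C-H bond stretch":           1e-15,  # ~1 femtosecond
        "full simulated movie":       1e-9,   # several nanoseconds
        "enzyme domain rotation":     1e-6,   # microseconds
        "rare conformational change": 1e-3,   # milliseconds
    }

    for event, duration in timescales_s.items():
        print(f"{event:28s}: {duration / FRAME_S:.0e} frames")

    # Microsecond and millisecond events need 10^9 to 10^12 frames,
    # i.e., billions to trillions of femtosecond snapshots per event.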

The dilemma is like filming the muscular movements of a person. To capture the smallest muscle twitch, down to a single heartbeat, the film must run at several frames per second. Unfortunately, the person might not do anything significant, like go skydiving, for days. The filmmaker must churn through miles of film to record that infrequent but important leap from an airplane.

“This is why computer speed becomes very important. The faster the computer can simulate each frame of the movie, the more frames can be generated, and the sooner we will get to something interesting,” says Kuriyan.

He adds that the simulations aren't perfect. They're constructed frame by frame, so no matter how fast they become, they will always gloss over some nuance of a protein's motion. In addition, a ten-nanosecond simulation of a large protein molecule sometimes requires 20 to 40 days of a supercomputer's processor time to create.
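
Those figures imply a daunting throughput, as a quick extrapolation shows. (The one-microsecond target below is a hypothetical for illustration, not a simulation the article describes.)

    # Throughput implied by the figures above: 10 ns of simulated time
    # per 20-40 days of processor time. Extrapolating to one microsecond
    # of simulated time (a hypothetical target, not from the article):

    sim_ns = 10.0
    ns_per_day_best = sim_ns / 20    # 0.50 ns/day
    ns_per_day_worst = sim_ns / 40   # 0.25 ns/day

    target_ns = 1_000                # one microsecond = 1,000 ns
    years_best = target_ns / ns_per_day_best / 365
    years_worst = target_ns / ns_per_day_worst / 365

    print(f"{years_best:.1f} to {years_worst:.1f} years per microsecond")
    # Roughly 5.5 to 11 years of processor time, which is why raw
    # computer speed is the limiting factor.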

“We'd like to run a simulation and get the answer in a day, so we can change what we are doing in the lab,” says Kuriyan. “We're not close to being there yet. But increases in computer speed have enabled dramatic advances recently, and this trend will continue. It is very important to support national resources like NERSC that help maintain competitiveness and very fast computation.”

Kuriyan and Martin Karplus of Harvard University discuss the promise of protein molecule simulations in a paper entitled "Molecular dynamics and protein function," which was published online by the Proceedings of the National Academy of Sciences on May 3.
