NERSC Supercomputers Help Berkeley Lab Scientists Map Key DNA Protein Complex

September 14, 2017

Sept. 14, 2017 — Chalking up another success for a new imaging technology that has energized the field of structural biology, researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) obtained the highest resolution map yet of a large assembly of human proteins that is critical to DNA function.

The scientists reported their achievement today in an advance online publication of the journal Nature. They used cryo-electron microscopy (cryo-EM) to resolve the 3-D structure of a protein complex called transcription factor IIH (TFIIH) at 4.4 angstroms, or near-atomic resolution. This protein complex is used to unzip the DNA double helix so that genes can be accessed and read during transcription or repair.
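The article does not say how the 4.4-angstrom figure was determined; by convention in the cryo-EM field, resolution is estimated from the Fourier shell correlation (FSC) between two independently reconstructed half-maps, read off where the curve falls below a threshold (commonly 0.143). The following is a minimal numpy sketch of that standard calculation; the function name and shell count are illustrative only, not taken from the study.

    import numpy as np

    def fourier_shell_correlation(half_map1, half_map2, nshells=50):
        """Correlation between two half-maps, per spherical frequency shell."""
        f1 = np.fft.fftn(half_map1)
        f2 = np.fft.fftn(half_map2)
        # Radial spatial frequency of every voxel in the 3-D Fourier transform.
        axes = [np.fft.fftfreq(n) for n in half_map1.shape]
        kx, ky, kz = np.meshgrid(*axes, indexing="ij")
        radius = np.sqrt(kx**2 + ky**2 + kz**2)
        shell = np.minimum((radius / radius.max() * nshells).astype(int),
                           nshells - 1)
        fsc = np.zeros(nshells)
        for s in range(nshells):
            mask = shell == s
            numerator = np.sum(f1[mask] * np.conj(f2[mask]))
            denominator = np.sqrt(np.sum(np.abs(f1[mask]) ** 2) *
                                  np.sum(np.abs(f2[mask]) ** 2))
            fsc[s] = (numerator / denominator).real
        return fsc  # the reported resolution is where this drops below ~0.143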

The cryo-EM structure of Transcription Factor II Human (TFIIH). The atomic coordinate model, colored according to the different TFIIH subunits, is shown inside the semi-transparent cryo-EM map. Credit: Basil Greber/Berkeley Lab and UC Berkeley

“When TFIIH goes wrong, DNA repair can’t occur, and that malfunction is associated with severe cancer propensity, premature aging, and a variety of other defects,” said study principal investigator Eva Nogales, faculty scientist at Berkeley Lab’s Molecular Biophysics and Integrated Bioimaging Division. “Using this structure, we can now begin to place mutations in context to better understand why they give rise to misbehavior in cells.”

TFIIH’s critical role in DNA function has made it a prime target for research, but it is considered a difficult protein complex to study, especially in humans.

Mapping complex proteins

“As organisms get more complex, these proteins do, too, taking on extra bits and pieces needed for regulatory functions at many different levels,” said Nogales, who is also a UC Berkeley professor of molecular and cell biology and a Howard Hughes Medical Institute investigator. “The fact that we resolved this protein structure from human cells makes this even more relevant to disease research. There’s no need to extrapolate the protein’s function based upon how it works in other organisms.”

Biomolecules such as proteins are typically imaged using X-ray crystallography, but that method requires a large amount of stable sample for the crystallization process to work. The challenge with TFIIH is that it is hard to produce and purify in large quantities, and once obtained, it may not form crystals suitable for X-ray diffraction.

Enter cryo-EM, which can work even when sample amounts are very small. Electrons are sent through purified samples that have been flash-frozen at ultracold temperatures to prevent crystalline ice from forming.

Cryo-EM has been around for decades, but major advances over the past five years have led to a quantum leap in the quality of high-resolution images achievable with this technique.

“When your goal is to get resolutions down to a few angstroms, the problem is that any motion gets magnified,” said study lead author Basil Greber, a UC Berkeley postdoctoral fellow at the California Institute for Quantitative Biosciences (QB3). “At high magnifications, the slight movement of the specimen as electrons move through leads to a blurred image.”

Making movies

The researchers credit the explosive growth in cryo-EM to advanced detector technology that Berkeley Lab engineer Peter Denes helped develop. Instead of taking a single picture of each sample, the direct-detector camera shoots multiple frames in a process akin to recording a movie. The frames are then aligned and combined into a single high-resolution image, correcting the blur caused by sample movement. The improved images contain higher-quality data, and they allow researchers to study the sample in the multiple states it adopts in the cell.
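As a rough illustration of the movie-mode idea, the sketch below estimates each frame's drift by FFT cross-correlation against the first frame, undoes the shift, and sums the aligned frames. This is a toy model with an invented function name; production motion-correction software (such as MotionCor2) uses far more sophisticated, locally varying motion models.

    import numpy as np

    def align_and_sum(frames):
        """Align each movie frame to the first by a whole-frame shift, then sum."""
        reference = np.fft.fft2(frames[0])
        total = np.zeros(frames[0].shape, dtype=np.float64)
        for frame in frames:
            f = np.fft.fft2(frame)
            # The peak of the cross-correlation gives the frame's drift.
            crosscorr = np.fft.ifft2(reference * np.conj(f))
            dy, dx = np.unravel_index(np.argmax(np.abs(crosscorr)),
                                      crosscorr.shape)
            # Undo the measured shift (wrapping at the edges) and accumulate.
            total += np.roll(frame, (dy, dx), axis=(0, 1))
        return total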

Since shooting a movie generates far more data than taking a single frame, and thousands of movies are collected during a microscopy session, the researchers needed the processing punch of supercomputers at the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab. The output of these computations was a 3-D map that required further interpretation.

“When we began the data processing, we had 1.5 million images of individual molecules to sort through,” said Greber. “We needed to select particles that are representative of an intact complex. After 300,000 CPU hours at NERSC, we ended up with 120,000 images of individual particles that were used to compute the 3-D map of the protein.”
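To make the scale of that winnowing concrete (120,000 particles kept out of 1.5 million candidates is a keep rate of 8 percent), here is a toy sketch of one way such a selection step could be scored: correlate each candidate image against a set of reference averages and keep the top-scoring fraction. Every name here is hypothetical; the actual pipeline relies on iterative 2-D and 3-D classification run across NERSC's compute nodes, not a single correlation filter.

    import numpy as np

    def normalize(image):
        """Zero-mean, unit-variance normalization for fair comparison."""
        return (image - image.mean()) / (image.std() + 1e-12)

    def score_particle(particle, references):
        """Best normalized cross-correlation against any reference average."""
        p = normalize(particle)
        return max(float(np.mean(p * normalize(ref))) for ref in references)

    def select_particles(particles, references, keep_fraction=0.08):
        # 120,000 of 1,500,000 candidate images gives keep_fraction = 0.08.
        scores = [score_particle(p, references) for p in particles]
        cutoff = np.quantile(scores, 1.0 - keep_fraction)
        return [p for p, s in zip(particles, scores) if s >= cutoff]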

To obtain an atomic model of the protein complex based on this 3-D map, the researchers used PHENIX (Python-based Hierarchical ENvironment for Integrated Xtallography), a software program whose development is led by Paul Adams, director of Berkeley Lab’s Molecular Biophysics and Integrated Bioimaging Division and a co-author of this study.
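PHENIX is typically driven from the command line; below is a hedged sketch of what launching its real-space refinement tool from a Python script might look like. The phenix.real_space_refine command is a real part of the PHENIX suite for fitting atomic models into density maps, but the file names here are invented placeholders and the exact arguments vary between PHENIX versions.

    import subprocess

    # Fit and refine an atomic model against the cryo-EM density map.
    # File names are placeholders; the resolution matches the 4.4 angstroms
    # reported in the study.
    subprocess.run(
        [
            "phenix.real_space_refine",
            "tfiih_model.pdb",   # starting atomic coordinates (hypothetical file)
            "tfiih_map.mrc",     # 3-D cryo-EM density map (hypothetical file)
            "resolution=4.4",
        ],
        check=True,
    )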

Not only does this structure improve basic understanding of DNA repair, but the information could also be used to help visualize how specific molecules bind to target proteins in drug development.

“In studying the physics and chemistry of these biological molecules, we’re often able to determine what they do, but how they do it is unclear,” said Nogales. “This work is a prime example of what structural biologists do. We establish the framework for understanding how the molecules function. And with that information, researchers can develop finely targeted therapies with more predictive power.”

Other co-authors on this study are Pavel Afonine and Thi Hoang Duong Nguyen, both of whom have joint appointments at Berkeley Lab and UC Berkeley; and Jie Fang, a researcher at the Howard Hughes Medical Institute.

NERSC is a DOE Office of Science User Facility located at Berkeley Lab. In addition to NERSC, the researchers used the Lawrencium computing cluster at Berkeley Lab. This work was funded by the National Institute of General Medical Sciences and the Swiss National Science Foundation.

About NERSC and Berkeley Lab

The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high-performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 6,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. DOE Office of Science. Learn more about computing sciences at Berkeley Lab.


Source: NERSC
