HPC for Life: Genomics, Brain Research, and Beyond

By Warren Froelich

July 19, 2018

Editor’s note: In part I, “HPC Serves as ‘Rosetta Stone’ for the Information Age,” we explored how high-performance computing is transforming digital data into valuable insight and leading to amazing discoveries. Part II follows the path of HPC into new areas of brain research and astrophysics.

During the past few decades, the life sciences have witnessed one landmark discovery after another with the aid of HPC, paving the way toward a new era of personalized treatments based on an individual’s genetic makeup, and drugs capable of attacking previously intractable ailments with few side effects.

Genomics research is generating torrents of biological data to help “understand the rules of life” for the personalized treatments widely expected to be the focus of tomorrow’s medicine. DNA sequencing has rapidly moved from the analysis of data sets measured in megabytes to entire genomes measured in gigabytes. Meanwhile, the cost of sequencing has dropped from about $10,000 per genome in 2010 to $1,000 in 2017, requiring ever faster and more refined computational resources to process and analyze all this data.

In one recent genome analysis, an international team led by Jonathan Sebat, a professor of psychiatry, cellular and molecular medicine and pediatrics at UC San Diego School of Medicine, identified a risk factor that may explain some of the genetic causes for autism: rare inherited variants in regions of non-coding DNA. For about a decade, researchers have known that the genetic cause of autism partly consists of so-called de novo mutations, or gene mutations that appear for the first time. But those sequences represent only 2 percent of the genome. To investigate the remaining 98 percent of the genome in ASD (autism spectrum disorder), Sebat and colleagues analyzed the complete genomes of 9,274 subjects from 2,600 families, a combined data total in the range of terabytes.

As reported in the April 20, 2018, issue of Science, DNA sequences were analyzed with Comet, along with data from other large studies from the Simons Simplex Collection and the Autism Speaks MSSNG Whole Genome Sequencing Project.

“Whole genome sequencing data processing and analysis are both computationally and resource intensive,” said Madhusudan Gujral, an analyst with SDSC and co-author of the paper. “Using Comet, processing and identifying specific structural variants from a single genome took about 2 ½ days.”

SDSC Distinguished Scientist Wayne Pfeiffer added that with Comet’s nearly 2,000 nodes and several petabytes of scratch space, tens of genomes can be processed at the same time, taking the data processing requirement from months down to weeks.
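The arithmetic behind that speedup is simple batch parallelism: genomes are processed in waves, and wall time shrinks roughly in proportion to how many run at once. The sketch below is illustrative only; the 2.5-day per-genome figure comes from the quote above, while the concurrency number is a hypothetical, not SDSC's actual scheduling:

```python
import math

def batch_wall_time_days(n_genomes, days_per_genome=2.5, concurrent=1):
    """Back-of-the-envelope wall-clock time to process a cohort when
    `concurrent` genomes run at once (processed in successive waves)."""
    waves = math.ceil(n_genomes / concurrent)
    return waves * days_per_genome

# One at a time: 100 genomes at 2.5 days each is 250 days of wall time.
serial = batch_wall_time_days(100)
# With 50 genomes in flight at once, the same batch finishes in 5 days,
# which is how a months-long job becomes a matter of weeks.
parallel = batch_wall_time_days(100, concurrent=50)
```

In practice throughput is also limited by I/O and scratch space, which is why Comet's petabytes of scratch matter as much as its node count.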

In cryo-electron microscopy (cryo-EM), biological samples are flash-frozen so rapidly that damaging ice crystals are unable to form. As a result, researchers are able to view highly detailed reconstructed 3D models of intricate, microscopic biological structures in near-native states. Above is a look inside one of the cryo-electron microscopes available to researchers at the Timothy Baker Lab at UC San Diego. Image credit: Jon Chi Lou, SDSC

Not long ago, the following might have been considered an act of wizardry from a Harry Potter novel. First, take a speck of biomolecular matter, invisible to the naked eye, and deep-freeze it to near absolute zero. Then blast this material, now frozen in time, with an electron beam. Finally, add the power of a supercomputer aided by a set of problem-solving rules called algorithms. And, presto! A three-dimensional image of the original biological speck appears on a computer monitor at atomic resolution. Not really magic, or even sleight-of-hand, this innovation, named cryo-electron microscopy or simply cryo-EM, garnered the 2017 Nobel Prize in Chemistry for its inventors, who began developing the technology in the 1970s.

Today, researchers seeking to unravel the structure of proteins in atomic detail, in hopes of treating many intractable diseases, are increasingly turning to cryo-EM as an alternative to time-tested X-ray crystallography. A key advantage of cryo-EM is that no crystallization of the protein is required, a barrier for those proteins that defy being turned into a crystal. Even so, the technology didn’t take off until the development of more sensitive electron detectors and the advanced computational algorithms needed to turn reams of data into often aesthetically pleasing three-dimensional images.

“About 10 years ago, cryo-EM was known as blob-biology,” said Robert Sinkovits, director of scientific computing applications at SDSC. “You got an overall shape, but not at the resolution you would get with X-ray crystallography, which required working with a crystal. But it was kind of a black art to create these crystals and some things simply wouldn’t crystalize. You can use cryo-EM for just about anything.”

Several molecular biologists and chemists at UC San Diego are taking advantage of the university’s cryo-EM laboratory and SDSC’s computing resources, to reveal the inner workings and interactions of several targeted proteins critical to the understanding of diseases such as fragile X syndrome and childhood liver cancer.

“This will be a growing area for HPC, in part, as we continue to automate the process,” said Sinkovits.

Machine Learning and Brain Implants

It’s a concept that can boggle the brain, and ironically is now being used to imitate that very organ. Called “machine learning,” this innovation typically involves training a computer or robot on millions of examples so that, over time, it learns how to derive insight and meaning from the data.

Recently, a collaborative team led by researchers at SDSC and the Downstate Medical Center in Brooklyn, N.Y., applied a novel computer algorithm to mimic how the brain learns, with the aid of Comet and the Center’s Neuroscience Gateway. The goal: to identify and replicate neural circuitry that resembles the way an unimpaired brain controls limb movement.

The study, published in the March-May 2017 issue of the IBM Journal of Research and Development, laid the groundwork for realistic “biomimetic neuroprosthetics” – brain implants that replicate brain circuits and function – that one day could replace brain cells lost or damaged by tumors, stroke or other diseases.

The researchers trained their model using spike-timing dependent plasticity (STDP) and reinforcement learning, believed to be the basis for memory and learning in mammalian brains. Briefly, STDP refers to the ability of synaptic connections to become stronger or weaker depending on when they are activated relative to one another, meshed here with a system of biochemical rewards or punishments tied to correct or incorrect decisions.
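The pairing rule can be sketched in a few lines. What follows is a generic, textbook-style reward-modulated STDP update, not the authors' actual model (their simulations carried far more biological detail); the time constants and learning rates here are illustrative placeholders:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Raw STDP weight change for a spike-time difference
    dt = t_post - t_pre (in milliseconds). Pre-before-post (dt > 0)
    strengthens the synapse; post-before-pre (dt < 0) weakens it,
    with the effect decaying exponentially as |dt| grows."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def update_weight(w, dt, reward, w_min=0.0, w_max=1.0):
    """Reward-modulated update: the raw STDP change is gated by a
    reward signal (+1 for a correct movement, -1 for an incorrect
    one), then the weight is clipped to a plausible range."""
    w = w + reward * stdp_dw(dt)
    return max(w_min, min(w_max, w))
```

With reward = +1, a pre-then-post pairing is reinforced; with reward = -1, the same pairing is punished, which is the essence of coupling STDP to a reward system.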

“Only the fittest individuals (models) remain – those models that are better able to learn, survive and propagate their genes,” said Salvador Dura-Bernal, a research assistant professor in physiology and pharmacology with Downstate, and the paper’s first author.

As for the role of HPC in this study: “Since thousands of parameter combinations need to be evaluated, this is only possible by running the simulations using HPC resources such as those provided by SDSC,” said Dura-Bernal. “We estimated that using a single processor instead of the Comet system would have taken almost six years to obtain the same results.”

On the Horizon

Other impressive data producers are waiting in the wings, posing further challenges for tomorrow’s super facilities. For example, an ambitious upgrade to the Large Hadron Collider will result in a substantial increase in the intensity of proton beam collisions, far greater than anything built before. From the mid-2020s forward, the experiments at the LHC are expected to yield 10 times more data each year than the combined output generated during the three years leading up to the Higgs discovery. Beyond that, future accelerators are being discussed that would be housed in 100-km-long tunnels to reach collision energies many times that of the LHC, while still others suggest colliders based on different geometries, perhaps linear rather than ring-shaped. More powerful machines, by definition, will translate into torrents of more data to digest and analyze.

The future site of the Simons Observatory, located in the high Atacama Desert in Northern Chile inside the Chajnantor Science Preserve (photo licensed under CC BY-SA 4.0)

Under an agreement with the Simons Foundation Flatiron Institute, SDSC’s Gordon is being re-purposed to provide computational support for POLARBEAR and its successor project, the Simons Array. The projects — led by UC Berkeley and funded first by the Simons Foundation and then by the NSF under a five-year, $5 million grant — will deploy the most powerful cosmic microwave background (CMB) radiation telescope and detectors ever made to detect what is, in essence, the leftover ‘heat’ from the Big Bang in the form of microwave radiation.

“The POLARBEAR experiment alone collects nearly one gigabyte of data every day that must be analyzed in real time,” said Brian Keating, a professor of physics at UC San Diego’s Center for Astrophysics & Space Sciences and co-PI for the POLARBEAR/Simons Array project.

“This is an intensive process that requires dozens of sophisticated tests to assure the quality of the data. Only by leveraging resources such as Gordon are we able to continue our legacy of success.”

“As the scale of data and complexity of these experimental projects increase, it is more important than ever before that centers like SDSC respond by providing HPC systems and expertise that become part of the integrated ecosystem of research and discovery,” said SDSC Director Michael Norman.
