HPC for Life: Genomics, Brain Research, and Beyond

By Warren Froelich

July 19, 2018

Editor’s note: In part I, “HPC Serves as ‘Rosetta Stone’ for the Information Age,” we explored how high-performance computing is transforming digital data into valuable insight and leading to amazing discoveries. Part II follows the path of HPC into new areas of brain research and astrophysics.

During the past few decades, the life sciences have witnessed one landmark discovery after another with the aid of HPC, paving the way toward a new era of personalized treatments based on an individual’s genetic makeup, and drugs capable of attacking previously intractable ailments with few side effects.

Genomics research is generating torrents of biological data to help “understand the rules of life” for the personalized treatments widely expected to be the focus of tomorrow’s medicine. DNA sequencing has rapidly moved from the analysis of data sets that were megabytes in size to entire genomes that are gigabytes in size. Meanwhile, the cost of sequencing has dropped from about $10,000 per genome in 2010 to $1,000 in 2017, demanding ever faster and more refined computational resources to process and analyze all this data.

In one recent genome analysis, an international team led by Jonathan Sebat, a professor of psychiatry, cellular and molecular medicine and pediatrics at UC San Diego School of Medicine, identified a risk factor that may explain some of the genetic causes of autism: rare inherited variants in regions of non-coding DNA. For about a decade, researchers have known that the genetic causes of autism include so-called de novo mutations, or gene mutations that appear for the first time in an affected child. But those mutations fall within the protein-coding sequences that make up only about 2 percent of the genome. To investigate the remaining 98 percent of the genome in ASD (autism spectrum disorder), Sebat and colleagues analyzed the complete genomes of 9,274 subjects from 2,600 families, a combined data set in the terabyte range.

As reported in the April 20, 2018, issue of Science, the DNA sequences were analyzed on SDSC’s Comet supercomputer, along with data from other large studies, including the Simons Simplex Collection and the Autism Speaks MSSNG Whole Genome Sequencing Project.

“Whole genome sequencing data processing and analysis are both computationally and resource intensive,” said Madhusudan Gujral, an analyst with SDSC and co-author of the paper. “Using Comet, processing and identifying specific structural variants from a single genome took about 2 ½ days.”

SDSC Distinguished Scientist Wayne Pfeiffer added that with Comet’s nearly 2,000 nodes and several petabytes of scratch space, tens of genomes can be processed at the same time, taking the data processing requirement from months down to weeks.
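
The arithmetic behind that speed-up is simple batch parallelism. The short Python sketch below uses the roughly 2.5 days per genome quoted above together with hypothetical numbers (a batch of 100 genomes, 25 running concurrently) to show how wall-clock time collapses from months to weeks; the actual batch sizes and concurrency on Comet are not specified here.

```python
import math

# Illustrative numbers only: the article quotes about 2.5 days to process one
# whole genome on Comet and says "tens of genomes" can run at the same time.
# Batch size and concurrency below are hypothetical.
DAYS_PER_GENOME = 2.5
genomes_in_batch = 100       # hypothetical batch of genomes to process
concurrent_genomes = 25      # "tens of genomes ... at the same time"

serial_days = genomes_in_batch * DAYS_PER_GENOME
parallel_days = math.ceil(genomes_in_batch / concurrent_genomes) * DAYS_PER_GENOME

print(f"serial:   {serial_days:.0f} days (~{serial_days / 30:.1f} months)")
print(f"parallel: {parallel_days:.0f} days (~{parallel_days / 7:.1f} weeks)")
# serial:   250 days (~8.3 months)
# parallel: 10 days (~1.4 weeks)
```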

In cryo-electron microscopy (cryo-EM), biological samples are flash-frozen so rapidly that damaging ice crystals are unable to form. As a result, researchers can view highly detailed reconstructed 3D models of intricate, microscopic biological structures in near-native states. Above is a look inside one of the cryo-electron microscopes available to researchers at the Timothy Baker Lab at UC San Diego. Image credit: Jon Chi Lou, SDSC

Not long ago, the following might have been considered an act of wizardry from a Harry Potter novel. First, take a speck of biomolecular matter, invisible to the naked eye, and deep-freeze it to near absolute zero. Then blast this material, now frozen in time, with an electron beam. Finally, add the power of a supercomputer aided by a set of problem-solving rules called algorithms. And, presto! A three-dimensional image of the original biological speck appears on a computer monitor at atomic resolution. Not really magic or even sleight of hand, this innovation, known as cryo-electron microscopy or simply cryo-EM, garnered the 2017 Nobel Prize in Chemistry for its developers, whose work on the technique dates back to the 1970s.

Today, researchers seeking to unravel the structure of proteins in atomic detail, in hopes of treating many intractable diseases, are increasingly turning to cryo-EM as an alternative to time-tested X-ray crystallography. A key advantage of cryo-EM is that no crystallization of the protein is required, removing a barrier for those proteins that defy being coaxed into a crystal. Even so, the technology didn’t take off until the development of more sensitive electron detectors and the advanced computational algorithms needed to turn reams of data into often aesthetically pleasing three-dimensional images.
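
The computational heart of the technique is statistical: each raw particle image is buried in noise, and combining very large numbers of them recovers the underlying structure. The toy Python sketch below is only an illustration of that principle (a real cryo-EM pipeline also aligns, classifies and back-projects the images into a 3D map); it shows the signal-to-noise ratio of an averaged image rising roughly as the square root of the number of noisy views.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a particle image: a smooth 2D "blob" on a 32x32 grid.
# (Illustrative only; real single-particle cryo-EM also aligns and classifies
# hundreds of thousands of particle images before combining them.)
x = np.linspace(-1.0, 1.0, 32)
clean = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 0.1)

def snr(noisy):
    """Signal-to-noise ratio of an image relative to the clean reference."""
    return clean.std() / (noisy - clean).std()

sigma = 2.0                      # heavy noise, as in raw micrographs
for n_views in (1, 100, 2500):
    stack = clean + rng.normal(0.0, sigma, size=(n_views, 32, 32))
    average = stack.mean(axis=0)
    print(f"{n_views:>5} views averaged -> SNR ~ {snr(average):.2f}")
# The SNR grows roughly as the square root of the number of averaged views.
```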

“About 10 years ago, cryo-EM was known as blob-biology,” said Robert Sinkovits, director of scientific computing applications at SDSC. “You got an overall shape, but not at the resolution you would get with X-ray crystallography, which required working with a crystal. But it was kind of a black art to create these crystals, and some things simply wouldn’t crystallize. You can use cryo-EM for just about anything.”

Several molecular biologists and chemists at UC San Diego are taking advantage of the university’s cryo-EM laboratory and SDSC’s computing resources to reveal the inner workings and interactions of several targeted proteins critical to the understanding of diseases such as fragile X syndrome and childhood liver cancer.

“This will be a growing area for HPC, in part, as we continue to automate the process,” said Sinkovits.

Machine Learning and Brain Implants

It’s a concept that can boggle the brain, and ironically it is now being used to imitate that very organ. Called “machine learning,” this approach typically involves training a computer or robot on millions of examples or actions so that it learns how to derive insight and meaning from new data over time.

Recently, a collaborative team led by researchers at SDSC and the Downstate Medical Center in Brooklyn, N.Y., applied a novel computer algorithm to mimic how the brain learns, with the aid of Comet and the Center’s Neuroscience Gateway. The goal: to identify and replicate neural circuitry that resembles the way an unimpaired brain controls limb movement.

The study, published in the March-May 2017 issue of the IBM Journal of Research and Development, laid the groundwork for developing realistic “biomimetic neuroprosthetics,” brain implants that replicate brain circuits and function and that one day could replace brain cells lost or damaged by tumors, stroke or other diseases.

The researchers trained their model using spike-timing-dependent plasticity (STDP) and reinforcement learning, mechanisms believed to underlie memory and learning in mammalian brains. Briefly, STDP refers to the ability of synaptic connections to become stronger or weaker depending on when they are activated relative to each other; this is meshed with a system of biochemical rewards or punishments tied to correct or incorrect decisions.
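
A minimal sketch of how those two ingredients can be combined is shown below in Python. It illustrates the general idea of reward-modulated STDP with hypothetical amplitudes, time constants and spike times; it is not the published Downstate/SDSC model, which evolves full spiking-network simulations.

```python
import numpy as np

# Minimal sketch of reward-modulated STDP (not the authors' actual model).
# A synapse accumulates an "eligibility" value that grows when the presynaptic
# spike precedes the postsynaptic spike (potentiation) and shrinks when the
# order is reversed (depression); a global reward signal (+1 for a correct
# movement, -1 for an incorrect one) then converts eligibility into a weight change.

A_PLUS, A_MINUS = 0.01, 0.012   # hypothetical STDP amplitudes
TAU = 20.0                      # hypothetical STDP time constant (ms)
LEARNING_RATE = 0.5

def stdp_trace(t_pre, t_post):
    """Eligibility contribution for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:                          # pre before post -> potentiate
        return A_PLUS * np.exp(-dt / TAU)
    return -A_MINUS * np.exp(dt / TAU)   # post before pre -> depress

def update_weight(w, spike_pairs, reward):
    """Apply reward-modulated STDP to one synaptic weight, clipped to [0, 1]."""
    eligibility = sum(stdp_trace(t_pre, t_post) for t_pre, t_post in spike_pairs)
    return float(np.clip(w + LEARNING_RATE * reward * eligibility, 0.0, 1.0))

# Example: pre fires shortly before post, and the resulting movement was correct.
w = 0.5
w = update_weight(w, spike_pairs=[(10.0, 15.0), (40.0, 43.0)], reward=+1)
print(f"new weight: {w:.3f}")   # slightly potentiated
```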

“Only the fittest individuals (models) remain; those models that are better able to learn survive and propagate their genes,” said Salvador Dura-Bernal, a research assistant professor in physiology and pharmacology at Downstate and the paper’s first author.

As for the role of HPC in this study: “Since thousands of parameter combinations need to be evaluated, this is only possible by running the simulations using HPC resources such as those provided by SDSC,” said Dura-Bernal. “We estimated that using a single processor instead of the Comet system would have taken almost six years to obtain the same results.”
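
To give a flavor of what such a sweep looks like in code, here is a minimal Python sketch that evaluates many parameter combinations in parallel on a single machine. The parameter names and the toy fitness function are hypothetical; in the actual study each combination was a full neural-circuit simulation run on Comet.

```python
import itertools
from multiprocessing import Pool

# Illustration of an HPC-style parameter sweep (hypothetical parameter names
# and a toy "simulation"). On a cluster such as Comet each combination would
# typically run as its own job or MPI task rather than a local process.

def run_simulation(params):
    """Stand-in for one neural-circuit simulation with a given parameter set."""
    stdp_rate, reward_gain, noise = params
    fitness = -(stdp_rate - 0.02) ** 2 - (reward_gain - 1.0) ** 2 - noise
    return params, fitness

if __name__ == "__main__":
    # Cartesian product of candidate values: thousands of combinations in the
    # real study, a handful here.
    grid = itertools.product(
        [0.005, 0.01, 0.02, 0.04],   # candidate STDP learning rates
        [0.5, 1.0, 2.0],             # candidate reward gains
        [0.0, 0.1],                  # candidate background-noise levels
    )
    with Pool(processes=8) as pool:
        results = pool.map(run_simulation, grid)

    best_params, best_fitness = max(results, key=lambda r: r[1])
    print("best parameters:", best_params, "fitness:", round(best_fitness, 4))
```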

On the Horizon

Other impressive data producers are waiting in the wings, posing further challenges for tomorrow’s super facilities. For example, an ambitious upgrade to the Large Hadron Collider will result in a substantial increase in the intensity of proton beam collisions, far greater than anything achieved before. From the mid-2020s forward, the experiments at the LHC are expected to yield 10 times more data each year than the combined output generated during the three years leading up to the Higgs discovery. Beyond that, future accelerators are being discussed that would be housed in 100-km-long tunnels to reach collision energies many times that of the LHC, while still others suggest building colliders with different geometries, perhaps linear rather than circular. More powerful machines, by definition, will translate into torrents of additional data to digest and analyze.

The future site of the Simons Observatory, located in the high Atacama Desert of northern Chile inside the Chajnantor Science Preserve (photo licensed under CC BY-SA 4.0)

Under an agreement with the Simons Foundation Flatiron Institute, SDSC’s Gordon supercomputer is being repurposed to provide computational support for POLARBEAR and its successor project, the Simons Array. The projects, led by UC Berkeley and funded first by the Simons Foundation and then by the NSF under a five-year, $5 million grant, will deploy the most powerful cosmic microwave background (CMB) radiation telescope and detector ever made to measure what is, in essence, the leftover ‘heat’ from the Big Bang in the form of microwave radiation.

“The POLARBEAR experiment alone collects nearly one gigabyte of data every day that must be analyzed in real time,” said Brian Keating, a professor of physics at UC San Diego’s Center for Astrophysics & Space Sciences and co-PI for the POLARBEAR/Simons Array project.

“This is an intensive process that requires dozens of sophisticated tests to assure the quality of the data. Only by leveraging resources such as Gordon are we able to continue our legacy of success.”

“As the scale of data and complexity of these experimental projects increase, it is more important than ever before that centers like SDSC respond by providing HPC systems and expertise that become part of the integrated ecosystem of research and discovery,” said SDSC Director Michael Norman.
