Accelerating High Performance Computing (HPC) for Population-level Genomics

By Mileidy Giraldo, Ph.D.

September 30, 2019

The development of Next-Generation Sequencing (NGS) technologies in the late 2000s led to a dramatic decrease in the cost of DNA sequencing. The advent of NGS, coupled with concurrent advances in HPC storage and computing technologies, created the perfect storm for a deluge of genomics data. This confluence of events led to a pressing question: how best to put all this data to use?

A genome is an organism’s complete set of DNA and, as such, it ultimately determines all biological functions and the myriad variations that make some of us susceptible, and others immune, to different diseases. It is therefore of great interest to the biomedical community to determine an individual’s genome, a process much like deciphering scrambled letters (genome sequencing) so one can assemble them into words (genome assembly) and write a book (variant analysis).

Given the new affordability of NGS methods and the increased computing and storage capacities of the last decade, genomics can now be performed at the population level. Large national genomics initiatives such as the UK Biobank, the “All of Us” program in the US, Singapore’s “GenomeAsia,” and “Genomics Thailand” are emerging all around the world. With goals of sequencing 500K to more than 1M participants within a few years, these country-wide efforts aim to capture the genetic variation of their populations to make Precision Medicine a reality. The hope of Precision Medicine is to deliver individualized prevention, diagnosis, and treatment by leveraging knowledge of a person’s genetic background.

The greatest challenge such population-level genomics efforts face is scale: scaling up the input data from exomes (the portions of a genome that code for protein synthesis) to whole genomes, scaling up production from a handful to tens of thousands of samples, and the corresponding strain both place on the HPC infrastructure. An exome corresponds to only about 1% of a whole genome, covering the small regions in genes that dictate important biological functions; it cannot provide the comprehensive picture found in the remaining 99%. Exomes were traditionally sequenced because of their smaller size, lower cost, and faster processing. Today, many genomics centers around the world are transitioning from exome to whole-genome sequencing while also tackling unprecedented volumes of data from hundreds of thousands of patients.

Three of the four analysis stages in genomics take place in the HPC environment of a cluster or supercomputer: genome assembly (assembling the DNA letters into words), variant analysis (comparing how a word/gene is spelled in different people), and downstream bioinformatics (measuring the effect of variations on function and disease). Scaling out genomics production therefore depends largely on the HPC technologies made available to the genomics applications.

With these dependencies in mind, Lenovo set out to identify which technologies bring the most acceleration to genomics workflows. To that end, we conducted a systematic performance study of hundreds of parameters across the 30+ tools in the Broad Institute’s Genome Analysis Toolkit (GATK) Germline Variant Calling workflow, testing hundreds of permutations of hardware building blocks, system tunings, data types (exomes, whole genomes), execution modes (latency vs. throughput), and software implementations (e.g., standard vs. Spark).
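
To give a sense of how such a permutation sweep can be organized, here is a minimal sketch. The factor names and values below are purely illustrative assumptions, not Lenovo’s actual test matrix:

```python
# Hypothetical sketch of a systematic permutation sweep over benchmark
# factors. Every factor/value below is illustrative only -- the real study
# covered hundreds of parameters across 30+ GATK tools.
from itertools import product

factors = {
    "data_type": ["exome", "whole_genome"],
    "execution_mode": ["latency", "throughput"],
    "implementation": ["standard", "spark"],
    "cpu_sku": ["sku_a", "sku_b", "sku_c"],
    "storage_tier": ["nvme", "sas_ssd"],
}

def enumerate_runs(factors):
    """Yield one benchmark configuration per permutation of factor values."""
    names = list(factors)
    for values in product(*factors.values()):
        yield dict(zip(names, values))

runs = list(enumerate_runs(factors))
print(len(runs))  # 2 * 2 * 2 * 3 * 2 = 48 configurations
```

Even this toy matrix yields 48 configurations; with hundreds of parameters the combinatorics explain why a systematic, automated sweep is needed rather than ad hoc testing.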

Today, it still takes a typical datacenter 150-160 hours to process a single whole genome and 4-6 hours for an exome. In 2017, Intel’s work on BIGstack (Intel’s reference architecture for GATK workflows) reduced those processing times to 10.8 hours and 25 minutes, respectively. Through our permutation tests of the hardware, software, and system factors affecting genomics workflow performance, Lenovo identified an optimized architecture that processes a whole genome in 5.5 hours and an exome in 4 minutes with no specialty hardware. With Lenovo’s genomics-optimized hardware, a data center can expect to process 4.5 genomes or 343 exomes per node per day. Some genomics solutions promise processing times of 3-4 hours per whole genome, but they require expensive, specialized hardware that scales poorly at large volumes, plus proprietary software licenses. Lenovo’s optimized genomics architecture, on the other hand, provides a 27X to 40X performance improvement on non-specialty hardware, and does so in a manner that is more affordable, more scalable, and less costly by leveraging open-source software that is validated and widely accepted by the scientific community.
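
The per-node daily throughput figures above follow roughly from the per-sample runtimes. A naive model is sketched below; note it ignores scheduling and I/O overhead, which is presumably why the measured exome figure (343/day) falls below the naive 360/day:

```python
# Back-of-envelope throughput: samples per node per day from per-sample
# runtime. A simplification -- real pipelines add batching, scheduling, and
# I/O overhead on top of the raw processing time.
def samples_per_node_per_day(runtime_hours: float) -> float:
    return 24.0 / runtime_hours

print(round(samples_per_node_per_day(5.5), 2))     # whole genomes: ~4.4/day
print(round(samples_per_node_per_day(4 / 60)))     # exomes: naive ceiling of 360/day
```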

Another byproduct of Lenovo’s systematic genomics performance testing was the ability to generate a fluid, rather than static, reference architecture for genomics, the latter being the norm in the HPC industry. Every genomics data center adopts a different mix of workloads and analysis workflows, has different active and archival storage needs and a different mix of research types to support, and therefore needs an architecture tailored to its specific requirements. Thus, we converted the lessons learned from our genomics benchmarking and systematic testing into formulas captured in an industry-first Genomics Sizing Tool.

Lenovo’s Genomics Sizing Tool calculates the projected HPC usage for an expected workload; for example, it outputs the compute nodes and the active and archive storage needed to meet a workload quota (e.g., 50K genomes/yr.). The Sizing Tool can also size the current production capability of an existing cluster, answering questions such as “How many genomes can I process with my current cluster?” or “How many genomes per year can this year’s budget afford me?”
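
The flavor of such a sizing calculation can be sketched as follows. All constants here (runtime per genome, storage per genome, utilization) are illustrative assumptions, not Lenovo’s actual formulas:

```python
# Hypothetical sizing sketch: estimate compute nodes and storage for a
# target yearly genome throughput. Every constant below is an illustrative
# assumption, not a figure from the Genomics Sizing Tool itself.
import math

def size_cluster(genomes_per_year: int,
                 runtime_hours_per_genome: float = 5.5,
                 active_tb_per_genome: float = 0.5,
                 archive_tb_per_genome: float = 0.1,
                 utilization: float = 0.8):
    """Estimate nodes and storage for a target yearly throughput."""
    node_hours_needed = genomes_per_year * runtime_hours_per_genome
    hours_per_node_year = 365 * 24 * utilization  # usable hours per node
    nodes = math.ceil(node_hours_needed / hours_per_node_year)
    return {
        "nodes": nodes,
        "active_tb": genomes_per_year * active_tb_per_genome,
        "archive_tb": genomes_per_year * archive_tb_per_genome,
    }

print(size_cluster(50_000))  # the article's example quota of 50K genomes/yr.
```

The real tool folds in many more factors (workflow mix, data growth, archival policy), but the shape of the answer is the same: a node count plus active and archive storage targets for a given quota.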

We are leveraging both Lenovo’s optimized architecture and the Genomics Sizing Tool to help data centers around the world accelerate their workflows and plan their HPC resources more effectively as they take on ever-increasing workloads from cohort- and population-level genomics projects. Lenovo’s team of genomics experts works with each data center’s researchers, developers, and HPC experts to create custom HPC usage designs, projecting data growth over time and designing data flow, storage, and management across the cluster. These exercises in HPC usage and projection are proving invaluable for workload management, budget planning, IT expenditure justification and allocation, and resource accountability. Through its commitment to developing and adopting cutting-edge technological innovation, Lenovo is enabling the worldwide movement to sequence entire populations, bringing such initiatives closer to making precision medicine a reality and delivering on its promise of Smarter Technology for All. A white paper will follow with a detailed description of the systematic permutation tests and benchmarks alluded to here, as well as the resulting optimizations and the Genomics Sizing Tool for accelerating and sizing the HPC resources needed to deploy genomics at scale.

 
