Accelerating High Performance Computing (HPC) for Population-level Genomics

By Mileidy Giraldo, Ph.D.

September 30, 2019

The development of Next-Generation Sequencing (NGS) technologies in the late 2000s led to a dramatic decrease in the cost of DNA sequencing. The advent of NGS, coupled with the advances in HPC storage and computing technologies of the time, created the perfect storm for a deluge of genomics data. This confluence of events led to a pressing question: how best to put all this data to use?

A genome is an organism’s complete set of DNA and, as such, it ultimately determines all biological functions and the myriad variations that make some of us susceptible, and others immune, to different diseases. It is therefore of great interest to the biomedical community to determine an individual’s genome, a process much like deciphering scrambled letters (genome sequencing), assembling them into words (genome assembly), and interpreting the resulting book (variant analysis).

Given the new affordability of NGS methods and the increased computing and storage capacities of the last decade, genomics can now be performed at the population level. Large national genomics initiatives, such as the UK Biobank, the All of Us program in the US, Singapore’s GenomeAsia, and Genomics Thailand, are emerging all around the world. With goals of sequencing 500K to over 1M participants in a few years’ time, these country-wide efforts aim to capture the genetic variation of their people to make Precision Medicine a reality. The hope is that Precision Medicine will deliver individualized prevention, diagnosis, and treatment by leveraging knowledge of a person’s genetic background.

The greatest challenge such population-level genomics efforts face is scale: scaling up the input data from exomes (the portions of a genome that code information for protein synthesis) to whole genomes, scaling up production from a handful to tens of thousands of samples, and absorbing the corresponding strain on the HPC infrastructure. An exome corresponds to roughly 1% of a whole genome and spans the small regions in genes dictating important biological functions; exomes were traditionally sequenced because of their smaller size, lower cost, and faster processing. Yet an exome cannot provide the comprehensive picture that includes the remaining 99% of the genome. Today, many genomics centers around the world are making the transition from exome to whole-genome sequencing, while also trying to tackle unprecedented volumes of data from hundreds of thousands of patients.
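To make that jump in scale concrete, a back-of-envelope estimate of the raw data volumes is sketched below; the per-sample sizes are assumed typical FASTQ footprints, not figures from this study:

```python
# Back-of-envelope estimate of raw data volume for the exome-to-WGS
# transition. Per-sample sizes are assumed typical values, not measured ones.
EXOME_GB = 6      # assumed raw reads per exome
GENOME_GB = 100   # assumed raw reads per 30x whole genome

for samples in (1_000, 100_000, 500_000):
    print(f"{samples:>7,} samples: "
          f"exomes ~{samples * EXOME_GB / 1_000:,.0f} TB, "
          f"whole genomes ~{samples * GENOME_GB / 1_000:,.0f} TB")
```

At 500K participants, whole-genome raw data alone reaches tens of petabytes under these rough assumptions, an order of magnitude beyond the equivalent exome program.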

Three of the four analysis stages in genomics take place in the HPC environment of a cluster or supercomputer (the fourth, sequencing itself, happens on the instrument): genome assembly (assembling the DNA letters into words), variant analysis (comparing how a word/gene is spelled in different people), and downstream bioinformatics (measuring the effect of variations on function or disease). Scaling out genomics production therefore largely depends on the HPC technologies made available to the genomics applications.
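As a rough illustration of that cluster-side work, the sketch below chains the core per-sample steps of the Broad Institute’s published GATK germline best practices; in resequencing workflows like this one, the “assembly” stage is in practice reference-guided alignment. All file names are hypothetical, and a production pipeline would add QC, scatter/gather parallelism, and joint genotyping across samples:

```python
# Sketch of the per-sample cluster-side stages, following the Broad
# Institute's published GATK germline best practices. All file names
# (ref.fa, sample_*.fq.gz, known_sites.vcf.gz) are hypothetical.
import subprocess

steps = [
    # Alignment to the reference, producing a sorted BAM ("letters into words").
    "bwa mem -t 16 ref.fa sample_R1.fq.gz sample_R2.fq.gz "
    "| samtools sort -@ 8 -o sample.sorted.bam -",
    # Mark PCR/optical duplicate reads.
    "gatk MarkDuplicates -I sample.sorted.bam -O sample.dedup.bam -M dup_metrics.txt",
    # Recalibrate base quality scores against known variant sites.
    "gatk BaseRecalibrator -I sample.dedup.bam -R ref.fa "
    "--known-sites known_sites.vcf.gz -O recal.table",
    "gatk ApplyBQSR -I sample.dedup.bam -R ref.fa "
    "--bqsr-recal-file recal.table -O sample.recal.bam",
    # Call variants into a per-sample gVCF for later joint analysis.
    "gatk HaplotypeCaller -I sample.recal.bam -R ref.fa -ERC GVCF -O sample.g.vcf.gz",
]

for cmd in steps:
    subprocess.run(cmd, shell=True, check=True)
```

Each stage is a separate executable with its own CPU, memory, and I/O profile, which is why the workflow responds so differently to different mixes of hardware.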

With these dependencies in mind, Lenovo set out to identify which technologies bring the most acceleration to genomics workflows. To that end, we conducted a systematic study of the performance of hundreds of parameters across the 30+ tools in the Broad Institute’s Genome Analysis Toolkit (GATK) Germline Variant Calling Workflow, benchmarked against hundreds of permutations of hardware building blocks, system tunings, data types (exomes, whole genomes), execution modes (latency vs. throughput), and software implementations (e.g., standard vs. Spark).
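Conceptually, such a study is a sweep over the cross-product of all the factors. A minimal sketch of how a sweep like that can be organized is below; the factor values and the benchmark stub are placeholders, not Lenovo’s actual test matrix:

```python
# Minimal harness for a factor-permutation sweep; the factor values and
# the benchmark stub are placeholders, not Lenovo's actual test matrix.
import itertools
import time

factors = {
    "threads":        [16, 32, 64],
    "data_type":      ["exome", "wgs"],
    "mode":           ["latency", "throughput"],
    "implementation": ["standard", "spark"],
}

def run_workflow(cfg):
    """Placeholder: launch the GATK workflow under this configuration
    and return its wall-clock runtime in seconds."""
    start = time.time()
    # A real harness would invoke the pipeline here, e.g. via subprocess.
    return time.time() - start

results = []
for values in itertools.product(*factors.values()):
    cfg = dict(zip(factors, values))
    results.append((cfg, run_workflow(cfg)))

# Rank permutations by runtime; the fastest ones point at the winning
# combination of hardware and software settings.
for cfg, secs in sorted(results, key=lambda r: r[1])[:5]:
    print(f"{secs:8.3f}s  {cfg}")
```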

Today, it still takes a typical datacenter 150-160 hrs. to process a single whole genome and 4-6 hrs. for an exome. In 2017, Intel’s work on BIGstack (Intel’s reference architecture for GATK workflows) reduced processing times to 10.8 hrs. and 25 min., respectively. As a result of Lenovo’s permutation tests of the hardware, software, and system factors affecting the performance of genomics workflows, we identified an optimized architecture that can process one whole genome in 5.5 hrs. and one exome in 4 minutes with no specialty hardware. With Lenovo’s genomics-optimized hardware, a data center can expect to process 4.5 genomes or 343 exomes per node per day. Some genomics solutions on the market promise processing times of around 3-4 hrs. for a whole genome, but they require expensive, specialized hardware that does not scale well to large volumes, as well as licenses for proprietary software. Lenovo’s optimized genomics architecture, on the other hand, provides a 27X to 40X performance improvement on non-specialty hardware, and does so in a manner that is more affordable and more scalable, reducing costs by leveraging open-source software that is validated and widely accepted by the scientific community.
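A quick back-of-envelope check connects those headline figures to the runtimes; the quoted 4.5 genomes and 343 exomes per node per day sit a little below the ideal numbers computed here, presumably reflecting real-world overhead:

```python
# Per-node throughput implied by the optimized runtimes above.
GENOME_HRS, EXOME_MIN = 5.5, 4

print(f"genomes/node/day (ideal): {24 / GENOME_HRS:.1f}")      # ~4.4
print(f"exomes/node/day (ideal):  {24 * 60 / EXOME_MIN:.0f}")  # 360

# Speedup over the 150-160 hr. whole-genome baseline:
print(f"whole-genome speedup: ~{150 / GENOME_HRS:.0f}x")       # ~27x
```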

Another byproduct of Lenovo’s systematic genomics performance testing was the ability to generate a fluid, rather than the static, reference architecture for genomics that is the norm in the HPC industry. Every genomics data center adopts a different mix of workloads and analysis workflows, has different active and archive storage needs, and supports a different mix of research types; each therefore needs an architecture tailored to its specific requirements. Thus, we converted the lessons learned from our genomics benchmarking and systematic testing into formulas captured in an industry-first Genomics Sizing Tool.

Lenovo’s Genomics Sizing Tool calculates the projected HPC usage for an expected workload; for example, it outputs the compute nodes and the active and archive storage needed to meet a workload quota (e.g., 50K genomes/yr.). The Sizing Tool can also be used to size the current production capabilities of an existing cluster, answering questions such as “How many genomes can I process with my current cluster?” or “How many genomes per year can this year’s budget afford me?”
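The tool’s actual formulas are Lenovo’s own, but the general shape of the arithmetic can be sketched. In the toy sizer below, the throughput figure comes from the benchmarks above, while the utilization factor and per-sample storage footprints are assumptions chosen purely for illustration:

```python
# Toy sizer: nodes and storage for a yearly genome quota. Throughput is
# taken from the benchmarks above; utilization and per-sample storage
# footprints are illustrative assumptions, not the tool's real formulas.
import math

GENOMES_PER_NODE_DAY = 4.5    # per-node throughput from the benchmarks above
UTILIZATION = 0.85            # assumed fraction of node-hours doing useful work
ACTIVE_GB_PER_GENOME = 500    # assumed scratch/working footprint per sample
ARCHIVE_GB_PER_GENOME = 120   # assumed long-term footprint (e.g., CRAM + VCF)

def size_cluster(genomes_per_year, days_in_flight=14):
    """Estimate compute nodes plus active and archive storage (TB)."""
    per_node_year = GENOMES_PER_NODE_DAY * 365 * UTILIZATION
    nodes = math.ceil(genomes_per_year / per_node_year)
    in_flight = genomes_per_year / 365 * days_in_flight   # samples on scratch
    active_tb = in_flight * ACTIVE_GB_PER_GENOME / 1_000
    archive_tb = genomes_per_year * ARCHIVE_GB_PER_GENOME / 1_000
    return nodes, active_tb, archive_tb

nodes, active, archive = size_cluster(50_000)   # the 50K genomes/yr. example
print(f"{nodes} nodes, ~{active:,.0f} TB active, ~{archive:,.0f} TB archive/yr.")
```

Running the same arithmetic in reverse answers the capacity questions above: multiply the nodes a cluster already has by the per-node yearly throughput to estimate how many genomes per year it can sustain.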

We are leveraging both Lenovo’s optimized architecture and the Genomics Sizing Tool to help data centers around the world accelerate their workflows and plan their HPC resources more effectively as they take on ever-increasing workloads from cohort-level and population-level genomics projects. Lenovo’s team of genomics experts works together with a data center’s researchers, developers, and HPC experts to create custom HPC usage designs: projecting data growth over time and designing data flow, storage, and management across the cluster. These exercises in HPC usage and projection are proving invaluable for workload management, budget planning, IT expenditure justification and allocation, and resource accountability. Through its commitment to developing and adopting cutting-edge technological innovation, Lenovo is enabling the worldwide movement to sequence entire populations, bringing such initiatives closer to making precision medicine a reality and delivering on its promise of Smarter Technology for All. A white paper will follow soon with a detailed description of the systematic permutation tests and benchmarks alluded to here, as well as the resulting optimizations and the Genomics Sizing Tool for accelerating and sizing the HPC resources needed to deploy genomics at scale.