Accelerating High Performance Computing (HPC) for Population-level Genomics

By Mileidy Giraldo, Ph.D.

September 30, 2019

The development of Next-Generation Sequencing (NGS) technologies in the late 2000s led to a dramatic decrease in the cost of DNA sequencing. The advent of NGS, coupled with the advancements in HPC storage and computing technologies at the time, created the perfect storm for a deluge of genomics data. This confluence of events led to a pressing question: how best to put all this data to use?

A genome is an organism’s complete set of DNA and, as such, it ultimately determines all biological functions and the myriad variations that make some of us susceptible to different diseases and others immune. Therefore, it is of great interest to the biomedical community to determine an individual’s genome, a process much like deciphering scrambled letters (genome sequencing) so one can assemble them into words (genome assembly) to write a book (variant analysis).
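To make the letters-and-words analogy concrete, the toy sketch below greedily merges overlapping "reads" back into a longer sequence, which is the essence of genome assembly. It is purely illustrative (a few lines of Python, not a production assembler, and unrelated to any specific tool):

```python
def overlap(a, b):
    """Length of the longest suffix of `a` that is a prefix of `b`."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_assemble(reads):
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        best_k, best_i, best_j = 0, -1, -1
        for i in range(len(reads)):
            for j in range(len(reads)):
                if i != j and overlap(reads[i], reads[j]) > best_k:
                    best_k = overlap(reads[i], reads[j])
                    best_i, best_j = i, j
        if best_k == 0:        # no overlaps left; stop merging
            break
        merged = reads[best_i] + reads[best_j][best_k:]
        for idx in sorted((best_i, best_j), reverse=True):
            reads.pop(idx)
        reads.append(merged)
    return max(reads, key=len)

# Overlapping fragments reassemble into the original "word":
print(greedy_assemble(["GATTAC", "TTACAG", "ACAGGT"]))  # GATTACAGGT
```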

Given the new affordability of NGS methods and the increased computing and storage capacities of the last decade, genomics can now be performed at the population level. Large national genomics initiatives such as the “UK Biobank,” the “All of Us” program in the US, Singapore’s “GenomeAsia,” and “Genomics Thailand” are emerging all around the world. With goals of sequencing 500K to over 1M participants in a few years’ time, these country-wide efforts aim to capture the genetic variation of their people to make Precision Medicine a reality. With Precision Medicine, the hope is to deliver individualized prevention, diagnosis, and treatment by leveraging knowledge of a person’s genetic background.

The greatest challenge such population-level genomics efforts face is scale: scaling up the input data from exomes (the portions of a genome that code information for protein synthesis) to whole genomes, scaling up production from a handful to tens of thousands of samples, and absorbing the corresponding strain on the HPC infrastructure. Exomes correspond to only 1% of the whole genome: small regions in genes dictating important biological functions. They were traditionally sequenced because of their smaller size, lower cost, and faster processing, yet they cannot provide the comprehensive picture found in the remaining 99% of the genome. Today, many genomics centers around the world are making the transition from exome to whole-genome sequencing, while also trying to tackle unprecedented volumes of data from hundreds of thousands of patients.
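A back-of-the-envelope calculation conveys what this scale-up means for storage. The per-sample sizes below are assumed ballparks for compressed ~30x-coverage alignments, not figures from this article:

```python
# Assumed illustrative per-sample sizes (compressed alignments, ~30x coverage):
EXOME_GB = 8     # exomes commonly land in the 5-10 GB range
WGS_GB = 100     # whole genomes are roughly an order of magnitude larger

for samples in (1_000, 100_000, 1_000_000):
    print(f"{samples:>9,} samples: "
          f"exomes ~{samples * EXOME_GB / 1e6:6.2f} PB, "
          f"whole genomes ~{samples * WGS_GB / 1e6:6.2f} PB")
```

Under these assumptions, a million whole genomes approaches 100 PB of raw alignments alone, before intermediates, variant calls, and backups.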

Three of the four analysis stages in genomics take place in the HPC environment of a cluster or supercomputer: genome assembly (assembling the DNA letters into words), variant analysis (comparing how a word/gene is spelled in different people), and downstream bioinformatics (measuring the effect of variations on function and disease); only the sequencing itself happens on the instrument. Therefore, scaling out genomics production largely depends on the HPC technologies made available to the genomics applications.
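For readers unfamiliar with what those HPC stages look like in practice, the sketch below strings together the core steps of a typical GATK germline run: alignment, duplicate marking, base recalibration, and variant calling. The tool names are real GATK4/bwa/samtools commands, but the sample name, file paths, and thread counts are placeholders:

```python
import subprocess

SAMPLE = "NA12878"            # placeholder sample name
REF = "ref/GRCh38.fasta"      # placeholder reference genome
KNOWN = "ref/dbsnp.vcf.gz"    # placeholder known-sites VCF for recalibration

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# 1. Align reads to the reference and sort the output.
run(f"bwa mem -t 16 {REF} {SAMPLE}_R1.fastq.gz {SAMPLE}_R2.fastq.gz "
    f"| samtools sort -@ 8 -o {SAMPLE}.bam -")
run(f"samtools index {SAMPLE}.bam")

# 2. Mark PCR/optical duplicates.
run(f"gatk MarkDuplicates -I {SAMPLE}.bam -O {SAMPLE}.md.bam "
    f"-M {SAMPLE}.dup_metrics.txt")

# 3. Base quality score recalibration (BQSR).
run(f"gatk BaseRecalibrator -I {SAMPLE}.md.bam -R {REF} "
    f"--known-sites {KNOWN} -O {SAMPLE}.recal.table")
run(f"gatk ApplyBQSR -I {SAMPLE}.md.bam -R {REF} "
    f"--bqsr-recal-file {SAMPLE}.recal.table -O {SAMPLE}.recal.bam")

# 4. Call variants, emitting a per-sample GVCF for later joint genotyping.
run(f"gatk HaplotypeCaller -I {SAMPLE}.recal.bam -R {REF} "
    f"-O {SAMPLE}.g.vcf.gz -ERC GVCF")
```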

With these dependencies in mind, Lenovo set out to identify which technologies bring the most acceleration to genomics workflows. To that end, we conducted a systematic study of the performance of hundreds of parameters across the 30+ tools in the Broad Institute’s Genome Analysis Toolkit (GATK) Germline Variant Calling workflow, run against hundreds of permutations of hardware building blocks, system tunings, data types (exomes, whole genomes), execution modes (latency vs. throughput), and software implementations (e.g., standard vs. Spark).
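Such a study is, at its core, a large Cartesian product of configurations, each timed end to end. A minimal sketch of a permutation harness follows; the factor names, their values, and the `run_pipeline.sh` launcher are hypothetical stand-ins, not the study's actual test matrix:

```python
import csv
import itertools
import subprocess
import time

# Hypothetical factors; the real study permuted many more dimensions.
factors = {
    "data_type": ["exome", "wgs"],
    "mode": ["latency", "throughput"],
    "impl": ["standard", "spark"],
    "threads": [8, 16, 32],
}

with open("sweep_results.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow([*factors, "wall_seconds"])
    for combo in itertools.product(*factors.values()):
        cfg = dict(zip(factors, combo))
        # Stand-in for launching one full pipeline run under this config.
        cmd = ["./run_pipeline.sh", cfg["data_type"], cfg["mode"],
               cfg["impl"], str(cfg["threads"])]
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        writer.writerow([*combo, round(time.perf_counter() - start, 1)])
```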

Today, it still takes a typical datacenter 150-160 hours to process a single whole genome and 4-6 hours for an exome. In 2017, Intel’s work on BIGstack (Intel’s reference architecture for GATK workflows) reduced processing times to 10.8 hours and 25 minutes, respectively. As a result of Lenovo’s permutation tests of the hardware, software, and system factors affecting the performance of genomics workflows, we identified an optimized architecture that can process one whole genome in 5.5 hours and one exome in 4 minutes with no specialty hardware. With Lenovo’s genomics-optimized hardware, a data center can expect to process 4.5 genomes or 343 exomes per node per day. Some genomics solutions promise processing times of around 3-4 hours for a whole genome, but they require expensive, specialized hardware that does not scale well for large volumes, as well as licenses for proprietary software. Lenovo’s optimized genomics architecture, on the other hand, provides a 27X to 40X performance improvement on non-specialty hardware, and does so in a manner that is more affordable and more scalable, reducing costs by leveraging open-source software that is validated and widely accepted by the scientific community.
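A quick check of the arithmetic behind some of these figures, using only the numbers quoted above; the small gaps versus the published 4.5-genome and 343-exome throughputs presumably reflect measured end-to-end production rather than naive division:

```python
# Figures quoted in this article:
baseline_wgs_hours = (150, 160)   # typical datacenter, per whole genome
optimized_wgs_hours = 5.5
optimized_exome_min = 4

# Speedup of the optimized architecture over the typical baseline.
print([round(h / optimized_wgs_hours) for h in baseline_wgs_hours])  # [27, 29]

# Naive per-node daily throughput implied by the runtimes alone.
print(round(24 / optimized_wgs_hours, 1))        # ~4.4 genomes/node/day
print(round(24 * 60 / optimized_exome_min))      # ~360 exomes/node/day
```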

Another byproduct of Lenovo’s systematic genomics performance testing was the ability to generate a fluid, rather than the static, reference architecture for genomics that is the norm in the HPC industry. Every genomics data center runs a different mix of workloads and analysis workflows, has different active and archive storage needs, and supports a different mix of research types; each therefore needs an architecture tailored to its specific requirements. Thus, we converted the lessons learned from our genomics benchmarking and systematic testing into formulas captured in an industry-first Genomics Sizing Tool.

Lenovo’s Genomics Sizing Tool calculates the projected HPC usage for an expected workload; for example, it outputs the compute nodes and the active and archive storage needed to meet a workload quota (e.g., 50K genomes/yr.). The Sizing Tool can also size the current production capabilities of an existing cluster, answering questions such as “How many genomes can I process with my current cluster?” or “How many genomes per year can this year’s budget afford me?”
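The tool's actual formulas are Lenovo's own, but a toy version of the forward calculation (workload in, infrastructure out) conveys the idea. The runtime comes from the optimized figure above; the utilization and per-genome storage constants are assumed for illustration only:

```python
import math

def size_cluster(genomes_per_year,
                 hours_per_genome=5.5,       # optimized runtime quoted above
                 node_utilization=0.8,       # assumed usable fraction of a node
                 active_tb_per_genome=0.5,   # assumed working-set footprint
                 archive_tb_per_genome=0.1): # assumed long-term footprint
    """Toy forward sizing: workload quota in, infrastructure out."""
    usable_hours_per_node = 365 * 24 * node_utilization
    nodes = math.ceil(genomes_per_year * hours_per_genome / usable_hours_per_node)
    return {
        "compute_nodes": nodes,
        "active_storage_tb": genomes_per_year * active_tb_per_genome,
        "archive_tb_per_year": genomes_per_year * archive_tb_per_genome,
    }

# Forward question: what does a 50K genomes/yr. quota require?
print(size_cluster(50_000))
# -> ~40 nodes, 25,000 TB active, 5,000 TB/yr. archive under these assumptions

# Inverse question: how many genomes/yr. can a 40-node cluster deliver?
print(round(40 * 365 * 24 * 0.8 / 5.5), "genomes/yr")   # ~50,967
```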

We are leveraging both Lenovo’s optimized architecture and the Genomics Sizing Tool to help data centers around the world accelerate their workflows and plan their HPC resources more effectively as they take on ever-increasing workloads from cohort-level and population-level genomics projects. Lenovo’s team of genomics experts works together with a data center’s researchers, developers, and HPC experts to create custom HPC usage designs, projecting data growth over time and designing data flow, storage, and management across the cluster. These exercises in HPC usage and projection are proving invaluable for workload management, budget planning, IT expenditure justification and allocation, and resource accountability. Through its commitment to developing and adopting cutting-edge technological innovation, Lenovo is enabling the worldwide movement to sequence entire populations, bringing such initiatives closer to making precision medicine a reality, and delivering on its promise of Smarter Technology for All. A white paper will soon follow with a detailed description of the systematic permutation tests and benchmarks alluded to here, as well as the resulting optimizations and the Genomics Sizing Tool for accelerating and sizing the HPC resources needed to deploy genomics at scale.

 
