Accelerating Genomics Pipelines Using Intel’s Open Omics Acceleration Framework on AWS

By Amazon Web Services

September 5, 2022

This post was contributed by Sanchit Misra, PhD, Vasimuddin Md., PhD, Saurabh Kalikar, PhD, and Narendra Chaudhary, PhD, research scientists at Intel Labs.

Introduction

We are in the epoch of digital biology, which is fueled by the convergence of three revolutions:

    • the measurement of biological systems at high resolution,
    • novel data science (AI and data management) techniques that can be applied to this data, and
    • widespread use of massive public data repositories, large collaborative projects, and consortia, which in turn promote the use of the cloud for easy data access.

Genomics is a prime example of this trend: high-throughput next-generation sequencing (NGS) devices are being used to sequence DNA, mRNA, regulatory regions, the gut microbiome, and more, while the corresponding computational workflows are being developed, rapidly standardized, and scaled by running on the cloud. With the enormous quantities of genomic data being collected, processing times are often on the order of billions of core hours, and the cost of processing increases commensurately. As a result, customers are looking for optimized tools and systems that deliver the shortest runtimes and lowest costs.

Intel’s Open Omics Acceleration Framework (in short, Open Omics) is an open-source, high-throughput framework for accelerating omics pipelines. Intel is developing this framework with the following characteristics:

  • Community driven: The Open Omics framework is being built based on extensive discussions with thought leaders in digital biology to understand the requirements of the user community. Moreover, Intel is building the framework with a modular design, which enables the developer community to use efficient modules to achieve faster performance for existing and new software tools in a productive manner.
  • Open source: so that anyone can customize it for variations in use cases.
  • Hardware accelerated: it uses the underlying hardware efficiently to reduce cloud costs.
  • Supports the full application stack: The application layer supports a wide range of applications in genomics, single-cell analysis, and drug discovery. The middleware layer has scalable and efficient implementations of key building blocks, such as data management and key compute motifs. All of this is optimized for the processor, memory, storage, and networking.

In this blog, we showcase the first version of Open Omics and benchmark three applications used in processing NGS data – the sequence alignment tools BWA-MEM and minimap2, and single-cell ATAC-Seq data analysis – on Xeon-based Amazon Elastic Compute Cloud (Amazon EC2) instances.

Applications benchmarked for this blog

BWA-MEM and minimap2 are popular software tools for aligning short reads and long reads, respectively, to a reference sequence. The Open Omics version of BWA-MEM is called BWA-MEM2, and that of minimap2 is called mm2-fast. These are efficient, architecture-aware implementations of the original tools, built in collaboration with Prof. Heng Li. Both are drop-in replacements that significantly reduce runtime and cloud costs while keeping the command-line interface and output identical to the original tools [2,3], and both have been open sourced. Open Omics BWA-MEM has already been used by more than 40 peer-reviewed genomics studies, including research on COVID-19 [5,6], the gut microbiome [7], and cancer [8].
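Because the command-line interface is preserved, adopting the Open Omics tools in an existing pipeline is essentially a one-word change. The sketch below illustrates this for BWA-MEM, assuming the bwa-mem2 binary is on the PATH; the reference, read files, output name, and thread count are placeholders.

```python
# Illustrative sketch: swapping baseline BWA-MEM for Open Omics BWA-MEM (bwa-mem2).
# The same "mem" subcommand and flags are used; only the binary name changes.
# Note: bwa-mem2 uses its own index format, built once with `bwa-mem2 index GRCh38.fa`.
import subprocess

REF = "GRCh38.fa"                                    # placeholder reference
R1, R2 = "sample_R1.fastq.gz", "sample_R2.fastq.gz"  # placeholder paired-end reads
THREADS = 64                                         # one thread per vCPU on m6i.16xlarge

with open("sample.sam", "w") as out:
    subprocess.run(
        # Baseline would be: ["bwa", "mem", "-t", str(THREADS), REF, R1, R2]
        ["bwa-mem2", "mem", "-t", str(THREADS), REF, R1, R2],
        stdout=out,
        check=True,
    )
```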

ATAC-seq assays identify accessible chromatin regions in DNA. ATACWorks [1] is a deep learning toolkit that de-noises ATAC-seq signal and identifies accessible chromatin regions from 1D data. The Open Omics version of ATACWorks builds an efficient 1D dilated convolution layer and demonstrates reduced-precision (BFloat16) training to achieve significant performance gains without any loss of accuracy [4].
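As a rough illustration of the technique (not the ATACWorks implementation itself), the sketch below runs one training step of a small 1D dilated convolution model on CPU under BFloat16 autocast in PyTorch; the layer sizes and synthetic tensors are placeholder assumptions.

```python
# Minimal sketch: 1D dilated convolution trained in BFloat16 on CPU via PyTorch autocast.
# Shapes and hyperparameters are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(in_channels=1, out_channels=15, kernel_size=51, dilation=8, padding=200),
    nn.ReLU(),
    nn.Conv1d(15, 1, kernel_size=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

noisy = torch.randn(4, 1, 4096)  # synthetic batch of noisy 1D coverage tracks
clean = torch.randn(4, 1, 4096)  # synthetic "clean" targets

optimizer.zero_grad()
# Forward pass and loss run in reduced precision (BFloat16) on CPU.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    denoised = model(noisy)
    loss = loss_fn(denoised, clean)
loss.backward()
optimizer.step()
```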

Benchmarking the Open Omics Acceleration Framework on AWS

Amazon EC2 Instances used in this benchmarking

The four types of Amazon EC2 Instances used in this benchmarking study are detailed in the following table.

Table 1: Details of the Amazon EC2 instance types used for benchmarking. On-Demand and Spot pricing are from the publish date for the US East (N. Virginia) Region and are subject to change over time. Please consult the Amazon EC2 pricing page for current pricing in your region.
Instance name  | On-Demand hourly rate | Spot hourly rate | Number of vCPUs | Memory
c5.12xlarge    | $2.04                 | $0.4984          | 48              | 96 GiB
m5.12xlarge    | $2.304                | $0.4933          | 48              | 192 GiB
c6i.16xlarge   | $2.72                 | $0.7602          | 64              | 128 GiB
m6i.16xlarge   | $3.072                | $0.7406          | 64              | 256 GiB

Prerequisites

An AWS account with permissions to provision Amazon S3 buckets for input and output data storage, as well as sufficient permissions/limits to provision Amazon EC2 C5, M5, C6i, and M6i Instances.

How to benchmark the Open Omics Acceleration Framework on AWS

The configuration details and steps used for benchmarking the baseline and Open Omics versions of all three applications on EC2 instances are documented on Intel Labs' GitHub page. The typical process involves launching the corresponding EC2 instances, connecting to them, installing the software, downloading the datasets, and executing the baseline and Open Omics versions. In the following subsections, we report results for the three applications on On-Demand Instances with dedicated tenancy. Compared to the On-Demand costs shown, EC2 Spot Instances can provide nearly 75% cost savings.
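As a minimal sketch of the first step, the snippet below launches one of the benchmark instances with boto3; the AMI ID, key pair, and volume size are placeholder assumptions, and the GitHub page referenced above remains the authoritative source for the setup steps.

```python
# Minimal sketch: launch an m6i.16xlarge On-Demand instance with dedicated tenancy.
# Placeholders: AMI ID, key pair name, and EBS volume size.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",   # placeholder: e.g. an Amazon Linux 2 AMI
    InstanceType="m6i.16xlarge",       # one of the instance types in Table 1
    KeyName="my-key-pair",             # placeholder key pair for SSH access
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "dedicated"},
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 500, "VolumeType": "gp3"},  # room for the datasets
    }],
)
print("Launched:", response["Instances"][0]["InstanceId"])
```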

Benchmarking Results: BWA-MEM

We used m5.12xlarge and m6i.16xlarge instances, with 48 and 64 threads (one thread per vCPU), respectively. The m-instance types were used because they provide the 4 GiB of memory per vCPU required to run Open Omics BWA-MEM.

Figure 1 shows that on the same instance type (m5), Open Omics BWA-MEM achieves a 1.8-2.3x speedup over baseline BWA-MEM. Using the m6i instance type gives a further performance gain, achieving 2.6-3.5x over baseline BWA-MEM on m5. The performance of Open Omics BWA-MEM on the m6i instance reported here is ~1.7x faster than the best performance on the latest GPU; please refer to this blog post and this video for a comparison.

The speedups are lower for the ERR194147 dataset because it has reads of length ~100, providing less scope for parallelization; the other two datasets have reads of length ~150. A majority of modern short-read sequencers have read lengths ≥ 150, and read lengths are expected to grow further, so we can expect higher speedups in the future.

Figure 1: Comparison of execution time of baseline BWA-MEM and Open Omics BWA-MEM on m5 and m6i instances for two different use cases – paired end and single end – for the three datasets used. The vertical bars show the execution time, while the line graph shows the speedup compared to baseline BWA-MEM on m5.

Figure 2 shows the price-performance chart for BWA-MEM. It demonstrates that Open Omics BWA-MEM achieves significant cost savings compared to baseline BWA-MEM. Moreover, the m6i instances not only provide faster performance than the m5 instances, they also incur lower costs.

Figure 2: Comparison of On-Demand Instance costs per sample processed of baseline BWA-MEM and Open Omics BWA-MEM on m5 and m6i instances for two different use cases – paired end and single end – for the three datasets used.
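The per-sample costs in Figures 2 and 4 follow directly from the hourly rates in Table 1 and the measured wall-clock times: cost per sample = runtime in hours × hourly rate. A minimal sketch, with hypothetical runtimes used purely for illustration:

```python
# Sketch of the cost-per-sample calculation behind Figures 2 and 4.
# On-Demand rates are from Table 1; the runtimes below are hypothetical examples,
# not the measured values from the benchmarks.
ON_DEMAND_RATE = {
    "c5.12xlarge": 2.04,
    "m5.12xlarge": 2.304,
    "c6i.16xlarge": 2.72,
    "m6i.16xlarge": 3.072,
}

def cost_per_sample(instance_type: str, runtime_hours: float) -> float:
    """Cost of processing one sample = wall-clock hours x On-Demand hourly rate."""
    return ON_DEMAND_RATE[instance_type] * runtime_hours

# Hypothetical example: a 3x speedup on m6i over the baseline on m5
print(f"baseline on m5:    ${cost_per_sample('m5.12xlarge', 6.0):.2f}")   # 6 h run
print(f"Open Omics on m6i: ${cost_per_sample('m6i.16xlarge', 2.0):.2f}")  # 2 h run
```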

Benchmarking Results: Minimap2

For this experiment, we used c5.12xlarge and c6i.16xlarge instances, with 48 and 64 threads (one thread per vCPU), respectively. Figure 3 shows that on the same instance type (c5), Open Omics minimap2 achieves a 1.5-1.9x speedup over baseline minimap2. Using the c6i instance type gives a further performance gain, achieving 2-2.4x over baseline minimap2 on c5.
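As with BWA-MEM, mm2-fast keeps minimap2's command-line interface, so the same preset flags apply for the ONT, CLR, and HiFi datasets; only the executable changes. The sketch below illustrates this; the mm2-fast binary name, file paths, and thread count are assumptions for illustration (the actual build may name the executable differently).

```python
# Illustrative sketch: running baseline minimap2 and Open Omics minimap2 (mm2-fast)
# with identical arguments. Presets follow minimap2's conventions for each technology.
# The mm2-fast binary name and all file paths are placeholders.
import subprocess

PRESET = {"ONT": "map-ont", "CLR": "map-pb", "HiFi": "map-hifi"}

def align(binary: str, tech: str, ref: str, reads: str, out_sam: str, threads: int = 64):
    """Map long reads to a reference and write SAM output."""
    with open(out_sam, "w") as out:
        subprocess.run(
            [binary, "-ax", PRESET[tech], "-t", str(threads), ref, reads],
            stdout=out,
            check=True,
        )

# Same arguments, different binary:
align("minimap2", "HiFi", "GRCh38.fa", "hifi_reads.fq.gz", "baseline.sam")
align("mm2-fast", "HiFi", "GRCh38.fa", "hifi_reads.fq.gz", "open_omics.sam")
```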

Figure 3: Comparison of execution time of baseline minimap2 and Open Omics minimap2 on c5 and c6i instances for mapping reads obtained from different sequencing technologies – Oxford Nanopore Technologies (ONT), Pacific Biosciences CLR (CLR), Pacific Biosciences HiFi (HiFi) – to the reference human genome. The vertical bars show the execution time, while the line graph shows the speedup compared to baseline minimap2 on c5.

The price-performance chart shown in Figure 4 clearly demonstrates that Open Omics minimap2 costs nearly the same on the c6i and c5 instances, while achieving significant cost savings over the baseline minimap2 running on c5.

Figure 4: Comparison of On-Demand Instance costs per sample of baseline minimap2 and Open Omics minimap2 on c5 and c6i instances for the three datasets used.

Benchmarking Results: ATAC-Seq data analysis

Figure 5 compares the execution time of the baseline and Open Omics versions of ATACWorks on c5 and c6i instances. The baseline version of ATACWorks is created by replacing the CUDA-based deep learning modules with the Intel® oneDNN library. The Open Omics version uses Intel's new optimized implementation of 1D convolutions. The chart shows…

Read the full blog to learn more. Reminder: You can learn a lot from AWS HPC engineers by subscribing to the HPC Tech Short YouTube channel and following the AWS HPC Blog channel.

 
