The Revolution in the Lab is Overwhelming IT

By John Russell

October 5, 2015

Sifting through the vast treasure trove of data spilling from modern life science instruments is perhaps the defining challenge for biomedical research today. NIH, for example, generates about 1.5PB of data a month, and that excludes NIH-funded external research. Not only have DNA sequencers become extraordinarily powerful; they have also proliferated in size and type, from big workhorse instruments like the Illumina HiSeq X Ten, down to reliable bench-top models (MiSeq) suitable for small labs, with USB-stick-sized devices now in advanced development.

“The flood of sequence data, human and non-human that may impact human health, is certainly growing and in need of being integrated, mined, and understood. Further, there are emerging technologies in imaging and high resolution structure studies that will be generating a huge amount of data that will need to be analyzed, integrated, and understood,”[i] said Jack Collins, Director of the Advanced Biomedical Computing Center at the Frederick National Laboratory for Cancer Research, NCI.

Here are just a few of the many feeder streams to the data deluge:

  • DNA Sequencers. An Illumina (NASDAQ: ILMN) top-of-the-line HiSeq X Ten can sequence a full human genome in just 18 hours, generating 3TB in the process, and can deliver 18,000 genomes a year. The file for a single whole-genome sample may exceed 75GB.
  • Live cell imaging. High throughput imaging in which robots screen hundreds of millions of compounds on live cells typically generate tens of terabytes weekly.
  • Confocal imaging. Scanning hundreds of tissue sections, sometimes with many scans per section, each with 20-40 layers and multiple fluorescent channels, can produce on the order of 10TB weekly.
  • Structural Data. Advanced investigation into form and structure is driving huge and diverse datasets derived from many sources.
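The sequencing figures above imply a striking storage burden. Here is a rough back-of-envelope sketch; the per-sample size (~75GB) and annual throughput (18,000 genomes) come from the numbers cited above, while the two-copy replication factor is an assumed operational choice:

```python
# Back-of-envelope arithmetic: annual storage implied by one HiSeq X Ten
# system at ~75GB per whole-genome sample and 18,000 genomes per year.
# The replication factor is an assumption, not a vendor figure.

GB_PER_TB = 1000
TB_PER_PB = 1000

genomes_per_year = 18_000
gb_per_genome = 75            # single whole-genome sample file
replication_factor = 2        # assumed: one working copy plus one backup

raw_tb = genomes_per_year * gb_per_genome / GB_PER_TB
total_pb = raw_tb * replication_factor / TB_PER_PB

print(f"raw sequence data per year: {raw_tb:,.0f} TB")
print(f"with replication:           {total_pb:.2f} PB")
# roughly 1,350 TB raw, or about 2.7 PB with two copies
```

Even this simplified estimate puts a single well-utilized sequencing system in petabyte-per-year territory before any analysis intermediates are counted.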

Broadly, the flood of data from various LS instruments stresses virtually every part of a typical research computing environment (CPU, network, storage, and system and application software). Indeed, important research and clinical work can be delayed or never attempted because, although generating the data is feasible, the time required to perform the data analysis can be impractical. Faced with these situations, research organizations are forced to retool their IT infrastructure.

“Bench science is changing month to month while IT infrastructure is refreshed every 2-7 years. Right now IT is not part of the conversation [with life scientists] and running to catch up,” noted Ari Berman, GM of Government Services at the BioTeam consulting firm and a member of the Tabor EnterpriseHPC Conference Advisory Board.

The sheer volume of data is only one aspect of the problem. Diversity in files and data types further complicates efforts to build the “right” infrastructure. Berman noted in a recent presentation that life sciences generates massive text files, massive binary files, large directories (many millions of files), large files of ~600GB, and very many small files of ~30KB or less. Workflows likewise vary: sequence alignment and variant calling offer one set of challenges; pathway simulation presents another; creating 3D models, perhaps of the brain, and using them to guide detailed neurosurgery with real-time analytic feedback presents yet another.
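Before designing storage for such a mix, infrastructure teams often start by profiling what is actually on disk, since millions of ~30KB files stress a file system very differently than a handful of ~600GB ones. A minimal sketch of that kind of profiling (the size threshold and the simulated file mix are illustrative assumptions):

```python
# Profile a directory tree: bucket files into "tiny" (<1 MB) and "large"
# (>=1 MB), tracking file counts and total bytes in each bucket. The 1 MB
# cutoff is an arbitrary illustrative threshold.
import os
import tempfile

def profile_tree(root):
    buckets = {"tiny": [0, 0], "large": [0, 0]}   # [file count, total bytes]
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            size = os.path.getsize(os.path.join(dirpath, name))
            key = "tiny" if size < 1_000_000 else "large"
            buckets[key][0] += 1
            buckets[key][1] += size
    return buckets

if __name__ == "__main__":
    # Simulate a life-science-style mix: many small metadata files
    # alongside one large binary image
    with tempfile.TemporaryDirectory() as root:
        for i in range(100):
            with open(os.path.join(root, f"meta_{i}.txt"), "wb") as f:
                f.write(b"x" * 30_000)
        with open(os.path.join(root, "image.bin"), "wb") as f:
            f.write(b"y" * 5_000_000)
        print(profile_tree(root))
```

A real survey would also capture access patterns and file ages, but even this crude count-versus-bytes split makes the small-file problem visible.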

“Data piles up faster than it ever has before. In fact, a single new sequencer can typically generate terabytes of data a day. And as a result, an organization or lab with multiple sequencers is capable of producing petabytes of data in a year. The data from the sequencers must be analyzed and visualized using third-party tools. And then it must be managed over time,” said Berman.

Human Brain Project

An excellent, though admittedly high-end, example of the growing complexity of computational tools being contemplated and developed in life science research is the European Union Human Brain Project[ii] (HBP). Among its lofty goals is the creation of six information and communications technology (ICT) platforms intended to enable “large-scale collaboration and data sharing, reconstruction of the brain at different biological scales, federated analysis of clinical data to map diseases of the brain, and development of brain-inspired computing systems.”

The six planned ICT platforms are[iii]:

  • Neuroinformatics: a data repository, including brain atlases.
  • Brain Simulation: building ICT models and simulations of brains and brain components.
  • Medical Informatics: bringing together information on brain diseases.
  • Neuromorphic Computing: ICT that mimics the functioning of the brain.
  • Neurorobotics: testing brain models and simulations in virtual environments.
  • HPC Infrastructure: hardware and software to support the other Platforms.

(Tellingly, HBP organizers have recognized the limited computational expertise of many biomedical researchers and also plan to develop technical support and training programs for users of the platforms.)

There is broad agreement in the life sciences research community that there is no single best HPC infrastructure to handle the many LS use cases. The best approach is to build for the dominant use cases. Even here, said Berman, building HPC environments for LS is risky: “The challenge is to design systems today that can support unknown research requirements over many years.” And of course, this all must be accomplished in a cost-constrained environment.

“Some lab instruments know how to submit jobs to clusters. You need heterogeneous systems. Homogeneous clusters don’t work well in life sciences because of the varying use cases. Newer clusters are kind of a mix and match of things; we have fat nodes with tons of CPUs and thin nodes with really fast CPUs, [for example],” said Berman.
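The fat-node/thin-node mix Berman describes can be thought of as a matching problem between job requirements and node classes: route memory-hungry steps (such as de novo assembly) to high-memory nodes and route CPU-bound shards to fast thin nodes. A hypothetical sketch; the node specifications and job sizes below are invented for illustration, not drawn from any real cluster:

```python
# Toy scheduler logic for a heterogeneous cluster: prefer cheap thin nodes,
# fall back to fat nodes only when a job's footprint demands it.
# Node specs are assumed for illustration.

NODE_CLASSES = {
    "thin": {"cores": 16, "mem_gb": 128},    # fewer but faster cores
    "fat":  {"cores": 64, "mem_gb": 1024},   # many cores, huge RAM
}

def pick_node_class(job_mem_gb, job_cores):
    """Return the smallest node class that fits the job."""
    for name in ("thin", "fat"):
        spec = NODE_CLASSES[name]
        if job_mem_gb <= spec["mem_gb"] and job_cores <= spec["cores"]:
            return name
    raise ValueError("no node class fits this job")

print(pick_node_class(job_mem_gb=512, job_cores=32))  # assembly-style job
print(pick_node_class(job_mem_gb=8, job_cores=4))     # alignment shard
```

Production schedulers (SLURM, Grid Engine and the like) express the same idea through partitions and resource constraints rather than hand-rolled logic.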

Just one genome, depending upon the type of sequencing and the coverage, can generate 100GB of data to manage. Capturing, analyzing, storing, and presenting the accumulating data requires a hybrid HPC infrastructure that blends traditional cluster computing with emerging tools such as iRODS (Integrated Rule-Oriented Data System) and Hadoop. Unsurprisingly, the HPC infrastructure is always a work in progress.
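The rule-oriented idea behind a tool like iRODS can be pictured as policy functions that inspect a data object's metadata and decide where it should live. The sketch below is a hypothetical illustration of that pattern in plain Python; it is not iRODS's actual rule language, and the tier names and thresholds are invented:

```python
# Illustrative data-placement policy: route objects to a storage tier
# based on age and size. Tiers and cutoffs are assumed for illustration.
from dataclasses import dataclass

@dataclass
class DataObject:
    name: str
    size_gb: float
    days_since_access: int

def tier_for(obj: DataObject) -> str:
    """Hot scratch for active data, parallel file system for warm data,
    tape archive for large cold data."""
    if obj.days_since_access <= 7:
        return "scratch"
    if obj.days_since_access <= 90 or obj.size_gb < 1:
        return "parallel-fs"
    return "tape-archive"

samples = [
    DataObject("run42.bam", 80, 2),     # fresh off the sequencer
    DataObject("run17.bam", 75, 30),    # under active analysis
    DataObject("run03.bam", 78, 400),   # long finished
]
for s in samples:
    print(s.name, "->", tier_for(s))
```

The value of encoding placement as explicit rules is that the policy can evolve as the data mix changes, without touching the analysis pipelines themselves.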

Here’s a snapshot of two of the most common genomic analysis pipelines:

  1. DNA Sequencing. DNA extracted from tissue samples is run through high-throughput NGS instruments. These modern sequencers generate hundreds of millions of short DNA sequences for each sample, which must then be ‘assembled’ into proper order to determine the genome. Researchers use parallelized computational workflows to perform the assembly and to run quality control that fixes assembly errors.
  2. Variant Calling. DNA variations (SNPs, haplotypes, indels, etc.) for an individual are detected, often using large patient populations to help resolve ambiguities in the individual’s sequence data. Data may be organized into a hybrid solution that uses a relational database to store canonical variations, high-performance file systems to hold data, and a Hadoop-based approach for specialized data-intensive analysis. Links to public and private databases help researchers identify the impact of variations, including, for example, whether variants have known associations with clinically relevant conditions.
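The variant-calling step above can be reduced to a toy form: given short reads already aligned to a reference, count the bases observed at each position (a “pileup”) and call a SNP wherever a majority base disagrees with the reference. Real callers are statistical and far more sophisticated; the reads, reference, and thresholds below are invented purely to show the shape of the computation:

```python
# Toy pileup-based SNP calling over a 10-base reference.
from collections import Counter

reference = "ACGTACGTAC"

# (alignment start position, read sequence) -- toy aligned reads; all four
# cover reference position 4 with a T where the reference has an A
aligned_reads = [
    (0, "ACGTT"),
    (2, "GTTCG"),
    (2, "GTTCG"),
    (4, "TCGTA"),
]

def pileup(reads, ref_len):
    """Count the bases observed at every reference position."""
    counts = [Counter() for _ in range(ref_len)]
    for start, seq in reads:
        for offset, base in enumerate(seq):
            counts[start + offset][base] += 1
    return counts

def call_variants(counts, ref, min_depth=3):
    """Call a SNP where a majority base differs from the reference."""
    calls = []
    for pos, c in enumerate(counts):
        depth = sum(c.values())
        if depth < min_depth:
            continue
        base, n = c.most_common(1)[0]
        if base != ref[pos] and n > depth / 2:
            calls.append((pos, ref[pos], base, depth))
    return calls

print(call_variants(pileup(aligned_reads, len(reference)), reference))
# -> [(4, 'A', 'T', 4)]  : one SNP at position 4, reference A read as T
```

The parallelism in production pipelines comes from sharding exactly this kind of per-position work across genomic regions, which is why the workloads map naturally onto clusters.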

The point is that life science research – and soon healthcare delivery – has been transformed by productivity leaps in the lab that now are creating immense computational challenges. (next Part 2: Storage Strategies)

[i] Presented on a panel at the Leverage Big Data conference, March 2015; http://www.leveragebigdata.com

[ii] https://www.humanbrainproject.eu/

[iii] https://www.humanbrainproject.eu/discover/the-project/platforms;jsessionid=emae995mioyqxt99x2a14ljg
