NERSC Develops Archiving Strategies for Genome Data

By Nicole Hemsoth

November 25, 2005

When researchers at the Production Genome Facility at DOE's Joint Genome Institute found they were generating data faster than they could store it, let alone make it easily accessible for analysis, they turned to NERSC's Mass Storage Group. The resulting collaboration developed strategies for improving the reliability of data storage while making retrieval easier.

The Department of Energy's Joint Genome Institute (JGI) is one of the world's leading facilities in the scientific quest to unravel the genetic data intrinsic to all living things. With advances in automatic sequencing of genomic information, however, scientists at the JGI's Production Genome Facility (PGF) found themselves overrun with sequencing data; their production capacity had grown so rapidly that data had overflowed the existing storage capacity. Since the resulting data are used by researchers around the world, ensuring that the data are both reliably archived and easily retrievable is a key issue.

As one of the world's largest public DNA sequencing facilities, the PGF produces 2 million files per month of trace data, 25 to 100 kb each, 100 assembled projects per month of 50 to 250 mb each, and several very large assembled projects per year, on the order of 50 gb. In aggregate, this averages about 2,000 gb per month. (The sequence of a strand of DNA or RNA is the order of its base pairs; kb stands for kilobase, a thousand base pairs, mb for megabase, a million base pairs, and gb for gigabase, a billion base pairs.)

In addition to the amount of data, the way the data are produced is a major challenge to storage and retrieval. Data from the sequencing of many different organisms are produced in parallel each day, such that a daily “archive” spreads the data for a particular organism over many tapes.

DNA sequences are the fundamental building blocks in the rapidly expanding field of genomics. Constructing a genomic sequence is an iterative process. The trace fragments are assembled and then the sequence is refined by comparing it with other sequences to confirm the assembly. Once the sequence is assembled, information about its function is gleaned by comparing and contrasting the sequence with other sequences from both the same organism and other organisms.

Current sequencing methods generate a large volume of trace files that have to be managed — typically 100,000 files or more. And to check for errors in the sequence or make detailed comparisons with other sequences, researchers often need to refer back to these traces. Unfortunately, the traces are usually provided as a group of files lacking information about where the traces occur in the sequence, making the researcher's job more difficult.

This problem was compounded by the PGF's lack of sufficient online storage, which made organization and retrieval of data difficult and led to unnecessary replication of files. The situation consumed significant staff time in moving files and reorganizing file systems to free up space for ongoing production needs, and it relied on auxiliary tape storage that was not particularly reliable.

Staff from the PGF and the Mass Storage Group at the National Energy Research Scientific Computing Center (NERSC) agreed to work together to address the two key issues facing the genome researchers. The immediate goal was for a NERSC High Performance Storage System (HPSS) to become the archive for the JGI data, replacing the less reliable local tape operation and freeing up disk space at the PGF for more immediate production needs. The second goal was to collaborate with JGI to improve the data handling capabilities of the sequencing and distribution processes.

NERSC storage systems are robust and available 24 hours a day, seven days a week, as well as highly scalable and configurable. Through ESnet, the Energy Sciences Network, NERSC has high-quality, high-bandwidth connectivity to other DOE laboratories and major universities.

Most of the low-level data produced by the PGF are now routinely archived at NERSC, with roughly 50 gb of raw trace data transferred from JGI to NERSC each night. This archive forms the foundation for further steps to enhance the utility of the data.

To accomplish the archive process, NERSC staff came up with the following solutions to address the main challenges:

- The use of HTAR, an HPSS variant of the tape archive (tar) format that combines multiple small files into chunks large enough for efficient transfer and storage;
- The design and implementation of a directory structure that allows easy location of the various files;
- The creation of scripts that run on the PGF machines to transfer the files; and
- Network tuning and configuration changes to support and optimize data transfer between the PGF and NERSC.
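The first two steps above, aggregating many small trace files and filing the aggregates under a predictable directory layout, can be sketched as follows. This is a minimal illustration using Python's standard tarfile module as a stand-in for HTAR; the organism/date layout and the `.scf` trace-file extension are hypothetical, since the article does not describe the actual PGF naming scheme.

```python
import tarfile
from pathlib import Path

def bundle_traces(trace_dir, archive_root, organism, date):
    """Combine many small trace files into one tar aggregate, filed
    under an organism/date directory so it can be located easily."""
    dest = Path(archive_root) / organism / date
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / f"{organism}_{date}_traces.tar"
    with tarfile.open(archive, "w") as tar:
        for trace in sorted(Path(trace_dir).glob("*.scf")):
            # Store each member under its bare file name so individual
            # traces can be listed and extracted later without
            # unpacking the whole aggregate.
            tar.add(trace, arcname=trace.name)
    return archive
```

In the real system HTAR writes the aggregate directly into HPSS and keeps a member index, so no intermediate local tar file is needed; the sketch only shows the aggregation idea.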

Using these techniques, the archiving system can be scaled up over time as the amount of data continues to increase — up to billions of files can be handled. The data have been aggregated into larger collections that hold tens of thousands of files within a single file in the NERSC storage system. Each aggregate can be accessed as one large file, or any individual file within it can be retrieved without fetching the whole aggregate.
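Member-level retrieval from an aggregate can be sketched the same way. Again this uses Python's tarfile module as a stand-in for HTAR's member extraction, and the archive and member names are hypothetical examples.

```python
import tarfile

def fetch_one_trace(archive_path, member_name, out_path):
    """Extract a single member from a large aggregate without
    unpacking the rest of the archive."""
    with tarfile.open(archive_path, "r") as tar:
        member = tar.getmember(member_name)  # raises KeyError if absent
        with tar.extractfile(member) as src, open(out_path, "wb") as dst:
            dst.write(src.read())
```

HTAR achieves the same effect against tape-resident aggregates by consulting its member index, so only the requested trace needs to be read back rather than the entire multi-gigabyte collection.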

Not only can the new techniques handle future data growth, they also saved the day when the PGF staff discovered a major problem: raw data had been processed by software with an undetected bug. By rough estimate, the original data collection comprised up to 100,000 files a day at a cost of a dollar a file, amounting to $1.2 million worth of processing over a period of six months.

But rather than go back to the sequencing machines, the JGI staff were able to retrieve the raw data from NERSC and reprocess it in a month and a half. The estimated savings were about a million dollars, and the end result was a more reliable archive, proving that dependable, flexible data storage is not only a better way to do science but can also save a great deal of time and money.

This is a reprint of an article originally published by Berkeley Lab Computing Sciences.
