Getting useful information from life sciences laboratory data in a timely manner requires selecting a suitable architecture that brings together complementary compute, memory, storage and networking resources. As noted in an earlier article in this series, there are some general rules of thumb for selecting which type of compute node works best for different workloads.
Greatly generalizing, skinny nodes work well with workloads that break into large numbers of independent, serial tasks, such as BLAST runs. Frequently, these workloads can also be accelerated using GPU and FPGA nodes. Massively parallel workloads, such as those encountered when running advanced modeling and simulation algorithms, benefit from fat nodes that have large shared memory and can put many cores to work at once.
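As a rough illustration of these rules of thumb (not drawn from the article), the toy Python sketch below maps a workload profile onto a node class; the thresholds and labels are illustrative assumptions only.

```python
# A toy sketch (not from the article): the rules of thumb above expressed as a
# node-selection helper. Thresholds and labels are illustrative assumptions.

def suggest_node_class(independent_tasks: int, shared_memory_gb: int,
                       offloads_to_accelerator: bool) -> str:
    """Return a rough node class for a given workload profile."""
    if offloads_to_accelerator:
        return "GPU/FPGA node"                # kernels that map well to accelerators
    if shared_memory_gb >= 256:
        return "fat node"                     # large shared-memory modeling/simulation
    if independent_tasks > 1:
        return "skinny nodes, scaled out"     # many independent serial tasks (e.g. BLAST)
    return "skinny node"

print(suggest_node_class(independent_tasks=500, shared_memory_gb=8,
                         offloads_to_accelerator=False))
```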
Naturally, the completion of any job depends on factors beyond the compute power available. The storage and networking infrastructure must offer performance characteristics that keep the compute nodes continuously fed with data, so the organization can get from lab results to actionable information in the fastest time possible.
Therein lies the problem for most organizations. Many industries are at an advantage when it comes to selecting the best IT architecture. They may work with a relatively small number of file types and sizes and use a limited number of algorithms on those files. Thus, there is often a clear choice as to which IT architecture can be used and optimized to convert data to information.
This is not the case in the life sciences. There is a wide spectrum of data types, files, and file sizes involved in any type of research, and this has great implications for storage and data management choices. Interestingly, the main challenges have been around for a couple of years and are well described in an excellent extended blog post[i] by consulting firm BioTeam, a specialist in life science computational infrastructure.
Particularly problematic is that IT is often the last to know about major lab-side changes, as noted in the excerpt below from the blog:
“One of the recurring themes encountered out in the real world is how IT organizations are often taken utterly by surprise when new laboratory instruments or techniques arrive on-premise and immediately require non-trivial amounts of IT resources. Here are just a few examples we’ve seen over the past few years:
- Instrument Upgrades: In place upgrades to existing instruments can often slip under the radar of even the most watchful IT organizations. The cliché example here would be the Illumina HiSeq genome sequencing platform where a HiSeq 2000 instrument can be upgraded to a HiSeq 2500 by swapping flow cells and reagents. The IT requirements for a HiSeq 2500 can be quite a bit higher.
- Instrument Duty Cycle Changes: IT resources are often provisioned for instruments based on an understanding of the common duty cycle. Often the first use of an instrument is for basic experimentation and validation of the intended protocol and result output. When the results are good, scientific leadership may decide to dramatically change the way the instrument is used. The resources required for an instrument that runs for a few hours a week followed by two weeks of data processing is quite different from an instrument that is operated and scheduled 24×7 in a core facility operational model.
- New Sensors: A scientist took the “regular” camera off of the confocal microscope rig and replaced it with a new CCD sensor capable of capturing 15,000 video frames per second. IT was not informed and the microscope storage platform was not altered.
- DIY Innovation: A scientist had trouble using a confocal microscope for live cell imaging experiments — the cells being examined did not survive long under the microscope. Working with a few colleagues they hacked together a clever DIY incubation enclosure around the microscope rig to better control environmental conditions. All of a sudden live cell imaging efforts that previously could only last for 20-40 minutes are being run for 24-hour or even longer periods. Demand for storage, compute and visualization resources spikes accordingly.
- Broken Procurement: In general terms this is what happens when researchers spend 100% of their budget on the instrument and the reagent kits (and perhaps an operator to run the machine as well) while neglecting to plan or budget for the IT resources needed to sustain operation and downstream analysis. This problem used to be much worse in years past where we saw instrument salespeople outright lying to customers about IT requirements and cost in order to win a sale. In 2013 and beyond we still continue to see poorly-managed laboratory instrument procurement processes. Given the data flows coming from these instruments it is essential that procurement is able to model, plan and budget for the full lifecycle cost of the instrument. This includes instrument data capture, QC efforts, data movement, data storage, processing/analytical resources as well as long-term or archival storage of both the raw and derived data.
“There is no easy technology fix for these issues. This is an organization problem that requires an organizational solution. One method we’ve seen work well in one large research institute was an internal requirement that any research procurement with a dollar cost exceeding $50,000 had to be routed through the IT organization for review. It is important to note that in this scenario IT does not have veto power or influence over the scientific procurement – the review requirement existed for the purposes of ensuring that the IT organization was aware of R&D procurement and would not be surprised at the loading dock with the sudden arrival of a complex system. Other smaller organizations often handle this via regular communication or the formation of IT/Research working group and operational committees that discuss and review planned procurement with a focus on IT impact.”
Data variety rules the lab
When trying to determine the appropriate storage and networking solutions to pair with compute capacity, it makes sense to determine the dominant file characteristics used throughout the organization and the performance characteristics of the algorithms used to analyze that data.
Some examples of common data types and workloads include the following (a short profiling sketch follows the list):
- Large binary and text files: Many labs work with large flat text files of DNA or protein sequences, such as BLAST-formatted databases, and those studying genomics work with FASTQ and SAM files. Running a sample against one of these databases or comparing properties requires streaming the entire file to compute nodes for analysis, so a suitable storage solution needs high sustained throughput.
- Many tiny files in a single directory: Output from mass spectrometers often consists of many thousands of tiny files (a few kilobytes each) sent to a single directory. A storage solution must be able to handle numerous reads, taking into account both the data and the metadata associated with each file.
- Many files in a complex file and folder hierarchy: Output from some commonly used next-gen sequencers can consist of multiple nested folders of data. A storage solution with an easy-to-use global file system can make locating and managing data over time a much easier task.
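Before committing to a storage design, it can help to measure what the organization's data actually looks like on disk. The Python sketch below is a hypothetical helper, not part of any vendor tool, that walks a directory tree and bins files by size to show which of the patterns above dominates; the path is a placeholder.

```python
import math
import os
from collections import Counter

def profile_tree(root: str):
    """Walk a directory tree and bin files by size (order of magnitude in bytes)."""
    bins = Counter()
    total_files = 0
    total_bytes = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            total_files += 1
            total_bytes += size
            bins[int(math.log10(size)) if size > 0 else 0] += 1
    return total_files, total_bytes, bins

if __name__ == "__main__":
    files, nbytes, size_bins = profile_tree("/data/instruments")  # placeholder path
    print(f"{files} files, {nbytes / 1e12:.2f} TB total")
    for magnitude, count in sorted(size_bins.items()):
        print(f"  ~10^{magnitude} bytes: {count} files")
```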
Algorithms that access these different types of data sets place different stresses on the infrastructure. Many bioinformatics analysis routines, a BLAST search for example, compare a query file against the entire database. This requires ingesting the whole database, resulting in long sequential reads of a big file, performed once per execution of the algorithm. Storage and networking solutions must therefore be capable of sustained high throughput for the analysis job to run in the fastest time.
In contrast, other work, such as molecular modeling or structure prediction, often involves many small algorithms and workflows that create highly variable, random I/O access patterns and read/write requests.
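The difference between these two access patterns can be made concrete with a rough measurement. The following Python sketch, with placeholder file paths, contrasts streaming an entire large file (the BLAST-style pattern) with issuing many small reads at random offsets (the modeling-style pattern); it is a back-of-the-envelope probe, not a substitute for a proper I/O benchmark.

```python
import os
import random
import time

CHUNK = 8 * 1024 * 1024  # 8 MB reads approximate a streaming, BLAST-style scan

def sequential_throughput_mb_s(path: str) -> float:
    """Stream the whole file once and return the read rate in MB/s."""
    start, total = time.perf_counter(), 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    return total / (time.perf_counter() - start) / 1e6

def random_read_iops(path: str, reads: int = 10_000, size: int = 4096) -> float:
    """Issue small reads at random offsets and return operations per second."""
    file_size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        for _ in range(reads):
            f.seek(random.randrange(0, max(1, file_size - size)))
            f.read(size)
    return reads / (time.perf_counter() - start)

# Placeholder paths: a large reference database vs. a modeling scratch file.
# print(sequential_throughput_mb_s("/data/blastdb/nt.00.nsq"))
# print(random_read_iops("/scratch/model/workdir.bin"))
```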
Just consider the typical steps required to get from the output of a next-gen sequencer to actionable data. In many genomics workflows, terabytes of data (petabytes in aggregate) must routinely be moved from the DNA sequencing machines that generate the data to the computational components that perform DNA alignment, assembly, and subsequent genomic analysis.
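As a minimal sketch of that kind of workflow, the Python fragment below orchestrates an alignment and sort step. The article does not name specific tools, so the widely used open-source bwa and samtools are assumed here, and all paths are placeholders; note that each step streams large volumes of data through the storage system.

```python
import subprocess

REF = "/refs/grch38.fa"                 # placeholder reference genome
READS = "/data/run_042/sample.fastq"    # placeholder sequencer output

def align_and_sort(sorted_bam: str) -> None:
    """Align reads to the reference, then coordinate-sort the result."""
    with open("aligned.sam", "w") as sam:
        # bwa streams the whole read set and reference index through storage
        subprocess.run(["bwa", "mem", REF, READS], stdout=sam, check=True)
    # samtools re-reads and rewrites the alignment, another large I/O pass
    subprocess.run(["samtools", "sort", "-o", sorted_bam, "aligned.sam"], check=True)

if __name__ == "__main__":
    align_and_sort("sample.sorted.bam")
```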
It is not just the variety of data that must be addressed but, as noted in a previous article in this series, the volume as well. It is not unusual for an organization to be dealing with tens of petabytes. Today's sequencers can generate 3 TB of data in 18 hours. Confocal imaging systems scan hundreds of tissue sections per week, producing data volumes in the range of 1 to 10 TB per week. High-resolution medical imaging, which uses scans to create 3D images of organs, muscles, and other features, generates tens of TBs per week.
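A back-of-the-envelope calculation shows how quickly these per-instrument rates add up. The snippet below uses the rates quoted above together with a hypothetical instrument fleet; the counts are assumptions for illustration only.

```python
# Rates quoted above; instrument counts are hypothetical, for illustration only.
sequencer_tb_per_run = 3              # ~3 TB per 18-hour run
runs_per_week = 7 * 24 / 18           # if the instrument runs back-to-back
confocal_tb_per_week = 10             # upper end of the 1-10 TB/week range
medical_imaging_tb_per_week = 20      # "tens of TBs per week", assume 20

n_sequencers, n_confocal, n_imaging = 4, 3, 2   # hypothetical fleet

weekly_tb = (n_sequencers * sequencer_tb_per_run * runs_per_week
             + n_confocal * confocal_tb_per_week
             + n_imaging * medical_imaging_tb_per_week)
print(f"~{weekly_tb:.0f} TB/week, ~{weekly_tb * 52 / 1000:.1f} PB/year")
```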
The great variety of data types and file sizes, combined with the vast volumes of data, means organizations cannot rely on a single system optimized for just one workflow. All of these factors mean storage plays an increasingly important role in life sciences success. Solutions must support highly variable workloads in an HPC environment and be capable of supporting the collaborative nature of the industry, where different algorithms are frequently used to analyze the same data.
Additionally, storage solutions must provide life sciences organizations with the flexibility to store data for longer times on appropriate cost/performance devices, while offering data management tools to migrate and protect that data. And there must be a way to facilitate the sharing of very large datasets.
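In the simplest terms, such data management comes down to moving cold data to a cheaper tier according to a policy. The Python sketch below is a minimal, hypothetical illustration of age-based tiering between two mounted file systems; real information-lifecycle or HSM tools do this with policies rather than scripts, and the paths and cutoff are assumptions.

```python
import os
import shutil
import time

FAST_TIER = "/fast/projects"        # placeholder high-performance tier
ARCHIVE_TIER = "/archive/projects"  # placeholder capacity/archive tier
MAX_AGE_DAYS = 90                   # illustrative cutoff

def migrate_cold_files() -> None:
    """Move files not accessed within the cutoff from the fast tier to the archive tier."""
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for dirpath, _dirs, files in os.walk(FAST_TIER):
        for name in files:
            src = os.path.join(dirpath, name)
            if os.path.getatime(src) < cutoff:  # not accessed recently
                dest = src.replace(FAST_TIER, ARCHIVE_TIER, 1)
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.move(src, dest)

if __name__ == "__main__":
    migrate_cold_files()
```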
Traditional storage solutions can introduce major performance and management problems when scaled to meet today's increased requirements in the life sciences. As a result, organizations need an array of storage solutions with different I/O and throughput capabilities to meet the varying performance requirements of their workflows. The solutions must be extremely scalable in capacity and density. Furthermore, the right file system is essential, and in practice that increasingly means parallel file systems such as Lustre and GPFS. In fact, parallel and distributed storage is becoming the norm in the life sciences, said Ari Berman, General Manager of Government Services and Principal Investigator at BioTeam, in a talk at the HPC User Forum 2015.
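On a parallel file system such as Lustre, for example, directories holding large streamed files are typically striped across many storage targets, while directories full of tiny files are not. A minimal sketch, assuming a Lustre client with the standard lfs utility and hypothetical directory paths and stripe counts:

```python
import subprocess

# Stripe a directory meant for large, streamed sequence databases across eight
# storage targets with a 4 MB stripe size; keep a directory of tiny mass-spec
# files on a single target. Paths and counts are illustrative, not recommendations.
subprocess.run(["lfs", "setstripe", "-c", "8", "-S", "4M", "/lustre/blastdb"], check=True)
subprocess.run(["lfs", "setstripe", "-c", "1", "/lustre/massspec"], check=True)
```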
Lastly, building an architecture for I/O requires a very high-performance network. Petascale storage capacity is meaningless if the compute nodes cannot access the data in a timely manner. Simply put, much of the research done in life sciences organizations involves working with terabytes of data for an individual experiment or computational job. Increasingly, the ability to manipulate these large datasets is how scientific insight is gained.
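Simple arithmetic makes the point. The snippet below estimates best-case transfer times for a single experiment's data set at different link speeds; the data set size and the assumption of full line rate are illustrative.

```python
# Best-case transfer times for one experiment's data set at different link speeds.
# The 10 TB size and the assumption of full line rate are illustrative.
dataset_tb = 10
for gbit_per_s in (10, 40, 100):
    seconds = dataset_tb * 8e12 / (gbit_per_s * 1e9)
    print(f"{dataset_tb} TB over {gbit_per_s} Gb/s: ~{seconds / 3600:.1f} hours at line rate")
```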
Genomic workflows at this scale are already stressing traditional IT infrastructures, with many organizations finding that installed systems simply cannot keep pace. Soon NGS devices, the applications used to analyze the data, and analytic workflows will grow in use and sophistication, driving the size of the datasets into the petabyte range. This will further aggravate IT infrastructure issues.
What is needed are infrastructures optimized to handle workflows in which petabytes of data can be analyzed quickly. Merely being able to store petabytes of data is not sufficient; the data must be accessed, analyzed, and manipulated quickly within these workflows. Petascale analysis infrastructure will enable new areas of discovery and research, and the adoption of such IT architectures will further increase the pace and scale of future genomic analysis, pushing forward the boundaries of our research and understanding.
[i] Life Science Storage and Data Management, BioTeam, http://bioteam.net/2013/12/life-science-storage-data-management/#toc-11