Life Sciences Storage Issues and Computational Workflow Acceleration

By Nicole Hemsoth

October 8, 2012

Introduction

Life Sciences can mean different things to different people. In genomic research, it refers to the art of sequencing; in BioPharma, it covers molecular dynamics and protein docking; and in the clinical setting, electronic health records. All three markets, however, have one thing in common: the sequencing of the human genome and the control, analysis, and distribution of the resulting data. Today, with the continued decrease in sequencing costs, life sciences research is moving from beakers to bytes and increasingly relies on the analysis of large volumes of data.

Data-dominated efforts today aim to accelerate drug R&D, improve clinical trials, and personalize medicine. Most of the work in these areas requires the use of high performance computing clusters or supercomputers to derive decision-making information from terabytes to petabytes of data.

Across these widely disparate areas of work, researchers face similar computational infrastructure problems that can impede progress. To avoid obstacles and accelerate their research, life scientists need a low-latency, high-performance computing infrastructure that delivers predictable and consistent performance. They also need to collaborate and share large datasets with upstream and downstream partners. And they need an infrastructure that supports automation to simplify data aggregation, assimilation, and management.

The need for speed

Life sciences research and development increasingly relies on computational analysis. Such analysis provides the critical information needed to make intelligent decisions about which new drug candidates hold promise and should be advanced and which should be put aside.

With the growing reliance on computational analysis, and the changes in data generation and usage, life sciences organizations need an IT infrastructure that ensures computational workflows are optimized and not impeded.

Pressure to run the workflows as fast as possible so research decisions can be made sooner comes from several business drivers.

Many pharmaceutical companies today have sparse new drug pipelines. Research delays caused by slow data analysis simply keep those pipelines empty.

Because fewer than one percent of all drug candidates make it to market, and the cost of moving a drug along the development pipeline rises steeply the further it progresses, knowing which drugs to fail out of the process early is key to financial success. Faster early-stage analysis provides the data needed to make that decision sooner, yielding significant savings in time and investment.

Compounding the need to quickly identify promising candidates and fill the pipelines is the fact that many blockbuster drugs have gone or are going off-patent and must be replaced. In fact, patent expirations from 2010 to 2013 will jeopardize revenues amounting to more than $95 billion for ten of the largest drug companies, according to Nature.

Competition to fill the pipelines is heating up. The drastic reduction in new lab equipment operating costs is allowing even the smallest life sciences organizations to compete in early-stage R&D.

These factors are forcing companies to change the way they approach new drug research.

First, there is a greater focus on computational analysis during early-stage research and development. The idea is to use information-based models, simulations, virtual molecule screening, and other techniques to identify promising new drug candidates quickly and kill off less promising candidates, avoiding the costs of later-stage clinical trials, development, and approval.

Second, many organizations are seeking to reduce their R&D costs. To accomplish this while still trying to fill their pipelines, they are expanding collaborations with universities, non-profit organizations, and the government. Specifically, beyond opening offices in university-rich places like Cambridge, MA, many pharmaceutical and biotech companies are joining collaborative groups such as the Structural Genomics Consortium, a public-private partnership that supports the discovery of new medicines through open access research. There are also government-led early-stage R&D efforts, such as those underway at the National Center for Advancing Translational Sciences, a group with the goal of developing new methods and technologies to improve diagnostics capabilities and therapeutic efforts across a wide range of human diseases.

Third, the desire to cut costs is creating an emerging market for Sequencing-as-a-Service (SEQaaS). Rather than invest in the sequencing equipment, chemicals, and experienced staff needed to perform the operations, many companies are outsourcing their sequencing to providers such as Illumina, PerkinElmer, and others. This allows them to concentrate on other aspects of the drug discovery and development pipeline.

Storage complications and challenges that can impede analysis workflows

These business drivers, combined with the adoption of new lab technologies such as next-generation sequencing, confocal microscopy, and X-ray crystallography, are driving up the volumes of data that life sciences organizations must store and manage. These large volumes and the collaborative nature of life sciences research are placing new demands on storage solution performance and data manageability. 

For example, new lab equipment, particularly next-generation sequencers, is producing multiple terabytes of data per run that must be analyzed and compared against large genomic databases. And while the format of raw data from sequencers has varied over time as sequencing vendors have incorporated different processing steps into their algorithms, organizations using the sequencing data must still perform post-sequencing computations and analysis on files of various sizes to derive useful information. From an infrastructure perspective, the sequencing data needs to be staged on high-performance parallel storage arrays so analytic workflows can run at top speed.
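
To make the staging step concrete, the sketch below copies raw sequencer output from a capacity tier onto a fast parallel scratch area before kicking off analysis. The /archive and /scratch paths and the "align" command are hypothetical placeholders used only for illustration; they are not part of any particular product or pipeline.

```python
"""Stage raw sequencing output onto fast parallel scratch storage before analysis.

Illustrative only: /archive, /scratch, and the alignment command are hypothetical
placeholders, not paths or tools prescribed by the article.
"""
import shutil
import subprocess
from pathlib import Path

ARCHIVE = Path("/archive/sequencer/run_2012_10_08")   # hypothetical slow, capacity tier
SCRATCH = Path("/scratch/analysis/run_2012_10_08")    # hypothetical fast, parallel tier


def stage_run(archive_dir: Path, scratch_dir: Path) -> list[Path]:
    """Copy raw FASTQ files to the high-performance tier and return the staged paths."""
    scratch_dir.mkdir(parents=True, exist_ok=True)
    staged = []
    for fastq in sorted(archive_dir.glob("*.fastq")):
        target = scratch_dir / fastq.name
        shutil.copy2(fastq, target)   # preserve timestamps for later tiering decisions
        staged.append(target)
    return staged


def run_alignment(fastq_files: list[Path]) -> None:
    """Launch one alignment job per staged file (placeholder command)."""
    for fastq in fastq_files:
        # "align --reference hg19.fa" is a stand-in for whatever aligner the site uses.
        subprocess.run(["align", "--reference", "hg19.fa", str(fastq)], check=True)


if __name__ == "__main__":
    run_alignment(stage_run(ARCHIVE, SCRATCH))
```

The point of the pattern is simply that the compute-intensive step only ever reads from the high-performance tier, so the analysis is never throttled by the slower storage the data lands on first.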

Another factor to consider is that much of the data generated in life sciences organizations now must be retained. When sequencing for clinical applications is approved by the FDA, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) requires that this patient data be retained for 20+ years. 

In pharmaceutical companies, the need for long-term access to experimental data is growing as companies seek new indications for previously approved drugs. With pipelines sparse, this area of work is exploding. From a storage perspective, older data must be moved to lower-cost storage after its initial analysis or use, then be easy to find and migrate back to higher-performance storage when it is needed for a new indication.
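
The sketch below illustrates one way such an age-based tiering policy could look in practice. The mount points, 90-day threshold, and CSV catalog are assumptions made purely for illustration; production deployments would normally rely on the storage platform's own data-management tooling.

```python
"""Move data untouched for a retention window to a lower-cost tier,
keeping a record so it can be recalled later for re-analysis.

The mount points, 90-day threshold, and catalog file are illustrative assumptions.
"""
import csv
import shutil
import time
from pathlib import Path

FAST_TIER = Path("/fast/projects")        # hypothetical high-performance tier
ARCHIVE_TIER = Path("/archive/projects")  # hypothetical low-cost capacity tier
AGE_LIMIT_DAYS = 90
CATALOG = ARCHIVE_TIER / "migrated.csv"   # simple catalog so data can be located later


def migrate_cold_files() -> None:
    """Demote files not accessed within the retention window to the capacity tier."""
    cutoff = time.time() - AGE_LIMIT_DAYS * 86400
    ARCHIVE_TIER.mkdir(parents=True, exist_ok=True)
    with CATALOG.open("a", newline="") as catalog:
        writer = csv.writer(catalog)
        for path in FAST_TIER.rglob("*"):
            if path.is_file() and path.stat().st_atime < cutoff:
                dest = ARCHIVE_TIER / path.relative_to(FAST_TIER)
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(str(path), dest)             # demote to the capacity tier
                writer.writerow([str(path), str(dest)])  # remember where it went


def recall(original: str) -> Path:
    """Bring a previously migrated file back to the fast tier for re-analysis."""
    with CATALOG.open() as catalog:
        for src, dest in csv.reader(catalog):
            if src == original:
                restored = Path(src)
                restored.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(dest, restored)              # promote back for new work
                return restored
    raise FileNotFoundError(original)
```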

Complicating data management and computational workflows is the fact that life sciences research has become more multi-disciplinary and more collaborative. Within an organization, the data from new lab equipment is incredibly rich and of interest to many groups. Researchers in different disciplines use different analysis tools running on clients with different operating systems, and they need to perform their analysis at different times in the data's life cycle. This makes computational workflows highly unpredictable and can result in a vastly different user experience from day to day: a run that takes two minutes one day might take 45 minutes the next.

An additional implication of the multi-disciplinary and more collaborative nature of life sciences research is that data increasingly must be shared. This can pose problems within a company and it certainly needs special attention when organizations team together and must share petabyte-size databases across widely dispersed geographical regions.

DDN as your technology partner

All of these factors mean storage plays an increasingly important role in life sciences success. Solutions must support highly variable workloads in an HPC environment and accommodate the collaborative nature of the industry. They also must allow researchers using different clients and hosts to have shared access to the data needed for their analysis.

Additionally, solutions must provide life sciences organizations with the flexibility to store data for longer times on appropriate cost/performance devices, while offering data management tools to migrate and protect that data. And there must be a way to facilitate the sharing of very large datasets.

Traditional storage solutions can introduce major performance and management problems when scaled to meet today’s increased requirements for the life sciences. This is why the Cornell Center for Advanced Computing, the National Cancer Institute, TGen, Virginia Tech, the Wellcome Trust Sanger Institute, and many more life sciences organizations are partnering with DataDirect Networks (DDN).

DDN offers an array of storage solutions with different I/O and throughput capabilities to meet the cost/performance requirements of any life sciences workflow. The solutions are extremely scalable in capacity and density. Based on its Storage Fusion Architecture, the DDN SFA 12K line offers a number of firsts, including up to 40 GB/s host throughput for both reads and writes, 3.6 PB per rack, and the ability to scale to more than 7.2 PB per system. Furthermore, DDN lets organizations control their cost and performance profile by mixing a variety of media – SSD, SAS, and SATA – in the same system to achieve the appropriate cost/performance mix for their applications.
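
As a rough illustration of what those figures mean for a sequencing workload, the short calculation below uses the cited 40 GB/s and 3.6 PB-per-rack numbers; the 2 TB run size and 200-run retention count are assumptions chosen only to make the arithmetic concrete.

```python
# Back-of-the-envelope sizing using the SFA 12K figures cited above.
# The 2 TB-per-run and 200-run assumptions are illustrative, not vendor numbers.

HOST_THROUGHPUT_GB_S = 40   # cited peak read/write throughput
RACK_CAPACITY_PB = 3.6      # cited capacity per rack
RUN_SIZE_TB = 2             # assumed size of one sequencer run
RUNS_RETAINED = 200         # assumed number of runs kept online

ingest_seconds = RUN_SIZE_TB * 1000 / HOST_THROUGHPUT_GB_S
racks_needed = RUNS_RETAINED * RUN_SIZE_TB / (RACK_CAPACITY_PB * 1000)

print(f"Ingesting one {RUN_SIZE_TB} TB run at peak rate: ~{ingest_seconds:.0f} seconds")
print(f"Keeping {RUNS_RETAINED} runs online: ~{racks_needed:.2f} racks of capacity")
```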

By consolidating on DDN storage, organizations get fast, scalable storage that solves performance inconsistency issues and provides easy-to-manage long term data retention.

In addition, DDN offers several technologies that help with the common challenges in life sciences research.

For researchers that must share and exchange large datasets within their organization, with collaborative partners, or with sequencing providers, DDN offers Web Object Scaler (WOS), a scale-out cloud storage appliance solution. WOS is an object-based storage system that allows organizations to easily build and deploy their own storage clouds across geographically distributed sites. The storage can scale to unprecedented levels while still being managed as a single entity. WOS provides high-speed access to hyperscale-sized data in the cloud from anywhere in the world, enabling globally distributed users to collaborate as part of a powerful peer-to-peer workflow.
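
The short sketch below illustrates the put-by-ID/get-by-ID access model that makes this kind of cross-site sharing practical. The ObjectStoreClient class and its endpoint are hypothetical stand-ins written for illustration only; this is not the WOS API.

```python
"""Illustrative object-storage access pattern for sharing datasets across sites.

ObjectStoreClient, its methods, and the endpoint are hypothetical stand-ins;
this is not the WOS API, only a sketch of the put-by-ID / get-by-ID model.
"""
import uuid
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class ObjectStoreClient:
    """Toy in-memory stand-in for a geographically replicated object store."""
    endpoint: str
    _objects: Dict[str, bytes] = field(default_factory=dict)

    def put(self, data: bytes) -> str:
        """Store data and return the object ID callers share with collaborators."""
        object_id = str(uuid.uuid4())
        self._objects[object_id] = data
        return object_id

    def get(self, object_id: str) -> bytes:
        """Retrieve an object by ID, regardless of which site ingested it."""
        return self._objects[object_id]


# A sequencing provider writes results; a downstream partner retrieves them by ID.
store = ObjectStoreClient(endpoint="https://objects.example.org")  # hypothetical endpoint
oid = store.put(b"aligned reads for sample 42")
print(store.get(oid))
```

Because collaborators exchange only object IDs rather than file paths, the same dataset can be referenced from any site without anyone needing to know where the bytes physically reside.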

To simplify and automate data management issues so researchers from multiple disciplines can all access the same data, DDN has integrated WOS with the Integrated Rule-Oriented Data-management System (iRODS). The iRODS data grid is an open source, next-generation adaptive middleware architecture for data management that helps researchers organize, share, and find collections of data in file systems.
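
A minimal sketch of the metadata-driven organization iRODS enables is shown below, assuming the open-source python-irodsclient package; the host, zone, credentials, and paths are hypothetical placeholders and are not prescribed by DDN or the iRODS project.

```python
"""Register a result file in an iRODS data grid and tag it with searchable metadata.

A minimal sketch assuming the open-source python-irodsclient package; the host,
zone, credentials, and paths are hypothetical placeholders.
"""
from irods.session import iRODSSession

LOCAL_FILE = "/scratch/analysis/run_2012_10_08/variants.vcf"          # hypothetical result
LOGICAL_PATH = "/labZone/home/researcher/run_2012_10_08/variants.vcf"  # location-independent path

with iRODSSession(host="irods.example.org", port=1247,
                  user="researcher", password="secret", zone="labZone") as session:
    # Copy the file into the data grid under its logical path.
    session.data_objects.put(LOCAL_FILE, LOGICAL_PATH)

    # Attach attribute/value metadata so researchers in other disciplines can
    # find the file later by querying the grid, without knowing the directory layout.
    obj = session.data_objects.get(LOGICAL_PATH)
    obj.metadata.add("instrument", "next-generation sequencer")
    obj.metadata.add("sample_id", "S-0042")
    obj.metadata.add("analysis_stage", "variant-calling")
```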

And to ensure researchers get a high-performance, consistent experience, DDN offers DirectMon, an advanced storage configuration and monitoring solution. DirectMon works across DDN's line of SFA storage arrays, as well as the GRIDScaler and EXAScaler shared file system appliances. DirectMon takes the complexity out of managing storage; its ease-of-use features and notifications allow administrators to quickly resolve problems, freeing up valuable time to concentrate on more important tasks.

For more information about DDN solutions for the life sciences, visit http://www.ddn.com/en/applications/biopharma

Additional information can be found by visiting
http://www.ddn.com/en/applications/life-sciences
