“Today you buy Tylenol (500mg for almost everyone), but it doesn’t positively impact everybody because genetically we are all different,” says Rajiv Nema, Director of Marketing at SAP (HANA, Mobile Innovations). “$70B is spent on cancer medicine in the United States alone, and 40% – almost $30B – of that is wasted because the medicine does not positively impact the patient. Personalized medicine is comprised of genomics data, proteomics data, Electronic Medical Records, sensor data, your FitBit data… There is a lot of different data all sitting in different silos. They all have to come together so when you visit your doctors, they can look at all the data at once and analyze it and prescribe you a medication.”
Nema is referring to a revolution underway in the field of medicine and the life sciences in general. Genomic research, supported by high performance computing, advances in storage, Big Data, and predictive analytics, is fundamentally changing the way medicine is practiced. The stage is set for a new era of personalized medicine, especially when dealing with complex diseases.
To diagnose and treat disease, the traditional medical model focuses on data such as individual clinical symptoms, medical and family history, lab results, and imaging. This is primarily a reactive approach – treatment is triggered by the onset of symptoms.
Personalized medicine, on the other hand, uses the patient’s genetic profile as a guide to the prevention, diagnosis, and treatment of disease. This allows health care providers to make informed treatment decisions based on the study of genetic variations and how they influence the way a particular cohort responds to medications and other treatments.
Predictive analytics provide clinicians and researchers with the ability to analyze data from multiple sources such as genomic data, imaging data, and pathology slides to assemble the information needed to drive personalized medicine.
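As a rough sketch of what that looks like in practice, the example below joins genomic variant calls with EMR-derived features on a shared patient identifier and fits a simple response model. The file names, column names, and logistic-regression model are illustrative assumptions, not a description of any specific clinical pipeline.

```python
# A minimal sketch of combining genomic and clinical data for prediction.
# File names, columns, and the model choice are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

variants = pd.read_csv("variants.csv")     # patient_id, gene, variant, ...
emr = pd.read_csv("emr_features.csv")      # patient_id, age, responded, ...

# Join the genomic and clinical silos on a common patient identifier
cohort = emr.merge(variants, on="patient_id", how="inner")

# Encode the features and fit a simple drug-response model
X = pd.get_dummies(cohort[["age", "gene", "variant"]])
y = cohort["responded"]
model = LogisticRegression(max_iter=1000).fit(X, y)

print(model.predict_proba(X)[:5])          # predicted response probabilities
```

Real pipelines involve far larger data volumes and more sophisticated methods, but the pattern – integrate siloed sources on a common key, then model – is the core of the analytic workload.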
Big Data Makes Big Demands
But there are problems. Applying predictive analytics to the human genome and other personal data sources means dealing with massive data sets, a classic example of Big Data in action. Further complicating matters is the fact that this data comes in both structured and unstructured formats – the latter ranging from email messages and Word documents to images and audio files. Often IT infrastructures are just not set up to rapidly access and deal with petabytes of unstructured data.
Because these Big Data sets are also being mobilized and moved between different sources and onto different platforms, data management becomes a major issue – especially when dealing with workloads that include everything from the output of data-rich confocal microscopy to huge numbers of small, individual genomic files.
Personalized medicine based on predictive analytics can only fulfill its promise if the data can be quickly and easily accessed using a single, cohesive platform.
High Throughput Intelligent Data Management Solution
The answer is to build a system that can accommodate high-throughput ingest and low-latency storage and compute, coupled with an intelligent, rules-based data management solution. This allows unstructured data to be ingested from different modalities and sources into a platform that can run analytic queries at very low latency to satisfy time-to-results requirements at the bedside. The system must also cater to the needs of the physician community by automating data management and supplying a cohesive platform for secure data distribution and collaboration and, finally, long-term archiving.
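To make the rules-based piece concrete, the sketch below shows what a single automated lifecycle rule might look like: files on a fast tier that have not been accessed for 90 days are migrated to an archive tier. The mount points, threshold, and single-rule design are illustrative assumptions, not a description of any particular vendor’s policy engine.

```python
# A minimal sketch of one rules-based data management pass.
# The tier paths and the 90-day threshold are hypothetical.
import os
import shutil
import time

FAST_TIER = "/mnt/scratch/genomics"      # assumed parallel-file-system mount
ARCHIVE_TIER = "/mnt/archive/genomics"   # assumed archive/object-store mount
MAX_IDLE_SECONDS = 90 * 24 * 3600        # rule: archive after 90 days without access

def apply_archive_rule():
    """Move files that have gone cold on the fast tier to the archive tier."""
    now = time.time()
    for root, _dirs, files in os.walk(FAST_TIER):
        for name in files:
            src = os.path.join(root, name)
            if now - os.path.getatime(src) > MAX_IDLE_SECONDS:
                dst = os.path.join(ARCHIVE_TIER, os.path.relpath(src, FAST_TIER))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)    # migrate cold data off the fast tier

if __name__ == "__main__":
    apply_archive_rule()
```

In production, such logic lives in the storage platform’s own policy engine rather than in a standalone script, but the principle – automated, rule-driven movement of data between tiers – is the same.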
The life science and clinical IT communities have been trying to achieve this goal for the past decade. Several middleware- and hardware-based solutions have tried to tie data silos together via HL7 interfaces and give physicians the impression of a single pane of glass for patient results. This approach is complex to troubleshoot and cumbersome to reproduce for large unstructured data sets from remote systems. In addition, the advent of clinically meaningful genomic and proteomic data sets has created even more data sources and challenges.
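To illustrate why the HL7 approach becomes cumbersome, the sketch below parses a small HL7 v2 result message by hand. The sample message and the fields extracted are illustrative only; real interface engines must cope with many message types, versions, and site-specific variations, and HL7 v2 was never designed to carry large unstructured or genomic payloads.

```python
# A minimal sketch of picking patient and result fields out of an HL7 v2
# message. The sample message and extracted fields are illustrative only.
sample = (
    "MSH|^~\\&|LAB|HOSP|EMR|HOSP|202401011200||ORU^R01|12345|P|2.3\r"
    "PID|1||123456^^^HOSP||DOE^JANE||19700101|F\r"
    "OBX|1|NM|GLU^Glucose||95|mg/dL|70-110|N\r"
)

# HL7 v2 segments are carriage-return separated; fields are pipe-delimited
segments = {line.split("|")[0]: line.split("|") for line in sample.strip().split("\r")}

patient_id = segments["PID"][3].split("^")[0]   # PID-3: patient identifier
test_name = segments["OBX"][3].split("^")[1]    # OBX-3: observation identifier text
value, units = segments["OBX"][5], segments["OBX"][6]

print(patient_id, test_name, value, units)      # -> 123456 Glucose 95 mg/dL
```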
From a storage and data management perspective what is needed is a system that:
- Allows large data sets to be ingested at very high speeds (see the sketch after this list)
- Couples that ingest with a low-latency HPC storage solution that leverages a parallel file system for secondary data analysis and visualization
- Provides secure data distribution and archiving – all in a single, cohesive, easy-to-manage platform.
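As a rough illustration of the first requirement, the sketch below parallelizes the ingest of many small genomic files onto a fast tier using a process pool. The staging and scratch paths, worker count, and plain file copy are assumptions for illustration; a real deployment would rely on the storage platform’s own parallel data movers.

```python
# A minimal sketch of parallel ingest of many small files onto a fast tier.
# The paths and worker count are hypothetical.
import shutil
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

SOURCE = Path("/data/sequencer_output")     # assumed instrument staging area
FAST_TIER = Path("/mnt/scratch/incoming")   # assumed parallel-file-system mount

def ingest_one(src: Path) -> str:
    """Copy a single file onto the fast tier, preserving metadata."""
    shutil.copy2(src, FAST_TIER / src.name)
    return src.name

if __name__ == "__main__":
    files = [p for p in SOURCE.iterdir() if p.is_file()]
    with ProcessPoolExecutor(max_workers=16) as pool:
        for name in pool.map(ingest_one, files):
            print("ingested", name)
```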
The DDN GRIDScaler and EXAScaler storage appliances, based on the Storage Fusion Architecture (SFA) engine, together with the Web Object Scaler (WOS) – a family of self-contained, collaborative object storage appliances configured with disk storage, CPU, and memory resources – are a big step toward solving this data management challenge.
With the addition of predictive analytics software, the result is a single platform that can handle the most demanding genomic Big Data analytic workloads, including real-time data capture and analysis. The performance of this underlying platform has earned DDN world performance records with analytics solutions such as SAS GRID, Vertica, and Kx, and allows DDN solutions to deliver up to 8x the performance of white-box approaches for Hadoop and other open source offerings.
The combination of GRIDScaler or EXAScaler parallel file system appliances with WOS object storage supplies a platform for data management from a single pane of glass. The solution also offers enhanced visualization capabilities that can be shared collaboratively, regardless of geographic location.
This comprehensive storage approach also helps facilitate compliance with regulatory mandates such as HIPAA, as well as the implementation of effective data security and preservation measures. WOS provides the means to guarantee a robust chain of trust for both clinicians and researchers.
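One basic ingredient of such a chain of trust is fixity checking: recording a cryptographic checksum when data is ingested and re-verifying it after every replication or archive step. The sketch below shows the idea; the file name and single-entry manifest are illustrative assumptions rather than a description of how WOS implements data integrity internally.

```python
# A minimal sketch of fixity checking for a chain of trust.
# The file name and manifest are hypothetical.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the checksum at ingest...
manifest = {"sample_001.bam": sha256_of("sample_001.bam")}

# ...and re-verify it after replication or archiving
assert sha256_of("sample_001.bam") == manifest["sample_001.bam"], "fixity check failed"
```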
Toward Personalized Medicine
With an enabling IT infrastructure consisting of well-tuned HPC clusters; a storage architecture with fast, parallel access to extensive genomic data; the right predictive analytic tools; and a collaborative means of accessing, archiving, and sharing data, the foundation is in place for the evolution of personalized medicine.
This approach:
- Provides data integration from heterogeneous sources including genomics, electronic medical records, annotations, etc.
- Facilitates access to all patient-specific data by both clinicians and researchers, enabling them to make evidence-based therapy decisions at the patient’s bedside
- Empowers researchers to correlate the genomic evidence of millions of high-risk patients in a centralized HPC solution and visualize the results in real time
- Provides mobile and flexible access to any patient-related data
- Answers the need for speed in clinical settings when patients’ lives may be at stake.
In short, personalized medicine brings a new dimension to the field by allowing clinicians to make evidence-based therapy decisions at the patient’s bedside and to monitor high-risk patients to prevent emergencies. Researchers can investigate the genomes of millions of high-risk patients using HPC and predictive analytics and analyze the results in real time to develop new, personalized interventions.
We are at the beginning of the development of personalized medicine, including the creation of the technological underpinnings to support this revolution. To learn more, see how DDN is being leveraged by leading global genomic research facilities such as the Stanford University Center for Genomics and Personalized Medicine, Weill Cornell Medical College, the Tokyo Institute of Technology (TokyoTech), the Translational Genomics Research Institute (TGen), the University of Southern California’s Keck School of Medicine, and many more at ddn.com/.