The leap forward in genomics technology promises to change health care as we know it. Sequencing a human genome, which cost millions of dollars just a few years ago, now costs thousands. And the prospect of mapping a genome for under a thousand dollars is on the horizon.
But cheap gene sequencing, by itself, won’t usher in a health care revolution. An article in the New York Times this week points out that turning those sequenced genomes into something useful is the true bottleneck. Doctors would like to be able to use their patients’ genomes to determine susceptibility to specific diseases or to devise personalized treatments for conditions those patients already have.
Sequencing all the DNA base pairs is really the easy part of the problem. It just reflects the ordering of these bases — adenine (A), thymine (T), guanine (G), cytosine (C) — in the chromosomes. The bioinformatics software necessary to extract useful information from this low-level biomolecular alphabet is much more complex and therefore costly, and necessitates a fair amount of computing power.
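To see why the raw sequence is the "easy" part, consider that a four-letter alphabet needs only 2 bits per base, so a whole genome's sequence is modest in size by modern storage standards. Here is a toy sketch (an illustration only, not production bioinformatics code; the packing scheme and the ~3.2 billion base-pair figure are assumptions for the example):

```python
# Each of the four bases fits in 2 bits, so 4 bases pack into one byte.
BASE_BITS = {"A": 0b00, "T": 0b01, "G": 0b10, "C": 0b11}

def pack_bases(seq):
    """Pack a string of A/T/G/C into bytes, up to 4 bases per byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASE_BITS[base]
        out.append(byte)
    return bytes(out)

packed = pack_bases("GATTACA")
print(len(packed), "bytes for 7 bases")  # 7 bases fit in 2 bytes

# At 2 bits per base, ~3.2 billion base pairs is under a gigabyte:
raw_megabytes = 3.2e9 * 2 / 8 / 1e6
print(f"~{raw_megabytes:.0f} MB for one genome's raw sequence")
```

The hard part is not holding these bytes but interpreting them — aligning reads, calling variants, and linking them to phenotypes — which is where the computing costs pile up.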
According to David Haussler, director of the Center for Biomolecular Science and Engineering at the University of California, Santa Cruz, that’s why it costs more to analyze a genome than to sequence it, and that discrepancy is expected to grow as the cost of sequencing falls.
The NYT article reports that the cost of sequencing a human genome has decreased by a factor of more than 800 since 2007, while computing costs have only decreased by a factor of four. That has resulted in an enormous accumulation of unanalyzed data that is being generated by all the cheap sequencing equipment.
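A quick back-of-the-envelope check of those figures shows how lopsided the trend is: if sequencing costs fell ~800x while computing costs fell only ~4x, then analysis has become roughly 200 times more expensive relative to sequencing than it was in 2007.

```python
# Rough arithmetic on the cost figures quoted above (since 2007).
sequencing_cost_drop = 800  # sequencing a genome is ~800x cheaper
computing_cost_drop = 4     # computing is only ~4x cheaper

# The ratio tells us how much the analysis side has grown
# relative to the sequencing side of the overall cost.
relative_shift = sequencing_cost_drop / computing_cost_drop
print(f"Analysis is ~{relative_shift:.0f}x more expensive relative "
      f"to sequencing than in 2007")
```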
According to the article, the current capacity of sequencers worldwide is 13 quadrillion DNA base pairs per year. For this year alone, it is estimated that 30,000 human genomes will be sequenced, a figure that is expected to rise to the millions within just a few years.
Not only is that too much data to analyze in aggregate, it’s also too difficult to share that volume of data between researchers. Even the fastest commercial networks are too slow to send multiple terabytes of information in anything less than a few weeks. That’s why BGI (Beijing Genomics Institute), the largest genomics research institute in the world, has resorted to sending computer disks of sequenced data via FedEx.
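The network bottleneck is easy to quantify. A rough estimate (the 100 Mb/s link speed and 10 TB dataset size here are assumptions for illustration; sustained throughput on real commercial links is often lower) shows why shipping disks wins:

```python
# How long does it take to push a multi-terabyte dataset
# over a fast commercial network link?
dataset_bytes = 10e12        # 10 terabytes of sequence data
link_bits_per_sec = 100e6    # a 100 megabit/s link

seconds = dataset_bytes * 8 / link_bits_per_sec
days = seconds / 86400
print(f"~{days:.0f} days to transfer 10 TB at 100 Mb/s")
```

Even this optimistic estimate comes out to over a week of continuous transfer for a single 10 TB dataset, and real-world throughput, larger datasets, or many simultaneous transfers stretch that into weeks — at which point a box of disks in a FedEx truck has far higher effective bandwidth.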
Cloud computing may help alleviate these problems. In fact, some believe that Google alone has enough compute and storage capacity to handle the global genomics workload. Others believe that there is just too much raw data and researchers will have to pre-process it to reduce the volume or just hold onto the unique bits.
But there are even more challenging problems ahead. Metagenomics, which aggregates DNA sequences of a whole population of organisms, is even more data-intensive. For example, the microbial species in the human digestive tract represent about a million times as much sequenced data as the human genome. And since that microbial population can have a profound effect on its human host, that genomic data becomes a pseudo-extension of the person’s genetic profile.
On top of all that is the data associated with RNA, proteins and the various other biochemicals in the body. To get a complete picture of human health, all of this data has to be integrated as well. Data deluge indeed.