June 02, 2011
If you've been following the health care debate in the US, it's become fairly clear that the current trajectory of medical costs will soon be unsustainable for the economy. The latest government figures put the average US health care spend per person at over $8,000, a figure projected to top $13,000 by 2018. Whether the latest health care legislation will do much to curb these costs is debatable.
If that $13,000 per capita figure holds up, that means about 20 percent of the nation's GDP will be spent on medical bills. Other developed nations are currently about twice as efficient as the US, but even there health care costs are outrunning incomes. Fortunately, economic forces that strong have a way of disrupting the status quo.
Probably the lowest-hanging fruit for optimizing the health care sector is information technology. Even though we think of medicine as a high-tech endeavor, it's mostly based on 30-year-old IT infrastructure overlaid with a manual labor approach to data collection and analysis. Essentially we have a system using 20th century computing technology, but with 21st century wages.
Just going to a doctor's office and filling out a medical history form (on paper!) for the 100th time should give you some idea of how antiquated the health care industry has become. It's as if the Internet had never been invented.
But it's not just about your medical records ending up in isolated silos. The amount of data that can be applied to your health is actually growing by leaps and bounds. The results of medical research, genomic studies, and clinical drug trials are accumulating at an exponential rate. Like most sectors nowadays, health care revolves around data.
In general though, your health care provider doesn't do anything with all this information, since the analysis has to be done by a time-constrained, highly paid specialist, i.e., your doctor. But that could soon change. The latest advanced analytics technologies are looking to mine these rich medical data repositories and transform the nature of health care forever. Not surprisingly, IT companies are lining up to get a piece of the action.
IBM, in particular, has been pushing its analytics story for all sorts of medical applications. Last week, the company announced it was expanding its Dallas-based Health Analytics Solution Center with additional people and technology.
Part of this is about sliding the IBM Watson supercomputing technology into a medical setting. With its impressive Jeopardy performance under its belt, IBM is now applying HPC-type analytics to understanding medical text. Specifically, they want to combine Watson's smarts with voice recognition technology from Nuance Communications to connect doctors to their patients' medical data via a handheld device like a tablet or smartphone. From the press release:
By using analytics to determine hidden meaning buried in medical records, pathology reports, images and comparative data, computers can extract relevant patient data and present it to physicians, ultimately leading to improved patient care.
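Stripped of the press-release gloss, the underlying pattern is easy to illustrate: scan unstructured clinical text against known terminology and hand back a structured summary a physician could review on a handheld. Below is a minimal Python sketch of that idea; the vocabularies and the sample note are invented for illustration, and Watson's actual language models are obviously far more sophisticated.

# Toy illustration of mining free-text clinical notes: match terms from
# small, hand-built vocabularies and return a structured summary.
# The vocabularies and the note are hypothetical, not real patient data.
import re
from collections import defaultdict

MEDICATIONS = {"metformin", "lisinopril", "warfarin", "atorvastatin"}
CONDITIONS = {"type 2 diabetes", "hypertension", "atrial fibrillation"}

def extract_findings(note):
    """Return recognized medications and conditions found in a note."""
    findings = defaultdict(list)
    text = note.lower()
    for term in sorted(MEDICATIONS):
        if re.search(r"\b" + re.escape(term) + r"\b", text):
            findings["medications"].append(term)
    for term in sorted(CONDITIONS):
        if term in text:
            findings["conditions"].append(term)
    return dict(findings)

note = ("Patient with type 2 diabetes and hypertension, currently on "
        "metformin and lisinopril; denies chest pain.")
print(extract_findings(note))
# {'medications': ['lisinopril', 'metformin'], 'conditions': ['hypertension', 'type 2 diabetes']}

The real work, of course, is in handling the messiness of actual medical language, which is exactly where Watson-class analytics come in.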
Analytics vendor SAS is also in the game. In May, they unveiled a new Center for Health Analytics and Insights organization that is designed to apply advanced analytics across health care and life sciences. Although the specifics were a little thin, the group will focus on "evidence-based medicine, adaptive clinical research, cost mitigation and many aspects of customer intelligence."
It's not all about clinical care though. One of the most expensive undertakings of the health care industry is ensuring drug safety. Both the FDA and pharma have had some spectacular failures in this area, the most recent being Vioxx, a pain-relief drug that was pulled from the market in 2004 after it was found to be causing strokes and heart attacks in some patients.
A recent study by the RAND Corporation suggests data mining can be used to find some of these dangerous drugs before they enter into widespread usage. RAND CTO Siddhartha Dalal and researcher Kanaka Shetty developed an algorithm to search the PubMed database to uncover these bad players. The software employed machine learning algorithms in order to provide the sophistication necessary to differentiate truly dangerous compounds from ones that only looked suspicious (false positives). According to the authors, the algorithm uncovered 54 percent of all detected FDA warnings using just the literature published before warnings were issued.
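The study doesn't spell out its implementation here, but the general recipe is simple to sketch: train a classifier on literature about drugs whose safety history is already known, then score the literature on new compounds. The toy Python example below (using scikit-learn, with made-up abstracts and labels, and certainly not RAND's actual algorithm) shows the shape of the approach.

# Minimal sketch of literature-based drug safety screening.
# Labels: 1 = drug later drew a safety warning, 0 = it did not.
# The abstracts are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

abstracts = [
    "increased incidence of myocardial infarction observed in treatment arm",
    "elevated risk of stroke and cardiovascular events reported post-approval",
    "drug was well tolerated with only mild transient headache reported",
    "no serious adverse events observed over the twelve month follow-up",
]
labels = [1, 1, 0, 0]

# Bag-of-words features plus a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(abstracts, labels)

# Score the published literature on a hypothetical new compound.
new_abstract = ["unexpected cardiovascular events noted in elderly patients"]
risk = model.predict_proba(new_abstract)[0][1]
print(f"estimated probability of a future safety warning: {risk:.2f}")

The hard part, and presumably where the RAND work earned its keep, is separating genuine danger signals from the background noise of suspicious-sounding but benign reports.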
A more ambitious medical technology is envisioned by the X PRIZE Foundation, a non-profit devoted to encouraging revolutionary technologies. Recently they teamed with Qualcomm to come up with the Tricorder X PRIZE, offering a $10 million award to develop "a mobile solution that can diagnose patients better than or equal to a panel of board certified physicians." In other words, make the Star Trek tricorder a reality.
The device is intended to bring together wireless sensors, cloud computing, and other technologies to perform an initial diagnosis, directing the patient to a "real" doctor if the situation warrants. Presumably the cloud computing component will supply the necessary data mining and expert system intelligence, while the tricorder itself would mostly act as the data collection interface, perhaps with some medical imaging thrown in. The X PRIZE Foundation will publish the specific design requirements later this year, with the competition expected to launch in 2012.
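The detailed requirements don't exist yet, but the division of labor being described is straightforward to mock up. Here's a hypothetical Python sketch of that split, in which the handheld does nothing more than package sensor readings while a stand-in "cloud" service applies a trivial rule set and decides whether to escalate to a physician. Every name, value, and threshold below is invented for illustration.

# Hypothetical split between a tricorder-style device and a cloud service.
import json

def collect_readings():
    """Stand-in for the device's sensor layer (values are made up)."""
    return {"heart_rate_bpm": 118, "spo2_percent": 91, "temp_c": 38.4}

def cloud_triage(payload):
    """Stand-in for the cloud-hosted expert system: a trivial rule set."""
    readings = json.loads(payload)
    flags = []
    if readings["heart_rate_bpm"] > 110:
        flags.append("tachycardia")
    if readings["spo2_percent"] < 94:
        flags.append("low oxygen saturation")
    if readings["temp_c"] >= 38.0:
        flags.append("fever")
    return {"flags": flags, "refer_to_physician": bool(flags)}

payload = json.dumps(collect_readings())   # what the device would upload
print(cloud_triage(payload))               # what the cloud would send back
# {'flags': ['tachycardia', 'low oxygen saturation', 'fever'], 'refer_to_physician': True}

A winning entry would obviously replace the rule set with real analytics and expert system intelligence, but the data flow would look much the same.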
None of these solutions are being promoted as substitutes for doctors or other medical professionals. Inevitably though, if these technologies become established, these jobs will be very different. With powerful analytics available, doctors won't have to memorize all the relevant information about biology, drugs, and medical procedures anymore. In truth, they can't even do that today; there is already far too much data, and it continues to expand.
In an analytics-supported health care system, medical practitioners will need to do less data collection and analysis and more meta-level analysis. Just as writers today don't need to know how to spell every word (remember, 50 years ago a spell checker was a person, not a piece of software), doctors won't need to memorize which drugs are applicable to which diseases. And that means a lot fewer doctors and fewer support staff. Essentially we'll be replacing very expensive PhDs with very cheap computer cycles.
If that seems like a scary prospect, consider the more frightening scenario of a health care system that bypassed this technology and tried to burden medical practitioners with the data deluge. Also consider that without advanced analytics, the majority of the population will be burdened by the long-term costs of sub-standard medical care.
Beyond that, advanced analytics will also be involved in propelling other health care technology forward, including drug discovery, genomics, and the whole field of personalized medicine. Many of these advances will enable medical conditions like heart disease, cancer and diabetes to be prevented, which is a far less expensive proposition than treatment.
It's reasonable to be optimistic here. Nature abhors a vacuum -- in fact, any sort of stark discontinuity. Our problematic health care model will eventually be transformed by technologies that make economic sense. Advanced analytics is poised to be a big part of this.
Posted by Michael Feldman - June 02, 2011 @ 7:19 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.