February 17, 2009
OAK RIDGE, Tenn., Feb. 17 -- In the blink of an eye, people at risk of becoming blind can now be screened for eye diseases such as diabetic retinopathy and age-related macular degeneration.
Using a technology originally developed at the Department of Energy's Oak Ridge National Laboratory to understand semiconductor defects, three locations in Memphis have been equipped with digital cameras that take pictures of the retina. Those images are relayed to a center where they are analyzed and the patient knows in minutes whether he or she needs additional medical attention.
"Once we've taken pictures of the eyes, we transmit that information to our database, where it is compared to thousands of images of known retinal disease states," said Ken Tobin, who led the ORNL team that developed the technology. "From there, the computer system is able to determine whether the patient passes the screening or it provides a follow-up plan that includes seeing an ophthalmologist."
In coming weeks, cameras will be installed at four rural and urban health care centers serving the Mississippi Delta, and another camera is planned for a federally funded health center in Chattanooga. Eventually, the goal is to have hundreds of cameras throughout the United States and beyond. If disease can be detected early, treatments can preserve vision and significantly reduce the occurrence of debilitating blindness.
This project takes advantage of ORNL's proprietary content-based image retrieval technology, which quickly sorts through large databases and finds visually similar images. For more than a decade, semiconductor manufacturers have used this technology to scan hundreds of thousands of tiny devices and quickly identify problems in the manufacturing process.
"Our approach allows us to adapt a proven technology to describe key regions of the retina, and this information can then be used to index images in a content-based image retrieval library," Tobin said. "What separates this from other methods is that we have automated the process of diagnosing retinal disease by capturing the expert knowledge of an ophthalmologist in a patient archive.
Leading the medical portion of the project is Edward Chaum, an ophthalmologist and Plough Foundation professor of retinal diseases at the University of Tennessee Health Science Center's Hamilton Eye Institute (http://www.eye.utmem.edu) in Memphis. Chaum, the lead researcher on the National Eye Institute's grant that has funded much of this research, is especially excited about the number of people this technology will help, particularly in indigent and underserved communities.
"Right now, with 21 million diabetics in the United States, we need to be screening 400,000 patients for diabetic eye disease every week," Chaum said. "Less than half of these diabetics receive the recommended annual eye exam, which is absolutely essential to minimize serious eye complications and potential blindness."
By 2050 the number of diabetics in the United States is expected to double, making the task of screening patients even more daunting. Looking beyond the United States, and nearer term, the World Health Organization estimates that by 2025 more than 1 million patients worldwide will need to be screened for diabetes every day.
"To reach this goal, we are going to have to change the health care delivery paradigm," Chaum said, "and that will mean distributing these cameras to clinics and offices of primary care physicians."
The other component is a network that allows the images to be sent nationwide and eventually worldwide. Chaum envisions a global effort made possible only by this automated technology and the connectivity of the World Wide Web.
Other researchers involved in this project are Tom Karnowski and Luca Giancardo of ORNL's Measurement Science and Systems Engineering Division, Stacy Li of the University of Tennessee Health Science Center in Memphis and Karen Fox of the Delta Health Alliance.
The researchers have published a number of papers, most recently in Retina, The Journal of Retinal and Vitreous Diseases. The paper, titled "Automated Retinal Diagnosis by CBIR," appears in Vol. 28, No. 10 (2008).
Additional funding for this project, begun in June 2004 through ORNL's Laboratory Directed Research and Development program, has been provided by the National Eye Institute, The Plough Foundation, the Army Medical and Materiel Command, the University of Tennessee Health Science Center and the U.S. Health Resources and Services Administration.
UT-Battelle manages Oak Ridge National Laboratory for the Department of Energy.
Source: Oak Ridge National Laboratory