Deep Learning Paves Way for Better Diagnostics

By Tiffany Trader

September 19, 2016

Stanford researchers are leveraging GPU-based machines in the Amazon EC2 cloud to run deep learning workloads with the goal of improving diagnostics for a chronic eye disease called diabetic retinopathy. The disease is a complication of diabetes that can lead to blindness if blood sugar is poorly controlled. It affects about 45 percent of diabetics and 100 million people worldwide, many in developing nations.

Final-year Stanford PhD students Apaar Sadhwani and Jason Su got involved in developing the diagnostic solution as part of a class project and corresponding Kaggle competition that was held last year. Sponsor Amazon provided AWS cloud credits in support of the research.

[Figure: the five diabetic retinopathy severity classes. Source: Automatic Grading of Eye Diseases Through Deep Learning, 2016]

After Kaggle, the duo decided to turn their research project into a cloud-based platform that hospitals and clinics can use to guide the diagnosis of eye diseases. Their approach relies on a convolutional neural network (CNN) that grades the severity of diabetic retinopathy into five categories, 0 through 4, with 0 being normal and 4 the most severe.
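The broad shape of such a grader can be sketched in a few lines. The snippet below is a minimal illustration in PyTorch, a stand-in for the team's Torch code rather than their actual architecture: a small convolutional stack feeding a five-way classification head.

```python
import torch
import torch.nn as nn

class RetinopathyGrader(nn.Module):
    """Toy 5-class CNN grader: illustrative only, not the Stanford model."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),           # pool to 1x1 regardless of input size
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)              # logits for severity grades 0-4

model = RetinopathyGrader()
logits = model(torch.randn(2, 3, 256, 256))    # batch of two 256x256 RGB fundus images
print(logits.shape)                            # torch.Size([2, 5])
```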

The researchers have been training their model with a data set of 80,000 images from EyePACS, a web-based application for exchanging eye-related clinical information run by the California HealthCare Foundation. “Getting data is the most constraining part of applying deep learning to a medical setting,” said Sadhwani, “but we are working closely with partners to get more data.”

They’ve also had to address a class imbalance in the data set. “We have a lot more 0’s and 1’s than 3’s and 4’s, for example,” said Sadhwani. As the disease progresses to stage four (known as proliferative diabetic retinopathy, or PDR), image data becomes scarcer. The researchers estimate that about 10,000 stage-four images are needed for optimal results.
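The article does not say how the imbalance was handled; a common mitigation is to reweight or resample so that rare stage-3 and stage-4 images contribute more per batch. A minimal sketch using PyTorch's WeightedRandomSampler, with hypothetical per-class counts rather than the real EyePACS distribution:

```python
import torch
from torch.utils.data import WeightedRandomSampler

# Hypothetical per-class image counts, skewed toward grades 0 and 1.
class_counts = torch.tensor([40000.0, 20000.0, 12000.0, 5000.0, 3000.0])
class_weights = 1.0 / class_counts             # rarer classes get larger weights

# labels: one severity grade (0-4) per training image (random here for illustration).
labels = torch.randint(0, 5, (80000,))
sample_weights = class_weights[labels]

# Draw each epoch with replacement so the severe grades appear more often per batch.
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
```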

The training problem is run on AWS Elastic Compute Cloud (EC2) with single-GPU and multi-GPU nodes. Some S3 storage and Elastic Block Store (EBS) services are also employed. The training takes about three days to a week for a given model.

Within EC2, the researchers use StarCluster, an open source toolkit that lets them build custom clusters and network the nodes together. A master node stores all the training data and serves up to 28 separate training nodes; because every training node reads from the master, the data does not have to be mirrored onto each node.

“With Starcluster and AWS you can bring up different node types independently on demand,” said Su. “So we would run this experiment that would only need a single-GPU node and then after that finished we could shut down that node and save money. Then we would scale it up to a larger resolution image and we would need four-GPU nodes for that – so we’d spin that up, train on that, and come back three days later and shut that off. AWS provides this flexibility for scaling up and scaling down for cost and for trying out different ideas.”
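The team drove this through StarCluster's command line; a rough equivalent with the AWS SDK for Python (boto3) looks like the sketch below, where the AMI ID and key name are placeholders, not real identifiers.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single-GPU g2.2xlarge node for a small experiment.
# "ami-xxxxxxxx" and "my-key" are placeholders for a real AMI and key pair.
resp = ec2.run_instances(
    ImageId="ami-xxxxxxxx",
    InstanceType="g2.2xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key",
)
instance_id = resp["Instances"][0]["InstanceId"]

# ... train, then shut the node down to stop paying for it.
ec2.terminate_instances(InstanceIds=[instance_id])
```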

The researchers relied on AWS spot instance pricing to further improve the economics. Their program saves its state every “epoch,” which corresponds to one pass through the data set, so losing a node does not incur a big setback. With 55 epochs in a run, the most they would lose is 1/55th of their training progress.
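The article doesn't show the checkpointing code; a minimal per-epoch save in PyTorch (again a stand-in for the team's Torch implementation) might look like this:

```python
import torch

def train_with_checkpoints(model, optimizer, loader, loss_fn, epochs=55):
    """Save state after every epoch so losing a spot instance costs at most one epoch."""
    for epoch in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
        torch.save(
            {"epoch": epoch,
             "model": model.state_dict(),
             "optimizer": optimizer.state_dict()},
            f"checkpoint_epoch_{epoch}.pt",    # could also be copied off-node, e.g. to S3
        )
```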

They used the g2.2xlarge and g2.8xlarge instance types for training their final models. They trained two kinds of models: one on low-resolution images and a final model on high-resolution images, for which they employed the larger multi-GPU nodes.

Amazon’s GPU instances are based on the older Nvidia GRID K520 graphics card, which at 4 GB per GPU does not have an ideal memory profile for training on very high-resolution images.

“Typically in deep learning, you have a 256×256 image, or about one-sixteenth of a megapixel and we’re at four megapixels, so memory is a huge part of doing this problem,” said Sadhwani. “Our workaround was to scale to 4-GPU nodes, which effectively had 4 gigabytes of memory each [GPU], but we lose some to overhead because we have to have the model independently at each of the separate GPUs. It would be more advantageous to have a single GPU with a full 16 gigabytes.”
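The arithmetic behind that quote is easy to verify: a 256×256 image holds 65,536 pixels, about one-sixteenth of a megapixel, while a four-megapixel fundus image has roughly 64 times as many. A quick back-of-the-envelope check (illustrative only):

```python
# Pixel-count arithmetic behind the memory concern (treating a megapixel as 2**20 pixels).
small = 256 * 256                    # 65,536 pixels, about 1/16 of a megapixel
large = 4 * 2**20                    # a 4-megapixel fundus image
print(large / small)                 # 64.0 -- roughly 64x more activation memory per layer

# Raw input size for a batch of 16 RGB images stored as float32:
bytes_per_image = large * 3 * 4
print(16 * bytes_per_image / 2**30)  # ~0.75 GiB just for the input tensors
```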

Because their model must handle these high-resolution images, they used Torch to split it across the four GPUs of a node to fine-tune its parameters. Currently, they are moving to a distributed training model, in which several different nodes train essentially the same model on independent data. This lets them train one model across many GPUs rather than confining it to a single GPU node, and thus accelerates training.
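In modern PyTorch terms (a stand-in for the team's Torch setup, not their code), spreading one model's training across the GPUs of a node looks roughly like the sketch below; scaling the same idea to several nodes is noted in the comments.

```python
import torch
import torch.nn as nn

# A tiny stand-in model; the team's actual network was written in (Lua) Torch.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(8, 5))

if torch.cuda.is_available():
    # Single node, multiple GPUs: replicate the model and split each batch across them.
    model = nn.DataParallel(model.cuda())

# For several nodes each training the same model on independent data shards, the
# usual tool today is torch.nn.parallel.DistributedDataParallel together with a
# DistributedSampler, after initializing a process group across the nodes.
```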

The researchers are eyeing clouds with higher-memory GPUs, which could mean holding out for upgraded Amazon instances or moving to the Microsoft Azure cloud with its Tesla K80s.

They are not interested in CPUs. “It would take significantly longer, at least a factor of 50,” said Sadhwani. “The kind of neural networks we are using [convolutional neural nets] harness parallelization a lot. Even if we were not using this special class of network, there is at least a 10x speedup going from CPUs to GPUs, but for this particular variety that speedup is magnified a lot more, in the neighborhood of 100x.”

Diabetic retinopathy is a disease of the blood vessels in the eye. As blood sugar rises, the walls of the blood vessels thin and eventually crack and bleed. The most important thing to look for is tiny dot bleeds, called hemorrhages, which are very small and difficult to locate even with advanced algorithms. The deep learning model must also be trained to ignore or flag likely camera artifacts, which appear in approximately 40 percent of the images and can obscure identification of disease traits.

To address these challenges, the Stanford team’s approach uses two networks: a lesion detector and a main network. The lesion detector looks at a small part of the image and outputs a probability between 0 and 1; so far it has achieved an accuracy of 99 percent on negatives and 76 percent on positives. The main network characterizes where the disease-related features sit with respect to the important parts of the eye.
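In outline, a patch-level lesion detector is just a small CNN ending in a sigmoid. The sketch below is a toy PyTorch version; the patch size and layer widths are assumptions, not the published design.

```python
import torch
import torch.nn as nn

class LesionDetector(nn.Module):
    """Toy patch classifier: takes a small image patch, outputs P(lesion). Illustrative only."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, patch):
        return torch.sigmoid(self.net(patch))   # probability between 0 and 1

detector = LesionDetector()
p = detector(torch.randn(1, 3, 64, 64))          # one 64x64 patch (size is an assumption)
print(float(p))                                  # e.g. 0.48
```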

[Figure: the fused two-network architecture. Source: Automatic Grading of Eye Diseases Through Deep Learning, 2016]

The outputs of these two pipelines are then fused. This combines low-level detail about where the dot hemorrhages are with high-level information such as which parts of the image should be ignored because they are corrupted by artifacts. The fusion network integrates all of these signals to deliver a final probability for the disease class.
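The article does not detail the fusion network; one simple way to realize the idea is to concatenate a summary of the lesion-probability signal with the main network's features and pass the result through a small classifier, as in this hedged sketch (feature sizes are assumptions):

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Toy fusion: combine pooled lesion probabilities with main-network features.
    Feature sizes and layer widths are assumptions, not the published design."""
    def __init__(self, main_features=128, lesion_features=16, num_classes=5):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(main_features + lesion_features, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, main_feat, lesion_feat):
        fused = torch.cat([main_feat, lesion_feat], dim=1)
        return self.fuse(fused)                  # logits over the five disease grades

head = FusionHead()
logits = head(torch.randn(2, 128), torch.randn(2, 16))
print(logits.shape)                              # torch.Size([2, 5])
```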

So far the team has been working with five classes, but they say that in the clinical setting these grades are not tracked with such granularity. In terms of intervention there are really three stages: 0) no action is required; 1) the progress of the disease should be monitored; and 2) medical intervention such as surgery is required.

“Moving to three classes would increase the accuracy of our models because it’s a simpler problem and easier to solve,” said Su.
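The exact clinical mapping is not given in the article; one plausible collapse of the five grades into the three intervention levels, purely as an illustration, is:

```python
# Hypothetical collapse of the five severity grades into three intervention levels.
FIVE_TO_THREE = {0: 0,        # grade 0: no action required
                 1: 1, 2: 1,  # mild/moderate: monitor disease progression
                 3: 2, 4: 2}  # severe/proliferative: medical intervention required

three_class_label = FIVE_TO_THREE[4]   # -> 2, intervention required
```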

The ultimate goal is to deliver a digital assistant to radiologists, ophthalmologists and other clinicians, so they can screen more patients more frequently.

“Using an automated tool to augment human resources, you can more closely monitor the changes in the disease state as they progress to more effectively treat the disease,” said Su.
