HPC Bests Physicians in Matching Heart Transplant Donors and Recipients

By Michael Feldman

September 13, 2011

Health care analytics is an emerging application area that promises to help cut costs and deliver better patient outcomes. Reaching that goal, though, requires sophisticated software that can mimic some of the judgment of experienced physicians. At Lund University and Skåne University Hospital in Sweden, researchers are attempting to do just that by building a model of heart-transplant recipients and donors to improve survival times.

The so-called “survival model” is designed to discover the optimal matches between recipients and donors for heart transplants. It takes into account donor and recipient age, blood type, weight, gender, and ischemic time (the period during a transplant when there is no blood flow to the heart). Just analyzing those six variables leads to about 30,000 distinct combinations to track. Matching tens of thousands of recipients and donors across that spread of combinations requires a rather sophisticated software model and some serious computing horsepower.
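
To see how quickly those combinations pile up: once each variable is discretized into a handful of clinically meaningful bins, the total is simply the product of the bin counts. A minimal Python sketch (the bin counts below are illustrative assumptions, not the study's actual categories):

    import math

    # Assumed, illustrative bin counts for each matching variable
    bins = {
        "donor_age": 10,
        "recipient_age": 10,
        "blood_type": 4,       # A, B, AB, O
        "weight": 5,           # binned weight categories
        "gender_match": 3,     # male-male, female-female, mixed
        "ischemic_time": 5,    # binned no-blood-flow durations
    }

    print(math.prod(bins.values()))  # 10 * 10 * 4 * 5 * 3 * 5 = 30,000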

To build the application, the Lund researchers used MATLAB and a set of related MathWorks libraries, namely the Neural Network Toolbox, the Parallel Computing Toolbox, and the MATLAB Distributed Computing Server. With those tools, they built their predictive artificial neural network (ANN) models: in this case, a simulation that predicts survival rates for heart transplant patients based on the suitability of the donor match. The ANN models are “trained” using donor and recipient data encapsulated in two databases: the International Society for Heart and Lung Transplantation (ISHLT) registry and the Nordic Thoracic Transplantation Database (NTTD).
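
At its core, such a model is a small feed-forward network that maps encoded donor-recipient features to a survival estimate. A minimal numpy sketch of the idea (layer sizes, feature encoding, and weights here are illustrative assumptions; the actual models were built with the MathWorks toolchain):

    import numpy as np

    rng = np.random.default_rng(0)

    x = rng.normal(size=6)                            # toy encoded donor/recipient features

    W1, b1 = rng.normal(size=(8, 6)), np.zeros(8)     # hidden layer weights
    W2, b2 = rng.normal(size=8), 0.0                  # output layer weights

    hidden = np.tanh(W1 @ x + b1)
    survival_prob = 1.0 / (1.0 + np.exp(-(W2 @ hidden + b2)))  # sigmoid output
    print(survival_prob)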

The key software technology for the ANN application is MathWorks’ Neural Network Toolbox. The package contains tools for designing and simulating neural networks, which can be used for artificial intelligence-style applications such as pattern recognition, quantum chemistry, speech recognition, game playing, and process control. These types of applications don’t lend themselves easily to the kind of formal analysis done in traditional computing.

For the ANN models, training involves correlating donor and recipient data so that the risk factors are weighted accurately. Done correctly, the simulations become adept at associating those factors with heart transplant survival rates. In this case, the results from the simulations were used to pick out the best and worst donors for any particular recipient.
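
With a trained model in hand, picking out the best and worst donors reduces to scoring every candidate against a given recipient and sorting. A hypothetical sketch, where predict stands in for a trained survival model such as the network above:

    # Score each candidate donor for one recipient and sort, best match first.
    # `predict` is a stand-in for a trained survival model.
    def rank_donors(recipient, donors, predict):
        scored = [(predict(recipient, donor), donor) for donor in donors]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return scored   # best donors at the front, worst at the back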

The ultimate goal is to determine the mean survival times after transplantation for waiting recipients, so that doctors can make the best possible decision with regard to matches. In the research study, the team analyzed about 10,000 patients who had already received transplants in order to verify the accuracy of the algorithms.

What they found was that the ANN models could raise the five-year survival rate by 5 to 10 percent compared with the traditional selection criteria applied by practicing physicians. Perhaps more importantly, a randomized trial based on preliminary results suggests that approximately 20 percent more patients would be considered for transplantation under these models, says Dr. Johan Nilsson, Associate Professor in the Division of Cardiothoracic Surgery at Lund University.

Because of the combinatorial load of the recipient-donor variables, the models are very compute-intensive. On a relatively small cluster, the MATLAB-derived ANN simulation took about five days. That was significantly better than the open source packages (R and Python) the team started out with: in that environment, runs took three to four weeks and were beset by crashes and inaccurate results.
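
Structurally, this kind of workload is embarrassingly parallel: many independent training or evaluation runs fanned out across workers, which is essentially what the Parallel Computing Toolbox manages in MATLAB. A rough Python analogue (train_one is a hypothetical per-task function, not the team's code):

    from multiprocessing import Pool

    def train_one(config_id):
        # Stand-in for training/evaluating one ANN configuration.
        return config_id ** 2

    if __name__ == "__main__":
        with Pool(processes=8) as pool:     # scale workers up to the available CPUs
            results = pool.map(train_one, range(100))
        print(len(results))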

To run the simulation, the researchers used a nine-node Apple Xserve cluster (which includes a head node and a file-sharing node), along with 16 TB of disk, all lashed together with a vanilla GigE network. Memory on the nodes ranged from 24 to 48 GB. According to Nilsson, with the latest MATLAB configuration, they use 64 CPUs to run the ANN simulation.

Nilsson, who is a physician, programmed the application himself, noting that the MATLAB environment was easy to set up and use, and that there was no need for deep knowledge of parallel computing. The biggest roadblock he encountered was the need to write a custom error function, since the Neural Network Toolbox does not include a cross-entropy error routine. There were also some problems in setting up the Xserve cluster, but once they replaced Apple’s Xgrid protocol with the MATLAB Distributed Computing Server, many of those problems disappeared.
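
The cross-entropy error itself is only a few lines; a minimal numpy sketch of the binary form that such a custom routine would compute (shown in Python rather than MATLAB):

    import numpy as np

    def binary_cross_entropy(y_true, y_pred, eps=1e-12):
        # Mean binary cross-entropy between 0/1 labels and predicted probabilities.
        y_pred = np.clip(y_pred, eps, 1.0 - eps)   # guard against log(0)
        return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

    print(binary_cross_entropy(np.array([1, 0, 1]), np.array([0.9, 0.2, 0.7])))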

The Apple Xserve cluster is not exactly state of the art for high performance computing these days. Presumably, with a late-model HPC setup, they could cut the five-day turnaround time for the simulation even further, which would accelerate the research.

In the short term, the Lund and Skåne team intends to continue optimizing the software and to explore other approaches such as regression trees, logistic regression, and support vector machines. In parallel, they want to start transitioning the technology into a clinical setting.
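
Those alternatives are all standard statistical learning methods; as a point of reference, a logistic-regression baseline takes only a few lines in scikit-learn (synthetic data here, purely for illustration):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))                    # synthetic donor/recipient features
    y = (X @ rng.normal(size=6) > 0).astype(int)     # synthetic survival labels

    model = LogisticRegression().fit(X, y)
    print(model.predict_proba(X[:1]))                # probabilities for the two outcome classes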

According to Nilsson, once they’ve fully cooked the models, they can do away with the high performance computing environment. “In a future clinical setting,” he says, “the application could be used on any desktop computer, and the matching process will take only seconds to a couple of minutes.”
