How Digital Twins of the Human Body Can Advance Healthcare

By Dr. Eng Lim Goh

October 1, 2018

One of the most exciting aspects of supercomputing, for me, is when we step into the world of research and take on some of the great challenges of our age. Over recent years, advances in high performance computing (HPC) technology have shortened the time between hypothesis and insight for researchers. And now we have a new challenge on which to concentrate our brainpower and computational power: simulating the brain itself in digital form.

Hewlett Packard Enterprise helps the EPFL Blue Brain Project (BBP) advance the understanding of the brain by supplying the supercomputing power it requires to digitally reconstruct and simulate the mammalian brain. Over the last few years, HPE has taken on a number of complex and seemingly impossible challenges: we have pioneered Memory-Driven Computing, designed computers to assist interplanetary missions and take people into the farthest reaches of space, and, in collaboration with the COSMOS group, reached back in time and space to study the beginnings of our universe.

However, reaching inside ourselves and studying the brain is perhaps even more challenging than taking on the galaxies.

The human brain is one of the most complex phenomena in the universe, and its digital reconstruction requires next-generation supercomputers and deep collaboration between brain researchers and computer engineers. As our president and CEO, Antonio Neri, said: “Our mission is to create technologies that improve our quality of life, including powering technologies for the healthcare industry to deliver targeted treatments and save lives. HPE is bringing advanced supercomputing and bespoke applications to empower new research that can have transformative benefits for the neuroscientific community and society at large.”

The Blue Brain Project (BBP) aims to build comprehensive digital models of the brain, which will provide the basis for a potentially unlimited range of simulations, each representing an in-silico experiment. These digital experiments will not only require huge computing power, but also a range of very different computing profiles to support models of the brain’s different levels of organization and their interactions, as well as different types of modelling and simulation methodologies.

As soon as Antonio received a request from our Swiss team to work with BBP, he recognized the importance of this as an extension of our global partnerships to model and understand the human body. He asked me to go to Geneva and, only three days after the initial request, I was lucky enough to be sitting with the BBP team co-designing the system with them. It is particularly exciting as the BBP’s challenge to study the mammalian brain dovetails nicely with two existing projects we are working on:

  • The Living Heart, a collaboration between HPE and Stanford University to create multi-scale 3D models of the heart that monitor circulation, virtually test medications in development, and ultimately predict drug-induced arrhythmias, even if the patient is on the other side of the world.
  • DZNE, the German Center for Neurodegenerative Diseases, which is studying a population of 30,000 people over 30 years to find answers for brain diseases like Alzheimer’s, leveraging an HPE supercomputer with Memory-Driven Computing properties to improve the lives of the one billion people around the world living with neurological disorders.

As we are seeing in our work with DZNE, populations around the world are aging, and brain diseases such as dementia and Alzheimer’s are becoming more prevalent. In fact, DZNE studies show that fighting dementia currently costs $1 trillion per year. Understanding the brain and coming up with cures and innovations that ease the burden on health providers, while giving people a better quality of life, is becoming more and more important. And brain-related illnesses are not the only problem. By 2025, 1.2 billion people on Earth will be elderly. According to the World Health Organization, by 2020 chronic diseases such as cancer and diabetes will account for almost three quarters of deaths worldwide.

Until now, the viable options for testing medical hypotheses have been trials on animals or humans. This carries risks and can raise ethical questions; it is also more expensive, slower and less accurate than computer models that can simulate the functions of the human body. It may seem impossible to switch entirely to computer testing, but only a few years ago people would never have imagined boarding an aircraft that had been completely designed and simulated on a computer. Nowadays, the first aircraft that comes off the production line is the final design, not a prototype, because we have advanced our testing capabilities to the point where we can learn as much, if not more, from the computer model as we would from traditional processes. I firmly believe that something similar can be achieved through our work with organizations such as BBP and the Living Heart: we may eventually be able to construct digital models that are more effective than any human or animal tests.

Taking this forward, we can hope to create “digital twins” of organs like the heart, or even of single cells, for individual patients. Simulations can then be run to find out how different people would react to different treatments. At that point, we will have taken the massive step from generalized, traditional and sometimes even inaccurate research to truly personalized medical care, with models that can be run at low cost and almost in real time to aid diagnosis and treatment planning. This will augment the advances we have already made in precision medicine, as HPE helps doctors stop thinking about the “average patient” and treat the “actual patient.”

Of course, such incredible ambition needs a quite incredible computer. Modelling an individual neuron at BBP today leads to around 20,000 ordinary differential equations. When entire brain regions are modelled, this quickly rises to 100 billion equations that have to be solved concurrently. To provide the massive amount of computing power that will be necessary, BBP will be installing an HPE SGI 8600 supercomputer system in Lugano, Switzerland, making use of a cluster comprising 372 compute nodes. The HPE SGI 8600 is a sixth-generation system designed to solve the world’s most complex problems in areas ranging from life, earth, and space sciences to engineering, manufacturing and national security, meaning that it was specially created for applications like BBP, in which the power of big data and AI is harnessed to further our human understanding of the world we live in.
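To give a sense of where those numbers come from, here is a minimal sketch, in Python, of a single-compartment neuron described by the classic Hodgkin-Huxley equations. Even this toy model reduces to four coupled ordinary differential equations, one for the membrane voltage and one for each ion-channel gate; detailed morphological models split a neuron into hundreds of compartments with many channel types, which is how a single cell grows to tens of thousands of equations, and millions of interconnected cells push a brain region toward the figures quoted above. The code is purely illustrative and assumes nothing about BBP’s own software; the function names are my own, and the parameter values are the textbook squid-axon constants, integrated with simple forward Euler.

```python
import numpy as np

# Minimal single-compartment Hodgkin-Huxley neuron, integrated with forward Euler.
# Illustrative only: these are the classic squid-axon constants, not anything
# specific to the Blue Brain Project's models.

C_M = 1.0                      # membrane capacitance, uF/cm^2
G_NA, E_NA = 120.0, 50.0       # sodium conductance (mS/cm^2) and reversal potential (mV)
G_K, E_K = 36.0, -77.0         # potassium conductance and reversal potential
G_L, E_L = 0.3, -54.387        # leak conductance and reversal potential


def gate_rates(v):
    """Voltage-dependent opening/closing rates for the m, h, n gating variables."""
    a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n


def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """Integrate the four coupled ODEs (V, m, h, n) and return the voltage trace."""
    steps = int(t_max / dt)
    v, m, h, n = -65.0, 0.05, 0.6, 0.32          # typical resting-state values
    trace = np.empty(steps)
    for k in range(steps):
        a_m, b_m, a_h, b_h, a_n, b_n = gate_rates(v)
        # Ionic currents (uA/cm^2)
        i_na = G_NA * m**3 * h * (v - E_NA)
        i_k = G_K * n**4 * (v - E_K)
        i_l = G_L * (v - E_L)
        # Forward-Euler update of the membrane potential and the three gates
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        trace[k] = v
    return trace


if __name__ == "__main__":
    voltage = simulate()
    print(f"peak membrane potential: {voltage.max():.1f} mV")  # spikes reach roughly +40 mV
```

Production brain simulators rely on implicit integration schemes and highly optimized parallel codes rather than a Python loop, but the structure of the problem is the same: a very large coupled ODE system advanced in small time steps, which is exactly the kind of workload the supercomputer described above is built for.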

Endeavors like our collaboration with BBP bring a lot of hope for the future. There are great advances in areas like medicine thanks to big data, AI and supercomputing, but there are also gaps slowing progress down, such as the need to digitize more patient records and get the data into a format that can be used. Projects such as this one show that we are pushing past the limitations and using data to accomplish feats that once seemed impossible. If we can do that, then the possible will follow in good time. It is something I am very proud to be a part of.
