Cray XK7 Titan Used to Simulate Complicated Blood Flow

By Eric Gedenk, Oak Ridge National Laboratory

October 14, 2015

A team of researchers from Brown University, ETH Zurich, the Università della Svizzera italiana (USI) and Consiglio Nazionale delle Ricerche (CNR) is using America’s largest, most powerful supercomputer to help understand and fight diseases affecting some of the body’s smallest building blocks.

The team, led by Brown’s George Karniadakis, is using the Cray XK7 Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF)—a US Department of Energy (DOE) Office of Science User Facility located at Oak Ridge National Laboratory—to simulate hundreds of millions of red blood cells in an attempt to develop better drug delivery methods and predictors to fight against tumor formation and sickle cell anemia.

Karniadakis’ group approaches these simulations from a unique vantage point—despite the research focus on medical issues, the team is actually based in Brown’s Applied Mathematics division. Karniadakis explained that mathematicians are in a good position to help guide computation for the multiple scales involved in simulating parts of the human body.

“All biological systems are multiscale systems,” Karniadakis said. “The research goes from the protein, to the cell, to the tissue, all the way to the human. You have to cover scales from one nanometer to one meter.”

During the first year of a 3-year Innovative and Novel Computational Impact on Theory and Experiment (INCITE) allocation at both the OLCF and the Argonne Leadership Computing Facility, the team has been performing a suite of simulations related to different diseases and drug delivery methods to better predict, diagnose, and treat several mysterious hematological, or blood-based, diseases.

The team’s research has made it a finalist for this year’s Association for Computing Machinery Gordon Bell Prize—one of the most prestigious awards in high-performance computing—presented at the SC15 supercomputing conference, held this year in Austin, Texas.

The Brown team is joined by co-principal investigator Petros Koumoutsakos and Diego Rossinelli—the researcher leading the Gordon Bell effort—both of ETH Zurich.

At the OLCF, the team primarily has focused its disease research on sickle cell anemia (SCA) and tumor cells, as well as on developing better drug delivery methods.

Accelerating state of the art
SCA is a disease that causes red blood cells to become rigid and “sickle-shaped,” leading to chronic circulatory system problems and an increased risk for death.

Despite its prevalence—roughly 8 percent of the African-American population carries the trait, and more than 180,000 babies are born with the disorder every year—very little is known about how this red-blood-cell-related disorder interacts with human blood vessels.

The team uses dissipative particle dynamics (DPD) in its simulations to study blood flow as a collection of individual particles rather than one fluid object. To model each individual particle’s behavior accurately for any meaningful length of time, the team needed leadership-class supercomputing power.

“Our work is done with dissipative particle dynamics, meaning you basically model everything in the simulation to be either an individual particle or a collection of particles,” project collaborator and Brown doctoral researcher Yu-Hang Tang said. “It’s very easy for the number of particles in the system to grow wildly. For example, if we want to model just one red blood cell, we don’t just put in particles for the red blood cells; we also need particles for the fluid surrounding it. That might get you 50,000 particles just to simulate a single red blood cell.”
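The pairwise interaction at the heart of DPD can be illustrated with a short generic sketch. This is not the team’s production uDeviceX code, just a minimal illustration of the standard DPD formulation: particles within a cutoff radius exchange a soft conservative force plus dissipative (friction) and random (thermal) forces, with the dissipative and random coefficients tied together by the fluctuation–dissipation relation. The parameter names (`a`, `gamma`, `rc`, `kBT`) follow common DPD conventions and the values are illustrative.

```python
import numpy as np

def dpd_pair_force(ri, rj, vi, vj, a=25.0, gamma=4.5, kBT=1.0,
                   rc=1.0, dt=0.01, rng=None):
    """Total DPD force on particle i due to particle j:
    conservative + dissipative + random contributions."""
    rng = rng or np.random.default_rng(0)
    rij = ri - rj
    r = np.linalg.norm(rij)
    if r >= rc or r == 0.0:          # outside the cutoff: no interaction
        return np.zeros(3)
    e = rij / r                       # unit vector pointing from j to i
    w = 1.0 - r / rc                  # linear weight function w(r)
    vij = vi - vj
    sigma = np.sqrt(2.0 * gamma * kBT)        # fluctuation-dissipation relation
    f_c = a * w * e                            # soft conservative repulsion
    f_d = -gamma * w**2 * np.dot(e, vij) * e   # friction on relative motion
    f_r = sigma * w * rng.standard_normal() / np.sqrt(dt) * e  # thermal noise
    return f_c + f_d + f_r
```

Because the conservative force is soft (finite at zero separation), DPD can take much larger time steps than molecular dynamics, which is what makes whole-cell and whole-vessel simulations tractable at all—though, as Tang notes, the particle counts still grow quickly once the surrounding fluid is included.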

In addition to its research on SCA, the team is also leveraging Titan to understand how diseases could be treated. Thus far, the team has simulated blood and cancer cell separation using microfluidic devices, which can manipulate extremely small amounts of fluids—typically microliters (1 millionth of a liter) or smaller.

Tang focused on how blood and cancerous tumor cells might be separated by microfluidic devices, and his simulations are 1–3 times larger—in terms of the number of simulated cells and computational elements—than the current state of the art within the field.

“In our tumor cell study, we used specifically arranged obstacles in the microfluidic device,” Tang said. “We use these obstacles to separate cells, because different types of cells have different shapes, so when two cells hit the same obstacle, their responses to this will be different, causing them to go different directions in the device.”

Such microfluidic devices would allow doctors to take a very small sample of blood and quickly identify whether someone had a malignant tumor. This “lab on a chip” could help doctors test for illness in the least invasive way possible.

Tang and his collaborators exploited Titan’s GPU accelerators and developed uDeviceX, a GPU-driven particle solver—an important part of the team’s code that helps plot individual particles in the simulation. Tang’s new solver showed a forty-five-fold decrease in time to solution compared with competing state-of-the-art methods.

Moreover, Tang’s extensive work with GPUs has led to the team’s newest computational tool—the multiscale universal interface (MUI). The team’s research requires a variety of different codes, and certain computing architectures work better with certain codes than with others. MUI also gives the team the freedom to run specialized solvers for different scales simultaneously.

MUI allows the team to quickly connect its contrasting codes into one larger code, significantly cutting down the team’s computational cost for running its simulations by focusing on different hardware configurations’ strengths.

The team not only can target different solvers or other portions of a code toward certain parts of a supercomputer but also is able to offload different parts of a code on different supercomputers. These distinct pieces of code do not have to communicate during each time step but ultimately will share their results during the course of a simulation.

Tang credits computer graphics for inspiring MUI. “Basically, I made MUI by borrowing a concept from computer graphics, where when you want to render the color of a pixel, you’re actually doing interpolation from the nearby pixels,” Tang said. “We borrowed this into the MUI world when you want to do different kinds of simulations. We proposed a general framework where you can interpolate the data that you want from nearby points by easily inserting your own interpolation algorithms.” OLCF staff member Wayne Joubert helped the team scale MUI to make efficient use of Titan’s large node count.
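Tang’s pixel analogy can be sketched in a few lines. The code below is a hypothetical illustration of the idea—not MUI’s actual API: one solver pushes values at scattered sample points, and another fetches an interpolated value at its own locations, here using simple inverse-distance weighting as the pluggable interpolation algorithm.

```python
import numpy as np

def idw_sample(query, points, values, p=2.0, eps=1e-12):
    """Interpolate a value at `query` from scattered (points, values)
    pairs using inverse-distance weighting -- the 'blend from nearby
    pixels' idea, applied to data exchange between coupled solvers."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < eps):                  # query coincides with a sample point
        return values[np.argmin(d)]
    w = 1.0 / d**p                       # closer samples get larger weights
    return np.dot(w, values) / w.sum()
```

In a coupled run, a coarse-scale solver would push its boundary values into such an interface each exchange step, and a fine-scale solver would fetch interpolated values at its own grid points—neither code needing to know anything about the other’s mesh or time stepping.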

Microscopic scales, macroscopic implications
Karniadakis and Tang both emphasized that multidisciplinary collaboration must remain central in keeping their research moving forward. “We are targeting different pathologies, because having that as a canvas, we can develop interesting mathematics and computational algorithms that can be used in other contexts,” Karniadakis said, adding that some of Tang’s methods already were being adopted for materials science computations.

In addition to developing more efficient algorithms, the team has worked on developing methods that would be “hardware aware,” allowing it to deploy its codes quickly and seamlessly on a variety of supercomputing infrastructures. Karniadakis also led a team working on domain decomposition methods—which minimize communication between a supercomputer’s nodes, increasing efficiency and reducing time to solution. That work made the team a finalist for the 2011 Gordon Bell Prize.

Despite significant advancements in the team’s code development, Karniadakis still sees plenty of room for improvement. As high-performance computers continue to get more powerful, Karniadakis predicts his team will develop larger-scale simulations capable of gaining deeper insight into blood-based illnesses.

“We are pushing the envelope on using current computational resources, and one of the difficulties we still struggle with is whether we can actually compute from the onset of a disease to the effects of that disease,” Karniadakis said. “It can take years between the onset and effect of a disease. In this context, you cannot just rely on computing or mathematics alone, so we rely on both.”

Image credit: Yu-Hang Tang, Brown University
Source: Oak Ridge National Laboratory. For more information, please visit science.energy.gov.

Oak Ridge National Laboratory is supported by the US Department of Energy’s Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time.

 
