ISC Keynote: The Algorithms of Life – Scientific Computing for Systems Biology

By John Russell

June 19, 2019

Systems biology has existed loosely under many definitions for a couple of decades. It’s the notion of describing living systems using first-principles physics and mathematics to capture life in equations that are both descriptive and predictive – and, let’s add, productive, by which we mean able to deliver therapies (drugs and the like) to enhance health and fight disease.

Doing that has proven difficult at best and disappointing at worst, as even a cursory glance at the state of healthcare reveals – notwithstanding many marvelous breakthroughs, such as sequencing the human genome and the steady chipping away at functional genomics (and other ‘omics) to better understand how DNA informs what we become.

Ivo Sbalzarini

With apologies to ISC organizers, I’ve stolen the name of the opening keynote by Ivo Sbalzarini – The Algorithms of Life – Scientific Computing for Systems Biology – for the headline of this article in an attempt to capture his expansive presentation. Thanks also to Sbalzarini for providing a few of his slides.

Given all we know today and the steady gush of experimental data from modern instruments, what we are missing, said Sbalzarini, are the algorithms to make sense of it all. Having poked away at this problem for nearly as long as it has been around, Sbalzarini presented a sweeping approach to digging out those algorithms, one that capitalizes on recent advances in imaging technology; immersive virtual/augmented reality; a sophisticated analysis approach that leverages particle-mesh mathematics, built into the software platform OpenFPM; and, no surprise, the steadily growing power of HPC.

As in many important life sciences advances, the ‘lowly’ fruit fly took center stage. In this instance the analysis investigated a dysregulation in embryogenesis – specifically, the failure of tissue to fold properly. In the end, the researchers identified the influences of DNA, the chemical environment, and the mechanical environment, and delivered a predictive understanding of the embryo’s tissue response. Lest you think this is old work, it was presented last week at the New York Scientific Data Summit.

Getting from Sbalzarini’s nascent research 15 years ago to the impressive results (and tool suite) presented was a long journey. We’ll summarize as much as practical, but ISC is likely to archive its keynote; for biologists it is well worth watching.

Advanced imaging, such as light sheet microscopy, now makes it possible to observe life science phenomena in 3D and in great detail at the cellular and intracellular level.

“We can image an embryo from the time it is fertilized to the time it moves out of the microscope field by itself and continues its life. When we image the fruit fly embryo over the 72 hours of development, we gather 180 TB of image data. If you would like to visualize that in real time, that means a rendering throughput of about 1.8 gigapixels per second,” said Sbalzarini[i]. A key advantage here is that the animal stays alive, unlike with older approaches requiring stains and fixing.

Hardly just pretty pictures, the extensive image data captured (and the visualizations possible) are the raw input for building hypotheses and predictive models. The other primary driver is Sbalzarini’s clever adaptation of particle-mesh techniques to convert the data into actionable in silico simulations. Underlying HPC infrastructure, of course, is the engine without which the whole process would grind to a halt.

“The numerical methods are particle methods or hybrid particle-mesh methods. They comprise an interesting class of numerical methods. They discretize the system by particles, so if you have a complex geometry, you don’t need to generate a mesh for the simulation, but you simply fill the geometry with particles that store the variables; there can be a mesh in addition, in order to compute, for example, forces from far-field equations,” he said.

“This is a classic framework of particle-mesh methods to solve partial differential equations, but particle methods as an algorithm are much more general than that. I would define as a particle method everything that is composed of dots – zero-dimensional elements – characterized by a position in some space and some properties that they carry. Such an algorithm can be used to solve partial differential equations where the particles are the collocation points of your discretization and they store the values of the field at that position.”

He adds quickly, “There is nothing that limits us to having particles interacting in a deterministic fashion, and this then also allows us to solve stochastic differential equations numerically, or to perform agent-based simulation or agent-based modeling.”
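To make that concrete, here is a minimal sketch in C++ – our illustration, not code from the talk – of the simplest kind of deterministic particle method: particles act as collocation points for the 1D diffusion equation du/dt = D·d²u/dx², each carrying a position and a field value that it updates by interacting with its neighbors.

```cpp
// Minimal sketch of a deterministic particle method (illustrative only):
// particles are collocation points for du/dt = D * d2u/dx2 in 1D.
#include <cmath>
#include <cstdio>
#include <vector>

struct Particle {
    double x; // position in space (fixed collocation point here)
    double u; // field value the particle carries
};

int main() {
    const int    N  = 101;             // number of particles
    const double L  = 1.0;             // domain length
    const double D  = 0.1;             // diffusion constant
    const double h  = L / (N - 1);     // particle spacing
    const double dt = 0.4 * h * h / D; // stable explicit time step

    // Seed the particles with a sharp concentration bump in the middle.
    std::vector<Particle> p(N);
    for (int i = 0; i < N; ++i) {
        p[i].x = i * h;
        p[i].u = std::exp(-100.0 * (p[i].x - 0.5) * (p[i].x - 0.5));
    }

    // Each step, every particle updates its value from its neighbors –
    // a purely local, deterministic particle-particle interaction.
    std::vector<double> unew(N, 0.0);
    for (int step = 0; step < 200; ++step) {
        for (int i = 1; i < N - 1; ++i)
            unew[i] = p[i].u +
                      dt * D * (p[i - 1].u - 2.0 * p[i].u + p[i + 1].u) / (h * h);
        for (int i = 0; i < N; ++i) p[i].u = unew[i]; // ends stay at zero
    }

    std::printf("u at domain center after diffusion: %f\n", p[N / 2].u);
    return 0;
}
```

Swapping the deterministic update rule for a random one would turn the same skeleton into the stochastic or agent-based simulations Sbalzarini mentions.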

Building the computational tools to deliver these models has been a challenging and lengthy task for which Sbalzarini is well-qualified. He is the chair of scientific computing for systems biology on the faculty of computer science of TU Dresden, as well as the faculty of mathematics, and director of the TUD-Department in the Center for Systems Biology Dresden. He also is a permanent Senior Research Group Leader with the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden.

Leaving out many details and with regrets for over-simplification: Sbalzarini and colleagues imaged the fruit fly embryo; used machine learning to identify ‘algorithms’; converted the data and algorithms into models based on particle-mesh approaches using their home-developed platform; ran computational experiments to test their hypotheses; and used immersive visualization technology to let researchers watch the real process and the simulations unfold. “It is possible to walk around inside the simulation,” he said. Informed by what they saw and their knowledge, researchers tweaked parameters and hypotheses, iteratively converging on a solution.

“To me it is a very nice example of how HPC and these numerically intricate simulations that we can do with these machines allow us to bridge from the molecular scale to the tissue scale in order to explain how things work and in order to propose remedies,” said Sbalzarini.

Sbalzarini reminded the audience that living systems are computing machines themselves: “[A fruit fly embryo] is a massively parallel and fully self-organized system in which we can view every single cell as a processing element that executes programs. [It’s a] highly interconnected computer and able to solve NP-hard problems with billions or hundreds of billions of processing elements. We know a lot about the hardware of this computer – the proteins, the molecules, the lipids, the fats out of which this computer is made – and thanks to sequencing technology, [we’re] able to read the source code of this computer, which is the genomic sequence. However, we have no idea what algorithms this source code implements on this hardware.”

Now, advanced imaging and machine learning capabilities are catalyzing researchers’ ability to identify ‘mechanistic’ guidelines and incorporate traditional formulations (ODEs/PDEs) of physical laws and mathematics into the life sciences toolbox. Chemical diffusion. Fluid dynamics. EMI influences. Activation energy thresholds. These are the kinds of attributes that can be captured in particle-mesh models.
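As an illustration of the mesh half of such models – the far-field machinery Sbalzarini alluded to earlier – here is a minimal hypothetical C++ sketch of particle-to-mesh deposition, the step in which particle-carried quantities are gathered onto a mesh so that field equations can be solved there. All names and values are illustrative, not from OpenFPM.

```cpp
// Minimal sketch of the particle-to-mesh half of a hybrid method
// (illustrative only): particle masses are deposited onto a 1D mesh
// with linear "cloud-in-cell" weights; a far-field equation (e.g. a
// Poisson solve for forces) could then be solved on the mesh.
#include <cstdio>
#include <vector>

int main() {
    const int    M  = 16;    // number of mesh cells
    const double L  = 1.0;   // domain length
    const double dx = L / M; // mesh spacing

    // Illustrative particle positions (strictly inside the domain) and masses.
    std::vector<double> xp = {0.12, 0.45, 0.46, 0.83};
    std::vector<double> mp = {1.0, 0.5, 0.5, 2.0};

    std::vector<double> rho(M + 1, 0.0); // density field on mesh nodes

    // Cloud-in-cell deposition: each particle splits its mass between
    // the two mesh nodes that bracket it, weighted by proximity.
    for (std::size_t k = 0; k < xp.size(); ++k) {
        const int    i = static_cast<int>(xp[k] / dx); // left node index
        const double w = xp[k] / dx - i;               // fractional offset
        rho[i]     += mp[k] * (1.0 - w) / dx;
        rho[i + 1] += mp[k] * w / dx;
    }

    for (int i = 0; i <= M; ++i)
        std::printf("node %2d  rho = %f\n", i, rho[i]);
    return 0;
}
```

Interpolating the mesh solution back onto the particles would complete the hybrid particle-mesh cycle.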

When Sbalzarini began his studies in earnest, he used an NEC SX-5 with 512 processors housed at CSCS (the Swiss National Supercomputing Centre). In 2005 that became a Cray XT3 with 1,664 processors. A lot has changed since. The first iteration of the systems biology software platform his team developed was the Parallel Particle Mesh library (PPM), written in Fortran 90 many years ago. It served as a layer between MPI and client applications for simulations of physical systems using particle-mesh methods. The PPM library runs on single- and multi-processor architectures and handles 2D and 3D problems.

“The PPM library had two parts: what we call the PPM core, which implements all the communication primitives, the load balancing, the file I/O, [and] the distributed data structures; and the PPM numerics, which implements frequently used numerical solvers, in part by using the abstractions from the core and in part by wrapping third-party libraries such as PETSc or FFTW. On top of PPM there is a domain-specific programming language called PPM Language (PPML) which provides a reasonably simple way of coding PPM, but you could also directly interface with the Fortran API,” he said.

PPML used overloading and generic interfaces and provided implementations of the important routines for different hardware platforms – vector processors like the NEC system, shared memory, distributed memory, even single-processor systems, said Sbalzarini.

It was a beast to maintain. “Because of overloading, the amount of source code in the PPM library was huge – several million lines of code that needed to be maintained and ported. What we liked about PPML was the abstraction on which it is based. It’s a set of abstract data types and abstract operators for computing that are, in our opinion, the most coarse-grained abstractions possible that still cleanly separate computation from communication. So in PPM an abstraction would either only compute but not incur any communication overhead, or it would only communicate but not do any computation,” he said.
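That separation is easy to picture in code. The following C++ sketch is ours – an illustration of the principle, not the actual PPM/PPML API: every operation either computes on locally owned data without communicating, or communicates without computing, so communication costs remain explicit and isolated.

```cpp
// Illustrative sketch of the PPM abstraction principle (not the real API):
// an operation either computes locally or communicates – never both.
#include <vector>

// Computation only: touches data this process already owns; no messages.
struct ScaleOp {
    void operator()(std::vector<double>& local) const {
        for (double& v : local) v *= 2.0;
    }
};

// Communication only: refreshes ghost copies; does no arithmetic.
// In a real library this is where the MPI calls would live.
struct GhostExchangeOp {
    void operator()(std::vector<double>&) const {
        // stub – communication is confined to operations like this one
    }
};

// A simulation step composes the two phases, keeping them cleanly apart.
template <typename Exchange, typename Compute>
void step(std::vector<double>& local, Exchange exchange, Compute compute) {
    exchange(local); // communicate
    compute(local);  // compute
}

int main() {
    std::vector<double> field(8, 1.0);
    step(field, GhostExchangeOp{}, ScaleOp{});
    return 0;
}
```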

Five years ago the platform was upgraded. “We decided to keep the abstractions, to keep the definitions of the data types and the operators, but now implement a C++ library, which is called OpenFPM (Open Framework for Particle Methods), and make use of template metaprogramming in C++ for compile-time code generation. OpenFPM can do much more than PPM; for example, it can do simulations in arbitrary-dimensional spaces where PPM is limited to 2D and 3D. OpenFPM allows particle properties to be objects of any C++ type that the user can define, and all the communication and file I/O will work for them,” he said.
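The flavor of that design can be sketched in plain C++ (a hypothetical illustration, not OpenFPM’s real API): the space dimension and the particle property type become compile-time template parameters, so a single generic container yields specialized code for 2D, 3D, or any dimension, carrying any user-defined property object.

```cpp
// Hypothetical sketch of dimension- and property-generic particles via
// C++ template metaprogramming (not OpenFPM's actual API).
#include <array>
#include <cstdio>
#include <vector>

template <unsigned Dim, typename Props>
struct ParticleSet {
    std::vector<std::array<double, Dim>> pos; // Dim is fixed at compile time
    std::vector<Props> props;                 // any user-defined payload

    void add(const std::array<double, Dim>& x, const Props& p) {
        pos.push_back(x);
        props.push_back(p);
    }
};

// A user-defined property object; communication and file I/O in a real
// framework would be generated for this type at compile time.
struct CellState {
    double concentration;
    int    lineage;
};

int main() {
    ParticleSet<3, CellState> cells3d; // a 3D instantiation
    cells3d.add({0.1, 0.2, 0.3}, {1.5, 7});

    ParticleSet<4, double> abstract4d; // arbitrary dimensions, unlike PPM
    abstract4d.add({0.0, 1.0, 2.0, 3.0}, 42.0);

    std::printf("3D particles: %zu, 4D particles: %zu\n",
                cells3d.pos.size(), abstract4d.pos.size());
    return 0;
}
```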

Adopting template metaprogramming reduced the amount of code needed: “about a factor of ten less complexity than the PPM.”

Sbalzarini presented many more details in his rich talk. It will be interesting to watch how widely OpenFPM is used and whether it gains traction in other domains. Ease of use is a key question for many biomedical researchers and clinicians. Sbalzarini said, “This hopefully makes HPC so easy to use that every science-based application in biology, in computational biology, and also in other fields can benefit.”

That said, computing expertise, particularly HPC expertise, has historically been lacking in the life sciences, although that is changing, and fairly quickly.

The main motivation is to understand biology – how cells form tissues – and eventually to be able to provide novel explanations for disease phenotypes and maybe therapies for disease, said Sbalzarini. Nevertheless, “For us as computer scientists it’s also just a lot of fun because what we do combines several technologies that we think are fun to work with, technologies like virtual reality, HPC, massively scalable software systems, building microscopes and playing with optics, or using and developing artificial intelligence and learning algorithms to interface with the living things in the microscope.”

[i] Some quotes have been very lightly edited to improve readability.