Adding MUSCLE to Multiscale Simulations

By Joris Borgdorff, Derek Groen, and Mariusz Mamonski

December 11, 2013

Multiscale models help us understand phenomena at a wider scope or an increased level of detail. These models allow us to take the best from multiple worlds, for example by combining models with a fine-grained time or space resolution with models that capture a system over a much larger extent.

Classical examples of multiscale modeling include coupling atomistic models to coarse-grained models, in which several atoms are represented as a single fused particle, and coupling fine-grained fluid dynamics models to coarser structural mechanics models. In both cases we understand the single-scale phenomena fairly well, but we still know little about the interactions between them. Since these interactions are key to understanding the phenomena as a whole, many researchers are now actively developing and using multiscale modeling techniques [1].

Researchers from different disciplines recognized their common need for a general multiscale computing approach and in 2010 started the European e-Infrastructure project MAPPER. The project aimed to bring their demanding multiscale applications to HPC, using the commonalities of multiscale modeling across applications in biomedicine, hydrology, nanomaterials, fusion, and systems biology.

The project settled on the theoretical and component-based Multiscale Modeling and Simulation Framework [2], which defines a multiscale model as a set of coupled single-scale models (see Fig. 1). This approach allows code reuse, since single-scale models often already exist, and creates clear opportunities for scheduling and distributing multiscale models, since the coupling is separated from the single-scale code. Single-scale models (or submodels) are implemented, verified, and validated in isolation, after which their interactions are added. Submodels interact through input and output ports, which carry anything from simple parameters to entire datasets or geometries. A conduit transports the data from one port to another, and intermediate components transform the data in transit, implementing so-called scale-bridging techniques.
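
To make this component structure concrete, here is a minimal, self-contained Python sketch of the pattern the framework describes. All class and port names are ours, chosen for illustration; this is not the MUSCLE 2 API. A submodel only ever talks to its own ports, a conduit moves the data, and an optional transform on the conduit plays the role of a scale-bridging component.

```python
# Illustrative sketch of the submodel/port/conduit pattern (hypothetical
# names, not MUSCLE 2 code).
import queue


class Port:
    """A named endpoint a submodel writes to or reads from."""
    def __init__(self, name):
        self.name = name
        self._buffer = queue.Queue()

    def send(self, data):
        self._buffer.put(data)

    def receive(self):
        return self._buffer.get()


class Conduit:
    """Transports data from an output port to an input port, optionally
    applying a scale-bridging transformation along the way."""
    def __init__(self, source, sink, transform=None):
        self.source, self.sink = source, sink
        self.transform = transform or (lambda x: x)

    def transfer(self):
        self.sink.send(self.transform(self.source.receive()))


# A fine-scale submodel writes to 'state_out'; a coarse submodel reads a
# downsampled copy from 'state_in'. The two submodels never see each other.
fine_out = Port('state_out')
coarse_in = Port('state_in')
coarsen = Conduit(fine_out, coarse_in, transform=lambda u: u[::10])

fine_out.send(list(range(100)))   # fine submodel produces a dataset
coarsen.transfer()                # conduit applies the scale bridge
print(len(coarse_in.receive()))   # coarse submodel sees 10 values
```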

The Multiscale Coupling Library and Environment 2 (MUSCLE 2, http://apps.man.poznan.pl/trac/muscle) was created to implement and execute multiscale models with feedback loops, which we call cyclic coupling topologies [5]. MUSCLE 2 is a genuinely domain-independent approach: it has so far been adopted by, among others, the MAPPER applications mentioned above, and it runs on several supercomputers and clusters in Europe as well as on the Amazon cloud infrastructure. It consists of a library, scripted coupling, and a runtime environment (see Fig. 2).

Figure 2. Layered design of MUSCLE 2, separating implementation, coupling, and execution.

By design, submodels do their computations independently, and MUSCLE 2 allows them to be implemented in different programming languages (C, C++, Fortran, Java, Scala, Python, or MATLAB) and to run on multiple machines. Conduits in MUSCLE 2 use shared-memory communication where possible and TCP/IP communication otherwise, which makes communicating between different computers as transparent as running on a single host. A central simulation manager acts as a white-page service for the submodels in a simulation, but once a submodel has registered there, it passes messages to other submodels directly, in a decentralized way, so that the manager does not become a bottleneck. By running each submodel in a separate process or thread, MUSCLE 2 has inherent multiscale parallelism.
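
A small sketch makes this division of labor concrete (the names below are hypothetical, not MUSCLE 2 code): the manager only resolves submodel names to network addresses, and every message afterwards travels peer to peer.

```python
# Illustrative sketch of a white-page service; names are hypothetical.
class SimulationManager:
    """Maps submodel names to network addresses, nothing more."""
    def __init__(self):
        self._registry = {}

    def register(self, name, address):
        self._registry[name] = address

    def lookup(self, name):
        # After this lookup, the manager is out of the loop entirely.
        return self._registry[name]


manager = SimulationManager()
manager.register('fluid', ('node17', 5001))
manager.register('structure', ('node42', 5002))

# The 'fluid' submodel resolves its peer once, then connects directly,
# so the central manager never becomes a communication bottleneck.
host, port = manager.lookup('structure')
print(f"fluid connects directly to {host}:{port}")
```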

MUSCLE 2 separates the submodel code, which knows only about its own input and output ports, from the coupling code, which knows which ports will be coupled. This allows users to change the coupling topology without recompiling or redeploying code. Additionally, the coupling code is independent of the resources on which the simulation will eventually run, so the same coupling can be submitted to multiple machines or spread out over them, even when they reside in different countries. This allows us not only to use more resources but, more importantly, to take advantage of architectures that are optimal for each of the models involved. For example, some models in a simulation may greatly benefit from GPGPUs, whereas others have large memory requirements.
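
As an illustration of this separation, consider a coupling description reduced to plain Python data. MUSCLE 2's actual coupling files are short scripts with their own conventions; everything below, from the submodel names to the dictionary layout, is a hypothetical stand-in. The point is that the submodel binaries know only their own port names, while the topology lives entirely in this one place.

```python
# Hypothetical stand-in for a coupling description (not MUSCLE 2's
# actual coupling-file syntax). Changing the topology means editing
# this file only; no submodel is recompiled or redeployed.
coupling = {
    'submodels': {
        'fluid':     {'implementation': 'fluid3d',  'ports': ['boundary_out']},
        'structure': {'implementation': 'fem_wall', 'ports': ['boundary_in']},
    },
    'conduits': [
        # (output port,           input port)
        ('fluid.boundary_out', 'structure.boundary_in'),
    ],
}

# Rerouting the simulation, e.g. inserting a scale-bridging filter
# between these two ports, is a one-line change here, while the
# fluid3d and fem_wall codes stay untouched.
for src, dst in coupling['conduits']:
    print(f"conduit: {src} -> {dst}")
```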

Within a supercomputer, MUSCLE 2 can make direct connections between processes, but almost all supercomputers have firewalls in place that prevent direct connections between worker nodes of different supercomputers. MUSCLE 2 resolves this with a TCP/IP forwarding service, the MUSCLE Transport Overlay (MTO), which runs on the interactive nodes of each cluster and forwards messages between MUSCLE 2 installations on different clusters.
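
The forwarding principle can be sketched in a few lines (an illustration of the idea, not MTO's actual code): a relay runs on an interactive node, which both the worker nodes and the remote site can reach, and copies bytes between the two.

```python
# Bare-bones TCP relay illustrating the forwarding idea behind MTO
# (a sketch of the principle, not MTO's implementation).
import socket
import threading


def pipe(src, dst):
    """Copy bytes one way until the connection closes."""
    while (chunk := src.recv(4096)):
        dst.sendall(chunk)
    dst.close()


def forward(listen_port, remote_host, remote_port):
    """Accept connections from worker nodes behind the firewall and
    relay their traffic to a peer relay on another cluster."""
    server = socket.socket()
    server.bind(('', listen_port))
    server.listen()
    while True:
        inbound, _ = server.accept()
        outbound = socket.create_connection((remote_host, remote_port))
        # Shuttle traffic in both directions between the two clusters.
        threading.Thread(target=pipe, args=(inbound, outbound), daemon=True).start()
        threading.Thread(target=pipe, args=(outbound, inbound), daemon=True).start()
```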

To optimize the speed for large messages, MUSCLE 2 can optionally use the MPWide library (http://www.github.com/djgroen/MPWide), a high-performance communication library that has been used to run cosmological simulations parallelized across multiple supercomputers [3]. MUSCLE 2 processes can be run directly as a supercomputer job and have few dependencies, but they require the address of the site's simulation manager to connect to other processes. This address can be supplied manually or automatically, via a dedicated service such as the one provided by the QosCosGrid software stack (http://www.qoscosgrid.org/). QosCosGrid provides middleware solutions, notably on the Polish national grid (PL-Grid), that allow users to schedule and coordinate simulations running on multiple distributed HPC resources [4]. MUSCLE 2 is open-source software (LGPL version 3 license) that runs on Linux and OS X and can be installed without administrative privileges.
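
The bootstrap step is correspondingly small: a process needs just one address to join a simulation. A hedged Python sketch, in which the MANAGER_ADDRESS variable and the registration message are both hypothetical:

```python
# Hypothetical bootstrap sketch: a process joins a simulation knowing
# only the simulation manager's address.
import os
import socket

# Whether the address arrives by hand (job script, command line) or
# from middleware such as QosCosGrid is an operational detail; an
# environment variable stands in for all of those here.
host, port = os.environ.get('MANAGER_ADDRESS', 'login1:5000').split(':')

with socket.create_connection((host, int(port)), timeout=10) as conn:
    # A registration handshake would follow; afterwards, all
    # submodel-to-submodel traffic bypasses the manager.
    conn.sendall(b'REGISTER fluid\n')
```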

In memoriam: Mariusz Mamonski (1984–2013)

We wish to dedicate this paper to the memory of Mariusz Mamonski, whose sudden death came as a shock to us all. The MUSCLE 2 team would like to thank Mariusz for all his professional and personal contributions to distributed multiscale computing. His dedication to end users, his insight into software quality, and his experience with e-infrastructures were truly impressive, and he will be sorely missed.

Biographies:

Joris Borgdorff is a PhD candidate in the Computational Science group at the University of Amsterdam, researching the formal background of multiscale and complex-systems modeling and the applied aspects of distributed multiscale computing. He received a BSc in Mathematics and in Computer Science (2006) and an MSc in Applied Computing Science (2009) from Utrecht University. He is involved in the European MAPPER and Sophocles projects.

Derek Groen is a post-doctoral researcher at the Centre for Computational Science at University College London and a Fellow of the Software Sustainability Institute. He has expertise in high-performance and distributed computing, as well as multiscale simulation. Derek has worked on a range of applications, and has used supercomputers to model star clusters, cosmological dark matter structures, clay-polymer nanocomposite materials, turbulence, and human blood flow. He obtained his PhD in Computational Astrophysics in Amsterdam in 2010.

Mariusz Mamonski (1984–2013) received his diploma in Computer Science from the Poznan University of Technology (Laboratory of Computing Systems) in 2008. He started working at the Application Department of the Poznan Supercomputing and Networking Center in 2005. He contributed to several EU research projects, in particular GridLab, InteliGrid, BREIN, and QosCosGrid, and was involved in the national and European e-infrastructure projects PL-Grid and MAPPER. His research focused primarily on web services, queueing systems, and parallel execution and programming environments. He was an active member of the Open Grid Forum Distributed Resource Management Application API (OGF DRMAA) working group.

Acknowledgements

We would like to thank Bartosz Bosak and Krzysztof Kurowski from the Poznan Supercomputing and Networking Center for their support and input, and Alfons G. Hoekstra from the University of Amsterdam for his feedback.

References:

[1] Groen et al., Survey of Multiscale and Multiphysics Applications and Communities, IEEE Computing in Science and Engineering, http://dx.doi.org/10.1109/MCSE.2013.47.

[2] Borgdorff et al., Foundations of Distributed Multiscale Computing: Formalization, Specification, and Analysis, Journal of Parallel and Distributed Computing, http://dx.doi.org/10.1016/j.jpdc.2012.12.011.

[3] Groen et al., A Lightweight Communication Library for Distributed Computing, accepted by the Journal of Open Research Software, http://arxiv.org/abs/1312.0910.

[4] Kravtsov et al., Grid-Enabling Complex System Applications with QosCosGrid: An Architectural Perspective, Proceedings of the 2008 International Conference on Grid Computing & Applications, Las Vegas, Nevada, USA, 2008, pp. 168–174.

[5] Borgdorff et al., Distributed Multiscale Computing with MUSCLE 2, the Multiscale Coupling Library and Environment, submitted to the Journal of Computational Science, http://arxiv.org/abs/1311.5740.
