Researchers Advance User-Level Container Solution for HPC

By Isabel Campos & Jorge Gomes

December 18, 2017

Most scientific computing facilities, such as HPC or grid infrastructures, are shared among different research disciplines. They are multi-user environments, and thus the system software environment needs to be generic enough to accommodate different user and application profiles.

Because of managerial and technical constraints, such infrastructures cannot afford to offer every research project a tailored environment on their machines. The interest in exploring the applicability of container technology on such systems is therefore evident from the end-user point of view.

Researchers therefore need to customize their application software to fit each computing center's environment at the level of system software and batch system. Containers provide a way to pack and deploy software, including all its dependencies, so that it can be executed seamlessly, independently of the underlying Linux operating system and environment. The main benefit of integrating the execution of containers into HPC systems is thus to provide a way to execute applications homogeneously across different resource centers.

The flagship container software, Docker, cannot be used satisfactorily on HPC systems, grids, and multi-user infrastructures in general. Deploying Docker on such facilities presents a number of problems stemming from the fact that processes within the container are executed with the root id. This raises security concerns among system managers, as root inside the container might be able to gain root privileges on the host machine. Also, when executed as root, processes escape the usual limits on resource consumption and accounting imposed on regular users at shared facilities.

User-level tools

The user-level tool udocker provides a layer for users to execute Docker containers that, by definition, does not require the intervention of system administrators. udocker combines the pulling, extraction, and execution of Docker containers without requiring privileges. The Docker image is extracted into a user-space filesystem area, and from there it is executed in a chroot-like environment.

udocker provides a command-line interface that mimics Docker, offering a subset of its commands to handle Docker images: pulling and extracting images, and executing containers "à la Docker".
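To make this concrete, the pull/create/run cycle can be scripted, for instance from a batch job. The following is a minimal sketch driving the udocker command-line interface from Python; it assumes udocker is already installed in the user's PATH, and the image name and the container name "myapp" are placeholders.

    # Minimal sketch: driving the udocker CLI from Python (e.g., inside a batch job).
    # Assumes udocker is installed in the user's PATH; names are placeholders.
    import subprocess

    def udocker(*args):
        """Run a udocker subcommand and raise if it fails."""
        subprocess.run(["udocker"] + list(args), check=True)

    udocker("pull", "ubuntu:16.04")                    # fetch image layers from Docker Hub
    udocker("create", "--name=myapp", "ubuntu:16.04")  # extract into a user-space area
    udocker("run", "myapp", "cat", "/etc/os-release")  # execute without privileges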

Processes are run without privileges under the regular user id, within the same process tree, thus facilitating the enforcement of the managerial limits imposed on regular users at HPC or grid resource centers.

udocker provides several ways, depending on the application and host environment, to execute containerized applications. It is also possible to access specialized hardware such as InfiniBand for MPI jobs, or GPGPUs, making it suitable for executing containers in batch systems and HPC infrastructures.

udocker enables the execution of Docker containers with different engines based on intercepting system calls. Depending on the application requirements, the user may choose one execution mode or another. For instance, CPU-intensive applications may use udocker in the ptrace execution mode, which intercepts and modifies pathnames; if the application is I/O-intensive, intercepting system calls via library pre-loading using the Fakechroot execution mode is a more suitable way to run the container. All the tools and libraries required by udocker and its execution modes are shipped with udocker itself.

The udocker RunC execution mode employs user namespaces to run containers in rootless mode. This feature can be used on modern Linux distributions with kernels from 3.9 onward. However, most HPC systems are conservative environments, and it will take some time before they can support this execution mode.
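Switching between these engines is done per container with udocker's setup command. The sketch below reuses the hypothetical "myapp" container from above; the mode identifiers P1 (PRoot/ptrace), F3 (Fakechroot), and R1 (rootless RunC) follow udocker's documented naming, but the availability of R1 must be verified against the host kernel.

    # Sketch: selecting a udocker execution engine per container ("myapp" as above).
    import subprocess

    def udocker(*args):
        subprocess.run(["udocker"] + list(args), check=True)

    # P1: PRoot/ptrace engine -- pathnames are translated by intercepting syscalls.
    udocker("setup", "--execmode=P1", "myapp")
    udocker("run", "myapp", "/bin/sh", "-c", "echo running under ptrace")

    # F3: Fakechroot engine -- interception via shared-library pre-loading,
    # usually preferable for I/O-intensive applications.
    udocker("setup", "--execmode=F3", "myapp")
    udocker("run", "myapp", "/bin/sh", "-c", "echo running under fakechroot")

    # R1: rootless RunC via user namespaces (requires a kernel from 3.9 on).
    udocker("setup", "--execmode=R1", "myapp")
    udocker("run", "myapp", "/bin/sh", "-c", "echo running under runc")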

Regarding the impact on performance, the figure below plots the weak-scaling performance of openQCD, a comprehensive software package for Lattice QCD simulations (a CPU-intensive application), from 8 to 256 cores.

As can be seen, the performance of the containerized version of openQCD is slightly higher than that on the host itself, especially when the execution takes place within a single node (the test machine has 24-core nodes).

This behavior has been reported consistently by container users across different hardware and system software settings, and is related to the newer libraries available in the more recent operating system versions inside the container. Clearly, this opens the door to container exploitation on HPC mainframes, where the system software is by necessity very conservative.

Figure Caption: Weak-scaling performance of openQCD with a local lattice of volume 32^4. The tests were performed on the Finisterrae-II HPC system at CESGA (Spain).

Since its first release in June 2016, udocker has spread quickly in the open-source community. It is being used in large international collaborations such as MasterCode, a leading particle physics phenomenology collaboration, which uses udocker to handle the library complexity of its set of codes.

It has also been adopted by a number of software projects to complement Docker, among them OpenMOLE, Bioconda, Common Workflow Language, and SCAR.

System-administration-level tools

Beyond the user level, several solutions have been developed in recent times to support system administrators in deploying customized containers for their users. These solutions rely on the installation of system software by the system administrator, who is also in charge of preparing the containers that users are authorized to run on the system. The most popular of these tools is Singularity.

Singularity can be downloaded and installed from source or binaries, and must be installed by root for the software to have all its functionality. Singularity binaries are therefore installed with the SUID bit set and need to be deployed on a filesystem that allows SUID. Given the security concerns regarding SUID on network filesystems, Singularity is normally installed in a directory locally accessible to the users (i.e., not network-mounted).

Singularity offers its own container registry, the Singularity Hub, and its own specification for creating containers, the Singularity Recipe (the Singularity equivalent of the Dockerfile specification).

The default container format is squashfs, a compressed read-only Linux filesystem; such images need to be created by root.

It also supports a sandbox format, in which the container is deployed inside a standard Unix directory, much like udocker. In particular, executing udocker in its Singularity execution mode will cause the container to be executed via Singularity, if installed on the system; to do this, udocker exploits the sandbox mode.
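A minimal sketch of that combination, again reusing the hypothetical "myapp" container (S1 is udocker's documented identifier for its Singularity mode; Singularity must be installed on the host):

    # Sketch: delegating container execution to Singularity from udocker (mode S1).
    # Assumes Singularity is installed on the host and "myapp" exists as above.
    import subprocess

    subprocess.run(["udocker", "setup", "--execmode=S1", "myapp"], check=True)
    subprocess.run(["udocker", "run", "myapp", "cat", "/etc/os-release"], check=True)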

Singularity's container-building environment belongs to root. Containers may be built from a Singularity recipe, from an existing container on the Singularity Hub, or by importing a container from a Docker repository. Note that the Singularity container format is not compatible with Docker's; in the latter case, the container therefore needs to be converted to the Singularity format.

Once the container exists, it can be executed by a regular user in a way analogous to Docker. These containers can also be checked at the binary level, for sensitive content in the filesystem, or even for particular features defined by the system administrator.
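As an illustration of this workflow, the sketch below imports a Docker Hub image into the Singularity format (performed by root, consistent with the constraints above) and then executes it as a regular user. Image names are placeholders, and the syntax reflects the Singularity 2.4-era command line, which may differ in other versions.

    # Sketch: building and running a Singularity container (2.4-era CLI; names are placeholders).
    import subprocess

    def sh(*cmd):
        subprocess.run(list(cmd), check=True)

    # Import an image from a Docker repository, converting it to Singularity's
    # default squashfs format; building is performed by root.
    sh("sudo", "singularity", "build", "myapp.simg", "docker://ubuntu:16.04")

    # Alternatively, build into a sandbox (a plain directory tree), much like udocker.
    sh("sudo", "singularity", "build", "--sandbox", "myapp_dir/", "docker://ubuntu:16.04")

    # Once built, a regular user can execute the container.
    sh("singularity", "exec", "myapp.simg", "cat", "/etc/os-release")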

Comparing the most popular tools, udocker and Singularity, shows that they have completely different scopes; the selection of one solution or the other depends on user-level priorities and on computing center management policies.

Singularity is a system-administration-level tool, to be installed at that level, giving the managers of the infrastructure full control over which containers may run on the system. udocker, however, is a user tool that acts as a layer over different execution methods, enabling regular users to run containers in their own user space, much in the philosophy of jailed systems.

About the Authors

Jorge Gomes is a computing researcher at the Laboratory of Instrumentation and Experimental Particle Physics (LIP). He worked on the development of advanced data acquisition systems at CERN and participated in pioneering projects in the domains of digital satellite data communications, IP over ATM, and advanced videoconferencing over IP networks. Since 2001 he has participated in numerous projects regarding distributed computing, networks, and security in Europe and Latin America. He is the head of the LIP Advanced Computing and Digital Infrastructures Group, technical coordinator of the Portuguese National Grid Infrastructure, representative of Portugal on the Council of the European Grid Infrastructure (EGI), and responsible for the Portuguese participation in IBERGRID, which joins the Portuguese and Spanish distributed computing infrastructures.

Isabel Campos is a physics researcher at the Spanish National Research Council (CSIC). She holds a PhD in the area of Lattice QCD simulations and has held research associate positions at DESY-Hamburg, Brookhaven National Lab, and the Leibniz Supercomputing Centre in Munich. Since 2005 she has participated in numerous projects aimed at developing software and deploying distributed computing infrastructures in Europe. She is the head of the e-Science and Computing group at IFCA-CSIC, coordinator of the Spanish National Grid Infrastructure, representative of Spain on the Council of the European Grid Infrastructure (EGI), and responsible for the Spanish participation in IBERGRID, which joins the Spanish and Portuguese distributed computing infrastructures.
