Container App ‘Singularity’ Eases Scientific Computing

By Tiffany Trader

October 20, 2016

HPC container platform Singularity is just six months out from its 1.0 release but is already making inroads across the HPC research landscape. It’s in use at Lawrence Berkeley National Laboratory (LBNL), where Singularity founder Gregory Kurtzer has worked in the High Performance Computing Services (HPCS) group for 16 years, and it’s going into other leading HPC centers, including the Texas Advanced Computing Center (TACC), the San Diego Supercomputer Center (SDSC) and many more sites, large and small.

Singularity is just the latest of several large open source projects that Kurtzer, who is the technical lead and HPC architect at LBNL and also at UC Berkeley through a joint appointment, has founded and built. He also created the well-known open source projects CentOS Linux and Warewulf (the latter is now a key part of the emerging OpenHPC stack). With these projects ongoing and Singularity attracting a lot of buzz, Kurtzer has never been busier. The day I talked with him he had just delivered two presentations extolling the merits of Singularity and was about to give another the following day.

Kurtzer, who just rolled out Singularity version 2.2 (see release details here), says there’s a reason for the fast adoption curve. Scientists had seen what Docker could do for enterprise and were clamoring to gain the same benefits for their workflows on HPC infrastructures. With Singularity, users can take their complete application environment and reproducibly run it anywhere Singularity is installed.

In this Q&A with HPCwire, Kurtzer lays out the use case for Singularity and discusses the evolving HPC usage model that is driving the need for container solutions within the traditional HPC ecosystem.

HPCwire: What does Singularity offer scientific computing users?

Kurtzer: Singularity has three primary goals from a user perspective: reproducibility, mobility and end user freedom.

An area receiving a lot of attention right now in scientific computing is the reproducibility of results that rely on computation and data, and that leads to the need for mechanisms to ensure complete reproducibility of the entire stack. Just because we can assemble everything from pieces – the source code, the operating system, libraries and so forth – that’s not necessarily a guarantee that we can reproduce the exact same environment that was used to create a particular scientific result.

One of the ideas behind Singularity is that the container itself is represented by a single file image, and that single file can be copied. I can share it with you, and we can open up the permissions on that file so you, I and an entire group can run it. We can archive that file alongside our data, and you can even email it to yourself. All because it’s a single file.

If you look at Docker and other container solutions, it’s not a single file; it’s usually spread out among directories or layers of archives. Docker, for example, will use a whole bunch of small tarballs to recreate an image, and that makes it very difficult for scientists to archive.

Singularity is designed around the idea of the extreme simplicity of a single file so we can ensure reproducibility and mobility of compute. That means if it works for me on my laptop running Ubuntu, it will work on a high-performance computing resource running CentOS, it will work on another high-performance computing system running Scientific Linux, and it will just run everywhere, including up in the cloud. Again, this is because that file contains the entire encapsulation necessary for the application to run reproducibly — so that satisfies both the reproducibility and the mobility feature requirements.

And the last one is freedom, which basically means that if a developer or a user has built their application or workflow for Ubuntu or Linux Mint or Alpine or whatever their favorite Linux distro is, they can build a container around it and then transfer that file everywhere Singularity is installed, and they will always be able to work inside the operating system image that they built and control. So it gives a lot of freedom to the end users and the developers. And if you couple that with the idea of mobility, people are now starting to do releases of software built around the idea that their software exists inside of a Singularity image, and they can now just distribute that Singularity image.

This means you don’t have to distribute 14 different things that everyone has to assemble by hand. In order to replicate a particular workflow, they can say, this container does all of the bioscience research that this particular set of experiments needs. So you just download that container and you’ve got it; you don’t have to install anything onto the host operating system because it’s all inside that container.
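To make the single-file idea concrete, here is a minimal sketch (not from the interview) of how a workflow script might treat a Singularity image as just another archivable artifact: record its checksum next to the results for provenance, then run the analysis through it with `singularity exec`. The image name, script name and paths are hypothetical.

```python
# Minimal sketch: the whole software environment travels as one file.
# "analysis.img" and "run_pipeline.py" are hypothetical placeholders.
import hashlib
import subprocess

IMAGE = "analysis.img"

# Record exactly which environment produced the results (reproducibility).
with open(IMAGE, "rb") as f:
    image_hash = hashlib.sha256(f.read()).hexdigest()
with open("environment.txt", "w") as f:
    f.write(f"sha256({IMAGE}) = {image_hash}\n")

# Run the analysis inside that environment; nothing is installed on the host.
subprocess.run(
    ["singularity", "exec", IMAGE, "python", "run_pipeline.py"],
    check=True,
)
```

Because the environment is one file, the same hash-and-run pattern works unchanged whether the image sits on a laptop, a cluster scratch file system, or an archive.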

HPCwire: Can you explain in more detail why operating system flexibility is important?

Kurtzer: As a resource provider, we’ll typically buy our hardware or get an integrated system, and it will come with an operating system installed. We’ll spend a bunch of time tuning it, building software applications for it and polishing it up for the users, making it into something that is not only tuned, works properly on the hardware, is optimized for our use cases and has our applications, but that we also have to support. So we spend a lot of effort building the host operating system on any HPC resource. But there will always be those users for whom that particular choice of operating system will not meet their needs. Or they will need different core library versions that are not simply upgradable. No matter what we do, it’s not going to make everybody happy — not to be negative, but it’s just the reality.

Singularity gives users the freedom that they want to bring their own environment. Now how important is that? It really depends on the applications. Let’s say somebody wants to use TensorFlow or the various bio-related Perl or Python stacks (notoriously hard to install). Is it impossible to build those on a Red Hat derivative of an operating system, like CentOS or Scientific Linux? No, it’s not impossible, but it’s a huge amount of work, especially when you consider that various developers are releasing these stacks ready to go for Debian and Ubuntu. It makes the justification of not using Debian and Ubuntu very difficult.

HPCwire: Are there implications for versioning as well?

Kurtzer: Definitely with libraries. A lot of software vendors will release binary versions only, and many of those binaries will be linked against a particular library or library version that is not available on a particular operating system. For example, if you’re linking against something that is easy to upgrade and not part of the core operating system, in many cases you can handle that using an environment variable called LD_LIBRARY_PATH and in fact run against different libraries. But if you’re linking against something that is part of the core, like the C library, you can’t just upgrade it; it’s pretty much impossible, because the moment you try to upgrade it you run the risk of breaking the entire operating system, since the entire operating system is built against that exact version of the C library.

So core library and package versioning is a big deal, and containers do free you from that. For example, if I’ve got an application that requires a certain version of the C library that my host doesn’t provide, you can get around that using a container, because everything in the container is completely independent of the host’s C library.
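As an illustration of the distinction Kurtzer draws, here is a minimal sketch (not from the interview) of the LD_LIBRARY_PATH workaround for easily swapped libraries; the binary name and paths are hypothetical. The same trick cannot be applied to glibc, which is why a container carrying its own complete userspace is the way around core-library mismatches.

```python
# Minimal sketch of the LD_LIBRARY_PATH workaround for non-core libraries.
# "./vendor_tool" and the library path are hypothetical placeholders.
import os
import subprocess

env = os.environ.copy()
# Prepend a privately installed library directory; the dynamic linker will
# search it before the system locations, so an easy-to-replace library
# (say, a newer libhdf5) can be substituted without root access.
env["LD_LIBRARY_PATH"] = "/home/user/libs:" + env.get("LD_LIBRARY_PATH", "")

subprocess.run(["./vendor_tool", "--input", "data.bin"], env=env, check=True)

# This does NOT work for the core C library: the binary, the dynamic linker
# and the rest of the OS all depend on one glibc version, which is exactly
# the case where running inside a container image becomes necessary.
```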

HPCwire: Can you trace the path from the rise of containers, notably Docker, to the creation of Singularity?

[Figure: Singularity architecture. Source: Gregory Kurtzer, LBNL]

Kurtzer: The commonly understood notion of containers stems from the use cases of virtual machines. This means that containers not only share many of the same features but also similar use cases, specifically service-level virtualization. Generally, containers are a more efficient way to achieve service-level virtualization because of the proximity of the applications to the physical hardware; with full hardware virtualization there are not only layers of emulation but also redundancy, both of which affect performance.

Docker is one of the most commonly known container solutions because it is more than just a run-time environment. It’s also a build environment for containers. It’s a sharing platform. It’s an entire platform built to support containers, and they nailed it for the enterprise use case. But scientists are a resourceful bunch; they saw some of the features, like the reproducibility aspects of the Docker platform and the fact that they can leverage each other’s work, and they jumped on board. There are a lot of scientists that have pushed a huge amount of work into Docker containers, and those same scientists then turned to the HPC centers wanting to be able to use those containers on an HPC system, and the HPC service providers were turning back to them and saying, “we can’t integrate Docker, it just doesn’t fit!” There’s not one HPC center that has a full integration of Docker on a traditional HPC environment. Custom-built Docker solutions do exist, but they’re not using the traditional HPC that we’ve all come to know and love and spend gobs of money on: they’re not using high-performance interconnects, they’re not running MPI-like jobs, they’re not using the high-performance or inter-galactic parallel file systems, etc. Essentially they have taken the High Performance out of HPC, leaving only compute.

That’s where Singularity came to be. We need to satisfy the requirements of the scientists. We need to be able to leverage the work that they’ve already done, that’s already in Docker. Singularity has the ability to run a container right out of the Docker registry and/or push that over to an HPC resource and run it over there as well.
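Here is a minimal sketch (not from the interview) of that Docker-to-HPC path: materialize a Docker Hub image as a single Singularity image file, copy it to a cluster, and execute a command inside it there. The docker:// URI handling shown matches later Singularity releases, so the exact subcommands available in version 2.2 may differ; hostnames and paths are hypothetical.

```python
# Minimal sketch: from a Docker registry to a single image file to a cluster.
# Assumes a Singularity release with docker:// support; paths/hosts are made up.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Convert a Docker Hub image into one Singularity image file.
run(["singularity", "pull", "--name", "ubuntu_analysis.img", "docker://ubuntu:16.04"])

# Mobility of compute is just a file copy.
run(["scp", "ubuntu_analysis.img", "login.cluster.example.edu:containers/"])

# On the cluster (interactively or from a batch job), run inside that environment:
#   singularity exec containers/ubuntu_analysis.img cat /etc/os-release
```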

HPCwire: What kind of traction and adoption have you had?

Kurtzer: I did the first release (1.0) in April 2016, and I just released version 2.2. Now I can’t tell you which sites are running it and which are not, because most people don’t contact me to tell me about it; they’re just running the code. But out of the ones who have contacted me and spoken with me, I know right now TACC is in the process of integrating it, SDSC has already integrated it on 65,000 cores, University of Florida on 51,000 cores, NIH on 50,000 cores. Our site has 38,000 cores, which includes our institutional Lawrencium cluster. University of Nebraska is also at about 14,000 cores.

Additionally, other sites are running Singularity: UC Berkeley, UC Irvine, Notre Dame, Stanford and Cambridge have it installed, and there are even more; I just don’t know about them all.

One of the major factors in the massive uptake has to do with the simplicity of installing and configuring Singularity. Singularity is a single package installation (RPM or DEB) and once it is installed, users can immediately begin utilizing Singularity containers interactively, in their workflows and/or via the resource manager. This has huge benefits for HPC service providers in that there is no need to modify the system architecture, scheduler configuration, or security paradigm in order to utilize Singularity on traditional existing HPC resources.

HPCwire: When you look at the strata of HPC workloads and users, where is the biggest need for Singularity?

Kurtzer: The type of science that we’re seeing is not so much the typical HPC jobs, but rather the areas of research that are just starting to utilize HPC (for example humanities, social science, public policy, business, political science, etc.). Sometimes called the long tail of science, these are potentially huge areas of research that can make use of HPC. I personally have seen a huge amount of uptake from the long tail of science – the scientists that are not writing compiled code, the scientists that are not as worried about massive scalability. We’re also seeing a lot of uptake along the lines of non-compiled code that could run at scale. We have some pyMPI jobs that could run across thousands of nodes and tens of thousands of cores and don’t necessarily run very efficiently, but the cost of human development time is a lot more than paying for CPU time, and non-compiled, higher-level languages (such as Python) facilitate this.
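For a sense of what such a non-compiled MPI job looks like, here is a minimal sketch (not from the interview) using mpi4py, a common Python MPI binding (the interview names pyMPI; either illustrates the trade of some runtime efficiency for development time). In a containerized workflow this script would typically be launched by the host’s MPI launcher, with each rank running inside the image.

```python
# Minimal sketch of a Python-level MPI job; uses mpi4py as the binding.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank computes its own slice of a trivially parallel problem...
local_sum = sum(i * i for i in range(rank, 1_000_000, size))

# ...and one collective call combines the partial results on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"sum of squares below 1,000,000 = {total} across {size} ranks")
```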

HPCwire: How is Singularity different from Shifter, the container solution developed at NERSC?

Kurtzer: While there is some overlap in terms of the container runtime implementations (e.g. both projects use namespaces, maintain user credentials, and support HPC), the primary distinctions are within the fundamental design goals and thus implementation features.

For example, Shifter is designed around the idea of implementing Docker on HPC and the best way to implement that is really not to use Docker at all (as I mentioned, Docker is designed for service level virtualization, not HPC). Considering users have been asking for Docker support on HPC for years, NERSC satisfied those requests by implementing Shifter which takes Docker images and shifts them to a format and mode of operation which can be utilized on large HPC resources. Shifter also integrates with the resource manager to properly execute jobs natively within a specified container.

Singularity, on the other hand, focuses on the mobility and reproducibility of the images themselves and uses those images as the vector of portability. Additionally, Singularity does not integrate with the resource manager and relies on users to manage their container-based workflows completely.

One place where we are planning on joining forces is for Shifter to also accept Singularity images (in addition to Docker containers).

More information about Singularity.

More information about Shifter.
