Singularity: a "Docker" for HPC environments
Joana Chavez Jun 12
Companies nowadays are under pressure to digitally transform their applications, but they are constrained by existing applications and infrastructure while rationalizing a diverse portfolio of clouds, data centers, and application architectures.
Many different container platforms have been released to the market; the most well-known is Docker, a very powerful container platform that works perfectly for "almost" all application use cases. I say "almost" because Docker is oriented toward microservice virtualization.
However, in the HPC (High Performance Computing) context, scheduling resources is an essential feature that can considerably affect the performance of the system. These applications run a wide range of computationally intensive tasks in fields such as quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling, biological macromolecules, and physical simulations. Centers and companies that run this type of application cannot take the risk of running Docker in their environments because it simply does not fit their use case. Some of the main reasons for this incompatibility are the following:
Security issues: HPC environments are multi-user systems where users should only have access to their own data. Docker relies on a daemon running with superuser privileges, so a user who can reach that daemon can effectively escalate to root, gaining access to other users' data and, with it, control of the cluster and its computing resources. Singularity avoids this security problem by supporting several different privilege mechanisms:
Root-owned daemon process: This mechanism is the most similar to Docker: a root-owned background process manages containers and spawns jobs within them. Users communicate with the root-owned daemon through an IPC control socket.
Limited usage to root: This mechanism allows only the root user to run containers.
User namespaces: This mechanism allows a user to virtually become another user and run a limited set of privileged system functions. Singularity supports user namespaces natively and can run completely "rootless", i.e. without any privileges, although the available features are then severely limited.
SetUID: Singularity supports the "old school" UNIX method of running a particular program with escalated permissions. This mechanism offers a lot of flexibility in terms of supported features and legacy compliance.
Capability sets: Capability sets let you grant privileges on a per-process and per-file basis. They are an alternative to SetUID that allows much finer-grained capability control.
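On systems where Singularity is installed by an administrator, these mechanisms are typically toggled in the singularity.conf file. The snippet below is only a sketch of the kind of directives involved; exact names and defaults vary between Singularity versions, so check the configuration file shipped with your installation:

```
# singularity.conf (illustrative excerpt)

# Use the SetUID starter binary for privileged operations
# (set to "no" to rely purely on unprivileged user namespaces).
allow setuid = yes

# Bind-mount the user's home directory into the container.
mount home = yes

# Virtualize the PID namespace so container processes
# are isolated from the host process table.
allow pid ns = yes
```

Which combination of these settings is appropriate depends on how much your site trusts its users and whether the kernel supports unprivileged user namespaces.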
Another awesome feature of Singularity is portability. You can pack your entire environment, with all the scientific apps and tools you need inside, into a single container image and carry it from system to system. Docker, by contrast, stores images as stacks of layers, which makes them more difficult to share.
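In practice, a Singularity image is described by a small definition file and built into a single file you can copy anywhere. The sketch below assumes an Ubuntu base image and an illustrative Python dependency:

```
# myapp.def — a minimal Singularity definition file (illustrative)
Bootstrap: docker
From: ubuntu:20.04

%post
    # Commands run once, inside the container, at build time
    apt-get update && apt-get install -y python3

%environment
    export LC_ALL=C

%runscript
    # Entry point executed by "singularity run"
    exec python3 "$@"
```

Building it with `singularity build myapp.sif myapp.def` produces a single `myapp.sif` file that can be copied to another machine with `scp` and executed there with `singularity run myapp.sif` — no layer registry required.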
Singularity was born out of the need to address all of the above. It is developed by Sylabs and a community that continuously works on the platform. The first public release came in April 2016, and it saw massive uptake from companies and research groups around the world. In fact, this powerful tool caught the attention of the HPCwire team and was named one of the top technologies to watch in 2017.
We have only just started to scratch the surface of deep learning and AI. This is still a new, largely unexplored field, and it is expected to keep evolving rapidly for at least the next 20 years. Singularity is already the de facto standard in the academic world for running HPC workloads related to ML, DL, and AI, and it is deployed at hundreds of supercomputing research centers worldwide. Singularity is the best choice for enterprises that want to enter these emerging fields because it creates a bridge between the research world and the enterprise world, allowing recent advances in research to flow freely into enterprise offerings. New disruptive technologies require enterprise IT infrastructure to adapt quickly, but this conflicts with IT departments' need to preserve stability, security, and performance. Singularity resolves this tension by introducing a layer of abstraction between applications and the underlying operating system, while providing a safe passthrough that lets ML, DL, and AI workloads access the computing resources they require directly, with no performance penalty and without compromises on security.
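As a concrete example of that passthrough, a common pattern is to wrap an existing Docker ML image in a Singularity definition file and run it with direct access to the host's NVIDIA GPUs (the image tag below is illustrative):

```
# tf.def — reuse an existing Docker image as-is (illustrative tag)
Bootstrap: docker
From: tensorflow/tensorflow:latest-gpu
```

After `singularity build tf.sif tf.def`, a command like `singularity exec --nv tf.sif python3 train.py` binds the host's NVIDIA driver and device files into the container, so the workload talks to the GPUs directly rather than through an emulation layer.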