
Peter Benjamin (they/them)

Docker Security Best-Practices


Overview

Containers have revolutionized the tech industry in recent years for many reasons. Because of their properties of encapsulating dependencies into a portable container image, many organizations have adopted them as the primary method of developing, building, and deploying production applications.

As a Software Engineer and a Security Engineer, I cannot even begin to express my excitement for the game-changing potential of containers and orchestrators, namely Kubernetes. However, due to the chaotic (good) nature of open-source and the speed by which projects move and evolve, many organizations are simply unable or unequipped to properly secure these technologies.

This article aims to provide a list of common security mistakes and security best-practices/recommendations in 2018.

In the next article, I will offer the same insights for Kubernetes.

Legend:

Icon Meaning
❌ Not Recommended
🗒️ Rationale
✅ Recommendation

The Host

❌ Running Docker on an unsecured, unhardened host

🗒️ Docker is only as secure as the underlying host

✅ Make sure you follow OS security best-practices to harden your infrastructure. If you dole out root access to every user in your organization, then it doesn't matter how secure Docker is.


Docker Hardening Standard

✅ The Center for Internet Security (CIS) puts out documents detailing security best-practices, recommendations, and actionable steps to achieve a hardened baseline. The best part: they're free.

✅ Better yet, docker-bench-security is an automated checker based on the CIS benchmarks.

# recommended
$ docker run \
    -it \
    --net host \
    --pid host \
    --userns host \
    --cap-add audit_control \
    -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
    -v /var/lib:/var/lib \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /usr/lib/systemd:/usr/lib/systemd \
    -v /etc:/etc --label docker_bench_security \
    docker/docker-bench-security

Docker Engine

Docker Engine is the daemon that exposes an API: it listens for incoming requests and, in turn, interfaces with the underlying host kernel to accomplish its job. Docker Engine supports communication over 3 different socket types: unix, tcp, and fd.

❌ Running Docker Engine (aka the Docker daemon, aka dockerd) on tcp or any networked socket

🗒️ If anyone can reach the networked socket that Docker is listening on, they potentially have access to Docker and, since Docker needs to run as root, to the underlying host

✅ The default docker behavior today is the safest assumption, which is to listen on a unix socket

# not recommended
$ dockerd -H "tcp://1.2.3.4:8080"

# recommended
$ dockerd -H "unix:///var/run/docker.sock"
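If remote access to the daemon is genuinely required, Docker supports protecting the tcp socket with mutual TLS, so only clients presenting a certificate signed by your CA can talk to it. A minimal sketch, assuming you have already generated a CA and a server certificate/key (the ca.pem, server-cert.pem, and server-key.pem filenames are placeholders):

```
# acceptable only with mutual TLS: unauthenticated clients are refused
$ dockerd \
    --tlsverify \
    --tlscacert=ca.pem \
    --tlscert=server-cert.pem \
    --tlskey=server-key.pem \
    -H "tcp://1.2.3.4:2376"
```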

❌ Mounting the Docker socket into the container

🗒️ Mounting /var/run/docker.sock inside the container is a common, yet very dangerous practice. An attacker can execute any command that the docker service can run, which generally provides access to the whole host system as the docker service runs as root.

✅ Short of just saying "don't mount the docker socket," carefully consider the use-cases that require this giant loophole. For example, many tutorials for running a Jenkins master in a container will instruct you to mount the docker socket so that Jenkins can spin up other containers to run your tests in. This is dangerous: anyone who can execute shell commands from Jenkins can gain unauthorized access to sensitive information or secrets (e.g. API tokens, environment variables) from other containers, or launch privileged containers and mount /etc/shadow to extract all users' passwords.

# not recommended
$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock ubuntu /bin/bash
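To see why this is so dangerous, here is a sketch of what an attacker inside such a container could do. The `docker` image ships with the docker CLI, and because the mounted socket talks to the host daemon, the `-v` paths in the inner command are host paths:

```
# attacker's view from inside a container with the socket mounted
$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock docker sh
/ # docker run --rm -v /etc:/host-etc alpine cat /host-etc/shadow
# ...the host's password hashes
```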

Container Privileges

❌ Running privileged containers

🗒️ A privileged container has full access to the underlying host

✅ If needed by a container, grant it only the specific capabilities that it needs

# not recommended
$ docker run -d --privileged ubuntu

# recommended
$ docker run -d --cap-add SYS_PTRACE ubuntu
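An even tighter variation is to drop every capability first and add back only what the application actually needs (my-web-app and the chosen capability here are illustrative; profile your own workload to find the right set):

```
# recommended: start from zero capabilities
$ docker run -d --cap-drop ALL --cap-add NET_BIND_SERVICE my-web-app
```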

❌ Running containers as root users

🗒️ Running as a non-root user is a standard system-administration best-practice. There is little to no reason to run software in containers as root.

✅ Run containers as non-root users

# not recommended (runtime example)
$ docker run -d ubuntu sleep infinity
$ ps aux | grep sleep
root ... sleep infinity

# recommended (runtime example)
$ docker run -d -u 1000 ubuntu sleep infinity
$ ps aux | grep sleep
1000 ... sleep infinity

# recommended (build-time example)
FROM ubuntu:latest
USER 1000

Static Analysis

❌ Pulling and running containers from public registries

🗒️ Recently, security researchers found 17 cryptomining container images on Docker Hub

✅ Scan container images to detect and prevent containers with known vulnerabilities or malicious packages from getting deployed on your infrastructure

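There are several scanners to choose from, both commercial and open-source (e.g. Clair, Trivy, Anchore). As one illustrative example, Trivy can scan an image for known CVEs in a single command:

```
# example: scan an image with the open-source scanner Trivy
$ trivy image python:3.4-alpine
```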

✅ Sign container images

  • Docker Content Trust guarantees the integrity of the publisher and the integrity of the contents of a container image, thus establishing trust
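Enabling Docker Content Trust is a one-line change: with the environment variable set, docker refuses to pull, run, or build with unsigned tags:

```
# recommended
$ export DOCKER_CONTENT_TRUST=1
$ docker pull alpine:latest   # succeeds only if the tag is signed
```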

Runtime Security

✅ Attach seccomp, apparmor, or selinux profiles to your containers

  • Security profiles, like seccomp, apparmor, and selinux, add stronger security boundaries around the container to prevent it from making a syscall it is not explicitly allowed to make
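Both seccomp and apparmor profiles are attached with the --security-opt flag at run time (profile.json and my-apparmor-profile are placeholder names for profiles you have written and loaded on the host):

```
# recommended
$ docker run -d --security-opt seccomp=profile.json ubuntu
$ docker run -d --security-opt apparmor=my-apparmor-profile ubuntu
```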

✅ Monitor, detect, and alert on anomalous, suspicious, and malicious container behavior


✅ Consider running containers in a container runtime sandbox, like gvisor

  • Container runtime sandboxes add an even stronger security boundary around your containers at runtime
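Once gVisor is installed and its runtime (runsc) is registered with the Docker daemon, opting a container into the sandbox is a single flag:

```
# recommended
$ docker run -d --runtime=runsc ubuntu sleep infinity
```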

Conclusion

Container technology is not inherently more or less secure than traditional virtualization technologies. Containers rely on Linux primitives for security, such as UIDs, GIDs, namespace isolation, and cgroups to control CPU, memory, process counts, I/O, etc.

If you feel like I missed something, got some details wrong, or just want to say hi, please feel free to leave a comment below or reach out to me on GitHub 🐙, Twitter 🐦, or LinkedIn 🔗

Top comments (3)

François Lachèse

Thank you, Peter, for this article.

I completely agree with you about not letting any container access the docker socket.

But what is the alternative? The Jenkins example is very pertinent, and the only alternative I can think of is running a Docker-in-Docker instance.

But then, on the official Docker-in-Docker image page, you can find a link to an article by Jérôme Petazzoni in which he recommends using the socket-binding method for Jenkins over Docker-in-Docker.

I would be glad to know more details about what are your recommendations regarding Jenkins running Docker commands and security.

Alex Hanson

Two thumbs up to the note on watching what you pull from public registries. They're certainly safe more often than not, but sometimes it's better to start with the OS you know and love (and know is updated and secure), and install your own packages and specific versions of those packages. Nothing makes me cringe more than seeing a Dockerfile that starts off with a "yum update -y" at the top, grabbing the latest updates without having any idea how that's going to affect the built container, from an operational or security perspective.

Peter Benjamin (they/them)

You touch on a very important point and a common trend I see with newcomers to containers, which prompts me to reiterate key concepts:

  1. Don't treat containers the same way you would treat virtual machines.
  2. Containers should be immutable images with the absolute minimal number of dependencies needed to run a single application.

To this point, Docker published a set of best practices for writing Dockerfiles. It's worth a read once newcomers have a handle on basic docker usage.

Google is also working on a set of container images that don't even include an OS as the base image. The images only include a small set of libraries needed to run applications written in various languages. And that's it. No need for an OS or even apt or yum in the images themselves.

Check out GoogleContainerTools/Distroless and this CNCF talk