Chen

Originally published at devopsian.net

How to run your own docker registry with password, SSL and S3 backend

I remember the day I started using Docker. The simplicity was striking. I went over the official docs, which are very good, and looked up some 101 tutorials to get started. I had successfully built my first image. Now that I had one,
I wanted to deploy it to my servers. The first option was Docker Hub, but a free account requires your images to be public. If you need to keep them private, the service costs money.

Or, you can just run your own Docker Registry and have the flexibility to:

  • Password protect it
  • Store the images locally or in the cloud (S3/Azure/Google Cloud)
  • Use your self-signed SSL certificates

"The Registry is a stateless, highly scalable server side application that stores and lets you distribute Docker images."

The official Docker documentation covers the subject in depth. You should read it to get a grasp of what it is and how to run it. In this post, I address the issues I encountered when I deployed our registry server. I'll explain how to implement the three items mentioned above. For the rest of this post, I assume you have read the Registry Overview and Understanding Docker Registry at the official site (~3 min).

Get the registry running

The registry app itself runs in a container. It has an optional front-end component that lets you browse the registry data from a browser. Each component runs inside its own container, and together they form the app stack. An app stack is a group of apps (a stack) linked together to perform certain tasks. We can use docker-compose to run the app stack instead of running each container separately.

Here's the docker-compose.yml file we start with:

version: '2'

services:
  registry:
    image: registry:2
    restart: always
    ports:
      - "5000:5000"
    environment:
      REGISTRY_STORAGE_DELETE_ENABLED: 'true'
      REGISTRY_HTTP_ADDR: 0.0.0.0:5000
  registry-frontend:
    image: konradkleine/docker-registry-frontend:v2
    restart: always
    environment:
      ENV_DOCKER_REGISTRY_HOST: 'registry'
      ENV_DOCKER_REGISTRY_PORT: 5000
    links:
      - registry
    ports:
      - "8080:80"
    expose:
      - 80

Let's briefly break this down, as it's just the plain configuration (without password, S3 or SSL).

The registry and registry-frontend are the container names in my stack. You can name them any way you like. The environment variable REGISTRY_STORAGE_DELETE_ENABLED allows you to delete images from the registry, and REGISTRY_HTTP_ADDR sets the address the registry listens on.

The registry-frontend container uses ENV_DOCKER_REGISTRY_HOST and ENV_DOCKER_REGISTRY_PORT as the address it connects to. Because we use docker-compose, the registry is reachable by its service name. The links configuration links the container to another service, in our case to the registry.

Don't you wonder what the difference is between the ports and expose settings? ports makes the listed ports accessible to the host as well as to other services in the compose file, whereas expose exposes the ports only to other services, without publishing them to the host machine. Only the internal (container) port can be specified in that case.

ports also maps an external port (defined on the host) to a port inside the container. In the case of the registry container, port 5000 is accessible from everywhere. On the other hand, registry-frontend exposes port 80 only to the other services defined in the file, while access from outside goes through port 8080.

To run the services, we use docker-compose up inside the directory where the docker-compose.yml file resides.
You can validate that the services are running with docker ps. If the containers are up, you can now browse to http://localhost:8080 to see your registry data.
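
For example (a minimal sketch, assuming the compose file above lives in the current directory):

# start the stack in the background
docker-compose up -d

# verify both containers are up
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'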

Passwords time

As part of running my private registry, I would like to add some measure of security: I want to password protect it.
Once it is set, clients need to provide a password when they log in to the registry for the first time.
This is very simple, and there is a great article covering it. Check "Private Docker Registry Part 2: let’s add basic authentication" on Medium and configure it if you need it too.
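
For reference, here is a minimal sketch of the registry's built-in htpasswd authentication; the user name admin, the password secret, and the ./auth path are placeholders of my own, not from the article above:

# create a bcrypt-hashed htpasswd file (the registry only accepts bcrypt)
mkdir -p auth
docker run --rm --entrypoint htpasswd httpd:2 -Bbn admin secret > auth/htpasswd

and, under the registry service in docker-compose.yml:

    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: basic-realm
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
    volumes:
      - ./auth:/auth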


Private SSL certificates

Security is always an issue. It's something we tend to forget when we're in the middle of a POC. We want to make things work first, and fast. Security requires additional steps, so we tend to leave it as the last task.

My advice is: whenever you can use SSL, use it. It's worth the extra time you put into it and has no drawbacks.
The docker client uses HTTPS by default, so as I see it we have three options here:

  • Make the client use HTTP instead of HTTPS
  • Configure the client to trust our registry with insecure-registries parameter
  • Add our CA certificate to the trusted ones, either in the docker engine or in the OS

I find the third option the best solution for me. I'll use my own certificates for my server.
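
If you still need to create one, a minimal self-signed sketch with openssl could look like this (the file names match the configuration below, gnosis.example.com is the fake domain used later in this post, and -addext requires OpenSSL 1.1.1 or newer):

openssl req -x509 -newkey rsa:4096 -nodes -sha256 -days 365 \
  -keyout registry_gnosis_key.pem -out registry_gnosis.crt \
  -subj "/CN=gnosis.example.com" \
  -addext "subjectAltName=DNS:gnosis.example.com"

With a plain self-signed certificate like this, the .crt file itself is the CA certificate your clients will need to trust.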

However you generated them, I assume you now have the certificate, key, and CA certificate for your server (in my case registry_gnosis.crt and registry_gnosis_key.pem).
Put the public and private keys in /var/lib/docker/certs on the registry host. For the registry to use them, we add the following to the registry service: the TLS variables under environment, plus a volumes mapping:

      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/registry_gnosis.crt
      REGISTRY_HTTP_TLS_KEY: /certs/registry_gnosis_key.pem
    volumes:
      - /var/lib/docker/certs:/certs

OK, our server side now uses our certs. Great! But we are not done yet.
In order for clients to connect to the registry, their workstations need to trust our CA; otherwise, the connection will fail. If your client machines already trust your CA certificate, you're done. Otherwise, you need to copy the CA certificate over to each machine. You can place it in /etc/docker/certs.d/<registry-host>:<port>/ca.crt and docker will trust it for that registry automatically, or, for the OS to trust it (on Ubuntu; it may vary on other systems), you need to:

  1. Copy it to /usr/local/share/ca-certificates/
  2. sudo update-ca-certificates

Make sure to restart the docker service on the client after you do that.
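
Put together, the client-side steps might look like this on Ubuntu (assuming a self-signed certificate as sketched above, so registry_gnosis.crt doubles as the CA file, and the registry is reachable at gnosis.example.com:5000, the fake domain used later in this post):

# let docker trust the CA for this registry only
sudo mkdir -p /etc/docker/certs.d/gnosis.example.com:5000
sudo cp registry_gnosis.crt /etc/docker/certs.d/gnosis.example.com:5000/ca.crt

# or let the whole OS trust it
sudo cp registry_gnosis.crt /usr/local/share/ca-certificates/registry_gnosis.crt
sudo update-ca-certificates

# restart docker so it picks up the change
sudo systemctl restart docker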

Now that our registry has basic authentication and SSL encryption support, let's continue our journey to the final configuration step. I'll show how to connect to the registry after I explain the use of S3 as a backend.

S3 storage backend

The registry serves our images; we can push images to it and pull images from it. These images must reside on a disk somewhere. By default, they reside on the docker host running the registry. You can explicitly mount a different volume, but you can do something much cooler than that: I used AWS S3 as my storage backend. This means every image I upload to the registry is saved in a dedicated bucket on S3.

This has the advantage of:

  1. I don't need to worry about backups.
  2. My storage is practically unlimited.
  3. Disaster recovery: if the machine my registry runs on burns down or something, I can deploy a new service and connect it to the S3 bucket. All data is preserved.

Re-read item 3. This is priceless. I can start another registry container anywhere in the world; once I configure the S3 storage in the docker-compose.yml file, I regain access to all my images.

To configure the storage, we need this snippet in the environment section of the registry service:

      REGISTRY_STORAGE: s3
      REGISTRY_STORAGE_S3_ACCESSKEY: <api access key>
      REGISTRY_STORAGE_S3_SECRETKEY: <api secret>
      REGISTRY_STORAGE_S3_BUCKET: <bucket name>
      REGISTRY_STORAGE_S3_REGION: <region>
      REGISTRY_HEALTH_STORAGEDRIVER_ENABLED: 'false'

It's pretty straightforward, except the last one. REGISTRY_HEALTH_STORAGEDRIVER_ENABLED is important.
Before I added it, I ran into problems. I don't recall the exact error, but after googling for some time, I found out that if you run the registry against an empty bucket (mine was), this health check fails and the service fails to start. So I had to disable it, and things worked.
After you upload something, you may enable the check again. I decided I don't really care about it, so I left it disabled.

Client connection

In order to use my new registry, I need to connect the client so it can pull images from it and push images to it. Remember that our registry works with SSL, and so does the docker client by default, so the client needs to trust my CA, as explained above.

To connect, execute docker login -u <user> <url:port> (without the https prefix).
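
For example, with the fake domain used later in this post and the example admin user:

docker login -u admin gnosis.example.com:5000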

Now, on my Ubuntu machine I encountered another error when doing this:
"Error saving credentials: error storing credentials - err: exit status 1, out: Cannot autolaunch D-Bus without X11 $DISPLAY"

To solve it, I needed to remove a package from the OS: apt remove golang-docker-credential-helpers
made it work. After you log in to the registry, the credentials are by default kept in a file inside a hidden directory, ~/.docker/config.json. Obviously this isn't best practice; you can find better alternatives here.
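
For illustration, a sketch of what that file looks like; the auth value is just base64 of user:password (here the placeholder admin:secret), not encrypted:

{
  "auths": {
    "gnosis.example.com:5000": {
      "auth": "YWRtaW46c2VjcmV0"
    }
  }
}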

Push and Pull

To pull or push images, we simply refer to our registry by its address: docker pull our-registry.com:<port>/image.
You can also use the IP address if you don't own a domain.
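
A typical round trip might look like this (a sketch using the fake domain and the sample_image name that shows up in the API output below):

# tag a local image with the registry address, then push it
docker tag sample_image:latest gnosis.example.com:5000/sample_image:latest
docker push gnosis.example.com:5000/sample_image:latest

# pull it from any machine that trusts the CA and is logged in
docker pull gnosis.example.com:5000/sample_image:latest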

API to check registry contents

I will use the fake domain gnosis.example.com for my registry server, with the registry on port 5000 and the front-end on port 8080.
Once I had the containers running, I could browse to the front-end at http://gnosis.example.com:8080 and see the contents of the registry: the repositories, images and tags. If you decided not to use the front-end, or you need to access the registry from within an app, you can use its API.

I'll show some useful examples:

# list of the repositories
chen@gns:~$ curl -ksS -u admin https://gnosis.example.com:5000/v2/_catalog
Enter host password for user 'admin':
{"repositories":["sample_image","nginx"]}

# list of tags for an image
chen@gns:~$ curl -ksS -u admin https://gnosis.example.com:5000/v2/sample_image/tags/list
Enter host password for user 'admin':
{"name":"sample_image","tags":["0.1", "0.2", "latest"]}

Summary

I went through the process I followed to deploy my registry. I know I didn't cover it exhaustively, but that wasn't the purpose of this post. If you need a private registry, it is very easy to set one up, and there are many tutorials on how to do it.
I shared the issues I faced when deploying a registry and the configuration I applied.

Applying a password and SSL as security measures is important; don't forget that. If you can use a cloud provider to store your images, it can save you a lot of headache. But it's not the end of the world if you can't; you just need to take care of the items I listed.

Here's what the completed docker-compose.yml looks like:

version: '2'

services:
  registry:
    image: registry:2
    restart: always
    ports:
      - "5000:5000"
    environment:
      REGISTRY_STORAGE_DELETE_ENABLED: 'true'
      REGISTRY_HTTP_ADDR: 0.0.0.0:5000
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/registry_gnosis.crt
      REGISTRY_HTTP_TLS_KEY: /certs/registry_gnosis_key.pem
      REGISTRY_STORAGE: s3
      REGISTRY_STORAGE_S3_ACCESSKEY: <api access key>
      REGISTRY_STORAGE_S3_SECRETKEY: <api secret>
      REGISTRY_STORAGE_S3_BUCKET: <bucket name>
      REGISTRY_STORAGE_S3_REGION: <region>
      REGISTRY_HEALTH_STORAGEDRIVER_ENABLED: 'false'
    volumes:
      - /var/lib/docker/certs:/certs
  registry-frontend:
    image: konradkleine/docker-registry-frontend:v2
    restart: always
    environment:
      ENV_DOCKER_REGISTRY_HOST: 'registry'
      ENV_DOCKER_REGISTRY_PORT: 5000
    links:
      - registry
    ports:
      - "8080:80"
    expose:
      - 80

Top comments (5)

Bogdan Covrig

That's insanely useful, thank you!

Does Docker allow pulling images from an unsecured registry? I mean, the advantages of using a certificate are obvious, but for getting started or for testing, is it possible not to use one?

Chen

Yes, you can do that, with the limitation of not having basic auth.

You need to update your /etc/docker/daemon.json config file on the client:

{
  "insecure-registries" : ["myregistrydomain.com:5000"]
}

Check out the steps at docs.docker.com/registry/insecure/

Mihail Malo

Why don't you put it on a domain and use a publicly accepted certificate?
I can't find a convenient way to distribute my own certificates.

Chen

We have a non-public domain that is signed by us and used for in-house services only.

Have you checked Ansible as an option for distributing your private certificates?

Mihail Malo

No, I'll have a look next time.