For a long time, I've been using Docker to ease the deployment of local services so I can quickly hop from project to project without having to worry about each service's software version, whether it's Node.js, PostgreSQL, Stripe, etc.
However, when it comes to production, I had never found a really good workflow for deploying my servers easily, other than relying on Swarm or Kubernetes. That changed recently when I came across Docker Contexts, and since then it has reshaped how I think about production deployment.
Before contexts
I'm a dumb human, and a simple mind.
When it comes to deploying a new project, I buy a server, copy the private key so I can SSH into it, generate a new SSH key if there wasn't one, add it to my GitHub account, and make sure my project is on GitHub.
Then, I make sure Docker & Docker Compose are installed, clone my project onto the freshly purchased server, and run docker compose commands to deploy all of the services that I need.
And if needed, I SSH back into my server to run the administration commands I need to monitor my services. If I need to update something, I go back to my computer, make the change, push to GitHub, pull from my server, and the process starts all over again.
You might have done something similar, or used a file-sharing service like transfer.sh to upload your project and then unzip it on your server. Or maybe you didn't use Docker & Docker Compose in production at all, and simply installed the programs needed to run your applications (Ruby, Python, Node.js, ...).
Ultimately it works and we are happy with that, until some time passes and we end up with a different version of PostgreSQL, or Ruby, or Stripe, and we have to spend a lot of time debugging what is going on, maybe trying different software versions in production, and in the process we might break things.
I was today years old when...
I learned about Docker Contexts. Well, not really today, but this week at least.
If you don't know what Docker Contexts are, they are a powerful concept that lets you target another environment, like a virtual machine, a cluster, or even a virtual private server, when running a command, without using SSH or FTP explicitly.
It is like SSHing into a server and running a Docker and/or Docker Compose command, except you don't have to SSH into that particular server yourself: Docker takes care of it, with some added benefits.
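Under the hood, this amounts to pointing the Docker CLI at a remote daemon over SSH. You can try the manual equivalent with the DOCKER_HOST environment variable (the address below is the placeholder used throughout this article):

```shell
# Run a one-off command against a remote Docker daemon over SSH,
# without creating a context (user@1.2.3.4 is a placeholder address).
DOCKER_HOST=ssh://user@1.2.3.4 docker ps
```

A context essentially saves this setting under a name, so you don't have to export DOCKER_HOST before every command.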
Docker with context
If you used GitHub or GitLab to store your project (which is great, and you should) just to be able to clone it on your server, you don't have to do that anymore: with a context, you can build and run images based on a project stored on your computer directly on a machine like a VPS.
First of all, you'll have to set up a context. For that, you'll need a recent version of Docker & Docker Compose.
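You can quickly check what you have installed; context support shipped with Docker 19.03, and the compose subcommand comes with recent Docker releases:

```shell
# Check the installed versions; contexts require Docker 19.03 or later.
docker --version
docker compose version

# List the context subcommands to confirm support is available.
docker context --help
```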
For instance, if you used to connect to a server over SSH with the following command:
ssh user@1.2.3.4
You can now create a context for it.
docker context create \
--docker "host=ssh://user@1.2.3.4" \
--description "My production server" \
production
That's it! We can now list our contexts by using the following command.
docker context ls
You should now see two contexts (if you have never used one before), the second being the context you just created.
The first one is the default context. In fact, every time you ran a command, you were using the default context all along! The default context is simply your own computer.
And you can easily switch contexts if you need to.
docker context use production
Now, any Docker command you run will be executed on your server instead! Docker automatically sends the command through the SSH tunnel you set up for this particular context.
docker container ps
# List containers on your server, not your computer!
You can always switch back to your default context using the same command.
docker context use default
I personally don't recommend using these commands, especially if you manage multiple contexts, since you could mistakenly run a command on a server that wasn't meant to receive it.
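If you do switch contexts, it is worth double-checking which one is active before running anything destructive. Recent Docker versions provide a command for exactly that:

```shell
# Print the name of the currently active context.
docker context show
```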
As a more verbose but safer alternative, you can use the --context or -c option to run a command against a particular context.
docker -c production container ps
That's exactly the same output! And you can even use it with Docker Compose commands.
docker -c production compose ls
Docker Compose with context
Now that we know how it works, there is nothing more to learn: you can use exactly the same commands as you did locally, but this time with a little more context (pun intended).
You can now ditch the SSH key you added to GitHub/GitLab, remove all source code from your server to reclaim some precious disk space, and deploy your applications and services right from your computer with a single command.
docker -c production compose up -d --build
That's it, well that was quick!
Of course you can do all sorts of things, and if you wish to try this out, you can use the example compose file below for inspiration.
version: "3"

name: production

services:
  nginx:
    image: nginx:1.25.4-alpine3.18
    ports:
      - 80:80

  phpmyadmin:
    image: phpmyadmin:5.2.1-apache
    ports:
      - 8000:80

  postgresql:
    image: postgres:16.2-alpine3.18
    environment:
      POSTGRES_DB: devto
      POSTGRES_USER: devto
      POSTGRES_PASSWORD: devto
    volumes:
      - postgresql:/var/lib/postgresql/data

volumes:
  postgresql:
Now we can deploy our services using the following command.
docker -c production compose up -d
We can track the state of our services using the following command.
docker -c production compose ps
And we can even execute commands in one of our services if we need to.
docker -c production compose exec postgresql psql -U devto devto
You can now add tables with the psql command-line utility, right from your computer, directly on your server.
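For instance, a table can be created in one shot with psql's -c option (the "articles" schema below is purely illustrative):

```shell
# Create an example table on the remote PostgreSQL service
# (the "articles" table is made up for this demo).
docker -c production compose exec postgresql \
  psql -U devto devto -c "CREATE TABLE articles (id SERIAL PRIMARY KEY, title TEXT NOT NULL);"
```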
No SSH into the server required anymore. And you can quickly run these services locally just to test things out, if you are on a plane or have no active connection.
docker compose up -d
Simply omit the context option, it's that easy!
No registry required
If you have already used a cluster-management solution like Swarm or Kubernetes, you'll know that you first have to push your images to a registry in order to use them on your cluster's worker nodes.
With Docker Contexts you don't: this is an all-in-one solution that lets you deploy even custom images, which are then built on the target server through the context.
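For comparison, the registry-based workflow usually looks something like this (registry.example.com and the image name are placeholders):

```shell
# The traditional cluster workflow: build locally, push to a registry,
# then have the worker nodes pull the image from there.
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0
```

With a context, the push/pull round-trip disappears entirely: the build context is sent over SSH and the image is built where it runs.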
Take this Dockerfile, for instance.
FROM php:8.0.0-alpine
RUN addgroup -g 1000 -S php
RUN adduser -g "" -s /bin/sh -h /home/php -G php -SDu 1000 php
USER php
WORKDIR /home/php
COPY ./index.php .
CMD [ "php", "-S", "0.0.0.0:8000" ]
With the following index.php
<?php
phpinfo();
We can now use it in a compose.yml file.
version: "3"

name: production

services:
  server:
    restart: unless-stopped
    build: .
    ports:
      - 8000:8000
And we can now deploy this bad boy on our freshly bought VPS.
docker -c production compose build # Build on the server!
docker -c production compose up -d # Run the service on our server
And the great thing is that it takes the files on our computer (the index.php, Dockerfile and compose.yml) and builds everything on the server instead of our computer, since we added the context option.
No need to set up and upload images to a registry anymore; it doesn't get easier than that!
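If you want to check the result, a quick request against the server's public address should return the phpinfo() page (1.2.3.4 being the placeholder IP used throughout this article):

```shell
# Verify the deployed PHP service responds
# (assumes port 8000 is reachable on the server).
curl http://1.2.3.4:8000
```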
Conclusion
Since I discovered contexts, I've seen a great increase in productivity in my workflow.
I'm now able to iterate faster when it comes to deploying multiple versions of my applications, sometimes several times a day without worrying about breaking something in production.
And I get to use the tools I already use locally, Docker & Docker Compose, in an easy way. This is what I meant in the title by the word "isomorphic".
The only thing that saddens me is that I didn't hear about it sooner, and I can already see things that would have been a lot easier if I had used this in production.
Limitations
There are no real limitations here: you can use every command available in Docker & Docker Compose to quickly deploy your services.
The obvious caveat is that it is not as automated as Docker Swarm or Kubernetes, in the sense that you still have to provision your machines manually and add the correct SSH key to your context before running these commands.
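That provisioning step can itself be scripted. A minimal sketch, assuming a fresh VPS at the placeholder address used earlier and Docker's official convenience script:

```shell
# Copy your public key to the new server so the context can connect
# (user@1.2.3.4 is the placeholder address from this article).
ssh-copy-id user@1.2.3.4

# Install Docker on the server using Docker's convenience script.
ssh user@1.2.3.4 "curl -fsSL https://get.docker.com | sh"

# Create the context pointing at the freshly provisioned machine.
docker context create --docker "host=ssh://user@1.2.3.4" production
```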
And obviously, compared to Swarm/Kubernetes, you don't get a cluster of worker nodes to distribute the load across. But for applications with low to moderate traffic, this can make deploying and managing your applications vastly easier than the traditional clone/pull/start workflow.
See more limitations? Let me know in the comment section below, and happy hacking!