
Dockerless, part 3: Moving development environment to containers with Podman

In the introductory article of this series I wrote that one of the disadvantages of Podman and Buildah is that the technology is still pretty new and moves fast. This final article appeared with quite a delay because, from Podman 1.3.1 until 1.4.1, one of the key features we will look at here was broken.

Luckily, Podman 1.4.1 and above not only fixes the features that were broken for a few weeks, but also finally covers them with tests. Hopefully there will be no such dramatic loss of functionality in future releases. My original warning still applies, though: the new container toolchain is young and sometimes unstable. Keep that in mind.

Disclaimer #1: Depending on when you are reading this article, my warning might not apply. It's the state of things as of June 2019. If you are reading this at the end of 2019 or in 2020, chances are that Podman is mature and stable enough that you don't need to worry about broken features between minor versions.

Disclaimer #2: I will briefly mention how Podman works, but I won't go into details. If you are an infrastructure engineer or just curious, then follow all the links I put in the article to learn more. If you are a developer who doesn't care too much about internals, then skip them, as it might take you quite some time to dive into this topic without immediate benefit for your daily work.

What is Podman and does it work?

Podman is a replacement for Docker for local development of containerized applications. Podman commands map 1 to 1 to Docker commands, including their arguments. You could alias docker to podman and never notice that there is a completely different tool managing your local containers.
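For example, a minimal sketch of what that looks like in practice (the alpine image is used purely for illustration):

# Point the familiar command at Podman
alias docker=podman

# Everything below is plain Docker muscle memory, now served by Podman
docker pull docker.io/library/alpine:latest
docker run --rm alpine:latest echo "hello from a daemonless container"
docker ps -a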

One of the core features of Podman is its focus on security. There is no daemon involved in using Podman. Instead, it uses the traditional fork-exec model and heavily relies on user namespaces and network namespaces. As a result, Podman is a bit more isolated and, in general, more secure to use than Docker. You can even be root in a container without granting the container or Podman any root privileges on the host -- and the user in the container won't be able to perform any root-level tasks on the host machine.

A good example of how Podman's model can lead to better security is covered in the article Podman: A more secure way to run containers. If you want to learn more about how Podman leverages Linux namespaces, start with the article Podman and user namespaces: A marriage made in heaven. Finally, if you want to read about the possible obstacles of this approach, read The shortcomings of rootless containers.

For most users, the internals of Podman should not matter too much in day-to-day use. What does matter is that Podman provides the same developer experience as Docker while doing things in a slightly more secure way in the background. Let's see if that's true.

Local development environment of mkdev.me

The main web application behind mkdev.me is written in Ruby on Rails. To be able to run this application locally, a developer needs:

  1. PostgreSQL server;

  2. Redis server;

  3. Mattermost instance (for our chat solution);

  4. Mattermost test instance (to be used during automated tests).

In total that's 5 services to run locally (including the web application itself). One can imagine that installing and configuring all of it by hand can take a new developer quite some time. And once it's done, there is no guarantee that the resulting local environment is close to the production one: the developer could install different PostgreSQL or Mattermost versions that were not yet tested to work with mkdev.

Wouldn't it be great to bootstrap the complete development environment with one command and get a production-like setup running in seconds? That's what Docker and Docker Compose provided developers with. That's what Podman can provide as well.

Podman's pods and what they are good for

On top of regular containers, Podman has pods. If you have ever heard of Kubernetes, this concept is familiar to you: in the Kubernetes world, a pod is the smallest deployable unit that consists of one or more containers. Podman's pods are exactly the same. All containers inside a pod share the same network namespace, so they can easily talk to each other over localhost without the need to expose any extra ports.

There are 3 possible use cases for pods.

1. Prepare your application for running on Kubernetes/Openshift

You could use pods in Podman as a preparation step before moving your application to Kubernetes. In many cases, for real-world web applications, you will probably be better off using minikube, which guarantees the same APIs and functionality Kubernetes has. You would want to have Deployments, Services and other resources that will be a vital part of your setup in production, and just having a way to simulate pods with Podman won't be of much benefit for this.

2. Run your application with Podman in production

You could decide that full container orchestration is overkill for you (and that would be a very good decision in many cases). It would still make sense to use containers for packaging and delivering your application, and in certain cases you could benefit from running not just one container, but multiple ones inside a pod on your production server. The question is what exactly the benefit of putting your containers inside a pod is versus just running them as separate systemd-managed services. I don't have a good answer here, but the feature is there and someone might find a use case for it in production.

3. Simplifying your development environment

The final and most attractive reason for developers to use Podman pods is to automate the development environment. In this case you would run all the services your application depends on inside the same pod. This is absolutely not something you should ever do in a production environment on a real Kubernetes cluster, as your services should run in different pods behind different replication controllers and service endpoints. But for local development, doing it this way is convenient.

Podman pods and Kubernetes pods

Before we move to some real examples, we need to learn about one pod-related feature of Podman: play kube. Podman doesn't have a replacement for Docker Compose. There is a third-party tool, podman-compose, that might bring this functionality, but we at mkdev haven't gotten around to testing it yet.

Instead of Docker Compose, Podman has pods and a way to run them from a YAML definition. This YAML definition is compatible with Kubernetes pod YAML, meaning that you can take it, load it into your Kubernetes cluster and expect some pods to be running.

Out of scope: Supporting docker-compose. We believe that Kubernetes is the defacto standard for composing Pods and for orchestrating containers, making Kubernetes YAML a defacto standard file format. - Podman documentation

Now let's do a small example.

Basic usage of Podman

We first need to create a new pod that exposes ports 5432 (for PostgreSQL) and 9187 (for the Prometheus exporter we will add later):

podman pod create --name postgresql -p 5432 -p 9187

We can see running pods with the podman pod ps command:

POD ID         NAME         STATUS    CREATED          # OF CONTAINERS   INFRA ID
235164dd4137   postgresql   Created   26 seconds ago   1                 229b2a70b8c4

When you create a new pod, Podman automatically starts an infra container, which you can see by running podman ps.
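If the infra container doesn't show up in plain podman ps output yet (the pod has only been created, not started), listing all containers together with their pod should reveal it -- assuming your Podman version supports the --pod flag:

podman ps -a --pod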

Let's start a PostgreSQL container inside this pod:

podman run -d --pod postgresql -e POSTGRES_PASSWORD=password postgres:latest

If you don't have the postgres:latest image yet, Podman will pull it automatically from Docker Hub -- the same experience you would have with the Docker CLI.

Let's start another container inside the postgresql pod, this time with a PostgreSQL Prometheus exporter:

podman run -d --pod postgresql -e DATA_SOURCE_NAME="postgresql://postgres:password@localhost:5432/postgres?sslmode=disable" wrouesnel/postgres_exporter

We can see the top processes inside the pod with the podman pod top postgresql command. And we can access PostgreSQL metrics if we curl localhost:9187/metrics.

If we want to re-create the same setup without running imperative shell commands, and store it as declarative code instead, we can run podman generate kube postgresql > postgresql.yaml, which will produce a Kubernetes-compatible pod definition. If you examine the resulting YAML file, you will see that Podman correctly configured all the ports and even exported all the environment variables, which you can clean up if you want to rely on the image defaults.

Remove the pod with podman pod rm postgresql -f. Then, instead of running all of the commands again, simply run podman play kube postgresql.yaml to get the same result. You could also run kubectl apply -f postgresql.yaml and get this PostgreSQL setup running on your Kubernetes cluster.

Warning: if you happen to use Podman 1.4.2, then at this point you will hit a bug described in this GitHub issue. Let's hope that by the time you read this, the issue is fixed. If it's not, then follow the steps from the issue description to fix your YAML, or simply copy the contents of my gist, which already contains a fixed definition.

The YAML file generated by Podman should not be used as is, because Podman dumps all environment variables, the securityContext and other things that you could live without in your development environment and that might have better defaults in your Kubernetes cluster. Consider it convenient scaffolding, not a final result.
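To make that concrete, here is a hand-written, trimmed-down sketch of what a cleaned-up definition for the example above could look like. It is not the exact file Podman generates; the host ports and fully qualified image names are assumptions based on the commands we ran earlier:

# Write a minimal pod definition and run it with play kube
cat > postgresql.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: postgresql
spec:
  containers:
  - name: postgres
    image: docker.io/library/postgres:latest
    env:
    - name: POSTGRES_PASSWORD
      value: password
    ports:
    - containerPort: 5432
      hostPort: 5432
  - name: exporter
    image: docker.io/wrouesnel/postgres_exporter:latest
    env:
    - name: DATA_SOURCE_NAME
      value: postgresql://postgres:password@localhost:5432/postgres?sslmode=disable
    ports:
    - containerPort: 9187
      hostPort: 9187
EOF

podman play kube postgresql.yaml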

Using Podman in a real Ruby on Rails application

At mkdev we completely automated our development environment with Podman. New developers (assuming they have a Linux machine) can run a single script, ./script/bootstrap.sh, to get the application running. The script itself looks like this:

#!/bin/bash
set +e
if [ "$(podman pod ps | grep mkdev-dev | wc -l)" == "0" ] ; then
  echo "> > > Starting PostgreSQL, Redis and Mattermost"
  podman play kube pod.yaml
else
  echo "Development pod is already running. Re-create it? Y/N"
  read -r input
  if [ "$input" == "Y" ] ; then
    podman pod rm mkdev-dev -f
    podman play kube pod.yaml
  else
    echo "Leaving bootstrap process."
    exit 0
  fi
fi
echo "> > > Waiting for PostgreSQL to start"
until podman exec postgres psql -U postgres -c '\list'
do
  echo "> > > > > > PostgreSQL is not ready yet"
  sleep 1
done
podman exec -u postgres postgres psql -U postgres -d template1 -c 'create extension hstore;'
echo "> > > Creating development IM database"
until podman exec -u postgres postgres createdb mattermost; do sleep 1; done
echo "> > > Creating test IM database"
until podman exec -u postgres postgres createdb mattermost_test; do sleep 1; done
echo "> > > Creating and seeding the database"
./script/setup.sh
./script/exec.sh 'bundle exec rails db:create db:migrate db:test:prepare'
./script/seed.sh
echo "> > > Attempting to start the app"
./script/run.sh

We rely on the play kube feature to create all of the required services, the way you would normally run docker-compose up. Our pod.yaml defines 4 containers -- PostgreSQL, development and test Mattermost instances, and a Redis server. We run a dedicated test instance of Mattermost because we need to reset its database after integration tests, and we don't want to reset the development instance, as that would certainly be unproductive.

Instead of running rails and rake commands directly, we hide them inside the script folder, in a setup similar to the one you might remember from GitHub's Scripts to Rule Them All article about how they organize these kinds of scripts internally.

script/bootstrap.sh invokes a number of other scripts that seed the database, download some dependencies and trigger database migrations. One script that developers find useful is script/exec.sh:

#!/bin/bash
set -e
echo "Running command in new container ..."
podman run --pod mkdev-dev -it --rm -v "$(pwd)":/app:Z docker.io/mkdevme/app:dev $1

It runs a command in a new application container and then removes this container. This is very useful for running one-off commands like database migrations or rake tasks.
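A couple of typical invocations, in the spirit of the call bootstrap.sh makes above:

# Run a one-off database task in a throwaway container inside the pod
./script/exec.sh 'bundle exec rails db:migrate'

# Or open an interactive shell for debugging
./script/exec.sh bash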

script/run.sh simply starts the application container if it's not already running, and then spins up a Rails server inside:

#!/bin/bash
set -e
if [ "$(podman ps | grep app | grep mkdevme | wc -l)" == "0" ] ; then
  echo "> > > Starting new application container"
  podman run --pod mkdev-dev --name app -v "$(pwd)":/app:Z -d docker.io/mkdevme/app:dev tail -f /app/log/development.log
fi
n=0
until [ $n -ge 5 ]
do
  podman exec app /entrypoint.sh bundle exec rails s -b '0.0.0.0' -P /tmp/mkdev.pid
  n=$((n+1))
  echo "Not all components are up. Sleeping for 10 seconds."
  sleep 10
done

Note that the command executed inside the application container is just a tail -f, which results in a never-dying container. It's done this way mostly to allow developers to quickly enter the container and debug something inside it in case Rails refuses to boot with a new error.

You might not fancy the number of bash scripts we had to write. It is definitely not as nice as a single docker-compose.yaml file. It is not too bad, though: these scripts need to be written only once and are not overly complicated. In the end, it's more a matter of taste than a real technical drawback.

With this set of handy scripts we cover most development tasks, like restarting the server, executing arbitrary commands and so on. There are certain things to improve, as there always are, but in general we are pretty happy with the result. Developers have identical environments, with the same dependency versions, the same Ruby version, the same everything, and they can (re-)create the whole local setup in seconds. These are the same benefits you would get from Docker, but in this case without Docker at all.

Dockerless: is it worth it?

And that concludes this series. Hopefully you've learned something new about container standards and new tools in the container world. One question you might still ask: was it worth it? I sure asked this question myself. It would certainly be easier to apply good old Docker skills and practices and we would probably end up with the same result, but faster.

But the point is that in the end we did end up with the same result. Container images are being built, containers are being used in development and test environments, and we have the same benefits as with Docker. We didn't have to compromise much on features, even though we certainly struggled in the beginning with certain bugs in Podman. Even as I was writing this article I discovered yet another bug in Podman!

Now that mkdev has a working solution with Podman and Buildah, we will likely stick with it. There are ideas floating around for deploying our application in a Podman-spawned container managed as a systemd service (we are not at a scale that gives us any reason to introduce a container orchestration tool to the stack). The pod.yaml we have could be used to deploy review apps for new pull requests, which would improve our testing process even more. And there are more features in Podman with every release.
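As a rough sketch of that idea (the unit name and container name are hypothetical, and newer Podman releases can also generate similar units for you), a systemd service wrapping an existing container named app could look something like this:

# Hypothetical unit wrapping an already-created 'app' container
cat > /etc/systemd/system/mkdev-app.service <<'EOF'
[Unit]
Description=mkdev application container
Wants=network-online.target
After=network-online.target

[Service]
Restart=on-failure
ExecStart=/usr/bin/podman start -a app
ExecStop=/usr/bin/podman stop -t 10 app

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now mkdev-app.service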

As I already did in the first article of this series, I encourage you to learn more about what's happening in the container landscape. There are new things to learn and to try and there are some very good articles to read that I've linked in all 3 parts of this series.

Looking forward to your comments and proposals for new container-related articles. What would you like to know about them in depth?


This is an mkdev article written by Kirill Shirinkin. You can hire our DevOps mentors any time you like.
