
Mario Carrion

Originally published at mariocarrion.com

Building Microservices in Go: Containerization using Docker

In the past I covered Docker from different points of view, including creating small Docker images, building private Go packages with GitLab CI, and using it for integration testing.

This time I will briefly discuss a few cloud providers (and their services) that support Docker images, the multi-stage builds Docker introduced a few years ago for producing small images, and how to use docker-compose for local development.

What is Docker?

From the official site:

...is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and deploy it as one package.

Containers are created by instantiating Docker images, and those images are distributed to users through Docker registries. The most popular registries for hosting Docker images at the moment include Docker Hub and the container registries offered by the major cloud providers.

You still have the option to run a registry on-premises if needed. For orchestrating those Docker containers, some of the most popular cloud-based services available at the moment are Amazon Elastic Container Service (ECS) and Google Kubernetes Engine (GKE).

How to use multi-stage builds?

The code used for this post is available on GitHub.

The key to multi-stage builds is to name the stages you need with AS and then refer to those names in subsequent stages. Please refer to the original Dockerfile for more details.

For example:

FROM golang:1.16.2-alpine3.13 AS builder

# ... commands are called here ...

FROM alpine:3.13 AS certificates

# ... more commands are called here ...

FROM scratch

# Refer back to the previous stages to copy the artifacts they created into the final image.

COPY --from=certificates /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt

COPY --from=builder /build/rest-server ./bin/rest-server

In this case we define three stages, two of them named:

  1. builder: used for building the binary,
  2. certificates: used for installing the required CA certificates, and
  3. the final stage, which copies only the files it needs from the other two to produce the image.

With that we generate the smallest Docker image possible for our use case.
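For reference, here is a minimal sketch of what the first two stages could contain; the working directory, module layout, and build flags are assumptions for illustration and are not taken from the original Dockerfile:

FROM golang:1.16.2-alpine3.13 AS builder

WORKDIR /build

# Cache module downloads separately from the rest of the source.
COPY go.mod go.sum ./
RUN go mod download

COPY . .

# Build a statically linked binary so it can run on the scratch image.
# The ./cmd/rest-server package path is an assumption for this sketch.
RUN CGO_ENABLED=0 go build -o rest-server ./cmd/rest-server

FROM alpine:3.13 AS certificates

# Install the CA certificates that the final stage copies from this image.
RUN apk add --no-cache ca-certificates

Building under /build matters here because the final stage copies /build/rest-server from the builder stage, as shown above.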

Using Docker Compose for local development

When developing locally we can either run the needed services manually or use something like docker-compose; this really comes down to personal preference. I like running the Docker containers manually instead of depending on docker-compose because, to date, there is no way to declare a dependency on a service and wait until it is completely initialized without adding a third-party program to handle that.

To put it another way: some containers take longer to initialize, and a service that depends on them will start anyway and fail because its dependencies are not yet ready.

To handle that limitation when using docker-compose we can re-run the containers that fail after their dependencies are up. Using our To-Do Microservice docker-compose.yml file as a concrete example (see the sketch after this list), we do the following:

  1. Execute docker-compose up; this starts all the required services with their corresponding containers. We expect the api service to fail because the postgres service takes longer to start.
  2. If we then run docker-compose up api, the api service starts successfully, but we won't be able to interact with it because the database schema is not up to date; for that we need to run the database migrations.
  3. Running docker-compose run api migrate -path /api/migrations/ -database postgres://user:password@postgres:5432/dbname?sslmode=disable up migrates the database to the new version, and with that we finally have everything working correctly.
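For context, here is a minimal sketch of what the relevant part of such a docker-compose.yml could look like; the image tags, exposed ports, and build context are assumptions for illustration, not values copied from the repository (only the api and postgres service names and the database credentials come from the steps above):

version: "3.7"

services:
  postgres:
    image: postgres:13-alpine        # assumed tag
    environment:
      POSTGRES_USER: user            # matches the credentials in the migrate command above
      POSTGRES_PASSWORD: password
      POSTGRES_DB: dbname
    ports:
      - "5432:5432"

  api:
    build: .                         # assumed to use the multi-stage Dockerfile described earlier
    depends_on:
      - postgres                     # controls start order only, not readiness
    ports:
      - "8080:8080"                  # assumed port for the REST server

Note that depends_on only controls start order; it does not wait for postgres to be ready, which is exactly why the re-run steps above are needed.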

Conclusion

Using containers is not something new, but Docker popularized them and made it easier for everybody to create and use them. Over the years cloud service providers, like Amazon and Google Cloud, started supporting those artifacts and integrated them into their services; they even implemented their own versions of popular container orchestration tools, like Amazon Elastic Container Service (ECS) and Google Kubernetes Engine (GKE).

Using Docker and containers is not a requirement for Microservices, but using them together with orchestration tools can lead to better use of the infrastructure resources needed to keep those Microservices running.
