Nicholas Rempel

Originally published at nrempel.com

A Short Guide to a Super Productive Docker Development Environment

If you’ve ever worked on a large piece of software, I’m sure you’ve endured the pain of setting up a complex development environment. Installing and configuring a database, message broker, web server, worker processes, local SMTP server (and who knows what else!) is time consuming for every developer starting on a project. This guide will show you how to set up a Docker development environment that lets you, and any new developer on the project, get up and running in minutes with even the most complex system.

In this guide, we’ll be using Docker Community Edition and Docker Compose. You may want to read up on these tools a bit before proceeding.

The code for the guide is available here.

Installing Docker

Grab Docker for your operating system here. Docker is available for all modern operating systems. For most users, this will also include Docker Compose. Once installed, keep Docker running in the background to use Docker commands!
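To confirm the installation worked, check that both tools are available from your terminal:

docker --version
docker-compose --version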

Dockerfile Example

Your Dockerfile is the blueprint for your container. You’ll want to use your Dockerfile to create your desired environment. This includes installing any language runtimes you might need and installing any dependencies your project relies on. Luckily, most languages have a base image that you can inherit from. We'll dig into this further with the Dockerfile example below.

Your Dockerfile doesn’t need to include any instructions for installing a database, cache server, or other tools. Each container should be built around a single process. Other processes would normally be defined in other Dockerfiles, but you don’t even need to worry about that; in this example, we use three ready-made images for our databases and message broker.

Dockerfile

# Inherit from the node base image
FROM node

# This is an alternative to mounting our source code as a volume.
# ADD . /app

# Add the Yarn package repository
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list

# Install OS dependencies (-y answers yes to prompts,
# which is required for non-interactive builds)
RUN apt-get update
RUN apt-get install -y yarn

# Install Node dependencies.
# We copy only the dependency manifest; the source tree itself is
# mounted as a volume at runtime (see docker-compose.yml). Installing
# at the image root keeps node_modules outside the mounted /app
# directory, where Node's module resolution can still find it.
COPY package.json .
RUN yarn install

The Dockerfile above does a couple of things: first, we inherit from the node base image. This means that it includes the instructions from that image’s Dockerfile (including those of whatever base image it inherits from). Second, I install the Yarn package manager since I prefer it over the default Node.js package manager. Note that while my preferred language here is Node.js, this guide is language-independent. Set up your container for whatever language runtime you prefer to work in.

Give it a try: run docker-compose build and see what happens.
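If the build succeeds, the freshly built image will show up in your local image list:

docker-compose build
docker images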

Docker Compose Example

A few sections ago, I mentioned Docker Compose, which is a tool for declaratively defining your container formation. This means that you can define multiple process types which all run concurrently in different containers and communicate with one another over the network. Docker makes exposing interfaces between containers easy by using what it calls links. The beauty here is that it’s as simple as working with multiple processes on a single machine, but you can be sure that there are no tightly coupled components that might not work in a production environment!

docker-compose.yml

version: '3'
services:
  ###############################
  # Built from local Dockerfile #
  ###############################
  web:
    # Build the Dockerfile in this directory.
    build: .
    # Mount this directory as a volume at /app
    volumes:
      - '.:/app'
    # Make all commands relative to our application directory
    working_dir: /app
    # The process that runs in the container.
    # Remember, a container runs only ONE process.
    command: 'node server.js'
    # Set some environment variables to be used in the application
    environment:
      PORT: 8080
      # Notice the hostname postgres.
      # This is made available via container links.
      DATABASE_URL: 'postgres://postgres:@postgres:5432/postgres'
      REDIS_URL: 'redis://redis:6379'
      RABBIT_URL: 'amqp://rabbitmq'
    # Make the port available on the host machine
    # so that we can navigate there with our web browser.
    ports:
      - '8080:8080'
    # Link this container to other containers to create
    # a network interface.
    links:
      - postgres
      - redis
      - rabbitmq

  clock:
    build: .
    volumes:
      - '.:/app'
    working_dir: /app
    command: 'node clock.js'
    environment:
      DATABASE_URL: 'postgres://postgres:@postgres:5432/postgres'
      REDIS_URL: 'redis://redis:6379'
      RABBIT_URL: 'amqp://rabbitmq'
    links:
      - postgres
      - redis
      - rabbitmq

  worker:
    build: .
    volumes:
      - '.:/app'
    working_dir: /app
    command: 'node worker.js'
    environment:
      DATABASE_URL: 'postgres://postgres:@postgres:5432/postgres'
      REDIS_URL: 'redis://redis:6379'
      RABBIT_URL: 'amqp://rabbitmq'
    links:
      - postgres
      - redis
      - rabbitmq

  shell:
    build: .
    volumes:
      - '.:/app'
    working_dir: /app
    command: bash
    environment:
      DATABASE_URL: 'postgres://postgres:@postgres:5432/postgres'
      REDIS_URL: 'redis://redis:6379'
    ports:
      - '8080:8080'
    links:
      - postgres
      - redis
      - rabbitmq

  ############################
  # Built from remote images #
  ############################
  postgres:
    # Image name
    image: postgres
    # Expose the port on your local machine.
    # This is not needed to link containers.
    # BUT, it is handy for connecting to your
    # database with something like DataGrip from
    # your local host machine.
    ports:
      - '5432:5432'

  rabbitmq:
    image: rabbitmq
    ports:
      - '5672:5672'

  redis:
    image: redis
    ports:
      - '6379:6379'

Let’s walk through this example:

We have 7 different containers in our formation: web, clock, worker, shell, postgres, rabbitmq, and redis. That’s a lot! In a production environment, these processes might each run on separate physical servers, or they might all run on a single machine.

Notice how the web, clock, worker, and shell containers are all built from the current directory, so each of those four processes runs in a container built from the Dockerfile we defined above. The postgres, rabbitmq, and redis containers, on the other hand, are built from prebuilt images found on the Docker Store. Pulling ready-made images for these tools is much quicker than installing each of them on your local machine.
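If you want to download those prebuilt images ahead of time, docker-compose can pull them by service name:

docker-compose pull postgres redis rabbitmq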

Take a look at the volumes key. Here, we mount our current directory at /app. Then the working_dir key indicates that all commands run in the container are relative to this directory.

Ok. Now, take a look at the links key present on the locally built containers. This exposes a network interface between this container and the containers listed. Notice how we use the name of the link as the hostname in our environment variables: we link the containers and then expose the URI for each of our linked services as an environment variable.
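A quick way to see this name resolution in action is to look up the postgres hostname from inside one of the linked containers (getent should be available here since the node image is Debian-based):

docker-compose run shell getent hosts postgres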

Try running one of the services: run the command docker-compose up web.
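Once the web container is up, open a second terminal and hit the published port from your host machine:

curl http://localhost:8080/

You should see the Hello World! response from the server we define below.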

Write your application code

Ok, our server architecture includes 3 process types that run your application code: a web process that is responsible for serving web requests and pushing work to a job queue; a worker process that is responsible for pulling jobs off the queue and doing the work; and a clock process that is effectively a cron runner, pushing work onto our job queue on a schedule.

Our architecture also includes 3 other services that you commonly see in web server architecture: a Postgres database, a Redis datastore, and a RabbitMQ message broker.

Here’s a minimal implementation of the 3 aforementioned processes that also showcases the usage of our 3 data backends:

clock.js

const SimpleCron = require("simple-cron");
const amqp = require("amqplib/callback_api");

const cron = new SimpleCron();

// Every minute, push a job onto the "clock" queue for the worker to consume.
cron.schedule("* * * * *", () => {
  amqp.connect(process.env.RABBIT_URL, (err, conn) => {
    if (err) throw err;
    conn.createChannel((err, ch) => {
      const q = "clock";
      ch.assertQueue(q, { durable: false });
      ch.sendToQueue(q, Buffer.from("hi."));
    });
    console.log("Queuing new job!");
  });
});

cron.run();

server.js

const express = require("express");
const pg = require("pg");
const redis = require("redis");
const amqp = require("amqplib/callback_api");
const app = express();

app.get("/", (req, res) => {
  res.send("Hello World!");
});

// Test Postgres connection
app.get("/postgres/:blurb", (req, res) => {
  const ip = req.connection.remoteAddress;
  const db = new pg.Pool({ connectionString: process.env.DATABASE_URL });
  db.connect((err, client, done) => {
    client.query(
      'create table if not exists "blurbs" ("id" serial primary key, "text" varchar(255))',
      (err, result) => {
        client.query(
          'insert into "blurbs" ("text") values ($1)',
          [req.params.blurb],
          (err, result) => {
            client.query('select * from "blurbs"', (err, result) => {
              const blurbs = result.rows.map(o => o.text);
              res.send(`List of blurbs:\n${blurbs.join(" ")}`);
              client.end();
              done();
            });
          }
        );
      }
    );
  });
});

// Test Redis connection
app.get("/redis", (req, res) => {
  const client = redis.createClient(process.env.REDIS_URL);
  client.incr("count", (err, reply) => {
    res.send(`Request count: ${reply}`);
  });
});

// Test RabbitMQ connection
app.get("/rabbit/:msg", (req, res) => {
  amqp.connect(process.env.RABBIT_URL, (err, conn) => {
    conn.createChannel((err, ch) => {
      const q = "web";
      ch.assertQueue(q, { durable: false });
      ch.sendToQueue(q, Buffer.from(req.params.msg));
    });
    res.send("Message sent to worker process; check your terminal!");
  });
});

app.listen(process.env.PORT, () => {
  console.log(`Example app listening on port ${process.env.PORT}!`);
});

worker.js

const amqp = require("amqplib/callback_api");

amqp.connect(process.env.RABBIT_URL, (err, conn) => {
  conn.createChannel((err, ch) => {
    // Consume messages from web queue
    const q1 = "web";
    ch.assertQueue(q1, { durable: false });
    ch.consume(
      q1,
      msg => {
        console.info(
          "Message received from web process:",
          msg.content.toString()
        );
      },
      { noAck: true }
    );

    // Consume messages from clock queue
    const q2 = "clock";
    ch.assertQueue(q2, { durable: false });
    ch.consume(
      q2,
      msg => {
        console.info(
          "Message received from clock process:",
          msg.content.toString()
        );
      },
      { noAck: true }
    );
  });
});

There are example endpoints for each of the different components of our architecture. Visiting /postgres/:blurb will insert the blurb into the Postgres database and render a view containing everything inserted so far. Visiting /redis will increment and display a visit counter. Visiting /rabbit/:msg will send a message to the worker process; check the terminal logs to see it arrive. The clock process also runs continuously, sending a message to the worker process once every minute. Not bad for a one-minute setup!
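With the stack running, you can exercise each endpoint from the host machine (the hello arguments are just illustrative; any blurb or message works):

curl http://localhost:8080/postgres/hello
curl http://localhost:8080/redis
curl http://localhost:8080/rabbit/hello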

Pull it all together with a bash script

I like to write a simple script so I don't have to memorize as many commands:

manage.sh

#!/bin/bash

set -e

SCRIPT_HOME="$( cd "$( dirname "$0" )" && pwd )"
cd "$SCRIPT_HOME"

case "$1" in
  start)
      docker-compose up web worker clock
    ;;
  stop)
      docker-compose stop
    ;;
  build)
      docker-compose build
    ;;
  rebuild)
      docker-compose build --no-cache
    ;;
  run)
      if [ "$#" -lt 2 ]; then
        echo "Usage: $0 $1 <command>"
        exit 1
      else
        shift
        docker-compose run shell "$@"
      fi
    ;;
  shell)
      docker-compose run shell
    ;;
  *)
    echo "Usage: $0 {start|stop|build|rebuild|run|shell}"
    exit 1
esac

cd - > /dev/null

Done! Now we don’t need to worry about remembering docker-compose commands. To run our entire server stack, we simply run ./manage.sh start. If we need to build our containers again because we changed our Dockerfile or need to install new dependencies, we can run ./manage.sh build.

Our shell container exists so that we can open a shell session inside our container or run one-off commands in its context. Using the script above, you can run ./manage.sh shell to start a terminal session in the container. If you want to run a single command, use ./manage.sh run <command>.
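For example, to check which Node version is running inside the container:

./manage.sh run node --version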

If you're familiar with the difficulty caused by complex development environments running on your local machine, then a Docker-powered development environment could save you time. There is a bit of setup involved, but the productivity gained in the long run pays for itself.
