
Ann Kilzer キルザー杏

Using Prisma Migrate with a Dockerized Postgres

Many thanks to Olivier Le Chevalier for helping with Docker setup.

Why I am doing this

After trying a half dozen migration engines for Node.js, I was pleased to find Prisma and its excellent documentation. As a golang developer I am partial to SQLBoiler and its database-first approach, though perhaps this is a condition of our community where we want all the knobs. Prisma is code-first but still gave me enough control to feel confident.

So the initial setup was easy, but then I hit a challenge. How do I Dockerize Postgres so that our open-source contributors, each on a different operating system with who-knows-what flavor of Postgres, can get started quickly?

I'd set up Docker containers for golang, so this should be easy, right?

In coding there are always surprises 😏

Challenge: Shadow DB and permissions

After attempting my first migration with yarn prisma migrate dev --name init, I got the following error:

Error: Migration engine error:
db error: ERROR: permission denied to create database

That's weird... the main database looked fine. Then I learned about the "shadow database", a temporary database Prisma uses during migration. You can read more in the Prisma docs under "About the shadow database".

We can configure the URLs using environment variables and edit schema.prisma like so:

// schema.prisma
datasource db {
  provider          = "postgresql"
  url               = env("DATABASE_URL")
  shadowDatabaseUrl = env("SHADOW_DB_URL")
}

Now that we have a knob to configure the route to the shadow db, we can ensure that the database is set up with the right permissions via Docker.
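If you'd rather not manage a separate shadow database at all, Prisma's shadow-database docs describe an alternative: give the connection user permission to create databases, so Prisma can create and drop the shadow database itself. A sketch, run as a superuser, where myuser is a placeholder for your Prisma connection user:

```sql
-- Grant the CREATEDB privilege so Prisma Migrate can create
-- its own temporary shadow database.
ALTER USER myuser CREATEDB;
```

Either approach works; this post takes the explicit shadow-database route so everything is declared up front in Docker.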

Setting up Docker

As I'm not a fan of magic numbers or passwords in code, I configured a .env file that is not checked into git. Set your own password by replacing [add your password here].

# .env
POSTGRES_USER=postgres
POSTGRES_PASSWORD=[add your password here]
POSTGRES_DB=mydatabase
SHADOW_DB=shadow

# Outside the container
POSTGRES_HOST=127.0.0.1
POSTGRES_PORT=5400

# Inside the container (used in docker-compose.yml)
DOCKER_POSTGRES_HOST=database
DOCKER_INTERNAL_PORT=5432

DATABASE_URL="postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}?schema=public"
SHADOW_DB_URL="postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${SHADOW_DB}?schema=public"
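Prisma loads .env with variable expansion (via dotenv-expand), and Docker Compose interpolates the same ${…} references when it parses the compose file, so the two URL entries resolve against the values above. The expansion behaves like ordinary shell interpolation, which you can sanity-check directly (the values below are placeholders, not real credentials):

```shell
# Placeholder values standing in for the .env entries above.
POSTGRES_USER=postgres
POSTGRES_PASSWORD=secret     # placeholder, not a real credential
POSTGRES_HOST=127.0.0.1
POSTGRES_PORT=5400
POSTGRES_DB=mydatabase

# The same interpolation the .env file relies on:
DATABASE_URL="postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}?schema=public"
echo "$DATABASE_URL"
```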

Now for Docker Compose. If you don't have it already, download Docker Desktop and start it up.

In the root directory of your project, create a file docker-compose.yml with contents like so:

# docker-compose.yml
version: "3.8"
services:
  db:
    image: postgres:14-alpine
    networks:
      new:
        aliases:
          - database
    env_file:
      - .env
    ports:
      - ${POSTGRES_PORT}:${DOCKER_INTERNAL_PORT}
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}" ]
      interval: 10s
      timeout: 3s
      retries: 5
    volumes:
      - pgdata:/var/lib/postgresql/data # persist data even if container shuts down
  db-configure:
    image: postgres:14-alpine
    depends_on:
      - db
    networks:
      new:
        aliases:
          - database
    env_file:
      - .env
    environment:
      - DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${DOCKER_POSTGRES_HOST}:${DOCKER_INTERNAL_PORT}/${POSTGRES_DB}?schema=public
      - SHADOW_DB_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${DOCKER_POSTGRES_HOST}:${DOCKER_INTERNAL_PORT}/${SHADOW_DB}?schema=public
    entrypoint: /docker/db-init.sh
    volumes:
      - ./docker:/docker
      - status:/out
  install-deps:
    depends_on:
      - db-configure
    env_file:
      - .env
    image: node:18.12.1-alpine
    networks:
      - new
    working_dir: /target
    volumes:
      - .:/target
    command: "yarn install"
  migrate:
    depends_on:
      - db-configure
      - install-deps
    env_file:
      - .env
    environment:
      - DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${DOCKER_POSTGRES_HOST}:${DOCKER_INTERNAL_PORT}/${POSTGRES_DB}?schema=public
      - SHADOW_DB_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${DOCKER_POSTGRES_HOST}:${DOCKER_INTERNAL_PORT}/${SHADOW_DB}?schema=public
    links:
      - db
    networks:
      - new
    image: node:18.12.1-alpine
    working_dir: /target
    volumes:
      - .:/target
      - status:/in
    command: "sh -c 'until [ -f /in/db-init-done ] && [ -d /target/node_modules ]; do sleep 1; done; yarn migrate; yarn prisma db seed'"

volumes:
  pgdata:
  status:

networks:
  new:

You'll also need the init script docker/db-init.sh which contains:

#!/bin/sh
set -euo pipefail

until PGPASSWORD=$POSTGRES_PASSWORD psql -h ${DOCKER_POSTGRES_HOST} -U $POSTGRES_USER -p $DOCKER_INTERNAL_PORT -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping 😴"
  sleep 1
done

>&2 echo "Postgres is up 🙌 - executing command"

create_database_if_not_exists() {
    local db="$1"
    PGPASSWORD=$POSTGRES_PASSWORD psql -U $POSTGRES_USER -h database -tc "SELECT 1 FROM pg_database WHERE datname = '$db'" | grep -q 1 || \
    PGPASSWORD=$POSTGRES_PASSWORD psql -U $POSTGRES_USER -h database << EOF
       CREATE DATABASE $db;
       GRANT ALL PRIVILEGES ON DATABASE $db TO $POSTGRES_USER;
EOF
}

create_database_if_not_exists $POSTGRES_DB
create_database_if_not_exists $SHADOW_DB
touch /out/db-init-done

(I like to put emoji in my scripts because it really stands out in logging output.)
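One step that's easy to miss: Compose bind-mounts ./docker straight from your working tree and runs the script directly as the entrypoint, so the file needs its executable bit set on the host. From the project root:

```shell
# Create the directory and file if you haven't yet, then mark the
# init script executable so the container can run it as an entrypoint.
mkdir -p docker
touch docker/db-init.sh      # no-op if the script already exists
chmod +x docker/db-init.sh
```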

Let's walk through what's happening here.

  1. The first service defined in Docker Compose is db.
    a. We leverage the postgres:14-alpine image and give it access to .env via env_file.
    b. We attach it to a Docker-internal network called new and give it the alias database, so other containers can reach it by that hostname.
    c. We also publish it on a different host port, 5400, to avoid conflicting with any local Postgres (presumably running on 5432).

  2. The second service db-configure runs the init script /docker/db-init.sh and also uses the same image as the previous step.
    a. We override some .env variables via the environment parameter to distinguish between running prisma commands inside and outside of docker.
    b. Inside the script, we create all databases and assign permissions in preparation for running prisma.
    c. The last line, touch /out/db-init-done, is a neat trick that writes a file to the shared status volume when the operation is complete. Now other services can test that this file exists before running their own commands!

  3. Service install-deps runs yarn install in a node image. Nothing fancy.

  4. The last service migrate runs our migration and seeding.
    a. Again we override environment variables to ensure we have the docker-internal perspective.
    b. The command waits until the db-init-done file exists, and also until node_modules exist, before running migration and seeding.
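The sentinel-file trick in steps 2c and 4b is just a polling loop over a shared volume. Stripped of Docker, the mechanism looks like this (the background job below simulates db-configure finishing its work, with a temp directory standing in for the status volume):

```shell
# Shared "volume" simulated with a temp directory.
STATUS_DIR=$(mktemp -d)

# Simulate db-configure: do some work, then drop the sentinel file.
( sleep 1; touch "$STATUS_DIR/db-init-done" ) &

# Simulate the migrate service: poll until the sentinel appears.
until [ -f "$STATUS_DIR/db-init-done" ]; do
  sleep 1
done
echo "db-init-done found, safe to migrate"
```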

Try it out!

Now you can run everything with docker compose up in one terminal. From another terminal, run yarn prisma studio to open a data browser for your dockerized database.
