When setting up the backend for my project, I ran into many issues configuring and connecting to the DB running in a Docker container via Node & PgAdmin. So, I wanted to explain how I fixed these issues in hopes that it can save you hours of frustration.
We will be learning to:
- Configure Typescript for Node.js
- Run Node.js & Postgres in Docker containers
- Use env variables in Docker Compose & Node.js
- Connect to the DB running in a container via PgAdmin
- Use Nodemon to automatically restart the server once the code changes
Prerequisite
Typescript & Nodemon
We will start by creating a basic Express server.
First, let's install the packages we will need:
# Dev dependencies
npm i --save-dev typescript nodemon @types/pg @types/express
# Runtime dependencies (dotenv is read at runtime, so it belongs here)
npm i pg express dotenv
Add the following scripts in package.json:
"scripts": {
"start": "node ./dist/app.js",
"dev": "nodemon -L -e ts --exec \"npm run build && npm start\"",
"build": "tsc"
}
- `build` converts all our `.ts` files to `.js` and puts them in a `dist` folder (as configured below in `tsconfig.json`)
- `dev` uses `nodemon` to watch for changes in any `.ts` file (`-e ts`). When there are changes, it runs the `build` & `start` scripts. Nodemon saves us from having to stop and start the server each time there is a change
  - `-L` is required when using `nodemon` in containers
- `start` starts up our server
To configure Typescript, create a tsconfig.json file at the root with the following:
{
"compilerOptions": {
"target": "es6" /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', 'ES2018', 'ES2019', 'ES2020', or 'ESNEXT'. */,
"module": "commonjs" /* Specify module code generation: 'none', 'commonjs', 'amd', 'system', 'umd', 'es2015', 'es2020', or 'ESNext'. */,
"outDir": "./dist" /* Redirect output structure to the directory. */,
"strict": true /* Enable all strict type-checking options. */,
"typeRoots": ["./node_modules/@types"] /* List of folders to include type definitions from. */,
"esModuleInterop": true /* Enables emit interoperability between CommonJS and ES Modules via creation of namespace objects for all imports. Implies 'allowSyntheticDefaultImports'. */,
"skipLibCheck": true /* Skip type checking of declaration files. */,
"forceConsistentCasingInFileNames": true /* Disallow inconsistently-cased references to the same file. */
}
}
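One of these options is worth a quick illustration: `esModuleInterop` is what lets us default-import CommonJS packages (like `express` below) instead of writing `import * as express`. A minimal sketch using Node's built-in `path` module, which is also CommonJS:

```typescript
// With "esModuleInterop": true we can default-import a CommonJS module,
// e.g. Node's built-in "path" module, the same way we import express below
import path from "path";

// join produces a platform-appropriate relative path, e.g. "src/app.ts" on POSIX
console.log(path.join("src", "app.ts"));
```

Without the flag, the compiler would reject this default import of a CommonJS module.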
Next, create an .env file at the root so that we use the same variables when configuring both Docker Compose & the server. This also lets us keep those values private: docker-compose.yml is committed to GitHub, whereas the .env file is not.
For now, add a PORT variable to set the port the server will run at:
PORT=5000
Create an app.ts file in a new src folder with the following content:
import express, { NextFunction, Request, Response } from "express";
import dotenv from "dotenv";
const app = express();
dotenv.config(); //Reads .env file and makes it accessible via process.env
app.get("/test", (req: Request, res: Response, next: NextFunction) => {
res.send("hi");
});
app.listen(process.env.PORT, () => {
console.log(`Server is running at ${process.env.PORT}`);
});
To verify everything is set up correctly thus far, start the server:
npm run dev
Now, make a GET request to localhost:5000/test. The response should be hi. Also, notice there should be a dist folder containing the compiled .js files.
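For example, with the server running, you can hit the route from another terminal (assuming you have `curl` available; a browser or Postman works just as well):

```shell
# Send a GET request to the test route; the server should reply with "hi"
curl http://localhost:5000/test
```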
Docker
Now, we will run the server & Postgres in Docker containers.
Before that, you might ask why use Docker at all?
Docker allows your app to run in isolated environments known as containers. Consequently, this solves the age-old problem of "the code works on my machine".
Also, it allows you to use all the tools you want without installing them locally, by using images instead.
Docker images can be pulled from Docker Hub or created using a Dockerfile.
Create a file named Dockerfile at the root:
# Installs Node.js image
FROM node:16.13.1-alpine3.14
# sets the working directory for any RUN, CMD, COPY command
# all files we put in the Docker container running the server will be in /usr/src/app (e.g. /usr/src/app/package.json)
WORKDIR /usr/src/app
# Copies package.json, package-lock.json, tsconfig.json, .env to the root of WORKDIR
COPY ["package.json", "package-lock.json", "tsconfig.json", ".env", "./"]
# Copies everything in the src directory to WORKDIR/src
COPY ./src ./src
# Installs all packages
RUN npm install
# Runs the dev npm script to build & start the server
CMD npm run dev
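To sanity-check the Dockerfile on its own before bringing Docker Compose into the picture, you can build and run the image directly. The tag `my-api` here is just an example name:

```shell
# Build the image from the Dockerfile in the current directory
docker build -t my-api .
# Run it, publishing the server port and passing in the .env variables
docker run --rm -p 5000:5000 --env-file .env my-api
```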
The Dockerfile builds our Express server into an image, which we can then run in a container.
When creating applications that use multiple containers, it is best to use Docker Compose to configure them.
But before Docker Compose, let's add some more variables to the .env file as we will require them shortly.
DB_USER='postgres'
DB_HOST='db'
DB_NAME='db_name'
DB_PASSWORD='password'
DB_PORT=5432
- `DB_HOST` corresponds to the name of the DB service below. This is because each Docker container has its own definition of `localhost`. You can think of `db` as the container's `localhost`
- `DB_PORT` is the default port Postgres uses
- `DB_PASSWORD` & `DB_USER` are the default auth credentials Postgres uses
Create a docker-compose.yml file at the root:
version: '3.8'
services:
api:
container_name: api
restart: always
build: .
ports:
- ${PORT}:${PORT}
depends_on:
- db
volumes:
- .:/usr/src/app
db:
container_name: postgres
image: postgres
ports:
- '5433:${DB_PORT}'
volumes:
- data:/data/db
environment:
- POSTGRES_PASSWORD=${DB_PASSWORD}
- POSTGRES_DB=${DB_NAME}
volumes:
data: {}
Note: The ${VARIABLE_NAME} syntax lets us use variables from the .env file. Docker Compose can automatically get variables from the root .env file.
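To check that the substitution worked, you can ask Docker Compose to print the file with every `${...}` resolved from `.env`:

```shell
# Print the fully resolved compose file (variables substituted from .env)
docker-compose config
```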
For the api service, we are:
- using the `Dockerfile` to build the container
- exposing `${PORT}` (which was 5000 from the `.env` file). When we expose a port, it allows us to access the server via `localhost:${PORT}`
- only starting the container once the `db` service finishes starting up
- mapping all the files in the project directory to the `WORKDIR` of the container using volumes
For the db service, we are:
- using the
postgresimage from Docker Hub - using volumes so that our DB data does not erase when we shut down the container
- mapping port
5432of the container to port5433of ourlocalhost - using env variables from the
.envfile and passing it to thepostgresimage. The image requires at least thePOSTGRES_PASSWORDas per the documentation on Docker Hub. We also includedPOSTGRES_DBas it specifies a different name for the default database that is created when the image is first started
Connecting To Postgres
To connect the server to the Postgres container, add the following to app.ts:
import { Pool } from "pg";
const pool = new Pool({
host: process.env.DB_HOST,
user: process.env.DB_USER,
database: process.env.DB_NAME,
password: process.env.DB_PASSWORD,
port: parseInt(process.env.DB_PORT || "5432")
});
const connectToDB = async () => {
try {
await pool.connect();
} catch (err) {
console.log(err);
}
};
connectToDB();
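As a small aside, the `parseInt(process.env.DB_PORT || "5432")` line is there because every value on `process.env` is a string (or `undefined`), so we must parse it and supply a default. A minimal sketch of that fallback, with `toPort` as a hypothetical helper name:

```typescript
// Replicates the DB_PORT fallback from the Pool config above:
// env values are string | undefined, so parse with a default of 5432
const toPort = (value?: string): number => parseInt(value ?? "5432", 10);

console.log(toPort(undefined)); // 5432 (DB_PORT unset)
console.log(toPort("5433")); // 5433 (DB_PORT set in .env)
```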
Now, we can start up the server & DB with the following command:
docker-compose up
This will build & start the containers (api & db). Remember, db will start first, then api, as api depends on db.
Try making the same GET request we did earlier and you should get the same response.
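You can also verify the DB side by opening a `psql` shell inside the running container. The container name `postgres` comes from `container_name` in the compose file, and the user & database name come from `.env`:

```shell
# Open an interactive psql session inside the Postgres container
docker exec -it postgres psql -U postgres -d db_name
```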
Before we end the tutorial, you might be wondering, how do I view the DB and its contents? There are 2 ways:
- You can add a new service to the `docker-compose.yml` file that uses the pgadmin4 image
- If you have PgAdmin installed locally, use `localhost` as the host & `5433` as the port when adding a new server. Why `5433` and not `5432`, the default port of Postgres? Earlier, we mapped port `5432` of the container to port `5433` of our `localhost`. But why `5433`? It could've been any port, just not `5432`: if you have Postgres already installed locally, it is already using port `5432`, so the Postgres container cannot be mapped to the same port.
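If you go the first route, a service along these lines could be added to docker-compose.yml. This is a sketch, not from the original post: `dpage/pgadmin4` is the PgAdmin 4 image on Docker Hub, and the email, password, and host port `5050` are example values you should change:

```yaml
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    ports:
      - '5050:80'
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin@example.com
      - PGADMIN_DEFAULT_PASSWORD=admin
    depends_on:
      - db
```

You could then open localhost:5050 in a browser and add a server using `db` as the host (since PgAdmin would be running inside the compose network) with the credentials from `.env`.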
Conclusion
I hope my explanation was clear & helped you in some way. If you want the source code, you can find the full code here.