
Kacper Michalik

Set Up PostgreSQL with NestJS & Docker for Fast Local Development

Launching a new project and need Postgres for NestJS development, but don’t want to commit to a production DB provider (yet)? Running a local Postgres instance in Docker is your best friend. Simple. Reliable. No system clutter.

Below, I’ll walk you through my go-to setup for spinning up NestJS and PostgreSQL using Docker - minimal friction, fully scriptable, always reproducible. You’ll get practical configuration, commands for direct container access, and a sample NestJS database config.

Why This Setup?

Early development is all about moving fast: changing schemas, resetting data, running migrations, sometimes all in the same day. Managed cloud databases (like Neon) are a great final destination, but for local hacking and testing, Docker wins every time. It keeps Postgres off your host machine and avoids "works on my machine" surprises: true plug-and-play for local dev.

Project Structure and Required Files

Here’s what we’ll set up:

  • Dockerfile for the NestJS app
  • docker-compose.yml to wire up Node and Postgres
  • .env file for environment variables
  • Sample NestJS config and scripts
  • Practical commands for common workflows

Dockerfile: Simple Node Environment

FROM node:18

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 3000

CMD ["npm", "run", "start:dev"]
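One caveat with `COPY . .`: it copies everything in the build context, including your host `node_modules` and local env files. A `.dockerignore` next to the Dockerfile keeps those out of the image (a minimal sketch; extend it to match your project):

```
node_modules
dist
.git
.env
```

Excluding host `node_modules` also avoids subtle bugs from native modules compiled for your OS instead of the container's.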

docker-compose.yml: Node + Postgres Side-by-Side

This is the magic sauce that glues your Node API and a disposable Postgres instance together:

version: "3.8"

services:
  db:
    image: postgres:13
    restart: always
    env_file:
      - .env
    ports:
      - "5432:5432"
    volumes:
      - db-data:/var/lib/postgresql/data

  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    depends_on:
      - db
    env_file:
      - .env
    command: sh -c "npm run migration:run && npm run start:dev"  

volumes:
  db-data:

Tip: The volumes key lets your database survive reboots without losing data.
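One more caveat: `depends_on` only waits for the db container to start, not for Postgres to actually accept connections, so the `migration:run` step in the api command can race the database. A healthcheck plus the long-form `depends_on` condition is one fix (a sketch to merge into the compose file above; note that `condition: service_healthy` requires a reasonably recent Docker Compose):

```yaml
  db:
    image: postgres:13
    healthcheck:
      # $$ escapes the $ so the container's shell, not Compose, expands it
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
      interval: 5s
      timeout: 5s
      retries: 5

  api:
    depends_on:
      db:
        condition: service_healthy
```

With this in place, the api container only starts once `pg_isready` reports the database is accepting connections.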


.env

Create a .env file at your project root:

POSTGRES_USER=postgres
POSTGRES_PASSWORD=changeme
POSTGRES_DB=app_db
POSTGRES_HOST=db
POSTGRES_PORT=5432
PORT=3000

Keep your secrets out of Git! .env goes in .gitignore.
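A missing variable here tends to fail in confusing ways deep inside the app. A tiny fail-fast check at boot turns that into an immediate, readable crash. A minimal sketch (`requireEnv` and `validateEnv` are hypothetical helpers, not part of NestJS):

```typescript
// Hypothetical fail-fast helpers: call validateEnv() early in main.ts so a
// missing variable crashes at startup, not mid-request.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

function validateEnv(names: string[]): void {
  for (const name of names) requireEnv(name);
}
```

Calling `validateEnv(["POSTGRES_HOST", "POSTGRES_PORT", "POSTGRES_USER", "POSTGRES_PASSWORD", "POSTGRES_DB"])` before `NestFactory.create` catches a bad `.env` in one place.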


Package.json Scripts: Interactive Containers

Why remember container IDs? Add this to your package.json scripts for quick access:

"scripts": {
  "db": "docker exec -it $(docker-compose ps -q db) bash",
  "api": "docker exec -it $(docker-compose ps -q api) bash"
}

Now just run npm run db for a shell in the database container, or npm run api for the app container. From inside the db shell, psql -U postgres app_db drops you into the database itself.
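You can also jump straight into psql without the intermediate bash shell. A hypothetical `psql` script, using the credentials from the `.env` above (adjust the user and database name to yours):

```json
"scripts": {
  "psql": "docker exec -it $(docker-compose ps -q db) psql -U postgres app_db"
}
```

Note that the `$(...)` substitution assumes a POSIX shell; on Windows, run these scripts from Git Bash or WSL.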


NestJS: Connecting to Your Dockerized Database

In your main startup (e.g. main.ts):

import { NestFactory } from "@nestjs/core";
import { AppModule } from "./app.module";

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // PORT comes from .env; fall back to 3000 if it's unset.
  await app.listen(process.env.PORT ?? 3000);
}
bootstrap();

Database Config:

Here’s a common config file for TypeORM:

const config = {
  type: "postgres",
  host: process.env.POSTGRES_HOST,
  port: parseInt(process.env.POSTGRES_PORT ?? "5432", 10),
  username: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
  entities: [__dirname + "/**/*.entity{.ts,.js}"],
  synchronize: false, // never auto-sync the schema; use migrations instead
  migrations: [__dirname + "/migrations/**/*{.ts,.js}"],
  autoLoadEntities: true, // TypeOrmModule option: picks up entities registered in feature modules
};
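If you want this object to be robust against a missing or malformed port, the env parsing can be factored into a small helper. A sketch under the assumption that sane local-dev defaults are acceptable (`buildDbOptions` is a hypothetical name, not a TypeORM API):

```typescript
// Hypothetical helper: builds the connection options from an env-like
// record, applying local-dev defaults and validating the port.
type EnvRecord = Record<string, string | undefined>;

function buildDbOptions(env: EnvRecord) {
  const port = parseInt(env.POSTGRES_PORT ?? "5432", 10);
  if (Number.isNaN(port)) {
    throw new Error(`Invalid POSTGRES_PORT: ${env.POSTGRES_PORT}`);
  }
  return {
    type: "postgres" as const,
    host: env.POSTGRES_HOST ?? "localhost", // "db" inside compose, localhost outside
    port,
    username: env.POSTGRES_USER ?? "postgres",
    password: env.POSTGRES_PASSWORD ?? "",
    database: env.POSTGRES_DB ?? "app_db",
  };
}
```

Calling `buildDbOptions(process.env)` in the config file keeps all the defaulting and validation in one testable function.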

Development Workflow: Day-to-Day Commands

  • Start everything: docker-compose up --build (first time) or just docker-compose up
  • View logs: docker-compose logs -f api
  • Tear it down (remove containers): docker-compose down
  • Wipe the database too (drops the volume): docker-compose down -v
  • Hop into the DB container shell: npm run db
  • Hop into the app container: npm run api

Conclusion - Start Fast, Move Faster

That’s the whole setup: one Dockerfile, one docker-compose.yml, one .env, and a couple of npm scripts. Clone the repo, run docker-compose up, and you have a reproducible NestJS + Postgres environment on any machine. When the project outgrows local dev, point the same env variables at a managed database and the application code doesn’t change.

Top comments (1)

arun rajkumar (mickyarun)

Nice setup guide. One thing I'd add for teams running multiple services: the single-service Docker setup works great in isolation, but the pain starts when service A needs to call service B's API during local development. We solved this by having all services register with a local Traefik reverse proxy — each service gets its own route (e.g., api.local/service-a, api.local/service-b), and they discover each other the same way they would in production. Removes the "works on my machine but breaks when services talk to each other" problem that kills productivity on microservices teams.