Desmond Goldsmith
Building a DevOps Pipeline from Scratch: Local Deployment with Vagrant, Docker & Nginx - Part 1

This is Part 1 of a series where I build a complete DevOps pipeline from the ground up: local deployment, CI/CD, security scanning, monitoring, and alerting.

Series Overview

This series documents the end-to-end process of building a production-grade DevOps pipeline. Here's where we're headed:

  • Part 1: Local Deployment with Vagrant, Docker, and Nginx
  • Part 2: CI/CD Pipeline with GitHub Actions (automated testing and deployment on every push)
  • Part 3: Security Scanning with Trivy (container image and code vulnerability scanning)
  • Part 4: Monitoring and Logging
  • Part 5: Slack Alerting for Failed Builds and Deployments (real-time notifications when things go wrong)

What We're Building in Part 1

In this first part, we deploy a full-stack web application locally using Vagrant as the infrastructure layer, Docker Compose to orchestrate the containers, and Nginx as the reverse proxy sitting in front of everything.

Here's the architecture:

Architecture diagram showing a browser connecting to a Vagrant VM through port forwarding, traffic flowing through Nginx as a reverse proxy to a React frontend and Express backend, with the backend connected to a MySQL database. All services run as Docker containers on a shared app-network.

All four services run as Docker containers on a shared private network. Nothing is exposed to the outside except Nginx on port 80.

Why This Stack

  • Vagrant lets us provision a real Ubuntu server locally. Everything practiced here transfers directly to cloud deployments, and you can destroy and rebuild the environment as many times as needed, with no cloud costs and no configuration drift.

  • Docker Compose manages the multi-container setup through a single declarative file. Rather than running and networking containers manually, Compose handles the entire lifecycle with one command.

  • Nginx as a reverse proxy is standard practice in production. It gives us a single entry point for all traffic, cleanly separates routing concerns from application logic, and makes it straightforward to add SSL termination, rate limiting, or load balancing later.

Prerequisites

You'll need Vagrant, VirtualBox, and Git installed on your local machine. To verify the installations, run:

```
vagrant --version
VBoxManage --version
git --version
```

Step 1: Create the Vagrantfile

Before we touch anything, make sure you're working on your local machine, not inside a VM or remote server. Open your terminal and follow along.

Create a new directory for the Vagrant project and move into it:

```
mkdir ~/vagrant-devops-project
cd ~/vagrant-devops-project
```

In the vagrant-devops-project directory, create a file called Vagrantfile by running touch Vagrantfile.

Open the Vagrantfile in vim with vim Vagrantfile and paste the configuration below:

```
Vagrant.configure("2") do |config|

  # Ubuntu 22.04
  config.vm.box = "ubuntu/jammy64"
  config.vm.hostname = "devops-server"

  # Forward VM port 80 to localhost:8080
  config.vm.network "forwarded_port", guest: 80, host: 8080

  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
    vb.cpus = 2
  end

  # Provision Docker using the official install script
  config.vm.provision "shell", inline: <<-SHELL
    curl -fsSL https://get.docker.com | sh
    usermod -aG docker vagrant
  SHELL

end
```

When you're done pasting, press Esc to exit insert mode, then type :wq and hit Enter to save and close the file.

Note:
The provision script uses Docker's official installer to set up Docker and Docker Compose on the Ubuntu VM.

The usermod line grants the vagrant user permission to run Docker without sudo.

Port forwarding maps localhost:8080 on your machine to port 80 inside the VM, which is where Nginx listens.

Step 2: Boot the VM

Run the command below to boot the VM.

```
vagrant up
```

Step 3: SSH into the VM

Run the command below to SSH into the VM.

```
vagrant ssh
```
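Before moving on, it's worth confirming the provisioning actually worked. This quick sanity check is my own addition, not part of the original setup; run it inside the VM:

```shell
# Check Docker Engine and the Compose plugin installed by get.docker.com
docker --version
docker compose version

# Confirm the vagrant user was added to the docker group,
# so Docker commands work without sudo
id -nG vagrant | grep -w docker && echo "docker group OK"
```

If any of these fail, re-run the provisioning with `vagrant provision` from your host machine.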

Step 4: Get the Application Code

We're using Docker's official awesome-compose repository as the application base. The focus of this series is the infrastructure and the pipeline, not the application code itself.

Inside your VM, clone the project and cd into the react-express-mysql sample directory:

```
git clone https://github.com/docker/awesome-compose.git
cd awesome-compose/react-express-mysql
```

This is the project structure:

```
react-express-mysql/
├── frontend/        ← React application
├── backend/         ← Node.js / Express API
├── db/              ← Database init scripts
├── compose.yaml     ← Docker Compose config
└── README.md
```

Step 5: Add Nginx

The repository doesn't include Nginx. We add it ourselves.
In the react-express-mysql directory, create an nginx directory using the command below:

```
mkdir nginx
cd nginx
```

In the nginx directory, create the config file with vim nginx.conf and paste the configuration below:

```
server {
    listen 80;

    # Route frontend traffic
    location / {
        proxy_pass http://frontend:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Route API traffic to the backend
    location /api/ {
        proxy_pass http://backend:5000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

After pasting the code above, press Esc to exit insert mode, then type :wq and hit Enter to save and close the file. Then move back up to the project root with cd .., since the next step edits compose.yaml there.

Note:
frontend and backend in the proxy_pass directives are Docker service names, not folder references. When containers share a Docker network, they can resolve each other by service name; Docker handles the DNS internally, so no IP addresses are required.

proxy_set_header forwards the original client information to the upstream service. This matters when your backend needs the real client IP for logging or rate limiting.
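If you later need the backend to know whether a request originally arrived over HTTPS, or to see the full chain of proxy hops, two more headers are commonly forwarded. This is an optional extension to the config above, not something the rest of this tutorial depends on:

```
    location /api/ {
        proxy_pass http://backend:5000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # Full chain of client and proxy IPs, appended hop by hop
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Original scheme (http or https), useful once SSL termination is added
        proxy_set_header X-Forwarded-Proto $scheme;
    }
```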

Step 6: Update Docker Compose

Replace the contents of compose.yaml with the configuration below:

```
services:

  frontend:
    build:
      context: frontend
      target: development
    networks:
      - app-network
    depends_on:
      - backend

  backend:
    build: ./backend
    environment:
      - DB_HOST=db
      - DB_USER=root
      - DB_PASSWORD=password
      - DB_NAME=example
    networks:
      - app-network
    depends_on:
      - db
    restart: always

  db:
    image: mariadb:10.6.4-focal
    command: '--default-authentication-plugin=mysql_native_password'
    environment:
      - MYSQL_DATABASE=example
      - MYSQL_ROOT_PASSWORD=password
    volumes:
      - db-data:/var/lib/mysql
    networks:
      - app-network
    restart: always

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - frontend
      - backend
    networks:
      - app-network
    restart: always

volumes:
  db-data:

networks:
  app-network:
    driver: bridge
```

Key design decisions in this config:

Only Nginx has a ports mapping. Every other service communicates internally over app-network. This is intentional: the database and backend are never directly reachable from outside the VM.

depends_on enforces startup ordering. Nginx won't start until the frontend and backend containers have started, and the backend won't start until the database container has. Note that depends_on only waits for a container to start, not for the application inside it to be ready.

restart: always ensures containers recover automatically from crashes.

The db-data named volume persists database state independently of the container lifecycle. Removing and recreating the database container does not wipe the data.
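If you want the backend to wait until MySQL is actually accepting connections rather than merely started, Compose supports health checks combined with a depends_on condition. Here's a sketch of how the db and backend services could be extended; the mysqladmin ping command assumes that binary is available in the mariadb image:

```
  db:
    # Report healthy only once the server answers a ping
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-ppassword"]
      interval: 5s
      retries: 10

  backend:
    depends_on:
      db:
        condition: service_healthy
```

With this in place, the restart: always policy becomes a fallback rather than the mechanism that papers over the backend starting before the database is ready.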

Your updated project structure:

```
react-express-mysql/
├── frontend/
├── backend/
├── db/
├── nginx/
│   └── nginx.conf
├── compose.yaml
└── README.md
```

Step 7: Deploy the Application

Run the command below to build the images and start the stack in detached mode:

```
docker compose up --build -d
```

Verify all containers are running:

```
docker compose ps
```

Expected output:

```
NAME                              STATUS
react-express-mysql-frontend-1    running
react-express-mysql-backend-1     running
react-express-mysql-db-1          running
react-express-mysql-nginx-1       running
```

Step 8: Verify the Deployment

Open your browser on your local machine:

```
http://localhost:8080
```

The React application loads. The request path is: browser → Vagrant port forwarding → VM port 80 → Nginx → React frontend container.
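You can also verify both routes from a terminal on your host machine. These curl checks are my own addition; the exact API path depends on what the Express app exposes, so treat the /api/ URL as an assumption:

```shell
# Frontend route: Nginx should proxy this to the React dev server
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/

# API route: Nginx strips the /api/ prefix (trailing slash in proxy_pass)
# and forwards the request to the backend container
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/api/
```

A 200 on the first check confirms the full proxy chain is working end to end.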

Useful Commands

```
# View logs for a specific service
docker compose logs nginx
docker compose logs backend

# Stream logs in real time
docker compose logs -f nginx

# Open a shell inside a running container
docker exec -it react-express-mysql-nginx-1 sh

# Monitor container resource usage
docker stats

# Stop the stack
docker compose down

# Stop and remove all volumes (full reset)
docker compose down -v

# Restart a single service
docker compose restart nginx
```

What's Coming in Part 2

With the application running locally, the next step is automation.
In Part 2 we'll build a CI/CD pipeline using GitHub Actions that triggers on every push to the main branch. It will run automated tests, and if they pass, SSH into the server and deploy the latest version with zero manual intervention.

After that, the series continues with:

  • Trivy for scanning container images and source code for known vulnerabilities before they ever reach the server

  • Centralized logging so you have visibility into what's happening across all containers

  • Slack notifications for failed builds and deployments, because the worst way to find out something broke is from a user

Final Thoughts

I've been learning DevOps for a while now, going through Docker, Linux, AWS, Vagrant, and currently working towards my Kubernetes certification. But there's a point where consuming content stops being enough and you just have to build something real.

This project was that moment for me. And honestly, the thing I learned most wasn't about Docker or Nginx. It was that the tools make a lot more sense when they're solving a real problem in front of you. Why does Nginx sit in front of everything? Because you need one controlled entry point. Why do containers talk by name instead of IP? Because IPs change, names don't. Those things click differently when you're the one wiring them together.

This setup is intentionally close to how production pipelines are structured: a single entry point through Nginx, services isolated on a private network, infrastructure defined as code. Each part of this series builds directly on the previous one. By the end you'll have a complete pipeline, from a local vagrant up all the way through automated deployment, security scanning, and real-time alerting.

Follow along and drop questions in the comments. Thanks.
