ChigozieCO
Dockerized Deployment of a Full Stack Application with Reverse Proxy, Monitoring & Observability

The goal here is to provide a robust and scalable application infrastructure while showcasing best practices in containerization, monitoring, and cloud deployment. By the end of this project, users will have a fully functional web application deployed on a cloud platform with proper domain configuration, detailed logging, and real-time performance monitoring.


Project Overview

This project demonstrates the deployment of a full-stack application with a FastAPI backend and a React frontend using Docker. It integrates a reverse proxy for routing, and implements comprehensive monitoring and observability using Prometheus, Grafana, Loki, Promtail, and cAdvisor. We will eventually host our application from a cloud platform.


Prerequisites

  • Docker and Docker Compose: Installed and configured on your system.

  • Basic Knowledge of Docker: Understanding containerization concepts and Docker CLI commands.

  • Git: Installed to clone the project repository.

  • Code Editor: Like VS Code or any preferred IDE.

  • System Requirements: At least 4GB RAM and a stable internet connection for pulling images and dependencies.

  • AWS Account and AWS CLI: Required for deploying the application to the cloud.

  • Observability Tools Knowledge: Familiarity with Prometheus, Grafana, Loki, cAdvisor, and Promtail for monitoring and observability setup.


Clone Application Repo

The application we are to deploy can be found here. To begin the project we will clone the repo into an empty directory using the command below:

git clone https://github.com/The-DevOps-Dojo/cv-challenge01.git .

The . at the end of the command ensures that the repo is cloned directly into the current directory, without creating an additional folder.

clone-repo

Let's begin.


Containerization

This project leverages Docker to containerize the frontend, backend, database, a database management tool, a reverse proxy and monitoring and observability (Prometheus, Grafana, cAdvisor, Promtail and Loki).

Using docker compose, the services are orchestrated to run seamlessly in isolated environments, enabling consistent and efficient development, testing, and deployment.

Now we will go ahead and write the Dockerfile for each of the services, starting from the backend, then the frontend, the database and Adminer, the database management tool.

Dockerize Backend

We will dockerize the backend now by writing the Dockerfile we will use to create our image. Navigate into the backend directory and create a new file named Dockerfile. You can find all the code used in this project here.

Using a multi-stage Docker build I was able to reduce my image size by 70.99%, from an initial 262Mi down to a final 76Mi, as can be seen in the screenshots below:

Version 1

build1

Multi-stage Version 2 Build

build2

My Dockerfile can be found here

Add the below code into the Dockerfile you created:

# Stage 1: Builder stage, Base image for the backend
FROM python:3.10-slim AS builder

# Install system dependencies
RUN apt-get update && apt-get install -y \
    curl \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

# Install poetry
RUN curl -sSL https://install.python-poetry.org | python3 -

# Add poetry to PATH
ENV PATH="/root/.local/bin:$PATH"

# Set working directory and copy only dependency files first
WORKDIR /backend
COPY pyproject.toml poetry.lock* ./

# Install dependencies with Poetry
RUN poetry config virtualenvs.create false && poetry install

# Stage 2: Slim final image
FROM python:3.10-slim

# Install system dependencies
RUN apt update && apt install -y \
    curl \
    postgresql-client \
    && rm -rf /var/lib/apt/lists/*

# Install Poetry
RUN curl -sSL https://install.python-poetry.org | python3 -

# Add poetry to PATH
ENV PATH="/root/.local/bin:$PATH"

# Set working directory
WORKDIR /backend

# Copy virtual environment from builder stage
COPY --from=builder /usr/local/lib/python3.10/site-packages /usr/local/lib/python3.10/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin

# Copy application files
COPY . .

RUN poetry config virtualenvs.create false

# Expose application port
EXPOSE 8000

# Set the default command to run the application
CMD ["poetry", "run", "uvicorn", "app.main:app", "--reload", "--host", "0.0.0.0", "--port", "8000"]

✨ I opted to use the slim variant of the python image as it is lightweight while still being developer-friendly and avoids common build issues.

It has better compatibility than the Alpine variant, fewer dependency issues, a moderate size, and is easier to debug than Alpine.

✨ I installed curl as I would be needing it for the installation of Poetry, then installed Poetry and added it to PATH.

✨ We used the ENV instruction to set the $PATH variable globally, making sure Poetry is available in all subsequent layers and in the runtime environment.

✨ As mentioned earlier, I used a multi-stage build: in the first stage I installed Poetry and the project dependencies, and in the second stage I copied the installed packages into the final image. However, since we will still need Poetry to set up the database with the necessary tables, we install Poetry in the second stage as well.

This has minimal impact on the size of the image, so we are in the clear.
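As a quick sanity check, the 70.99% figure quoted earlier matches the before and after sizes from the screenshots:

```shell
# (262Mi - 76Mi) / 262Mi, expressed as a percentage
awk 'BEGIN { printf "%.2f%%\n", (262 - 76) / 262 * 100 }'
# → 70.99%
```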

Build Backend Image

We don't necessarily have to build the image, but if you want to build it for testing purposes, navigate into the backend directory (where you have your backend Dockerfile) and use the command:

docker build -t devopsdojo/backend:v1 .

⚠️ NOTE:

Ensure docker is running before running the above command otherwise you will get an error message.

To check the size of your image you can use the below command:

docker image inspect devopsdojo/backend:v1 --format='{{.Size}}' | numfmt --to=iec-i
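docker image inspect prints the size as a raw byte count; numfmt converts it to a human-readable IEC value. As a standalone illustration with a hard-coded byte count:

```shell
# 262144000 bytes rendered with binary (1024-based) suffixes
echo 262144000 | numfmt --to=iec-i
# → 250Mi
```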

Dockerize Frontend

Just as we did with the Backend, we will write a Dockerfile that we will use in building our frontend image. Navigate into the frontend directory and create a new file Dockerfile.

Even though I opted for the slim variant of the node image and a multi-stage build, as with my backend image, I was not able to substantially reduce the size of my frontend image. I did manage to bring it down from over 1 GB to 850 MB, though I would have loved to trim it further.

This Dockerfile is pretty straightforward: install the dependencies with npm, expose the port (merely for documentation) and start the dev server.

Find my Frontend Dockerfile here.

Enter the below code into your Dockerfile:

# Base image for the frontend
FROM node:22-slim AS base

# Set working directory
WORKDIR /frontend

# Install debugging tools
RUN apt update && apt install -y bash net-tools && rm -rf /var/lib/apt/lists/*

# Copy package.json and package-lock.json separately for better caching
COPY package*.json ./

# Install dependencies and clear cache
RUN npm install && npm cache clean --force

# Base image for stage 2 to reduce size of image
FROM node:22-slim

# Set working directory
WORKDIR /frontend

# Copy dependencies and source code
COPY --from=base /frontend/node_modules ./node_modules

# Expose application port
EXPOSE 5173

# Copy the rest of the application files
COPY . .

# Command to start the application
CMD ["npm", "run", "dev", "--", "--host", "0.0.0.0", "--port", "5173"]

Configure Application Docker Compose file

Docker Compose is a tool for defining and running multi-container Docker applications. It simplifies managing complex environments by using a single YAML file to configure services, networks, and volumes. It's necessary for this project to streamline the setup of interconnected services like the frontend, backend, database, database admin UI and observability, ensuring they run consistently with minimal effort.

Navigate to the root directory of your project and create a new file, compose.yaml. Here we will define the services we want to run, create a docker network these services will use to communicate with each other, and configure volumes for the services that need them.

The first compose file we will create is our application compose file. In this compose file we will define the backend, frontend, database, reverse proxy (Traefik) and Adminer services.


First Draft of Compose File to test locally

The below is the first draft of my docker compose file, which let me test my containers locally. Here you will see that the ports are mapped and that Traefik, the reverse proxy service, is missing. This is because I wanted to test my images before going too far into the project.

You can choose to do it like I did or skip ahead to the next configuration.

Once Traefik is configured, there will be no need to map the ports as we did in this version of the docker compose file, because Traefik will be the single point of entry for all traffic to our application.

If you want to test the images before moving ahead, add the below to your compose.yaml file:

services:
  frontend:
    env_file:
      - ./frontend/.env
    build: ./frontend
    ports:
      - "80:5173"
    networks:
      - devopsdojo
  backend:
    env_file:
      - ./backend/.env
    environment:
      - PYTHONPATH=/backend
    build: ./backend
    ports:
      - "8000:8000"
    networks:
      - devopsdojo
    depends_on:
      - db
    command: >
      sh -c "
      until pg_isready -h db -U app; do
        echo 'Waiting for database...';
        sleep 2;
      done;
      poetry run bash ./prestart.sh && poetry run uvicorn app.main:app --host 0.0.0.0 --port 8000"
  db:
    image: postgres:13-alpine
    restart: always
    environment:
      - POSTGRES_USER=app
      - POSTGRES_PASSWORD=changethis123
      - POSTGRES_DB=app
    # ports:
    #   - "5432:5432"
    networks:
      - devopsdojo
    volumes: 
      - db:/var/lib/postgresql/data
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
    networks:
      - devopsdojo

networks:
  devopsdojo:
    driver: bridge
    name: devopsdojo

volumes:
  db:
    driver: local

⚠️ NOTE

In your backend/.env file, update the POSTGRES_SERVER variable from app to db. If you changed the port or the password in your compose file, update the backend/.env file accordingly so that both values match.
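For reference, assuming the default values from the compose file above, the relevant backend/.env entries would look something like this (variable names other than POSTGRES_SERVER follow the common FastAPI template convention and are placeholders; adjust to your own file):

```
POSTGRES_SERVER=db
POSTGRES_PORT=5432
POSTGRES_DB=app
POSTGRES_USER=app
POSTGRES_PASSWORD=changethis123
```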

Explanation

The above setup simplifies development by isolating services while allowing them to communicate seamlessly within the same Docker network. Here’s what each service does:

  • frontend:

    • Builds the React frontend from the ./frontend directory.
    • Uses environment variables from ./frontend/.env.
    • Exposes port 5173 internally, mapped to port 80 on the host.
    • Connects to the shared devopsdojo network.

  • backend:

    • Builds the FastAPI backend from the ./backend directory.
    • Uses environment variables from ./backend/.env and sets an additional PYTHONPATH for module resolution.
    • Exposes port 8000 for the application.
    • Depends on the db service and waits for the database to be ready before starting.
    • Runs a prestart script to populate the database and then serves the API with Uvicorn.

  • db:

    • Runs a lightweight PostgreSQL 13-alpine database.
    • Configured with user, password, and database names.
    • Stores persistent data in a Docker volume (db).
    • Connected to the shared devopsdojo network.

Spin up Application

To build and run the containers use the below command:

docker compose up

⚠️ TIP

You can use the -d flag to run the containers in detached mode, but I like to see the logs, which is why I opted not to use it.

Once your containers are running, open localhost in your browser to see the frontend (log in with the superuser username and password found in the backend .env file), then localhost:8000 and localhost:8000/docs to see the backend. To view Adminer, navigate to localhost:8080 and log in with the correct credentials. If you have configured your application correctly, you should see the below in your browser.

Application Frontend

Localhost-frontend

lh-logged-in

Application's Swagger UI

lh-docs

Adminer Dashboard

lh-adminer

To bring down your application run the docker compose down command, you can use the -v flag to also remove the volume(s).


Cloud Deployment

At this stage in the project we will deploy our application to the cloud, from where we will finish our configuration and serve the application.

We will use an EC2 instance for our deployment to keep everything simple and self-managed. Another advantage of EC2 is that we can run docker compose directly, without adapting our configuration to ECS-specific constructs.

We will go ahead and create an EC2 instance through the management console; if you don't know how to do it follow the instructions below:

Steps to Create an EC2 Instance

  • Log in to the AWS Management Console.

  • Navigate to the EC2 Dashboard under the "Compute" section.

  • Click Launch Instance and provide a name for your instance.

  • Select an AMI (Amazon Machine Image), such as Amazon Linux 2 or Ubuntu. I'll be using Amazon Linux.

  • Choose an instance type (e.g., t2.micro for the free tier); however, we will use a t2.medium as cAdvisor requires more resources than the t2.micro provides.

  • Configure a key pair for SSH access (or create a new one).

  • Edit network settings to allow required inbound traffic, check the boxes for SSH, HTTPS and HTTP. We will be using the default VPCs and subnets, if you don't want to you can create a new VPC.

  • Add storage, leaving the default settings as they are.

  • Click Launch Instance and wait for it to initialize. Your key pair should be downloaded automatically; note the download location.

  • After our instance has been launched we will SSH into it using the command below:

ssh -i "<path/to/your/keypair.pem>" ec2-user@<your public dns>

ssh-into-instance


Copy Project Files into the Instance

Once connected to the EC2 instance, we need to copy our project directory into it, as we will be working from the instance going forward.

We will use the scp utility to copy our folder from our local machine to our instance using the below command.

Open a new terminal (or close the connection to the instance) and enter the command:

 scp -i "<path/to/your/keypair.pem>" -r <path/to/your/project/directory> ec2-user@<your public dns>:/home/ec2-user/

scp

⚠️ REMINDER

Ensure you have docker and docker compose installed in your instance to continue with the project.

If you need help installing Docker on an Amazon Linux EC2 instance check out this post


Configure Reverse Proxy - Traefik

Traefik is a modern reverse proxy and load balancer designed for containerized environments. It automatically discovers services in your Docker setup and routes external traffic to the appropriate service based on configuration rules. As a reverse proxy, it sits between clients and backend services, managing requests, enhancing security, and improving performance.

In this project, Traefik is the main gateway to our entire application: it handles routing for the frontend, backend, Adminer and the observability services, simplifying domain management, enabling HTTPS with ease, and streamlining deployment in the cloud. It's basically the traffic controller that decides how requests are routed to the different parts of our website.

Let's configure Traefik now.


Traefik Configuration

The Traefik configuration file defines how Traefik operates as a reverse proxy and load balancer. It includes settings for:

  • EntryPoints: Specify the ports (e.g., 80 for HTTP, 443 for HTTPS) where Traefik listens for incoming requests.

  • CertificatesResolvers: Enable automatic SSL/TLS certificate generation and renewal using Let's Encrypt.

  • Providers: Define where Traefik gets routing information (e.g., Docker labels, Kubernetes, or static configuration files).

  • Routing Rules: Configure how incoming requests are routed to specific services based on domains, subdomains, or paths.

  • Metrics and Observability: Optionally expose performance metrics (e.g., Prometheus) to monitor Traefik’s behavior.

To begin, create a new file traefik.yml in the root of your project and add the below to the file:

entryPoints:
  # Entry point for HTTP traffic on port 80
  web:
    address: ":80"
  # Entry point for HTTPS traffic on port 443
  websecure:
    address: ":443"

# Configuration for obtaining SSL certificates via Let's Encrypt
certificatesResolvers:
  myresolver:
    acme:
      # Email address for Let's Encrypt notifications and account management
      email: <your email address>
      # Storage file for SSL certificates
      storage: acme.json
      # Use HTTP challenge for domain verification
      httpChallenge:
        entryPoint: web

http:
  # Configure routers for handling requests
  routers:
    # Global HTTP to HTTPS redirection
    redirect-to-https:
      # Match all HTTP requests with any hostname
      rule: "HostRegexp(`{host:.+}`)" 
      entryPoints:
        - web 
      # Middleware to handle the redirection
      middlewares:
        - redirect-to-https 
      # No actual service, just a placeholder for redirection
      service: noop@internal

    # Router to redirect www to non-www
    redirect-www:
      rule: "Host(`<your domain>`)"
      entryPoints:
        - web
        - websecure
      # Middleware to handle the redirection
      middlewares:
        - redirect-www
      # No actual service, just a placeholder for redirection
      service: noop@internal

  middlewares:
    # Middleware to redirect HTTP to HTTPS
    redirect-to-https:
      redirectScheme:
        scheme: https
        # Sends a permanent redirect (HTTP 301)
        permanent: true

    # Middleware to redirect 'www' subdomain to the root domain
    redirect-www:
      redirectRegex:
        # Matches URLs starting with 'www.'
        regex: "^https?://www\\.(.*)"
        # Replaces 'www.' with the root domain
        replacement: "https://$1"
        # Sends a permanent redirect (HTTP 301)
        permanent: true

# Enable Prometheus metrics for monitoring
metrics:
  prometheus:
    # Add labels for entry point and service
    addEntryPointsLabels: true
    addServicesLabels: true

# Use Docker as the provider for Traefik configurations
providers:
  docker:
    # Only explicitly exposed containers will be served by Traefik
    exposedByDefault: false

Explanation

This traefik configuration serves as the smart traffic manager for our website. It does a few key things:

  • Sets up two main entry points: Port 80 (regular web traffic) and Port 443 (secure, encrypted traffic).

  • Automatically gets free SSL certificates from Let's Encrypt, keeping our site secure without manual certificate hunting. It uses your email specified in the configuration for certificate management.

  • Automatically sends all HTTP traffic to HTTPS (no unsecured connections), redirects www.yourdomain.com to yourdomain.com and ensures visitors always reach the right version of your site.

  • Adds tracking labels for Prometheus (helps you watch how your site is performing).

  • Only serves containers you explicitly tell it to expose.
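The www-to-root redirect is just a regex rewrite. Its effect can be previewed with sed, using an equivalent pattern on a sample URL (example.com is a placeholder):

```shell
# Same regex idea as the redirect-www middleware: strip the 'www.' prefix
echo 'https://www.example.com/path' | sed -E 's#^https?://www\.(.*)#https://\1#'
# → https://example.com/path
```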


Docker Compose Configuration for Traefik

We need to update our compose file with the reverse-proxy service and add the necessary Traefik labels to our other services. Your compose.yaml should now look like this:

services:
  frontend:
    env_file:
      - ./frontend/.env
    build: ./frontend
    container_name: frontend
    # ports:
    #   - "80:5173"
    networks:
      - devopsdojo
    labels:
      # Enable Traefik for this service and specify the secure entrypoint (HTTPS)
      - "traefik.enable=true"
      - "traefik.http.routers.frontend.rule=Host(`<yourdomain>`)" # substitute with your domain name
      - "traefik.http.routers.frontend.entrypoints=websecure"

      # Enable TLS for this router & use the 'myresolver' certificates resolver for obtaining SSL certificates
      - "traefik.http.routers.frontend.tls=true"
      - "traefik.http.routers.frontend.tls.certresolver=myresolver"

  backend:
    env_file:
      - ./backend/.env
    environment:
      - PYTHONPATH=/backend
    build: ./backend
    container_name: backend
    # ports:
    #   - "8000:8000"
    networks:
      - devopsdojo
    depends_on:
      - db
    command: >
      sh -c "
      until pg_isready -h db -U app; do
        echo 'Waiting for database...';
        sleep 2;
      done;
      poetry run bash ./prestart.sh && poetry run uvicorn app.main:app --host 0.0.0.0 --port 8000"
    labels:
      # Enable Traefik for this service and specify the secure entrypoint (HTTPS)
      - "traefik.enable=true"
      # # Middleware for CORS
      - "traefik.http.middlewares.backend-cors.headers.accessControlAllowOriginList=https://<yourdomain>" # substitute with your domain name
      # Route /api to the backend root
      - "traefik.http.routers.backend-api.rule=Host(`<yourdomain>`) && PathPrefix(`/api`)" # substitute with your domain name
      - "traefik.http.middlewares.api-strip-prefix.stripPrefix.prefixes=/api"
      - "traefik.http.routers.backend-api.middlewares=api-strip-prefix,backend-cors"
      - "traefik.http.routers.backend-api.entrypoints=websecure"

      # Route /docs to /docs (Swagger UI)
      - "traefik.http.routers.backend-docs.rule=Host(`<yourdomain>`) && PathPrefix(`/docs`)" # substitute with your domain name
      - "traefik.http.routers.backend-docs.middlewares=backend-cors"
      - "traefik.http.routers.backend-docs.entrypoints=websecure"

      # Route /api/v1/openapi.json to the OpenAPI spec
      - "traefik.http.routers.backend-openapi.rule=Host(`<yourdomain>`) && Path(`/api/v1/openapi.json`)" # substitute with your domain name
      - "traefik.http.routers.backend-openapi.middlewares=backend-cors"
      - "traefik.http.routers.backend-openapi.entrypoints=websecure"

      # Enable TLS for these routers & use the 'myresolver' certificates resolver for obtaining SSL certificates
      - "traefik.http.routers.backend-api.tls=true"
      - "traefik.http.routers.backend-api.tls.certresolver=myresolver"
      - "traefik.http.routers.backend-docs.tls=true"
      - "traefik.http.routers.backend-docs.tls.certresolver=myresolver"
      - "traefik.http.routers.backend-openapi.tls=true"
      - "traefik.http.routers.backend-openapi.tls.certresolver=myresolver"

  db:
    image: postgres:13-alpine
    container_name: db
    restart: always
    environment:
      - POSTGRES_USER=app
      - POSTGRES_PASSWORD=changethis123
      - POSTGRES_DB=app
    # ports:
    #   - "5432:5432"
    networks:
      - devopsdojo
    volumes: 
      - db:/var/lib/postgresql/data

  adminer:
    image: adminer
    container_name: adminer
    restart: always
    # ports:
    #   - 8080:8080
    networks:
      - devopsdojo
    labels:
      # Enable Traefik for this service and specify the secure entrypoint (HTTPS)
      - "traefik.enable=true"
      - "traefik.http.routers.adminer.rule=Host(`db.<yourdomain>`)" # substitute with your domain name
      - "traefik.http.routers.adminer.entrypoints=websecure"
      # Enable TLS for this router & use the 'myresolver' certificates resolver for obtaining SSL certificates
      - "traefik.http.routers.adminer.tls=true"
      - "traefik.http.routers.adminer.tls.certresolver=myresolver"

  reverse-proxy:
    image: traefik:v3.2
    container_name: traefik
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    command:
      - --api
    networks:
      - devopsdojo
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik.yml:/etc/traefik/traefik.yml
      - ./acme.json:/acme.json
    labels:
      # Enable Traefik for this service
      - "traefik.enable=true"

      # Dashboard route
      - "traefik.http.routers.dashboard-api.rule=Host(`<yourdomain>`) && (PathPrefix(`/dashboard`) || PathPrefix(`/debug`) || PathPrefix(`/api/http`) || PathPrefix(`/api/tcp`) || PathPrefix(`/api/udp`) || PathPrefix(`/api/entrypoints`) || PathPrefix(`/api/overview`) || PathPrefix(`/api/rawdata`) || PathPrefix(`/api/version`))" # substitute with your domain name
      - "traefik.http.routers.dashboard-api.service=api@internal"
      - "traefik.http.middlewares.dashboard-auth.basicauth.users=admin:<hashed password>" # Substitute with your username:hashed-password you generated with htpasswd
      - "traefik.http.routers.dashboard-api.middlewares=dashboard-auth,redirect-dashboard"
      - "traefik.http.routers.dashboard-api.entrypoints=websecure"

      # Enable TLS for this router & use the 'myresolver' certificates resolver for obtaining SSL certificates
      - "traefik.http.routers.dashboard-api.tls=true"
      - "traefik.http.routers.dashboard-api.tls.certresolver=myresolver"

      # Redirect /dashboard to /dashboard/
      - "traefik.http.middlewares.redirect-dashboard.redirectregex.regex=^https?://(.*)/dashboard$$"
      - "traefik.http.middlewares.redirect-dashboard.redirectregex.replacement=https://$$1/dashboard/"
      - "traefik.http.middlewares.redirect-dashboard.redirectregex.permanent=true"

networks:
  devopsdojo:
    driver: bridge
    name: devopsdojo

volumes:
  db:
    driver: local

Updates Made

  • We added the necessary labels to all our services so that Traefik can route traffic to them. The only exempt service is db, because we will only access the database through Adminer, our database administration service.

  • We removed the port mappings we had added to our services to test the application locally. This is necessary because all traffic to our application will now be routed through Traefik.

  • We added the reverse-proxy service; the configuration:

    • Opens ports 80 (HTTP), 443 (HTTPS), and 8080 (Traefik dashboard).
    • Allows incoming web traffic and Traefik management.
    • Enables the Traefik dashboard at /dashboard. Our dashboard uses authentication so that unauthorized persons do not have access to our dashboard. (I will show you how to hash your password soon)
    • Ensures the dashboard is only accessible via HTTPS.
    • The last update redirects traffic from /dashboard to /dashboard/, as Traefik expects the trailing slash to ensure proper routing and resource loading for certain applications, like the dashboard. Without the trailing slash, API calls or static resource paths may fail, leading to incomplete or broken functionality. This redirect guarantees consistency and resolves such issues.
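The trailing-slash redirect is also a regex rewrite; its effect can be previewed with an equivalent sed expression on a sample URL (example.com is a placeholder):

```shell
# Same rewrite as the redirect-dashboard middleware: append the trailing slash
echo 'https://example.com/dashboard' | sed -E 's#^https?://(.*)/dashboard$#https://\1/dashboard/#'
# → https://example.com/dashboard/
```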

Generate Password Hash

To generate the hashed password, you can use an online htpasswd generator or the htpasswd command, but before using the command ensure it is installed using the below command:

# Install apache2-utils if not already installed
sudo apt-get install apache2-utils

# Or if using an Amazon Linux server like me use the below command 

sudo yum install httpd-tools

⚠️ NOTE

When used in a docker compose file, all dollar signs in the hash need to be doubled for escaping. This is the reason for the sed command used in the command below.

Generate that password hash using the command below:

echo $(htpasswd -nbB admin <yourpassword>) | sed -e s/\\$/\\$\\$/g

hashed-password

The output will give you the username:hashed-password format to use in the configuration. Copy the output and substitute it into the dashboard-auth middleware label in the reverse-proxy service in your compose file.
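To see what the doubling does, here is the same sed substitution applied to a placeholder hash (not a real credential):

```shell
# Every '$' in the bcrypt hash gets doubled for docker compose escaping
echo 'admin:$2y$05$examplehash' | sed -e 's/\$/\$\$/g'
# → admin:$$2y$$05$$examplehash
```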


Prepare the acme.json File

If you noticed, the reverse-proxy service mounts an acme.json volume where the SSL certificate will be saved. This file needs to already exist in our project directory (the same directory as your compose file) and be writable so that Traefik can save our SSL certificate there, so we need to create it.

Use the command below:

touch acme.json
chmod 600 ./acme.json
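You can confirm the permissions are what Traefik expects (read/write for the owner only) before starting the stack:

```shell
# 600 = rw for the owner, no access for group/others
stat -c '%a' acme.json
# → 600
```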

Create DNS (A) Record for Domains

DNS records are like the postal address for your website, telling the internet exactly where to find your digital home. When you create these records, you're essentially setting up a precise navigation system that directs internet traffic to your specific server.

These records are crucial because they enable services like Let's Encrypt to verify your domain ownership, allow automatic SSL certificate generation, and ensure that when someone types your domain name, they're routed to the correct IP address.

Think of it as creating a map that guides visitors directly to your online doorstep, making sure they arrive safely, securely, and exactly where you want them to be.

Depending on your domain hosting service, the process to create these records may differ; however, you need to create an A record each for your domain, db.<yourdomain> and www.<yourdomain>.

⚠️ NOTE

Ensure you create these records, if they are not created your application will not be served on your domain.

Use the public IP address of your EC2 instance created earlier for the A record.
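In zone-file notation, the three records would look roughly like this (203.0.113.10 is a placeholder documentation IP; use your instance's public IP, and note many registrars take the same values through a web form instead):

```
<yourdomain>.        300  IN  A  203.0.113.10
www.<yourdomain>.    300  IN  A  203.0.113.10
db.<yourdomain>.     300  IN  A  203.0.113.10
```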


Update your Environments

In both your backend and frontend directories you need to update the .env files to include your domain, to avoid CORS issues and so that your application is accessible from your domain.

backend/.env

Update the DOMAIN and BACKEND_CORS_ORIGINS variables in your backend/.env to include your domain, it should look like this now:

DOMAIN=<yourdomain> # This has no leading http or https eg example.com

BACKEND_CORS_ORIGINS="http://localhost,http://localhost:5173,https://localhost,https://localhost:5173,http://<yourdomain>,https://<yourdomain>"

frontend/.env

Update VITE_API_URL with your domain name, it should now look like this:

VITE_API_URL=https://<yourdomain>/api

Now we are set to rebuild our image and test our traefik configuration.


Build Application

After you have made all these adjustments, you are set to build the containers and run your application. Do this by simply running the docker compose up command as seen below; we will use the -d flag to run it in detached mode. If you would like to see the logs, omit the -d flag.

docker compose up -d

You should now be able to access your application from your domain as seen in the images below.

Application Frontend

domain-frontend

domain-logged-in

Application Backend Root

domain/api

Application's Swagger UI

domain/docs

Adminer Dashboard

db/domain

Traefik Dashboard Asking for Authentication

auth-traefik

If you do not supply any credentials, or you enter a wrong username or password, you won't be granted access, as seen in the image below.

Unauthorized Response From Traefik Dashboard

traefik-unauthorized

Traefik Dashboard

If you successfully authenticate with the correct credentials you should see a dashboard that resembles the one below:

traefik-dash


Monitoring and Observability

We're almost at the end of this project; what remains is to configure our monitoring and observability stack. Effective monitoring and observability are critical for maintaining the health and performance of modern applications. In this project, we implement a robust monitoring stack using Prometheus, Grafana, Loki, Promtail, and cAdvisor to ensure real-time visibility into system metrics, logs, and container performance.

  • Prometheus: Collects and stores metrics, enabling detailed insights into application and infrastructure performance.

  • Grafana: Visualizes metrics and logs through customizable dashboards.

  • Loki: Provides log aggregation and querying capabilities.

  • Promtail: Streams logs from application containers to Loki.

  • cAdvisor: Monitors resource usage and performance of running containers.

Together, these tools create an integrated solution for proactive monitoring, streamlined troubleshooting, and maintaining operational excellence in containerized environments.


Configure Prometheus

Due to its robust querying language, effective storage, and simplicity in integrating with several metrics sources, Prometheus is a popular open-source monitoring and alerting solution.

Docker Compose Configuration for Prometheus

We will separate our application stack from our monitoring stack, so we need to create a new compose file. In your project root, create a new file named compose.monitoring.yaml:

touch compose.monitoring.yaml

Add the below in the new file:

services:
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    restart: unless-stopped
    command: 
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--web.external-url=/prometheus'
    networks:
      - devopsdojo
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prom_data:/prometheus
    labels:
      # Enable Traefik for this service, configure router and specify entrypoint
      - "traefik.enable=true"
      - "traefik.http.routers.prometheus.rule=Host(`<yourdomain>`) && PathPrefix(`/prometheus`)"
      - "traefik.http.routers.prometheus.entrypoints=websecure"

      # Tell Traefik to use the port 9090 to connect to prometheus
      - "traefik.http.services.prometheus.loadbalancer.server.url=http://prometheus:9090/"

      # Enable TLS for this router & use the 'myresolver' certificates resolver for obtaining SSL certificates
      - "traefik.http.routers.prometheus.tls=true"
      - "traefik.http.routers.prometheus.tls.certresolver=myresolver"

volumes:
  prom_data:

networks:
  devopsdojo:
    driver: bridge
    name: devopsdojo

✨ This Docker Compose setup runs Prometheus in a container using the prom/prometheus image. It uses a custom configuration file, prometheus.yml, which we will create and populate next, and persists data in a named volume, prom_data. It connects to the devopsdojo network and integrates with Traefik for secure access.

✨ Traefik routes traffic to Prometheus via the websecure entrypoint with TLS, using the domain you specify with the path /prometheus. The service listens on port 9090, but since we will access the Prometheus dashboard via a subpath routed through Traefik, we explicitly make the Prometheus service aware of this subpath by appending it to the target server URL in its loadbalancer configuration, as we did above.
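Publishing a backend under a subpath can also be done the more conventional way: expose only the container port with a server.port label and let Prometheus itself handle the prefix via the --web.external-url flag we already pass. A sketch of that alternative, in case your Traefik version rejects the server.url label on the Docker provider:

```yaml
    # Alternative labels (sketch): rely on Prometheus's
    # --web.external-url=/prometheus to serve under the subpath,
    # and only tell Traefik which port to connect to.
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.prometheus.rule=Host(`<yourdomain>`) && PathPrefix(`/prometheus`)"
      - "traefik.http.routers.prometheus.entrypoints=websecure"
      - "traefik.http.services.prometheus.loadbalancer.server.port=9090"
      - "traefik.http.routers.prometheus.tls=true"
      - "traefik.http.routers.prometheus.tls.certresolver=myresolver"
```

Either approach should work here; the important thing is that requests for /prometheus end up at port 9090 of the container.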

Next we create the prometheus configuration file that instructs prometheus on what to do.


Prometheus Configuration File

The Prometheus configuration file is a YAML-based document that outlines how Prometheus should scrape, collect, and process metrics from various targets.

It defines parameters such as scrape intervals, targets to scrape, and rules for alerting, providing the blueprint for effective monitoring setups. The configuration file is the core of the Prometheus setup and is crucial for accurate and efficient monitoring. It's human-readable and easy to edit.

In your project root create a new file prometheus.yml and add the below blocks of code to the file:

# Global defaults, applies to all scrape jobs unless explicitly overridden
global:
  scrape_interval: 15s
  scrape_timeout: 10s
  evaluation_interval: 15s

# Define the specific endpoints Prometheus should scrape data from
scrape_configs:
  # Config to scrape data from the prometheus service itself
  - job_name: 'prometheus'
    honor_timestamps: true
    metrics_path: /prometheus/metrics
    scheme: http
    static_configs:
      - targets: ['prometheus:9090']

  # Config to scrape data from the traefik service
  - job_name: 'traefik'
    metrics_path: /metrics
    static_configs:
      - targets: ['traefik:8080']

This Prometheus configuration does the following:

  • Global Settings: These apply to all scrape jobs unless overridden and define Prometheus's default scraping behavior:

    • scrape_interval: The default time between consecutive scrapes (15 seconds).
    • scrape_timeout: How long Prometheus waits for a scrape to complete before giving up (10 seconds).
    • evaluation_interval: How often Prometheus evaluates alerting and recording rules (15 seconds).

  • Scrape Configurations: Define how Prometheus scrapes data from specific services:

    • Prometheus Job: Scrapes Prometheus's own metrics at prometheus:9090/prometheus/metrics. Its honor_timestamps setting ensures that scraped data keeps the timestamps from the source.
    • Traefik Job: Scrapes metrics from the Traefik service at traefik:8080/metrics.

Both jobs define their targets and metrics paths, ensuring that Prometheus collects data from the specified services.
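As an illustration of how the global defaults interact with per-job settings, any job can override them locally. The job below is hypothetical (not part of this project), purely to show the mechanism:

```yaml
scrape_configs:
  - job_name: 'slow-exporter'   # hypothetical job, for illustration only
    scrape_interval: 60s        # overrides the global 15s for this job only
    static_configs:
      - targets: ['slow-exporter:9100']
```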

We will come back to this configuration when we set up cAdvisor, to add a job that scrapes its data.


Configure cAdvisor

cAdvisor (Container Advisor) is a tool created by Google for real-time tracking of performance metrics and resource utilization of containers. It gathers, aggregates, analyzes, and exports data about running containers so that Prometheus can use it for monitoring.

We will now create the cadvisor service by adding the below block of code to the compose.monitoring.yaml file:

compose.monitoring.yaml

  cadvisor:
    image: gcr.io/cadvisor/cadvisor
    container_name: cadvisor
    privileged: true
    restart: unless-stopped
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    networks:
      - devopsdojo

Now we need to tell Prometheus to scrape data from cAdvisor. To do that, add the below block of code to the prometheus.yml file:

prometheus.yml

  # Config to scrape data from the cAdvisor service
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']
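Once this job is scraping, you can sanity-check the data from Prometheus's query page with a couple of PromQL expressions over cAdvisor's standard metric names (assuming the default metric set):

```
# Per-container CPU usage in cores, averaged over the last 5 minutes
sum by (name) (rate(container_cpu_usage_seconds_total{name!=""}[5m]))

# Current memory working set per container
container_memory_working_set_bytes{name!=""}
```

These same metrics feed the cAdvisor dashboard we will import in Grafana later on.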

Configure Loki

Loki is a log aggregation system designed by Grafana Labs for efficiently collecting, storing, and querying logs. Unlike traditional logging systems, Loki is optimized for cost-efficiency and simplicity by indexing only metadata, not the content of logs. It's often paired with Promtail, which collects logs from various sources (e.g., Docker containers, system logs) and pushes them to Loki. Together with Grafana, Loki provides a powerful, scalable solution for centralized log management and visualization.


Docker Compose Configuration for Loki

In your compose.monitoring.yaml file add the following after the cadvisor service:

compose.monitoring.yaml

  loki:
    image: grafana/loki:latest
    container_name: loki
    restart: unless-stopped
    command: 
      - '--config.file=/etc/loki/loki-config.yaml'
    networks:
      - devopsdojo
    volumes:
      - ./loki-config.yaml:/etc/loki/loki-config.yaml
      - loki-data:/loki
    labels:
      # Enable Traefik for this service, configure router and specify entrypoint
      - "traefik.enable=true"
      - "traefik.http.routers.loki.rule=Host(`<yourdomain>`) && PathPrefix(`/loki`)"
      - "traefik.http.routers.loki.entrypoints=websecure"

      # Tell Traefik to use the port 3100 to connect to loki
      - "traefik.http.services.loki.loadbalancer.server.url=http://loki:3100/"

      # Add middleware to strip the prefix
      - "traefik.http.middlewares.loki-strip.stripprefix.prefixes=/loki"
      - "traefik.http.routers.loki.middlewares=loki-strip"

      # Enable TLS for this router & use the 'myresolver' certificates resolver for obtaining SSL certificates
      - "traefik.http.routers.loki.tls=true"
      - "traefik.http.routers.loki.tls.certresolver=myresolver"

Add the Loki volume to the volumes top-level element; your volumes section should now look like this:

volumes:
  prom_data:
  loki-data:

Loki Configuration File

If you noticed, in the Docker Compose configuration above we specified a config file that we have not created yet, so the next step is to create that file.

We need to download the necessary configuration file for Loki; to do this, run the below command:

wget https://raw.githubusercontent.com/grafana/loki/v3.0.0/cmd/loki/loki-local-config.yaml -O loki-config.yaml

We will leave the config file as is, since the default configuration is sufficient for our needs.


Configure Promtail

Promtail is an agent designed to collect and forward logs to Loki. It works seamlessly with various log sources, including system logs, application logs, and Docker container logs.

Promtail reads log files, adds metadata such as labels (e.g., container name, job, or hostname), and sends the enriched logs to Loki for aggregation and querying. It integrates natively with Kubernetes, leveraging pod labels and annotations to simplify log collection in containerized environments.

Promtail is lightweight, easy to configure, and essential for building a robust logging pipeline with Loki.


Docker Compose Configuration for Promtail

Add the below to your compose.monitoring.yaml file

compose.monitoring.yaml

  promtail:
    image: grafana/promtail:latest
    container_name: promtail
    restart: unless-stopped
    command: 
      - '--config.file=/etc/promtail/config.yml'
    networks:
      - devopsdojo
    volumes:
      - ./promtail-config.yml:/etc/promtail/config.yml
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/log:/var/log:ro

Promtail Configuration File

As we did with the Loki configuration, we need to download the configuration file for Promtail. Do that by running the below command:

wget https://raw.githubusercontent.com/grafana/loki/v3.0.0/clients/cmd/promtail/promtail-docker-config.yaml -O promtail-config.yaml

We will now update the Promtail configuration file we just downloaded: we need to point the client URL at our domain and add another job to the configuration.

Update the url under clients to look like the below. Note the doubled /loki: the first one is the Traefik path prefix, which the loki-strip middleware removes, and the remainder, /loki/api/v1/push, is Loki's native push endpoint:

clients:
  - url: https://<yourdomain>/loki/loki/api/v1/push

After the system job at the end of the file, add the docker job:

  - job_name: docker
    static_configs:
      - targets:
          - localhost
        labels:
          job: docker
          __path__: /var/lib/docker/containers/*/*.log
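Since the files under /var/lib/docker/containers are written by Docker's json-file logging driver, each line is a JSON envelope rather than the raw log message. Optionally, a docker pipeline stage can unwrap it; this is a sketch of the same job with that stage added, not required for the dashboards later in this post:

```yaml
  - job_name: docker
    static_configs:
      - targets:
          - localhost
        labels:
          job: docker
          __path__: /var/lib/docker/containers/*/*.log
    pipeline_stages:
      # Parse Docker's json-file format so the log line, stream and
      # timestamp are extracted from the JSON envelope
      - docker: {}
```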

Configure Grafana

Next we will configure our Grafana service so that we can visualize our data via Grafana's dashboard. Grafana enables you to query, visualize, alert on, and explore your metrics, logs, and traces wherever they’re stored. Grafana data source plugins enable you to query data sources including time series databases like Prometheus and CloudWatch, logging tools like Loki and Elasticsearch and a lot more.

Grafana OSS provides you with tools to display that data on live dashboards with insightful graphs and visualizations.


Docker Compose Configuration for Grafana

For our Docker compose configuration of our Grafana service, add the below to your compose.monitoring.yaml file

  grafana:
    image: grafana/grafana
    container_name: grafana
    restart: unless-stopped
    environment:
      - "GF_SERVER_DOMAIN=<yourdomain>"
      - "GF_SERVER_ROOT_URL=%(protocol)s://%(domain)s/grafana"
      - "GF_SERVER_SERVE_FROM_SUB_PATH=true"
    networks:
      - devopsdojo
    volumes:
      - grafana-storage:/var/lib/grafana
    labels:
      # Enable Traefik for this service, configure router and specify entrypoint
      - "traefik.enable=true"
      - "traefik.http.routers.grafana.rule=Host(`<yourdomain>`) && PathPrefix(`/grafana`)"
      - "traefik.http.routers.grafana.entrypoints=websecure"

      # Tell Traefik to use the port 3000 to connect to grafana
      - "traefik.http.services.grafana.loadbalancer.server.url=http://grafana:3000"

      # Enable TLS for this router & use the 'myresolver' certificates resolver for obtaining SSL certificates
      - "traefik.http.routers.grafana.tls=true"
      - "traefik.http.routers.grafana.tls.certresolver=myresolver"

The volumes section of the compose.monitoring.yaml file will now look like this:

volumes:
  prom_data:
  loki-data:
  grafana-storage:

Spin up Services

Now that we have written all our configurations we can go ahead and start up all our monitoring services.

Since we want to run both our compose files at the same time we will add an include element to our first (application) compose file.

Open the compose.yaml file and add the below block of code to the very top of the file.

compose.yaml

include:
  - compose.monitoring.yaml

Now, if you have any of the earlier containers still running, stop them and start everything again with the command below:

docker compose down && docker compose up -d

All our containers should now be up, running, and available, so we can go ahead and create our Grafana dashboards.
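If any container fails to come up, the most common cause in this setup is a missing or misnamed config file on one of the bind mounts. Here is a small POSIX-shell pre-flight check you can run from the project root (a convenience sketch; the filenames are the ones used in this walkthrough):

```shell
#!/bin/sh
# check_configs DIR: report which of the expected config files are missing from DIR
check_configs() {
  dir=$1
  for f in compose.yaml compose.monitoring.yaml prometheus.yml \
           loki-config.yaml promtail-config.yml; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $f"
    fi
  done
}

# Check the current directory; no output means all files are present
check_configs .
```

Running it before docker compose up saves a round of container restarts when a volume mount points at a file that does not exist.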

If you head to your Prometheus web UI (https://<yourdomain>/prometheus) and click on Target health under the Status dropdown, you should see the different scrape targets we configured in our Prometheus configuration.

Prometheus Web UI

Prometheus-UI


Create Dashboards

To begin creating our Grafana dashboards we need to log in to the Grafana web UI. Navigate to https://<yourdomain>/grafana; the login page will appear, and you need to enter the default username and password, which are both admin.

Grafana Web UI Login

grafana-login

Once you are logged in you will be asked to change your password, do that and let's continue.


Add Data sources

Upon logging in you will see a welcome page that looks like the one below, click on the Data Sources box (illustrated by the arrow) to add your first data source.

Grafana welcome page

grafana-add-data-source


Prometheus Data source

✨ Click on Prometheus from the options on the page that opens up, it should be right on top.

Prometheus data source

prometheus-data-source

✨ On the next page, scroll down and enter https://<yourdomain>/prometheus as your prometheus server url under the Connection category.

Prometheus server url

prom-server-url

✨ Scroll to the bottom of the page and click on Save and test; you should get a confirmation that the Prometheus API has been successfully queried, as you can see below.

Successfully queried Prometheus API

prom-api


Loki Data Source

✨ To add Loki as a data source, click on the hamburger button at the top left of the page and, under Connections, click on Data sources as shown below.

Data sources menu

data-source

✨ On the next page, click on the add new data source button on the top right corner of the page.

add-new-data-src

✨ Scroll down a bit and select Loki as the data source. Just as we did with Prometheus, on the next page enter your Loki URL under the Connection category; finally, scroll down and click Save and test.


Loki data source addition

loki-data-source


Import cAdvisor Dashboard

Now that we have added our data sources we can proceed with creating our dashboards. We won't be creating these dashboards from scratch, though; we will simply import dashboards created by community members that fit what we are trying to do.

You can find these dashboards at https://grafana.com/grafana/dashboards/ but I've done the heavy lifting for you and found the dashboard we will use, however feel free to explore the link and see if any other dashboards fit your need better.

✨ Navigate to the Grafana web UI homepage and click on the Dashboards box.

Create first Dashboard

add-dash

✨ On the resultant new page click on import dashboard and in the next page enter the code 19792 and click on load

Import dashboard - cAdvisor

load-dash

✨ On the new page, scroll to the bottom; in the Prometheus field, select your Prometheus data source from the dropdown and click on Import.

Add cAdvisor Dashboard

import-dash

✨ You should be able to see your beautiful dashboard now; it will look like what I have below.

cAdvisor Dashboard

cadvisor-dash1

cadvisor-dash2

cadvisor-dash3


Import Loki Dashboard

✨ To add a new dashboard, click on the hamburger button on the top left and click on dashboards from the options.

add-new-dash

✨ When the next page loads, click on the arrow on the New button for the dropdown and select import from the options.

new-dash

✨ From here you know the drill, enter the code 13186, load the dashboard, add your loki data source and import the dashboard.

loki-dash-code

Import-loki-dash

✨ You should see the dashboard now, tinker around with it to see more information.

loki-dash

✨ You could also explore your Loki data source and run custom queries as you see fit.
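For example, here are a few LogQL queries you could start from in the Explore view (the label values assume the promtail jobs configured earlier):

```
# All logs shipped by the docker job
{job="docker"}

# Only lines containing the word "error"
{job="docker"} |= "error"

# Log throughput per stream over the last 5 minutes
rate({job="docker"}[5m])
```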


Import Traefik Dashboard

The process for importing the Traefik dashboard is the same as the one we went through for the Loki dashboard.

✨ Enter the code 4475 and load the dashboard, add your prometheus data source and import the dashboard.

Import-traefik-dash

✨ Now you should have a dashboard that looks like the one below.

traefik-dash


Conclusion

Congratulations! After all the effort and time invested, we’ve successfully Dockerized our full-stack application with a FastAPI backend and a React frontend, set up a reverse proxy, and deployed a robust monitoring and observability stack using Prometheus, Grafana, Loki, Promtail, and cAdvisor. Along the way, we built fully functional dashboards to keep an eye on system performance and logs.

This journey most definitely demanded patience and determination, but it also provided the opportunity to sharpen essential skills in containerization, deployment, and observability. From orchestrating containers to visualizing metrics and logs, these are valuable tools in any developer's toolkit.

Take a moment to appreciate how far you’ve come, whether it’s mastering Docker, setting up monitoring systems, or troubleshooting with confidence, this project is a testament to your growth and perseverance. Here's to many more successful deployments ahead!

If you found this post helpful in any way, please let me know in the comments and share it with your friends. If you have a better way of implementing any of the steps I took here, please do let me know; I love to learn new (~~better~~) ways of doing things.

Follow for more DevOps content broken down into a simple, understandable structure.
