<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nonso Echendu</title>
    <description>The latest articles on DEV Community by Nonso Echendu (@nonso_echendu_001).</description>
    <link>https://dev.to/nonso_echendu_001</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2653024%2F4d8dd821-45c5-44c6-83dd-9732e356101a.jpg</url>
      <title>DEV Community: Nonso Echendu</title>
      <link>https://dev.to/nonso_echendu_001</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nonso_echendu_001"/>
    <language>en</language>
    <item>
      <title>From Code to Production: A DevOps Journey with Docker, Traefik, and Modern Monitoring 🚀</title>
      <dc:creator>Nonso Echendu</dc:creator>
      <pubDate>Fri, 11 Apr 2025 18:40:14 +0000</pubDate>
      <link>https://dev.to/nonso_echendu_001/from-code-to-production-a-devops-journey-with-docker-traefik-and-modern-monitoring-27l0</link>
      <guid>https://dev.to/nonso_echendu_001/from-code-to-production-a-devops-journey-with-docker-traefik-and-modern-monitoring-27l0</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Hey builders! 👋 &lt;/p&gt;

&lt;p&gt;So I recently took on an exciting challenge: transforming a full-stack FastAPI and React application into a production-ready system with robust monitoring. While the application itself was already well-structured, my role was to bring it to production standards through proper containerization, orchestration, and monitoring/observability. Let me walk you through this journey.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Want to see the complete code? Check out the &lt;a href="https://github.com/NonsoEchendu/full-stack-fastapi-project" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  📋 Prerequisites
&lt;/h2&gt;

&lt;p&gt;Some things you'll need to have for this project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Docker and Docker Compose installed. You can check the &lt;a href="https://docs.docker.com/engine/install/ubuntu/" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt; on how to install them on Ubuntu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A domain name (for SSL/TLS setup). I recommend getting one from &lt;a href="https://hostinger.com?REFERRALCODE=WAHNONSOEUS4" rel="noopener noreferrer"&gt;Hostinger&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Basic understanding of:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Docker containers and images&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reverse proxies&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitoring concepts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Linux command line&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Sufficient system resources:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;At least 4GB RAM&lt;/li&gt;
&lt;li&gt;2 CPU cores&lt;/li&gt;
&lt;li&gt;20GB storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A clone of &lt;a href="https://github.com/The-DevOps-Dojo/cv-challenge01" rel="noopener noreferrer"&gt;the GitHub repo containing the frontend, backend, and database&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Firewall configured to allow necessary ports (22, 3000, 9090, etc.) &lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Challenge 🎯
&lt;/h2&gt;

&lt;p&gt;When I first looked at the project, I saw a typical full-stack application with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A FastAPI backend&lt;/li&gt;
&lt;li&gt;A React frontend&lt;/li&gt;
&lt;li&gt;PostgreSQL database&lt;/li&gt;
&lt;li&gt;Basic authentication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My mission? Transform this into a production-grade system with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Proper containerization&lt;/li&gt;
&lt;li&gt;Automated SSL/TLS&lt;/li&gt;
&lt;li&gt;Comprehensive monitoring&lt;/li&gt;
&lt;li&gt;Efficient log management&lt;/li&gt;
&lt;li&gt;Zero-downtime deployments&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Solution Architecture
&lt;/h2&gt;

&lt;p&gt;I designed a modern DevOps architecture that looks like this (if you've been following my articles recently, you'll know I like these kinds of diagrams now 😅):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────────────────────────────────────────┐
│                           Client Requests                               │
└───────────────────────────────┬─────────────────────────────────────────┘
                                │
                        ┌───────▼───────┐
                        │    Traefik    │
                        │  Reverse Proxy│
                        │   (SSL/TLS)   │
                        └───────┬───────┘
                                │
        ┌───────────────────────┼───────────────────────┐
        │                       │                       │
┌───────▼───────┐       ┌───────▼───────┐       ┌───────▼───────┐
│    Frontend   │       │    Backend    │       │    Adminer    │
│  (React/Nginx)│◄─────►│   (FastAPI)   │◄─────►│  (DB Admin)   │
└───────┬───────┘       └───────┬───────┘       └───────┬───────┘
        │                       │                       │
        │                       │                       │
        │                ┌──────▼───────┐               │
        └───────────────►│  PostgreSQL  │◄──────────────┘
                         │   Database   │
                         └──────┬───────┘
                                │
                         ┌──────▼───────┐
                         │  Monitoring  │
                         │    Stack     │
                         └──────┬───────┘
                                │
    ┌─────────────┬─────────────┼─────────────┬────────────┐
    │             │             │             │            │
┌───▼─────┐   ┌───▼─────┐   ┌───▼───┐    ┌────▼─────┐  ┌───▼──────┐
│cAdvisor │   │Promtail │   │Loki   │    │Prometheus│  │Grafana   │
│Container│   │Logs     │   │Log    │    │Metrics   │  │Dashboards│
│Metrics  │   │Collector│   │Storage│    │Database  │  │&amp;amp; Alerts  │
└─────────┘   └─────────┘   └───────┘    └──────────┘  └──────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
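&lt;p&gt;The architecture above maps almost one-to-one onto a docker-compose file. Here's a trimmed sketch of the service layout (service names, image tags, and build paths are illustrative; the full file is in the repo):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;services:
  traefik:
    image: traefik:v2.10
    ports:
      - "80:80"
      - "443:443"
  frontend:
    build: ./frontend
    networks:
      - app-network
  backend:
    build: ./backend
    depends_on:
      - db
    networks:
      - app-network
  db:
    image: postgres:15
    networks:
      - app-network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;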



&lt;h2&gt;
  
  
  Application in Action
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsnmrxvm4qchp9c49tjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsnmrxvm4qchp9c49tjc.png" alt="Image description" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66fqo0v550e972l1rt1v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66fqo0v550e972l1rt1v.png" alt="Image description" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Containerization Strategy: Building Efficient Images 🐳
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Multi-stage Builds: Optimizing Image Size
&lt;/h3&gt;

&lt;p&gt;So let's look at how I achieved proper containerization.&lt;/p&gt;

&lt;p&gt;One of the first challenges I faced was keeping the Docker images small in size and efficient. That's where multi-stage builds came in. Let me show you how I implemented this for both frontend and backend:&lt;/p&gt;

&lt;h3&gt;
  
  
  Frontend Container
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build stage&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:18-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm run build

&lt;span class="c"&gt;# Production stage&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; nginx:alpine&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/dist /usr/share/nginx/html&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; nginx.conf /etc/nginx/conf.d/default.conf&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 80&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["nginx", "-g", "daemon off;"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why did I adopt multi-stage builds?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It reduces final image size by excluding build tools. &lt;br&gt;
For the frontend, the image size dropped from a whopping ~590MB with a single-stage build to just 50.1MB with the multi-stage build.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using multi-stage also separates build dependencies from runtime dependencies&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It improves security by reducing the attack surface&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It makes for faster deployments due to smaller images&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Single-stage build...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5h3egtp8zu212v00vyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5h3egtp8zu212v00vyd.png" alt="Image description" width="800" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;vs Multi-stage build...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzpdpvmemrbt2062ckbc4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzpdpvmemrbt2062ckbc4.png" alt="Image description" width="760" height="27"&gt;&lt;/a&gt;&lt;/p&gt;
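&lt;p&gt;Since the production stage copies an nginx.conf into the image, here's a minimal sketch of what that file can look like for a single-page app. This is an assumed example, not the exact file from the repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    # Fall back to index.html so client-side routes still resolve
    location / {
        try_files $uri $uri/ /index.html;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;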

&lt;h3&gt;
  
  
  Backend Container
&lt;/h3&gt;

&lt;p&gt;I also applied multi-stage build here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build stage&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;python:3.11-slim&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;builder&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="c"&gt;# Install only necessary build dependencies&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    build-essential &lt;span class="se"&gt;\
&lt;/span&gt;    libpq-dev &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt;

&lt;span class="c"&gt;# Install specific version of poetry with export support&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; &lt;span class="nv"&gt;poetry&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;1.5.1

&lt;span class="c"&gt;# Copy only dependency files first for better caching&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; pyproject.toml poetry.lock* ./&lt;/span&gt;

&lt;span class="c"&gt;# Generate requirements.txt - using the correct syntax for poetry export&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;poetry &lt;span class="nb"&gt;export&lt;/span&gt; &lt;span class="nt"&gt;--without-hashes&lt;/span&gt; &lt;span class="nt"&gt;--format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;requirements.txt &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; requirements.txt

&lt;span class="c"&gt;# Copy the rest of the application&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;

&lt;span class="c"&gt;# Production stage&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.11-slim&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="c"&gt;# Install runtime dependencies only&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    libpq5 &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt;

&lt;span class="c"&gt;# Copy requirements and install directly with pip&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /app/requirements.txt .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

&lt;span class="c"&gt;# Copy application code&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /app .&lt;/span&gt;

&lt;span class="c"&gt;# Make startup script executable&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x /app/prestart.sh

&lt;span class="c"&gt;# Set correct PYTHONPATH to ensure app imports work properly&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PYTHONPATH="${PYTHONPATH}:/app"&lt;/span&gt;

&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8000&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["bash", "-c", "cd /app &amp;amp;&amp;amp; ./prestart.sh &amp;amp;&amp;amp; uvicorn app.main:app --host 0.0.0.0 --port 8000"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
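&lt;p&gt;One related trick that keeps both stages lean: a .dockerignore file, so the &lt;code&gt;COPY . .&lt;/code&gt; steps don't drag local virtual environments or caches into the build context. The entries below are typical suggestions; adjust them to your repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.git
.venv
__pycache__/
*.pyc
.env
node_modules
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;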



&lt;h3&gt;
  
  
  Database Management with Adminer
&lt;/h3&gt;

&lt;p&gt;I added Adminer to my application stack to manage the database, and I configured secure access to it through Traefik.&lt;br&gt;
I did this by simply adding these Traefik labels in the adminer service section of the docker-compose file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;adminer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;adminer&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.enable=true"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.adminer.rule=Host(`michaeloxo.tech`)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;PathPrefix(`/adminer`)"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.adminer.entrypoints=websecure"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.adminer.tls=true"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Adminer also supports multiple database types, so you can try it out in your own application stacks. &lt;/p&gt;

&lt;h3&gt;
  
  
  Volume Management
&lt;/h3&gt;

&lt;p&gt;For data persistence for Postgres, Prometheus, and Grafana, I implemented named volumes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;postgres_data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
  &lt;span class="na"&gt;prometheus_data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
  &lt;span class="na"&gt;grafana_data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This helps persist data across container restarts.&lt;/p&gt;
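&lt;p&gt;Each named volume is then mounted in its service. For example, the database service references postgres_data like this (the image tag here is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;db:
  image: postgres:15
  volumes:
    - postgres_data:/var/lib/postgresql/data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;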

&lt;h3&gt;
  
  
  Network Isolation
&lt;/h3&gt;

&lt;p&gt;I implemented proper network isolation using Docker networks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;app-network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bridge&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allows the containers to communicate with each other easily while staying isolated from other workloads on the host.&lt;/p&gt;

&lt;h3&gt;
  
  
  Health Checks
&lt;/h3&gt;

&lt;p&gt;Every service includes health checks. The database, for example, has this health check:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;healthcheck&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CMD-SHELL"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pg_isready&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-U&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;${POSTGRES_USER}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-d&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;${POSTGRES_DB}"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5s&lt;/span&gt;
      &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5s&lt;/span&gt;
      &lt;span class="na"&gt;retries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These health checks are important: they ensure each service is actually ready before traffic reaches it, and they help us detect problems early. &lt;/p&gt;
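&lt;p&gt;Beyond early detection, health checks also let dependent services wait for readiness. Docker Compose can gate a service's startup on a healthy dependency, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;backend:
  depends_on:
    db:
      condition: service_healthy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;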



&lt;h2&gt;
  
  
  Detailed Component Breakdown: The Monitoring Stack 🔍
&lt;/h2&gt;

&lt;p&gt;Now let's talk about observability and monitoring.&lt;/p&gt;

&lt;p&gt;When I first approached this project, I knew I needed a monitoring solution that would be both powerful and maintainable. &lt;/p&gt;

&lt;p&gt;After careful consideration, I settled on a modern stack that combines the best tools for container monitoring, metrics collection, and log aggregation. Let me walk you through each component and why I chose them.&lt;/p&gt;
&lt;h3&gt;
  
  
  The Foundation: cAdvisor and Container Metrics
&lt;/h3&gt;

&lt;p&gt;The first piece of the puzzle was finding a way to monitor the containers effectively. That's where cAdvisor came in. What makes cAdvisor special is its zero-configuration approach - just mount the right volumes, and it starts collecting metrics automatically. In my setup, it watches over every container, tracking CPU usage, memory consumption, network I/O, and disk usage in real-time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;cadvisor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gcr.io/cadvisor/cadvisor:v0.47.0&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/:/rootfs:ro&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/run:/var/run:rw&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/sys:/sys:ro&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/lib/docker/:/var/lib/docker:ro&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The beauty of cAdvisor lies in its simplicity. It exposes metrics in Prometheus format out of the box, making it a perfect fit for my monitoring stack. Every container's performance is now visible at a glance, helping us identify potential issues before they become problems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prometheus and Metrics Storage
&lt;/h3&gt;

&lt;p&gt;For storing and querying our metrics, I chose Prometheus. Yeah, I know, Prometheus is pretty much everyone's go-to when it comes to collecting and storing metrics. That's because of its pull-based architecture, which is more reliable than push-based systems, especially in containerized environments. My Prometheus configuration is clean and straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;global&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;scrape_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;15s&lt;/span&gt;
  &lt;span class="na"&gt;evaluation_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;15s&lt;/span&gt;     

&lt;span class="na"&gt;scrape_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;prometheus'&lt;/span&gt;
    &lt;span class="na"&gt;metrics_path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/prometheus/metrics'&lt;/span&gt;
    &lt;span class="na"&gt;static_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;prometheus:9090'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cadvisor'&lt;/span&gt;
    &lt;span class="na"&gt;static_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cadvisor:8080'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Log Aggregation Duo: Loki and Promtail
&lt;/h3&gt;

&lt;p&gt;For log management, I implemented a combination of Loki and Promtail. This choice was driven by the need for a lightweight yet powerful logging solution. Unlike traditional ELK stacks that can be resource-intensive, Loki and Promtail provide efficient log aggregation with minimal overhead.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;loki&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;grafana/loki:latest&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./loki/loki-config.yml:/etc/loki/config.yml&lt;/span&gt;

&lt;span class="na"&gt;promtail&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;grafana/promtail:latest&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./promtail/promtail-config.yml:/etc/promtail/config.yml&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/log:/var/log:ro&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/lib/docker/containers:/var/lib/docker/containers:ro&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The synergy between these tools is impressive. Promtail collects logs from both the system and containers, while Loki stores them efficiently. What's particularly useful is how they use the same labeling system as Prometheus, making it easy to correlate logs with metrics.&lt;/p&gt;
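&lt;p&gt;Inside promtail-config.yml, the container log mount from above is picked up by a scrape config along these lines. This is a minimal sketch; the job labels are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: docker
    static_configs:
      - targets:
          - localhost
        labels:
          job: docker-logs
          __path__: /var/lib/docker/containers/*/*-json.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;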

&lt;h3&gt;
  
  
  The Visualization Layer: Grafana
&lt;/h3&gt;

&lt;p&gt;To bring all this data to life, I chose and implemented Grafana. I mean, Grafana just wonderfully ties everything together, providing beautiful dashboards and powerful querying capabilities. My Grafana setup is configured to work seamlessly with both Prometheus and Loki:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;grafana&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;grafana/grafana:latest&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;grafana_data:/var/lib/grafana&lt;/span&gt;
  &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;GF_SERVER_ROOT_URL=http://grafana:3000/grafana&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;GF_SERVER_SERVE_FROM_SUB_PATH=true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
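&lt;p&gt;To avoid wiring up data sources by hand, Grafana can also provision Prometheus and Loki from a YAML file mounted into /etc/grafana/provisioning/datasources. A minimal sketch (adjust the Prometheus URL if it serves under a sub-path, as in the scrape config earlier):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;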



&lt;p&gt;Here's what the Grafana dashboards look like in production:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffcbhh1xc9i6nd2yy7n2x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffcbhh1xc9i6nd2yy7n2x.png" alt="Image description" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqgptrew7m4rd51w11ih.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqgptrew7m4rd51w11ih.png" alt="Image description" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4z7tbcbayc567eo7uuod.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4z7tbcbayc567eo7uuod.png" alt="Image description" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Traffic Manager: Traefik
&lt;/h3&gt;

&lt;p&gt;Finally, to tie everything together, I implemented Traefik as my reverse proxy. This modern reverse proxy stands out for its automatic service discovery and dynamic configuration capabilities. My Traefik setup ensures secure access to all my monitoring tools. Just add these labels to the desired service in the docker-compose file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.enable=true"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.[service].rule=Host(`michaeloxo.tech`)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;PathPrefix(`/[service]`)"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.[service].entrypoints=websecure"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.[service].tls=true"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.[service].middlewares=global-middleware@file"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What makes this setup particularly effective is how all components work together. &lt;/p&gt;

&lt;p&gt;cAdvisor collects metrics, Prometheus stores them, Loki and Promtail handle logs, Grafana visualizes everything, and Traefik ensures secure access. It's a well-oiled machine where each part plays its role perfectly.&lt;/p&gt;

&lt;p&gt;The result? A comprehensive monitoring solution that provides real-time insights into our application's performance, helps us identify and troubleshoot issues quickly, and ensures we have the data we need to make informed decisions about our infrastructure.&lt;/p&gt;

&lt;p&gt;(Btw, I recently implemented a similar but more complex monitoring stack on a product we're building, and of course I wrote an article about it. &lt;a href="https://dev.to/nonso_echendu_001/mastering-modern-monitoring-a-comprehensive-guide-to-grafana-prometheus-and-dora-metrics-20ec"&gt;Read it here&lt;/a&gt;.)&lt;/p&gt;



&lt;h2&gt;
  
  
  Lessons Learned 📚
&lt;/h2&gt;

&lt;p&gt;Throughout this DevOps implementation journey, I've gathered some valuable insights that are worth noting:&lt;/p&gt;

&lt;h3&gt;
  
  
  Traefik was like magic!
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Automatic SSL Was a Game-Changer. Before Traefik, I spent hours manually configuring and renewing SSL certificates. Now I never have to worry about certificate renewals again.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Middleware Chains Simplified Security. Creating reusable security configurations with middleware chains was a marvel. I could easily apply consistent security headers across all services with a single reference.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
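&lt;p&gt;To give a feel for what such a chain looks like, here's a minimal sketch of a Traefik dynamic (file provider) configuration that would back the &lt;code&gt;global-middleware@file&lt;/code&gt; reference; the middleware names and values here are illustrative, not the project's exact config:&lt;/p&gt;

```yaml
# dynamic.yml - loaded by Traefik's file provider
http:
  middlewares:
    # One chain, referenced by every router as global-middleware@file
    global-middleware:
      chain:
        middlewares:
          - security-headers
          - rate-limit
    security-headers:
      headers:
        stsSeconds: 31536000       # HSTS for one year
        frameDeny: true            # X-Frame-Options: DENY
        contentTypeNosniff: true   # X-Content-Type-Options: nosniff
    rate-limit:
      rateLimit:
        average: 100
        burst: 50
```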

&lt;h3&gt;
  
  
  For containerization best practices
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Multi-Stage Builds Transformed My Images&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The satisfaction of seeing frontend image sizes drop from 590MB to 50MB (over a 90% reduction) was incredible&lt;/li&gt;
&lt;li&gt;Eliminated unnecessary build tools from production images&lt;/li&gt;
&lt;li&gt;Significantly improved deployment speed and reduced bandwidth usage&lt;/li&gt;
&lt;/ul&gt;
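&lt;p&gt;As a sketch of the idea (the paths, image tags, and build commands below are illustrative, not the project's actual Dockerfile): the first stage carries the full Node toolchain, while the final image ships only the static output.&lt;/p&gt;

```dockerfile
# Stage 1: build the React app with the full Node toolchain
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve only the static output from a tiny nginx image
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
```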

&lt;p&gt;&lt;strong&gt;Implementing Proper Service Dependencies Using Healthchecks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One thing that made my setup even better was implementing proper service dependencies, with healthchecks controlling startup order.&lt;/p&gt;

&lt;p&gt;This ensures services start in the right order and only when their dependencies are truly ready, not just running. It eliminated those annoying "connection refused" errors during startup and made the system much more resilient to restarts.&lt;/p&gt;
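&lt;p&gt;In a docker-compose file, that combination looks roughly like this (service names and the healthcheck command are illustrative):&lt;/p&gt;

```yaml
services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  backend:
    image: my-backend:latest
    depends_on:
      db:
        # Wait for the healthcheck to pass, not just for the container to start
        condition: service_healthy
```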

&lt;p&gt;The biggest takeaway from this project was how the right tooling can transform complex tasks into manageable ones. Traefik turned what would have been several hours of reverse proxy configuration into minutes, while the monitoring stack gave me insights I didn't even know I needed until I had them.&lt;/p&gt;



&lt;h2&gt;
  
  
  Conclusion 🎉
&lt;/h2&gt;

&lt;p&gt;Phew! What a rewarding journey this has been! Taking a regular app and turning it into a containerized, monitored production system was quite the adventure.&lt;/p&gt;

&lt;p&gt;Is it perfect? Nah, nothing ever is. But that's the beauty of DevOps - it's all about continuous improvement. I'm still tinkering with container sizes, playing with alert thresholds, and learning new security tricks. And tbh, that's the fun part!&lt;/p&gt;

&lt;p&gt;The biggest win for me wasn't just getting everything up and running (though that felt amazing!), but seeing how all these pieces work together. Watching container metrics flow into Prometheus, visualizing them in Grafana, and catching issues before they become problems - it's like having superpowers lol.&lt;/p&gt;

&lt;p&gt;I still have a list of improvements I want to make. I've just thought of adding a CI/CD pipeline and integrating Terraform &amp;amp; Ansible to automate deployment, and I know I'll think of more.&lt;/p&gt;

&lt;p&gt;But for now, I'm pretty happy with this setup. It's running smoothly, it's secure, and most importantly - it's giving us the insights we need to keep getting better.&lt;/p&gt;

&lt;p&gt;I do hope sharing my experience helps you in your own containerization and monitoring journey. Again don't forget to check out my article exclusively on monitoring &lt;a href="https://dev.to/nonso_echendu_001/mastering-modern-monitoring-a-comprehensive-guide-to-grafana-prometheus-and-dora-metrics-20ec"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you'd like to explore the complete implementation, the entire project is available on &lt;a href="https://github.com/NonsoEchendu/full-stack-fastapi-project" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. I welcome your feedback, issues, and pull requests!&lt;/p&gt;

&lt;p&gt;Till the next one, happy building!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a Container System from Scratch: Understanding the Magic Behind Docker</title>
      <dc:creator>Nonso Echendu</dc:creator>
      <pubDate>Fri, 28 Mar 2025 22:49:14 +0000</pubDate>
      <link>https://dev.to/nonso_echendu_001/building-a-container-system-from-scratch-understanding-the-magic-behind-docker-5fjn</link>
      <guid>https://dev.to/nonso_echendu_001/building-a-container-system-from-scratch-understanding-the-magic-behind-docker-5fjn</guid>
      <description>&lt;p&gt;Hi builders!&lt;/p&gt;

&lt;p&gt;I think this has been the shortest interval between any of my posts (lol).&lt;/p&gt;

&lt;p&gt;Well, today, I'm excited to share a project I've been working on (HNG thingy): building a container system from scratch! &lt;/p&gt;

&lt;p&gt;Maybe when you hear of "container", Docker comes right to mind, yeah? Well, if you've ever wondered what's happening under the hood when you run &lt;code&gt;docker run&lt;/code&gt;, this post is for you. I'll demystify containers by creating my own lightweight implementation that captures the core functionality of Docker. &lt;/p&gt;

&lt;h2&gt;
  
  
  🔍 Introduction: Why Build Your Own Container System?
&lt;/h2&gt;

&lt;p&gt;Containers have transformed how we deploy and run applications, but they can seem like magic. By building our own container system, we can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Gain a deep understanding of the core Linux technologies that power containers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Learn about isolation, resource control, and namespace concepts firsthand&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Appreciate the engineering decisions behind production container systems&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build a foundation for more advanced container orchestration concepts&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the end of this guide, you'll have a functional container system capable of running processes in isolation with resource limits, networking, and other essential features.&lt;/p&gt;

&lt;h2&gt;
  
  
  📋 Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before diving in, ensure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Linux system (Ubuntu 20.04 or similar)&lt;/li&gt;
&lt;li&gt;Root or sudo access&lt;/li&gt;
&lt;li&gt;Basic knowledge of Python and Bash&lt;/li&gt;
&lt;li&gt;Understanding of Linux processes and networking concepts&lt;/li&gt;
&lt;li&gt;Necessary packages installed: python3, cgroups-tools, iptables&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Btw, you can access all the scripts written for this project in my &lt;a href="https://github.com/NonsoEchendu/container-system" rel="noopener noreferrer"&gt;GitHub repo here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  🏗️ The Architecture: Understanding Our Container System
&lt;/h2&gt;

&lt;p&gt;Our container implementation relies on these core components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Python CLI Manager: Handles user commands and orchestrates container lifecycle&lt;/li&gt;
&lt;li&gt;Bash Container Script: Implements the low-level container functionality&lt;/li&gt;
&lt;li&gt;Linux Namespaces: For process, network, and filesystem isolation&lt;/li&gt;
&lt;li&gt;Cgroups: To implement resource limits (CPU, memory)&lt;/li&gt;
&lt;li&gt;Chroot: For filesystem isolation&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────────────────────────────┐
│                       User Commands                         │
│                                                             │
│  simple_container.py start|stop|list|logs                   │
└────────────────────────────┬────────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────────┐
│                 Python Container Manager                    │
│                                                             │
│  • Parses command line arguments                            │
│  • Manages container lifecycle                              │
│  • Tracks running containers                                │
│  • Sets up resource limits (cgroups)                        │
└────────────────────────────┬────────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────────┐
│                    container.sh Script                      │
│                                                             │
│  • Creates namespaces (process, network, mount)             │
│  • Sets up filesystem isolation (chroot)                    │
│  • Configures networking &amp;amp; port forwarding                  │
│  • Implements volume mounts                                 │
│  • Handles user isolation                                   │
└────────────────────────────┬────────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────────┐
│                      Linux Kernel                           │
│                                                             │
│  • Namespaces   • Cgroups     • Network Stack               │
│  • Filesystem   • Devices     • Process Management          │
└─────────────────────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🧠 Understanding Container Technologies: What Powers Our System
&lt;/h2&gt;

&lt;p&gt;Let's demystify the key technologies that make containers possible:&lt;/p&gt;

&lt;p&gt;(I'll be elaborating on some of the code snippets in those scripts, so keep the repo open on the side to get the most out of my explanations.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Namespaces: The Isolation Foundation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Linux namespaces are the cornerstone of container isolation, providing separate views of system resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;PID Namespace: Gives containers their own process IDs, starting with PID 1&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Network Namespace: Creates isolated network stacks with separate interfaces and routing tables&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mount Namespace: Isolates filesystem mount points between containers and host&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;UTS Namespace: Allows containers to have their own hostname and domain name&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IPC Namespace: Isolates inter-process communication mechanisms&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;User Namespace: Maps user IDs between container and host, improving security &lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our implementation uses unshare to create these namespaces, achieving process isolation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;unshare &lt;span class="nt"&gt;--mount&lt;/span&gt; &lt;span class="nt"&gt;--uts&lt;/span&gt; &lt;span class="nt"&gt;--ipc&lt;/span&gt; &lt;span class="nt"&gt;--pid&lt;/span&gt; &lt;span class="nt"&gt;--fork&lt;/span&gt; &lt;span class="nb"&gt;chroot&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ROOT_FS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WRAPPED_CMD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;br&gt;&lt;strong&gt;Cgroups: Resource Control Made Simple&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Control groups (cgroups) limit and account for resource usage:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;CPU Limits: Prevent containers from hogging CPU resources&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Memory Constraints: Protect the host from memory-hungry containers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Disk I/O Controls: Limit disk activity for fair resource sharing&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our implementation sets these limits using the cgroup filesystem:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;set_cpu_limit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cpu_limit&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Limit CPU usage for a container (percentage)&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="c1"&gt;# Create cgroup and set CPU limit
&lt;/span&gt;    &lt;span class="c1"&gt;# ...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
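&lt;p&gt;The snippet above elides the cgroup write itself. Here's one way the body could look, assuming a cgroup-v1 host where the CPU controller exposes the CFS quota/period files (the paths and helper are a sketch, not the repo's exact code):&lt;/p&gt;

```python
import os

CGROUP_ROOT = "/sys/fs/cgroup/cpu"  # cgroup-v1 CPU hierarchy (assumption)
PERIOD_US = 100000                  # default CFS period: 100ms

def cpu_percent_to_quota(cpu_limit, period_us=PERIOD_US):
    """Convert a CPU percentage (e.g. 50) into a CFS quota in microseconds."""
    return int(period_us * cpu_limit / 100)

def set_cpu_limit(container_name, cpu_limit):
    """Limit CPU usage for a container (percentage). Needs root; the
    cgroup naming scheme here is illustrative."""
    cgroup_path = os.path.join(CGROUP_ROOT, f"simple-container-{container_name}")
    os.makedirs(cgroup_path, exist_ok=True)
    # A quota of period_us * pct / 100 caps the group at pct% of one CPU
    with open(os.path.join(cgroup_path, "cpu.cfs_period_us"), "w") as f:
        f.write(str(PERIOD_US))
    with open(os.path.join(cgroup_path, "cpu.cfs_quota_us"), "w") as f:
        f.write(str(cpu_percent_to_quota(cpu_limit)))
    return cgroup_path
```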



&lt;p&gt;&lt;br&gt;&lt;strong&gt;Chroot: Filesystem Isolation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The chroot command changes the root directory for a process, creating filesystem isolation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chroot&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ROOT_FS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WRAPPED_CMD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simple yet powerful mechanism ensures containers can't access files outside their designated root filesystem.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 Part 1: Building the Container Manager (Python CLI)
&lt;/h2&gt;

&lt;p&gt;Let's start by creating our Python CLI for managing containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;#!/usr/bin/env python3
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;argparse&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;subprocess&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;signal&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SimpleContainer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;container_dir&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/var/run/simple-container&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;makedirs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;container_dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exist_ok&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;command&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cpu_limit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;memory_limit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
              &lt;span class="n"&gt;network&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;volume&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;detach&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;use_userns&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Start a new container&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
        &lt;span class="c1"&gt;# Implementation details...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our CLI supports these commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;start: Launch a new container with specified resources&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;stop: Gracefully terminate a running container&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;list: Show all running containers and their details&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;logs: Display container logs for troubleshooting&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
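&lt;p&gt;The command surface above maps naturally onto argparse subcommands. A minimal sketch (flag names mirror the usage shown in this post, but may differ from the actual repo):&lt;/p&gt;

```python
import argparse

def build_parser():
    """Build the CLI parser for start/stop/list/logs subcommands."""
    parser = argparse.ArgumentParser(prog="simple_container.py")
    sub = parser.add_subparsers(dest="action", required=True)

    start = sub.add_parser("start", help="Launch a new container")
    start.add_argument("--name", required=True)
    start.add_argument("--command", required=True)
    start.add_argument("--cpu", type=int, default=None)     # percentage
    start.add_argument("--memory", default=None)            # e.g. 256M

    stop = sub.add_parser("stop", help="Gracefully terminate a container")
    stop.add_argument("--name", required=True)

    sub.add_parser("list", help="Show running containers")

    logs = sub.add_parser("logs", help="Display container logs")
    logs.add_argument("--name", required=True)
    return parser
```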

&lt;h2&gt;
  
  
  🌐 Part 2: Creating Network Isolation
&lt;/h2&gt;

&lt;p&gt;Networking is crucial for container functionality. Our implementation creates:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A network namespace for the container&lt;/li&gt;
&lt;li&gt;Virtual Ethernet (veth) pairs to connect container and host&lt;/li&gt;
&lt;li&gt;NAT rules for internet access&lt;/li&gt;
&lt;li&gt;Port forwarding for service exposure&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Setup network namespace&lt;/span&gt;
ip netns add &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_NETNS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Create veth pair&lt;/span&gt;
ip &lt;span class="nb"&gt;link &lt;/span&gt;add &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$VETH_HOST&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nb"&gt;type &lt;/span&gt;veth peer name &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$VETH_CONTAINER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Move container end to namespace&lt;/span&gt;
ip &lt;span class="nb"&gt;link set&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$VETH_CONTAINER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; netns &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_NETNS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Configure interfaces and routing&lt;/span&gt;
&lt;span class="c"&gt;# ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives each container its own isolated network stack while maintaining connectivity to the outside world.&lt;/p&gt;
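&lt;p&gt;The elided "configure interfaces and routing" step also covers port forwarding. One way to express the DNAT rule from the Python side is to build the iptables argument list for subprocess (a hedged sketch; the function name is illustrative):&lt;/p&gt;

```python
def dnat_rule(host_port, container_ip, container_port):
    """Build iptables arguments that forward a host port into a container
    via DNAT (suitable for subprocess.run)."""
    return [
        "iptables", "-t", "nat", "-A", "PREROUTING",
        "-p", "tcp", "--dport", str(host_port),
        "-j", "DNAT", "--to-destination",
        f"{container_ip}:{container_port}",
    ]
```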

&lt;h2&gt;
  
  
  📦 Part 3: Implementing Filesystem Isolation and Volumes
&lt;/h2&gt;

&lt;p&gt;Our container system supports both filesystem isolation and volume mounts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Base filesystem: Using chroot with a minimal root filesystem&lt;/li&gt;
&lt;li&gt;Overlay filesystem: For non-destructive modifications&lt;/li&gt;
&lt;li&gt;Volume mounts: For sharing directories between host and container&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Mount essential filesystems&lt;/span&gt;
mount &lt;span class="nt"&gt;-t&lt;/span&gt; proc proc &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ROOT_FS&lt;/span&gt;&lt;span class="s2"&gt;/proc"&lt;/span&gt;
mount &lt;span class="nt"&gt;-t&lt;/span&gt; sysfs sysfs &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ROOT_FS&lt;/span&gt;&lt;span class="s2"&gt;/sys"&lt;/span&gt;

&lt;span class="c"&gt;# Setup volume mounts&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;volume &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VOLUMES&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    &lt;/span&gt;&lt;span class="nv"&gt;host_path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$volume&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;cut&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;: &lt;span class="nt"&gt;-f1&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
    &lt;span class="nv"&gt;container_path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$volume&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;cut&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;: &lt;span class="nt"&gt;-f2&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
    mount &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nb"&gt;bind&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$host_path&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ROOT_FS$container_path&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
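&lt;p&gt;On the Python side, the same &lt;code&gt;host:container&lt;/code&gt; volume spec can be split before handing it to the bash script, mirroring the &lt;code&gt;cut -d:&lt;/code&gt; calls in the loop above (the helper name is illustrative):&lt;/p&gt;

```python
def split_volume(spec):
    """Split a 'host_path:container_path' volume spec into its two parts."""
    host_path, _, container_path = spec.partition(":")
    if not container_path:
        raise ValueError(f"invalid volume spec: {spec!r}")
    return host_path, container_path
```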



&lt;h2&gt;
  
  
  🔒 Part 4: User Isolation and Security
&lt;/h2&gt;

&lt;p&gt;Security is essential for containers. Our implementation:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creates a non-root container user (UID 1000)&lt;/li&gt;
&lt;li&gt;Runs commands as this user inside the container&lt;/li&gt;
&lt;li&gt;Sets up a minimal /dev environment&lt;/li&gt;
&lt;li&gt;Manages permissions for mounted volumes&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Setup user isolation&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ROOT_FS&lt;/span&gt;&lt;span class="s2"&gt;/etc/passwd"&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
root:x:0:0:root:/root:/bin/bash
container:x:1000:1000:container:/home/container:/bin/bash
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="c"&gt;# Run command as container user&lt;/span&gt;
su - container &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$COMMAND&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  ⚙️ Part 5: Resource Limiting with Cgroups
&lt;/h2&gt;

&lt;p&gt;To prevent containers from consuming excessive resources, we implement cgroup-based limits:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;set_memory_limit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;memory_limit&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Limit memory usage for a container (in bytes)&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;cgroup_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/sys/fs/cgroup/memory/simple-container-&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;container_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;makedirs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cgroup_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exist_ok&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Set memory limit
&lt;/span&gt;    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;cgroup_path&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/memory.limit_in_bytes&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;w&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;memory_limit&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;cgroup_path&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This prevents "noisy neighbor" problems and protects your host system from container resource abuse.&lt;/p&gt;
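&lt;p&gt;Since &lt;code&gt;set_memory_limit&lt;/code&gt; expects raw bytes while the CLI examples later pass human-friendly sizes like &lt;code&gt;256M&lt;/code&gt;, a small converter bridges the two (a hypothetical helper, not from the repo):&lt;/p&gt;

```python
def parse_memory_limit(value):
    """Parse a size like '256M' or '1G' into bytes; bare numbers pass through."""
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    value = str(value).strip().upper()
    if value and value[-1] in units:
        return int(float(value[:-1]) * units[value[-1]])
    return int(value)
```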

&lt;h2&gt;
  
  
  🔍 Part 6: Testing Container Isolation
&lt;/h2&gt;

&lt;p&gt;Let's verify our container implementation works correctly:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Process Isolation: Container processes can't see host processes
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./simple_container.py start &lt;span class="nt"&gt;--name&lt;/span&gt; test1 &lt;span class="nt"&gt;--command&lt;/span&gt; &lt;span class="s2"&gt;"ps aux"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2. Network Isolation: Container has its own network stack&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./simple_container.py start &lt;span class="nt"&gt;--name&lt;/span&gt; test2 &lt;span class="nt"&gt;--command&lt;/span&gt; &lt;span class="s2"&gt;"ip addr &amp;amp;&amp;amp; ping -c 1 8.8.8.8"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. Filesystem Isolation: Container can't access host files&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./simple_container.py start &lt;span class="nt"&gt;--name&lt;/span&gt; test3 &lt;span class="nt"&gt;--command&lt;/span&gt; &lt;span class="s2"&gt;"ls -la / &amp;amp;&amp;amp; cat /etc/hostname"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4. Resource Limits: Container respects CPU and memory constraints&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./simple_container.py start &lt;span class="nt"&gt;--name&lt;/span&gt; test4 &lt;span class="nt"&gt;--cpu&lt;/span&gt; 50 &lt;span class="nt"&gt;--memory&lt;/span&gt; 256M &lt;span class="nt"&gt;--command&lt;/span&gt; &lt;span class="s2"&gt;"stress --cpu 4"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🚀 Part 7: Real-World Application - Deploying a Todo App in Our Container
&lt;/h2&gt;

&lt;p&gt;Let's put our container system to the test with a real-world application! We'll deploy a simple Flask Todo application inside our custom container, demonstrating how the concepts we've explored can be applied to practical use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting Up a Fresh Environment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Starting with a fresh Ubuntu server, here's how to deploy a simple web application in our container system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# First, update the system&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt upgrade &lt;span class="nt"&gt;-y&lt;/span&gt;

&lt;span class="c"&gt;# Install required dependencies&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; python3 python3-pip python3-venv git

&lt;span class="c"&gt;# Clone our container system repository&lt;/span&gt;
git clone https://github.com/NonsoEchendu/container-system
&lt;span class="nb"&gt;cd &lt;/span&gt;container-system

&lt;span class="c"&gt;# Make the scripts executable&lt;/span&gt;
&lt;span class="nb"&gt;sudo chmod&lt;/span&gt; +x container.sh simple_container.py cgroups.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;br&gt;&lt;strong&gt;Preparing the Container Root Filesystem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our container system needs a proper root filesystem. Let's prepare one.&lt;/p&gt;

&lt;p&gt;But before that, we need to install a very important tool: debootstrap, which installs a Debian base system into our root filesystem's directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;debootstrap
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can continue with setting up our root filesystem.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a directory for our container's root filesystem&lt;/span&gt;
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /home/ubuntu/todo-rootfs

&lt;span class="c"&gt;# Prepare the root filesystem&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./simple_container.py prepare-rootfs &lt;span class="nt"&gt;--target&lt;/span&gt; /home/ubuntu/todo-rootfs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Setting Up the Container Root Filesystem Repository Sources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We need to ensure the container has proper repository sources configured:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Configure proper repository sources in the container&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'cat &amp;gt; /home/ubuntu/odo-rootfs/etc/apt/sources.list &amp;lt;&amp;lt; EOF
deb http://archive.ubuntu.com/ubuntu focal main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu focal-updates main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu focal-security main restricted universe multiverse
EOF'&lt;/span&gt;

&lt;span class="c"&gt;# Update package lists with the new repositories&lt;/span&gt;
&lt;span class="nb"&gt;sudo chroot&lt;/span&gt; /home/ubuntu/todo-rootfs/ apt-get update

&lt;span class="c"&gt;# Install Python and Flask in the existing root filesystem&lt;/span&gt;
&lt;span class="nb"&gt;sudo chroot&lt;/span&gt; /home/ubuntu/todo-rootfs apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; python3 python3-pip
&lt;span class="nb"&gt;sudo chroot&lt;/span&gt; /home/ubuntu/todo-rootfs pip3 &lt;span class="nb"&gt;install &lt;/span&gt;flask
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Getting the Todo Application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now let's get the Todo application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clone the Todo app repository&lt;/span&gt;
git clone https://github.com/NonsoEchendu/simple-flask-todo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Creating and Starting the Container&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With our root filesystem and application ready, let's create and start the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start the container with the Todo app&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;container-system

&lt;span class="nb"&gt;sudo&lt;/span&gt; ./simple_container.py create &lt;span class="nt"&gt;--name&lt;/span&gt; todo-container &lt;span class="nt"&gt;--rootfs&lt;/span&gt; /home/ubuntu/todo-rootfs 

&lt;span class="nb"&gt;sudo&lt;/span&gt; ./simple_container.py start &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--name&lt;/span&gt; todo-container &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--volume&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/../simple-flask-todo:/app &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--port&lt;/span&gt; 8080:8080 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--command&lt;/span&gt; &lt;span class="s2"&gt;"cd /app &amp;amp;&amp;amp; python3 app.py"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--cpu&lt;/span&gt; 50 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--memory&lt;/span&gt; 256M
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A successful run should look like this on your terminal:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymwvqu125spq0vx4tdqx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymwvqu125spq0vx4tdqx.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accessing the Todo Application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the container is running, you can access the Todo app in your browser by going to &lt;code&gt;http://your-server-ip:8080&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lvnj3l61lndqhgwo430.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lvnj3l61lndqhgwo430.png" alt="Image description" width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Managing the Container&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After successfully deploying the application, you can manage the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check the container status&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./simple_container.py list

&lt;span class="c"&gt;# View the application logs&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./simple_container.py logs &lt;span class="nt"&gt;--name&lt;/span&gt; todo-container

&lt;span class="c"&gt;# Stop the container when done&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./simple_container.py stop &lt;span class="nt"&gt;--name&lt;/span&gt; todo-container

&lt;span class="c"&gt;# Remove the container&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./simple_container.py remove &lt;span class="nt"&gt;--name&lt;/span&gt; todo-container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🔥 Part 8: Challenges Faced and Overcome
&lt;/h2&gt;

&lt;p&gt;I'll be honest: building a container system from scratch wasn't without challenges. Here are some of them:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem: Permission Problems with User Namespaces&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the trickiest issues was handling permissions correctly when combining user namespaces with chroot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;unshare &lt;span class="nt"&gt;--user&lt;/span&gt; &lt;span class="nt"&gt;--map-root-user&lt;/span&gt; &lt;span class="nt"&gt;--mount&lt;/span&gt; &lt;span class="nb"&gt;chroot&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ROOT_FS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; /bin/bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(The above command will fail with "Permission denied") &lt;br&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: I separated the concerns: running chroot as root, then switching to the container user afterward with su:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chroot&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ROOT_FS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; su - container &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$COMMAND&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Problem: Network Namespace Communication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Setting up proper communication between the host and container network namespaces was challenging. My initial approach, which caused connectivity issues, was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip netns exec "$CONTAINER_NETNS" ip route add default via "$GATEWAY_IP"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; I implemented a complete solution with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Proper veth pair setup&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Correct IP and routing configuration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;NAT rules for outbound connections&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Careful DNS configuration&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
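&lt;p&gt;To make that concrete, here's a hedged sketch of what the veth and NAT setup involves (the interface names, the 10.0.0.0/24 subnet, and the namespace name are illustrative, not the script's actual values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Create a network namespace and a veth pair, one end per side
ip netns add todo-netns
ip link add veth-host type veth peer name veth-cont
ip link set veth-cont netns todo-netns

# Address and bring up both ends
ip addr add 10.0.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec todo-netns ip addr add 10.0.0.2/24 dev veth-cont
ip netns exec todo-netns ip link set veth-cont up

# Default route out through the host end of the pair
ip netns exec todo-netns ip route add default via 10.0.0.1

# NAT so outbound container traffic is masqueraded as the host
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -j MASQUERADE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;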

&lt;p&gt;&lt;strong&gt;Problem: Volume Mount Permission&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Volume mounts created complex permission issues, particularly with nested directories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mount &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nb"&gt;bind&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOST_PATH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ROOT_FS$CONTAINER_PATH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the above command, volume permissions wouldn't match user expectations: files created in the volume would be owned by root, not by the container user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; I implemented special handling for volume permissions, ensuring volumes are accessible to the container user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$USE_USER_NS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s2"&gt;"true"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt;
    &lt;span class="c"&gt;# Make mount point accessible to container user&lt;/span&gt;
    &lt;span class="nb"&gt;chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; 1000:1000 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ROOT_FS$container_path&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="c"&gt;# Mount with specific options&lt;/span&gt;
    mount &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$mount_opts&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$host_path&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ROOT_FS$container_path&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Problem: DNS Resolution Failures&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;DNS resolution inside containers was initially broken, preventing network connections: the container couldn't resolve external hostnames.&lt;/p&gt;

&lt;p&gt;A command like &lt;code&gt;ping google.com&lt;/code&gt; would return "Unknown host".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; I properly configured DNS by copying host resolver settings and ensuring proper access, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;setup_dns&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;# Get host's DNS servers&lt;/span&gt;
    &lt;span class="nv"&gt;HOST_DNS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;grep &lt;/span&gt;nameserver /etc/resolv.conf | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $2}'&lt;/span&gt; | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 1&lt;span class="si"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOST_DNS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;&lt;span class="nv"&gt;HOST_DNS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"8.8.8.8"&lt;/span&gt;
    &lt;span class="k"&gt;fi&lt;/span&gt;

    &lt;span class="c"&gt;# Create resolv.conf with host's DNS&lt;/span&gt;
    &lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ROOT_FS&lt;/span&gt;&lt;span class="s2"&gt;/etc/resolv.conf"&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
nameserver &lt;/span&gt;&lt;span class="nv"&gt;$HOST_DNS&lt;/span&gt;&lt;span class="sh"&gt;
nameserver 8.8.8.8
nameserver 8.8.4.4
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;    &lt;span class="c"&gt;# Add DNS server IP to container's routing table&lt;/span&gt;
    ip netns &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_NETNS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; ip route add &lt;span class="nv"&gt;$HOST_DNS&lt;/span&gt; via &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$HOST_IP&lt;/span&gt; | &lt;span class="nb"&gt;cut&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;&lt;span class="s1"&gt;'/'&lt;/span&gt; &lt;span class="nt"&gt;-f1&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
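&lt;p&gt;The nameserver-extraction pipeline in setup_dns can be sanity-checked in isolation against a throwaway file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Feed the grep | awk | head pipeline a known resolv.conf and check the result
TMP=$(mktemp)
printf 'nameserver 10.0.0.2\nnameserver 8.8.8.8\n' &amp;gt; "$TMP"
FIRST_DNS=$(grep nameserver "$TMP" | awk '{print $2}' | head -n 1)
echo "first nameserver: $FIRST_DNS"
rm -f "$TMP"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It should print &lt;code&gt;first nameserver: 10.0.0.2&lt;/code&gt;, confirming that only the first nameserver line is taken.&lt;/p&gt;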



&lt;p&gt;&lt;br&gt;And a few other problems I can't recall at the moment.&lt;/p&gt;

&lt;h2&gt;
  
  
  💡 Part 9: Lessons Learned
&lt;/h2&gt;

&lt;p&gt;Building this container system taught me some valuable lessons:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson 1:&lt;/strong&gt; The power of Linux fundamentals and building blocks&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson 2:&lt;/strong&gt; Importance of Security Layering&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Namespaces for isolation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Capability restrictions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;User separation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Resource limits&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Filesystem restrictions&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Lesson 3:&lt;/strong&gt; Abstractions Have Real Value&lt;/p&gt;

&lt;p&gt;After implementing containers from scratch, I have newfound appreciation for the abstractions Docker provides. What seems like "magic" is actually careful engineering to hide complexity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson 4:&lt;/strong&gt; Resource Management is Crucial&lt;/p&gt;

&lt;p&gt;Containers without resource limits can easily disrupt host systems. Proper cgroup configuration is not optional but essential for production use.&lt;/p&gt;
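&lt;p&gt;As a small concrete illustration: a flag like &lt;code&gt;--memory 256M&lt;/code&gt; eventually has to become a plain number for the kernel. A minimal conversion sketch (handling only the M suffix, for brevity) looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Convert a human-readable limit like 256M into bytes
LIMIT="256M"
NUM=${LIMIT%M}                 # strip the M suffix, leaving 256
BYTES=$((NUM * 1024 * 1024))   # 256 MiB in bytes
echo "$BYTES"                  # prints 268435456
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A value like this is what ends up written into the cgroup's memory limit file.&lt;/p&gt;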

&lt;h2&gt;
  
  
  🔮 Conclusion: From Understanding to Innovation
&lt;/h2&gt;

&lt;p&gt;Congratulations! You've successfully built a container system that implements all the key features of production container runtimes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;✅ Process isolation with namespaces&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;✅ Network isolation and port forwarding&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;✅ Filesystem isolation and volume mounts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;✅ Resource limits with cgroups&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;✅ User isolation for security&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This journey has given you deep insights into how containers actually work, demystifying what often seems like magic. &lt;/p&gt;

&lt;p&gt;Now you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Better understand Docker and Kubernetes internals&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Debug container issues more effectively&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make informed decisions about container deployment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Potentially extend your implementation with more advanced features&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Remember, while Docker and other production container systems are much more sophisticated, they're built on these same fundamental Linux primitives we've explored. By building our own implementation, we've peeled back the layers of abstraction to reveal the elegant simplicity at the core of container technology.&lt;/p&gt;

&lt;p&gt;Till next time, happy building!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Modern Monitoring Approach: A Comprehensive Guide to Grafana, Prometheus, and DORA Metrics</title>
      <dc:creator>Nonso Echendu</dc:creator>
      <pubDate>Fri, 21 Mar 2025 10:26:46 +0000</pubDate>
      <link>https://dev.to/nonso_echendu_001/mastering-modern-monitoring-a-comprehensive-guide-to-grafana-prometheus-and-dora-metrics-20ec</link>
      <guid>https://dev.to/nonso_echendu_001/mastering-modern-monitoring-a-comprehensive-guide-to-grafana-prometheus-and-dora-metrics-20ec</guid>
      <description>&lt;p&gt;Hi builders.&lt;/p&gt;

&lt;p&gt;It's been a while since I posted an article here. I've been consumed by an internship program (HNG Tech), where we're building a real-world product. Sneak peek: it's an AI grading system that schools and educational organisations can use. When it's out, I'll tell y'all about it :)&lt;/p&gt;

&lt;p&gt;But today, I'll be talking about monitoring your web applications, which will save you time and money (application downtime costs you money).&lt;/p&gt;

&lt;p&gt;This guide will walk you through building a powerful monitoring ecosystem that not only alerts you when things break but helps you understand why they break and how to improve your development practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  🔍 Introduction: Why Monitoring Matters
&lt;/h2&gt;

&lt;p&gt;A robust monitoring system is very important as it aids in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Providing real-time visibility into system health&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Alerting teams before small issues become major outages&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Offering actionable insights for continuous improvement&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enabling data-driven decisions about infrastructure and development practices&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the end of this guide, you'll have implemented a comprehensive monitoring solution leveraging industry-standard tools like Prometheus, Grafana, and cutting-edge DORA metrics to transform how you understand and improve your applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  📋 Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before diving in, ensure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A Linux-based cloud server (I used Ubuntu 20.04, but any other Linux distro works too)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Root or sudo access&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Basic command line knowledge&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GitHub Actions configured for your repositories&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Firewall configured to allow necessary ports (22, 3000, 9090, etc.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A Slack workspace&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🏗️ The Architecture: Understanding Our Monitoring Stack
&lt;/h2&gt;

&lt;p&gt;Our monitoring system consists of these tools:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Prometheus: To collect and store all our metrics&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Grafana: Makes pretty, customizable dashboards that'll help you visualize your metrics&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Node Exporter: Collects hardware and OS server metrics (like CPU, memory, disk)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Blackbox Exporter: Checks whether endpoints are actually responding, and also checks SSL certificate validity&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DORA Metrics Exporter: Tracks development performance on our GitHub repos using industry-standard metrics&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AlertManager: Manages alert routing and delivery to Slack&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌───────────────────────────────────────────────────────────────┐
│                    Monitored Infrastructure                   │
│                                                               │
│  ┌────────────────┐    ┌────────────────┐   ┌──────────────┐  │
│  │ Node Exporter  │    │ Blackbox Probe │   │    GitHub    │  │
│  │(Server Metrics)│    │ (Website Tests)│   │(DORA Metrics)│  │
│  └───────┬────────┘    └───────┬────────┘   └───────┬──────┘  │
└──────────┼─────────────────────┼────────────────────┼─────────┘
           │                     │                    │
           └─────────┬───────────┴────────────────────┘
                     │
                     ▼
            ┌─────────────────┐
            │    Exporters    │
            │(Data Collectors)│
            └────────┬────────┘
                     │
                     ▼  
           ┌───────────────────┐
           │    Prometheus     │
           │(Metrics Database) │────────────────┐
           └─────┬─────────────┘                │
                 │                              │
        ┌────────▼─────┐                        │
        │ AlertManager │                        │
        │  (Alerts)    │                        │
        └────────┬─────┘                        │
                 │                              │
                 │                              │
        ┌────────▼──────┐                       │
        │   Slack       │                       │
        │(Notifications)│                       │
        └───────────────┘                       │
                                                │
                                                ▼
            ┌────────────────────────────────────────────────────┐
            │                 Grafana Dashboards                 │
            │  ┌───────────┐  ┌───────────┐  ┌───────────────┐   │
            │  │  System   │  │  Website  │  │ DORA Metrics  │   │
            │  │ Dashboard │  │ Dashboard │  │   Dashboard   │   │
            │  └───────────┘  └───────────┘  └───────────────┘   │
            └────────────────────────┬───────────────────────────┘
                                     │
                                     ▼
                            ┌─────────────────┐
                            │   Developers    │
                            │  &amp;amp; Operations   │
                            └─────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🧠 Why I Chose These Specific Monitoring Tools
&lt;/h2&gt;

&lt;p&gt;Our monitoring stack wasn't assembled randomly; each component was carefully selected to fulfil specific requirements in a modern infrastructure. Let's talk a bit about each of them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prometheus: The Foundation of Our Stack
&lt;/h3&gt;

&lt;p&gt;I chose Prometheus as our core monitoring tool for several practical reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pull-based approach&lt;/strong&gt;: Prometheus collects metrics by reaching out to our systems, which simplifies setup and improves reliability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simple yet powerful queries&lt;/strong&gt;: PromQL lets us create meaningful alerts and visualizations without complex coding.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Built-in alerting&lt;/strong&gt;: Prometheus detects issues based on our defined thresholds, sending these to AlertManager for notification.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Easy metric filtering&lt;/strong&gt;: I can quickly analyze metrics by server, application, or any other label we've defined.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Perfect fit for our infrastructure&lt;/strong&gt;: Works seamlessly with both our servers and web applications. &lt;br&gt;&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
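&lt;p&gt;As a taste of PromQL, an expression like the one below (assuming the standard node_exporter metric names) computes per-instance CPU usage, and can back both a dashboard panel and an alert rule:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;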

&lt;h3&gt;
  
  
  Grafana: Visualization That Drives Insights
&lt;/h3&gt;

&lt;p&gt;I chose Grafana for visualization because it excels at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multi-source dashboards&lt;/strong&gt;: Can combine Prometheus data with other sources like logs or traces.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rich visualization options&lt;/strong&gt;: From time-series graphs to heatmaps and histograms.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Alerting capabilities&lt;/strong&gt;: Though I primarily use AlertManager, Grafana's alerting can provide an additional layer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Annotation support&lt;/strong&gt;: Allows marking deployments, incidents, and other events directly on dashboards.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;User-friendly interface&lt;/strong&gt;: Makes it accessible for both technical and non-technical stakeholders. &lt;br&gt;&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Node Exporter: Server-Level Insights
&lt;/h3&gt;

&lt;p&gt;I implemented Node Exporter to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitor system resources&lt;/strong&gt;: Track CPU, memory, disk, and network usage on our servers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Identify bottlenecks&lt;/strong&gt;: Quickly pinpoint which resources are constraining performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Track system health&lt;/strong&gt;: Get early warnings about server issues before they affect users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Collect detailed metrics&lt;/strong&gt;: Access hundreds of system-level data points for thorough analysis. &lt;br&gt;&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Blackbox Exporter: External Endpoint Monitoring
&lt;/h3&gt;

&lt;p&gt;Blackbox Exporter was specifically chosen to monitor external endpoints and provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Uptime monitoring&lt;/strong&gt;: Detects when websites and APIs are unavailable&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Response time tracking&lt;/strong&gt;: Measures latency that could affect user experience&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SSL certificate validation&lt;/strong&gt;: Alerts before certificates expire to prevent security warnings&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;HTTP status verification&lt;/strong&gt;: Ensures services are returning proper response codes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Basic content validation&lt;/strong&gt;: Can verify that responses contain expected content&lt;br&gt;&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  AlertManager: Intelligent Alert Routing
&lt;/h3&gt;

&lt;p&gt;AlertManager wasn't chosen merely for sending notifications, but for its sophisticated handling of alerts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Grouping and deduplication&lt;/strong&gt;: Prevents alert storms during major outages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Silencing and inhibition&lt;/strong&gt;: Reduces noise by temporarily muting alerts during maintenance or when a higher-priority alert is already firing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multiple receivers&lt;/strong&gt;: Routes different alerts to appropriate teams or channels.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Templating&lt;/strong&gt;: Creates rich, context-aware notifications that speed up troubleshooting. &lt;br&gt;&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  DORA Metrics Exporter: Connecting Technical and Business Performance
&lt;/h3&gt;

&lt;p&gt;This custom component bridges the gap between operational metrics and development performance, allowing teams to make data-driven decisions about process improvements.&lt;/p&gt;

&lt;h2&gt;
  
  
  📊 The Importance of Tracking DORA Metrics
&lt;/h2&gt;

&lt;p&gt;DORA (DevOps Research and Assessment) research identifies four key metrics:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Deployment Frequency (DF)&lt;/strong&gt;: How often code is successfully deployed to production&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lead Time for Changes (LTC)&lt;/strong&gt;: The time it takes for a commit to reach production&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mean Time to Restore (MTTR)&lt;/strong&gt;: How quickly service can be restored after an incident&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Change Failure Rate (CFR)&lt;/strong&gt;: The percentage of deployments that cause a failure 
&lt;/li&gt;
&lt;/ol&gt;
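&lt;p&gt;To make one of these concrete: Change Failure Rate is simply failed deployments divided by total deployments. With hypothetical counts (a real exporter would pull these from the GitHub API):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Hypothetical deployment counts over some reporting window
TOTAL_DEPLOYS=40
FAILED_DEPLOYS=3

# CFR as a percentage, to one decimal place
CFR=$(awk "BEGIN { printf \"%.1f\", ($FAILED_DEPLOYS / $TOTAL_DEPLOYS) * 100 }")
echo "Change Failure Rate: ${CFR}%"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That prints &lt;code&gt;Change Failure Rate: 7.5%&lt;/code&gt;; DORA's published benchmarks put high performers roughly in the 0-15% range.&lt;/p&gt;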

&lt;p&gt;Our setup doesn't just passively collect these metrics — it makes them actionable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-time visibility&lt;/strong&gt;: Dashboards show current performance on all four metrics&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Historical trends&lt;/strong&gt;: Track improvements over time with historical data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Proactive alerts&lt;/strong&gt;: Get notified when metrics fall outside healthy ranges&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Correlation with system metrics&lt;/strong&gt;: Understand how infrastructure affects delivery performance&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, if your Change Failure Rate rises above a threshold, you'll be alerted before it becomes a serious problem. Similarly, if Mean Time to Restore grows too long, you can focus efforts on improving incident response.&lt;/p&gt;
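&lt;p&gt;Such an alert might look like the following Prometheus rule. Note that the metric name &lt;code&gt;dora_change_failure_rate&lt;/code&gt; and the 15% threshold are assumptions for illustration, not the custom exporter's actual interface:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;groups:
  - name: dora-alerts
    rules:
      - alert: HighChangeFailureRate
        expr: dora_change_failure_rate &amp;gt; 15
        for: 1h
        labels:
          severity: warning
        annotations:
          summary: "Change Failure Rate has been above 15% for an hour"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;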

&lt;p&gt;By integrating DORA metrics into your monitoring system, you create a feedback loop that continuously improves both technical operations and business outcomes. &lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🚨 How Alerts Transform System Reliability
&lt;/h2&gt;

&lt;p&gt;Now let's talk about alerting. &lt;/p&gt;

&lt;p&gt;Alerts are far more than just notifications — they are a critical component of a reliability strategy that shifts teams from reactive to proactive operations. A well-configured alerting system fundamentally changes how teams approach reliability.&lt;/p&gt;

&lt;p&gt;Here's what I mean. We'll compare the traditional approach (without proactive alerts) and the modern approach (with our alert system).&lt;/p&gt;

&lt;h4&gt;
  
  
  The Traditional Approach (Without Proactive Alerts)
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;System fails completely&lt;/li&gt;
&lt;li&gt;Users report problems&lt;/li&gt;
&lt;li&gt;Engineers scramble to diagnose unfamiliar issues&lt;/li&gt;
&lt;li&gt;Teams work under pressure to restore service&lt;/li&gt;
&lt;li&gt;Business impact accumulates during downtime&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  The Modern Approach (With Our Alert System)
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;System begins showing early warning signs&lt;/li&gt;
&lt;li&gt;Alerts trigger before users are affected&lt;/li&gt;
&lt;li&gt;Engineers investigate with detailed context already provided&lt;/li&gt;
&lt;li&gt;Problems are resolved during their early stages&lt;/li&gt;
&lt;li&gt;Many outages are prevented entirely&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The alerts we've configured in this guide do more than just notify—they transform how teams interact with systems:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Context-Rich Notifications
&lt;/h4&gt;

&lt;p&gt;Our alerts include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Precise timing information&lt;/li&gt;
&lt;li&gt;System relationships&lt;/li&gt;
&lt;li&gt;Impact assessments&lt;/li&gt;
&lt;li&gt;Historical context&lt;/li&gt;
&lt;li&gt;Actionable next steps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means engineers can begin addressing issues immediately rather than spending critical time gathering basic information.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Progressive Severity Levels
&lt;/h4&gt;

&lt;p&gt;By distinguishing between warnings and critical alerts, the system creates a natural progression:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Warnings&lt;/strong&gt;: Address during normal working hours to prevent future problems&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Critical alerts&lt;/strong&gt;: Require immediate attention to restore service&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This tiered approach ensures appropriate response without creating false urgency.&lt;/p&gt;
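&lt;p&gt;In Alertmanager, this tiering maps naturally onto routes keyed on the &lt;code&gt;severity&lt;/code&gt; label. A minimal sketch (the receiver names are placeholders for whatever notification channels you configure):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;route:
  receiver: default
  routes:
    # Critical alerts page someone immediately and re-notify often
    - match:
        severity: critical
      receiver: oncall-pager
      repeat_interval: 1h
    # Warnings go to a chat channel for working-hours triage
    - match:
        severity: warning
      receiver: team-chat
      repeat_interval: 12h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;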




&lt;p&gt;Enough talk. Now let's roll up our sleeves and get these tools up and running on our servers.&lt;/p&gt;

&lt;h2&gt;
  
  
  🔧 Part 1: Building the Foundation
&lt;/h2&gt;

&lt;p&gt;Let's start by creating the directory structure and installing our components.&lt;/p&gt;

&lt;h3&gt;
  
  
  Directory Setup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/prometheus /etc/prometheus/rules /etc/alertmanager /etc/blackbox_exporter
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /var/lib/prometheus /var/lib/grafana /var/lib/alertmanager
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /opt/dora-exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Component Installation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Prometheus: Your Metrics Database&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Install Prometheus (I'm using v2.47.0, but use whatever's current):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget https://github.com/prometheus/prometheus/releases/download/v2.47.0/prometheus-2.47.0.linux-amd64.tar.gz
&lt;span class="nb"&gt;tar &lt;/span&gt;xvfz prometheus-&lt;span class="k"&gt;*&lt;/span&gt;.tar.gz
&lt;span class="nb"&gt;sudo mv &lt;/span&gt;prometheus-2.47.0.linux-amd64/prometheus /usr/local/bin/
&lt;span class="nb"&gt;sudo mv &lt;/span&gt;prometheus-2.47.0.linux-amd64/promtool /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
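&lt;p&gt;Before moving on, it's worth confirming the binaries landed on your PATH:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;prometheus --version
promtool --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;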



&lt;p&gt;&lt;strong&gt;AlertManager: Your Alert Orchestrator&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget https://github.com/prometheus/alertmanager/releases/download/v0.26.0/alertmanager-0.26.0.linux-amd64.tar.gz
&lt;span class="nb"&gt;tar &lt;/span&gt;xvfz alertmanager-&lt;span class="k"&gt;*&lt;/span&gt;.tar.gz
&lt;span class="nb"&gt;sudo mv &lt;/span&gt;alertmanager-0.26.0.linux-amd64/alertmanager /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Node Exporter: Hardware &amp;amp; OS Metrics&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Install Node Exporter for server metrics:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz
&lt;span class="nb"&gt;tar &lt;/span&gt;xvfz node_exporter-&lt;span class="k"&gt;*&lt;/span&gt;.tar.gz
&lt;span class="nb"&gt;sudo mv &lt;/span&gt;node_exporter-1.6.1.linux-amd64/node_exporter /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Blackbox Exporter: External Monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run these commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget https://github.com/prometheus/blackbox_exporter/releases/download/v0.24.0/blackbox_exporter-0.24.0.linux-amd64.tar.gz
&lt;span class="nb"&gt;tar &lt;/span&gt;xvfz blackbox_exporter-&lt;span class="k"&gt;*&lt;/span&gt;.tar.gz
&lt;span class="nb"&gt;sudo mv &lt;/span&gt;blackbox_exporter-0.24.0.linux-amd64/blackbox_exporter /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;DORA Metrics Exporter: Engineering Performance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For the DORA metrics exporter, we'll use custom scripts from github.com/NonsoEchendu/dora-metrics (please follow me on GitHub ;)) that collect key performance metrics from GitHub repositories.&lt;/p&gt;

&lt;p&gt;Clone the repo into your server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/NonsoEchendu/dora-metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy its contents into the &lt;code&gt;/opt/dora-exporter&lt;/code&gt; directory we created earlier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo cp &lt;/span&gt;dora-metrics/&lt;span class="k"&gt;*&lt;/span&gt; /opt/dora-exporter
&lt;span class="nb"&gt;cp&lt;/span&gt; .env.example .env
&lt;span class="nb"&gt;sudo cp&lt;/span&gt; .env /etc/default/dora-metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create the .env file from the example, copy it to &lt;code&gt;/etc/default/dora-metrics&lt;/code&gt;, and edit it with your values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; .env.example .env
&lt;span class="nb"&gt;sudo cp&lt;/span&gt; .env /etc/default/dora-metrics
&lt;span class="nb"&gt;sudo &lt;/span&gt;nano /etc/default/dora-metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create and prepare the virtual environment the dora-metrics Python script will run in. (Note: the &lt;code&gt;chown&lt;/code&gt; below needs the &lt;code&gt;dora&lt;/code&gt; user, which we create in the User Creation step, so create that user first if you're running top to bottom.)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;python3-venv
&lt;span class="nb"&gt;sudo &lt;/span&gt;python3 &lt;span class="nt"&gt;-m&lt;/span&gt; venv /opt/dora-exporter/venv 
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; dora:dora /opt/dora-exporter/venv
&lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; dora /opt/dora-exporter/venv/bin/pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Grafana: Your Visualization Platform&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now let's install Grafana. Run these commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; apt-transport-https software-properties-common wget
wget &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="nt"&gt;-O&lt;/span&gt; - https://packages.grafana.com/gpg.key | &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-key add -
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb https://packages.grafana.com/oss/deb stable main"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; /etc/apt/sources.list.d/grafana.list
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
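&lt;p&gt;Note that &lt;code&gt;apt-key&lt;/code&gt; is deprecated on recent Debian/Ubuntu releases. If the command above fails for you, the keyring-based approach Grafana now documents looks roughly like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sudo mkdir -p /etc/apt/keyrings
wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg &amp;gt; /dev/null
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;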



&lt;p&gt;&lt;strong&gt;User Creation &amp;amp; Permissions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Security best practice: each component runs as its own user with limited permissions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;useradd &lt;span class="nt"&gt;--no-create-home&lt;/span&gt; &lt;span class="nt"&gt;--shell&lt;/span&gt; /bin/false prometheus
&lt;span class="nb"&gt;sudo &lt;/span&gt;useradd &lt;span class="nt"&gt;--no-create-home&lt;/span&gt; &lt;span class="nt"&gt;--shell&lt;/span&gt; /bin/false alertmanager
&lt;span class="nb"&gt;sudo &lt;/span&gt;useradd &lt;span class="nt"&gt;--no-create-home&lt;/span&gt; &lt;span class="nt"&gt;--shell&lt;/span&gt; /bin/false node_exporter
&lt;span class="nb"&gt;sudo &lt;/span&gt;useradd &lt;span class="nt"&gt;--no-create-home&lt;/span&gt; &lt;span class="nt"&gt;--shell&lt;/span&gt; /bin/false blackbox
&lt;span class="nb"&gt;sudo &lt;/span&gt;useradd &lt;span class="nt"&gt;--no-create-home&lt;/span&gt; &lt;span class="nt"&gt;--shell&lt;/span&gt; /bin/false dora
&lt;span class="nb"&gt;sudo &lt;/span&gt;useradd &lt;span class="nt"&gt;--no-create-home&lt;/span&gt; &lt;span class="nt"&gt;--shell&lt;/span&gt; /bin/false grafana

&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; prometheus:prometheus /etc/prometheus /var/lib/prometheus
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; alertmanager:alertmanager /etc/alertmanager /var/lib/alertmanager
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; blackbox:blackbox /etc/blackbox_exporter
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; grafana:grafana /var/lib/grafana
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; dora:dora /opt/dora-exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🚀 Part 2: Service Configuration
&lt;/h2&gt;

&lt;p&gt;Now let's configure each service to run automatically at startup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prometheus Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;/etc/systemd/system/prometheus.service&lt;/code&gt; and add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;Unit]
&lt;span class="nv"&gt;Description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Prometheus
&lt;span class="nv"&gt;Wants&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network-online.target
&lt;span class="nv"&gt;After&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network-online.target

&lt;span class="o"&gt;[&lt;/span&gt;Service]
&lt;span class="nv"&gt;User&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prometheus
&lt;span class="nv"&gt;Group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prometheus
&lt;span class="nv"&gt;ExecStart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/bin/prometheus &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--config&lt;/span&gt;.file&lt;span class="o"&gt;=&lt;/span&gt;/etc/prometheus/prometheus.yml &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--storage&lt;/span&gt;.tsdb.path&lt;span class="o"&gt;=&lt;/span&gt;/var/lib/prometheus &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--web&lt;/span&gt;.console.libraries&lt;span class="o"&gt;=&lt;/span&gt;/usr/share/prometheus/console_libraries &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--web&lt;/span&gt;.console.templates&lt;span class="o"&gt;=&lt;/span&gt;/usr/share/prometheus/consoles &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--web&lt;/span&gt;.enable-lifecycle

&lt;span class="nv"&gt;Restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;always

&lt;span class="o"&gt;[&lt;/span&gt;Install]
&lt;span class="nv"&gt;WantedBy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  AlertManager Service
&lt;/h3&gt;

&lt;p&gt;Create &lt;code&gt;/etc/systemd/system/alertmanager.service&lt;/code&gt; and add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;Unit]
&lt;span class="nv"&gt;Description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Alertmanager
&lt;span class="nv"&gt;Wants&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network-online.target
&lt;span class="nv"&gt;After&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network-online.target

&lt;span class="o"&gt;[&lt;/span&gt;Service]
&lt;span class="nv"&gt;User&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;alertmanager
&lt;span class="nv"&gt;Group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;alertmanager
&lt;span class="nv"&gt;ExecStart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/bin/alertmanager &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--config&lt;/span&gt;.file&lt;span class="o"&gt;=&lt;/span&gt;/etc/alertmanager/alertmanager.yml &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--storage&lt;/span&gt;.path&lt;span class="o"&gt;=&lt;/span&gt;/var/lib/alertmanager &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--log&lt;/span&gt;.level&lt;span class="o"&gt;=&lt;/span&gt;debug

&lt;span class="nv"&gt;Restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;always

&lt;span class="o"&gt;[&lt;/span&gt;Install]
&lt;span class="nv"&gt;WantedBy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Node Exporter Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;/etc/systemd/system/node-exporter.service&lt;/code&gt; and add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;Unit]
&lt;span class="nv"&gt;Description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Node Exporter
&lt;span class="nv"&gt;Wants&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network-online.target
&lt;span class="nv"&gt;After&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network-online.target

&lt;span class="o"&gt;[&lt;/span&gt;Service]
&lt;span class="nv"&gt;User&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter
&lt;span class="nv"&gt;Group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;node_exporter
&lt;span class="nv"&gt;ExecStart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/bin/node_exporter &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--collector&lt;/span&gt;.filesystem.ignored-mount-points&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"^/(sys|proc|dev|host|etc)(&lt;/span&gt;&lt;span class="nv"&gt;$$&lt;/span&gt;&lt;span class="s2"&gt;|/)"&lt;/span&gt;

&lt;span class="nv"&gt;Restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;always

&lt;span class="o"&gt;[&lt;/span&gt;Install]
&lt;span class="nv"&gt;WantedBy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Blackbox Exporter Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;/etc/systemd/system/blackbox-exporter.service&lt;/code&gt; and add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;Unit]
&lt;span class="nv"&gt;Description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Blackbox Exporter
&lt;span class="nv"&gt;Wants&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network-online.target
&lt;span class="nv"&gt;After&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network-online.target

&lt;span class="o"&gt;[&lt;/span&gt;Service]
&lt;span class="nv"&gt;User&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;blackbox
&lt;span class="nv"&gt;Group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;blackbox
&lt;span class="nv"&gt;ExecStart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/bin/blackbox_exporter &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--config&lt;/span&gt;.file&lt;span class="o"&gt;=&lt;/span&gt;/etc/blackbox_exporter/blackbox.yml

&lt;span class="nv"&gt;Restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;always

&lt;span class="o"&gt;[&lt;/span&gt;Install]
&lt;span class="nv"&gt;WantedBy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;DORA Metrics Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;/etc/systemd/system/dora-metrics.service&lt;/code&gt; and add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;Unit]
&lt;span class="nv"&gt;Description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;DORA Metrics Exporter
&lt;span class="nv"&gt;After&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network.target

&lt;span class="o"&gt;[&lt;/span&gt;Service]
&lt;span class="nv"&gt;User&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dora
&lt;span class="nv"&gt;Group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dora
&lt;span class="nv"&gt;EnvironmentFile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/etc/default/dora-metrics
&lt;span class="nv"&gt;WorkingDirectory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/opt/dora-exporter
&lt;span class="nv"&gt;ExecStart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/opt/dora-exporter/venv/bin/python3 /opt/dora-exporter/main.py
&lt;span class="nv"&gt;Restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;always

&lt;span class="o"&gt;[&lt;/span&gt;Install]
&lt;span class="nv"&gt;WantedBy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
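&lt;p&gt;With all the unit files in place, reload systemd and enable each service so it starts on boot (the names below match the unit filenames we just created; the Grafana package ships its own &lt;code&gt;grafana-server&lt;/code&gt; unit):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sudo systemctl daemon-reload
sudo systemctl enable --now prometheus alertmanager node-exporter blackbox-exporter dora-metrics grafana-server
sudo systemctl status prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;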



&lt;h2&gt;
  
  
  📝 Part 3: Configuration Files
&lt;/h2&gt;

&lt;p&gt;Time to set up the actual monitoring configuration. This is where we tell our stack what to monitor and how to alert.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prometheus Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;/etc/prometheus/prometheus.yml&lt;/code&gt; and add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;global&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;scrape_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;15s&lt;/span&gt;
  &lt;span class="na"&gt;evaluation_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;15s&lt;/span&gt;

&lt;span class="c1"&gt;# Alertmanager configuration&lt;/span&gt;
&lt;span class="na"&gt;alerting&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;alertmanagers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;static_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;localhost:9093'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;        

&lt;span class="c1"&gt;# Load rules once and periodically evaluate them&lt;/span&gt;
&lt;span class="na"&gt;rule_files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rules/node_exporter_alerts.yml"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rules/blackbox_alerts.yml"&lt;/span&gt; 
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rules/dora_alerts.yml"&lt;/span&gt;

&lt;span class="na"&gt;scrape_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;prometheus'&lt;/span&gt;
    &lt;span class="na"&gt;static_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;localhost:9090'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;node_exporter'&lt;/span&gt;
    &lt;span class="na"&gt;scrape_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
    &lt;span class="na"&gt;static_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;localhost:9100'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

  &lt;span class="c1"&gt;# PM2 metrics from host (for NextJS apps)&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;pm2'&lt;/span&gt;
    &lt;span class="na"&gt;scrape_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
    &lt;span class="na"&gt;static_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;localhost:9209'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

  &lt;span class="c1"&gt;# Blackbox exporter for HTTP/HTTPS uptime and SSL monitoring&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;blackbox_http'&lt;/span&gt;
    &lt;span class="na"&gt;metrics_path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/probe&lt;/span&gt;
    &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;module&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;http_2xx&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;  &lt;span class="c1"&gt;# Look for a HTTP 200 response&lt;/span&gt;
    &lt;span class="na"&gt;static_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://website-url&lt;/span&gt;
        &lt;span class="c1"&gt;# Add more URLs as needed&lt;/span&gt;
    &lt;span class="na"&gt;relabel_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source_labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;__address__&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;target_label&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;__param_target&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source_labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;__param_target&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;target_label&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;instance&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;target_label&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;__address__&lt;/span&gt;
        &lt;span class="na"&gt;replacement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:9115&lt;/span&gt;  &lt;span class="c1"&gt;# Blackbox exporter's address&lt;/span&gt;

  &lt;span class="c1"&gt;# Blackbox exporter for SSL monitoring&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;blackbox_ssl'&lt;/span&gt;
    &lt;span class="na"&gt;metrics_path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/probe&lt;/span&gt;
    &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;module&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;  &lt;span class="c1"&gt;# Use the TLS probe&lt;/span&gt;
    &lt;span class="na"&gt;static_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;website-url:443&lt;/span&gt;
    &lt;span class="na"&gt;relabel_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source_labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;__address__&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;target_label&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;__param_target&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source_labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;__param_target&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;target_label&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;instance&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;target_label&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;__address__&lt;/span&gt;
        &lt;span class="na"&gt;replacement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:9115&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;dora-metrics'&lt;/span&gt;
    &lt;span class="na"&gt;static_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;localhost:8000'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
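&lt;p&gt;Once the rule files referenced above exist (we create them next), you can validate the whole configuration with promtool before restarting Prometheus:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;promtool check config /etc/prometheus/prometheus.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;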



&lt;h3&gt;
  
  
  Alert Rules
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Server Health Alerts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;/etc/prometheus/rules/node_exporter_alerts.yml&lt;/code&gt; and add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;groups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node-exporter&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HighCPULoad&lt;/span&gt;
    &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100 - (avg by(instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) &amp;gt; &lt;/span&gt;&lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2m&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;warning&lt;/span&gt;
    &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;High&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;CPU&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;load&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels.instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}})"&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CPU&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;load&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;is&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;80%&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;VALUE&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$value&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}%&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;LABELS:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HighMemoryLoad&lt;/span&gt;
    &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(node_memory_MemTotal_bytes - (node_memory_MemFree_bytes + node_memory_Buffers_bytes + node_memory_Cached_bytes)) / node_memory_MemTotal_bytes * 100 &amp;gt; &lt;/span&gt;&lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2m&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;warning&lt;/span&gt;
    &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;High&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;memory&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;load&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels.instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}})"&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Memory&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;load&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;is&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;80%&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;VALUE&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$value&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}%&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;LABELS:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HighDiskUsage&lt;/span&gt;
    &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(node_filesystem_size_bytes{fstype!~"tmpfs|fuse.lxcfs|squashfs|vfat"} - node_filesystem_free_bytes{fstype!~"tmpfs|fuse.lxcfs|squashfs|vfat"}) / node_filesystem_size_bytes{fstype!~"tmpfs|fuse.lxcfs|squashfs|vfat"} * 100 &amp;gt; &lt;/span&gt;&lt;span class="m"&gt;85&lt;/span&gt;
    &lt;span class="na"&gt;for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5m&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;warning&lt;/span&gt;
    &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;High&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;disk&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;usage&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels.instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}})"&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Disk&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;usage&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;is&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;85%&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;VALUE&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$value&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}%&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;LABELS:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Website Availability Alerts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;/etc/prometheus/rules/blackbox_alerts.yml&lt;/code&gt; with the following rules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;groups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;blackbox-exporter&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;EndpointDown&lt;/span&gt;
    &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;probe_success == &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;
    &lt;span class="na"&gt;for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1m&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;critical&lt;/span&gt;
    &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Endpoint&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;down&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels.instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}})"&lt;/span&gt; 
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Endpoint&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels.instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;is&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;down&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;VALUE&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$value&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;LABELS:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SlowResponseTime&lt;/span&gt;
    &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;probe_duration_seconds &amp;gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5m&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;warning&lt;/span&gt;
    &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Slow&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;response&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;time&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels.instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}})"&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Response&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;time&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;is&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;1s&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;VALUE&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$value&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}s&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;LABELS:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SSLCertExpiringSoon&lt;/span&gt;
    &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;probe_ssl_earliest_cert_expiry - time() &amp;lt; 86400 * &lt;/span&gt;&lt;span class="m"&gt;30&lt;/span&gt;
    &lt;span class="na"&gt;for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5m&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;warning&lt;/span&gt;
    &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SSL&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;certificate&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;expiring&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;soon&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels.instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}})"&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SSL&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;certificate&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;expires&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;in&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;less&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;than&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;30&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;days&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;VALUE&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$value&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;days&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;LABELS:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SSLCertExpired&lt;/span&gt;
    &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;probe_ssl_earliest_cert_expiry - time() &amp;lt;= &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;
    &lt;span class="na"&gt;for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1m&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;critical&lt;/span&gt;
    &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SSL&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;certificate&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;expired&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels.instance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}})"&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SSL&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;certificate&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;has&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;expired&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;VALUE&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$value&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;LABELS:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$labels&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;DORA Performance Metrics&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;/etc/prometheus/rules/dora_alerts.yml&lt;/code&gt; with the following rules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;groups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dora-metrics&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HighChangeFailureRate&lt;/span&gt;
    &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(sum(increase(github_deployment_failed_total[7d])) / sum(increase(github_deployment_total[7d]))) * 100 &amp;gt; &lt;/span&gt;&lt;span class="m"&gt;15&lt;/span&gt;
    &lt;span class="na"&gt;for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1h&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;warning&lt;/span&gt;
    &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;High&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;change&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;failure&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;rate"&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Change&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;failure&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;rate&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;is&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;15%&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;over&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;the&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;last&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;7&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;days&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;VALUE&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$value&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}%"&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LongMeanTimeToRestore&lt;/span&gt;
    &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;avg(github_incident_mttr_seconds) / 60 &amp;gt; &lt;/span&gt;&lt;span class="m"&gt;60&lt;/span&gt;
    &lt;span class="na"&gt;for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1h&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;warning&lt;/span&gt;
    &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Long&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;mean&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;time&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;restore"&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Mean&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;time&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;restore&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;is&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;60&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;minutes&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;VALUE&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$value&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;minutes"&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LowDeploymentFrequency&lt;/span&gt;
    &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sum(increase(github_deployment_total[7d])) &amp;lt; &lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;
    &lt;span class="na"&gt;for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1d&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;warning&lt;/span&gt;
    &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Low&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;deployment&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;frequency"&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Less&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;than&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;3&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;deployments&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;in&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;the&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;last&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;7&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;days&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;VALUE&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$value&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;deployments"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
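&lt;p&gt;Before pointing Prometheus at these files, it's worth validating them. Here's a quick check with &lt;code&gt;promtool&lt;/code&gt; (assuming your rule files all live under &lt;code&gt;/etc/prometheus/rules/&lt;/code&gt; as above, and that Prometheus was started with &lt;code&gt;--web.enable-lifecycle&lt;/code&gt; if you want the hot reload):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Validate every alert rule file for syntax and template errors
promtool check rules /etc/prometheus/rules/*.yml

# Reload Prometheus without a restart (requires --web.enable-lifecycle)
curl -X POST http://localhost:9090/-/reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;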



&lt;p&gt;&lt;strong&gt;AlertManager Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To build the Alertmanager config, we first need our Slack webhook URL. We could paste the URL directly into the config file, but that's a serious security risk: anyone with read access to the config (or a copy of it in version control) gets the webhook, and Slack deactivates webhook URLs it finds exposed publicly.&lt;/p&gt;

&lt;p&gt;Instead, we'll save the webhook URL in a separate file, set sensible permissions on it, and point the Alertmanager config at that file.&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;/etc/alertmanager/slack_api_url&lt;/code&gt; and write ONLY your webhook URL into it.&lt;/p&gt;
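&lt;p&gt;For example (the URL below is a placeholder; substitute your real webhook):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Write the webhook URL (and nothing else) into the file
echo "https://hooks.slack.com/services/XXX/YYY/ZZZ" | sudo tee /etc/alertmanager/slack_api_url
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;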

&lt;p&gt;Next, set the permissions so that only the owner can modify the file, while everyone else gets read-only access:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod &lt;/span&gt;644 /etc/alertmanager/slack_api_url
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
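&lt;p&gt;Before wiring the webhook into Alertmanager, you can confirm it actually works by sending a test message (this reads the URL from the file we just created):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Send a test notification straight to Slack
curl -X POST -H 'Content-type: application/json' \
  --data '{"text":"Alertmanager webhook test"}' \
  "$(cat /etc/alertmanager/slack_api_url)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;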



&lt;p&gt;Next, create &lt;code&gt;/etc/alertmanager/alertmanager.yml&lt;/code&gt; with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;global&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;resolve_timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5m&lt;/span&gt;
  &lt;span class="na"&gt;slack_api_url_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/etc/alertmanager/slack_api_url'&lt;/span&gt;

&lt;span class="na"&gt;route&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;group_by&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;alertname'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;instance'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;job'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;group_wait&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30s&lt;/span&gt;
  &lt;span class="na"&gt;group_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5m&lt;/span&gt;
  &lt;span class="na"&gt;repeat_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;4h&lt;/span&gt;
  &lt;span class="na"&gt;receiver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;slack-notifications'&lt;/span&gt;
  &lt;span class="na"&gt;routes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;critical&lt;/span&gt;
      &lt;span class="na"&gt;receiver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;slack-critical'&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;warning&lt;/span&gt;
      &lt;span class="na"&gt;receiver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;slack-warning'&lt;/span&gt;

&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;slack-notifications'&lt;/span&gt;
    &lt;span class="na"&gt;slack_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;send_resolved&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;channel&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;#devops-alerts'&lt;/span&gt;
        &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;if&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;eq&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.Status&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;"firing"&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}🔴&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ALERT{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;else&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}🟢&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;RESOLVED{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;end&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.CommonLabels.alertname&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;
        &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;{{ if eq .Status "firing" }}*SYSTEM ALERT*{{ else }}*SYSTEM RECOVERED*{{ end }}&lt;/span&gt;

          &lt;span class="s"&gt;{{ range .Alerts }}&lt;/span&gt;
          &lt;span class="s"&gt;*{{ .Annotations.summary }}*&lt;/span&gt;
          &lt;span class="s"&gt;{{ .Annotations.description }}&lt;/span&gt;

          &lt;span class="s"&gt;*⏰ Incident Details:*&lt;/span&gt;
          &lt;span class="s"&gt;• Started: {{ .StartsAt }}&lt;/span&gt;
          &lt;span class="s"&gt;• Status: {{ .Status | toUpper }}&lt;/span&gt;

          &lt;span class="s"&gt;*🔍 Technical Information:*&lt;/span&gt;
          &lt;span class="s"&gt;• System: {{ .Labels.instance }}&lt;/span&gt;
          &lt;span class="s"&gt;• Job: {{ .Labels.job }}&lt;/span&gt;
          &lt;span class="s"&gt;• Severity: {{ .Labels.severity }}&lt;/span&gt;

          &lt;span class="s"&gt;*👥 Impact Assessment:*&lt;/span&gt;
          &lt;span class="s"&gt;• Users affected: {{ if eq .Labels.job "blackbox_http" }}Website visitors{{ else }}Service users{{ end }}&lt;/span&gt;

          &lt;span class="s"&gt;*👥 Team to Notify:* @devops-team&lt;/span&gt;
          &lt;span class="s"&gt;{{ end }}&lt;/span&gt;
        &lt;span class="na"&gt;icon_emoji&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;if&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;eq&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.Status&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;"firing"&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}:red_circle:{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;else&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}:green_circle:{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;end&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;slack-critical'&lt;/span&gt;
    &lt;span class="na"&gt;slack_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;send_resolved&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;channel&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;#devops-alerts'&lt;/span&gt;
        &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;if&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;eq&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.Status&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;"firing"&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}🔴&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;CRITICAL{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;else&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}🟢&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;RESOLVED{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;end&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.CommonLabels.alertname&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;
        &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;{{ if eq .Status "firing" }}*CRITICAL SYSTEM ALERT*{{ else }}*SYSTEM RECOVERED*{{ end }}&lt;/span&gt;

          &lt;span class="s"&gt;{{ range .Alerts }}&lt;/span&gt;
          &lt;span class="s"&gt;*{{ .Annotations.summary }}*&lt;/span&gt;
          &lt;span class="s"&gt;{{ .Annotations.description }}&lt;/span&gt;

          &lt;span class="s"&gt;*⏰ Incident Details:*&lt;/span&gt;
          &lt;span class="s"&gt;• Started: {{ .StartsAt }}&lt;/span&gt;
          &lt;span class="s"&gt;• Status: {{ .Status | toUpper }}&lt;/span&gt;

          &lt;span class="s"&gt;*🔍 Technical Information:*&lt;/span&gt;
          &lt;span class="s"&gt;• System: {{ .Labels.instance }}&lt;/span&gt;
          &lt;span class="s"&gt;• Job: {{ .Labels.job }}&lt;/span&gt;
          &lt;span class="s"&gt;{{ if eq .Labels.job "blackbox_http" }}• Error: Connection failed&lt;/span&gt;
          &lt;span class="s"&gt;• HTTP Status: No response{{ end }}&lt;/span&gt;

          &lt;span class="s"&gt;*👥 Impact Assessment:*&lt;/span&gt;
          &lt;span class="s"&gt;• Severity: Critical&lt;/span&gt;
          &lt;span class="s"&gt;• User Impact: {{ if eq .Labels.job "blackbox_http" }}All website users affected{{ else }}Service degradation{{ end }}&lt;/span&gt;

          &lt;span class="s"&gt;*🚨 Attention:* &amp;lt;@U089ZLRDV1N&amp;gt; &amp;lt;@U08BD88M87J&amp;gt; &amp;lt;@U08AN9DLDMH&amp;gt; &amp;lt;@U08AQNQVC8L&amp;gt; &amp;lt;@U08B8JT5RAN&amp;gt; &amp;lt;@U08ANPWRP9Q&amp;gt; &amp;lt;@U08A81WGZHV&amp;gt; &amp;lt;@U08A8QAN2P9&amp;gt; &amp;lt;@U08AP0AGQ9Z&amp;gt; &amp;lt;@U08AMPHCPSN&amp;gt;&lt;/span&gt;
          &lt;span class="s"&gt;{{ end }}&lt;/span&gt;
        &lt;span class="na"&gt;icon_emoji&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;if&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;eq&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.Status&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;"firing"&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}:fire:{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;else&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}:white_check_mark:{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;end&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;
        &lt;span class="na"&gt;link_names&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;slack-warning'&lt;/span&gt;
    &lt;span class="na"&gt;slack_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;send_resolved&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;channel&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;#devops-alerts'&lt;/span&gt;
        &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;if&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;eq&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.Status&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;"firing"&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}⚠️&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;WARNING{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;else&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}🟢&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;RESOLVED{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;end&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.CommonLabels.alertname&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;
        &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;{{ if eq .Status "firing" }}*WARNING ALERT*{{ else }}*WARNING RESOLVED*{{ end }}&lt;/span&gt;

          &lt;span class="s"&gt;{{ range .Alerts }}&lt;/span&gt;
          &lt;span class="s"&gt;*{{ .Annotations.summary }}*&lt;/span&gt;
          &lt;span class="s"&gt;{{ .Annotations.description }}&lt;/span&gt;

          &lt;span class="s"&gt;*⏰ Incident Details:*&lt;/span&gt;
          &lt;span class="s"&gt;• Started: {{ .StartsAt }}&lt;/span&gt;
          &lt;span class="s"&gt;• Status: {{ .Status | toUpper }}&lt;/span&gt;

          &lt;span class="s"&gt;*🔍 Technical Information:*&lt;/span&gt;
          &lt;span class="s"&gt;• System: {{ .Labels.instance }}&lt;/span&gt;
          &lt;span class="s"&gt;• Job: {{ .Labels.job }}&lt;/span&gt;
          &lt;span class="s"&gt;{{ if eq .Labels.alertname "SlowResponseTime" }}• Response Time: {{ if eq .Labels.job "blackbox_http" }}Slow{{ end }}{{ end }}&lt;/span&gt;
          &lt;span class="s"&gt;{{ if eq .Labels.alertname "SSLCertExpiringSoon" }}• Certificate Expires: Soon{{ end }}&lt;/span&gt;
          &lt;span class="s"&gt;{{ if eq .Labels.alertname "HighCPULoad" }}• CPU Load: High{{ end }}&lt;/span&gt;
          &lt;span class="s"&gt;{{ if eq .Labels.alertname "HighMemoryLoad" }}• Memory Use: High{{ end }}&lt;/span&gt;
          &lt;span class="s"&gt;{{ if eq .Labels.alertname "HighDiskUsage" }}• Disk Usage: High{{ end }}&lt;/span&gt;

          &lt;span class="s"&gt;*👥 Impact Assessment:*&lt;/span&gt;
          &lt;span class="s"&gt;• Severity: Warning&lt;/span&gt;
          &lt;span class="s"&gt;• User Impact: Potential performance degradation&lt;/span&gt;

          &lt;span class="s"&gt;*💡 Recommended Actions:*&lt;/span&gt;
          &lt;span class="s"&gt;{{ if eq .Labels.alertname "SlowResponseTime" }}Check database queries or high backend resource usage.{{ else if eq .Labels.alertname "SSLCertExpiringSoon" }}Renew SSL certificate before expiration.{{ else if eq .Labels.alertname "HighCPULoad" }}Identify CPU-intensive processes and optimize.{{ else if eq .Labels.alertname "HighMemoryLoad" }}Check for memory leaks or increase available memory.{{ else if eq .Labels.alertname "HighDiskUsage" }}Clean up disk space or expand storage.{{ end }}&lt;/span&gt;

          &lt;span class="s"&gt;*🚨 Attention:* &amp;lt;@U089ZLRDV1N&amp;gt; &amp;lt;@U08BD88M87J&amp;gt; &amp;lt;@U08AN9DLDMH&amp;gt; &amp;lt;@U08AQNQVC8L&amp;gt; &amp;lt;@U08B8JT5RAN&amp;gt; &amp;lt;@U08ANPWRP9Q&amp;gt; &amp;lt;@U08A81WGZHV&amp;gt; &amp;lt;@U08A8QAN2P9&amp;gt; &amp;lt;@U08AP0AGQ9Z&amp;gt; &amp;lt;@U08AMPHCPSN&amp;gt;&lt;/span&gt;
          &lt;span class="s"&gt;{{ end }}&lt;/span&gt;
        &lt;span class="na"&gt;icon_emoji&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;if&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;eq&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.Status&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;"firing"&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}:warning:{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;else&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}:white_check_mark:{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;end&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;
        &lt;span class="na"&gt;link_names&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Blackbox Exporter Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;/etc/blackbox_exporter/blackbox.yml&lt;/code&gt; with the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;modules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;http_2xx&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;prober&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5s&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;valid_http_versions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;HTTP/1.1"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;HTTP/2.0"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;valid_status_codes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;200&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GET&lt;/span&gt;
      &lt;span class="na"&gt;follow_redirects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;preferred_ip_protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ip4"&lt;/span&gt;
      &lt;span class="na"&gt;tls_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;insecure_skip_verify&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;

  &lt;span class="na"&gt;http_post_2xx&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;prober&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5s&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;POST&lt;/span&gt;
      &lt;span class="na"&gt;preferred_ip_protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ip4"&lt;/span&gt;

  &lt;span class="na"&gt;tcp_connect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;prober&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tcp&lt;/span&gt;
    &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5s&lt;/span&gt;

  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;prober&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tcp&lt;/span&gt;
    &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5s&lt;/span&gt;
    &lt;span class="na"&gt;tcp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;preferred_ip_protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ip4"&lt;/span&gt;
      &lt;span class="na"&gt;tls_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;insecure_skip_verify&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
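The modules above only define how probes run; Prometheus still has to tell the exporter what to probe. If your `prometheus.yml` doesn't already wire this in, a scrape job for the `http_2xx` module typically looks like the sketch below (assuming Blackbox Exporter listens on `localhost:9115`; `https://example.com` is a placeholder for your own URL):

```yaml
scrape_configs:
  - job_name: 'blackbox_http'
    metrics_path: /probe
    params:
      module: [http_2xx]          # which module from blackbox.yml to use
    static_configs:
      - targets:
          - https://example.com   # placeholder: the URL you want probed
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target   # pass the target as the ?target= parameter
      - source_labels: [__param_target]
        target_label: instance         # keep the probed URL as the instance label
      - target_label: __address__
        replacement: localhost:9115    # actually scrape the exporter itself
```

The `job_name` here matches the `blackbox_http` label the alert templates check for.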



&lt;h2&gt;
  
  
  🔔 Part 4: Integrating with Slack
&lt;/h2&gt;

&lt;p&gt;Real-time notifications are crucial for fast incident response. Let's set up Slack integration:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Slack App:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Go to &lt;a href="https://api.slack.com/apps" rel="noopener noreferrer"&gt;Slack API Apps&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click "Create New App" → "From scratch"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Name your app (e.g., "DeploymentMonitor")&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select your workspace&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2. Enable Incoming Webhooks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Navigate to "Incoming Webhooks" in the sidebar&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Toggle "Activate Incoming Webhooks" to ON&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click "Add New Webhook to Workspace"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the channel for receiving alerts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy the Webhook URL and replace the placeholder in your alertmanager.yml&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
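Before wiring the webhook into Alertmanager, it can help to sanity-check the payload shape yourself. The sketch below builds a message mirroring the title/emoji logic from our Slack config; the webhook URL in the comment is a placeholder, not a real endpoint:

```python
import json

def build_slack_payload(status: str, alertname: str) -> dict:
    """Mimic the firing/resolved formatting used in the Alertmanager config."""
    firing = status == "firing"
    return {
        "channel": "#devops-alerts",
        "icon_emoji": ":fire:" if firing else ":white_check_mark:",
        "text": f"{'🔴 CRITICAL' if firing else '🟢 RESOLVED'}: {alertname}",
    }

payload = build_slack_payload("firing", "ServiceDown")
print(json.dumps(payload, ensure_ascii=False))

# To actually post it (replace the placeholder URL with your real webhook):
#   import urllib.request
#   req = urllib.request.Request(
#       "https://hooks.slack.com/services/T000/B000/XXXX",  # placeholder
#       data=json.dumps(payload).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   urllib.request.urlopen(req)
```

If the curl/urllib POST returns `ok`, the webhook and channel are wired up correctly.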

&lt;p&gt;3. Configure Alert Formatting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Our configuration creates informative, well-formatted alerts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Critical alerts use 🔴 while resolved alerts use 🟢&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Alerts include relevant links and contextual information&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgrg5ju8affj1zcp3mj6p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgrg5ju8affj1zcp3mj6p.png" alt="Slack Alert example" width="800" height="591"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An example of a Slack alert&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  🖥️ Part 5: Launching Your Monitoring Stack
&lt;/h2&gt;

&lt;p&gt;Time to bring everything online:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Enable services to start at boot&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;prometheus alertmanager node-exporter blackbox-exporter grafana-server dora-metrics

&lt;span class="c"&gt;# Start all services&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start prometheus alertmanager node-exporter blackbox-exporter grafana-server dora-metrics

&lt;span class="c"&gt;# Verify everything is running&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status prometheus alertmanager node-exporter blackbox-exporter grafana-server dora-metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything was set up correctly, you should see success messages like these:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcd8ayha71vls8qmmnskg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcd8ayha71vls8qmmnskg.png" alt="Image description" width="800" height="543"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjd3c5r9mxlupigfpc4t0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjd3c5r9mxlupigfpc4t0.png" alt="Image description" width="800" height="545"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  📊 Part 6: Setting Up Grafana Dashboards
&lt;/h2&gt;

&lt;p&gt;Now for the exciting part: visualizing all this data!&lt;/p&gt;

&lt;p&gt;1. Access Grafana:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open your browser to &lt;code&gt;http://&amp;lt;your-server-ip&amp;gt;:3000&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Default credentials are admin/admin&lt;/li&gt;
&lt;li&gt;You'll be prompted to set a new password&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2. Add Prometheus as a Data Source:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click the gear icon (Configuration) → Data Sources&lt;/li&gt;
&lt;li&gt;Click "Add data source" and select "Prometheus"&lt;/li&gt;
&lt;li&gt;Set the URL to &lt;a href="http://localhost:9090" rel="noopener noreferrer"&gt;http://localhost:9090&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Click "Save &amp;amp; Test"&lt;/li&gt;
&lt;/ul&gt;
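If you prefer configuration as code, Grafana can also provision the data source from a file instead of the UI. A minimal sketch, using Grafana's standard provisioning directory:

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
```

Restart `grafana-server` after adding the file and the data source appears automatically.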

&lt;p&gt;3. Import Pre-built Dashboards:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click "+" → "Import"&lt;/li&gt;
&lt;li&gt;Enter a dashboard ID:&lt;/li&gt;
&lt;li&gt;Node Exporter: 1860&lt;/li&gt;
&lt;li&gt;Blackbox Exporter: 7587&lt;/li&gt;
&lt;li&gt;Select Prometheus as the data source&lt;/li&gt;
&lt;li&gt;Click "Import"&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5mj1898gz4lidtx777v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5mj1898gz4lidtx777v.png" alt="Node Exporter Dashboard" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Node Exporter dashboard&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4thz3asz2pwb7ujhobqm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4thz3asz2pwb7ujhobqm.png" alt="Blackbox Dashboard" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Blackbox Dashboard&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;br&gt; 4. Import DORA Metrics Dashboard:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the JSON file from &lt;a href="https://github.com/NonsoEchendu/dora-metrics" rel="noopener noreferrer"&gt;NonsoEchendu/dora-metrics&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;This provides detailed visualization of your software delivery performance&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  📈 Part 7: Understanding DORA Metrics
&lt;/h2&gt;

&lt;p&gt;Our monitoring setup includes tracking of DORA metrics, which are industry-standard measures of development team performance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployment Frequency (DF): How often you successfully release to production&lt;/li&gt;
&lt;li&gt;Lead Time for Changes (LTC): Time from code commit to production deployment&lt;/li&gt;
&lt;li&gt;Change Failure Rate (CFR): Percentage of deployments causing a failure in production&lt;/li&gt;
&lt;li&gt;Mean Time to Restore (MTTR): How quickly service is restored after an incident&lt;/li&gt;
&lt;/ul&gt;
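To make these definitions concrete, here's a small sketch of how the four metrics can be computed from raw deployment and incident records. The data is entirely hypothetical and the field layout (`commit_time`, `deploy_time`, `caused_failure`) is just an illustration, not the schema our DORA service uses:

```python
from datetime import datetime, timedelta

# Hypothetical records: (commit_time, deploy_time, caused_failure)
deployments = [
    (datetime(2025, 4, 1, 9, 0), datetime(2025, 4, 1, 11, 0), False),
    (datetime(2025, 4, 2, 10, 0), datetime(2025, 4, 2, 15, 0), True),
    (datetime(2025, 4, 4, 8, 0), datetime(2025, 4, 4, 9, 30), False),
    (datetime(2025, 4, 7, 13, 0), datetime(2025, 4, 7, 14, 0), False),
]
# Hypothetical incident durations (detection to restoration)
incidents = [timedelta(minutes=45)]

period_days = 7
df = len(deployments) / period_days                    # Deployment Frequency (per day)
ltc = sum(((d - c) for c, d, _ in deployments), timedelta()) / len(deployments)
cfr = sum(1 for *_, failed in deployments if failed) / len(deployments)
mttr = sum(incidents, timedelta()) / len(incidents)

print(f"Deployment Frequency: {df:.2f}/day")   # 0.57/day
print(f"Lead Time for Changes: {ltc}")          # 2:22:30
print(f"Change Failure Rate: {cfr:.0%}")       # 25%
print(f"MTTR: {mttr}")                          # 0:45:00
```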

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ks8qc88zxi2rop97nr5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ks8qc88zxi2rop97nr5.png" alt="Dora Metrics Dashboard" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DORA Metrics dashboard&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;br&gt; These metrics help you understand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How responsive your development process is&lt;/li&gt;
&lt;li&gt;How stable your changes are&lt;/li&gt;
&lt;li&gt;How quickly you can recover from failures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The alerts we've configured will notify you when these metrics fall outside healthy ranges, enabling continuous improvement of your development practices. &lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🔮 Conclusion: Building a Data-Driven Culture
&lt;/h2&gt;

&lt;p&gt;Yipeee! You've successfully implemented a comprehensive monitoring solution that provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time visibility into system performance and availability&lt;/li&gt;
&lt;li&gt;Early warnings for potential issues&lt;/li&gt;
&lt;li&gt;Insights into your development practices&lt;/li&gt;
&lt;li&gt;Clear metrics for measuring improvements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By combining system monitoring with DORA metrics, you've created a foundation for a truly data-driven engineering culture. Use these insights to fuel your next improvements, and watch your team's effectiveness and your application's reliability grow together.&lt;/p&gt;

&lt;p&gt;Till the next, happy building!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a FastAPI Book API with CI/CD Pipelines (Using Github Actions) and Docker Deployment</title>
      <dc:creator>Nonso Echendu</dc:creator>
      <pubDate>Thu, 13 Feb 2025 15:33:10 +0000</pubDate>
      <link>https://dev.to/nonso_echendu_001/building-a-fastapi-book-api-with-cicd-pipelines-using-github-actions-and-docker-deployment-c56</link>
      <guid>https://dev.to/nonso_echendu_001/building-a-fastapi-book-api-with-cicd-pipelines-using-github-actions-and-docker-deployment-c56</guid>
<description>&lt;p&gt;In this article, I’ll walk you through the development of a FastAPI-based book API, complete with a Continuous Integration (CI) and Continuous Deployment (CD) pipeline. This project was part of a DevOps challenge with HNG; the API retrieves book details by ID and is deployed to an AWS EC2 instance using Docker.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Overview
&lt;/h2&gt;

&lt;p&gt;The repository provided already included a predefined structure, and we had to ensure that all endpoints were correctly implemented and accessible via Nginx.&lt;/p&gt;

&lt;p&gt;These were the goals to be achieved in this challenge:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Add an endpoint to retrieve a book by its ID, accessible via the URL path &lt;code&gt;/api/v1/books/{book_id}&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set up a CI pipeline that automates tests whenever a pull request is made to the &lt;code&gt;main&lt;/code&gt; branch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set up a CD pipeline that automates the deployment process whenever changes are pushed to the &lt;code&gt;main&lt;/code&gt; branch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Serve the application behind Nginx.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fastapi-book-project/
├── api/
│   ├── db/
│   │   ├── __init__.py
│   │   └── schemas.py      # Data models and in-memory database
│   ├── routes/
│   │   ├── __init__.py
│   │   └── books.py        # Book route handlers
│   └── router.py           # API router configuration
├── core/
│   ├── __init__.py
│   └── config.py           # Application settings
├── tests/
│   ├── __init__.py
│   └── test_books.py       # API endpoint tests
├── main.py                 # Application entry point
├── requirements.txt        # Project dependencies
└── README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Docker. To install Docker, see &lt;a href="https://docs.docker.com/get-started/get-docker/" rel="noopener noreferrer"&gt;the official documentation&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setting Up
&lt;/h2&gt;

&lt;p&gt;The initial step was cloning the provided git repo, which contained the basic project structure. Here's a link to the project's git repo,  &lt;a href="https://github.com/NonsoEchendu/fastapi-book-project" rel="noopener noreferrer"&gt;https://github.com/NonsoEchendu/fastapi-book-project&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To clone:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/NonsoEchendu/fastapi-book-project.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Challenge Task #1
&lt;/h2&gt;

&lt;p&gt;So starting with the first task in the challenge: &lt;strong&gt;Add an endpoint to retrieve a book by its ID&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To do this, I need to open the &lt;code&gt;books.py&lt;/code&gt; file in the &lt;code&gt;api/routes/&lt;/code&gt; folder and add the &lt;code&gt;/api/v1/books/{book_id}&lt;/code&gt; endpoint.&lt;/p&gt;

&lt;p&gt;I'm also going to add a &lt;code&gt;get_book&lt;/code&gt; function to retrieve the book details by ID. If the book is not found, the function raises a &lt;code&gt;404 Not Found&lt;/code&gt; error.&lt;/p&gt;

&lt;p&gt;Endpoint definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@router.get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/{book_id}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response_model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;Book&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HTTP_200_OK&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_book&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;book_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Book&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;book&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;books&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;book_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;book&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;HTTPException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;404&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;detail&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Book not found&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;book&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;br&gt;Next, I tested whether the endpoint I just added works.&lt;/p&gt;

&lt;p&gt;To test, I'll start the application with Uvicorn:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uvicorn app.main:app &lt;span class="nt"&gt;--reload&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I accessed the endpoint in my browser at &lt;code&gt;http://127.0.0.1:8000/api/v1/books/1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhm2ufp67ghv1cpudpw0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhm2ufp67ghv1cpudpw0.png" alt="Image description" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It works!&lt;/p&gt;



&lt;h2&gt;
  
  
  Dockerizing the Application
&lt;/h2&gt;

&lt;p&gt;Before moving on to the next challenge tasks (creating the CI/CD pipelines), I'll containerize the app with Docker.&lt;/p&gt;

&lt;p&gt;So first, I'll create a &lt;code&gt;Dockerfile&lt;/code&gt; in the project's root directory. It installs the dependencies, copies in the application code, and starts the app with Uvicorn.&lt;/p&gt;

&lt;p&gt;Here's the Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.9-slim

WORKDIR /app

COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can build the application into a Docker image and run it inside a container while maintaining a clean, isolated environment.&lt;/p&gt;

&lt;p&gt;To build the image, while in the project's root directory, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; fastapi-book &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run the app in a container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 8000:8000 fastapi-book
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should also be able to access it in your browser at &lt;code&gt;http://127.0.0.1:8000/api/v1/books/1&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenge Task #2
&lt;/h2&gt;

&lt;p&gt;Moving to the next task: &lt;strong&gt;Setting up a CI (Continuous Integration) pipeline&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This CI pipeline simply runs &lt;code&gt;pytest&lt;/code&gt; automatically whenever a pull request is opened against the &lt;code&gt;main&lt;/code&gt; branch. These tests are important because they ensure that changes to the codebase don't break existing functionality. &lt;/p&gt;

&lt;p&gt;The CI pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CI Pipeline&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout Repository&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v3&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set Up Python&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-python@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;python-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.9"&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install Dependencies&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;python -m venv venv&lt;/span&gt;
          &lt;span class="s"&gt;source venv/bin/activate&lt;/span&gt;
          &lt;span class="s"&gt;pip install --upgrade pip&lt;/span&gt;
          &lt;span class="s"&gt;pip install -r requirements.txt&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run Tests&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;source venv/bin/activate&lt;/span&gt;
          &lt;span class="s"&gt;pytest --maxfail=1 --disable-warnings&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
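&lt;p&gt;The tests this pipeline runs look roughly like the sketch below. This is an assumption on my part - the real suite presumably drives the app through FastAPI's &lt;code&gt;TestClient&lt;/code&gt; - but the 200-vs-404 shape is the same, and this version runs standalone:&lt;/p&gt;

```python
# Hypothetical pytest-style tests in the spirit of what the CI step
# collects. fake_get_book() simulates the books endpoint and returns
# (status_code, body), so no web framework is needed to run the file.

def fake_get_book(book_id):
    books = {1: {"id": 1, "title": "Sample Book"}}  # hypothetical fixture
    if book_id in books:
        return 200, books[book_id]
    return 404, {"detail": "Book not found"}

def test_existing_book_returns_200():
    status, body = fake_get_book(1)
    assert status == 200
    assert body["id"] == 1

def test_missing_book_returns_404():
    status, body = fake_get_book(999)
    assert status == 404
    assert body["detail"] == "Book not found"
```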



&lt;h2&gt;
  
  
  Challenge Task #3
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Setting up a CD (Continuous Deployment) pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This CD pipeline will automate the deployment process whenever changes are pushed to the main branch.&lt;/p&gt;

&lt;p&gt;The CD pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment Pipeline&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;SSH_PRIVATE_KEY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SSH_PRIVATE_KEY }}&lt;/span&gt;
          &lt;span class="na"&gt;SERVER_IP&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SERVER_IP }}&lt;/span&gt;
          &lt;span class="na"&gt;USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SERVER_USER }}&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout Repository&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v3&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Debug Environment Variables&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;echo "Server IP: $SERVER_IP"&lt;/span&gt;
          &lt;span class="s"&gt;echo "User: $USER"&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set Up SSH&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;# Setup SSH&lt;/span&gt;
          &lt;span class="s"&gt;mkdir -p ~/.ssh &lt;/span&gt;
          &lt;span class="s"&gt;echo "$SSH_PRIVATE_KEY" | tr -d '\r' &amp;gt; ~/.ssh/id_rsa&lt;/span&gt;
          &lt;span class="s"&gt;chmod 600 ~/.ssh/id_rsa&lt;/span&gt;
          &lt;span class="s"&gt;ssh-keyscan -H $SERVER_IP &amp;gt;&amp;gt; ~/.ssh/known_hosts&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prepare EC2 Instance &amp;amp; Pull Latest Code&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;ssh $USER@$SERVER_IP &amp;lt;&amp;lt; 'EOF'&lt;/span&gt;
            &lt;span class="s"&gt;set -e  # Stop script if any command fails&lt;/span&gt;

            &lt;span class="s"&gt;# Ensure project directory exists&lt;/span&gt;
            &lt;span class="s"&gt;if [ ! -d "/home/$USER/fastapi-book-project" ]; then&lt;/span&gt;
              &lt;span class="s"&gt;git clone https://github.com/NonsoEchendu/fastapi-book-project.git /home/$USER/fastapi-book-project&lt;/span&gt;
            &lt;span class="s"&gt;fi&lt;/span&gt;

            &lt;span class="s"&gt;# Go to project folder&lt;/span&gt;
            &lt;span class="s"&gt;cd /home/$USER/fastapi-book-project&lt;/span&gt;

            &lt;span class="s"&gt;# Fetch latest changes&lt;/span&gt;
            &lt;span class="s"&gt;git reset --hard  # Ensure a clean state&lt;/span&gt;
            &lt;span class="s"&gt;git pull origin main&lt;/span&gt;
          &lt;span class="s"&gt;EOF&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install Docker&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;ssh $USER@$SERVER_IP &amp;lt;&amp;lt; 'EOF'&lt;/span&gt;
            &lt;span class="s"&gt;sudo apt-get update&lt;/span&gt;
            &lt;span class="s"&gt;sudo apt-get install -y docker.io &lt;/span&gt;
            &lt;span class="s"&gt;sudo usermod -aG docker $USER&lt;/span&gt;
          &lt;span class="s"&gt;EOF&lt;/span&gt;

          &lt;span class="s"&gt;ssh $USER@$SERVER_IP "docker ps"&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy With Docker&lt;/span&gt; 
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;ssh $USER@$SERVER_IP &amp;lt;&amp;lt; 'EOF'&lt;/span&gt;
            &lt;span class="s"&gt;cd /home/$USER/fastapi-book-project&lt;/span&gt;
            &lt;span class="s"&gt;docker stop fastapi-app || true&lt;/span&gt;
            &lt;span class="s"&gt;docker rm fastapi-app || true&lt;/span&gt;
            &lt;span class="s"&gt;docker build -t fastapi-app .&lt;/span&gt;
            &lt;span class="s"&gt;docker run -d --name fastapi-app -p 8000:8000 fastapi-app&lt;/span&gt;
          &lt;span class="s"&gt;EOF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setup ensures that changes pushed to the main branch are automatically pulled onto the server, where the image is rebuilt and the application is run in a Docker container. &lt;/p&gt;

&lt;p&gt;The use of secrets also ensures secure authentication without exposing sensitive credentials.&lt;/p&gt;

&lt;p&gt;To set up the repository secrets for GitHub Actions, go to the GitHub repository's settings, open &lt;strong&gt;Secrets and variables&lt;/strong&gt;, then click &lt;strong&gt;Actions&lt;/strong&gt;. Next, click &lt;strong&gt;New repository secret&lt;/strong&gt; and add the three secrets used in the CD pipeline: &lt;code&gt;SERVER_IP&lt;/code&gt;, &lt;code&gt;SERVER_USER&lt;/code&gt;, and &lt;code&gt;SSH_PRIVATE_KEY&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Before creating these secrets, make sure you have already launched an AWS EC2 instance; the values for these secrets come from the running instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenge Task #4
&lt;/h2&gt;

&lt;p&gt;And now the last task - &lt;strong&gt;Serving the application over NGINX&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;I want to use NGINX as a reverse proxy to route traffic from my EC2 instance's public IP address to my FastAPI application running on port 8000. &lt;/p&gt;

&lt;p&gt;On my EC2 instance's Ubuntu server, I'll first install and enable Nginx:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update

&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get nginx

&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start &lt;span class="nb"&gt;enable &lt;/span&gt;nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I create the Nginx configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano etc/nginx/sites-available/fastapi-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And add this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;yourdomain.com&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with your domain or server IP&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://127.0.0.1:8000/&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_http_version&lt;/span&gt; &lt;span class="mf"&gt;1.1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Upgrade&lt;/span&gt; &lt;span class="nv"&gt;$http_upgrade&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Connection&lt;/span&gt; &lt;span class="s"&gt;"upgrade"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Real-IP&lt;/span&gt; &lt;span class="nv"&gt;$remote_addr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-For&lt;/span&gt; &lt;span class="nv"&gt;$proxy_add_x_forwarded_for&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-Proto&lt;/span&gt; &lt;span class="nv"&gt;$scheme&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
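&lt;p&gt;One side effect of proxying is that the FastAPI app no longer sees the caller's address directly; it arrives in the &lt;code&gt;X-Forwarded-For&lt;/code&gt; header the config above sets. Here's a small, hedged sketch of how application code might recover the client IP (the header name is standard, but the function itself is hypothetical):&lt;/p&gt;

```python
def client_ip(headers, peer_addr):
    """Pick the real client IP behind a reverse proxy.

    Nginx's $proxy_add_x_forwarded_for appends the caller to any
    existing X-Forwarded-For list, so the first entry is the original
    client. Fall back to the TCP peer address when the header is
    absent (e.g. when the app is reached directly on port 8000).
    """
    xff = headers.get("x-forwarded-for")
    if xff:
        return xff.split(",")[0].strip()
    return peer_addr
```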



&lt;p&gt;Next, I enable the configuration by creating a symlink, and remove the default site:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /etc/nginx/sites-available/fastapi /etc/nginx/sites-enabled/

&lt;span class="nb"&gt;sudo rm&lt;/span&gt; /etc/nginx/sites-enabled/default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I restart Nginx:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now I can access the endpoint directly without appending &lt;code&gt;:8000&lt;/code&gt; to the URL, like this: &lt;code&gt;http://ec2-public-ip/api/v1/books/{book_id}&lt;/code&gt;, &lt;br&gt;
where &lt;code&gt;{book_id}&lt;/code&gt; is a positive integer. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24vc7zqw3zw29ls1y31b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24vc7zqw3zw29ls1y31b.png" alt="Image description" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In summary, what was I able to achieve in this project? &lt;/p&gt;

&lt;p&gt;I built a FastAPI-based book API that retrieves book details by ID. I also made the application production-ready by containerizing it using Docker and used Nginx as a reverse proxy.&lt;/p&gt;

&lt;p&gt;I also implemented CI/CD pipelines using GitHub Actions ensuring that the application is thoroughly tested and automatically deployed whenever changes are pushed to the &lt;code&gt;main&lt;/code&gt; branch.&lt;/p&gt;

&lt;p&gt;If you’re interested in exploring the code, check out the &lt;a href="https://github.com/NonsoEchendu/fastapi-book-project" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;. Feel free to fork it, experiment, and adapt it to your needs!&lt;/p&gt;

&lt;p&gt;Till my next project, happy building! ✌🏽&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Setting up Nginx Server</title>
      <dc:creator>Nonso Echendu</dc:creator>
      <pubDate>Wed, 29 Jan 2025 11:32:10 +0000</pubDate>
      <link>https://dev.to/nonso_echendu_001/setting-up-nginx-server-412c</link>
      <guid>https://dev.to/nonso_echendu_001/setting-up-nginx-server-412c</guid>
      <description>&lt;h2&gt;
  
  
  Objective
&lt;/h2&gt;

&lt;p&gt;Hello builders! Today we'll be looking at something basic: setting up and configuring an Nginx web server.&lt;/p&gt;

&lt;p&gt;We'll be using it to host a very simple webpage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;An Ubuntu Server&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I'll be using AWS to quickly spin up an Ubuntu server on an EC2 instance. &lt;/p&gt;

&lt;p&gt;You can check out the official AWS documentation on how to create an Ubuntu EC2 instance &lt;a href="https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/gs-ubuntu.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Because Nginx listens on port 80 (HTTP), I'll also create an inbound rule to allow traffic on port 80. &lt;/p&gt;

&lt;p&gt;Also allow traffic on port 22 so that we can SSH into the Ubuntu server to configure Nginx.&lt;/p&gt;




&lt;h2&gt;
  
  
  Installing Nginx
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Now that we've got our Ubuntu server running, SSH into it using:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i &amp;lt;your-pem-key-file&amp;gt; ubuntu:&amp;lt;your-ec2-public-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
Next, let's install Nginx.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First, we'll update Ubuntu's package lists and upgrade existing packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt upgrade -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-y&lt;/code&gt; command-line option auto-approves the confirmation prompt. &lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Install Nginx:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install nginx -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we'll enable and start the installed Nginx service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable nginx
sudo systemctl start nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;br&gt;Let's verify the installation and check that Nginx is running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If successfully installed and running, you should see an output like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxplvctfynu3zoebsnhf5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxplvctfynu3zoebsnhf5.png" alt="Image description" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Setting Up Nginx
&lt;/h2&gt;

&lt;p&gt;Now that we've installed Nginx successfully, let's set up a custom webpage that'll be shown by default when we visit our public ip address on any browser.&lt;/p&gt;

&lt;p&gt;We'll be creating a custom &lt;code&gt;index.html&lt;/code&gt; file in &lt;code&gt;/var/www/html/index.html&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano /var/www/html/index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;br&gt;Then we create a basic HTML template. Paste the following into the &lt;code&gt;index.html&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html&amp;gt;
&amp;lt;head&amp;gt;
    &amp;lt;title&amp;gt;A Custom Nginx Page&amp;lt;/title&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
    &amp;lt;h1&amp;gt;Welcome to DevOps Stage 0 - [Nonso Echendu/Michaelo_0x]&amp;lt;/h1&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;br&gt;&lt;br&gt;Next, we want to configure our Nginx server so that it listens for HTTP requests on port 80 and serves files from &lt;code&gt;/var/www/html&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let's edit the Nginx default site configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/nginx/sites-available/default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace everything in it with this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80 default_server;

    root /var/www/html;
    index index.html;

    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let me explain some key lines in the above configuration.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;listen 80 default_server&lt;/code&gt; tells Nginx to listen for incoming connections on port 80.  &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;default_server&lt;/code&gt; flag means that if a request comes in and it doesn't match any other server block, this server block will respond by default.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;root /var/www/html;&lt;/code&gt; defines the root directory where Nginx looks for files to serve. Remember that our custom &lt;code&gt;index.html&lt;/code&gt; file lives in that directory - &lt;code&gt;/var/www/html&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;index index.html&lt;/code&gt; specifies that when anyone visits your webpage at &lt;code&gt;http://&amp;lt;your-public-ip&amp;gt;&lt;/code&gt;, Nginx should serve the default &lt;code&gt;index.html&lt;/code&gt; file we created earlier. &lt;/p&gt;
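&lt;p&gt;The &lt;code&gt;try_files $uri $uri/ =404&lt;/code&gt; line can be read as a short lookup routine: try the literal file, then the directory's index page, then give up with a 404. Here's an illustrative Python rendering of that rule (my paraphrase of Nginx's behavior, not its actual implementation):&lt;/p&gt;

```python
import os

def try_files(root, uri):
    """Approximate Nginx's `try_files $uri $uri/ =404`.

    Returns the path of the file Nginx would serve, or 404 when
    neither the literal file nor the directory's index.html exists.
    Illustrative only; real Nginx also honors the `index` directive
    and performs internal redirects.
    """
    path = os.path.join(root, uri.lstrip("/"))
    if os.path.isfile(path):
        return path
    index_page = os.path.join(path, "index.html")
    if os.path.isdir(path) and os.path.isfile(index_page):
        return index_page
    return 404
```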

&lt;p&gt;&lt;br&gt;&lt;br&gt;Now let's restart our Nginx server for all these changes to take effect. &lt;/p&gt;

&lt;p&gt;Restart Nginx with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Re-confirm the Nginx server is running by using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should be something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxplvctfynu3zoebsnhf5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxplvctfynu3zoebsnhf5.png" alt="Image description" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;And that's it! We have our Nginx server configured to serve a custom webpage. &lt;/p&gt;

&lt;p&gt;Let's now confirm all that we've done visually.&lt;/p&gt;

&lt;p&gt;Get your EC2 instance's public IP, go to a web browser, and visit &lt;code&gt;http://&amp;lt;your-public-ip&amp;gt;&lt;/code&gt;. You should see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6e8bgj5bqwsb9q12r6o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6e8bgj5bqwsb9q12r6o.png" alt="Image description" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;And so there we have it. I was able to show how to install and set up Nginx on a fresh Ubuntu server. &lt;/p&gt;

&lt;p&gt;I also showed how to create a custom HTML web page and configure Nginx to serve that webpage by default.&lt;/p&gt;

&lt;p&gt;&lt;br&gt;&lt;strong&gt;P.s.&lt;/strong&gt;&lt;br&gt;
This mini project is a task given to me by the HNG Internship - an internship program that will be running for a couple of months.&lt;/p&gt;

&lt;p&gt;If you're looking to hire DevOps engineers (and related fields), do check these out:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hng.tech/hire/devops-engineers" rel="noopener noreferrer"&gt;DevOps Engineers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hng.tech/hire/cloud-engineers" rel="noopener noreferrer"&gt;Cloud Engineers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hng.tech/hire/site-reliability-engineers" rel="noopener noreferrer"&gt;Site Reliability Engineers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hng.tech/hire/platform-engineers" rel="noopener noreferrer"&gt;Platform Engineers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hng.tech/hire/infrastructure-engineers" rel="noopener noreferrer"&gt;Infrastructure Engineers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hng.tech/hire/kubernetes-specialists" rel="noopener noreferrer"&gt;Kubernetes Specialists&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hng.tech/hire/aws-solutions-architects" rel="noopener noreferrer"&gt;AWS Solutions Architects&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hng.tech/hire/azure-devops-engineers" rel="noopener noreferrer"&gt;Azure DevOps Engineers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hng.tech/hire/google-cloud-engineers" rel="noopener noreferrer"&gt;Google Cloud Engineers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hng.tech/hire/ci-cd-pipeline-engineers" rel="noopener noreferrer"&gt;CI/CD Pipeline Engineers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hng.tech/hire/monitoring-observability-engineers" rel="noopener noreferrer"&gt;Monitoring/Observability Engineers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hng.tech/hire/automation-engineers" rel="noopener noreferrer"&gt;Automation Engineers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hng.tech/hire/docker-specialists" rel="noopener noreferrer"&gt;Docker Specialists&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hng.tech/hire/linux-developers" rel="noopener noreferrer"&gt;Linux Developers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hng.tech/hire/postgresql-developers" rel="noopener noreferrer"&gt;PostgreSQL Developers&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Setting up a VPC Infrastructure For Jenkins, Artifactory, Sonarqube on AWS using Terraform</title>
      <dc:creator>Nonso Echendu</dc:creator>
      <pubDate>Thu, 23 Jan 2025 22:37:36 +0000</pubDate>
      <link>https://dev.to/nonso_echendu_001/setting-up-a-vpc-infrastructure-for-jenkins-artifactory-sonarqube-on-aws-using-terraform-27ia</link>
      <guid>https://dev.to/nonso_echendu_001/setting-up-a-vpc-infrastructure-for-jenkins-artifactory-sonarqube-on-aws-using-terraform-27ia</guid>
      <description>&lt;p&gt;Hey builders! So this is an article very much related to a previous one, where I wrote about &lt;a href="https://dev.to/nonso_echendu_001/deploying-jenkins-on-aws-installing-and-configuring-artifactory-and-sonarqube-on-seperate-ec2-nm9"&gt;setting up a VPC infrastructure on AWS&lt;/a&gt; but using the AWS console UI.&lt;/p&gt;

&lt;p&gt;In this article, however, we'll be using Terraform, an IaC (Infrastructure as Code) tool, to automate the setup and configuration of the VPC infrastructure and instances. &lt;/p&gt;

&lt;p&gt;Here's a link to this Terraform project's GitHub repo - &lt;a href="https://github.com/NonsoEchendu/terraform-for-aws-instances" rel="noopener noreferrer"&gt;https://github.com/NonsoEchendu/terraform-for-aws-instances&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You see, as DevOps engineers, our job is to automate things and make them run faster and more smoothly. That's what makes Terraform so efficient: with a single command we can create and configure the entire infrastructure, and with another we can tear it all down at once.&lt;/p&gt;

&lt;p&gt;Alright, let's dive in...&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we get into the terraform script though, here are some prerequisites you must have:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://developer.hashicorp.com/terraform/install" rel="noopener noreferrer"&gt;Install terraform&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html#getting-started-install-instructions" rel="noopener noreferrer"&gt;Install AWS CLI&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set up the AWS CLI with an IAM user having sufficient permissions (e.g., AdministratorAccess), using this command&lt;/p&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;br&gt; Then fill in your AWS credentials. &lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Objective
&lt;/h2&gt;

&lt;p&gt;We want to create EC2 instances for Jenkins, Artifactory, and SonarQube servers. All resources will live in one VPC: the Jenkins server in a public subnet, with Artifactory and SonarQube together in a private subnet. &lt;/p&gt;

&lt;p&gt;By placing Artifactory and SonarQube in a private subnet, we avoid exposing them directly to the internet, reducing the risk of unauthorized access or attacks. &lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Repository Structure
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxvnbiymn7ximns67jac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxvnbiymn7ximns67jac.png" alt="Image description" width="800" height="371"&gt;&lt;/a&gt; &lt;br&gt;&lt;/p&gt;



&lt;h2&gt;
  
  
  AWS Architecture Diagram
&lt;/h2&gt;

&lt;p&gt;This is an architecture diagram of what we want to achieve. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh3hp74y9tmg4zhgfw6my.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh3hp74y9tmg4zhgfw6my.jpg" alt="Image description" width="800" height="517"&gt;&lt;/a&gt; &lt;br&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Script
&lt;/h2&gt;

&lt;p&gt;Now let's take a look at the Terraform scripts that'll be doing all the work for us. &lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;1. &lt;strong&gt;The &lt;code&gt;providers.tf&lt;/code&gt; file&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;This script defines and configures the AWS provider for Terraform.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-east-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;provider "aws"&lt;/code&gt; block declares that Terraform will interact with AWS services through the AWS provider plugin. It allows Terraform to create and manage resources in AWS, such as EC2 instances, VPCs, etc.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;region = "us-east-1"&lt;/code&gt; specifies that all AWS resources in this Terraform configuration will be created in the us-east-1 region.&lt;br&gt;&lt;br&gt;&lt;/p&gt;
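&lt;p&gt;One optional hardening step (not part of the original script): pinning the Terraform and provider versions in a &lt;code&gt;required_providers&lt;/code&gt; block, so the configuration behaves the same on every machine. A minimal sketch, with illustrative version constraints:&lt;/p&gt;

```hcl
terraform {
  required_version = ">= 1.5.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"   # any 5.x release of the AWS provider
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
```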

&lt;p&gt;2. &lt;strong&gt;The &lt;code&gt;variables.tf&lt;/code&gt; file&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "cidr" {
  default = "10.0.0.0/16"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a variable &lt;code&gt;cidr&lt;/code&gt;, with a default value of &lt;code&gt;10.0.0.0/16&lt;/code&gt;. We'll be referencing this variable in the main Terraform script. &lt;br&gt;&lt;br&gt;&lt;/p&gt;
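&lt;p&gt;Because &lt;code&gt;cidr&lt;/code&gt; only has a default, it can be overridden without editing the file, for example via a &lt;code&gt;terraform.tfvars&lt;/code&gt; file (the alternate range below is just an example):&lt;/p&gt;

```hcl
# terraform.tfvars (optional) - overrides the default in variables.tf
cidr = "10.1.0.0/16"
```

&lt;p&gt;Or on the command line: &lt;code&gt;terraform apply -var 'cidr=10.1.0.0/16'&lt;/code&gt;.&lt;/p&gt;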

&lt;p&gt;3. &lt;strong&gt;The &lt;code&gt;main.tf&lt;/code&gt; file&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now this is a lengthy script, 300+ lines, but we'll be taking it bit by bit.&lt;/p&gt;

&lt;p&gt;Again, you can find the whole script in the &lt;a href="https://github.com/NonsoEchendu/terraform-for-aws-instances" rel="noopener noreferrer"&gt;project repo here&lt;/a&gt;. &lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Alright, let's start with the first resource, which creates a VPC.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "main_vpc" {
  cidr_block = var.cidr
  instance_tenancy = "default"
  tags = {
    Name = "javaVPC"
  }
}
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;br&gt; We're giving it the name &lt;code&gt;main_vpc&lt;/code&gt;, and that's what we'll be using to reference the VPC throughout the script when attaching subnets, instances and the likes.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cidr_block = var.cidr&lt;/code&gt;. We're assigning the value of &lt;code&gt;cidr_block&lt;/code&gt; using the variable we defined earlier in the &lt;code&gt;variables.tf&lt;/code&gt; file. &lt;/p&gt;

&lt;p&gt;The CIDR block defines the range of private IP addresses available for use within the VPC. For a /16 block, you get 65,536 IP addresses.&lt;/p&gt;

&lt;p&gt;Then, we also give it a tag with the name &lt;code&gt;javaVPC&lt;/code&gt; (the 'java' is because the script was originally written to run a Java app). This tag will help you identify the VPC in the AWS Console.&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;/blockquote&gt;
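&lt;p&gt;The address math above is easy to verify with Python's &lt;code&gt;ipaddress&lt;/code&gt; module:&lt;/p&gt;

```python
import ipaddress

# A /16 block leaves 32 - 16 = 16 host bits: 2**16 = 65,536 addresses
vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)  # 65536

# For comparison, a /24 subnet carved from it has 2**8 = 256 addresses
subnet = ipaddress.ip_network("10.0.1.0/24")
print(subnet.num_addresses)  # 256
```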

&lt;ul&gt;
&lt;li&gt;Next, let's create an internet gateway. This is an important component: it's what allows resources within our VPC, like EC2 instances, to access the internet.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_internet_gateway" "main_igw" {
  vpc_id = aws_vpc.main_vpc.id
  tags = {
    Name = "javaVpcInternetGateway"
  }
}
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;br&gt; We're giving it the name &lt;code&gt;main_igw&lt;/code&gt; and attaching it to the VPC we created earlier. &lt;/p&gt;

&lt;p&gt;Without an Internet Gateway, the VPC would remain isolated, and no outbound or inbound traffic to/from the internet would be possible.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Next, we'll be creating route tables - a public and a private one. First, the public route table. &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route_table" "public_rt" {
  vpc_id = aws_vpc.main_vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main_igw.id
  }
  tags = {
    Name = "javaPublicRouteTable"
  }
}
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;br&gt; So we're creating a public route table and attaching it to the VPC. There's no Terraform resource specifically for a public route table; it's how we configure the routes that makes a route table public or private.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;cidr_block&lt;/code&gt; is set to &lt;code&gt;0.0.0.0/0&lt;/code&gt; meaning we're creating a route entry for all internet-bound traffic. Then we're also directing the traffic to the internet gateway we created earlier. With this, subnets associated with this route table can communicate with the internet. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Next, let's create a public subnet. This will host our Jenkins server and Bastion host (yes, we haven't mentioned the Bastion yet; we'll get to it), both of which need direct internet access. &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_subnet" "public_subnet" {
  vpc_id                  = aws_vpc.main_vpc.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
  tags = {
    Name = "javaPublicSubnet"
  }
}
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;br&gt; Again, we're attaching this subnet to our VPC (all resources created in this project are attached to this VPC).&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;cidr_block&lt;/code&gt; of &lt;code&gt;10.0.1.0/24&lt;/code&gt; allocates 256 IP addresses (from 10.0.1.0 to 10.0.1.255), of which AWS reserves five in every subnet. &lt;/p&gt;

&lt;p&gt;We're also placing this subnet in the &lt;code&gt;us-east-1a&lt;/code&gt; availability zone (every subnet lives in exactly one AZ). &lt;/p&gt;

&lt;p&gt;&lt;code&gt;map_public_ip_on_launch&lt;/code&gt; set to &lt;code&gt;true&lt;/code&gt; automatically assigns a public IP address to instances launched in this subnet (Jenkins, Bastion host), as both instances need direct internet access.&lt;/p&gt;

&lt;p&gt;We still haven't attached or associated this public subnet to the public route table we created earlier. There's another terraform resource for that.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;So we create a &lt;code&gt;route_table_association&lt;/code&gt; Terraform resource, that will simply associate our public subnet to the public route table. &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route_table_association" "public_rta" {
  subnet_id      = aws_subnet.public_subnet.id
  route_table_id = aws_route_table.public_rt.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;

&lt;p&gt;&lt;br&gt; Let's move to creating a private route table and subnet. &lt;br&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, though, we need to create two things: an Elastic IP and a NAT gateway. Why? I'll explain.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;For security reasons, we don't want our Artifactory and Sonarqube instances to have direct public internet access. But they still need to access the internet to download updates, plugins, or dependencies. &lt;/p&gt;

&lt;p&gt;So we'll be creating an Elastic IP which we will assign to a NAT gateway. &lt;/p&gt;

&lt;p&gt;The Elastic IP address is used specifically for the NAT Gateway, not for the private instances like Artifactory and Sonarqube.&lt;/p&gt;

&lt;p&gt;The purpose of the Elastic IP for the NAT Gateway is to provide a static, public IP address that the NAT Gateway can use to enable internet access for the resources in the private subnet.&lt;/p&gt;

&lt;p&gt;Without an Elastic IP, the NAT Gateway would be assigned a dynamically allocated public IP address, which could change over time. And so by associating an Elastic IP with the NAT Gateway, the public IP address remains constant, which is important for maintaining reliable internet connectivity for the resources in the private subnet.&lt;/p&gt;

&lt;p&gt;The private instances, like Artifactory and Sonarqube, do not need public IP addresses assigned to them directly. They reside in the private subnet and will then access the internet through the NAT Gateway, using the Elastic IP. &lt;/p&gt;

&lt;p&gt;The NAT gateway will then be placed in the public subnet which has access to the internet. &lt;/p&gt;

&lt;p&gt;If it's a bit confusing, just take another look at the architecture diagram. &lt;/p&gt;

&lt;p&gt;Here's the terraform configuration for this: &lt;/p&gt;


&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Elastic IP for NAT Gateway
resource "aws_eip" "nat_eip" {
  domain = "vpc"
}
# NAT Gateway
resource "aws_nat_gateway" "main_nat" {
  allocation_id = aws_eip.nat_eip.id
  subnet_id     = aws_subnet.public_subnet.id
  tags = {
    Name = "NatGateway"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Next, we can create our private route table and attach it to our VPC&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route_table" "private_rt" {
  vpc_id = aws_vpc.main_vpc.id
  tags = {
    Name = "javaPrivateRouteTable"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Then, we need to create a route in the private route table that directs all internet-bound traffic through the NAT gateway. &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Basically, this is to allow resources in the private subnet, such as the Artifactory and Sonarqube servers, to access the internet indirectly through the NAT Gateway, without having a direct public IP address.&lt;/p&gt;

&lt;p&gt;AWS route configuration:&lt;/p&gt;


&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route" "private_nat" {
  route_table_id         = aws_route_table.private_rt.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.main_nat.id
}
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;br&gt; &lt;code&gt;destination_cidr_block = "0.0.0.0/0"&lt;/code&gt; sets the destination CIDR block for the route to &lt;code&gt;0.0.0.0/0&lt;/code&gt;, which represents all internet traffic (i.e., any destination outside the VPC). &lt;br&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Next, we create a private subnet and associate it with the private route table.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Private Subnet
resource "aws_subnet" "private_subnet" {
  vpc_id            = aws_vpc.main_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1a"
  tags = {
    Name = "javaPrivateSubnet"
  }
}
# Private Route Table Association with Private Subnet
resource "aws_route_table_association" "private_rta" {
  subnet_id      = aws_subnet.private_subnet.id
  route_table_id = aws_route_table.private_rt.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;




&lt;p&gt;Now we're moving to creating the instances. &lt;br&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We start with the &lt;strong&gt;Jenkins server&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;But first, let's create a Security Group for the Jenkins instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "jenkins_sg" {
  name   = "jenkins_sg"
  vpc_id = aws_vpc.main_vpc.id
  tags = {
    Name = "jenkins_sg"
  }
}
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;br&gt;Then, we're going to add inbound and outbound rules to this security group.&lt;/p&gt;


&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Jenkins Security Group Inbound Rule 1
resource "aws_vpc_security_group_ingress_rule" "jenkins_sg_inbound_rule1" {
  security_group_id = aws_security_group.jenkins_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 22
  ip_protocol       = "tcp"
  to_port           = 22
}
# Jenkins Security Group Inbound Rule 2
resource "aws_vpc_security_group_ingress_rule" "jenkins_sg_inbound_rule2" {
  security_group_id = aws_security_group.jenkins_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 8080
  ip_protocol       = "tcp"
  to_port           = 8080
}
# Jenkins Security Group Outbound Rule 1
resource "aws_vpc_security_group_egress_rule" "outbound_rule1" {
  security_group_id = aws_security_group.jenkins_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  ip_protocol       = "-1"
}
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;br&gt; Basically, we're setting inbound rules to allow access to ports 22 (for SSH) and 8080 (Jenkins HTTP UI). &lt;/p&gt;

&lt;p&gt;&lt;code&gt;cidr_ipv4&lt;/code&gt; is set to &lt;code&gt;0.0.0.0/0&lt;/code&gt;, meaning any IP address is allowed to access these ports. &lt;br&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Next, let's create the EC2 instance for Jenkins itself.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "jenkins_server" {
  ami                    = "ami-04b4f1a9cf54c11d0"
  instance_type          = "t2.micro"
  key_name = "new-test-key-pair"
  vpc_security_group_ids = [aws_security_group.jenkins_sg.id]
  subnet_id              = aws_subnet.public_subnet.id
  user_data              = filebase64("./jenkins_user_data.sh")
  tags = {
    Name = "JenkinsServer"
  }
}
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;br&gt; You can swap in an AMI ID of your choice. The one above is for Ubuntu 24.04 on x86 architecture. Note that AMI IDs are region-specific, so this one is only valid in us-east-1. &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;key_name&lt;/code&gt; argument requires an already created key pair. In this case, I have already created a key pair named &lt;code&gt;new-test-key-pair&lt;/code&gt; from my AWS console.&lt;/p&gt;

&lt;p&gt;We also attach this instance to the Jenkins security group we created earlier, and place it in the public subnet.&lt;/p&gt;

&lt;p&gt;We're also making use of the &lt;code&gt;user_data&lt;/code&gt; argument. This lets us pass a script to the Jenkins EC2 instance at launch. The &lt;code&gt;jenkins_user_data.sh&lt;/code&gt; script installs Docker and Docker Compose, and spins up a Jenkins container from a Jenkins Docker image. &lt;/p&gt;

&lt;p&gt;To get the user-data scripts check the &lt;a href="https://github.com/NonsoEchendu/terraform-for-aws-instances" rel="noopener noreferrer"&gt;project repository&lt;/a&gt; under the &lt;code&gt;user-data-scripts&lt;/code&gt; directory. &lt;br&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;br&gt; And so we're done with the Jenkins instance.&lt;/p&gt;

&lt;p&gt;Let's move to the next ec2 instances - Artifactory and Sonarqube.&lt;/p&gt;

&lt;p&gt;But before those, there's one more instance to create. &lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.&lt;/strong&gt; The &lt;strong&gt;Bastion Host&lt;/strong&gt;. The one mentioned earlier.&lt;/p&gt;

&lt;p&gt;But why do we need to create this Bastion host?&lt;/p&gt;

&lt;p&gt;Well, the Bastion host will serve as a secure gateway in the public subnet that provides SSH access to instances in private subnets. It will act as the single entry point to SSH into our private Artifactory and Sonarqube instances. &lt;/p&gt;
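&lt;p&gt;Once everything is up, SSH-ing into a private instance goes through the Bastion. One convenient way is a &lt;code&gt;ProxyJump&lt;/code&gt; entry in &lt;code&gt;~/.ssh/config&lt;/code&gt; (the host values and key path below are placeholders for your own):&lt;/p&gt;

```text
Host bastion
    HostName BASTION_PUBLIC_IP          # from the AWS console after apply
    User ubuntu
    IdentityFile ~/.ssh/new-test-key-pair.pem

Host artifactory
    HostName ARTIFACTORY_PRIVATE_IP     # a 10.0.2.x address in the private subnet
    User ubuntu
    ProxyJump bastion
    IdentityFile ~/.ssh/new-test-key-pair.pem
```

&lt;p&gt;Then &lt;code&gt;ssh artifactory&lt;/code&gt; tunnels through the Bastion automatically; a one-off equivalent is &lt;code&gt;ssh -J ubuntu@BASTION_PUBLIC_IP ubuntu@ARTIFACTORY_PRIVATE_IP&lt;/code&gt;.&lt;/p&gt;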

&lt;ul&gt;
&lt;li&gt;Let's create first, the Bastion Security Group, and add inbound and outbound rules. &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "bastion_sg" {
  name   = "bastion_sg"
  vpc_id = aws_vpc.main_vpc.id
  tags = {
    Name = "bastion_sg"
  }
}
# Bastion Security Group Inbound Rule 1
resource "aws_vpc_security_group_ingress_rule" "bastion_sg_inbound_rule1" {
  security_group_id = aws_security_group.bastion_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 22
  ip_protocol       = "tcp"
  to_port           = 22
}
# Bastion Security Group Outbound Rule 1
resource "aws_vpc_security_group_egress_rule" "bastion_outbound_rule1" {
  security_group_id = aws_security_group.bastion_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  ip_protocol       = "tcp"
  from_port         = 22
  to_port           = 22
}
# Bastion Security Group HTTPS Outbound Rule 
resource "aws_vpc_security_group_egress_rule" "bastion_outbound_https" {
  security_group_id = aws_security_group.bastion_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  ip_protocol       = "tcp"
  from_port         = 443
  to_port           = 443
}
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;br&gt; With the ingress or inbound rule, we're simply allowing access to port 22 (for SSH). &lt;/p&gt;

&lt;p&gt;&lt;code&gt;cidr_ipv4&lt;/code&gt; is set to &lt;code&gt;0.0.0.0/0&lt;/code&gt;, meaning any IP address is allowed to access this port. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Replace &lt;code&gt;"0.0.0.0/0"&lt;/code&gt; with a more restrictive CIDR block that only includes trusted IP addresses. For example, your trusted IP range. Allowing &lt;code&gt;0.0.0.0/0&lt;/code&gt; means any device connected to the internet can attempt to connect to the bastion host using SSH.&lt;/p&gt;

&lt;p&gt;We also set some egress or outbound rules. Both rules allow the Bastion security group to send outbound traffic on ports 22 (SSH) and 443 (HTTPS) to any IP address (0.0.0.0/0). &lt;br&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Next, we create the Bastion instance:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "bastion_host" {
  ami                    = "ami-04b4f1a9cf54c11d0"
  instance_type          = "t2.micro"
  key_name               = "new-test-key-pair"
  vpc_security_group_ids = [aws_security_group.bastion_sg.id]
  subnet_id              = aws_subnet.public_subnet.id
  tags = {
    Name = "bastion_host"
  }
}
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;br&gt; The setup is very similar to the Jenkins instance config, except that here we're attaching the Bastion security group instead. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;br&gt; &lt;strong&gt;3.&lt;/strong&gt; Now we come to the &lt;strong&gt;Artifactory instance&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let's create the Security group with inbound and outbound rules for it:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Artifactory Security Group Inbound Rule 1
resource "aws_vpc_security_group_ingress_rule" "artifactory_sg_inbound_rule1" {
  security_group_id            = aws_security_group.artifactory_sg.id
  from_port                    = 22
  ip_protocol                  = "tcp"
  to_port                      = 22
  referenced_security_group_id = aws_security_group.bastion_sg.id
}
# Artifactory Security Group Inbound Rule 2
resource "aws_vpc_security_group_ingress_rule" "artifactory_sg_inbound_rule2" {
  security_group_id            = aws_security_group.artifactory_sg.id
  from_port                    = 8081
  ip_protocol                  = "tcp"
  to_port                      = 8081
  referenced_security_group_id = aws_security_group.jenkins_sg.id
}
# Artifactory Security Group Inbound Rule 3
resource "aws_vpc_security_group_ingress_rule" "artifactory_sg_inbound_rule3" {
  security_group_id            = aws_security_group.artifactory_sg.id
  from_port                    = 8082
  ip_protocol                  = "tcp"
  to_port                      = 8082
  referenced_security_group_id = aws_security_group.jenkins_sg.id
}
# Artifactory Security Group Outbound Rule 
resource "aws_vpc_security_group_egress_rule" "artifactory_https_outbound_rule" {
  security_group_id = aws_security_group.artifactory_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 443
  to_port           = 443
  ip_protocol       = "tcp"
}
resource "aws_vpc_security_group_egress_rule" "artifactory_http_outbound_rule" {
  security_group_id = aws_security_group.artifactory_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 80
  to_port           = 80
  ip_protocol       = "tcp"
}
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;br&gt; The inbound rules allow access to port 22 (SSH), port 8081 for the Artifactory UI, and port 8082 for repository-specific services.&lt;/p&gt;

&lt;p&gt;There's one argument here, though, that we didn't use in the other security groups: &lt;code&gt;referenced_security_group_id&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;What does it do? Instead of allowing access from all IPs (&lt;code&gt;cidr_ipv4&lt;/code&gt;), we're restricting SSH (port 22) access to only the Bastion security group, and access to ports 8081 and 8082 to only the Jenkins security group.  &lt;/p&gt;

&lt;p&gt;Meaning that only instances in the Bastion security group can SSH into the Artifactory instance, and only instances in the Jenkins SG can access ports 8081 and 8082. &lt;/p&gt;

&lt;p&gt;For the Outbound rules:&lt;/p&gt;

&lt;p&gt;Since we'll be installing Docker and updating the Linux machine's dependencies on the Artifactory instance, we set outbound rules for TCP on port 443 (HTTPS) and port 80 (HTTP), allowing the instance to communicate with package repositories, Docker registries, and any other services over HTTP/HTTPS.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Now we can create the Artifactory EC2 instance:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "artifactory_server" {
  ami                         = "ami-04b4f1a9cf54c11d0"
  instance_type               = "t2.medium"
  key_name                    = "new-test-key-pair"
  vpc_security_group_ids      = [aws_security_group.artifactory_sg.id]
  subnet_id                   = aws_subnet.private_subnet.id
  associate_public_ip_address = false
  user_data                   = filebase64("user-data-scripts/artifactory_user_data.sh")
  tags = {
    Name = "ArtifactoryServer"
  }
  depends_on = [aws_nat_gateway.main_nat]
}
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;br&gt; Here we're still using the same AMI, but with a different instance type, &lt;code&gt;t2.medium&lt;/code&gt;, as Artifactory requires more CPU and RAM.&lt;/p&gt;

&lt;p&gt;We're also preventing a public IP address from being assigned to this instance, and placing it in the private subnet we created earlier. &lt;/p&gt;

&lt;p&gt;Another important thing I noticed while testing this project: the Artifactory and Sonarqube instances are normally created and running before the NAT gateway becomes "available". &lt;/p&gt;

&lt;p&gt;Remember that the NAT gateway is what enables instances in the private subnet to reach the internet. So if the instances boot before the NAT gateway is "available", the user-data scripts fail when they try to update Linux packages and install tools like Docker and Docker Compose.&lt;/p&gt;

&lt;p&gt;So the solution? &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;depends_on&lt;/code&gt; argument. It makes the Artifactory instance depend on the NAT gateway, so Terraform waits until the NAT gateway is created and its status is "available" before creating the Artifactory instance. &lt;br&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;br&gt; &lt;strong&gt;4.&lt;/strong&gt; Now we can go to creating the &lt;strong&gt;Sonarqube Instance&lt;/strong&gt; and its security group. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Sonarqube Security Group:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "sonarqube_sg" {
  name   = "sonarqube_sg"
  vpc_id = aws_vpc.main_vpc.id
  tags = {
    Name = "sonarqube_sg"
  }
}
# sonarqube Security Group Inbound Rule 1
resource "aws_vpc_security_group_ingress_rule" "sonarqube_sg_inbound_rule1" {
  security_group_id            = aws_security_group.sonarqube_sg.id
  from_port                    = 22
  ip_protocol                  = "tcp"
  to_port                      = 22
  referenced_security_group_id = aws_security_group.bastion_sg.id
}
# Sonarqube Security Group Inbound Rule 2
resource "aws_vpc_security_group_ingress_rule" "sonarqube_sg_inbound_rule2" {
  security_group_id            = aws_security_group.sonarqube_sg.id
  from_port                    = 9000
  ip_protocol                  = "tcp"
  to_port                      = 9000
  referenced_security_group_id = aws_security_group.jenkins_sg.id
}
# Sonarqube Security Group Outbound Rule 
resource "aws_vpc_security_group_egress_rule" "sonarqube_https_outbound_rule" {
  security_group_id = aws_security_group.sonarqube_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 443
  to_port           = 443
  ip_protocol       = "tcp"
}
resource "aws_vpc_security_group_egress_rule" "sonarqube_http_outbound_rule" {
  security_group_id = aws_security_group.sonarqube_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 80
  to_port           = 80
  ip_protocol       = "tcp"
}
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;br&gt; Similar inbound and outbound rules as that of Artifactory are used here. &lt;br&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;And finally, the Sonarqube instance:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "sonarqube_server" {
  ami                         = "ami-04b4f1a9cf54c11d0"
  instance_type               = "t2.medium"
  key_name                    = "new-test-key-pair"
  vpc_security_group_ids      = [aws_security_group.sonarqube_sg.id]
  subnet_id                   = aws_subnet.private_subnet.id
  associate_public_ip_address = false
  user_data                   = filebase64("user-data-scripts/sonarqube_user_data.sh")
  tags = {
    Name = "SonarqubeServer"
  }
  depends_on = [aws_nat_gateway.main_nat]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;
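&lt;p&gt;One optional addition (not part of the original scripts): an &lt;code&gt;outputs.tf&lt;/code&gt; that prints the addresses you'll need right after &lt;code&gt;terraform apply&lt;/code&gt;, instead of hunting for them in the console. A sketch using the resource names from this project:&lt;/p&gt;

```hcl
output "jenkins_public_ip" {
  value = aws_instance.jenkins_server.public_ip
}

output "bastion_public_ip" {
  value = aws_instance.bastion_host.public_ip
}

output "artifactory_private_ip" {
  value = aws_instance.artifactory_server.private_ip
}

output "sonarqube_private_ip" {
  value = aws_instance.sonarqube_server.private_ip
}
```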




&lt;h2&gt;
  
  
  Running the Terraform Scripts
&lt;/h2&gt;

&lt;p&gt;To run the terraform configuration scripts, we'll use these commands:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Change to the project root directory, and run:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will initialize the Terraform working directory, also installing the required provider plugins (AWS in this case). &lt;br&gt;&lt;/p&gt;

&lt;p&gt;&lt;br&gt; 2. Then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will show you what Terraform will do when you run the &lt;code&gt;terraform apply&lt;/code&gt; command. It'll also help you detect any syntax errors or config issues before making changes. &lt;br&gt;&lt;/p&gt;

&lt;p&gt;&lt;br&gt; 3. Finally, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This executes the actions in the execution plan (e.g., creating, modifying, or destroying resources).&lt;/p&gt;

&lt;p&gt;It will also prompt for confirmation before executing; type &lt;code&gt;yes&lt;/code&gt; to proceed. &lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;&lt;br&gt; Voila! We've successfully used Terraform to set up AWS infrastructure for Jenkins, Artifactory and Sonarqube.&lt;/p&gt;

&lt;p&gt;We hosted them all in one VPC, putting Jenkins and the Bastion host in a public subnet, and Artifactory and Sonarqube in a separate private subnet. &lt;/p&gt;

&lt;p&gt;You can log in to your AWS Console and confirm that all these resources were created and are working. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;P.s.&lt;/strong&gt; With just one command, &lt;code&gt;terraform destroy&lt;/code&gt;, you can tear down all these resources at once. &lt;/p&gt;

&lt;p&gt;If you've ever manually created AWS resources like the ones in this project, you'll know how tedious it can be. &lt;/p&gt;

&lt;p&gt;But with Terraform, just a few commands create or delete all of them in an instant. &lt;/p&gt;

&lt;p&gt;I do hope you enjoyed this article as much as I enjoyed writing it. &lt;/p&gt;

&lt;p&gt;Please do like, share and leave your comments.&lt;/p&gt;

&lt;p&gt;Till the next, happy building!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a Weather Dashboard App with Docker, Flask, Open Weather API and AWS S3</title>
      <dc:creator>Nonso Echendu</dc:creator>
      <pubDate>Wed, 08 Jan 2025 19:25:56 +0000</pubDate>
      <link>https://dev.to/nonso_echendu_001/building-a-weather-dashboard-app-with-docker-flask-open-weather-api-and-aws-s3-117a</link>
      <guid>https://dev.to/nonso_echendu_001/building-a-weather-dashboard-app-with-docker-flask-open-weather-api-and-aws-s3-117a</guid>
      <description>&lt;p&gt;Hey builders! In this article, we'll go through the contents of a GitHub repository that uses Docker to run a weather dashboard application. The application uses the OpenWeather API to fetch real-time weather data for multiple cities, and AWS S3 for secure, scalable data storage. &lt;/p&gt;

&lt;p&gt;Python was used for scripting, and AWS for cloud management. &lt;/p&gt;

&lt;p&gt;This project underscores modern DevOps principles, including automation, cloud efficiency and error handling.&lt;/p&gt;

&lt;p&gt;Here's the GitHub repository for the project: &lt;a href="https://github.com/NonsoEchendu/flask-weather-dashboard" rel="noopener noreferrer"&gt;check here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's dive in!&lt;/p&gt;




&lt;h2&gt;
  
  
  Prerequisites:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Docker. To see how to install docker, check the &lt;a href="https://docs.docker.com/get-started/get-docker/" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For other prerequisites, please check the &lt;a href="https://github.com/NonsoEchendu/flask-weather-dashboard" rel="noopener noreferrer"&gt;github repo&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Repository Structure
&lt;/h2&gt;

&lt;p&gt;Here is an overview of the repository structure:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ikx5q5aqxbq8cho0fm5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ikx5q5aqxbq8cho0fm5.png" alt="Image description" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Files and Their Purpose:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. web_dashboard.py&lt;/strong&gt;&lt;br&gt;
This is the main application file. It contains the WeatherDashboard class, which handles fetching weather data from the OpenWeather API and storing it in an AWS S3 bucket. &lt;/p&gt;

&lt;p&gt;The class also includes methods for creating the S3 bucket if it doesn't exist and saving weather data to the bucket.&lt;/p&gt;
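To make that flow concrete, here is a minimal Python sketch of the two pure pieces of that logic: building the OpenWeather request URL and choosing the S3 object key. The function names and the key format are assumptions for illustration, not the repo's actual code.

```python
# Hedged sketch of the WeatherDashboard helpers described above.
# Function names and the key format are assumptions, not the repo's code.

def build_weather_url(city: str, api_key: str) -> str:
    """URL for OpenWeather's current-weather endpoint for one city."""
    return ("https://api.openweathermap.org/data/2.5/weather"
            f"?q={city}&appid={api_key}&units=metric")

def s3_key(folder: str, city: str) -> str:
    """Key under which a city's weather snapshot would land in the bucket."""
    return f"{folder}/{city.lower().replace(' ', '-')}.json"

print(build_weather_url("Lagos", "demo-key"))
print(s3_key("weather-data", "New York"))
```

The actual class wraps calls like these with `requests` (for the API) and `boto3` (for S3), plus the bucket-creation check mentioned above.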

&lt;p&gt;&lt;strong&gt;2. app.py&lt;/strong&gt;&lt;br&gt;
This defines a Flask web application that provides a simple interface for displaying and managing weather data for various cities.&lt;/p&gt;

&lt;p&gt;Its main function retrieves the stored cities' weather data from the S3 bucket. It also handles the search feature: when a user searches for a city, the app immediately fetches that city's weather data and stores it in the S3 bucket.&lt;/p&gt;
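As a rough illustration of the retrieval side, here's how object keys in the bucket could be mapped back to display names for the stored cities, assuming keys shaped like `weather-data/new-york.json`. This is a hedged sketch, not the repo's actual code.

```python
# Hypothetical helper: turn S3 keys like "weather-data/new-york.json"
# back into display names. The key format is an assumption for illustration.

def cities_from_keys(keys, folder):
    prefix = folder + "/"
    cities = []
    for key in keys:
        if key.startswith(prefix) and key.endswith(".json"):
            slug = key[len(prefix):-len(".json")]  # e.g. "new-york"
            cities.append(slug.replace("-", " ").title())
    return cities

print(cities_from_keys(["weather-data/lagos.json",
                        "weather-data/new-york.json"], "weather-data"))
# → ['Lagos', 'New York']
```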

&lt;p&gt;&lt;strong&gt;3. .env&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This file contains environment variables required by the application, such as the OpenWeather API key and AWS credentials. It ensures sensitive information is not hard-coded into the application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyhok808qsm88f3mu9lo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyhok808qsm88f3mu9lo.png" alt="Image description" width="800" height="184"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Dockerfile&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Dockerfile defines the steps to build a Docker image for the application. It installs the necessary dependencies, copies the application code, and sets the command to run the application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs2uos95adkobinaso4z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs2uos95adkobinaso4z.png" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Setting Up the Application
&lt;/h2&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Clone the repo:&lt;/p&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/NonsoEchendu/flask-weather-dashboard.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Change directory&lt;/p&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd 30days-weather-dashboard
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a .env file and place it in the project's root directory:&lt;/p&gt;

&lt;blockquote&gt;

&lt;pre class="highlight markdown"&gt;&lt;code&gt;OPENWEATHER_API_KEY=your=openweather-api-key
AWS_BUCKET_NAME=your-aws-bucket-name
AWS_BUCKET_FOLDER_NAME=your-aws-bucket-folder-name
AWS_ACCESS_KEY_ID=your-aws-access-key-id
AWS_SECRET_ACCESS_KEY=your-aws-secret-access-key
AWS_DEFAULT_REGION=your-aws-default-region
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Build the Docker image:&lt;/p&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t weather-dashboard .
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the docker container:&lt;/p&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --env-file .env -p 5000:5000 -d --name weather-dash weather-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;-p 5000:5000&lt;/code&gt; maps the container's port 5000 to port 5000 on your local machine, so you can view the app in your web browser&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol start="6"&gt;
&lt;li&gt;Open &lt;code&gt;http://localhost:5000&lt;/code&gt; in your web browser. When you search for a city, you should see something like this:&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxhfoc1m26di8mu7r22t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxhfoc1m26di8mu7r22t.png" alt="Image description" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By using Docker to deploy this application, we're applying core DevOps principles: containerization and automation. Instead of installing each of those tools locally, just two commands get our application running.&lt;/p&gt;

&lt;p&gt;A future improvement for this project will be setting up a CI/CD pipeline for full automation.&lt;/p&gt;




&lt;p&gt;Happy building and collaborating!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Deploying Jenkins on AWS, Installing and Configuring Artifactory and SonarQube on Separate EC2 Instances</title>
      <dc:creator>Nonso Echendu</dc:creator>
      <pubDate>Mon, 06 Jan 2025 16:43:30 +0000</pubDate>
      <link>https://dev.to/nonso_echendu_001/deploying-jenkins-on-aws-installing-and-configuring-artifactory-and-sonarqube-on-seperate-ec2-nm9</link>
      <guid>https://dev.to/nonso_echendu_001/deploying-jenkins-on-aws-installing-and-configuring-artifactory-and-sonarqube-on-seperate-ec2-nm9</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;We'll be deploying Jenkins on AWS and integrating it with Artifactory and SonarQube.&lt;/p&gt;

&lt;p&gt;We're using SonarQube to generate reports on coding standards, unit tests, code coverage, code complexity, bugs, and security recommendations, and JFrog's Artifactory for artifact storage.&lt;/p&gt;



&lt;h2&gt;
  
  
  A. AWS Console Deployment: Jenkins, Sonarqube and Artifactory
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Create a VPC&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We'll be using a VPC to securely host and manage the instances that will run Jenkins, SonarQube and Artifactory.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; Go to VPC&lt;/li&gt;
&lt;li&gt; Create VPC&lt;/li&gt;
&lt;li&gt; Select VPC and more&lt;/li&gt;
&lt;li&gt; Input a name for your VPC (e.g. TestVPC)&lt;/li&gt;
&lt;li&gt; IPv4 CIDR: 10.0.0.0/16&lt;/li&gt;
&lt;li&gt; No IPv6&lt;/li&gt;
&lt;li&gt; Number of public subnets: 1&lt;/li&gt;
&lt;li&gt; Number of private subnets: 1&lt;/li&gt;
&lt;li&gt; Click "Customize Subnets CIDR blocks" "&lt;/li&gt;
&lt;li&gt; Under Public subnet: 10.0.1.0/24&lt;/li&gt;
&lt;li&gt; Under Private subnet: 10.0.2.0/24&lt;/li&gt;
&lt;li&gt; Click "Create VPC"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z0bi0puaylx6xqgmk34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z0bi0puaylx6xqgmk34.png" alt="Image description" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Selecting "VPC and more" automatically creates the subnets, Internet Gateway and route tables&lt;/p&gt;
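A quick way to sanity-check the CIDR choices above is Python's standard `ipaddress` module, confirming both subnets fall inside the VPC's 10.0.0.0/16 block:

```python
# Verify the subnet CIDRs chosen above sit inside the VPC CIDR.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
for cidr in ("10.0.1.0/24", "10.0.2.0/24"):
    subnet = ipaddress.ip_network(cidr)
    print(cidr, "inside VPC:", subnet.subnet_of(vpc))
# → 10.0.1.0/24 inside VPC: True
# → 10.0.2.0/24 inside VPC: True
```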

&lt;p&gt;&lt;strong&gt;2. Create NAT Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC Dashboard → NAT Gateways → Create&lt;/li&gt;
&lt;li&gt;Select the subnet created while creating the VPC (you should see it with a similar name as your VPC)&lt;/li&gt;
&lt;li&gt;Connectivity type: public&lt;/li&gt;
&lt;li&gt;Allocate new EIP&lt;/li&gt;
&lt;li&gt;Create NAT Gateway&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Edit Route Table&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select route table associated with private subnet&lt;/li&gt;
&lt;li&gt;Edit routes&lt;/li&gt;
&lt;li&gt;Add route: Destination: 0.0.0.0/0, Target: your NAT Gateway&lt;/li&gt;
&lt;li&gt;Save changes&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;It should look something like this:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8ln84y2ug1hlz7r74oh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8ln84y2ug1hlz7r74oh.png" alt="Image description" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Now we move to creating EC2 Instances...&lt;/p&gt;

&lt;h3&gt;
  
  
  EC2 Instances Setup
&lt;/h3&gt;



&lt;p&gt;&lt;strong&gt;1. Jenkins Instance (Public Subnet)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 Dashboard → Launch Instance&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Name: test-jenkins&lt;/li&gt;
&lt;li&gt;AMI: Ubuntu 22.04 LTS&lt;/li&gt;
&lt;li&gt;Instance type: t2.micro&lt;/li&gt;
&lt;li&gt;Key pair: Create/select existing&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;



&lt;ul&gt;
&lt;li&gt;Click "Edit" (under network setting)&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;VPC: TestVPC&lt;/li&gt;
&lt;li&gt;Subnet: Public subnet created earlier&lt;/li&gt;
&lt;li&gt;Auto-assign public IP: Enable&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;



&lt;ul&gt;
&lt;li&gt;Click "Create security group"&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Name: jenkins-sg&lt;/li&gt;
&lt;li&gt;Inbound rules:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;SSH (22): Your IP&lt;/li&gt;
&lt;li&gt;Custom TCP (8080): Your IP or Anywhere (0.0.0.0/0)&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;


&lt;/blockquote&gt;



&lt;p&gt;It should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzft37x852c5bxyx85vs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzft37x852c5bxyx85vs.png" alt="Image description" width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;2. Artifactory Instance (Private Subnet)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 Dashboard → Launch Instance&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Name: Artifactory&lt;/li&gt;
&lt;li&gt;AMI: Ubuntu 22.04 LTS&lt;/li&gt;
&lt;li&gt;Instance type: t2.medium &lt;/li&gt;
&lt;li&gt;Key pair: Same as that used for Jenkins instance&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;



&lt;ul&gt;
&lt;li&gt;Click "Edit" (under network setting)&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;VPC: TestVPC&lt;/li&gt;
&lt;li&gt;Subnet: Private subnet created earlier&lt;/li&gt;
&lt;li&gt;Auto-assign public IP: Disable&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;



&lt;ul&gt;
&lt;li&gt;Click "Create security group"&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Name: artifactory-sg&lt;/li&gt;
&lt;li&gt;Inbound rules:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;SSH (22): jenkins-SG&lt;/li&gt;
&lt;li&gt;Custom TCP (8081-8082): jenkins-SG&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;


&lt;/blockquote&gt;



&lt;p&gt;It should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ucn119v4fdciw59jy7m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ucn119v4fdciw59jy7m.png" alt="Image description" width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;3. Sonarqube Instance (Private Subnet)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 Dashboard → Launch Instance&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Name: Sonarqube&lt;/li&gt;
&lt;li&gt;AMI: Ubuntu 22.04 LTS&lt;/li&gt;
&lt;li&gt;Instance type: t2.medium &lt;/li&gt;
&lt;li&gt;Key pair: Same as that used for Jenkins instance&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;



&lt;ul&gt;
&lt;li&gt;Click "Edit" (under network setting)&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;VPC: TestVPC&lt;/li&gt;
&lt;li&gt;Subnet: Private subnet created earlier&lt;/li&gt;
&lt;li&gt;Auto-assign public IP: Disable&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;



&lt;ul&gt;
&lt;li&gt;Click "Create security group"&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Name: sonarqube-sg&lt;/li&gt;
&lt;li&gt;Inbound rules:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;SSH (22): bastion-SG&lt;/li&gt;
&lt;li&gt;Custom TCP (9000): jenkins-SG&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;


&lt;/blockquote&gt;



&lt;p&gt;It should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8l7ty7m8iruxaox2a1qh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8l7ty7m8iruxaox2a1qh.png" alt="Image description" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Next, we'll create another instance to serve as a Bastion host.&lt;/p&gt;

&lt;p&gt;A bastion host is a server used to manage access to an internal or private network from an external network.&lt;/p&gt;

&lt;p&gt;In our case, we'll be using it to ssh into the private subnet for managing Artifactory and Sonarqube.&lt;/p&gt;
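One convenient way to set this up later (optional, a hedged sketch): an entry in your local `~/.ssh/config` using `ProxyJump`, so a single `ssh sonarqube` hops through the bastion automatically. The host aliases and angle-bracket placeholders below are assumptions you'd fill in yourself:

```sshconfig
# ~/.ssh/config — hypothetical aliases; replace the placeholders.
Host bastion
    HostName <bastion-public-ip>
    User ubuntu
    IdentityFile <path-to-key>

Host sonarqube
    HostName <sonarqube-private-ip>
    User ubuntu
    IdentityFile <path-to-key>
    ProxyJump bastion
```

The same pattern works for the Artifactory instance with its private IP.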



&lt;p&gt;&lt;strong&gt;4. Bastion Host (Public Subnet)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 Dashboard → Launch Instance&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Name: Bastion&lt;/li&gt;
&lt;li&gt;AMI: Ubuntu 22.04 LTS&lt;/li&gt;
&lt;li&gt;Instance type: t2.micro&lt;/li&gt;
&lt;li&gt;Network: Public subnet&lt;/li&gt;
&lt;li&gt;Security Group:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Allow SSH from your IP&lt;/li&gt;
&lt;li&gt;Allow outbound to Artifactory SG&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;


&lt;/blockquote&gt;

&lt;p&gt;It should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80jr5hse41wxi5mo834f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80jr5hse41wxi5mo834f.png" alt="Image description" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;






&lt;h2&gt;
  
  
  B. Installation of Jenkins, Artifactory and Sonarqube
&lt;/h2&gt;



&lt;p&gt;Now we're done setting up and configuring our instances. Next, we'll install our tools: Jenkins, Artifactory and SonarQube.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Jenkins Installation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To install Jenkins, we'll be using Docker. Specifically, we'll spin up a Jenkins container using a docker-compose.yaml file. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSH into the Jenkins instance: &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i &amp;lt;identity-file&amp;gt; ubuntu@&amp;lt;jenkins-public-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Install Docker:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
sudo apt-get install docker.io -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Add your user to the docker group with this command:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -aG docker $USER
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Log out and log back in&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test that docker is installed properly:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;



&lt;blockquote&gt;
&lt;p&gt;something like this should show:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5uught1hyapajre5gs6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5uught1hyapajre5gs6.png" alt="Image description" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;



&lt;ul&gt;
&lt;li&gt;Next, we install docker compose&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
mkdir -p $DOCKER_CONFIG/cli-plugins
curl -SL https://github.com/docker/compose/releases/download/v2.32.0/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose
chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Confirm docker compose is installed&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker compose version
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;You should see output like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvtvr6879fr27zpl0p93.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvtvr6879fr27zpl0p93.png" alt="Image description" width="800" height="164"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create a file with the name &lt;code&gt;docker-compose.yaml&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Paste this into the file: &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    privileged: true
    user: root 
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    command: |
      sh -c "
        apt-get update &amp;amp;&amp;amp; \
        apt-get install -y sudo &amp;amp;&amp;amp; \
        chown -R 1000:1000 /var/jenkins_home &amp;amp;&amp;amp; \
        apt-get -y install docker.io &amp;amp;&amp;amp; \
        groupadd -f docker &amp;amp;&amp;amp; \
        usermod -aG docker jenkins &amp;amp;&amp;amp; \
        echo 'jenkins ALL=(ALL) NOPASSWD: ALL' &amp;gt;&amp;gt; /etc/sudoers &amp;amp;&amp;amp; \
        chown jenkins:jenkins /var/run/docker.sock &amp;amp;&amp;amp; \
        chmod 666 /var/run/docker.sock &amp;amp;&amp;amp; \
        su jenkins -c /usr/local/bin/jenkins.sh"
volumes:
  jenkins_home:
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;



&lt;blockquote&gt;
&lt;p&gt;This spins up a Jenkins container and installs Docker inside it. It also installs sudo, which is needed to run some commands in our Jenkins pipeline. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Access Jenkins UI by opening in your browser: &lt;code&gt;http://&amp;lt;jenkins-public-ip&amp;gt;:8080&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Follow the prompt and Install selected plugins&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;



&lt;p&gt;&lt;strong&gt;2. Sonarqube Installation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Like we did for Jenkins, we'll also install SonarQube using Docker.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSH to sonarqube from Bastion&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# SSH first to bastion
ssh -i &amp;lt;identity-file&amp;gt; ubuntu@&amp;lt;bastionpublic-ip&amp;gt;
# SSH to Sonarqube
ssh -i &amp;lt;identity-file&amp;gt; ubuntu@&amp;lt;sonarqube-private-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Install docker&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Follow the same Docker installation steps from the Jenkins section above.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create a file with the name &lt;code&gt;docker-compose.yaml&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Paste this into the file: &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  sonarqube:
    image: sonarqube:lts
    container_name: sonarqube
    ports:
      - "9000:9000"
    volumes:
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_logs:/opt/sonarqube/logs
    restart: always
volumes:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_logs:
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;docker ps&lt;/code&gt; to ensure Sonarqube is running&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access Sonarqube UI by opening in your browser: &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Because we set up SonarQube in a private subnet, it doesn't have a public IP. So we'll access the UI using SSH tunnelling (SSH port forwarding) through the Bastion host.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From your local machine, run this (we're forwarding traffic from our local machine's port 9000 to SonarQube's port 9000 in the private subnet, through the Bastion host):&lt;/li&gt;
&lt;/ul&gt;


&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# From your local machine, run
ssh -L 9000:&amp;lt;sonarqube-private-ip&amp;gt;:9000 ubuntu@&amp;lt;bastion-public-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;



&lt;blockquote&gt;
&lt;p&gt;On your local machine's browser, enter &lt;code&gt;http://localhost:9000&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;It should open something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1rxldveivm46s1dgrpo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1rxldveivm46s1dgrpo.png" alt="Image description" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;P.S. Because my local machine's port 9000 was already in use, I'm using port 9023 in the screenshot.&lt;/p&gt;

&lt;p&gt;Default login:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;username: admin&lt;/li&gt;
&lt;li&gt;password: admin&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;



&lt;p&gt;&lt;strong&gt;3. Artifactory Installation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Like we did for both Jenkins and SonarQube, we'll install Artifactory using Docker.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSH to Artifactory from Bastion&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# From your local machine, SSH first to bastion
ssh -i &amp;lt;identity-file&amp;gt; ubuntu@&amp;lt;bastionpublic-ip&amp;gt;
# SSH to Artifactory
ssh -i &amp;lt;identity-file&amp;gt; ubuntu@&amp;lt;artifactory-private-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Install docker&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Follow the same Docker installation steps from the Jenkins section above.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create a file with the name &lt;code&gt;docker-compose.yaml&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Paste this into the file: &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  artifactory-service:
    image: docker.bintray.io/jfrog/artifactory-oss:7.49.6
    container_name: artifactory
    restart: always
    networks:
      - ci_net
    ports:
      - 8081:8081
      - 8082:8082
    volumes:
      - artifactory:/var/opt/jfrog/artifactory
volumes:
  artifactory:
networks:
  ci_net:
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;docker ps&lt;/code&gt; to ensure Artifactory is running&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access Artifactory UI by opening in your browser: &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Just like for Sonarqube, we'll be accessing the Artifactory UI using SSH port forwarding from the Bastion host.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run this:&lt;/li&gt;
&lt;/ul&gt;


&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# From your local machine
ssh -L 8082:&amp;lt;artifactory-private-ip&amp;gt;:8082 ubuntu@&amp;lt;bastion-public-ip&amp;gt;
# 8082 is Artifactory's UI port
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;



&lt;blockquote&gt;
&lt;p&gt;On your local machine browser enter &lt;code&gt;http://localhost:8082&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;It should open something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0dusbvhy5575sed2z0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0dusbvhy5575sed2z0s.png" alt="Image description" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Default login:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;username: admin&lt;/li&gt;
&lt;li&gt;password: password&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Setting up Jenkins
&lt;/h2&gt;



&lt;p&gt;Now we're going to install the plugins needed to configure Artifactory and SonarQube in our Jenkins pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First, we'll install the Artifactory plugin&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Go to Manage Jenkins &amp;gt; Plugins &amp;gt; Available Plugins, and search for &lt;code&gt;artifactory&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select and tick the first one, then click install:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hbyz5nrh2tuu5wlbfvb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hbyz5nrh2tuu5wlbfvb.png" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Search also for &lt;code&gt;sonarqube&lt;/code&gt;. Select and tick the first one, then click install:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47ik2u37l635v6vfloss.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47ik2u37l635v6vfloss.png" alt="Image description" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, let's configure Artifactory and SonarQube on our Jenkins server.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Configure Sonarqube&lt;/strong&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Click Manage jenkins &amp;gt; System Configurations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scroll to &lt;code&gt;Sonarqube servers&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;code&gt;Add Sonarqube&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Name: Sonarqube (it must match the value you set for the corresponding env variable in your pipeline)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Server URL: http://&amp;lt;sonarqube-private-ip&amp;gt;:9000&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Server authentication token. Let's create one:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;In your SonarQube UI, go to My Account (top right) &amp;gt; Security &amp;gt; Generate Token. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Type: User token&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Expiration: no expiration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Generate&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure to copy the generated token &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Add Server authentication token&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kind: Secret text&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scope: Global&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Secret: the generated token you copied&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ID: sonarqube-token (or whichever name of choice)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Add&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Select sonarqube token&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Save&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;It should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43zskstle0hmhcjwzuxz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43zskstle0hmhcjwzuxz.png" alt="Image description" width="800" height="457"&gt;&lt;/a&gt; &lt;/p&gt;
&lt;/blockquote&gt;
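&lt;p&gt;Once the server entry is saved, a Jenkinsfile references it by the exact name you entered. Here's a minimal sketch (the environment variable and stage contents are illustrative, assuming the name &lt;code&gt;Sonarqube&lt;/code&gt; configured above):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Sketch only: 'Sonarqube' must match the server name set in Manage Jenkins
environment {
    SONARQUBE = 'Sonarqube'
}
// ...
stage('SonarQube Analysis') {
    steps {
        withSonarQubeEnv(SONARQUBE) {
            sh 'mvn clean verify sonar:sonar -DskipTests'
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;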



&lt;p&gt;&lt;strong&gt;Configure Artifactory&lt;/strong&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Click Manage Jenkins &amp;gt; System Configuration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scroll to &lt;code&gt;JFrog&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;code&gt;Add JFrog Platform Instance&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Instance ID: artifactory1 (this must match the value of the corresponding env variable in your pipeline)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;JFrog Platform URL: http://public-ip-address:8082&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Default Deployer Credentials: &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Username: admin&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Password: (new password you set when JFrog prompted you to change default password)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Test Connection&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It should show something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwq2zy4pvmdyg0oyrz1ol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwq2zy4pvmdyg0oyrz1ol.png" alt="Image description" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Click Save&lt;/li&gt;
&lt;/ul&gt;
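&lt;p&gt;The Instance ID is how the pipeline looks this server up. A minimal sketch, assuming the &lt;code&gt;artifactory1&lt;/code&gt; ID configured above:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Sketch only: 'artifactory1' must match the JFrog Platform Instance ID
def server = Artifactory.server('artifactory1')
&lt;/code&gt;&lt;/pre&gt;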

&lt;p&gt;&lt;strong&gt;Configure Maven&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Click Add maven&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Name: enter a name of your choice&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tick Install automatically&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Version: Select a recent stable one (3.9.9 in this case)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Save&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;It should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwtn3def4c7mca01vfzc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwtn3def4c7mca01vfzc.png" alt="Image description" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;



&lt;p&gt;&lt;strong&gt;Create Repository in Artifactory&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We need to create a local repository in Artifactory to upload artifacts to. Here's how to do it, step by step:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click Add Repositories (top right) &amp;gt; Local Repository&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6lxiianr9xsmaba2spr1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6lxiianr9xsmaba2spr1.png" alt="Image description" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;



&lt;ul&gt;
&lt;li&gt;Select Maven as package type&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2s7bo7xsl7r92mpyc7x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2s7bo7xsl7r92mpyc7x.png" alt="Image description" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Repository key: test001 (the same name used in the Upload to Artifactory stage when uploading the artifact)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create Local Repository&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
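&lt;p&gt;The repository key becomes the first segment of the upload target path in the pipeline's upload spec. A minimal sketch (the file pattern and target path are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Sketch only: 'test001' must match the repository key created above
def uploadSpec = """{
  "files": [{
    "pattern": "target/*.jar",
    "target": "test001/com/example/app/"
  }]
}"""
&lt;/code&gt;&lt;/pre&gt;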




&lt;p&gt;And we're all set up!&lt;/p&gt;




&lt;p&gt;Now we can go ahead and create a pipeline script for a Java application built with Spring Boot.&lt;/p&gt;

&lt;p&gt;If you'd like to see a sample Jenkins pipeline, I've written an article on the steps I took to create the pipeline script. &lt;a href="https://dev.to/nonso_echendu_001/deploying-jenkins-in-aws-with-integration-to-artifactory-and-sonarqube-2348"&gt;Check here&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Results:
&lt;/h2&gt;

&lt;p&gt;And so we've created and run a successful build:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8fqfhavzjtx8leb708a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8fqfhavzjtx8leb708a.png" alt="Image description" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;When we check our SonarQube, we see a newly created project from our build. We can also see reports, which we can expand and read in depth: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnuhc433uayvca57a07wf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnuhc433uayvca57a07wf.png" alt="Image description" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;



&lt;p&gt;And in the Artifactory UI dashboard, we can see the newly created artifact under test001:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6l0vmqefyy3jgq5fljnl.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6l0vmqefyy3jgq5fljnl.jpeg" alt="Image description" width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Integrating Artifactory and SonarQube into Jenkins streamlines your Continuous Integration (CI) and Continuous Delivery (CD) pipeline, improves the quality of your code, and increases productivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Benefits
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;- Continuous Delivery:&lt;/strong&gt; By linking Jenkins, Artifactory, and your deployment pipelines, you can automate the entire process from building to testing, and finally deploying the artifacts. Artifactory stores these artifacts until they are deployed to different environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Integration with Docker:&lt;/strong&gt; Jenkins can build Docker images and push them to Artifactory. Artifactory supports Docker image repositories, so you can manage and store Docker images as part of your CI/CD pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Early Detection of Code Issues:&lt;/strong&gt; By running static analysis in the pipeline, SonarQube can quickly detect problems in the code base before they become bigger issues. This allows developers to fix problems early, saving time and reducing the cost of fixing issues later in the cycle. &lt;/p&gt;




&lt;p&gt;Happy Building and Collaborating!&lt;/p&gt;




</description>
    </item>
    <item>
      <title>Jenkins Pipeline that Integrates to Artifactory and SonarQube</title>
      <dc:creator>Nonso Echendu</dc:creator>
      <pubDate>Mon, 06 Jan 2025 12:32:07 +0000</pubDate>
      <link>https://dev.to/nonso_echendu_001/deploying-jenkins-in-aws-with-integration-to-artifactory-and-sonarqube-2348</link>
      <guid>https://dev.to/nonso_echendu_001/deploying-jenkins-in-aws-with-integration-to-artifactory-and-sonarqube-2348</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;This Jenkins pipeline automates the build, test, and deployment process for a Spring Boot application. The pipeline includes source control management, code quality analysis with SonarQube, and artifact publishing to Artifactory.&lt;/p&gt;

&lt;p&gt;Now let's go in depth into each stage and step of the pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pipeline Parameters
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjq7rwmhplimtdkcmwu6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjq7rwmhplimtdkcmwu6.png" alt="Pipeline params" width="800" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;FOLDER_PATH&lt;/code&gt;: Target directory for the project (default: springboot2-health-record)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GROUP_ID&lt;/code&gt;: Maven project group ID (default: com.danielpm1982)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ARTIFACT_ID&lt;/code&gt;: Maven project artifact ID (default: springboot2-health-record)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;VERSION&lt;/code&gt;: Project version (default: 0.0.1-SNAPSHOT)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GIT_REPO&lt;/code&gt;: Git repository URL&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These parameters come from the application source code's POM file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Environment Variables
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pf7lkfv376ljmuqtt1j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pf7lkfv376ljmuqtt1j.png" alt="Environment variables" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;SONARQUBE&lt;/code&gt;: SonarQube server instance name&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ARTIFACTORY_SERVER&lt;/code&gt;: Artifactory server instance name&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The values of these variables come from the SonarQube and Artifactory instance names you set when configuring them in Jenkins system configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pipeline Stages
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Clone Repository
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7nzv9nx719gu4mbvnymb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7nzv9nx719gu4mbvnymb.png" alt="Clone Repository Stage" width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clones the Git repository if the target directory doesn't exist. This prevents redundant cloning if the directory is already present.&lt;/p&gt;
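&lt;p&gt;The clone-if-absent behavior described above can be sketched as a stage like this (illustrative, using the pipeline's &lt;code&gt;FOLDER_PATH&lt;/code&gt; and &lt;code&gt;GIT_REPO&lt;/code&gt; parameters):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;stage('Clone Repository') {
    steps {
        sh '''
            # Only clone if the target directory doesn't already exist
            if [ ! -d "${FOLDER_PATH}" ]; then
                git clone "${GIT_REPO}" "${FOLDER_PATH}"
            fi
        '''
    }
}
&lt;/code&gt;&lt;/pre&gt;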

&lt;h3&gt;
  
  
  2. Install Maven
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5l6tcjrh4gcahpq2lhap.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5l6tcjrh4gcahpq2lhap.png" alt="Install Maven stage" width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sets up the Maven build environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Updates package lists&lt;/li&gt;
&lt;li&gt;Installs Maven&lt;/li&gt;
&lt;li&gt;Configures Maven environment variables&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;sudo apt-get install -y maven&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This command installs Maven. &lt;br&gt;
The -y flag automatically confirms all prompts that would normally ask you to approve the installation of packages. Without -y, it would prompt you for confirmation (e.g., "Do you want to continue? [Y/n]").&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;sudo rm -rf /var/lib/apt/lists/*&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This command removes cached package lists that APT uses to store metadata about available packages. &lt;br&gt;
We’re doing this to clean up any unnecessary cached data and reduce disk usage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;echo "export MAVEN_HOME=/usr/share/maven" &amp;gt;&amp;gt; ~/.bashrc&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This command sets an environment variable MAVEN_HOME to the directory where Maven is installed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;. ~/.bashrc&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After modifying .bashrc (in the previous step), this command ensures that the environment variable MAVEN_HOME is immediately available in the current shell session without needing to restart the terminal.&lt;/p&gt;
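&lt;p&gt;Taken together, the commands above can be sketched as a single stage (illustrative, not the exact pipeline):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;stage('Install Maven') {
    steps {
        sh '''
            sudo apt-get update
            sudo apt-get install -y maven
            sudo rm -rf /var/lib/apt/lists/*
            echo "export MAVEN_HOME=/usr/share/maven" &amp;gt;&amp;gt; ~/.bashrc
            . ~/.bashrc
        '''
    }
}
&lt;/code&gt;&lt;/pre&gt;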

&lt;h3&gt;
  
  
  3. SonarQube Analysis
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxn7mybciy8dsug1qlkf3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxn7mybciy8dsug1qlkf3.png" alt="Sonarqube analysis stage" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Performs static code analysis using SonarQube:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Executes Maven clean and verify&lt;/li&gt;
&lt;li&gt;Runs SonarQube analysis&lt;/li&gt;
&lt;li&gt;Skips tests during analysis phase&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;mvn clean verify sonar:sonar -DskipTests&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This Maven command cleans the project, runs the verify phase (with tests skipped via &lt;code&gt;-DskipTests&lt;/code&gt;), and performs SonarQube analysis.&lt;/p&gt;

&lt;p&gt;We’re wrapping this shell command in a &lt;code&gt;withSonarQubeEnv(SONARQUBE)&lt;/code&gt; block, which ensures that the pipeline environment is configured with the necessary SonarQube environment variables (e.g., authentication tokens, server URL) so that Maven or other build tools can communicate with SonarQube.&lt;/p&gt;

&lt;p&gt;This creates a project in SonarQube, which we can confirm from the UI at &lt;code&gt;http://public-ip-address:9000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5cr8cko5mjnq55w07cw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5cr8cko5mjnq55w07cw.png" alt="Image description" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Build
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8c6oi2uhp6nruslqyub1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8c6oi2uhp6nruslqyub1.png" alt="build stage" width="800" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Builds the application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Executes Maven clean and package&lt;/li&gt;
&lt;li&gt;Skips tests during build phase&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;mvn clean package -DskipTests&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This Maven command cleans the project first, then builds and packages it into a deployable artifact (for our application, a JAR file) located in the &lt;code&gt;target&lt;/code&gt; folder.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Quality Gate
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fstkx64pndeehpmhqx2va.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fstkx64pndeehpmhqx2va.png" alt="Image description" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enforces code quality standards:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Waits for SonarQube analysis completion&lt;/li&gt;
&lt;li&gt;Times out after 5 minutes&lt;/li&gt;
&lt;li&gt;Fails pipeline if quality gates aren't met&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;timeout(time: 5, unit: 'MINUTES') { waitForQualityGate abortPipeline: true }&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;waitForQualityGate&lt;/code&gt; function is provided by the SonarQube Jenkins plugin and waits for the quality gate analysis results from SonarQube.&lt;/p&gt;

&lt;p&gt;If the SonarQube quality gate fails, the &lt;code&gt;abortPipeline: true&lt;/code&gt; option ensures the pipeline is aborted: it stops and does not continue to subsequent stages.&lt;/p&gt;
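&lt;p&gt;In context, the quality gate check typically sits in its own stage right after the analysis stage. A minimal sketch:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;stage('Quality Gate') {
    steps {
        // Wait up to 5 minutes for SonarQube's verdict; abort on failure
        timeout(time: 5, unit: 'MINUTES') {
            waitForQualityGate abortPipeline: true
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;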

&lt;h3&gt;
  
  
  6. Publish to Artifactory
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gn593bjdrakm1w1k0b0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gn593bjdrakm1w1k0b0.png" alt="publish to artifactory stage" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Publishes artifacts to JFrog Artifactory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Builds project using Maven&lt;/li&gt;
&lt;li&gt;Uploads JAR file and POM file&lt;/li&gt;
&lt;li&gt;Uses specified group/artifact structure&lt;/li&gt;
&lt;li&gt;Sets artifact properties (type and status)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This script uploads artifacts to an Artifactory server after building the project with Maven. &lt;/p&gt;

&lt;p&gt;Let's look at it closely:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;def server = Artifactory.server(ARTIFACTORY_SERVER)&lt;/code&gt;:&lt;br&gt;
This initializes the connection to the Artifactory server, using the &lt;code&gt;ARTIFACTORY_SERVER&lt;/code&gt; environment variable defined above.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;def buildInfo = Artifactory.newBuildInfo()&lt;/code&gt;:&lt;br&gt;
This creates a new build-info instance for uploading to Artifactory.&lt;br&gt;
The build info stores metadata about the build, in this case the artifacts that will be uploaded to Artifactory. It helps us track things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which version of the application the artifact belongs to.&lt;/li&gt;
&lt;li&gt;Which Jenkins job or build pipeline created the artifact.&lt;/li&gt;
&lt;li&gt;What Git commit or branch the artifact was built from.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, we're creating a JSON object, &lt;code&gt;uploadSpec&lt;/code&gt;, that contains the file patterns and target paths for the artifacts being uploaded to Artifactory. There are two files we want to publish, the JAR and POM files, so we'll have an array of two objects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first object specifies the JAR file to upload.&lt;/li&gt;
&lt;li&gt;The second object specifies the POM file for the Maven project.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;keys:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pattern&lt;/code&gt;: The file pattern to match the artifacts (e.g., the JAR and POM files).&lt;br&gt;
&lt;code&gt;target&lt;/code&gt;: The target path in Artifactory where the artifact will be uploaded.&lt;br&gt;
&lt;code&gt;props&lt;/code&gt;: Custom properties that can be added to the artifacts (e.g., type, status).&lt;br&gt;
&lt;code&gt;recursive&lt;/code&gt;: Specifies whether to recursively find files (set to false here).&lt;/p&gt;
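&lt;p&gt;Putting those keys together, the upload spec might look like this (a sketch: the paths are built from the pipeline parameters, and the group path and property values are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Sketch only: targets assume the test001 repo and the com.danielpm1982 group
def uploadSpec = """{
  "files": [
    {
      "pattern": "${FOLDER_PATH}/target/${ARTIFACT_ID}-${VERSION}.jar",
      "target": "test001/com/danielpm1982/${ARTIFACT_ID}/${VERSION}/",
      "props": "type=jar;status=ready",
      "recursive": "false"
    },
    {
      "pattern": "${FOLDER_PATH}/pom.xml",
      "target": "test001/com/danielpm1982/${ARTIFACT_ID}/${VERSION}/${ARTIFACT_ID}-${VERSION}.pom",
      "props": "type=pom;status=ready",
      "recursive": "false"
    }
  ]
}"""
&lt;/code&gt;&lt;/pre&gt;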

&lt;p&gt;Then we have two calls that upload to Artifactory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;server.upload spec: uploadSpec, buildInfo: buildInfo&lt;/code&gt;&lt;br&gt;
This uploads the files specified in the &lt;code&gt;uploadSpec&lt;/code&gt; JSON object to the Artifactory server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;server.publishBuildInfo buildInfo&lt;/code&gt;&lt;br&gt;
This publishes the build information (metadata) to Artifactory. It allows Artifactory to store information about the build (such as the JAR and POM files).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can view our uploaded artifact by going to &lt;code&gt;http://public-ip-address:8082&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftd4ibdoallxabyvvgdn3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftd4ibdoallxabyvvgdn3.png" alt="Image description" width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
