Shreyas Yadav

I Built a Nightly Pipeline That Deploys My App While I Sleep

Stack: Next.js 14 · Express.js · MySQL 8 · Docker · GitHub Actions · AWS EC2 · AWS ECR · AWS RDS · Route 53 · Nginx · Let's Encrypt

I built a Job Application Tracker, a full-stack SPA with GitHub OAuth login, and set up an automated nightly pipeline that builds Docker images, runs smoke tests on a temporary EC2, pushes verified images to ECR, and deploys to a persistent QA server, all without manual intervention. Here's exactly how I did it.


Table of Contents

  1. Architecture Overview
  2. The Application: Job Application Tracker
  3. Dockerizing the App
  4. Local Development with Docker Compose
  5. AWS Infrastructure Setup
  6. GitHub Actions CI/CD Pipeline
  7. Domain Name with Route 53 and SSL with Let's Encrypt
  8. Nginx as a Reverse Proxy
  9. Security Best Practices
  10. Lessons Learned

1. Architecture Overview

The project is split into two repositories, a separation of concerns that keeps application code and infrastructure code independent:

  • job-application-tracker: application source code (frontend, backend, Dockerfiles, local docker-compose)
  • job-application-tracker-infra: infrastructure, i.e. GitHub Actions workflows, Nginx config, smoke tests, prod docker-compose

High-Level Architecture

Developer pushes to source repo
         │
         ▼
  GitHub Actions (infra repo)
  Nightly at 2 AM UTC
         │
  ┌──────▼──────┐
  │  1. BUILD   │  Build Docker images with timestamp tag
  │             │  Push to AWS ECR
  └──────┬──────┘
         │
  ┌──────▼──────┐
  │  2. SMOKE   │  Launch temporary EC2 (t3.micro)
  │    TEST     │  Run containers, execute curl tests
  │             │  Terminate EC2 (pass or fail)
  └──────┬──────┘
         │ (only if tests pass)
  ┌──────▼──────┐
  │  3. PROMOTE │  Retag timestamp → :latest in ECR
  └──────┬──────┘
         │
  ┌──────▼──────┐
  │  4. DEPLOY  │  SSH-less deploy via AWS SSM
  │    TO QA    │  Pull :latest from ECR
  │             │  docker compose up on persistent EC2
  └─────────────┘
         │
  ┌──────▼──────┐
  │  QA EC2     │  Nginx (SSL/HTTPS)
  │  shri.      │  Frontend :3000 → /
  │  software   │  Backend  :5000 → /api/
  └──────┬──────┘
         │
  ┌──────▼──────┐
  │  AWS RDS    │  MySQL 8 (persistent, managed)
  └─────────────┘

2. The Application: Job Application Tracker

Job Application Tracker, live at shri.software
The finished app running at shri.software with HTTPS enforced

The app lets users track their job applications (Applied → Interview → Offer / Rejected). Authentication is handled via GitHub OAuth using NextAuth.js, so there are no usernames or passwords to manage.

Tech Stack

  • Frontend: Next.js 14.2 with React 18, Tailwind CSS, NextAuth.js
  • Backend: Express.js 4.18, JWT middleware
  • Database: MySQL 8 (local via Docker, production via AWS RDS)

Database Schema

-- Users created on first GitHub login
CREATE TABLE users (
  id                  INT AUTO_INCREMENT PRIMARY KEY,
  email               VARCHAR(255) UNIQUE NOT NULL,
  name                VARCHAR(255),
  avatar_url          TEXT,
  provider            VARCHAR(50),
  provider_account_id VARCHAR(255),
  created_at          TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at          TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  UNIQUE KEY (provider, provider_account_id)
);

-- Each row belongs to one user (CASCADE delete keeps DB clean)
CREATE TABLE job_applications (
  id           INT AUTO_INCREMENT PRIMARY KEY,
  company      VARCHAR(255) NOT NULL,
  role         VARCHAR(255) NOT NULL,
  status       ENUM('applied','interview','offer','rejected') DEFAULT 'applied',
  date_applied DATE,
  user_id      INT NOT NULL,
  FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE,
  created_at   TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at   TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

App dashboard showing an application entry
Dashboard view, filter by status, edit or delete entries inline

Backend API

Method  Route                  Auth  Description
GET     /health                None  Health check for monitoring
POST    /api/users/upsert      None  Called by NextAuth on login
GET     /api/applications      JWT   List the user's applications
POST    /api/applications      JWT   Create an application
PUT     /api/applications/:id  JWT   Update an application
DELETE  /api/applications/:id  JWT   Delete an application

Authentication Flow

NextAuth.js handles the GitHub OAuth dance. The key insight: after OAuth completes, we generate a JWT signed with NEXTAUTH_SECRET that the frontend sends to the backend on every API call.

// frontend/src/app/api/auth/[...nextauth]/route.js (simplified)
export const authOptions = {
  providers: [GitHubProvider({ clientId, clientSecret })],
  callbacks: {
    async signIn({ user, account }) {
      // Store GitHub identity in our DB on first login
      await fetch(`${process.env.NEXT_PUBLIC_API_URL}/api/users/upsert`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' }, // without this, express.json() won't parse the body
        body: JSON.stringify({
          email: user.email,
          name: user.name,
          provider: account.provider,
          provider_account_id: account.providerAccountId,
        }),
      });
      return true;
    },
    async session({ session, token }) {
      // Mint a JWT for API calls; embed it in the session
      session.backendToken = jwt.sign(
        { userId: token.userId, email: token.email },
        process.env.NEXTAUTH_SECRET,
        { expiresIn: '1h' }
      );
      return session;
    },
  },
};

The backend verifies this token on every protected route:

// backend/src/middleware/auth.js
const jwt = require('jsonwebtoken');

const auth = (req, res, next) => {
  const token = req.headers.authorization?.split(' ')[1];
  if (!token) return res.status(401).json({ error: 'No token' });
  try {
    // jwt.verify throws if the signature is invalid or the token expired
    req.user = jwt.verify(token, process.env.NEXTAUTH_SECRET); // { userId, email }
    next();
  } catch {
    return res.status(401).json({ error: 'Invalid or expired token' });
  }
};

3. Dockerizing the App

Backend Dockerfile

FROM node:20-alpine

# Non-root user for security
RUN addgroup -S shri && adduser -S shri -G shri

WORKDIR /app
COPY package.json .
RUN npm install --production
COPY src/ ./src/

RUN chown -R shri:shri /app
USER shri

EXPOSE 5000
CMD ["node", "src/index.js"]

Key decisions:

  • Alpine base: minimal attack surface and a much smaller image (~50 MB vs ~900 MB for full node:20)
  • Non-root user: if the container is compromised, the attacker doesn't start out with root privileges
  • --production install: dev dependencies are excluded from the final image

Frontend Dockerfile (Multi-Stage Build)

# --- Stage 1: Build ---
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# --- Stage 2: Runtime ---
FROM node:20-alpine
RUN addgroup -S shri && adduser -S shri -G shri
WORKDIR /app

# Copy only the artifacts needed to run
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./

RUN chown -R shri:shri /app
USER shri

EXPOSE 3000
CMD ["node_modules/.bin/next", "start"]

Multi-stage builds are critical for Next.js: the build stage pulls in all dev dependencies and compiles the app, while the final stage contains only what's needed to serve it, shrinking the image dramatically. (One caveat: copying node_modules from the builder still carries dev dependencies along; Next.js's standalone output mode can trim this further.)


4. Local Development with Docker Compose

App running locally at localhost:9000
The app running locally via Docker Compose at localhost:9000

The source repo's docker-compose.yml wires everything together for local development with a single command:

# docker-compose.yml
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - db_data:/var/lib/mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      retries: 5

  backend:
    build: ./backend
    ports:
      - "10000:5000"
    environment:
      DB_HOST: db
      DB_NAME: ${MYSQL_DATABASE}
      DB_USER: ${MYSQL_USER}
      DB_PASSWORD: ${MYSQL_PASSWORD}
      NEXTAUTH_SECRET: ${NEXTAUTH_SECRET}
    depends_on:
      db:
        condition: service_healthy  # Wait for MySQL, not just the container

  frontend:
    build: ./frontend
    ports:
      - "9000:3000"
    environment:
      NEXTAUTH_SECRET: ${NEXTAUTH_SECRET}
      NEXTAUTH_URL: http://localhost:9000
      GITHUB_CLIENT_ID: ${GITHUB_CLIENT_ID}
      GITHUB_CLIENT_SECRET: ${GITHUB_CLIENT_SECRET}
      NEXT_PUBLIC_API_URL: http://localhost:10000
    depends_on:
      - backend

volumes:
  db_data:

To get started locally:

# 1. Clone the source repo
git clone https://github.com/Shreyas-Yadav/job-application-tracker
cd job-application-tracker

# 2. Copy and fill in the env file
cp .env.example .env
# Edit .env with your GitHub OAuth credentials and secrets

# 3. Launch everything
docker compose up

# App available at:
# Frontend: http://localhost:9000
# Backend:  http://localhost:10000

docker compose up --build output in terminal
All three containers (db, backend, frontend) starting up successfully

The depends_on with condition: service_healthy is important: the backend waits for MySQL to be truly ready (not just for the container to be running) before starting.
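That healthcheck-gated startup is just a poll-until-ready loop. A hypothetical sketch of the same idea in Node (the names and the fake ping are illustrative):

```javascript
// Poll a readiness check until it passes, instead of racing the DB at startup.
async function waitFor(check, { retries = 10, intervalMs = 50 } = {}) {
  for (let i = 1; i <= retries; i++) {
    if (await check()) return i; // healthy on attempt i
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error('service never became healthy');
}

// Demo with a fake "mysqladmin ping" that succeeds on the third attempt:
let attempts = 0;
const fakePing = async () => ++attempts >= 3;

waitFor(fakePing).then((n) => console.log(`healthy after ${n} attempts`)); // healthy after 3 attempts
```

Compose does exactly this with the mysqladmin ping healthcheck above (every 10s, up to 5 retries) before it considers the db service healthy.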


5. AWS Infrastructure Setup

5.1 GitHub OAuth App

Go to GitHub → Settings → Developer Settings → OAuth Apps → New OAuth App:

  • Homepage URL: https://shri.software
  • Authorization callback URL: https://shri.software/api/auth/callback/github

Save the Client ID and Client Secret; these go into GitHub Actions secrets.

5.2 AWS ECR (Elastic Container Registry)

Create two private repositories to store Docker images:

AWS ECR, both repositories
Both ECR repositories created and ready to receive images from the CI pipeline

aws ecr create-repository --repository-name job-tracker-backend --region us-east-1
aws ecr create-repository --repository-name job-tracker-frontend --region us-east-1

5.3 AWS RDS (MySQL 8)

Create a MySQL 8 RDS instance in the AWS Console:

  1. Engine: MySQL 8.0
  2. Instance class: db.t3.micro (Free Tier eligible)
  3. Storage: 20 GB gp2
  4. Important: Place in the same VPC as your EC2 instances
  5. Set a master username and password
  6. Create a database: jobtracker
  7. Note the endpoint; it looks like job-tracker-db.xxx.us-east-1.rds.amazonaws.com

AWS RDS, job-tracker-db instance
RDS MySQL 8 instance showing "Available" status with the endpoint used by the backend

Why RDS instead of a containerized DB? Managed backups, automated patching, and persistence across EC2 restarts without dealing with EBS volumes.

5.4 QA EC2 Instance

Launch a persistent EC2 instance (Ubuntu 22.04, t3.micro):

# After SSH-ing in, install Docker and the SSM agent
sudo apt-get update
sudo apt-get install -y docker.io docker-compose-plugin

# Enable SSM agent (usually pre-installed on Ubuntu 22.04 AMIs)
sudo systemctl enable amazon-ssm-agent
sudo systemctl start amazon-ssm-agent

# Allow ubuntu user to run docker without sudo
sudo usermod -aG docker ubuntu

AWS EC2, QA instance details
The persistent QA EC2 instance with LabRole attached, running Docker and Nginx

Attach the LabRole to this EC2 instance. In AWS Academy, LabRole is a pre-provisioned IAM role that already has the permissions needed for SSM, ECR, and EC2 operations. You don't create it yourself; the lab environment provides it.

5.5 IAM Credentials for GitHub Actions (AWS Academy)

AWS Academy accounts don't support OIDC federation or long-lived IAM users. Instead, you get temporary session credentials from the AWS Details panel in the Vocareum lab console. These rotate every session.

Store them as GitHub Secrets:

AWS_ACCESS_KEY_ID      → from AWS Details panel
AWS_SECRET_ACCESS_KEY  → from AWS Details panel
AWS_SESSION_TOKEN      → from AWS Details panel (required for temporary credentials)

Then configure credentials in your workflow using the static credential method:

- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-session-token: ${{ secrets.AWS_SESSION_TOKEN }}
    aws-region: us-east-1

Note: AWS Academy session credentials expire when the lab session ends (~4 hours), so you'll need to update these three secrets each time you start a new lab session. This is a limitation of the Academy environment; in a real AWS account you would use OIDC to avoid static credentials entirely.


6. GitHub Actions CI/CD Pipeline

The infra repo contains five workflow files that chain together via workflow_call. Each does one thing well.

6.1 Nightly Orchestrator (nightly.yml)

This is the entry point, triggered on a schedule or manually:

# .github/workflows/nightly.yml
name: Nightly Build and Deploy

on:
  schedule:
    - cron: '0 2 * * *'  # 2 AM UTC every day
  workflow_dispatch:        # Allow manual triggers

jobs:
  setup:
    runs-on: ubuntu-latest
    outputs:
      image_tag: ${{ steps.tag.outputs.image_tag }}
    steps:
      - id: tag
        run: echo "image_tag=$(date +%Y%m%d%H%M%S)" >> $GITHUB_OUTPUT

  build:
    needs: setup
    uses: ./.github/workflows/build.yml
    with:
      image_tag: ${{ needs.setup.outputs.image_tag }}
    secrets: inherit

  smoke-test:
    needs: [setup, build]
    uses: ./.github/workflows/smoke-test.yml
    with:
      image_tag: ${{ needs.setup.outputs.image_tag }}
    secrets: inherit

  promote:
    needs: [setup, smoke-test]  # Only runs if smoke test passes
    uses: ./.github/workflows/promote.yml
    with:
      image_tag: ${{ needs.setup.outputs.image_tag }}
    secrets: inherit

  deploy-qa:
    needs: promote
    uses: ./.github/workflows/deploy-qa.yml
    secrets: inherit

The timestamp tag (e.g., 20250307021530) makes every build uniquely identifiable. If a smoke test fails, the image keeps only its timestamp tag; it is never promoted to :latest and never deployed.
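A useful property of this tag format: because it is fixed-width and zero-padded, lexicographic order matches chronological order, so "the newest build" is simply the maximum tag. A small Node sketch of the same format (illustrative only; the pipeline itself uses the date command):

```javascript
// Reproduce the workflow's `date +%Y%m%d%H%M%S` tag (UTC) in Node.
function imageTag(d = new Date()) {
  const p = (n) => String(n).padStart(2, '0');
  return (
    d.getUTCFullYear() +
    p(d.getUTCMonth() + 1) +
    p(d.getUTCDate()) +
    p(d.getUTCHours()) +
    p(d.getUTCMinutes()) +
    p(d.getUTCSeconds())
  );
}

console.log(imageTag(new Date(Date.UTC(2025, 2, 7, 2, 15, 30)))); // 20250307021530

// Fixed-width zero-padded digits sort lexicographically in time order:
const tags = ['20250306021530', '20250307021530', '20250215093000'];
console.log([...tags].sort().at(-1)); // 20250307021530
```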

6.2 Build and Push to ECR (build.yml)

# .github/workflows/build.yml
name: Build and Push to ECR

on:
  workflow_call:
    inputs:
      image_tag:
        required: true
        type: string

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout source code
        uses: actions/checkout@v4
        with:
          repository: Shreyas-Yadav/job-application-tracker
          token: ${{ secrets.SOURCE_REPO_TOKEN }}

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-session-token: ${{ secrets.AWS_SESSION_TOKEN }}
          aws-region: us-east-1

      - name: Login to ECR
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build and push backend
        run: |
          docker build -t ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com/job-tracker-backend:${{ inputs.image_tag }} ./backend
          docker push ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com/job-tracker-backend:${{ inputs.image_tag }}

      - name: Build and push frontend
        run: |
          docker build -t ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com/job-tracker-frontend:${{ inputs.image_tag }} ./frontend
          docker push ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com/job-tracker-frontend:${{ inputs.image_tag }}

6.3 Smoke Test on Temporary EC2 (smoke-test.yml)

This is the most interesting part. Instead of testing on the QA instance (and risking breaking it), we launch a fresh, temporary EC2 instance for every test run:

# .github/workflows/smoke-test.yml
jobs:
  smoke-test:
    runs-on: ubuntu-latest
    steps:
      - name: Launch temporary EC2
        id: launch
        run: |
          INSTANCE_ID=$(aws ec2 run-instances \
            --image-id ${{ secrets.TEMP_EC2_AMI }} \
            --instance-type t3.micro \
            --subnet-id ${{ secrets.TEMP_EC2_SUBNET_ID }} \
            --security-group-ids ${{ secrets.TEMP_EC2_SG_ID }} \
            --iam-instance-profile Name=LabInstanceProfile \
            --query 'Instances[0].InstanceId' \
            --output text)
          echo "instance_id=$INSTANCE_ID" >> $GITHUB_OUTPUT

      - name: Wait for SSM agent
        run: |
          # Wait up to 4 minutes for the instance to boot and SSM to connect
          for i in {1..24}; do
            STATUS=$(aws ssm describe-instance-information \
              --filters "Key=InstanceIds,Values=${{ steps.launch.outputs.instance_id }}" \
              --query 'InstanceInformationList[0].PingStatus' \
              --output text 2>/dev/null || echo "None")
            [ "$STATUS" = "Online" ] && break
            sleep 10
          done

      - name: Run smoke tests via SSM
        run: |
          COMMAND_ID=$(aws ssm send-command \
            --instance-ids ${{ steps.launch.outputs.instance_id }} \
            --document-name "AWS-RunShellScript" \
            --parameters commands='[
              "aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com",
              "IMAGE_TAG=${{ inputs.image_tag }} docker compose -f /tmp/docker-compose.smoke.yml up -d",
              "bash /tmp/smoke-test.sh"
            ]' \
            --query 'Command.CommandId' --output text)

          # Poll until complete
          aws ssm wait command-executed \
            --command-id $COMMAND_ID \
            --instance-id ${{ steps.launch.outputs.instance_id }}

      - name: Terminate temporary EC2
        if: always()  # Clean up even if tests fail
        run: |
          aws ec2 terminate-instances \
            --instance-ids ${{ steps.launch.outputs.instance_id }}

The smoke-test.sh script runs three checks:

#!/bin/bash
BACKEND=${1:-"http://localhost:5000"}
FRONTEND=${2:-"http://localhost:3000"}

# Wait up to 2 minutes for backend to be ready
for i in {1..12}; do
  curl -sf "$BACKEND/health" > /dev/null && break
  echo "Waiting for backend... ($i/12)"
  sleep 10
done

# Test 1: Backend health check
curl -sf "$BACKEND/health" | grep -q '"status":"ok"' || { echo "FAIL: /health"; exit 1; }
echo "PASS: Backend /health returns ok"

# Test 2: Auth middleware is working (unauthenticated request → 401)
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$BACKEND/api/applications")
[ "$STATUS" = "401" ] || { echo "FAIL: /api/applications should return 401, got $STATUS"; exit 1; }
echo "PASS: /api/applications correctly requires authentication"

# Test 3: Frontend is serving pages
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$FRONTEND")
[ "$STATUS" = "200" ] || { echo "FAIL: Frontend returned $STATUS"; exit 1; }
echo "PASS: Frontend is serving pages"

echo "All smoke tests passed!"

6.4 Promote Image (promote.yml)

Once smoke tests pass, we retag the timestamp image as :latest:

# .github/workflows/promote.yml
jobs:
  promote:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        repo: [job-tracker-backend, job-tracker-frontend]
    steps:
      - name: Retag image as latest
        run: |
          REGISTRY="${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com"
          # Fetch the manifest of the tested image
          MANIFEST=$(aws ecr batch-get-image \
            --repository-name ${{ matrix.repo }} \
            --image-ids imageTag=${{ inputs.image_tag }} \
            --query 'images[0].imageManifest' --output text)

          # Push the same manifest with the :latest tag
          aws ecr put-image \
            --repository-name ${{ matrix.repo }} \
            --image-tag latest \
            --image-manifest "$MANIFEST"

This approach (retagging the manifest) is instant: no layers are re-pulled or re-pushed. The :latest image is byte-for-byte identical to the tested timestamp image.

6.5 Deploy to QA EC2 (deploy-qa.yml)

The final step deploys to the persistent QA server using AWS SSM, so no SSH keys are needed:

# .github/workflows/deploy-qa.yml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Check QA EC2 is running
        id: check
        run: |
          STATE=$(aws ec2 describe-instances \
            --instance-ids ${{ secrets.QA_EC2_INSTANCE_ID }} \
            --query 'Reservations[0].Instances[0].State.Name' \
            --output text)
          echo "state=$STATE" >> $GITHUB_OUTPUT

      - name: Sync config files
        if: steps.check.outputs.state == 'running'
        run: |
          # Base64-encode configs and decode them on the EC2 to avoid quoting issues
          COMPOSE_B64=$(base64 -w0 docker-compose.prod.yml)
          NGINX_B64=$(base64 -w0 nginx/nginx.conf)

          aws ssm send-command \
            --instance-ids ${{ secrets.QA_EC2_INSTANCE_ID }} \
            --document-name "AWS-RunShellScript" \
            --parameters commands="[
              \"echo $COMPOSE_B64 | base64 -d > /home/ubuntu/app/docker-compose.prod.yml\",
              \"echo $NGINX_B64 | base64 -d > /etc/nginx/sites-enabled/default\",
              \"nginx -t && systemctl reload nginx\"
            ]"

      - name: Deploy containers
        if: steps.check.outputs.state == 'running'
        run: |
          aws ssm send-command \
            --instance-ids ${{ secrets.QA_EC2_INSTANCE_ID }} \
            --document-name "AWS-RunShellScript" \
            --parameters commands="[
              \"aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com\",
              \"cd /home/ubuntu/app && docker compose -f docker-compose.prod.yml pull\",
              \"DB_HOST=${{ secrets.DB_HOST }} DB_NAME=${{ secrets.DB_NAME }} DB_USER=${{ secrets.DB_USER }} DB_PASSWORD=${{ secrets.DB_PASSWORD }} NEXTAUTH_SECRET=${{ secrets.NEXTAUTH_SECRET }} docker compose -f docker-compose.prod.yml up -d\",
              \"sleep 10 && curl -sf http://localhost:5000/health\"
            ]"

GitHub Actions, Nightly Build pipeline all green
All 5 jobs in the nightly pipeline completing successfully in 5m 26s

GitHub Actions, deploy-qa job steps
The deploy-qa job expanded showing each SSM step: sync config files, deploy via docker compose, health check


7. Domain Name with Route 53 and SSL with Let's Encrypt

7.1 Get a Domain and Migrate to Route 53

  1. Purchase a domain on Name.com (or any registrar)
  2. Create a Hosted Zone in AWS Route 53 for your domain
  3. Note the 4 NS (nameserver) records Route 53 provides
  4. In Name.com's DNS settings, replace the default nameservers with Route 53's NS records
  5. Wait 24-48 hours for propagation

7.2 Create DNS Records

In Route 53, create an A record pointing to your QA EC2's public IP:

Route 53, shri.software hosted zone
Route 53 hosted zone for shri.software showing the A record, NS, and SOA records

Type: A
Name: shri.software
Value: <EC2 Public IP>
TTL: 300

Note: If you stop/start EC2 instances, the public IP changes. Consider using an Elastic IP for the QA instance to keep the IP stable.

7.3 Install Certbot and Get an SSL Certificate

SSH into your QA EC2 and run:

# Install Certbot with the Nginx plugin
sudo apt-get update
sudo apt-get install -y certbot python3-certbot-nginx

# Obtain a certificate (Certbot automatically configures Nginx)
sudo certbot --nginx -d shri.software

# Verify auto-renewal is configured
sudo systemctl status certbot.timer
sudo certbot renew --dry-run

Certbot will:

  1. Verify domain ownership by placing a file at /.well-known/acme-challenge/
  2. Download the certificate to /etc/letsencrypt/live/shri.software/
  3. Modify your Nginx config to use the certificate

8. Nginx as a Reverse Proxy

Nginx sits in front of both services, routing traffic and terminating SSL:

# /etc/nginx/sites-enabled/default

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name shri.software;
    return 301 https://$host$request_uri;
}

# HTTPS server
server {
    listen 443 ssl;
    server_name shri.software;

    ssl_certificate     /etc/letsencrypt/live/shri.software/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/shri.software/privkey.pem;

    # NextAuth routes must go to the frontend (Next.js handles them)
    location /api/auth/ {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }

    # All other /api/ routes go to Express backend
    location /api/ {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Everything else goes to Next.js frontend
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}

Both /api/auth/ and /api/ are prefix locations, and Nginx selects the longest matching prefix, so requests under /api/auth/ reach the frontend even though /api/ also matches. (For prefix locations the declaration order doesn't actually matter; the longer prefix always wins.)
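Nginx's prefix-location selection can be modeled as "longest matching prefix wins". A small illustrative sketch (the upstream names mirror the config above):

```javascript
// Prefix locations and their upstreams, as in the Nginx config.
const locations = [
  ['/api/', 'backend:5000'],
  ['/api/auth/', 'frontend:3000'],
  ['/', 'frontend:3000'],
];

// Among all prefixes that match the URI, pick the longest one.
function route(uri) {
  const matches = locations.filter(([prefix]) => uri.startsWith(prefix));
  matches.sort((a, b) => b[0].length - a[0].length);
  return matches[0][1];
}

console.log(route('/api/auth/callback/github')); // frontend:3000
console.log(route('/api/applications'));         // backend:5000
console.log(route('/dashboard'));                // frontend:3000
```

This is why the OAuth callback lands on Next.js while every other /api/ request lands on Express.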


9. Security Best Practices

Here's what I did deliberately for security:

Containers

  • Non-root users in every container (adduser -S shri)
  • Alpine base images: smaller surface area, fewer CVEs
  • --production npm install: dev tools never ship to production
  • Multi-stage builds: build tools (compilers, test runners) never end up in production images

Authentication

  • GitHub OAuth: no password storage, no credential database to protect
  • Short-lived JWTs (1-hour expiry) between frontend and backend
  • User-scoped queries: every DB query filters by user_id from the verified JWT, so users cannot access each other's data
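The user-scoping point deserves a sketch. This is a hypothetical handler (the db object stands in for a MySQL pool; table and column names follow the schema shown earlier), but it shows the rule: the user ID comes from the verified JWT, never from client input.

```javascript
// Factory returning an Express-style handler with a user-scoped query.
function makeListApplications(db) {
  return async (req, res) => {
    // req.user was set by the JWT middleware; never trust a user_id
    // arriving in the query string or request body.
    const rows = await db.query(
      'SELECT * FROM job_applications WHERE user_id = ?',
      [req.user.userId]
    );
    res.json(rows);
  };
}

// Demo with an in-memory stand-in for the connection pool:
const fakeDb = {
  rows: [
    { id: 1, company: 'Acme', user_id: 42 },
    { id: 2, company: 'Globex', user_id: 7 },
  ],
  async query(_sql, [userId]) {
    return this.rows.filter((r) => r.user_id === userId);
  },
};

makeListApplications(fakeDb)(
  { user: { userId: 42 } },
  { json: (rows) => console.log(rows.length, rows[0].company) } // 1 Acme
);
```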

Infrastructure

  • No SSH keys: AWS SSM handles all remote execution, and port 22 stays closed
  • LabRole on EC2: AWS Academy's pre-provisioned role grants SSM and ECR access without custom policy authoring
  • RDS in a private subnet: the database is not publicly accessible
  • Secrets in GitHub Secrets: never hardcoded in workflow files or checked into git
  • HTTPS only: Nginx redirects all HTTP traffic to HTTPS at the server level

Secrets Management

GitHub Secrets, all 16 secrets configured
All secrets stored in GitHub, values are never visible after being saved

# GitHub Secrets used in this project:
AWS_ACCOUNT_ID          # AWS account number
AWS_ACCESS_KEY_ID       # From AWS Academy lab details panel
AWS_SECRET_ACCESS_KEY   # From AWS Academy lab details panel
AWS_SESSION_TOKEN       # From AWS Academy lab details panel (rotates each session)
TEMP_EC2_AMI            # AMI ID for smoke test instances
TEMP_EC2_SUBNET_ID      # VPC subnet for temporary instances
TEMP_EC2_SG_ID          # Security group ID
QA_EC2_INSTANCE_ID      # Persistent QA EC2 instance ID
DB_HOST                 # RDS endpoint
DB_NAME / DB_USER / DB_PASSWORD
NEXTAUTH_SECRET         # Shared secret for JWT signing
GITHUB_CLIENT_ID / GITHUB_CLIENT_SECRET  # OAuth app credentials
SOURCE_REPO_TOKEN       # PAT to check out source repo from infra repo

Conclusion

The full pipeline, from a git push to a verified deployment on HTTPS, runs without any manual steps. The key architectural wins:

  • Two repos for clean separation of application vs. infrastructure concerns
  • Timestamp-tagged images with promotion to :latest only after passing tests
  • Ephemeral test infrastructure that is created and destroyed per pipeline run
  • SSM-based deployment with no SSH keys to manage
  • Route 53 + Let's Encrypt for production-grade DNS and SSL at zero cost

The live app is running at https://shri.software. The source code is split across:

  • Source repo: github.com/Shreyas-Yadav/job-application-tracker
  • Infra repo: github.com/Shreyas-Yadav/job-application-tracker-infra
