
Erik for Allscreenshots

Day 4: Setting Up CI/CD

Day 4 of 30. Today we're automating deployment before we have much to deploy.

One of the most important lessons we've learned is that CI/CD is not optional. We want to deploy quickly, reliably, and as often as possible.
Setting this up for allscreenshots.com might seem backwards since we don't have a working product yet. Why spend time on deployment pipelines?

One reason is that we've learned this the hard way: the longer you wait to set up an automated deployment pipeline, the harder it gets.
You'll accumulate more dependencies, more complexity (database provisioning, failover, auth, etc.), and more edge cases. Deploying manually, like testing manually, is not an option for us, so we're setting this up early.

By the end of today, every push to our main branch will automatically build, test, and deploy our app to a real server.

The goal: boring, reliable deploys

We want deployments to be a non-event. The idea is to push code, wait a few moments, and it's live - no manual steps, no manual verification.

Here's what we're setting up:

  1. GitHub Actions runs on every push
  2. Build the React frontend and bundle it into the Spring Boot app
  3. Run tests (even though we barely have any yet)
  4. Build a single Docker image for our combined service
  5. Push image to GitHub Container Registry
  6. Deploy to our VPS via SSH and some scripts

One deployable unit

We're keeping things simple: one Docker image that contains everything. The React frontend gets built and bundled into the Spring Boot JAR as static resources. Spring Boot serves them directly - no separate web server, no nginx, no complexity.

Why this approach?

Simpler deployment. One container to build, push, and run. No orchestrating multiple services.

Simpler networking. No CORS issues, no reverse proxy configuration. The API and frontend share the same origin.

Simpler development. Run one thing locally, get everything. No "frontend is on port 3000, backend is on port 8080" confusion.

Good enough for our scale. If we ever need to scale frontend and backend separately, we can split later. Right now, that's premature optimization.

The GitHub Actions workflow

Here's our .github/workflows/deploy.yml:

name: Build and Deploy

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up JDK 21
        uses: actions/setup-java@v4
        with:
          java-version: '21'
          distribution: 'temurin'

      - name: Set up Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install frontend dependencies
        working-directory: ./frontend
        run: npm ci

      - name: Run frontend tests
        working-directory: ./frontend
        run: npm test -- --passWithNoTests

      - name: Run API tests
        run: ./gradlew test

  build-and-push:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    permissions:
      contents: read
      packages: write

    steps:
      - uses: actions/checkout@v4

      - name: Log in to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest

  deploy:
    needs: build-and-push
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'

    steps:
      - name: Deploy to VPS
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          script: |
            cd /opt/screenshot-api
            docker compose pull
            docker compose up -d
            docker system prune -f

Nothing fancy. It runs tests, builds one Docker image with everything bundled, pushes it to GitHub's free container registry, then SSHs into our server to pull and restart.
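One thing the deploy script doesn't do is verify the app actually came back up. A hedged sketch of a smoke check we could append to the SSH script - it assumes a health endpoint such as Spring Boot Actuator's /actuator/health, which we haven't set up yet:

```shell
#!/bin/sh
# Hypothetical post-deploy smoke check. Assumes the app exposes a health
# endpoint (e.g. Spring Boot Actuator's /actuator/health) -- not set up yet.
# Polls the URL until it responds, or gives up after N attempts.
wait_for_health() {
  url="$1"
  attempts="${2:-10}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    i=$((i + 1))
    sleep 3
  done
  echo "unhealthy"
  return 1
}

# In the deploy script, something like:
# wait_for_health "http://localhost:8080/actuator/health" 20 || exit 1
```

Failing the workflow when the check fails means a broken deploy shows up as a red build instead of a silent outage.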

The Dockerfile

One Dockerfile that builds everything:

# Stage 1: Build the frontend
FROM node:20-alpine AS frontend-build
WORKDIR /frontend
COPY frontend/package*.json ./
RUN npm ci
COPY frontend/ ./
RUN npm run build

# Stage 2: Build the backend (with frontend bundled in)
FROM eclipse-temurin:21-jdk-alpine AS backend-build
WORKDIR /app

# Copy Gradle files first for better caching
COPY gradlew build.gradle.kts settings.gradle.kts ./
COPY gradle gradle
# gradlew can lose its executable bit depending on how it was committed
RUN chmod +x gradlew && ./gradlew dependencies --no-daemon

# Copy frontend build output to Spring Boot static resources
COPY --from=frontend-build /frontend/dist src/main/resources/static

# Copy source and build
COPY src src
RUN ./gradlew bootJar --no-daemon

# Stage 3: Runtime image
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app

# Playwright dependencies will be added later
RUN apk add --no-cache chromium

COPY --from=backend-build /app/build/libs/*.jar app.jar

EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]

Three stages:

  1. Frontend build - npm install, npm build, outputs to dist/
  2. Backend build - Copies frontend output into src/main/resources/static, then builds the Spring Boot JAR
  3. Runtime - Minimal JRE image with just the JAR

The final image is around 250MB. If needed, we can optimise this later with jdeps and jlink to build a trimmed custom Java runtime.

Spring Boot serving static files

Spring Boot automatically serves anything in src/main/resources/static at the root path - no configuration needed.

The only extra work is handling client-side routing. When someone navigates to /dashboard directly, Spring Boot needs to serve index.html instead of returning a 404:

@Controller
class SpaController {
    @GetMapping("/{path:[^\\.]*}")
    fun redirect(): String {
        return "forward:/index.html"
    }
}

This forwards any single-segment path without a dot (i.e., not a file request) to index.html, letting React Router handle it.
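As a toy illustration (plain shell, not Spring) of which requests that pattern catches - note that it only matches single-segment paths, so nested routes would need an extra mapping:

```shell
#!/bin/sh
# Toy model of the routing rule above (illustrative only).
# Multi-segment paths fall through, dotted paths are file requests,
# everything else gets forwarded to index.html.
route() {
  path="$1"          # path without the leading slash
  case "$path" in
    */*) echo "no match (single-segment pattern only)";;
    *.*) echo "static file";;
    *)   echo "forward:/index.html";;
  esac
}

route "dashboard"            # -> forward:/index.html
route "app.js"               # -> static file
route "dashboard/settings"   # -> no match (single-segment pattern only)
```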

Docker Compose on the server

On our VPS, we have a simple docker-compose.yml:

version: '3.8'

services:
  app:
    image: ghcr.io/yourusername/screenshot-api:latest
    restart: unless-stopped
    ports:
      - "80:8080"
    environment:
      - DATABASE_URL=postgresql://postgres:${DB_PASSWORD}@db:5432/screenshots
      - STORAGE_ENDPOINT=${STORAGE_ENDPOINT}
      - STORAGE_KEY=${STORAGE_KEY}
      - STORAGE_SECRET=${STORAGE_SECRET}
    depends_on:
      - db

  db:
    image: postgres:18-alpine
    restart: unless-stopped
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=screenshots
      - POSTGRES_PASSWORD=${DB_PASSWORD}

volumes:
  postgres_data:

We have two containers: our application and our Postgres database. The environment variables live in a .env file on the server that we created manually once. Secrets stay out of git.
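For reference, that .env file would look something like this - the keys are taken from the compose file above, and the values here are placeholders, not our real secrets:

```shell
# /opt/screenshot-api/.env -- created once by hand, never committed
DB_PASSWORD=change-me
STORAGE_ENDPOINT=https://storage.example.com
STORAGE_KEY=change-me
STORAGE_SECRET=change-me
```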

The VPS setup (one-time)

We're using a Hetzner CX22 - 2 vCPUs, 4GB RAM, 40GB disk, at around $4.50/month, which is more than enough for now.

Our initial server setup took about 30 minutes:

# Update system
apt update && apt upgrade -y

# Install Docker
curl -fsSL https://get.docker.com | sh

# Create app directory
mkdir -p /opt/screenshot-api
cd /opt/screenshot-api

# Create .env file with secrets
nano .env

# Create docker-compose.yml
nano docker-compose.yml

# Login to GitHub Container Registry
echo $GITHUB_TOKEN | docker login ghcr.io -u USERNAME --password-stdin

The server just needs Docker and our compose file; everything else comes from the container image.
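Since everything hinges on those two hand-made files, a small hypothetical pre-flight check can catch a half-finished setup before the first deploy:

```shell
#!/bin/sh
# Hypothetical pre-flight check for the one-time server setup.
# Verifies that the files the deploy step depends on actually exist.
check_setup() {
  dir="$1"
  missing=0
  for f in docker-compose.yml .env; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $f"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "setup complete"
  fi
  return "$missing"
}

# check_setup /opt/screenshot-api
```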

Why this approach?

GitHub Actions is free for public repos and has generous limits for private ones. This saves us from running something like Jenkins or paying for CircleCI.

GitHub Container Registry is free for public images and cheap for private ones, which lets us skip an extra dependency like Docker Hub.

Docker Compose is simple. We could use Kubernetes, but for a single VPS with two containers, that's massive overkill. Compose does exactly what we need for now.

Single image is simpler. One build, one push, one pull, one restart. No coordinating multiple containers, no version mismatches between frontend and backend. We're staying away from microservices and distributed systems as long as we can to reduce complexity.

SSH deployment is good enough. Fancy zero-downtime deployments can come later. For now, a few seconds of downtime during deploys is fine, and it's easy to improve in the near future.

What we didn't do

No Kubernetes. Kubernetes is complex to operate, and we have no need for it yet.

No Terraform. Our infrastructure is one VPS. We can recreate it manually in under 30 minutes if needed.

No separate staging environment. We'll test locally and in CI. A staging environment is definitely a good idea, but it can come later when the need arises.

No blue-green deployments. Overkill for now. A Docker Compose restart takes about 10 seconds, which is fast enough.

No separate frontend/backend containers. One deployable unit keeps things simple. We can split this later if needed.

Testing the pipeline

We pushed a dummy commit to verify everything works:

  1. GitHub Actions triggered ✓
  2. Tests passed ✓
  3. Docker image built (frontend + backend combined) ✓
  4. Image pushed to registry ✓
  5. SSH deployment succeeded ✓
  6. App running on VPS ✓

It took about 4 minutes end-to-end. This is fast enough for our needs, and we can optimize this later if it bothers us enough.

What we accomplished today

  • Wrote GitHub Actions workflow for CI/CD
  • Created single Dockerfile that bundles frontend into backend
  • Set up Docker Compose on VPS
  • Configured secrets in GitHub
  • Verified the full pipeline works

From now on, shipping is just git push. That's a good feeling.

Tomorrow: choosing our hosting

Day 5 we'll dig deeper into the hosting decision. Why Hetzner? What are the alternatives? How do we think about cost vs. convenience at this stage?

Book of the day

The Phoenix Project by Gene Kim, Kevin Behr & George Spafford

This novel (yes, a novel about IT operations) transformed how we think about deployment and DevOps. It follows an IT manager at a struggling company and shows how applying manufacturing principles to software delivery can fix broken organizations.

The core insight we got was: work in progress is the enemy. Long-lived branches, manual deployments, and "we'll automate it later" attitudes create bottlenecks that slow everything down.

Setting up CI/CD on day 4 of a project might seem premature, but our experience and the lessons from this book convinced us that the pain of manual deployment compounds over time, while the investment in automation pays dividends immediately.

We can really recommend this book. It's a quick, engaging read, even if you're not steeped in the DevOps world.


Current stats:

  • Hours spent: 8 (1 + 2 + 2 + 3 today)
  • Lines of code: ~200 (yml files count, right?)
  • Revenue: $0
  • Paying customers: 0
  • CI/CD pipeline: ✓
  • Auto-deployment: ✓
  • VPS running: ✓
