DEV Community

Alex G
Stop Configuring Nginx: The Easiest Way to Deploy Go & React with HTTPS

The "It Works on My Machine" Trap

We have all been there. You spend weeks building a robust application. Your Go backend is blazing fast, your React frontend is snappy, and everything runs perfectly on localhost:8080.

But then comes the deployment phase.

Suddenly, you are dealing with VPS configuration, SSL certificates, Nginx config files that look like hieroglyphics, and the dreaded CORS errors.

I recently built Geo Engine, a geospatial backend service using Go and PostGIS. I wanted to deploy it to a DigitalOcean Droplet with a custom domain and HTTPS, but I didn't want to spend hours configuring Certbot or managing complex Nginx directives.

Here is how I solved it using Docker Compose and Caddy (the web server that saves your sanity).

The Architecture šŸ—ļø

My goal was to have a professional production environment:

  1. Frontend: A React Dashboard (Vite) on app.geoengine.dev.
  2. Backend: A Go API (Chi Router + PostGIS) on api.geoengine.dev.
  3. Security: Automatic HTTPS for both subdomains.
  4. Infrastructure: Everything containerized with Docker.

Instead of exposing ports 8080 and 5173 to the wild, I used Caddy as the entry point. Caddy acts as a reverse proxy, handling SSL certificate generation and renewal automatically.

The "Magic" Caddyfile ✨

If you have ever struggled with an nginx.conf file, you are going to love this. This is literally all the configuration I needed to get HTTPS working for two subdomains:

# The Dashboard (Frontend)
app.geoengine.dev {
    reverse_proxy dashboard:80
}

# The API (Backend)
api.geoengine.dev {
    reverse_proxy api:8080
}




That’s it. Caddy detects the domain, talks to Let's Encrypt, gets the certificates, and routes the traffic. No cron jobs, no manual renewals.
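One bonus worth knowing: the same approach works before you ever point DNS at a server. If a site block targets `localhost`, Caddy skips Let's Encrypt and signs a locally trusted certificate itself. A minimal sketch, assuming your API is listening on port 8080:

```
# Local development: Caddy issues its own locally-trusted cert,
# no Let's Encrypt involved
localhost {
    reverse_proxy localhost:8080
}
```

So you can test the full HTTPS flow on your laptop with the exact same mental model you'll use in production.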

The Docker Setup 🐳

Here is the secret sauce in my docker-compose.yml. Notice how the services don't expose ports to the host machine (except Caddy); they only talk inside the geo-net network.

services:
  # Caddy: The only service exposed to the world
  caddy:
    image: caddy:2-alpine
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data # persists certificates across restarts
    networks:
      - geo-net
    depends_on:
      - dashboard
      - api

  # Frontend Dashboard (what the Caddyfile reaches as dashboard:80)
  dashboard:
    build: ./dashboard
    expose:
      - "80" # Only visible to Caddy, not the internet
    networks:
      - geo-net

  # Backend API
  api:
    build: ./backend
    expose:
      - "8080" # Only visible to Caddy, not the internet
    environment:
      - ALLOWED_ORIGINS=https://app.geoengine.dev
    networks:
      - geo-net

  # Database
  db:
    image: postgres:15-alpine
    # ... config ...
    networks:
      - geo-net

networks:
  geo-net:
    driver: bridge

volumes:
  caddy_data: # named volume so renewed certs survive rebuilds


The Challenges (Where I Got Stuck) 🚧

It wasn't all smooth sailing. Here are two "gotchas" that cost me a few hours of debugging, so you don't have to suffer:

1. The "Orphan" Migration Container

I use a separate container to run database migrations (golang-migrate). It kept crashing with a connection error.

The Fix: I realized that even utility containers need to be on the same Docker network! I had forgotten to add networks: - geo-net to my migration service, so it couldn't "see" the database.
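For reference, here is roughly what the fixed service looks like. This is a sketch: the image tag, migrations path, and database URL are placeholders standing in for my actual values.

```yaml
  migrate:
    image: migrate/migrate   # golang-migrate's official image
    command:
      [
        "-path", "/migrations",
        "-database", "postgres://user:pass@db:5432/geo?sslmode=disable",
        "up",
      ]
    volumes:
      - ./backend/migrations:/migrations
    networks:
      - geo-net # the line I forgot: without it, the hostname "db" never resolves
    depends_on:
      - db
```

The hostname `db` only exists inside the `geo-net` network, which is why the container "couldn't see" the database from outside it.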

2. The CORS Villain šŸ’€

On localhost, allowing * (wildcard) for CORS usually works. But once I moved to production with HTTPS, my frontend requests started failing.


Browsers are strict about credentials (cookies/headers) in secure environments. I had to stop being lazy and specify the exact origin in my Go code using the rs/cors library.


In Go:

// Don't do this in production:
// AllowedOrigins: []string{"*"} āŒ

// Do this instead:
AllowedOrigins: []string{"https://app.geoengine.dev"} āœ…


By matching the exact origin of my frontend, both the browser and its security checks were finally happy.

The Result

After pushing the changes, I ran docker compose up -d. In about 30 seconds, Caddy had secured my site.

You can check out the live demo here: https://app.geoengine.dev
Or explore the code on GitHub: Link GitHub

If you are deploying a side project, give Caddy a try. It feels like cheating, but in the best way possible.

Happy coding!

Top comments (2)

PEACEBINFLOW

This post is essentially a public service announcement for anyone still losing sleep over nginx.conf syntax.

It’s hilarious how we’ve collectively accepted that configuring a web server should feel like solving a 1990s point-and-click adventure game. Moving to Caddy feels like the first time you use an automatic transmission after years of grinding gears. That 3-line Caddyfile isn’t just "neat"—it’s a massive reduction in your deployment's surface area for bugs.

The "Orphan Container" issue you mentioned is such a classic Docker trap. I’ve definitely been there—staring at a "Connection Refused" error for an hour, only to realize my migration runner was essentially screaming into a void because it wasn't on the geo-net party line.

And huge props for mentioning the CORS fix properly. A lot of people just throw Access-Control-Allow-Origin: * at the wall to make the red text in the console go away, but doing it right with specific origins in Go is the difference between a "side project" and "production-ready" infrastructure.

Honestly, the combination of Go + Caddy is becoming the new "Gold Standard" for lean teams. You get the raw speed of Go and the zero-maintenance security of Caddy without the "Nginx Tax."

Alex G

Wow, thank you so much for this thoughtful feedback!

I absolutely love your analogy about the automatic transmission—that's exactly how it felt the first time Caddy auto-renewed a certificate for me without touching a single config file. The 'Nginx Tax' is real!

And I'm really glad you appreciated the CORS section. It was definitely tempting to just use * when I was stuck debugging at 2 AM, but I knew I had to do it the 'production-ready' way, or I'd regret it later.

Thanks for reading and for the great validation on the stack!