What I Deployed
The EpicBook is an online bookstore with a Node.js + Express backend, MySQL database, Handlebars frontend, and Nginx reverse proxy. It has real data: 54 books and 53 authors.
The architecture:
```
Internet (port 80)
        |
        v
Nginx (reverse proxy)
        |  frontend_network
        v
Node.js App (port 8080, internal)
        |  backend_network
        v
MySQL (port 3306, internal only)
```
Only Nginx faces the internet. MySQL is never reachable from outside the VM.
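That isolation claim is easy to check from outside the VM: port 80 should answer, 3306 should not. A minimal TCP probe sketch (host and ports here are whatever your deployment uses — they are assumptions, not part of the project):

```python
import socket

def is_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Against the deployed VM you would expect something like:
#   is_port_open(vm_ip, 80)   -> True   (Nginx)
#   is_port_open(vm_ip, 3306) -> False  (MySQL not published)
```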
The .env File (Secrets First)
Before writing a single Dockerfile, I created a .env file:
```
MYSQL_ROOT_PASSWORD=<strong-password>
MYSQL_DATABASE=bookstore
MYSQL_USER=epicbook
MYSQL_PASSWORD=<strong-password>
NODE_ENV=production
PORT=8080
DB_HOST=db
DB_USER=epicbook
DB_PASSWORD=<strong-password>
DB_NAME=bookstore
```
The original config.json had hardcoded credentials pointing to a Heroku database URL. I updated it to use direct credentials pointing to the db Docker Compose service name. Secrets stay in .env, not in code.
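The key convention a .env file relies on is "real environment variables win over file values." As an illustration of that precedence rule (a Python sketch — the Node app itself would use something like dotenv, which is an assumption here), parsing and applying a .env file looks roughly like this:

```python
import os

def parse_dotenv(text):
    """Parse KEY=VALUE lines; skip blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env

def load_dotenv(text, environ=os.environ):
    """Apply parsed values WITHOUT overriding variables already set."""
    for key, value in parse_dotenv(text).items():
        environ.setdefault(key, value)
```

This is why you can override, say, `PORT` at deploy time without touching the file.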
The Multi-Stage Dockerfile
```dockerfile
# Stage 1 - install production dependencies
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
# npm 8+ prefers --omit=dev; --only=production still works but is deprecated
RUN npm ci --only=production

# Stage 2 - production runtime
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```
The builder stage installs only production dependencies. The runtime stage gets clean node_modules and source code. No dev tools in the final image.
The Nginx Config
```nginx
upstream epicbook_app {
    server app:8080;
}

server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://epicbook_app;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
app is the Docker Compose service name. Docker's internal DNS resolves it automatically. No IP addresses needed.
The docker-compose.yml
Three services, two networks, two volumes:
```yaml
version: '3.8'

networks:
  frontend_network:
    driver: bridge
  backend_network:
    driver: bridge

volumes:
  db_data:
  nginx_logs:

services:
  db:
    image: mysql:8.0
    env_file: .env              # MYSQL_* credentials come from .env
    volumes:
      - db_data:/var/lib/mysql  # named volume: data survives container recreation
    networks:
      - backend_network
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      retries: 5
      start_period: 30s

  app:
    build: .
    env_file: .env
    # the app defines its own healthcheck (omitted here) so nginx can wait on it
    depends_on:
      db:
        condition: service_healthy
    networks:
      - frontend_network
      - backend_network

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      app:
        condition: service_healthy
    networks:
      - frontend_network
```
The depends_on blocks with condition: service_healthy are the key: a service is not started until its dependency's healthcheck passes, so nothing starts before the thing it depends on is actually ready.
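The semantics of interval, retries, and start_period can be sketched as a wait loop. This is a pure-Python illustration of the behavior, not Docker's actual implementation: probe failures during the start period don't count against the retry budget, and afterwards the container is marked unhealthy after `retries` consecutive failures.

```python
import time

def wait_until_healthy(probe, interval=10, retries=5, start_period=30,
                       clock=time.monotonic, sleep=time.sleep):
    """Rough model of a Docker healthcheck loop.

    probe() is the healthcheck command; returns True once healthy,
    False once the failure budget is exhausted after start_period.
    """
    start = clock()
    failures = 0
    while True:
        if probe():
            return True
        if clock() - start >= start_period:   # grace period is over
            failures += 1
            if failures >= retries:
                return False
        sleep(interval)
```

The clock/sleep parameters are injected only so the loop can be tested without real waiting.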
What Went Wrong (And How I Fixed It)
Problem 1: Books not loading after deployment
The site came up but showed "no books available." The seed SQL files appeared to run without errors, yet no rows were inserted.
The issue: the shell resolves stdin redirection on the VM host, but docker exec runs the command inside the container, where the seed file did not exist.
Fix: Use docker cp to copy the files into the container first, then execute them with source:

```shell
docker cp db/author_seed.sql epicbook_db:/tmp/author_seed.sql
docker exec epicbook_db mysql -u epicbook -p<password> bookstore \
  -e "source /tmp/author_seed.sql"
```
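Since there is one seed file per table, the copy-then-source pattern is worth wrapping in a small helper. A sketch (the function name and structure are mine; the docker and mysql invocations mirror the commands above, and `run` is injectable so the command construction can be tested without Docker):

```python
import subprocess

def seed_sql(container, sql_file, user, password, database, run=subprocess.run):
    """Copy a SQL file into a container, then source it with mysql."""
    dest = "/tmp/" + sql_file.split("/")[-1]
    cmds = [
        ["docker", "cp", sql_file, f"{container}:{dest}"],
        ["docker", "exec", container, "mysql",
         f"-u{user}", f"-p{password}", database, "-e", f"source {dest}"],
    ]
    for cmd in cmds:
        run(cmd, check=True)   # raise if either step fails
    return cmds
```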
Problem 2: Nginx healthcheck showing unhealthy
The healthcheck used wget which is not in nginx:alpine by default.
Fix: Removed the Nginx healthcheck entirely. The site was serving traffic correctly and the app healthcheck was already ensuring startup order.
Problem 3: docker-compose.yml corrupted by heredoc paste
Special characters in the YAML caused the heredoc to break mid-paste.
Fix: Used a Python script to write the file content instead of shell heredoc:
```shell
python3 -c "
content = '''..yaml content..'''
with open('docker-compose.yml', 'w') as f:
    f.write(content)
"
```
The Persistence Test
Before restart:

```sql
SELECT COUNT(*) FROM Book;  -- 54
```

Run:

```shell
docker-compose down && docker-compose up -d
```

After restart:

```sql
SELECT COUNT(*) FROM Book;  -- 54
```
Every container was destroyed and recreated. The data survived because it lives in the db_data named volume, not in the container: docker-compose down removes containers and networks but leaves named volumes intact (only down -v would delete them).
GitHub
https://github.com/vivianokose/the-epicbook_capstone.git