📋 Table of Contents
What the Project Does
Architecture Overview
Dockerfile Explanation
Docker Compose Breakdown
Volumes and Networking
Challenges Faced
Lessons Learned
🎯 What the Project Does
The Parent Registration Portal is a simple, practical web application that lets parents register themselves and their children through an online form. Beyond that functionality, the project is a hands-on demonstration of Docker containerization in action.
Core Functionality:
- User Interface: a clean, blue-themed form where parents enter their name, address, phone number, and child's name
- Data Processing: on submit, the form sends the data to a backend API
- Data Storage: the information is stored persistently in a PostgreSQL database
- Confirmation: users receive immediate feedback on successful registration
Why This Project?
This application was built to demonstrate how modern applications can be containerized with Docker, making them:
- Portable: run anywhere Docker is installed
- Scalable: easy to add more instances
- Consistent: same behavior in development and production
- Isolated: services run independently
🏗️ Architecture Overview
The application follows a three-tier architecture pattern, with each tier running in its own Docker container:
```text
┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
│                 │      │                 │      │                 │
│    Frontend     │─────▶│     Backend     │─────▶│    Database     │
│     (Nginx)     │      │  (Node.js/API)  │      │  (PostgreSQL)   │
│                 │      │                 │      │                 │
└─────────────────┘      └─────────────────┘      └─────────────────┘
         │                        │                        │
         └───────────────┬────────┴────────────┬───────────┘
                         │                     │
                    ┌────┴────┐          ┌─────┴────┐
                    │ Custom  │          │Persistent│
                    │ Network │          │  Volume  │
                    │mentees- │          │postgres_ │
                    │   net   │          │   data   │
                    └─────────┘          └──────────┘
```
Service Breakdown:

| Service  | Technology          | Purpose                                 | Port |
|----------|---------------------|-----------------------------------------|------|
| Frontend | Nginx + HTML/CSS/JS | Serves the user interface               | 8080 |
| Backend  | Node.js + Express   | Handles API requests and business logic | 3100 |
| Database | PostgreSQL          | Stores registration data persistently   | 5432 |
Communication Flow:
1. User accesses http://localhost:8080 in their browser
2. The frontend serves the HTML/CSS/JS files
3. User fills in the form and clicks submit
4. JavaScript sends a POST request to the backend API (http://localhost:3100/register)
5. The backend validates the data and inserts it into PostgreSQL
6. The response returns to the frontend with a success/error message
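A minimal sketch of the submit step, assuming the field names used in this article (the actual frontend script isn't shown here, so treat the names as illustrative):

```javascript
// Hypothetical submit handler: field names and endpoint follow the article,
// but the real frontend code may differ.
function buildPayload(form) {
  // Collect the four registration fields into a plain object.
  return {
    parent_name: form.parent_name,
    address: form.address,
    phone: form.phone,
    child_name: form.child_name,
  };
}

async function submitRegistration(form) {
  // POST the JSON payload to the backend API.
  const res = await fetch('http://localhost:3100/register', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildPayload(form)),
  });
  return res.json(); // success/error message from the backend
}
```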
🐳 Dockerfile Explanation
Backend Dockerfile
```dockerfile
# 1. Base image selection
FROM node:16-alpine

# 2. Working directory
WORKDIR /app

# 3. Dependency installation
COPY package.json .
RUN npm install

# 4. Copy application code
COPY . .

# 5. Expose port
EXPOSE 3100

# 6. Start command
CMD ["node", "server.js"]
```
Line-by-Line Explanation:

1. Base Image Selection: `FROM node:16-alpine`
   - Why Alpine? Alpine Linux is extremely small (~5 MB) compared to full OS images
   - Why Node 16? A stable LTS version with long-term support
   - Benefit: a smaller image means faster downloads, less disk space, and a smaller attack surface

2. Working Directory: `WORKDIR /app`
   - Creates and sets /app as the working directory
   - All subsequent commands run from this location
   - Best practice: always set a working directory to avoid confusion

3. Dependency Installation:

   ```dockerfile
   COPY package.json .
   RUN npm install
   ```

   - Copies only package.json first (not the entire codebase), then installs dependencies
   - Optimization: Docker caches this layer; if package.json doesn't change, this step reuses the cache

4. Copy Application Code: `COPY . .`
   - Copies the rest of the application files
   - Happens after dependency installation to leverage Docker's layer caching

5. Expose Port: `EXPOSE 3100`
   - Documents that the container listens on port 3100
   - Note: this is documentation only; the actual port mapping happens in docker-compose.yml

6. Start Command: `CMD ["node", "server.js"]`
   - Defines the command to run when the container starts
   - Starts the Node.js server
Frontend Dockerfile

```dockerfile
FROM nginx:alpine
WORKDIR /usr/share/nginx/html
COPY . .
EXPOSE 80
```

- Uses the official Nginx image
- Copies the static files into Nginx's serving directory
- Nginx starts automatically (the base image already provides the CMD, so none is needed here)
🔧 Docker Compose Breakdown
Complete docker-compose.yml

```yaml
version: '3.8'

# Custom network definition
networks:
  mentees-net:
    driver: bridge

# Persistent volume definition
volumes:
  postgres_data:

services:
  # Database service
  db:
    image: postgres:13-alpine
    container_name: parent_db
    restart: unless-stopped
    networks:
      - mentees-net
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    ports:
      - "5432:5432"

  # Backend API service
  backend:
    build: ./backend
    container_name: parent_backend
    restart: unless-stopped
    networks:
      - mentees-net
    depends_on:
      - db
    environment:
      DB_HOST: db
      DB_USER: ${DB_USER}
      DB_PASSWORD: ${DB_PASSWORD}
      DB_NAME: ${DB_NAME}
    ports:
      - "3100:3100"

  # Frontend service
  frontend:
    build: ./frontend
    container_name: parent_frontend
    restart: unless-stopped
    networks:
      - mentees-net
    depends_on:
      - backend
    ports:
      - "8080:80"
```
Key Components Explained:

Version: '3.8'
- Specifies the Docker Compose file format version
- Version 3.8 supports all the features needed here

Services Section

| Service  | Key Configuration         | Purpose                             |
|----------|---------------------------|-------------------------------------|
| db       | image: postgres:13-alpine | Uses the pre-built PostgreSQL image |
| backend  | build: ./backend          | Builds from a local Dockerfile      |
| frontend | build: ./frontend         | Builds from a local Dockerfile      |

Restart Policy: restart: unless-stopped
- Automatically restarts containers if they crash
- Won't restart a container that was manually stopped
- Helps keep the stack available
Depends On

```yaml
backend:
  depends_on:
    - db
frontend:
  depends_on:
    - backend
```

- Controls start order: the database container starts before the backend, and the backend before the frontend
- Note: in this short form, depends_on only waits for the container to start, not for the service inside it to be ready (which is why the retry logic under Challenges is needed)
Environment Variables

```yaml
environment:
  DB_HOST: db
  DB_USER: ${DB_USER}
  DB_PASSWORD: ${DB_PASSWORD}
```

- Values come from a .env file via Compose variable substitution
- No hardcoded secrets
- The service name db works as a hostname (thanks to Docker's built-in DNS)
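Inside the backend, those variables presumably feed the database client's configuration. A sketch assuming the node-postgres (pg) option names; the article doesn't show the actual connection code:

```javascript
// Hypothetical connection setup: option names match node-postgres (pg),
// but the backend's real code isn't shown in the article.
function dbConfigFromEnv(env) {
  return {
    host: env.DB_HOST || 'db', // the Compose service name, resolved by Docker DNS
    user: env.DB_USER,
    password: env.DB_PASSWORD,
    database: env.DB_NAME,
    port: 5432,
  };
}

// Usage with pg (not run here):
// const { Pool } = require('pg');
// const pool = new Pool(dbConfigFromEnv(process.env));
```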
🌐 Volumes and Networking
Volumes: Persistent Data Storage

```yaml
volumes:
  postgres_data:

services:
  db:
    volumes:
      - postgres_data:/var/lib/postgresql/data
```
What are Volumes?
Volumes are Docker's mechanism for persisting data generated by containers.
Why Volumes Matter:
- Persistence: data survives container restarts and removals
- Performance: volumes are often faster than bind mounts, especially on Docker Desktop
- Backup/Restore: easy to back up and migrate
- Sharing: can be shared between containers
How It Works:
1. The volume postgres_data is defined
2. It is mounted at /var/lib/postgresql/data (PostgreSQL's data directory)
3. Even if the container is removed, the data remains in the volume
4. A new container can mount the same volume and access the data
Networking: Service Communication

```yaml
networks:
  mentees-net:
    driver: bridge

services:
  db:
    networks:
      - mentees-net
  backend:
    networks:
      - mentees-net
  frontend:
    networks:
      - mentees-net
```
Custom Network Benefits:
- Service Discovery: containers can reach each other by service name
  - The backend connects to db (not localhost or an IP address)
  - No need to know IP addresses
- Isolation: services aren't reachable from the host unless a port is mapped
  - Note: this compose file does publish "5432:5432", so the database is also reachable from the host; removing that mapping would keep port 5432 internal to mentees-net
- DNS Resolution: Docker provides built-in DNS on user-defined networks
  - Service names resolve to container IP addresses automatically
Network Communication Flow:

```text
Frontend (port 8080) ── Backend (port 3100) ── Database (port 5432)
        │                       │                       │
        └───────────────── mentees-net ─────────────────┘
 (host access via the mapped ports: 8080, 3100, 5432)
```
Verify Network:

```bash
# List networks
docker network ls

# Inspect the custom network
docker network inspect mentees-net
```
🚧 Challenges Faced
Challenge 1: Database Connection Timing
Problem: Backend would start before database was ready, causing connection errors.
```text
Error: connect ECONNREFUSED 172.18.0.2:5432
```
Solution: Implemented retry logic in the backend

```javascript
const initDB = async () => {
  let retries = 5;
  while (retries) {
    try {
      await pool.query('SELECT 1');
      console.log('✅ Database connected');
      return;
    } catch (err) {
      retries -= 1;
      console.log(`⏳ Waiting for DB... (${retries} retries left)`);
      await new Promise((resolve) => setTimeout(resolve, 3000));
    }
  }
  throw new Error('Could not connect to the database');
};
```
Challenge 2: Table Creation
Problem: "relation 'registrations' does not exist" errors

```text
ERROR: relation "registrations" does not exist
```

Solution: Automatic table creation on startup with CREATE TABLE IF NOT EXISTS

```javascript
await pool.query(`
  CREATE TABLE IF NOT EXISTS registrations (
    id SERIAL PRIMARY KEY,
    parent_name VARCHAR(255) NOT NULL,
    address TEXT NOT NULL,
    phone VARCHAR(50) NOT NULL,
    child_name VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
  )
`);
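With the table in place, the registration insert itself is presumably a parameterized query. A sketch (the article doesn't show this code, and the helper name is made up):

```javascript
// Hypothetical query builder for the registrations table described above.
// $1..$4 placeholders (node-postgres style) keep user input out of the SQL text.
function buildRegistrationInsert(reg) {
  return {
    text:
      'INSERT INTO registrations (parent_name, address, phone, child_name) ' +
      'VALUES ($1, $2, $3, $4) RETURNING id',
    values: [reg.parentName, reg.address, reg.phone, reg.childName],
  };
}

// Usage with pg (not run here):
// const { text, values } = buildRegistrationInsert(req.body);
// const result = await pool.query(text, values);
```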
Challenge 3: CORS Issues
Problem: The browser blocked requests from the frontend to the backend

```text
Access to fetch at 'http://localhost:3100/register' from origin 'http://localhost:8080'
has been blocked by CORS policy
```

Solution: Enabled CORS in the backend

```javascript
const cors = require('cors');
app.use(cors());
```
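cors() with no options allows any origin; for production you would likely restrict it, roughly cors({ origin: 'http://localhost:8080' }) with the cors package. A sketch of what that origin check amounts to (helper name is made up):

```javascript
// Hypothetical allow-list check, mirroring what a restricted CORS config does.
const ALLOWED_ORIGINS = new Set(['http://localhost:8080']);

function corsHeadersFor(origin) {
  // Echo the origin back only if it is on the allow-list; otherwise send nothing.
  return ALLOWED_ORIGINS.has(origin)
    ? { 'Access-Control-Allow-Origin': origin }
    : {};
}
```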
Challenge 4: Port Conflicts
Problem: Port 3000 was already in use on the host machine
Solution: Changed to port 3100 (within the required 3100–3111 range)

```yaml
ports:
  - "3100:3100"
```
Challenge 5: Environment Variables Not Loading
Problem: The backend couldn't find the database credentials
Solution: Used a .env file and Docker Compose environment variable substitution

```yaml
environment:
  DB_USER: ${DB_USER}
  DB_PASSWORD: ${DB_PASSWORD}
```
💡 Lessons Learned
- Docker Caching is Powerful
  - Order matters in a Dockerfile
  - Copy package.json before the source code to leverage the cache
  - Reduced build time from 30 seconds to 5 seconds

- Always Use a .dockerignore

  ```text
  node_modules
  .env
  .git
  *.log
  ```

  - Keeps images small (reduced from 500 MB to 150 MB)
  - Prevents secrets from being baked into images
- Service Discovery is Magical
  - Use service names (db) instead of IP addresses
  - Docker DNS resolves them automatically
  - Makes configuration portable

- Health Checks Improve Reliability

  ```yaml
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U postgres"]
    interval: 10s
    timeout: 5s
    retries: 5
  ```

  - Lets Docker report when the database is actually ready, not merely started
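On its own, a healthcheck only changes the container's reported status; to make the backend actually wait for a healthy database, Compose supports the long-form depends_on (a sketch, not taken from this project's compose file):

```yaml
backend:
  depends_on:
    db:
      condition: service_healthy
```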
- Environment Variables for Security
  - Never hardcode credentials
  - Use a .env file (and add it to .gitignore)
  - Keep secrets out of version control

- Volume Management is Critical

  ```bash
  # Back up the volume (note: Compose prefixes the volume name with the
  # project name, e.g. parent-registration-portal_postgres_data)
  docker run --rm -v postgres_data:/source -v $(pwd):/backup alpine tar czf /backup/postgres_backup.tar.gz -C /source .

  # Restore the volume
  docker run --rm -v postgres_data:/target -v $(pwd):/backup alpine tar xzf /backup/postgres_backup.tar.gz -C /target
  ```

  - Volumes survive container removal
  - Easy to back up and restore
- Logging is Your Friend

  ```bash
  # View all logs
  docker-compose logs -f

  # View a specific service
  docker logs parent_backend
  docker logs parent_db
  ```

  - Invaluable for debugging
  - Shows exactly what's happening
- Network Inspection Helps

  ```bash
  docker network inspect mentees-net
  ```

  - Shows all connected containers
  - Displays IP addresses and configuration
🎉 Conclusion
Building this Parent Registration Portal taught me the fundamentals of Docker containerization:
- Isolation: each service runs in its own container
- Orchestration: Docker Compose manages the multi-container application
- Persistence: volumes keep data safe
- Communication: a custom network enables service discovery
- Security: environment variables keep secrets out of the codebase
The application is now:
- ✅ Portable: runs anywhere with Docker
- ✅ Scalable: easy to add more instances
- ✅ Maintainable: clear separation of concerns
- ✅ Production-ready: persistent data, restart policies, logging
🔗 Links
GitHub Repository
Docker Documentation
Node.js Official Image
PostgreSQL Official Image
📸 Screenshots
Running Containers
```text
$ docker ps
CONTAINER ID   IMAGE          PORTS                    NAMES
abc123def456   nginx:alpine   0.0.0.0:8080->80/tcp     parent_frontend
def456ghi789   backend        0.0.0.0:3100->3100/tcp   parent_backend
ghi789jkl012   postgres:13    0.0.0.0:5432->5432/tcp   parent_db
```
Custom Network
```text
$ docker network ls
NETWORK ID     NAME          DRIVER    SCOPE
xyz789abc123   mentees-net   bridge    local
```
Volumes
```text
$ docker volume ls
DRIVER    VOLUME NAME
local     parent-registration-portal_postgres_data
```
Happy Containerizing! 🐳