The Core Problem Every Developer Faces
Imagine this: You've built an amazing application. It works perfectly on your laptop. You push it to staging, and suddenly it breaks. You fix it, deploy to production, and... a different error appears. Sound familiar?
Here's the truth: The same application behaves differently across environments—and that's not a bug, it's by design.
This article will teach you how professional DevOps teams manage the same codebase across development, staging, and production environments without changing a single line of code.
The Big Picture (In One Sentence)
Same application, different environments, different configurations—managed through environment variables, Docker Compose files, and CI/CD stages.
That's it. Now let's unpack it properly.
Table of Contents
- Understanding Environments
- The Golden Rule: Never Change Code
- The Three Pillars of Environment Management
- Environment Variables Deep Dive
- Docker Compose for Different Environments
- CI/CD Pipeline Configuration
- Real-World Example: E-commerce Application
- Common Pitfalls and How to Avoid Them
- Best Practices Checklist
Understanding Environments
Before we dive into solutions, let's understand what we're dealing with.
What Are Environments?
An environment is a complete setup where your application runs. Think of it as a stage in your application's journey from your laptop to serving real users.
The Three Main Environments
[Development] → [Staging] → [Production]
(Dev) (QA) (Prod)
1. Development Environment (Local/Dev)
- Where: Your laptop or development server
- Purpose: Write code, test features, break things freely
- Users: Developers only
- Data: Fake data, test users
- Example config:
  - Database: `localhost:5432`
  - Debug mode: enabled
  - Email: Mock service (no real emails sent)
  - Payment: Sandbox/test mode
2. Staging Environment (QA/Pre-Production)
- Where: Server that mirrors production
- Purpose: Test before releasing to real users
- Users: QA team, product managers, stakeholders
- Data: Sanitized production-like data
- Example config:
  - Database: `staging-db.company.com`
  - Debug mode: enabled (with logging)
  - Email: Real service, but to test addresses only
  - Payment: Test mode with real payment processor
3. Production Environment (Live/Prod)
- Where: Live servers accessible to users
- Purpose: Serve real users
- Users: Everyone (customers, clients)
- Data: Real, sensitive data
- Example config:
  - Database: `prod-db.company.com` (high availability, replicas)
  - Debug mode: disabled
  - Email: Real service, to real users
  - Payment: Live mode with real transactions
Why Can't We Use the Same Configuration Everywhere?
Simple answer: Because the needs are different.
| Aspect | Development | Staging | Production |
|---|---|---|---|
| Performance | Can be slow | Should be fast | Must be fast |
| Security | Relaxed | Medium | Maximum |
| Data | Fake | Test | Real |
| Errors | Show full stack traces | Show some details | Hide details |
| Monitoring | Minimal | Moderate | Comprehensive |
| Cost | Minimal resources | Moderate resources | Auto-scaling, high availability |
The Golden Rule: Never Change Code
Here's the most important principle in DevOps:
We do NOT change application code between environments.
What We DO Change:
- ✅ Environment variables (`.env` files)
- ✅ Docker Compose files (different services, volumes, networks)
- ✅ CI/CD pipeline stages (different deployment strategies)
What We DON'T Change:
- ❌ Application logic
- ❌ Business rules
- ❌ Core functionality
- ❌ Source code
Why This Rule Matters
Example of doing it WRONG:
// ❌ BAD: Hardcoding environment-specific logic
let dbHost;
if (environment === 'production') {
  dbHost = 'prod-db.company.com';
} else if (environment === 'staging') {
  dbHost = 'staging-db.company.com';
} else {
  dbHost = 'localhost';
}
Problems with this approach:
- Code becomes a mess of if/else statements
- Adding a new environment requires code changes
- Easy to make mistakes and deploy wrong config
- Violates the principle of configuration externalization
Example of doing it RIGHT:
// ✅ GOOD: Read from environment variables
const dbHost = process.env.DB_HOST || 'localhost';
const dbPort = process.env.DB_PORT || 5432;
const dbName = process.env.DB_NAME || 'myapp_dev';
Then use different .env files:
# .env.development
DB_HOST=localhost
DB_PORT=5432
DB_NAME=myapp_dev
# .env.staging
DB_HOST=staging-db.company.com
DB_PORT=5432
DB_NAME=myapp_staging
# .env.production
DB_HOST=prod-db.company.com
DB_PORT=5432
DB_NAME=myapp_prod
The Three Pillars of Environment Management
Let's explore the three mechanisms that make environment management possible.
1. Environment Variables (The Foundation)
What Are Environment Variables?
Environment variables are key-value pairs that configure your application's behavior without changing code.
Think of them as settings or preferences for your application.
Anatomy of an Environment Variable
# Format: KEY=VALUE
DATABASE_URL=postgresql://user:password@host:5432/dbname
- KEY: The variable name (usually UPPERCASE_WITH_UNDERSCORES)
- VALUE: The configuration value (can be string, number, URL, etc.)
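For a compound value like the `DATABASE_URL` above, the individual pieces can be recovered with Node's built-in `URL` class. A sketch (the helper name and return shape are mine):

```javascript
// Sketch: split a DATABASE_URL-style connection string into its parts
// using Node's built-in WHATWG URL parser.
function parseDatabaseUrl(raw) {
  const u = new URL(raw);
  return {
    user: u.username,
    password: u.password,
    host: u.hostname,
    port: Number(u.port),
    name: u.pathname.slice(1), // drop the leading "/"
  };
}
```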
What Environment Variables Control
Environment variables control behavior, not logic.
┌─────────────────────────────────────────┐
│ What Env Vars Control │
├─────────────────────────────────────────┤
│ ✅ Which database to connect to │
│ ✅ API keys and secrets │
│ ✅ Feature flags (enable/disable) │
│ ✅ Debug mode on/off │
│ ✅ Third-party service URLs │
│ ✅ Timeouts and retry limits │
│ ✅ Cache settings │
│ ✅ Logging levels │
└─────────────────────────────────────────┘
┌─────────────────────────────────────────┐
│ What Env Vars DON'T Control │
├─────────────────────────────────────────┤
│ ❌ How your authentication works │
│ ❌ Business logic rules │
│ ❌ Algorithm implementations │
│ ❌ User interface behavior │
└─────────────────────────────────────────┘
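Feature flags from the first box illustrate the split well: the variable only carries a value, and the code interprets it. A minimal sketch (the `flagEnabled` helper is hypothetical):

```javascript
// Sketch: interpret an env var as a boolean feature flag.
// Anything other than the string "true" (case-insensitive) counts as disabled,
// so an unset or misspelled flag fails safe.
function flagEnabled(name, env = process.env) {
  return String(env[name]).toLowerCase() === 'true';
}
```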
Real-World Example: E-commerce App
# .env.development
NODE_ENV=development
PORT=3000
# Database
DB_HOST=localhost
DB_PORT=5432
DB_NAME=ecommerce_dev
DB_USER=dev_user
DB_PASSWORD=dev_password
# Redis Cache
REDIS_HOST=localhost
REDIS_PORT=6379
# Payment Gateway
STRIPE_API_KEY=sk_test_123456789
STRIPE_WEBHOOK_SECRET=whsec_test_abc
# Email Service
SMTP_HOST=localhost
SMTP_PORT=1025
EMAIL_FROM=dev@localhost
# Feature Flags
ENABLE_NEW_CHECKOUT=true
ENABLE_LOYALTY_POINTS=false
# Logging
LOG_LEVEL=debug
ENABLE_QUERY_LOGGING=true
# Session
SESSION_SECRET=dev-secret-change-in-prod
SESSION_TIMEOUT=86400
# File Upload
MAX_FILE_SIZE=10485760
ALLOWED_FILE_TYPES=jpg,png,pdf
# .env.production
NODE_ENV=production
PORT=8080
# Database (with connection pooling)
DB_HOST=prod-cluster.us-east-1.rds.amazonaws.com
DB_PORT=5432
DB_NAME=ecommerce_prod
DB_USER=prod_user
DB_PASSWORD=${DB_PASSWORD_FROM_SECRETS_MANAGER}
DB_POOL_MIN=10
DB_POOL_MAX=50
# Redis Cache (with SSL)
REDIS_HOST=prod-cache.abc123.cache.amazonaws.com
REDIS_PORT=6379
REDIS_TLS=true
# Payment Gateway (LIVE mode)
STRIPE_API_KEY=${STRIPE_LIVE_KEY_FROM_VAULT}
STRIPE_WEBHOOK_SECRET=${STRIPE_WEBHOOK_FROM_VAULT}
# Email Service (via SendGrid)
SMTP_HOST=smtp.sendgrid.net
SMTP_PORT=587
SMTP_USER=apikey
SMTP_PASSWORD=${SENDGRID_API_KEY}
EMAIL_FROM=orders@mystore.com
# Feature Flags
ENABLE_NEW_CHECKOUT=true
ENABLE_LOYALTY_POINTS=true
# Logging (less verbose in prod)
LOG_LEVEL=warn
ENABLE_QUERY_LOGGING=false
# Session (secure settings)
SESSION_SECRET=${SESSION_SECRET_FROM_VAULT}
SESSION_TIMEOUT=3600
COOKIE_SECURE=true
COOKIE_HTTPONLY=true
# File Upload (stricter in prod)
MAX_FILE_SIZE=5242880
ALLOWED_FILE_TYPES=jpg,png
How to Use Environment Variables in Code
Node.js / JavaScript:
const express = require('express');
const app = express();
// Access environment variables via process.env
const PORT = process.env.PORT || 3000;
const NODE_ENV = process.env.NODE_ENV || 'development';
const DB_HOST = process.env.DB_HOST;
// Use them in your application
app.listen(PORT, () => {
console.log(`Server running in ${NODE_ENV} mode on port ${PORT}`);
console.log(`Connecting to database at ${DB_HOST}`);
});
Python:
import os
# Access environment variables via os.environ
PORT = int(os.environ.get('PORT', 3000))
NODE_ENV = os.environ.get('NODE_ENV', 'development')
DB_HOST = os.environ.get('DB_HOST')
print(f"Server running in {NODE_ENV} mode on port {PORT}")
print(f"Connecting to database at {DB_HOST}")
Go:
package main

import (
	"fmt"
	"os"
)

func main() {
	// Access environment variables via os.Getenv
	port := os.Getenv("PORT")
	if port == "" {
		port = "3000" // fallback when the variable is unset
	}
	fmt.Printf("Server running in %s mode on port %s\n", os.Getenv("NODE_ENV"), port)
	fmt.Printf("Connecting to database at %s\n", os.Getenv("DB_HOST"))
}
Best Practices for Environment Variables
- Never commit `.env` files to Git
# .gitignore
.env
.env.*
!.env.example # Only commit the example
- Use `.env.example` as documentation
# .env.example - Commit this to Git
DB_HOST=localhost
DB_PORT=5432
DB_NAME=your_database_name
DB_USER=your_database_user
DB_PASSWORD=your_secure_password
- Use default values in code
const PORT = process.env.PORT || 3000; // Fallback to 3000
- Validate critical environment variables on startup
const requiredEnvVars = ['DB_HOST', 'DB_PASSWORD', 'STRIPE_API_KEY'];
requiredEnvVars.forEach(varName => {
if (!process.env[varName]) {
throw new Error(`Missing required environment variable: ${varName}`);
}
});
- Use secrets management for sensitive data:
- AWS Secrets Manager
- HashiCorp Vault
- Azure Key Vault
- Google Cloud Secret Manager
2. Docker Compose Files (The Orchestrator)
Docker Compose files define how your application's services are configured and connected. Different environments need different compose configurations.
Why Different Compose Files?
Development needs:
- Mount source code as volumes (for hot-reloading)
- Expose all ports for debugging
- Run with minimal resources
- Include development tools (debuggers, profilers)
Production needs:
- Use optimized images
- Expose only necessary ports
- Configure health checks
- Set resource limits
- Enable auto-restart
File Structure
project/
├── docker-compose.yml # Base configuration (shared)
├── docker-compose.dev.yml # Development overrides
├── docker-compose.staging.yml # Staging overrides
├── docker-compose.prod.yml # Production overrides
├── .env.development
├── .env.staging
├── .env.production
└── .env.example
Base Configuration (docker-compose.yml)
This file contains shared configuration across all environments:
version: '3.8'
services:
app:
build:
context: .
dockerfile: Dockerfile
environment:
- NODE_ENV=${NODE_ENV}
- DB_HOST=${DB_HOST}
- DB_PORT=${DB_PORT}
- DB_NAME=${DB_NAME}
- DB_USER=${DB_USER}
- DB_PASSWORD=${DB_PASSWORD}
depends_on:
- db
- redis
db:
image: postgres:15-alpine
environment:
- POSTGRES_DB=${DB_NAME}
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASSWORD}
redis:
image: redis:7-alpine
Development Overrides (docker-compose.dev.yml)
version: '3.8'
services:
app:
# Mount source code for hot-reloading
volumes:
- ./src:/app/src
- ./node_modules:/app/node_modules
# Expose debugging port
ports:
- "3000:3000"
- "9229:9229" # Node.js debugger
# Override command to use nodemon
command: npm run dev
environment:
- DEBUG=* # Enable all debug logs
db:
# Expose database port for local tools (pgAdmin, DBeaver)
ports:
- "5432:5432"
volumes:
# Persist data locally
- ./data/postgres:/var/lib/postgresql/data
redis:
ports:
- "6379:6379"
# Additional development tools
mailhog: # Catch all emails in development
image: mailhog/mailhog
ports:
- "1025:1025" # SMTP
- "8025:8025" # Web UI
Staging Overrides (docker-compose.staging.yml)
version: '3.8'
services:
app:
# Use pre-built image from registry
image: myregistry.com/myapp:staging-latest
# Don't mount source code
# Expose only necessary port
ports:
- "8080:3000"
# Add health check
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
# Restart policy
restart: unless-stopped
# Resource limits
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.5'
memory: 512M
db:
# Use managed database (no local db in staging)
# Remove this service, connect to external DB via DB_HOST env var
profiles:
- disabled # This prevents it from starting
redis:
# Use managed Redis (ElastiCache, etc.)
profiles:
- disabled
Production Overrides (docker-compose.prod.yml)
version: '3.8'
services:
app:
# Use specific versioned image (not 'latest')
image: myregistry.com/myapp:v1.2.3
# Multiple replicas for load balancing
deploy:
replicas: 3
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '1.0'
memory: 1G
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
# Strict health check
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 10s
timeout: 5s
retries: 5
start_period: 60s
# Read-only root filesystem for security
read_only: true
tmpfs:
- /tmp
# No ports exposed (behind load balancer)
# Load balancer handles external traffic
# Use external managed services (RDS, ElastiCache)
# No db or redis services defined
Using Different Compose Files
# Development
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
# Staging
docker-compose -f docker-compose.yml -f docker-compose.staging.yml up -d
# Production
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
Pro Tip: Use Environment-Specific Scripts
Create helper scripts to simplify commands:
# scripts/dev.sh
#!/bin/bash
export $(cat .env.development | xargs)
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
# scripts/staging.sh
#!/bin/bash
export $(cat .env.staging | xargs)
docker-compose -f docker-compose.yml -f docker-compose.staging.yml up -d
# scripts/prod.sh
#!/bin/bash
export $(cat .env.production | xargs)
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
3. CI/CD Pipeline Stages (The Automation)
CI/CD (Continuous Integration / Continuous Deployment) pipelines automate the process of testing, building, and deploying your application to different environments.
The CI/CD Flow
[Code Push] → [Lint & Test] → [Build Image] → [Deploy to Environment]

- Code push: GitHub, GitLab, or Bitbucket triggers the pipeline
- Lint & test: linting, unit tests, integration tests, E2E tests
- Build image: Docker build, tagged and pushed to the registry
- Deploy: to Dev, Staging, or Production, depending on the branch
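The branch-to-environment rule at the heart of this flow fits in a few lines. A sketch (the function name is mine; the refs match the workflow conventions used in this article):

```javascript
// Sketch: map a Git ref to its deployment target, as the pipeline does.
function targetEnvironment(ref) {
  if (ref === 'refs/heads/main') return 'production';
  if (ref === 'refs/heads/develop') return 'staging';
  return null; // feature branches run tests only, no deployment
}
```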
Example: GitHub Actions Workflow
# .github/workflows/deploy.yml
name: Deploy to Environments
on:
push:
branches:
- develop # Triggers deployment to staging
- main # Triggers deployment to production
pull_request:
branches:
- develop
- main
jobs:
# Job 1: Run tests (always run)
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
- name: Install dependencies
run: npm ci
- name: Run linter
run: npm run lint
- name: Run unit tests
run: npm test
- name: Run integration tests
run: npm run test:integration
# Job 2: Build Docker image
build:
needs: test # Only run if tests pass
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to Container Registry
uses: docker/login-action@v2
with:
registry: myregistry.com
username: ${{ secrets.REGISTRY_USERNAME }}
password: ${{ secrets.REGISTRY_PASSWORD }}
- name: Build and push Docker image
uses: docker/build-push-action@v4
with:
context: .
push: true
tags: |
myregistry.com/myapp:${{ github.sha }}
myregistry.com/myapp:latest
# Job 3: Deploy to Development
deploy-dev:
needs: build
if: github.ref == 'refs/heads/develop' && github.event_name == 'push'
runs-on: ubuntu-latest
environment: development
steps:
- name: Deploy to Development Server
run: |
# SSH into dev server and deploy
ssh ${{ secrets.DEV_SERVER_USER }}@${{ secrets.DEV_SERVER_HOST }} << 'EOF'
cd /opt/myapp
docker pull myregistry.com/myapp:latest
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
EOF
# Job 4: Deploy to Staging
deploy-staging:
needs: build
if: github.ref == 'refs/heads/develop' && github.event_name == 'push'
runs-on: ubuntu-latest
environment: staging
steps:
- name: Deploy to Staging
run: |
ssh ${{ secrets.STAGING_SERVER_USER }}@${{ secrets.STAGING_SERVER_HOST }} << 'EOF'
cd /opt/myapp
export $(cat .env.staging | xargs)
docker pull myregistry.com/myapp:${{ github.sha }}
docker-compose -f docker-compose.yml -f docker-compose.staging.yml up -d
EOF
- name: Run smoke tests
run: |
# Basic health check
curl -f https://staging.myapp.com/health || exit 1
# Job 5: Deploy to Production (manual approval required)
deploy-production:
needs: [build, deploy-staging]
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
runs-on: ubuntu-latest
environment:
name: production
url: https://myapp.com
steps:
- name: Deploy to Production
run: |
ssh ${{ secrets.PROD_SERVER_USER }}@${{ secrets.PROD_SERVER_HOST }} << 'EOF'
cd /opt/myapp
export $(cat .env.production | xargs)
# Use specific version tag, not 'latest'
docker pull myregistry.com/myapp:${{ github.sha }}
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
EOF
- name: Run production smoke tests
run: |
curl -f https://myapp.com/health || exit 1
- name: Notify team
uses: 8398a7/action-slack@v3
with:
status: ${{ job.status }}
text: 'Production deployment completed!'
webhook_url: ${{ secrets.SLACK_WEBHOOK }}
Key Concepts in CI/CD Stages
1. Jobs and Dependencies
jobs:
test: # Run tests first
build: # Build only if tests pass
needs: test
deploy: # Deploy only if build succeeds
needs: build
2. Branch-Based Deployment
# Deploy to staging when pushing to 'develop' branch
if: github.ref == 'refs/heads/develop'
# Deploy to production when pushing to 'main' branch
if: github.ref == 'refs/heads/main'
3. Environment Protection
GitHub allows you to configure environment protection rules:
- Required reviewers: Deployment to production requires approval
- Wait timer: Enforce a delay before deployment
- Deployment branches: Only specific branches can deploy to production
environment:
name: production # References GitHub environment with protection rules
url: https://myapp.com
4. Secrets Management
Store sensitive data in GitHub Secrets:
Repository Settings → Secrets and variables → Actions → New repository secret
Access them in workflows:
password: ${{ secrets.DATABASE_PASSWORD }}
Real-World Example: E-commerce Application
Let's put it all together with a complete example.
Scenario
You're building an e-commerce platform with:
- Web API (Node.js + Express)
- PostgreSQL database
- Redis cache
- Payment processing (Stripe)
- Email notifications
- File uploads
Project Structure
ecommerce-app/
├── src/
│ ├── app.js
│ ├── config/
│ │ └── database.js
│ ├── routes/
│ ├── controllers/
│ └── services/
├── tests/
├── .env.example
├── .env.development
├── .env.staging
├── .env.production
├── docker-compose.yml
├── docker-compose.dev.yml
├── docker-compose.staging.yml
├── docker-compose.prod.yml
├── Dockerfile
├── .github/
│ └── workflows/
│ └── deploy.yml
└── package.json
1. Application Code (src/app.js)
const express = require('express');
const { Pool } = require('pg');
const redis = require('redis');
const stripe = require('stripe')(process.env.STRIPE_API_KEY);
const app = express();
// Configuration from environment variables
const PORT = process.env.PORT || 3000;
const NODE_ENV = process.env.NODE_ENV || 'development';
// Database connection
const pool = new Pool({
host: process.env.DB_HOST,
port: process.env.DB_PORT,
database: process.env.DB_NAME,
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
});
// Redis connection
const redisClient = redis.createClient({
url: `redis://${process.env.REDIS_HOST}:${process.env.REDIS_PORT}`
});
// Health check endpoint
app.get('/health', async (req, res) => {
try {
// Check database
await pool.query('SELECT 1');
// Check Redis
await redisClient.ping();
res.json({
status: 'healthy',
environment: NODE_ENV,
timestamp: new Date().toISOString()
});
} catch (error) {
res.status(503).json({
status: 'unhealthy',
error: error.message
});
}
});
// API endpoints
app.post('/api/orders', async (req, res) => {
// Order processing logic
// Uses environment-specific configuration automatically
});
// Start server
app.listen(PORT, () => {
console.log(`🚀 Server running in ${NODE_ENV} mode on port ${PORT}`);
});
2. Environment Files
.env.development:
# Application
NODE_ENV=development
PORT=3000
DEBUG=true
# Database (local)
DB_HOST=localhost
DB_PORT=5432
DB_NAME=ecommerce_dev
DB_USER=dev_user
DB_PASSWORD=dev_password
# Redis (local)
REDIS_HOST=localhost
REDIS_PORT=6379
# Stripe (test mode)
STRIPE_API_KEY=sk_test_51A1B2C3D4E5F6
STRIPE_WEBHOOK_SECRET=whsec_test_abc123
# Email (local MailHog)
SMTP_HOST=localhost
SMTP_PORT=1025
# File Upload
UPLOAD_DIR=/tmp/uploads
MAX_FILE_SIZE=10485760
# Feature Flags
ENABLE_LOYALTY_PROGRAM=true
ENABLE_GIFT_CARDS=false
.env.staging:
# Application
NODE_ENV=staging
PORT=8080
DEBUG=false
# Database (AWS RDS)
DB_HOST=staging-db.abc123.us-east-1.rds.amazonaws.com
DB_PORT=5432
DB_NAME=ecommerce_staging
DB_USER=staging_user
DB_PASSWORD=${DB_PASSWORD_FROM_SECRETS}
# Redis (AWS ElastiCache)
REDIS_HOST=staging-cache.abc123.cache.amazonaws.com
REDIS_PORT=6379
# Stripe (test mode, but closer to production)
STRIPE_API_KEY=${STRIPE_TEST_KEY_FROM_VAULT}
STRIPE_WEBHOOK_SECRET=${STRIPE_TEST_WEBHOOK_FROM_VAULT}
# Email (SendGrid)
SMTP_HOST=smtp.sendgrid.net
SMTP_PORT=587
SMTP_USER=apikey
SMTP_PASSWORD=${SENDGRID_KEY_FROM_VAULT}
# File Upload (S3)
UPLOAD_DIR=s3://ecommerce-staging-uploads
MAX_FILE_SIZE=5242880
# Feature Flags
ENABLE_LOYALTY_PROGRAM=true
ENABLE_GIFT_CARDS=true
.env.production:
# Application
NODE_ENV=production
PORT=8080
DEBUG=false
# Database (AWS RDS with replicas)
DB_HOST=prod-db.xyz789.us-east-1.rds.amazonaws.com
DB_PORT=5432
DB_NAME=ecommerce_prod
DB_USER=prod_user
DB_PASSWORD=${DB_PASSWORD_FROM_VAULT}
DB_POOL_MIN=20
DB_POOL_MAX=100
# Redis (AWS ElastiCache cluster)
REDIS_HOST=prod-cache.xyz789.cache.amazonaws.com
REDIS_PORT=6379
REDIS_TLS=true
# Stripe (LIVE mode)
STRIPE_API_KEY=${STRIPE_LIVE_KEY_FROM_VAULT}
STRIPE_WEBHOOK_SECRET=${STRIPE_LIVE_WEBHOOK_FROM_VAULT}
# Email (SendGrid with dedicated IP)
SMTP_HOST=smtp.sendgrid.net
SMTP_PORT=587
SMTP_USER=apikey
SMTP_PASSWORD=${SENDGRID_PROD_KEY_FROM_VAULT}
# File Upload (S3 with CloudFront)
UPLOAD_DIR=s3://ecommerce-prod-uploads
CLOUDFRONT_URL=https://cdn.mystore.com
MAX_FILE_SIZE=5242880
# Feature Flags
ENABLE_LOYALTY_PROGRAM=true
ENABLE_GIFT_CARDS=true
# Security
SESSION_SECRET=${SESSION_SECRET_FROM_VAULT}
COOKIE_SECURE=true
RATE_LIMIT_REQUESTS=100
RATE_LIMIT_WINDOW=900000
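The two `RATE_LIMIT_*` values above are all a basic limiter needs. A naive in-memory, sliding-window sketch (the helper is hypothetical; production setups usually delegate this to Redis or an API gateway):

```javascript
// Sketch: sliding-window in-memory rate limiter driven by two env values,
// e.g. makeRateLimiter(RATE_LIMIT_REQUESTS, RATE_LIMIT_WINDOW).
function makeRateLimiter(maxRequests, windowMs, now = Date.now) {
  const hits = new Map(); // client key -> timestamps inside the window

  return function allow(key) {
    const t = now();
    const recent = (hits.get(key) || []).filter(ts => t - ts < windowMs);
    if (recent.length >= maxRequests) {
      hits.set(key, recent);
      return false; // over the limit for this window
    }
    recent.push(t);
    hits.set(key, recent);
    return true;
  };
}
```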
3. The Deployment Flow
┌──────────────────────────────────────────────────────────────┐
│ Developer Workflow │
└──────────────────────────────────────────────────────────────┘
1. Local Development
├─ Developer writes code
├─ Uses .env.development
├─ Runs: docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
└─ Tests at http://localhost:3000
2. Commit & Push to 'develop' branch
├─ GitHub Actions triggers
├─ Runs tests
├─ Builds Docker image
└─ Auto-deploys to Staging
3. Staging Environment
├─ Uses .env.staging
├─ QA team tests
├─ Runs: docker-compose -f docker-compose.yml -f docker-compose.staging.yml up
└─ Accessible at https://staging.mystore.com
4. Merge to 'main' branch (after approval)
├─ GitHub Actions triggers
├─ Requires manual approval (protected environment)
├─ Deploys to Production
└─ Uses .env.production
5. Production Environment
├─ Uses .env.production
├─ Runs: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
├─ Serves real customers
└─ Accessible at https://mystore.com
4. What Changes Between Environments?
| Component | Development | Staging | Production |
|---|---|---|---|
| Database | Local PostgreSQL | AWS RDS (small) | AWS RDS (large, replicated) |
| Cache | Local Redis | ElastiCache (small) | ElastiCache (cluster mode) |
| Email | MailHog (fake) | SendGrid (test mode) | SendGrid (live, dedicated IP) |
| Payment | Stripe test mode | Stripe test mode | Stripe LIVE mode |
| File Storage | Local /tmp | S3 (staging bucket) | S3 + CloudFront CDN |
| Debug Mode | ON | OFF | OFF |
| Logging | Verbose | Moderate | Errors only |
| Resources | Minimal | Medium | Auto-scaling |
| SSL/TLS | No | Yes | Yes (with cert pinning) |
| Monitoring | None | Basic | Comprehensive (DataDog) |
Common Pitfalls and How to Avoid Them
Pitfall #1: Forgetting to Update .env Files
Problem:
# You update .env.development but forget .env.staging
# Staging deploys with old configuration
Solution:
- Document all environment variables in `.env.example`
- Add a validation script to check for missing variables
- Use environment variable management tools (Doppler, Infisical)
// scripts/validate-env.js
const required = ['DB_HOST', 'DB_PASSWORD', 'STRIPE_API_KEY'];
const missing = required.filter(key => !process.env[key]);
if (missing.length > 0) {
console.error(`Missing required environment variables: ${missing.join(', ')}`);
process.exit(1);
}
Pitfall #2: Using 'latest' Tag in Production
Problem:
# ❌ BAD: Unpredictable, hard to rollback
image: myregistry.com/myapp:latest
Solution:
# ✅ GOOD: Version-specific, easy to rollback
image: myregistry.com/myapp:v1.2.3
# Or use commit SHA
image: myregistry.com/myapp:a1b2c3d
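Deriving such a tag in a release script is trivial. A sketch (the names are illustrative; the 7-character short SHA is the common Git convention):

```javascript
// Sketch: build a registry image reference from a commit SHA.
function imageRef(registry, app, sha) {
  return `${registry}/${app}:${sha.slice(0, 7)}`;
}
```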
Pitfall #3: Mixing Configuration Methods
Problem:
// ❌ BAD: Some config in code, some in env vars
const config = {
port: 3000, // Hardcoded
dbHost: process.env.DB_HOST, // From env
debug: NODE_ENV === 'development' ? true : false // Logic in code
};
Solution:
// ✅ GOOD: All configuration from environment
const config = {
port: parseInt(process.env.PORT, 10) || 3000,
dbHost: process.env.DB_HOST,
debug: process.env.DEBUG === 'true'
};
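Taking this one step further, small typed helpers keep the parsing rules in one place so every config value gets an explicit type and fallback. A sketch (the helper names are mine):

```javascript
// Sketch: typed env readers with explicit fallbacks.
function intFromEnv(name, fallback, env = process.env) {
  const n = parseInt(env[name], 10);
  return Number.isNaN(n) ? fallback : n;
}

function boolFromEnv(name, fallback, env = process.env) {
  return env[name] === undefined ? fallback : env[name] === 'true';
}
```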
Pitfall #4: Exposing Secrets in Logs
Problem:
// ❌ BAD: Logs sensitive data
console.log('Connecting to database:', {
host: DB_HOST,
user: DB_USER,
password: DB_PASSWORD // Oops! Logged the password
});
Solution:
// ✅ GOOD: Redact sensitive data
console.log('Connecting to database:', {
host: DB_HOST,
user: DB_USER,
password: '***REDACTED***'
});
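The redaction can be generalized so no call site ever logs a secret by accident. A sketch (the key matching is a naive substring check; the helper is hypothetical):

```javascript
// Sketch: shallow-copy a config object, masking secret-looking keys.
function redactConfig(config, secretHints = ['password', 'secret', 'key', 'token']) {
  const out = {};
  for (const [k, v] of Object.entries(config)) {
    const isSecret = secretHints.some(h => k.toLowerCase().includes(h));
    out[k] = isSecret ? '***REDACTED***' : v;
  }
  return out;
}
```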
Pitfall #5: No Rollback Strategy
Problem:
- Deploy to production
- Something breaks
- No easy way to revert
Solution:
- Use versioned images (not `latest`)
- Keep the previous version running until the new version is verified
- Use blue-green deployments or canary releases
# Quick rollback: point docker-compose.prod.yml back at the previous
# working tag (v1.2.2), then pull and restart
docker pull myregistry.com/myapp:v1.2.2
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
Pitfall #6: Skipping Staging
Problem:
- Push directly from dev to production
- Miss environment-specific bugs
- Customers affected by issues
Solution:
- Always deploy to staging first
- Run smoke tests in staging
- Get approval before production deploy
Best Practices Checklist
✅ Environment Variables
- [ ] Never commit `.env` files to Git
- [ ] Use `.env.example` for documentation
- [ ] Validate required variables on startup
- [ ] Use secrets management for sensitive data (Vault, AWS Secrets Manager)
- [ ] Set sane defaults in code where possible
- [ ] Use consistent naming conventions (`SCREAMING_SNAKE_CASE`)
- [ ] Document what each variable does
- [ ] Separate secrets from config (config can be version-controlled, secrets cannot)
✅ Docker Compose
- [ ] Use a base `docker-compose.yml` plus environment-specific overlays
- [ ] Version-control all compose files
- [ ] Set resource limits in production
- [ ] Configure health checks
- [ ] Use restart policies
- [ ] Don't expose unnecessary ports in production
- [ ] Use named volumes for data persistence
- [ ] Set appropriate logging drivers
✅ CI/CD Pipeline
- [ ] Run tests before every deployment
- [ ] Build once, deploy everywhere (same Docker image)
- [ ] Use version tags, not `latest`
- [ ] Implement deployment gates (manual approval for production)
- [ ] Set up rollback procedures
- [ ] Monitor deployments
- [ ] Notify team of deployment status
- [ ] Keep deployment logs
✅ Security
- [ ] Never log sensitive data
- [ ] Use HTTPS in staging and production
- [ ] Rotate secrets regularly
- [ ] Implement rate limiting
- [ ] Use secure session management
- [ ] Keep dependencies updated
- [ ] Scan images for vulnerabilities
- [ ] Follow principle of least privilege
✅ Monitoring
- [ ] Set up health check endpoints
- [ ] Implement logging (structured logs)
- [ ] Monitor application metrics
- [ ] Set up alerts for critical issues
- [ ] Track deployment history
- [ ] Monitor resource usage
- [ ] Set up error tracking (Sentry, Rollbar)
Tools and Resources
Environment Management
- Dotenv libraries: load environment variables from `.env` files
  - Node.js: `dotenv`
  - Python: `python-dotenv`
  - Go: `godotenv`
- Secrets management: AWS Secrets Manager, HashiCorp Vault, Azure Key Vault, Google Cloud Secret Manager
Container Management
- Docker
- Docker Compose
- Kubernetes (for advanced orchestration)
CI/CD Platforms
- GitHub Actions
- GitLab CI/CD
- Bitbucket Pipelines
Monitoring & Logging
- DataDog (metrics and monitoring)
- Sentry, Rollbar (error tracking)
Conclusion
Managing applications across multiple environments is a fundamental DevOps skill. Here's what you've learned:
Key Takeaways
- Same code, different configs - Never change application logic for different environments
- Three pillars - Environment variables, Docker Compose, and CI/CD stages
- Env vars control behavior - Not business logic
- Docker Compose orchestrates - Different configurations for different needs
- CI/CD automates - Consistent, repeatable deployments
- Security matters - Never expose secrets, always use proper management
- Always test in staging - Before production deployment
The Mental Model
Think of your application as a template:
Application Code (Template)
+
Configuration (Variables)
=
Running Application (Instance)
The template stays the same. The variables change. The result is different instances optimized for each environment.
Next Steps
Now that you understand the concepts:
- Practice: Set up a simple app with dev/staging/prod environments
- Experiment: Try different Docker Compose configurations
- Automate: Build your first CI/CD pipeline
- Secure: Implement proper secrets management
- Monitor: Add health checks and logging
- Scale: Learn about Kubernetes for advanced orchestration
Final Thought
Environment management isn't just about technology—it's about discipline. It's about creating systems that are predictable, repeatable, and reliable. It's about making sure that what works on your laptop also works in production.
Master these concepts, and you'll be able to deploy with confidence, knowing that your application behaves consistently across all environments.
Questions or Feedback?
Did this guide help you understand environment management? Have questions about specific scenarios? Drop a comment below!
Want to see a follow-up article on:
- Kubernetes environment management?
- Advanced CI/CD patterns?
- Secrets management deep-dive?
- Production deployment strategies?
Let me know in the comments! 🚀
This article is designed to give you crystal-clear understanding of DevOps environment management. If something is unclear or you'd like more detail on any topic, don't hesitate to ask!