Or: That time I learned that DNS is not a medical condition
Act I: The Setup (aka "What Have I Gotten Myself Into?")
Picture this: You're a chef who just learned how to make toast. Now someone's asking you to cook a 7-course meal for the Queen. That's basically what deploying a full-stack application feels like when you're starting out.
But here's the good news: by the end of this guide, you'll have your very own production-ready app running on the internet, complete with:
- A fancy custom domain (like coolapp.pizza - yes, .pizza is a real domain extension)
- A backend that actually talks to a database (without crying)
- SSL certificates (the little padlock thing in browsers)
- Automated deployments (because manually deploying is so 2010)
Warning: This guide is long. Really long. Like "Lord of the Rings extended edition" long. But I promise it's worth it. Grab some coffee (or tea, I don't judge), and let's begin.
The Cast of Characters (Technologies We'll Use)
Think of building a web app like running a restaurant:
- EC2 (AWS) = Your actual restaurant building (the server)
- Domain Name = Your restaurant's address (coolapp.pizza)
- Nginx = The host who greets customers and shows them to the right table
- Backend (FastAPI) = Your kitchen where food (data) is prepared
- Database (PostgreSQL) = Your pantry where ingredients (data) are stored
- Redis = Your speed-rack of frequently used ingredients
- Frontend (React/Next.js) = The dining room where customers actually eat
- Vercel = A fancy restaurant franchise that hosts your dining rooms in multiple locations
- SSL/Certbot = The health inspector's certificate on your wall
- GitHub Actions = Your robot delivery system that automatically restocks your kitchen
Got it? No? That's okay. It'll make sense as we go.
Act II: The Preparation (Before We Touch Any Code)
What You'll Need (Shopping List)
- A domain name - Buy one from Namecheap, Cloudflare, or anywhere really. For this guide, let's say you bought tacocloud.dev (because why not?)
- An AWS account - Free tier works fine. You'll need a credit card but won't be charged much (maybe $5-10/month)
- A GitHub account - For storing your code and automating deployments
- A Vercel account - Free tier is perfect. We'll host the frontend here
- LinkedIn/GitHub OAuth apps (optional) - If your app needs "Login with LinkedIn/GitHub"
- Patience - Cannot be purchased. Must be cultivated through suffering.
Act III: Building the Restaurant (Setting Up EC2)
Step 1: Launch Your EC2 Instance
Think of EC2 as renting a computer in the cloud. It's like having a desktop PC that never turns off and has really fast internet.
Go to AWS Console → EC2 → Launch Instance
Choose Ubuntu Server 24.04 LTS (it's like choosing what operating system your computer runs)
Instance type: t2.micro or t3.micro (the free tier options - perfectly fine for starting out)
Create a new key pair - This is your secret password to access the server. Download the .pem file and NEVER LOSE IT. Seriously. Tattoo it on your arm if you have to.
Security Group (Firewall Rules):
- SSH (Port 22) → Your IP only (this is your secret back door)
- HTTP (Port 80) → Anywhere (0.0.0.0/0)
- HTTPS (Port 443) → Anywhere (0.0.0.0/0)
- DO NOT open PostgreSQL (5432) or Redis (6379) to the internet unless you want hackers having a party
Click Launch and wait 2 minutes
Get an Elastic IP (optional but recommended):
- This gives your server a permanent address
- Go to Elastic IPs → Allocate → Associate with your instance
- Write this IP down. Let's say it's 123.45.67.89
Step 2: Connect to Your Server
On your computer (Mac/Linux):
# Move your key file to a safe place
mv ~/Downloads/my-key.pem ~/.ssh/tacocloud-key.pem
# Make it secure (only you can read it)
chmod 400 ~/.ssh/tacocloud-key.pem
# Connect to your server!
ssh -i ~/.ssh/tacocloud-key.pem ubuntu@123.45.67.89
If that worked, you'll see something like:
Welcome to Ubuntu 24.04 LTS
ubuntu@ip-172-31-12-34:~$
Congratulations! You're now controlling a computer in a data center somewhere. Feel the power. ⚡
Step 3: Update Everything (Like Updating Your Phone)
sudo apt update && sudo apt upgrade -y
This downloads all the latest security patches. Takes 2-5 minutes. Go stretch.
Step 4: Install the Basic Tools
# Install all the things
sudo apt install -y \
git \
nginx \
python3-pip \
python3-venv \
build-essential \
libpq-dev \
curl \
certbot \
python3-certbot-nginx \
postgresql \
postgresql-contrib \
redis-server
What did we just install? Let me break it down:
- git - Version control (like Google Docs revision history but for code)
- nginx - The web server / traffic cop
- python3-pip & python3-venv - Python package manager and virtual environments
- postgresql - Database (where your data lives)
- redis-server - Fast cache (like RAM but for your app)
- certbot - Free SSL certificates (the padlock in the browser)
- Other stuff - Build tools and dependencies
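Before moving on, it's worth a quick sanity check that the key tools actually landed on your PATH. This loop isn't from the original setup, just a convenience:

```shell
# Verify the key tools are available after the apt install
for tool in git nginx psql redis-cli certbot; do
  if command -v "$tool" > /dev/null; then
    echo "$tool: OK"
  else
    echo "$tool: MISSING"
  fi
done
```

If anything prints MISSING, rerun the apt install before continuing.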
Act IV: Setting Up Your Restaurant Staff (Creating Users)
Running everything as the default ubuntu user is like having the restaurant owner cook, clean, and greet customers. Not ideal.
# Create a new user for your app
sudo adduser --disabled-password --gecos "" tacomaster
# Give them sudo powers (for emergencies)
sudo usermod -aG sudo tacomaster
# Switch to that user
sudo -u tacomaster -i
Now you're tacomaster. Nice.
Act V: The Pantry (Database Setup)
PostgreSQL is your database. Think of it as an Excel spreadsheet on steroids that can handle millions of rows and never crashes.
Step 1: Create Database Users and Databases
# Become the postgres superuser (like becoming admin)
sudo -u postgres psql
You'll see: postgres=#
Now type these commands:
-- Create a user for DEV environment
CREATE USER taco_dev_user WITH PASSWORD 'super_secret_dev_pass';
-- Create DEV database
CREATE DATABASE taco_dev_db OWNER taco_dev_user;
-- Create a user for PRODUCTION
CREATE USER taco_prod_user WITH PASSWORD 'even_more_secret_prod_pass';
-- Create PROD database
CREATE DATABASE taco_prod_db OWNER taco_prod_user;
-- Exit
\q
Step 2: Fix Authentication (The Annoying Part)
By default, PostgreSQL uses "peer authentication", which trusts your Linux username instead of asking for a password. Our app connects with a username and password, so we need password authentication.
# Find your PostgreSQL version
ls /etc/postgresql/
# Edit the config (replace 16 with your version)
sudo nano /etc/postgresql/16/main/pg_hba.conf
Find these lines:
# "local" is for Unix domain socket connections only
local all all peer
Change peer to md5 (newer PostgreSQL releases default to scram-sha-256 for passwords; md5 is fine for this setup):
local all all md5
# IPv4 local connections:
host all all 127.0.0.1/32 md5
Save (Ctrl+X, then Y, then Enter) and restart:
sudo systemctl restart postgresql
Step 3: Test It
# Try connecting (you'll be asked for password)
psql -h localhost -U taco_dev_user -d taco_dev_db
Type the password. If you see taco_dev_db=>, you did it! Type \q to exit.
Act VI: The Speed Kitchen (Redis Setup)
Redis is already installed! Just needs a tiny config check:
# Check if Redis is running
sudo systemctl status redis-server
# Test it
redis-cli ping
If you see PONG, Redis is happy. That's it. Redis is the easiest part of this whole adventure.
Act VII: Clone Your Code (Getting the Recipe)
Let's say your backend code is on GitHub at github.com/yourusername/taco-backend.
# As tacomaster user
cd ~
mkdir -p apps
cd apps
# Clone your repo
git clone git@github.com:yourusername/taco-backend.git backend
# Create separate folders for dev and prod
mkdir -p backend/dev backend/prod
# For this guide, we'll work in dev
cd backend/dev
# Copy your code here (or checkout dev branch)
# Details depend on your repo structure
Act VIII: Environment Variables (The Secret Sauce)
Create a .env file at /home/tacomaster/apps/backend/dev/.env:
nano /home/tacomaster/apps/backend/dev/.env
IMPORTANT: No quotes around values! Each line is KEY=value
ENVIRONMENT=development
PORT=5000
# Database
DB_TYPE=postgresql
DB_NAME=taco_dev_db
DB_USER=taco_dev_user
DB_PASSWORD=super_secret_dev_pass
DB_HOST=localhost
DB_PORT=5432
DB_URL=postgresql://taco_dev_user:super_secret_dev_pass@localhost:5432/taco_dev_db
# Redis
BE_REDIS_URL=redis://localhost:6379
# Security
SECRET_KEY=generate_a_really_long_random_string_here
# OAuth (if you use LinkedIn/GitHub login)
LINKEDIN_CLIENT_ID=your_client_id
LINKEDIN_CLIENT_SECRET=your_client_secret
LINKEDIN_REDIRECT_URI=https://dev.api.tacocloud.dev/api/v1/auth/linkedin/callback
GITHUB_CLIENT_ID=your_github_client_id
GITHUB_CLIENT_SECRET=your_github_client_secret
GITHUB_REDIRECT_URI=https://dev.api.tacocloud.dev/api/v1/auth/github/callback
# CORS (which frontends can talk to this backend)
CORS_ORIGINS=http://localhost:3000,https://dev.tacocloud.dev
# File uploads
UPLOAD_DIRECTORY=/home/tacomaster/apps/backend/dev/static
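Why no quotes? Different tools disagree on whether quotes are part of the value (shell sourcing strips them, a naive KEY=value reader keeps them), so plain unquoted values are the least surprising. A quick illustration with a throwaway file:

```shell
# Write a throwaway env file with one quoted and one unquoted value
cat > /tmp/demo.env <<'EOF'
GOOD=hello
BAD="hello"
EOF

# Parse it the way a naive KEY=value reader does: the quotes survive into the value
while IFS='=' read -r key val; do
  echo "$key -> [$val]"
done < /tmp/demo.env
```

GOOD comes out as hello, but BAD comes out as "hello" (quotes included), which will quietly break database passwords and URLs.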
Save it. Never commit this file to Git! Add .env to your .gitignore.
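For the SECRET_KEY placeholder, generate a real random value rather than inventing one. One quick way, using openssl (already installed on the server):

```shell
# Generate a 64-hex-character secret suitable for SECRET_KEY
SECRET_KEY=$(openssl rand -hex 32)
echo "SECRET_KEY=$SECRET_KEY"
```

Paste the printed line into your .env (use a different value for dev and prod).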
Act IX: Python Environment (Setting Up the Kitchen)
cd /home/tacomaster/apps/backend/dev
# Create a virtual environment (isolated Python bubble)
python3 -m venv venv
# Activate it
source venv/bin/activate
# Your prompt should now show (venv)
# Upgrade pip
pip install --upgrade pip
# Install your app's dependencies
pip install -r requirements.txt
Run Database Migrations (Setting Up Tables)
If you use Alembic:
# Still in venv, in project directory
alembic upgrade head
This creates all your database tables. Magic! ✨
Act X: Systemd Services (The Auto-Start Button)
We want our app to start automatically when the server reboots. Enter systemd - Linux's task manager.
Create /etc/systemd/system/taco-dev.service:
sudo nano /etc/systemd/system/taco-dev.service
Paste this:
[Unit]
Description=Taco Cloud Dev API
After=network.target postgresql.service redis-server.service
[Service]
User=tacomaster
Group=tacomaster
WorkingDirectory=/home/tacomaster/apps/backend/dev
EnvironmentFile=/home/tacomaster/apps/backend/dev/.env
ExecStart=/home/tacomaster/apps/backend/dev/venv/bin/uvicorn main:app --host 127.0.0.1 --port 5000 --log-level info
Restart=always
RestartSec=3
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
What does this mean?
- Runs as the tacomaster user
- Loads environment variables from .env
- Starts uvicorn (Python web server) on port 5000
- Restarts automatically if it crashes
- Only listens on localhost (127.0.0.1) - nginx will handle public access
Now enable and start it:
# Reload systemd (tell it about new service)
sudo systemctl daemon-reload
# Enable (start on boot)
sudo systemctl enable taco-dev
# Start now
sudo systemctl start taco-dev
# Check status
sudo systemctl status taco-dev
You should see "active (running)" in green. If not:
# Check logs
sudo journalctl -u taco-dev -n 50 --no-pager
The logs will tell you what's wrong (missing env var, database connection failed, etc.).
Test It Locally
# From the server
curl http://127.0.0.1:5000/
# You should see your API response (probably JSON)
If you see output, your backend is alive!
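Once this works, it's handy to wrap the check in a small retry loop you can reuse (for example at the end of a deploy). This is a sketch, not part of the original setup:

```shell
# Retry an HTTP health check a few times before declaring failure
check_health() {
  local url=$1 tries=${2:-5}
  for _ in $(seq 1 "$tries"); do
    if curl -fsS "$url" > /dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    sleep 2
  done
  echo "unhealthy"
  return 1
}

# Example: check_health http://127.0.0.1:5000/
```

The non-zero exit code on failure makes it easy to drop into scripts with set -e.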
Act XI: DNS Setup (Giving Your Restaurant an Address)
Time to connect your domain to your server.
Go to your domain registrar (Namecheap, Cloudflare, etc.) and add these A Records:
| Type | Host | Value | TTL |
|---|---|---|---|
| A | api | 123.45.67.89 | Automatic |
| A | dev.api | 123.45.67.89 | Automatic |

Replace 123.45.67.89 with your actual EC2 IP.
This creates:
- api.tacocloud.dev → your production backend
- dev.api.tacocloud.dev → your dev backend
Wait 5-30 minutes for DNS to propagate (spread across the internet).
Test it:
# From your laptop
dig +short api.tacocloud.dev
# Should show your EC2 IP
dig +short dev.api.tacocloud.dev
# Should also show your EC2 IP
Act XII: Nginx (The Traffic Cop)
Nginx sits in front of your app and:
- Handles SSL/HTTPS
- Routes requests to the right backend
- Handles CORS (Cross-Origin requests)
- Serves static files
Create /etc/nginx/conf.d/dev.api.tacocloud.dev.conf:
sudo nano /etc/nginx/conf.d/dev.api.tacocloud.dev.conf
Paste this (we'll add SSL later):
# HTTP server (will redirect to HTTPS after we get certs)
server {
    listen 80;
    server_name dev.api.tacocloud.dev;

    # Let Certbot verify domain ownership
    location ^~ /.well-known/acme-challenge/ {
        root /var/www/html;
        allow all;
    }

    # Proxy everything else to your backend
    location / {
        # Handle OPTIONS (preflight) requests for CORS
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' 'https://dev.tacocloud.dev' always;
            add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, PATCH, DELETE, OPTIONS' always;
            add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type, Accept' always;
            add_header 'Access-Control-Allow-Credentials' 'true' always;
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Content-Length' 0;
            add_header 'Content-Type' 'text/plain';
            return 204;
        }

        # Add CORS headers to normal responses
        add_header 'Access-Control-Allow-Origin' 'https://dev.tacocloud.dev' always;
        add_header 'Access-Control-Allow-Credentials' 'true' always;

        # Proxy to local uvicorn
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 90;
    }
}
Test and reload:
# Test config (checks for syntax errors)
sudo nginx -t
# If OK, reload
sudo systemctl reload nginx
Now test from your laptop:
curl -I http://dev.api.tacocloud.dev/
You should see HTTP/1.1 200 OK or similar. Progress!
Act XIII: SSL Certificates (The Padlock)
Let's Encrypt gives free SSL certificates. Certbot automates it.
# Request cert for both dev and prod domains
sudo certbot --nginx -d api.tacocloud.dev -d dev.api.tacocloud.dev
Certbot will:
- Verify you own the domains (via HTTP challenge)
- Get certificates
- Modify your nginx configs automatically to use HTTPS
- Set up auto-renewal
Answer the prompts:
- Email: your email (for renewal reminders)
- Agree to terms: Yes
- Share email with EFF: Your choice
If successful:
Successfully deployed certificate for api.tacocloud.dev
Successfully deployed certificate for dev.api.tacocloud.dev
Test HTTPS
curl -I https://dev.api.tacocloud.dev/
You should see:
HTTP/2 200
You now have HTTPS! The padlock will show in browsers.
Auto-Renewal
Certbot sets up automatic renewal for you (a systemd timer on Ubuntu). Test it:
sudo certbot renew --dry-run
If no errors, you're golden. Certificates renew every 90 days automatically.
Act XIV: Frontend on Vercel (The Dining Room)
Your backend is ready. Now let's deploy the frontend.
Step 1: Push Frontend to GitHub
# On your laptop, in frontend project directory
git init
git add .
git commit -m "Initial commit"
git branch -M main
git remote add origin git@github.com:yourusername/taco-frontend.git
git push -u origin main
Also create a dev branch:
git checkout -b dev
git push origin dev
Step 2: Import to Vercel
- Go to vercel.com
- Sign up / Log in (use GitHub account)
- Click New Project
- Import your frontend repo
- Vercel auto-detects framework (Next.js, React, etc.)
- Click Deploy
Wait 2 minutes. Your app is live at something-random.vercel.app.
Step 3: Add Custom Domains
In your Vercel project → Settings → Domains:
Add these domains:
- app.tacocloud.dev (production)
- dev.tacocloud.dev (preview/dev)
Vercel will show DNS instructions. Go back to your registrar and add:
| Type | Host | Value | TTL |
|---|---|---|---|
| A | @ | 76.76.21.21 | Automatic |
| A | app | 76.76.21.21 | Automatic |
| CNAME | www | cname.vercel-dns.com | Automatic |
| CNAME | dev | cname.vercel-dns.com | Automatic |
Wait 5-30 minutes for DNS propagation.
In Vercel, under Domains → each domain, you can assign which Git branch it serves:
- app.tacocloud.dev → main branch
- dev.tacocloud.dev → dev branch
Step 4: Environment Variables
In Vercel → Settings → Environment Variables:
Production (main branch):
NEXT_PUBLIC_BACKEND_URL=https://api.tacocloud.dev/api/v1/
NEXT_PUBLIC_PHOTO_URL=https://api.tacocloud.dev/
NEXT_PUBLIC_WS_URL=wss://api.tacocloud.dev/api/v1/ws/
Preview (dev branch):
NEXT_PUBLIC_BACKEND_URL=https://dev.api.tacocloud.dev/api/v1/
NEXT_PUBLIC_PHOTO_URL=https://dev.api.tacocloud.dev/
NEXT_PUBLIC_WS_URL=wss://dev.api.tacocloud.dev/api/v1/ws/
Click Save and Redeploy the project (so it bakes in new env vars).
Act XV: GitHub Actions (The Robot Delivery System)
Let's automate deployments. When you push code, GitHub runs a script that:
- SSHs into your server
- Pulls latest code
- Installs dependencies
- Runs migrations
- Restarts the service
Step 1: Generate Deploy Key
On your laptop:
ssh-keygen -t rsa -b 4096 -C "gha-deploy" -f ~/.ssh/tacocloud-deploy -N ""
This creates:
- ~/.ssh/tacocloud-deploy (private key)
- ~/.ssh/tacocloud-deploy.pub (public key)
Step 2: Add Public Key to Server
# Copy public key content
cat ~/.ssh/tacocloud-deploy.pub
# SSH to the server as ubuntu, then switch to tacomaster
ssh -i ~/.ssh/tacocloud-key.pem ubuntu@123.45.67.89
sudo -u tacomaster -i
# Create .ssh directory if needed
mkdir -p ~/.ssh
chmod 700 ~/.ssh
# Add public key
nano ~/.ssh/authorized_keys
# Paste the public key, save
chmod 600 ~/.ssh/authorized_keys
Step 3: Add Secrets to GitHub
In your GitHub repo → Settings → Secrets and variables → Actions:
Add these secrets:
- SSH_KEY → paste the content of ~/.ssh/tacocloud-deploy (the private key)
- SERVER_HOST → 123.45.67.89
- SERVER_USER → tacomaster
- DEPLOY_METHOD → systemd
Step 4: Create Workflow File
In your backend repo, create .github/workflows/deploy-dev.yml:
name: Deploy to Dev Server

on:
  push:
    branches: [ dev ]
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Setup SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.SSH_KEY }}" > ~/.ssh/deploy_key
          chmod 600 ~/.ssh/deploy_key

      - name: Deploy
        env:
          HOST: ${{ secrets.SERVER_HOST }}
          USER: ${{ secrets.SERVER_USER }}
        run: |
          ssh -i ~/.ssh/deploy_key -o StrictHostKeyChecking=no $USER@$HOST bash -s <<'ENDSSH'
          set -e
          cd ~/apps/backend/dev

          # Pull latest code
          git fetch origin
          git reset --hard origin/dev
          git clean -fd

          # Activate venv and install deps
          source venv/bin/activate
          pip install --upgrade pip
          pip install -r requirements.txt

          # Run migrations (with merge handling)
          if command -v alembic >/dev/null 2>&1; then
            heads=$(alembic heads 2>/dev/null | wc -l)
            if [ "$heads" -gt 1 ]; then
              echo "Merging migration heads..."
              alembic merge heads -m "auto merge for deployment"
            fi
            alembic upgrade head
          fi
          deactivate

          # Restart service (tacomaster needs passwordless sudo for these commands)
          sudo systemctl daemon-reload
          sudo systemctl restart taco-dev
          sudo systemctl status taco-dev --no-pager -l

          echo "Deployment complete!"
          ENDSSH
Commit and push:
git add .github/workflows/deploy-dev.yml
git commit -m "Add CI/CD workflow"
git push origin dev
Go to GitHub → Actions tab. You should see the workflow running!
If it fails, check logs in GitHub Actions. Common issues:
- SSH key mismatch
- Server firewall blocking GitHub IPs
- Missing passwordless sudo for tacomaster (the deploy script runs sudo systemctl; add a NOPASSWD sudoers rule for those commands via visudo)
Act XVI: OAuth Setup (Login Buttons)
If your app uses "Login with LinkedIn" or "Login with GitHub":
LinkedIn
- Go to the LinkedIn Developer Portal
- Create an app
- In the Auth tab, add Redirect URLs:
https://dev.api.tacocloud.dev/api/v1/auth/linkedin/callback
https://api.tacocloud.dev/api/v1/auth/linkedin/callback
http://localhost:5000/api/v1/auth/linkedin/callback
- Copy the Client ID and Client Secret into your .env files
GitHub
- Go to GitHub → Settings → Developer settings → OAuth Apps
- Create OAuth App
- Add Callback URLs:
https://dev.api.tacocloud.dev/api/v1/auth/github/callback
https://api.tacocloud.dev/api/v1/auth/github/callback
http://localhost:5000/api/v1/auth/github/callback
- Copy the Client ID and Secret into your .env
Critical: The redirect URI in your .env MUST match exactly what's registered in LinkedIn/GitHub. Even a trailing slash matters!
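"Match exactly" means strict string equality, not "close enough". A contrived comparison (the URIs are this guide's example values):

```shell
# OAuth providers compare redirect URIs byte for byte
registered="https://dev.api.tacocloud.dev/api/v1/auth/github/callback"
configured="https://dev.api.tacocloud.dev/api/v1/auth/github/callback/"  # note the trailing slash

if [ "$registered" = "$configured" ]; then
  echo "match"
else
  echo "mismatch: expect a redirect_uri_mismatch error"
fi
```

One extra slash and every login attempt fails, so copy-paste the URI between the provider's settings page and your .env rather than retyping it.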
Act XVII: Testing Everything
Backend Health Check
# From your laptop
curl https://dev.api.tacocloud.dev/
curl https://dev.api.tacocloud.dev/api/v1/probe
# Check specific endpoint
curl https://dev.api.tacocloud.dev/api/v1/users
Frontend → Backend Integration
- Open https://dev.tacocloud.dev in a browser
- Open browser DevTools (F12) → Network tab
- Try login or any API call
- Check the Network tab - requests should go to dev.api.tacocloud.dev
- Check the response status (should be 200, not 404 or 500)
Check Logs
On server:
# Service logs
sudo journalctl -u taco-dev -f
# Nginx access logs
sudo tail -f /var/log/nginx/access.log
# Nginx error logs
sudo tail -f /var/log/nginx/error.log
Troubleshooting Common Issues
"502 Bad Gateway"
Cause: Nginx can't reach your backend.
Fix:
# Check if service is running
sudo systemctl status taco-dev
# Check if port is listening
sudo ss -lntp | grep 5000
# Check logs
sudo journalctl -u taco-dev -n 50
"Connection Refused" or "Timeout"
Cause: Firewall blocking requests.
Fix:
# Check nginx is running
sudo systemctl status nginx
# Check AWS Security Group (in AWS console)
# Ensure ports 80 and 443 are open to 0.0.0.0/0
"redirect_uri_mismatch" (OAuth)
Cause: Redirect URI in your .env
doesn't match what's in LinkedIn/GitHub app.
Fix: Double-check they match EXACTLY (including https://
, path, trailing slash).
"Permission denied for schema public" (Database)
Cause: Database user doesn't have permissions.
Fix:
sudo -u postgres psql -d taco_dev_db
ALTER SCHEMA public OWNER TO taco_dev_user;
GRANT ALL ON SCHEMA public TO taco_dev_user;
\q
Frontend Shows "Network Error"
Cause: CORS not configured or wrong API URL.
Fix:
- Check Vercel env vars are correct
- Redeploy frontend in Vercel
- Check nginx CORS headers (from Act XII)
GitHub Actions Fails at SSH
Cause: SSH key not set up correctly or server blocking GitHub IPs.
Fix:
- Verify the SSH_KEY secret contains the full private key (including the -----BEGIN and -----END lines)
- Test SSH manually from your laptop: ssh -i ~/.ssh/tacocloud-deploy tacomaster@123.45.67.89
- Check the AWS Security Group allows SSH from GitHub Actions IPs, or from 0.0.0.0/0 (temporarily)
Act XVIII: Best Practices & Security
1. Never Commit Secrets
Always add these to .gitignore:
.env
.env.*
*.pem
*.key
2. Use Strong Passwords
# Generate random password
openssl rand -base64 32
3. Restrict SSH Access
In AWS Security Group:
- SSH (Port 22) → Your IP only (or use VPN/Bastion)
4. Keep System Updated
# Run monthly
sudo apt update && sudo apt upgrade -y
sudo reboot # if kernel updated
5. Enable Fail2Ban (Prevents Brute Force)
sudo apt install -y fail2ban
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
6. Backup Database
# Manual backup
pg_dump -h localhost -U taco_prod_user taco_prod_db > backup.sql
# Automate with cron (daily backup)
crontab -e
# Add: 0 2 * * * pg_dump -h localhost -U taco_prod_user taco_prod_db > ~/backups/db-$(date +\%F).sql
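Backups pile up, so pair the cron job with a cleanup. A sketch using find (the ~/backups path matches the cron line above; 14 days is an arbitrary retention choice):

```shell
# Delete database dumps older than 14 days
mkdir -p ~/backups
find ~/backups -name 'db-*.sql' -mtime +14 -delete
```

You could add it as a second cron entry, e.g. 0 3 * * * find ~/backups -name 'db-*.sql' -mtime +14 -delete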
7. Monitor Logs
Set up log monitoring (Sentry, LogRocket, or simple alerts):
# Email yourself if service fails
sudo apt install -y mailutils
# Configure systemd to email on failure
Epilogue: You Did It!
If you made it this far and everything works, CONGRATULATIONS!
You now have:
- ✅ A production server running on AWS
- ✅ Custom domain with SSL
- ✅ Database properly configured
- ✅ Automated deployments with GitHub Actions
- ✅ Frontend hosted on Vercel
- ✅ Dev and production environments
This is not beginner stuff. This is real DevOps. Companies pay people good money to do exactly what you just did.
What's Next?
- Add monitoring (Sentry, DataDog, New Relic)
- Set up staging environment
- Add automated tests to CI/CD
- Learn Docker (containerization)
- Learn Kubernetes (if you hate yourself)
- Consider managed databases (RDS, Supabase) for production
Final Thoughts
DevOps is hard. It's supposed to be hard. Every error you hit, every hour you spent debugging - that's you learning. Nobody is born knowing what "502 Bad Gateway" means. We all Google stuff.