It was 2 AM on a Thursday. I was 17, working on a freelance project for a small e-commerce company. They had about 500 active customers. I was supposed to push a minor CSS fix to staging.
Instead, I ran git push --force to the wrong branch.
Then I ran the deployment script that was hooked to that branch.
Then I watched in horror as the migration script -- one I had been testing locally with destructive resets -- ran against the production database.
1,247 customer records. Gone.
This is the story of the worst night of my coding life and everything I learned from it.
How It Happened
Let me set the scene. I was working on two branches:
- feature/checkout-redesign -- a big feature branch with database migrations
- hotfix/button-color -- a tiny CSS fix
I had been doing git rebase on the feature branch to keep it clean. The rebase required a force push. Normal stuff.
But I had the wrong branch checked out.
Here's what I typed:
git push --force origin main
I meant to type:
git push --force origin feature/checkout-redesign
The moment I hit Enter, I knew something was wrong. The output was different. It was pushing way more commits than expected.
But it was already done.
The Cascade of Disaster
The company had a simple CI/CD pipeline: any push to main triggered an automatic deployment. The deployment script ran database migrations.
My feature branch had a migration that looked like this:
-- This was meant for LOCAL TESTING ONLY
DROP TABLE IF EXISTS customers;
DROP TABLE IF EXISTS orders;
CREATE TABLE customers (
id SERIAL PRIMARY KEY,
-- new schema...
);
Yes. DROP TABLE IF EXISTS. In a migration file. That got pushed to main. That ran automatically in production.
The tables were dropped and recreated empty. All data gone.
The Next 6 Hours
2:03 AM -- I realized what happened. My hands were shaking. Literally. I stared at the terminal output for about 30 seconds hoping I was reading it wrong.
2:05 AM -- I checked the production database. Empty tables. I felt sick.
2:10 AM -- I called the client. He didn't answer (it was 2 AM, obviously). I sent him a message: "There's been a database incident. I'm working on it. Will update you soon."
2:15 AM -- I started looking for backups. The company was using a cheap shared hosting plan. There was a cPanel backup from... 3 weeks ago. Restoring it would still lose three weeks of customer data, orders, and transactions.
2:20 AM -- I frantically googled "recover dropped postgres table." Spoiler: you can't. Not easily, anyway.
2:45 AM -- I found that the hosting provider kept daily snapshots, but only for the paid backup plan. Which the client hadn't purchased.
3:00 AM -- I remembered that the app sent email confirmations for every order. Every single one. I could reconstruct the orders from the email records.
3:00 AM to 7:30 AM -- I wrote a script to parse order confirmation emails from the company's Gmail account (the client had given me access for a different task) and reconstruct the database.
7:30 AM -- The client called me back. I explained everything honestly. He was upset but appreciated that I was already fixing it.
8:00 AM -- I had recovered about 95% of the data. Some customer phone numbers and addresses were lost because they weren't in the confirmation emails.
What I Learned
1. Never, Ever Force Push to Main
Just don't. Set up branch protection rules so it's literally impossible.
# GitHub CLI -- protect main branch. The branch protection endpoint
# requires all four top-level fields, so send the settings as JSON.
gh api -X PUT repos/{owner}/{repo}/branches/main/protection \
  --input - <<'EOF'
{
  "required_status_checks": null,
  "enforce_admins": true,
  "required_pull_request_reviews": { "required_approving_review_count": 1 },
  "restrictions": null,
  "allow_force_pushes": false
}
EOF
If you need to force push for rebase, only do it on feature branches. And even then, double-check which branch you're on:
# Always check before force pushing
git branch --show-current
# See what you're about to push
git push --force-with-lease origin feature/my-branch --dry-run
Use --force-with-lease instead of --force. It will refuse to push if someone else has pushed to the branch since you last pulled, which prevents overwriting their work.
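Branch protection lives on the server; for a local belt-and-braces guard, a client-side pre-push hook can refuse direct pushes to main outright. Here's a minimal sketch (the hook path and the stdin format are standard Git behavior; the function name is mine):

```shell
#!/bin/sh
# Sketch of .git/hooks/pre-push: refuse any direct push to main/master.
# Git feeds the pushed refs on stdin, one per line:
#   <local ref> <local sha> <remote ref> <remote sha>
refuse_protected() {
  while read -r local_ref local_sha remote_ref remote_sha; do
    case "$remote_ref" in
      refs/heads/main|refs/heads/master)
        echo "BLOCKED: direct push to $remote_ref" >&2
        return 1 ;;
    esac
  done
  return 0
}

# Only enforce when actually running as the hook
if [ "${0##*/}" = "pre-push" ]; then
  refuse_protected || exit 1
fi
```

Save it as .git/hooks/pre-push and `chmod +x` it. Unlike server-side protection, this travels with your clone, so it works even on repos where you don't control the settings.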
2. Destructive Migrations Should Never Exist in Your Codebase
That DROP TABLE migration was meant for local testing. It should never have been committed in the first place.
Better approach:
-- Migration: add new columns to customers
ALTER TABLE customers ADD COLUMN phone VARCHAR(20);
ALTER TABLE customers ADD COLUMN updated_at TIMESTAMP DEFAULT NOW();
-- If you need to restructure, create new table and migrate data
CREATE TABLE customers_v2 (
id SERIAL PRIMARY KEY,
-- new schema
);
INSERT INTO customers_v2 SELECT * FROM customers; -- list columns explicitly if the schemas differ
-- Only after verifying data integrity:
ALTER TABLE customers RENAME TO customers_old;
ALTER TABLE customers_v2 RENAME TO customers;
-- Keep the old table for 30 days, then drop it
Never drop. Always migrate incrementally.
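"Verifying data integrity" deserves to be a script, not a vibe. A hedged shell sketch of the pre-swap check (database and table names follow the migration above; psql access and the `RUN_CHECK` guard are assumptions so the snippet stays safe to source):

```shell
#!/bin/sh
# Sketch: compare row counts before swapping customers_v2 into place.
counts_match() {
  # both counts must be non-empty and equal
  [ -n "$1" ] && [ "$1" = "$2" ]
}

if [ "${RUN_CHECK:-0}" = "1" ]; then
  OLD=$(psql -tAc "SELECT COUNT(*) FROM customers" production_db)
  NEW=$(psql -tAc "SELECT COUNT(*) FROM customers_v2" production_db)
  if ! counts_match "$OLD" "$NEW"; then
    echo "Row counts differ ($OLD vs $NEW) -- aborting swap" >&2
    exit 1
  fi
  psql production_db -c "ALTER TABLE customers RENAME TO customers_old;"
  psql production_db -c "ALTER TABLE customers_v2 RENAME TO customers;"
fi
```

Row counts are the bare minimum; checksumming a few key columns is even better.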
3. Automated Backups Are Not Optional
The client was saving $5/month by not having automated backups. That decision cost them days of stress and almost lost them real money.
After this incident, I set up:
#!/bin/bash
# Daily database backup script
DATE=$(date +%Y-%m-%d_%H-%M)
BACKUP_DIR="/backups/postgres"
mkdir -p "$BACKUP_DIR"
pg_dump production_db > "$BACKUP_DIR/backup_$DATE.sql"
# Keep last 30 days
find "$BACKUP_DIR" -name "*.sql" -mtime +30 -delete
# Upload to S3
aws s3 cp "$BACKUP_DIR/backup_$DATE.sql" s3://my-backups/postgres/
Minimum backup strategy for any production database:
- Daily automated backups
- Stored in a separate location (not the same server)
- Tested monthly (a backup you've never restored from isn't a backup)
- At least 30-day retention
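The "tested monthly" bullet is the one everyone skips, so here's roughly what a restore drill can look like (backup naming follows the script above; the scratch database name and the `RUN_RESTORE` guard are illustrative):

```shell
#!/bin/sh
# Sketch of a monthly restore drill against a throwaway database.
# latest_backup picks the newest dump in a directory (pure shell).
latest_backup() {
  ls -1t "$1"/backup_*.sql 2>/dev/null | head -n 1
}

if [ "${RUN_RESTORE:-0}" = "1" ]; then
  DUMP=$(latest_backup /backups/postgres)
  [ -n "$DUMP" ] || { echo "No backups found" >&2; exit 1; }
  createdb restore_check
  psql restore_check < "$DUMP"
  # Spot-check that real data came back, not just empty tables
  psql -tAc "SELECT COUNT(*) FROM customers" restore_check
  dropdb restore_check
fi
```

If that count comes back as zero, you want to find out during a drill, not at 2 AM.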
4. CI/CD Should Have Safeguards
The deployment was fully automatic with zero checks. Here's what the pipeline should have looked like:
# .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check for destructive migrations
        run: |
          if grep -r "DROP TABLE" migrations/; then
            echo "DESTRUCTIVE MIGRATION DETECTED"
            exit 1
          fi
      - name: Run tests
        run: npm test
      - name: Deploy to staging first
        run: ./deploy.sh staging
      - name: Run smoke tests on staging
        run: npm run test:smoke -- --env=staging
      - name: Manual approval for production
        uses: trstringer/manual-approval@v1
        with:
          secret: ${{ secrets.GITHUB_TOKEN }}
          approvers: team-lead
      - name: Deploy to production
        run: ./deploy.sh production
The key additions:
- Scan for destructive SQL commands
- Always deploy to staging first
- Run smoke tests before production
- Require manual approval for production deploys
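The destructive-SQL scan is also worth running locally, before anything reaches CI. A small shell version (the pattern list is a starting point, not exhaustive):

```shell
#!/bin/sh
# Scan a migrations directory for destructive SQL before committing.
# Prints offending files and returns non-zero if any are found.
scan_migrations() {
  if grep -rEil "DROP TABLE|DROP DATABASE|TRUNCATE" "$1" 2>/dev/null; then
    echo "DESTRUCTIVE MIGRATION DETECTED" >&2
    return 1
  fi
  return 0
}
```

Usage: `scan_migrations migrations/ || exit 1` -- drop it in a pre-commit hook and the DROP TABLE never even gets committed.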
5. Use Git Aliases to Prevent Mistakes
I now have these in my .gitconfig:
[alias]
    # Force push only to current branch, with lease
    fpush = push --force-with-lease origin HEAD
    # Show current branch before any push (printf/read instead of the
    # bash-only `read -p`, since git runs aliases with /bin/sh)
    safepush = "!f() { echo 'Pushing to:' $(git branch --show-current); printf 'Continue? (y/n) '; read confirm; [ \"$confirm\" = \"y\" ] && git push \"$@\"; }; f"
    # Guarded push that refuses --force to main/master. Note: git silently
    # ignores aliases that shadow built-in commands, so this can't be
    # named `push` -- give it its own name and build the habit.
    gpush = "!f() { case \"$*\" in *force*main*|*force*master*) echo 'BLOCKED: Cannot force push to main/master'; exit 1;; esac; git push \"$@\"; }; f"
6. Always Be Honest When You Mess Up
When I called the client, I didn't make excuses. I said: "I made a mistake that caused data loss. Here's exactly what happened. Here's what I'm doing to fix it. Here's what I'll do to make sure it never happens again."
He could have fired me. He didn't. He said he respected that I owned up to it immediately and was already working on a fix at 2 AM.
That relationship actually got stronger after the incident. He hired me for three more projects.
The Silver Lining
That night was terrible. I barely slept, I felt awful, and I learned what real production fear feels like.
But it made me a 10x better developer. Not in the "I write code faster" way. In the "I think about what could go wrong" way.
Every time I write a database migration now, I think about that night. Every time I set up a deployment pipeline, I add safeguards. Every time I work with a new client, the first thing I ask is "what's your backup strategy?"
If you haven't had your "production disaster" moment yet, it's coming. The question is whether you'll have the safeguards in place to make it a 20-minute inconvenience instead of a 6-hour nightmare.
Quick Checklist Before You Push
- [ ] Am I on the right branch?
- [ ] Have I reviewed what I'm about to push? (git diff origin/main)
- [ ] Are there any destructive operations in my changes?
- [ ] Is there a backup I can restore from if this goes wrong?
- [ ] Am I force pushing? If yes, why? Is there a safer way?
Save this list somewhere. Seriously. Future you will be glad you did.
If you found this useful, I share more stuff like this on Telegram and sell developer toolkits on Boosty.