Why Our "Perfect" MongoDB Setup Failed in Production (And How We Fixed It)

Ziaul Hoque

Three weeks before our product launch, everything seemed perfect. Our app was running smoothly in development, Prisma was handling our database transactions beautifully, and we were confident about going live.

Then we deployed to production.

"Transaction failed: not in a replica set" – six words that turned our launch week into a debugging marathon. Sound familiar?

If you've ever attempted to run MongoDB transactions in production, you're familiar with this pain. What works perfectly in development suddenly breaks when you need it most. Today, I'm sharing the production-ready MongoDB setup that finally solved this puzzle for us.

The Problem That Blindsided Us

Here's what happened: Our development environment used a simple MongoDB instance. But when we tried to use Prisma transactions in production, everything crashed. The issue? MongoDB requires a replica set for transactions to function, even in a single-node setup.

This requirement is easy to miss in the documentation, and it caught our entire team off guard. We weren't alone – I've since learned it trips up countless development teams during their first production deployment.

The Solution That Actually Works
After three days of research, testing, and some very late nights, we built a bulletproof MongoDB 8.0 setup that handles transactions, provides proper security, and persists data reliably. Here's exactly how we did it:

Step 1: Getting Organized (The Foundation)
First, we created a clean project structure with secure credential management:

mkdir -p ~/mongodb_prod/mongo-key
cd ~/mongodb_prod 

Then we set up environment variables for our sensitive data:

nano .env

In the .env file:

# .env
MONGO_ROOT_USER=your_admin_user
MONGO_ROOT_PASS=your_super_strong_and_secret_password

Pro tip: Never hardcode these credentials in your Docker files. I learned this the hard way when I accidentally committed passwords to our repo.
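A cheap guard against that mistake is to make sure the secrets can never reach git in the first place. A minimal sketch, assuming the `~/mongodb_prod` layout from Step 1:

```shell
# Keep the .env file and the keyFile directory out of version control.
# Paths assume the project layout created in Step 1.
printf '%s\n' '.env' 'mongo-key/' >> .gitignore
```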

Step 2: The Security KeyFile (Critical for Production)
This step stumped us initially. Replica sets need a security keyFile for internal communication, even in single-node setups:

# Generate the security key
openssl rand -base64 756 > ./mongo-key/mongodb.key

# Set proper permissions (this is crucial!)
chmod 400 ./mongo-key/mongodb.key
sudo chown 999:999 ./mongo-key/mongodb.key

Why user ID 999? That's the MongoDB user inside the Docker container. Getting this wrong will cause authentication failures that are incredibly frustrating to debug.
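Before starting the container, it's worth confirming the file mode is actually what mongod expects – it refuses any keyFile that is readable by group or other. A self-contained sketch of that check (GNU `stat`; the `chown 999:999` step above still applies and is omitted here only because it needs root):

```shell
# Generate a key and confirm it meets mongod's permission requirements.
mkdir -p ./mongo-key
openssl rand -base64 756 > ./mongo-key/mongodb.key
chmod 400 ./mongo-key/mongodb.key

# mongod rejects keyFiles readable by group/other, so the mode must be 400 (or 600).
mode=$(stat -c '%a' ./mongo-key/mongodb.key)
echo "keyFile mode: $mode"
```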

Step 3: The Docker Compose Configuration That Works
After multiple failed attempts, here's the configuration that finally worked:

services:
  mongodb:
    image: mongo:8.0
    container_name: mongodb_prod
    hostname: mongodb_prod
    restart: always
    ports:
      - "27017:27017"
    environment:
      # Reads credentials from the .env file
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_ROOT_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ROOT_PASS}
    # The magic command that enables the replica set (note: command belongs at
    # the service level, not nested under environment)
    command: ["--replSet", "rs0", "--keyFile", "/etc/secrets/mongodb.key"]
    volumes:
      # Persistent database data
      - mongo-data:/data/db
      # Persistent config data
      - mongo-config:/data/configdb
      # Security keyFile mount
      - ./mongo-key/mongodb.key:/etc/secrets/mongodb.key:ro

volumes:
  mongo-data:
    driver: local
  mongo-config:
    driver: local

Key insights:

The --replSet rs0 parameter is what enables transactions

The hostname must match what you use in the replica set configuration

Named volumes ensure your data survives container restarts
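One optional addition: if you want `docker compose ps` to report when MongoDB is actually ready, a healthcheck along these lines can be added to the `mongodb` service (a sketch – the intervals are arbitrary; `db.adminCommand('ping')` works without authentication):

    healthcheck:
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 20s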

Step 4: The One-Time Initialization
This is where many tutorials fail you. After starting the container, you need to initialize the replica set:

# Start the container
docker compose up -d

# Wait 20-30 seconds for initialization
sleep 30

# Initialize the replica set (replace with your actual credentials)
docker exec mongodb_prod mongosh \
  --username your_admin_user \
  --password your_super_strong_and_secret_password \
  --authenticationDatabase admin \
  --eval "rs.initiate({ _id: 'rs0', members: [{ _id: 0, host: 'mongodb_prod:27017' }]})"

Look for "ok": 1 in the output – that's your confirmation that everything worked.

Step 5: The Connection String That Changes Everything
Here's the connection string format that finally made our Prisma transactions work:

DATABASE_URL="mongodb://your_admin_user:your_super_strong_password@mongodb_prod:27017/your_db_name?authSource=admin&replicaSet=rs0"

The crucial parts:

  • authSource=admin – tells MongoDB where to authenticate
  • replicaSet=rs0 – enables transaction support
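Putting it together, the string can be built from the same values used in the `.env` file (`your_db_name` stays a placeholder; if your real password contains characters like `@`, `:`, or `/`, it must be percent-encoded in a MongoDB URI):

```shell
# Compose DATABASE_URL from the credentials defined earlier in .env.
MONGO_ROOT_USER=your_admin_user
MONGO_ROOT_PASS=your_super_strong_and_secret_password
DATABASE_URL="mongodb://${MONGO_ROOT_USER}:${MONGO_ROOT_PASS}@mongodb_prod:27017/your_db_name?authSource=admin&replicaSet=rs0"
echo "$DATABASE_URL"
```

Also note the hostname: `mongodb_prod` only resolves from containers on the same Docker network; an app connecting from the host machine would use `localhost:27017` instead.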

Lessons Learned the Hard Way

  1. Test your production setup early: Don't wait until launch week to discover replica set requirements
  2. Document your deployment process: Future you (and your team) will thank you
  3. Monitor your setup: Use docker compose logs -f mongodb_prod to catch issues early
  4. Backup from day one: Even with persistent volumes, regular backups saved us during a server migration

What's Next?

This setup forms the foundation of our current infrastructure. We've since added monitoring with Prometheus, automated backups (remember my previous article?), and performance optimization. But this core configuration remains unchanged – it just works.

Have you faced similar MongoDB production challenges? What setup works best for your team? I'd love to hear about your experiences and any optimizations you've discovered.

If this saved you from deployment headaches, give it a 👍 and share it with other developers who might be struggling with MongoDB transactions in production.
