Ivan Annovazzi
How we replaced .env files across 5 microservices without touching the app code

The .env file tax is real. Every time we onboard a new developer, someone has to share credentials over Slack. Every time we add a service, there's another .env.example to maintain. By our fifth microservice, we had a mess.

This is the story of how we moved all five services to a central secrets manager — without touching a single line of app code.

The Problem We Had

Our stack looked something like this:

  • api-gateway — Node.js, reads 12 env vars
  • auth-service — Node.js, reads 8 env vars
  • billing-service — Node.js, reads 6 env vars
  • notification-service — Python, reads 5 env vars
  • analytics-worker — Go, reads 4 env vars

Each service had its own .env.example. New developer? "Hey, ask someone for the values." Production values? "Check the secret Notion page." Rotation? "Good luck, touch every service manually."

The breaking point came when we rotated a database password and missed one service. Three hours of debugging a production incident traced back to a stale .env file.

Why Not Just Vault?

HashiCorp Vault is the "right" answer at scale. But setting up Vault for a 5-person team means:

  • Running and maintaining a Vault cluster
  • Setting up auth methods, policies, lease management
  • Building tooling to inject secrets at runtime

We wanted something that felt more like dotenv but with team access controls, encryption, and auditability.

The Approach: Treat Secrets as Ephemeral Artifacts

The key mental shift is this: a .env file is not configuration, it's a generated artifact.

Instead of storing .env files:

  1. Secrets live in an encrypted store (we used keyenv.dev)
  2. Each developer runs keyenv pull to generate a fresh .env for their environment
  3. CI/CD pipelines do the same in their build step
  4. The generated .env is ephemeral — never committed, never shared

What We Changed (Per Service)

Here's the thing: we changed nothing about how the services consume secrets.

All five services already read from environment variables via process.env, os.environ, or os.Getenv. They continued to do that. We just changed how the .env got there.
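This is why no app code had to change. The Node.js services already load configuration along these lines — a minimal sketch, where the variable names and the `loadConfig` helper are illustrative, not our actual code:

```javascript
// Unchanged application pattern: configuration still comes from env vars.
// Whether the values arrived via a hand-copied .env or `keyenv pull`
// is invisible at this layer.
function loadConfig(env = process.env) {
  const databaseUrl = env.DATABASE_URL;
  if (!databaseUrl) {
    // Fail fast at startup instead of deep inside a request handler.
    throw new Error("DATABASE_URL is not set — did you run `keyenv pull`?");
  }
  return {
    databaseUrl,
    stripeKey: env.STRIPE_SECRET_KEY,
    port: Number(env.PORT ?? 3000),
  };
}
```

The fail-fast check matters: with generated .env files, a missing variable means the pull step was skipped, and you want that surfaced immediately at boot.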

Before:

# Copy from teammate over Slack
cp .env.example .env
# Edit manually

After:

keyenv pull --env development
# .env generated from encrypted store

For production deploys, we added one line to our CI pipeline:

keyenv pull --env production
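In a GitHub Actions pipeline that step looks roughly like this — the step name and the `KEYENV_TOKEN` variable are assumptions on my part, so check your secrets manager's docs for its actual CI auth mechanism:

```yaml
# Hypothetical GitHub Actions step; adapt to your CI system.
- name: Generate .env from the encrypted store
  run: keyenv pull --env production
  env:
    KEYENV_TOKEN: ${{ secrets.KEYENV_TOKEN }}   # assumed auth token name
```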

That's it. No SDK integration. No sidecar containers. No code changes.

Per-Environment Scoping

The feature that made this work cleanly is environment inheritance. We defined secrets at three levels:

  • Shared — keys used across all environments (service names, feature flags)
  • Per-environment — DATABASE_URL, STRIPE_SECRET_KEY differ per env
  • Per-service — only relevant to one service

When you run keyenv pull --env staging, you get the merged result. Each developer gets the right values for their environment without needing to know the production secrets.
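The merge semantics we rely on can be sketched as a simple precedence chain — later layers override earlier ones. This is a hypothetical illustration of the layering, not keyenv's actual implementation:

```javascript
// Sketch of environment inheritance: shared < per-environment < per-service.
// Later layers win on key collisions, matching how we expect `pull` to merge.
function mergeSecrets(shared, perEnvironment, perService) {
  return { ...shared, ...perEnvironment, ...perService };
}

// Illustrative values only — placeholder URLs and keys.
const merged = mergeSecrets(
  { SERVICE_NAME: "billing-service", FEATURE_FLAG_X: "true" }, // shared layer
  { DATABASE_URL: "postgres://staging-db/billing" },           // staging layer
  { STRIPE_SECRET_KEY: "sk_test_placeholder" }                 // billing-only
);
```

The useful property is that a developer pulling `staging` never sees the `production` layer at all — scoping happens at merge time, not in the app.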

Rotation in One Place

When we had to rotate our Stripe key (a quarterly security practice), the old workflow was:

  1. Generate new key in Stripe dashboard
  2. Update 3 services that use it
  3. Hope nothing breaks in production
  4. Realize you forgot the analytics worker
  5. Fix the analytics worker

The new workflow:

  1. Generate new key in Stripe dashboard
  2. keyenv set STRIPE_SECRET_KEY <new_value> --env production
  3. Deploy (each service pulls fresh on next start)

Done in 90 seconds. Full audit trail showing who changed what and when.

What We Gained

  • Zero secrets in Slack — new developers get access via team invite, not copy-paste
  • No stale .env files — pull always reflects current state
  • Rotation without touching services — update the store, redeploy
  • Audit trail for compliance — who accessed what and when

What This Doesn't Solve

To be clear about the limits:

  • Runtime secret injection — if you need secrets to change mid-run without restarting, you'll want something with dynamic leases (Vault)
  • Infrastructure secrets — Terraform state, cloud provider credentials are better handled by the cloud provider's native tools
  • Very large teams — at 50+ developers, the enterprise features of Doppler or Infisical start to matter

For a 5-person team with 5 microservices in a multi-environment setup, this approach eliminated our entire class of "wrong credentials" incidents.

Takeaway

The pattern that eliminated our credential chaos:

  1. Secrets belong in an encrypted store, not in files
  2. .env files are generated artifacts, not source-controlled config
  3. Every service gets its secrets the same way: pull at startup
  4. Apps don't need to know any of this — they still read env vars

If you're still copying .env files around, try treating them as generated artifacts for a week. The operational simplification is immediate.
