DEV Community

Gabriel Anhaia

What Happens When You Leave a .env File in a Public Repo (47 Minutes of Chaos)


In multiple documented experiments, security researchers have pushed real cloud credentials to public GitHub repositories as honeypots, specifically to measure how fast attackers move. The results are consistent and grim: bots find exposed credentials within minutes. Not hours. Minutes.

The fastest documented discovery was under 60 seconds. The typical range sits between 1 and 5 minutes. By the 30-minute mark, the compromised credentials have been tested for permissions, and whatever the attacker can spin up is already running.

"Don't commit secrets to git" is advice every developer has heard. What most haven't seen is the documented timeline of what happens when that advice gets ignored. It's faster and more automated than most people expect.

The Timeline

These numbers come from published honeypot experiments by GitGuardian, Truffle Security, and independent researchers who've documented the process end to end:

| Time after push | What happens |
| --- | --- |
| 0:00 | git push with .env containing AWS credentials |
| 0:01 – 0:05 | Automated scanners detect the credential patterns |
| 0:05 – 0:10 | Bots validate credentials via sts:GetCallerIdentity |
| 0:10 – 0:15 | Permissions enumeration: what can these keys access? |
| 0:15 – 0:25 | Resource creation starts (EC2 instances, usually GPU-optimized) |
| 0:25 – 0:47 | Full cryptomining operation running across multiple regions |
| ~4:00:00 | AWS billing alert fires (if you configured one; many people haven't) |

Under an hour from push to active exploitation. The bill from a single weekend of unnoticed cryptomining can hit $5,000 to $20,000 easily, depending on instance types and how many regions the attacker uses.
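To see how a weekend bill lands in that range, here's a back-of-the-envelope calculation. All the numbers below are illustrative assumptions (the hourly rate is roughly the on-demand price of a large GPU instance), not figures from any specific incident:

```python
# Rough cost model for unnoticed cryptomining on stolen AWS keys.
# All values are assumptions for illustration; actual prices vary
# by instance type and region.
HOURLY_RATE_USD = 24.48   # assumed on-demand rate for one GPU instance
INSTANCES_PER_REGION = 4  # attackers launch as many as quotas allow
REGIONS = 2               # spread across regions to evade notice
WEEKEND_HOURS = 48        # Friday evening to Sunday evening

total = HOURLY_RATE_USD * INSTANCES_PER_REGION * REGIONS * WEEKEND_HOURS
print(f"Estimated weekend bill: ${total:,.0f}")  # → Estimated weekend bill: $9,400
```

Scale any one of those knobs up and the estimate climbs past $20,000 just as easily.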

How the Bots Work

GitHub's Events API is public. Every push to every public repository is visible in real time. Bots poll this API continuously, scanning commit diffs for strings that match known credential formats:

# Simplified patterns bots scan for
AKIA[0-9A-Z]{16}                              # AWS Access Key ID
ghp_[A-Za-z0-9]{36}                           # GitHub Personal Access Token
sk_live_[A-Za-z0-9]{24,}                      # Stripe Secret Key
SG\.[A-Za-z0-9_-]{22}\.[A-Za-z0-9_-]{43}     # SendGrid API Key
xoxb-[0-9]{11,13}-[0-9]{11,13}-[a-zA-Z0-9]{24}  # Slack Bot Token

The scanning is fast because credential formats are distinctive. An AWS access key always starts with AKIA. A Stripe secret key always starts with sk_live_. These prefixes make pattern matching trivial at scale.
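A minimal sketch of that matching step, using a few of the same patterns from the block above. The function and dictionary names here are hypothetical, not taken from any real scanner:

```python
import re

# Distinctive prefixes make these patterns cheap to match at scale.
CREDENTIAL_PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "stripe_secret_key": re.compile(r"sk_live_[A-Za-z0-9]{24,}"),
}

def scan_diff(text: str) -> list[tuple[str, str]]:
    """Return (credential_type, matched_string) pairs found in a commit diff."""
    hits = []
    for name, pattern in CREDENTIAL_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group()))
    return hits

diff = "+AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE\n+DEBUG=true"
print(scan_diff(diff))  # [('aws_access_key_id', 'AKIAIOSFODNN7EXAMPLE')]
```

That's the whole trick: a handful of anchored regexes run over every diff the Events API surfaces.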

Some bots watch GitHub directly. Others scrape Google's cache, Wayback Machine snapshots, and paste sites. Even deleting the commit within 5 minutes leaves a window large enough for automated systems to capture the credentials.

GitHub does run secret scanning and will notify both the developer and the credential provider when it detects a leak. GitHub's scanning is a real safety net. But the notification takes time: email delivery, human action, credential rotation. The bots are consistently faster than the alert email.

What Attackers Do With Each Credential Type

AWS Access Keys are the jackpot. Bots call sts:GetCallerIdentity to verify the key, then enumerate IAM permissions. If the key has EC2 access, they launch instances for cryptomining (usually Monero). If it has S3 access, they download everything. If it has IAM access, they create new access keys to maintain persistence even after you rotate the original.

Database Connection Strings like postgresql://user:pass@host:5432/db give direct database access. Attackers dump the data, and increasingly they also delete or encrypt it and leave a ransom note demanding payment for its return.
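Everything an attacker needs is packed into the string itself. A quick sketch of pulling one apart with Python's standard library (the values below are invented, not real credentials):

```python
from urllib.parse import urlsplit

# A leaked connection string is self-describing: host, port, database,
# and credentials in one line. (Invented example values.)
conn = "postgresql://app_user:hunter2@db.example.com:5432/production"

parts = urlsplit(conn)
print(parts.username, parts.password)  # app_user hunter2
print(parts.hostname, parts.port)      # db.example.com 5432
print(parts.path.lstrip("/"))          # production
```

No enumeration step is needed; the string is both the discovery and the key.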

Stripe / Payment Keys let attackers read customer data, make test charges, and potentially issue refunds. Stripe's fraud detection will eventually flag anomalous activity, but the window is real.

API Keys for email services (SendGrid, Mailgun) get used for phishing campaigns. Your domain sends 100k phishing emails, your domain reputation gets destroyed, and legitimate emails start hitting spam folders. Recovering sender reputation takes weeks.

"I Deleted the File, Though"

This trips up most developers. You committed .env, realized the mistake, ran git rm .env, committed again, pushed. Fixed?

No. The file is gone from the current tree but it's still in git history. Every previous commit that included the file still contains it in full. Anyone who clones the repo (or has already cloned it) can see everything:

# What an attacker runs against your repo
git log --all --full-history -- "*.env"

# Or search every commit ever made for credential patterns
git log -p --all -S 'AKIA'

Even force-pushing to rewrite history doesn't help completely. GitHub caches commit data. Forks may have captured the original. Bots that scanned it in the first 90 seconds already have a copy.

.gitignore prevents future commits. It does nothing about past ones. If the credentials were ever in any commit, consider them compromised. Full stop.

Audit Your Repos Right Now

Before reading the rest of this post, check your existing repositories. Two tools handle this well.

TruffleHog

# Install
brew install trufflehog
# or
docker pull trufflesecurity/trufflehog:latest

# Scan a local repo — checks the entire git history
trufflehog git file://. --only-verified

# Scan an entire GitHub org
trufflehog github --org=your-org-name --only-verified

The --only-verified flag tells TruffleHog to actually test whether found credentials are still active. Cuts through the noise of rotated or test keys.

Gitleaks

# Install
brew install gitleaks

# Scan the current repo
gitleaks detect --source . -v

# Example output:
# Finding:     AKIAIOSFODNN7EXAMPLE
# Secret:      AKIAIOSFODNN7EXAMPLE
# RuleID:      aws-access-key-id
# Entropy:     3.684
# File:        config/prod.env
# Line:        3
# Commit:      a1b2c3d
# Author:      dev@example.com
# Date:        2024-03-15

If either tool finds active credentials, stop here and go rotate them. Right now. The prevention section will still be here when you get back.

Prevention: Three Layers

Layer 1: .gitignore + .env.example

The baseline. Every project needs this from day one:

# .gitignore
.env
.env.local
.env.production
.env.*.local

And a committed .env.example with placeholder values so new developers know what's needed:

# .env.example — committed to git, contains zero real values
DATABASE_URL=postgresql://user:password@localhost:5432/myapp
STRIPE_SECRET_KEY=sk_test_replace_this
AWS_ACCESS_KEY_ID=your_key_here
AWS_SECRET_ACCESS_KEY=your_secret_here
SENDGRID_API_KEY=SG.replace_this

This prevents the most common accident. But it doesn't catch credentials hardcoded in source files, stuffed into Docker Compose configs, or force-added past the gitignore by someone who thought they knew better.

Layer 2: Pre-Commit Hooks

A hook that blocks commits containing credential patterns. This catches mistakes before they leave the developer's machine:

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.5.0
    hooks:
      - id: detect-secrets
        args: ['--baseline', '.secrets.baseline']
# Setup
pip install pre-commit detect-secrets
detect-secrets scan > .secrets.baseline  # baseline existing findings
pre-commit install

Now any commit containing something that matches a credential pattern gets blocked:

$ git commit -m "add config"
Detect secrets..................................................Failed
- hook id: detect-secrets
- exit code: 1

ERROR: Potential secret in config/settings.py:14
  Type: AWS Access Key
  Line: AWS_KEY = "AKIAIOSFODNN7EXAMPLE"

Gitleaks works as a pre-commit hook too:

# .pre-commit-config.yaml (alternative)
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.21.2
    hooks:
      - id: gitleaks

Either tool works. The point is an automated gate between the developer and the remote repository. A human will forget. The hook won't.

Layer 3: Stop Using .env Files for Production

For anything beyond a solo side project, .env files shouldn't hold production secrets at all. Use a secrets manager.

Small teams / side projects — 1Password CLI:

eval $(op signin)
export DATABASE_URL=$(op read "op://Development/Database/url")
export STRIPE_KEY=$(op read "op://Development/Stripe/secret_key")

# Or inject into a process directly
op run --env-file=.env.tpl -- npm run start

Production infrastructure — AWS Secrets Manager:

# Store a secret
aws secretsmanager create-secret \
  --name prod/database-url \
  --secret-string "postgresql://prod_user:s3cure@db.example.com:5432/app"

# Retrieve it (in your app startup or entrypoint script)
aws secretsmanager get-secret-value \
  --secret-id prod/database-url \
  --query SecretString \
  --output text

Open-source / self-hostable — Infisical:

# Pull secrets as env vars
infisical export --env=production --format=dotenv > .env

# Or inject directly
infisical run --env=production -- npm start

The common pushback: "a secrets manager is overkill for my project." It's not. A secrets manager costs a few dollars a month. A compromised AWS key costs thousands. The calculus isn't complicated.

If You're Already Compromised

If you found live credentials in your git history, here's the order of operations. Don't skip steps and don't rearrange them.

Step 1: Revoke the exposed credentials immediately.

# AWS — deactivate the key right now
aws iam update-access-key \
  --access-key-id AKIAEXAMPLE \
  --status Inactive \
  --user-name affected-user

# Then delete it
aws iam delete-access-key \
  --access-key-id AKIAEXAMPLE \
  --user-name affected-user

Step 2: Rotate ALL credentials, not just the leaked one. If an attacker had AWS access, assume they created additional access keys, IAM roles, or Lambda functions for persistence. Check for unfamiliar users, roles, and keys.
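A sketch of that triage logic, run over access-key metadata of the shape you'd get back from `aws iam list-access-keys`. The keys, users, and dates below are all made up:

```python
from datetime import datetime, timezone

# Flag access keys created after the leak — likely attacker persistence.
# Sample records shaped like `aws iam list-access-keys` output (made up).
leak_time = datetime(2024, 3, 15, 10, 0, tzinfo=timezone.utc)

access_keys = [
    {"UserName": "deploy", "AccessKeyId": "AKIA_ORIGINAL_KEY",
     "CreateDate": datetime(2023, 11, 2, tzinfo=timezone.utc)},
    {"UserName": "deploy", "AccessKeyId": "AKIA_PLANTED_KEY",
     "CreateDate": datetime(2024, 3, 15, 10, 7, tzinfo=timezone.utc)},
]

suspicious = [k for k in access_keys if k["CreateDate"] >= leak_time]
for key in suspicious:
    print(f"Rotate and investigate: {key['UserName']} / {key['AccessKeyId']}")
```

Anything created on or after the leak timestamp gets rotated and investigated, no exceptions.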

Step 3: Audit what the leaked key did.

# Check AWS CloudTrail
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=AKIAEXAMPLE \
  --start-time "2024-01-01" \
  --end-time "2024-12-31" \
  --max-results 50

Look for: EC2 instance launches (especially GPU instances), S3 access in unfamiliar buckets or patterns, IAM changes, and activity in regions you don't use. Attackers favor regions like ap-southeast-1 and eu-west-3 precisely because teams rarely monitor them day to day.
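The region check is easy to automate once you've pulled the events. A sketch that flags anything from a region you never use; the event records below are invented stand-ins for CloudTrail output:

```python
# Regions your infrastructure actually uses — everything else is suspect.
EXPECTED_REGIONS = {"us-east-1", "eu-west-1"}

# Minimal stand-ins for CloudTrail event records (invented sample data).
events = [
    {"EventName": "RunInstances", "AwsRegion": "ap-southeast-1"},
    {"EventName": "DescribeInstances", "AwsRegion": "us-east-1"},
    {"EventName": "RunInstances", "AwsRegion": "eu-west-3"},
]

rogue = [e for e in events if e["AwsRegion"] not in EXPECTED_REGIONS]
for e in rogue:
    print(f"Unexpected activity: {e['EventName']} in {e['AwsRegion']}")
```

Two RunInstances calls in regions you've never touched is about as clear a signal as CloudTrail gives you.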

Step 4: Set up a billing alert if you don't have one.

aws cloudwatch put-metric-alarm \
  --alarm-name "BillingAlarm-100USD" \
  --metric-name EstimatedCharges \
  --namespace AWS/Billing \
  --statistic Maximum \
  --period 21600 \
  --threshold 100 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 1 \
  --alarm-actions "arn:aws:sns:us-east-1:YOUR_ACCOUNT_ID:billing-alerts" \
  --dimensions Name=Currency,Value=USD

Step 5: Clean the git history (after rotating, so the old creds are useless anyway):

# Install git-filter-repo (preferred over the older git filter-branch)
pip install git-filter-repo

# Remove a specific file from all history
git filter-repo --invert-paths --path .env --force

# Force push to overwrite remote history (branches and tags)
git push origin --force --all
git push origin --force --tags

This rewrites history. Coordinate with your team first because everyone will need to re-clone.

The 10-Minute Version

If nothing else, do these three things today:

  1. Run gitleaks detect --source . -v on every repo you own. Takes 30 seconds per repo.
  2. Add pre-commit with detect-secrets or gitleaks to your most active project. Takes 5 minutes.
  3. Set up an AWS billing alarm at $50 or $100. Takes 2 minutes and could save you thousands.

That's 10 minutes of work that closes one of the most common, most expensive, and most preventable security holes in software development.

