I Scanned 6 Popular Node.js Repos for Undocumented Environment Variables. Here's What I Found.

ckmtools

Most Node.js projects accumulate process.env references over time. Some document them in .env.example; many don't. I wanted to know how bad the problem actually is in well-maintained, popular open-source repos, so I counted references using the GitHub code search API.

Here's what I found.

The Repos

I picked six repos with different scopes: two minimal HTTP frameworks, one structured framework, two full-stack application platforms, and one backend-as-a-service:

| Repo | Stars | Type |
| --- | --- | --- |
| expressjs/express | ~65k | HTTP framework |
| fastify/fastify | ~32k | HTTP framework |
| nestjs/nest | ~68k | Application framework |
| strapi/strapi | ~63k | Headless CMS |
| keystonejs/keystone | ~9k | Full-stack CMS |
| supabase/supabase | ~73k | BaaS platform (monorepo) |

For each repo I used the GitHub code search API to count process.env references, then checked for .env.example (or .env.sample, .env.template) files, both at the repo root and anywhere deeper in the tree.
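For reference, here's a minimal sketch of the kind of query this involves. It's an illustration, not my exact script: `searchUrl` and `countRefs` are names I made up, and you'd need a GitHub token with code-search access in `GITHUB_TOKEN`.

```javascript
// Sketch of the methodology. GitHub's code search endpoint is
// GET /search/code?q=<term>+repo:<owner/name>; the response's
// total_count field is the hit count.
function searchUrl(repo, term) {
  const q = encodeURIComponent(`${term} repo:${repo}`);
  return `https://api.github.com/search/code?q=${q}&per_page=1`;
}

async function countRefs(repo, term) {
  const res = await fetch(searchUrl(repo, term), {
    headers: {
      Accept: "application/vnd.github+json",
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
    },
  });
  const body = await res.json();
  return body.total_count; // number of matching files
}
```

One caveat worth knowing: code search's total_count counts matching files, not individual occurrences, so getting occurrence-level numbers means cloning and grepping locally.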

Results

| Repo | process.env refs | .env.example files | Coverage |
| --- | --- | --- | --- |
| expressjs/express | 6 | 0 | None |
| fastify/fastify | 5 | 0 | None |
| nestjs/nest | 7 | 0 | None |
| strapi/strapi | 135 | 10 | Partial |
| keystonejs/keystone | 112 | 3 | Partial |
| supabase/supabase | 294 | 24 | Best-in-class |

What This Actually Means

The numbers don't tell the full story. The three frameworks at the top (express, fastify, nest) aren't slacking — they're libraries. Their process.env usage is intentionally minimal. Express reads NODE_ENV in lib/application.js. Fastify uses a few vars in test scripts and a serverless guide. NestJS delegates env config entirely to application code via @nestjs/config.
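That footprint is essentially a one-liner. The snippet below illustrates the pattern; it's not Express's literal source:

```javascript
// The whole env surface of a typical framework: one variable,
// read once, with a safe default.
const env = process.env.NODE_ENV || "development";
const isProduction = env === "production";
```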

The bottom three are application platforms and CMS tools — products you self-host or deploy, where env configuration is core to the product. Their higher counts make sense.

Strapi: 135 refs, 10 .env.example files

The refs are spread across a large monorepo (packages/, examples/, scripts/). The examples each ship their own .env.example, but the core package doesn't have a central one. The most significant example — examples/complex/.env.example — contains exactly one line:

```
JWT_SECRET=replaceme
```

That's the entire documented env surface for a complex Strapi installation, despite the codebase containing 135 process.env references across all packages.

Keystone: 112 refs, 3 .env.example files

The .env.example files exist only for specific integration examples (S3 assets, Cloudinary). The docs/.env.example contains a single variable: BUTTONDOWN_API_KEY= — which is the newsletter API key for Keystone's own documentation site, not something users of the framework need.

Core application env vars (database URLs, session secrets) are documented in prose in the official docs, not as a discoverable example file.

Supabase: 294 refs, 24 .env.example files

Supabase is the standout here. The docker/.env.example is the most thorough example file I found across all six repos — it includes inline comments explaining what each variable does, links to docs for generating secrets, and even notes which values need to be rotated before going to production:

```
# YOU MUST CHANGE ALL THE DEFAULT VALUES BELOW BEFORE STARTING
# THE CONTAINERS FOR THE FIRST TIME!
```

That's the right way to do it. Still, the E2E test suite in e2e/studio/env.config.ts references vars like GITHUB_PASS, GITHUB_TOTP, VERCEL_AUTOMATION_BYPASS_SELFHOSTED_STUDIO, SUPA_PAT, and SUPA_REGION — none of which appear in any .env.example. These are CI/testing credentials that contributors need but have to discover by reading the source.

The Pattern

Across all six repos, a consistent pattern emerges:

Framework repos: Low env var count by design. Documentation isn't the problem — minimal surface is the point.

Application platform repos: High env var count, .env.example files exist but cover only a slice of the actual surface. The gap between documented and total process.env references can be large (strapi: 10 files documenting maybe 15 vars vs 135 total refs).

Test and CI env vars are almost never documented. Every repo with a test suite uses env vars to configure database URLs, API tokens, and service endpoints for testing. None of those showed up in .env.example files.

The Maintenance Problem

The real challenge isn't the initial .env.example — it's keeping it in sync as the codebase grows. A feature adds process.env.NEW_FEATURE_FLAG. The .env.example is a separate file. Nobody updates it because nothing enforces the connection.

In a small repo, this is fine. In a monorepo with 135+ references spread across packages and examples, it becomes hard to answer the question: "what env vars does this actually need?"
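A rough version of that answer can be automated. The sketch below is my own illustration (not how any particular tool implements it): it regex-scans source text for statically written process.env.FOO references and diffs them against the keys declared in a .env.example.

```javascript
// Collect every statically written process.env.FOO reference in a source string.
// (Dynamic access like process.env[name] is invisible to this approach.)
function findEnvRefs(source) {
  const refs = new Set();
  for (const m of source.matchAll(/process\.env\.([A-Z0-9_]+)/g)) {
    refs.add(m[1]);
  }
  return refs;
}

// Parse KEY=value lines from a .env.example, skipping comments and blanks.
function parseEnvExample(text) {
  const keys = new Set();
  for (const line of text.split("\n")) {
    const m = line.match(/^\s*([A-Z0-9_]+)\s*=/);
    if (m) keys.add(m[1]);
  }
  return keys;
}

// Vars referenced in code but missing from the example file.
function undocumented(source, example) {
  const documented = parseEnvExample(example);
  return [...findEnvRefs(source)].filter((k) => !documented.has(k));
}
```

Run against Strapi's situation above, `undocumented("process.env.JWT_SECRET; process.env.DB_URL", "JWT_SECRET=replaceme")` returns `["DB_URL"]`. Wiring a check like this into CI is what actually enforces the connection.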

Wrapping Up

If you're dealing with this problem in your own codebase — especially if you've inherited a repo where nobody's sure what all the process.env references actually are — scanning the source files is the most reliable way to get a definitive list. I'm working on envscan, a tool that does exactly that: scans your source files to discover every env var your code references, and compares it against your .env.example. It's in development with a waitlist open if that sounds useful.


Data collected 2026-03-18 using the GitHub Code Search API. Counts reflect the state of the default branches at time of writing. Repos with monorepo structures may have higher counts due to cross-package test fixtures and build scripts.

Repos scanned: express, fastify, nest, strapi, keystone, supabase
