ckmtools

I Scanned 10 Popular GitHub Actions Workflows for Undocumented Environment Variables. Here's What I Found.


Nearly every repo has GitHub Actions workflows, and they're full of environment variables nobody documents. I spent an afternoon scanning 10 popular open-source JavaScript projects to find out how bad the problem really is.

What I Was Looking For

I was hunting for variables referenced in workflow YAML — ${{ secrets.VAR }}, env: blocks, hardcoded values — that appear nowhere in the project's README, .env.example, or CONTRIBUTING.md. The silent assumptions that break your fork on day one. The things maintainers know instinctively but never wrote down.

Methodology

I chose 10 projects that most JavaScript developers have at minimum heard of: Electron, NestJS, Next.js, Remix, Prisma, Supabase, Strapi, Fastify, TypeORM, and Vitest. For each, I fetched their workflow YAML files via the GitHub API and looked for env: blocks, ${{ secrets.* }} references, and any hardcoded values that looked like configuration. I then cross-checked against their README and CONTRIBUTING.md files. "Undocumented" means the variable name appears in no public documentation — not a sentence, not a comment, nothing.
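The core of that scan is simple enough to sketch. Below is a minimal, hypothetical re-implementation of the idea — pull candidate variable names out of raw workflow YAML, then flag any that never appear in the documentation text. The regexes and sample inputs are illustrative, not the exact ones I used.

```python
import re

# Secret references like ${{ secrets.REDIS_URL }}
SECRET_REF = re.compile(r"\$\{\{\s*secrets\.([A-Z0-9_]+)\s*\}\}")
# Indented SCREAMING_CASE keys, which catch most env: block entries
ENV_KEY = re.compile(r"^\s+([A-Z][A-Z0-9_]+):", re.MULTILINE)

def extract_vars(workflow_yaml: str) -> set[str]:
    """Collect secret references and env-style keys from raw workflow YAML."""
    return set(SECRET_REF.findall(workflow_yaml)) | set(ENV_KEY.findall(workflow_yaml))

def undocumented(workflow_yaml: str, docs: list[str]) -> set[str]:
    """Return variables whose names appear in none of the documentation texts."""
    corpus = "\n".join(docs)
    return {v for v in extract_vars(workflow_yaml) if v not in corpus}

workflow = """
jobs:
  build:
    env:
      DATABASE_URL: postgres://localhost/test
    steps:
      - run: npm test
        env:
          REDIS_URL: ${{ secrets.REDIS_URL }}
"""
readme = "Set DATABASE_URL before running the test suite."
print(undocumented(workflow, [readme]))  # {'REDIS_URL'}
```

A substring check against the docs is deliberately generous: if the name shows up anywhere at all, even in a code comment, I didn't count it as undocumented.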


The Findings

1. electron/electron

electron/electron — ★☆☆

Electron's build pipeline is understandably complex, but the env var situation is rough. CHROMIUM_GIT_COOKIE appears in nearly every workflow file — it's clearly essential for fetching the Chromium source — but there's no explanation of what it is, how to obtain it, or who manages it. The README has zero environment variable mentions. The contributing guide links to an external docs page.

The one that caught my eye: PATCH_UP_APP_CREDS. It shows up in the ARM/ARM64 Linux build job with zero context. Searching the repo reveals nothing useful. If you're trying to fork Electron's build pipeline, you'd have to ask in an issue and hope someone answers.

Also present: DD_API_KEY (Datadog) and CI_ERRORS_SLACK_WEBHOOK_URL — neither documented anywhere public.

2. nestjs/nest

nestjs/nest — ★★★

Honestly refreshing. NestJS has a single workflow file: codeql-analysis.yml. No custom secrets, no bespoke environment variables. Just the standard GITHUB_TOKEN. There's nothing to document because there's nothing unusual. This is what good hygiene looks like for a library project.

3. vercel/next.js

vercel/next.js — ★★☆

Next.js has the largest collection of environment variables of any project I looked at — and the README mentions zero of them. The build_reusable.yml alone defines 15+ env vars at the top level.

Most interesting cluster: three separate Vercel test tokens — VERCEL_TEST_TOKEN, VERCEL_ADAPTER_TEST_TOKEN, and VERCEL_TURBOPACK_TEST_TOKEN — each pointing to a different internal test team. The team names (vtest314-next-e2e-tests, vtest314-next-adapter-e2e-tests, vtest314-next-turbo-e2e-tests) suggest these are Vercel-internal accounts that nobody outside the org can replicate.

There's also KV_REST_API_URL and KV_REST_API_TOKEN (a Vercel KV store used for test timing data) and DATA_DOG_API_KEY — spelled differently from the DATADOG_API_KEY used in a separate job in the same file. Whether that inconsistency is intentional or a bug is unclear.
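Inconsistencies like that pair are cheap to catch mechanically. A small check — in the same hypothetical spirit as the scanner above — normalizes names by dropping underscores and groups any that collide; the sample names come from the findings above, not a live scan.

```python
from collections import defaultdict

def near_duplicates(names: list[str]) -> list[list[str]]:
    """Group variable names that differ only in underscore placement."""
    groups = defaultdict(list)
    for name in names:
        groups[name.replace("_", "")].append(name)
    return [g for g in groups.values() if len(g) > 1]

names = ["DATA_DOG_API_KEY", "DATADOG_API_KEY", "KV_REST_API_URL", "VERCEL_TEST_TOKEN"]
print(near_duplicates(names))  # [['DATA_DOG_API_KEY', 'DATADOG_API_KEY']]
```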

To be fair, some of this complexity is genuinely hard to document — it's infrastructure that only Vercel employees can operate. But a note explaining why these exist would help.

4. remix-run/remix

remix-run/remix — ★★★

The other clean result. Remix's build.yaml has zero environment variables. The check.yaml is equally bare. Their README focuses on library portability across JavaScript environments, which tracks with having almost no CI-specific secrets. If you fork Remix and run the CI, it should just work.

5. prisma/prisma

prisma/prisma — ★★☆

Prisma's README is actually solid — 12 mentions of environment variables, with clear docs on DATABASE_URL and how Prisma loads .env files. That's genuinely good documentation for library users.

The CI side is a different story. The release pipeline requires REDIS_URL — no explanation of what this Redis instance stores or where it lives. The benchmark workflow sets PRISMA_TELEMETRY_INFORMATION to the string 'prisma benchmark.yml' — an internal field that presumably tags telemetry events but isn't documented anywhere public. The release workflow also posts to Slack via SLACK_RELEASE_FEED_WEBHOOK and uses BOT_TOKEN (a personal access token, per an inline comment) for release tagging.

None of these are critical for contributors building features, but they mean you can't replicate the release process without asking.

6. supabase/supabase

supabase/supabase — ★☆☆

This one surprised me. Supabase requires OPENAI_API_KEY in two separate test workflows: ai-tests.yml and studio-e2e-test.yml. There's also a braintrust-evals.yml that pulls in BRAINTRUST_PROJECT_ID and BRAINTRUST_API_KEY for running LLM evaluations as part of CI.

The README has zero environment variable mentions. The CONTRIBUTING.md mentions "inclusive environment" and that's it. If you're a contributor who wants to run the full test suite, you need three external service accounts (OpenAI, Braintrust, Vercel) that are never mentioned in any onboarding document.

The CONTRIBUTING.md is 2,454 characters total. It links to a code of conduct and a Slack. That's all.

7. strapi/strapi

strapi/strapi — ★★☆

Strapi actually documents one of its important env vars: STRAPI_LICENSE gets a sentence in CONTRIBUTING.md explaining that contributors need it to run Enterprise Edition tests. Credit where it's due.

The rest is less tidy. SONARQUBE_HOST_URL is stored as a secret — not just the token, but the URL itself — which suggests they're running a private SonarQube instance. TRUNK_API_TOKEN appears for the trunk.io lint service. RELEASE_APP_ID and RELEASE_APP_SECRET power a GitHub App used for releases, with no public documentation on which app or why a dedicated one is needed.

The README is completely silent on all of this.

8. fastify/fastify

fastify/fastify — ★★★

Fastify is minimal and clean. The CI sets NODE_OPTIONS: --no-network-family-autoselection in the TypeScript test jobs — a real Node.js flag that disables Happy Eyeballs-style address family autoselection, presumably to work around flaky dual-stack networking in the test environment — but that's the only non-obvious thing. No custom secrets beyond GITHUB_TOKEN. No unexplained infrastructure dependencies. If you fork Fastify, CI will work.

9. typeorm/typeorm

typeorm/typeorm — ★★☆

TypeORM's notable env vars are CLOUDFLARE_API_TOKEN and CLOUDFLARE_ACCOUNT_ID, used to deploy their documentation to Cloudflare Pages. These are CI infrastructure secrets that you wouldn't need as a code contributor, but they're also never mentioned — not even a comment in the workflow file explaining what they deploy to or why. A line like # Deploys docs to Cloudflare Pages project 'typeorm' would answer every question.

The preview workflow has zero env vars. The codeql analysis has none either. Relatively clean overall.

10. vitest-dev/vitest

vitest-dev/vitest — ★★☆

Vitest sets VITEST_GENERATE_UI_TOKEN: 'true' as a global env var across the entire CI. This isn't documented in the README, the CONTRIBUTING, or the public docs. Based on context, it appears to control whether Vitest generates a token for its UI panel during test runs — but what that token is used for and why it's enabled in CI specifically isn't explained.

PLAYWRIGHT_BROWSERS_PATH is a standard Playwright caching pattern — acceptable. No external secrets required, which means forks can run the full test suite without any extra configuration.


Summary

| Project | Workflow Files | Env Vars Found | Secrets Found | README Docs | Doc Quality |
|---|---|---|---|---|---|
| electron/electron | 15+ | 2 | 5 | 0 mentions | ★☆☆ |
| nestjs/nest | 1 | 0 | 0 | 0 mentions | ★★★ |
| vercel/next.js | 5+ | 15 | 11 | 0 mentions | ★★☆ |
| remix-run/remix | 5+ | 0 | 0 | 3 mentions | ★★★ |
| prisma/prisma | 18 | 2 | 4 | 12 mentions | ★★☆ |
| supabase/supabase | 40+ | 0 | 4 | 0 mentions | ★☆☆ |
| strapi/strapi | 22 | 2 | 6 | 0 mentions | ★★☆ |
| fastify/fastify | 5 | 1 | 1 | 0 mentions | ★★★ |
| typeorm/typeorm | 5 | 0 | 2 | 1 mention | ★★☆ |
| vitest-dev/vitest | 6 | 2 | 0 | 0 mentions | ★★☆ |

Patterns Worth Noting

The infrastructure-as-secret problem. Several projects store URLs as secrets, not just tokens — Strapi's SONARQUBE_HOST_URL is the clearest example. This is reasonable from a security standpoint (you don't want to advertise your internal tooling endpoints), but it means contributors can't understand the CI pipeline from reading the YAML alone.
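A hypothetical workflow excerpt (not Strapi's actual YAML) shows why this hurts readability — the endpoint lives in the secret store, so nothing in the file tells you what the step talks to, and a one-line comment restores most of the lost context:

```yaml
# Infrastructure-as-secret: even the URL is hidden from readers of the YAML.
- name: SonarQube scan
  env:
    SONAR_HOST_URL: ${{ secrets.SONARQUBE_HOST_URL }}  # internal SonarQube instance; ops team manages access
    SONAR_TOKEN: ${{ secrets.SONARQUBE_TOKEN }}
  run: sonar-scanner
```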

Third-party service sprawl. Supabase requires OpenAI and Braintrust accounts to run the full test suite. Next.js requires Vercel-internal accounts that literally no external contributor can create. When your CI has hard dependencies on services that only your org controls, you've effectively made full CI reproduction impossible for outsiders — and none of these projects acknowledge this in their contributing docs.

The "works if you're an employee" problem. The variables most likely to go undocumented are the ones only relevant to a maintainer doing releases or running internal benchmarks. This makes sense — they never break for contributors building features. But it creates a knowledge silo: when you eventually need to run that release pipeline or onboard a new maintainer, the documentation doesn't exist.


Why This Matters

If you're maintaining a Node.js or Python project and want to audit your own repo for exactly this kind of gap, I've been building a tool called envscan that scans your codebase for environment variables used in code, workflow files, and configuration — then flags which ones are missing from .env.example or any documentation. You can check it out and get early access at envscan.ckmtools.dev. It's free while I'm validating the idea.

Have a project with surprisingly good env var docs? Drop it in the comments — I'd genuinely like to see it.
