Todd Sullivan

Killing the Setup Endpoint: Moving Env Provisioning into GitHub Actions

We had an API endpoint that set up environments. It claimed a pre-warmed org from a pool, authenticated two users, imported test data, installed a bundle, and published config. Six sequential shell calls. Runtime dependency on a server. Credentials scattered across process state. A pain to debug when it failed at step 4 of 6 at 2am.

The fix wasn't to rewrite the API. It was to stop having an API at all.

The move: GitHub Actions as the runtime

The entire setup sequence now lives in a single GitHub Actions workflow file. No server, no queue, no process isolation hacks. The runner is the environment — ephemeral, observable, retryable.

The key architectural shifts:

1. Parallelise everything that can be.

The old endpoint ran sequentially because it was a Node.js service pushing work through a queue. GitHub Actions gives you parallelism at two levels: jobs run concurrently unless you chain them with needs, and a matrix fans a single job out across many. Auth for two users? One run block, two background processes, wait. Test data import for multiple data keys? Matrix strategy, each key in its own parallel job. What was 6 serial calls is now 3 parallel groups (both patterns are sketched below).

Before: ~8 minutes end-to-end.
After: ~3.5 minutes.
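
A rough sketch of the two patterns. The script paths, user variables, and dataset keys are placeholders, not our real setup; the shape is what matters:

jobs:
  authenticate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Auth both users in one step
        env:
          ADMIN_USER: ${{ secrets.ADMIN_USER }}
          TEST_USER: ${{ secrets.TEST_USER }}
        run: |
          # Two background processes, then block until both are done
          ./scripts/auth.sh "$ADMIN_USER" &
          ./scripts/auth.sh "$TEST_USER" &
          wait

  import-data:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        dataset: [core, billing, reporting]   # one parallel job per key
    steps:
      - uses: actions/checkout@v4
      - name: Import ${{ matrix.dataset }}
        run: ./scripts/import-data.sh "${{ matrix.dataset }}"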

2. Reusable workflows for cross-repo consumption.

The real unlock was workflow_call. Instead of every repo maintaining its own setup script or calling an API, each one just references the central workflow:

jobs:
  provision:
    uses: your-org/env-setup/.github/workflows/setup.yml@main
    with:
      environment: staging
      dataset: core
    secrets: inherit

secrets: inherit means the caller's secrets pass through automatically — define them once at the org level, every repo picks them up. No per-repo secret duplication. Rotate once, everything updates.
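
On the other side of the call, the central workflow declares its interface with workflow_call. The input names match the caller snippet above; the job body and script are placeholders, not our actual provisioning steps:

# .github/workflows/setup.yml in your-org/env-setup
on:
  workflow_call:
    inputs:
      environment:
        description: Target environment to provision
        type: string
        required: true
      dataset:
        description: Test dataset to import
        type: string
        default: core

jobs:
  provision:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Claim org and start provisioning
        env:
          # Available through the caller's `secrets: inherit`, no per-secret plumbing
          ADMIN_PASSWORD: ${{ secrets.ADMIN_PASSWORD }}
        run: ./scripts/claim-org.sh "${{ inputs.environment }}" "${{ inputs.dataset }}"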

3. Credentials as artifacts, not environment variables.

Secrets (passwords, tokens, auth URLs) get written to a JSON file and uploaded as a run artifact with masking:

echo "::add-mask::$ADMIN_PASSWORD"

Downstream jobs download the artifact, read only the values they need, and re-apply masking before using them. Logs stay clean, credentials are scoped to the job that needs them, and no secret bleeds into env vars that outlive the step.
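
In practice that's one producer job and any number of consumers. A trimmed sketch, where the secret name, artifact name, and smoke-test script are illustrative:

jobs:
  provision:
    runs-on: ubuntu-latest
    steps:
      - name: Write and mask credentials
        env:
          ADMIN_PASSWORD: ${{ secrets.ADMIN_PASSWORD }}
        run: |
          echo "::add-mask::$ADMIN_PASSWORD"
          # jq ships on the hosted runners; build the JSON without echoing values
          jq -n --arg pw "$ADMIN_PASSWORD" '{adminPassword: $pw}' > credentials.json
      - uses: actions/upload-artifact@v4
        with:
          name: env-credentials
          path: credentials.json

  smoke-test:
    needs: provision
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: env-credentials
      - name: Read credentials, re-mask, use
        run: |
          PW=$(jq -r '.adminPassword' credentials.json)
          echo "::add-mask::$PW"
          ./scripts/run-smoke-tests.sh --password "$PW"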

4. Non-secret outputs as workflow outputs.

Instance URLs, user IDs, org IDs — non-sensitive stuff — get published as jobs.<job>.outputs. Any downstream job can reference needs.provision.outputs.instanceUrl directly. Clean separation between sensitive and non-sensitive data.
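
Wiring that up in a reusable workflow takes one extra hop: the job exposes an output, and workflow_call re-exports it to the caller. The names here match the earlier snippets; the URL is a stand-in:

# In the reusable workflow
on:
  workflow_call:
    outputs:
      instanceUrl:
        description: URL of the provisioned instance
        value: ${{ jobs.provision.outputs.instanceUrl }}

jobs:
  provision:
    runs-on: ubuntu-latest
    outputs:
      instanceUrl: ${{ steps.claim.outputs.instanceUrl }}
    steps:
      - id: claim
        run: echo "instanceUrl=https://staging-1234.example.com" >> "$GITHUB_OUTPUT"

# In the caller, any job with `needs: provision` can read
# ${{ needs.provision.outputs.instanceUrl }} directly.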

What this replaced

The old flow required a running API server, a cloud function, and a manually maintained shell script per environment type. When the server had a bad deploy, env setup broke. When the shell script fell out of sync with the API, you got silent failures.

Now it's a YAML file in a repo. PRs are reviewed. Failures show up in Actions logs with full context. Retries are a button click.

The unexpected benefit

Making setup a reusable workflow forced us to define its interface clearly: inputs, outputs, required secrets. That contract made the setup process legible to anyone on the team, not just the person who wrote the original API endpoint.

If you're running environment provisioning as a service endpoint and it's causing pain — consider whether it needs to be a service at all. Sometimes the right move is to make the CI runner do the work.
