If you're running a monorepo where multiple apps deploy independently through multiple environments, you'll eventually hit a limitation that GitLab CI handles natively: GitHub Actions doesn't support wildcards for environment variable scoping.
In this post, I'll explain why this becomes a real problem and how to work around it using a single repository variable and a jq one-liner.
The Problem
GitHub Actions supports named environments with their own variables and secrets, configured under Settings → Environments. For a single app with testing, staging, and production environments, this works fine - you define your variables three times and you're done.
The problem arises in a monorepo where multiple apps deploy independently, each with its own GitHub environment per tier. Independent per-app environments are useful because they give you fine-grained protection rules: you can require a reviewer for production/apps/dashboard without blocking production/apps/performance. In a reusable deployment workflow, the environment name is dynamic:
```yaml
jobs:
  deploy:
    environment:
      name: ${{ inputs.environment }}/apps/${{ inputs.name }}
```
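For context, such a reusable workflow would declare the two inputs used above. This is a minimal sketch (the descriptions are my assumptions, not from the original setup):

```yaml
on:
  workflow_call:
    inputs:
      environment:
        description: "Deployment tier, e.g. testing, staging, or production"
        required: true
        type: string
      name:
        description: "App name within the monorepo, e.g. dashboard"
        required: true
        type: string
```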
With ten apps and three tiers, you end up with thirty environments:
```
testing/apps/dashboard
testing/apps/performance
testing/apps/monitoring
...
staging/apps/dashboard
staging/apps/performance
staging/apps/monitoring
...
production/apps/dashboard
production/apps/performance
production/apps/monitoring
...
```
Variables like AWS_OIDC_IAM_ROLE_ARN, CDK_DEFAULT_ACCOUNT, or DOMAIN are the same for every app within a tier - all staging environments point to the same AWS account and domain. But GitHub has no wildcard scoping, so you can't say "set DOMAIN=staging.example.com for all staging/* environments" in one place.
Without wildcard support, you have two bad options:
- Define every variable on every environment individually - 30 environments × 15 variables = 450 entries to keep in sync
- Promote shared variables to the repository level - losing all environment separation
In GitLab, you'd define the variable once with scope staging/* and move on.
The Workaround
Store all environment-tier-specific variables in a single repository variable called ENVIRONMENT_CONFIG as a JSON object:
```json
{
  "testing": {
    "AWS_OIDC_IAM_ROLE_ARN": "arn:aws:iam::111111111111:role/testing-role",
    "CDK_DEFAULT_ACCOUNT": "111111111111",
    "DOMAIN": "test.example.com",
    "COGNITO_USER_POOL": "eu-central-1_abc123",
    "COGNITO_DOMAIN": "auth.test.example.com"
  },
  "staging": {
    "AWS_OIDC_IAM_ROLE_ARN": "arn:aws:iam::222222222222:role/staging-role",
    "CDK_DEFAULT_ACCOUNT": "222222222222",
    "DOMAIN": "staging.example.com",
    "COGNITO_USER_POOL": "eu-central-1_def456",
    "COGNITO_DOMAIN": "auth.staging.example.com"
  },
  "production": {
    "AWS_OIDC_IAM_ROLE_ARN": "arn:aws:iam::333333333333:role/production-role",
    "CDK_DEFAULT_ACCOUNT": "333333333333",
    "DOMAIN": "example.com",
    "COGNITO_USER_POOL": "eu-central-1_ghi789",
    "COGNITO_DOMAIN": "auth.example.com"
  }
}
```
ENVIRONMENT_CONFIG is a repository variable - not an environment variable - so it's accessible regardless of which GitHub environment the job runs in.
As the first step of the deployment job, extract the right tier and write each key-value pair to $GITHUB_ENV:
```yaml
- name: Load environment variables from repository variables
  run: |
    echo '${{ vars.ENVIRONMENT_CONFIG }}' | jq -r '.${{ inputs.environment }} | to_entries[] | "\(.key)=\(.value)"' >> $GITHUB_ENV
```
jq selects the object for the current environment (e.g., staging), converts each entry to KEY=VALUE format, and appends it to $GITHUB_ENV. Everything written to $GITHUB_ENV is available to all subsequent steps.
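You can sanity-check the jq filter locally before wiring it into a workflow. The config below is a trimmed-down stand-in for the real ENVIRONMENT_CONFIG; the sketch uses `--arg` in place of the expression interpolation GitHub Actions performs:

```shell
# Local stand-in for the repository variable (values are illustrative).
ENVIRONMENT_CONFIG='{"staging":{"DOMAIN":"staging.example.com","CDK_DEFAULT_ACCOUNT":"222222222222"}}'
environment="staging"

# Same filter as the workflow step: pick the tier object, emit KEY=VALUE lines.
echo "$ENVIRONMENT_CONFIG" | jq -r --arg env "$environment" \
  '.[$env] | to_entries[] | "\(.key)=\(.value)"'
# prints:
# DOMAIN=staging.example.com
# CDK_DEFAULT_ACCOUNT=222222222222
```

In the workflow itself those lines land in $GITHUB_ENV instead of stdout.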
In the full workflow below, the vars.* references in the env: block will be empty at job start; they exist only to document which variables the job expects. The Load environment variables step then supplies the real values via $GITHUB_ENV.
```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: ${{ inputs.environment }}/apps/${{ inputs.name }}
    env:
      AWS_OIDC_IAM_ROLE_ARN: ${{ vars.AWS_OIDC_IAM_ROLE_ARN }}
      CDK_DEFAULT_ACCOUNT: ${{ vars.CDK_DEFAULT_ACCOUNT }}
      DOMAIN: ${{ vars.DOMAIN }}
      COGNITO_USER_POOL: ${{ vars.COGNITO_USER_POOL }}
      COGNITO_DOMAIN: ${{ vars.COGNITO_DOMAIN }}
    steps:
      - uses: actions/checkout@v6
      - name: Load environment variables from repository variables
        run: |
          echo '${{ vars.ENVIRONMENT_CONFIG }}' | jq -r '.${{ inputs.environment }} | to_entries[] | "\(.key)=\(.value)"' >> $GITHUB_ENV
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v6
        with:
          role-to-assume: ${{ env.AWS_OIDC_IAM_ROLE_ARN }}
          aws-region: ${{ vars.AWS_DEFAULT_REGION }}
```
The env: block is evaluated once at job start - at that point vars.AWS_OIDC_IAM_ROLE_ARN is empty. The Load environment variables from repository variables step writes the correct value to $GITHUB_ENV, and all subsequent steps read from there. By the time Configure AWS credentials runs, ${{ env.AWS_OIDC_IAM_ROLE_ARN }} resolves to the correct value for the tier.
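One failure mode worth guarding against (my addition, not part of the original setup): if the tier key is missing from ENVIRONMENT_CONFIG, jq errors out on `to_entries` with a fairly cryptic message. A small pre-check makes the failure explicit. Sketched here as plain shell, with `$environment` standing in for `inputs.environment`:

```shell
# Hypothetical guard step body: verify the tier exists in the JSON before
# extracting it, so a typo in the environment name fails loudly.
ENVIRONMENT_CONFIG='{"staging":{"DOMAIN":"staging.example.com"}}'
environment="staging"

# jq -e sets a non-zero exit code when the filter result is false or null.
if ! echo "$ENVIRONMENT_CONFIG" | jq -e --arg env "$environment" 'has($env)' > /dev/null; then
  echo "No configuration for tier '$environment' in ENVIRONMENT_CONFIG" >&2
  exit 1
fi
echo "Tier '$environment' found"
```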
When this approach doesn't work:
This workaround is best suited for:
- Non-sensitive configuration values
- Values that change per environment tier, not per app
Consider alternatives if:
- You need to store secrets (use GitHub Secrets instead)
- You have app-specific variables (use environment variables on the GitHub environment)
I hope this post helps you manage environment-specific configuration without drowning in repeated variable definitions.
If you have any feedback, suggestions, or ideas - feel free to comment on this post!