## The problem every GitLab user eventually hits
You've got a `.env.production` file with 80 variables. Time to deploy to a new environment. So you open GitLab, navigate to **Settings → CI/CD → Variables**, and start clicking.
Add variable. Set key. Paste value. Toggle "masked". Toggle "protected". Save. Repeat.
After the fifth variable you're already making mistakes. After the twentieth you've lost count. After the fiftieth you've decided that whoever designed this workflow has never had to actually use it.
The next attempt is a bash script. Something like:
```bash
while IFS='=' read -r key value; do
  curl -X POST "https://gitlab.com/api/v4/projects/$PROJECT_ID/variables" \
    --header "PRIVATE-TOKEN: $TOKEN" \
    --form "key=$key" \
    --form "value=$value"
done < .env.production
```
It works until it doesn't. No rate limiting, no retry on 429, no masked/protected flags, sequential execution, no way to preview what's changing. One mistyped variable and you're debugging a broken pipeline at 2am.
I built glenv to fix this properly.
## Meet glenv
glenv is a single-binary CLI written in Go that syncs .env files with GitLab CI/CD variables via the API. It handles bulk imports, exports, diffs, and multi-environment workflows — with rate limiting and auto-classification built in.
What it does:
- Syncs hundreds of variables in seconds with concurrent workers
- Auto-detects which variables should be `masked`, `protected`, or `file` type
- Shows a diff before applying any changes so there are no surprises
- Handles GitLab's rate limits and 429 responses automatically
- Manages production, staging, and custom environments from a single config
- Works with gitlab.com and any self-hosted instance
## Get started in 30 seconds
```bash
# macOS/Linux via Homebrew
brew install ohmylock/tools/glenv

# Or via go install
go install github.com/ohmylock/glenv/cmd/glenv@latest
```
Set your credentials:
```bash
export GITLAB_TOKEN="glpat-xxxxxxxxxxxx"
export GITLAB_PROJECT_ID="12345678"
```
Preview what would change before touching anything:
```bash
glenv diff -f .env.production -e production
```
Output:
```
+ DB_HOST=postgres.internal
+ DB_PORT=5432
~ API_KEY: *** → *** [masked]
- OLD_DEPRECATED_VAR
= LOG_LEVEL
```
If it looks right, apply:
```bash
glenv sync -f .env.production -e production
```
That's it. 80 variables, a few seconds, done.
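Under the hood, a diff like the one above boils down to comparing two key/value maps: the parsed local file and the remote variables. Here's a minimal sketch of that comparison in Go — an illustrative reimplementation, not glenv's actual code, and `diffLine` is a hypothetical helper name:

```go
package main

import "fmt"

// diffLine classifies one key against the local and remote maps:
// "+" new locally, "-" only on remote, "~" value changed, "=" unchanged.
func diffLine(key string, local, remote map[string]string) string {
	lv, inLocal := local[key]
	rv, inRemote := remote[key]
	switch {
	case inLocal && !inRemote:
		return "+ " + key
	case !inLocal && inRemote:
		return "- " + key
	case lv != rv:
		return "~ " + key
	default:
		return "= " + key
	}
}

func main() {
	local := map[string]string{"DB_HOST": "postgres.internal", "LOG_LEVEL": "info"}
	remote := map[string]string{"OLD_DEPRECATED_VAR": "1", "LOG_LEVEL": "info"}
	for _, k := range []string{"DB_HOST", "OLD_DEPRECATED_VAR", "LOG_LEVEL"} {
		fmt.Println(diffLine(k, local, remote))
	}
}
```

The important property is that the diff is computed entirely before any write: `sync` only has to replay the `+`, `~`, and `-` entries against the API.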
## Smart variable classification
One of the more annoying parts of the GitLab UI is that you have to manually decide whether each variable should be masked or protected. Miss a `DATABASE_PASSWORD` and it shows up in plain text in your pipeline logs.
glenv auto-classifies variables based on key name patterns and value properties:
| Property | When applied |
|---|---|
| `masked` | Key contains `_TOKEN`, `SECRET`, `PASSWORD`, `API_KEY`, or `DSN`, and the value is single-line and ≥8 chars |
| `protected` | Environment is production AND the key matches a secret pattern |
| `file` | Key contains `PRIVATE_KEY`, `_CERT`, or `_PEM`, or the value contains `-----BEGIN` |
Variables with placeholder values like `your_api_key_here` or `CHANGE_ME` are automatically skipped — they won't pollute your remote variables.
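Taken together, the rules in the table amount to a small classifier. The sketch below is an illustrative Go reimplementation of those heuristics with the default patterns hardcoded — `classify` is a hypothetical function name, not glenv's internal API:

```go
package main

import (
	"fmt"
	"strings"
)

var (
	maskedPatterns = []string{"_TOKEN", "SECRET", "PASSWORD", "API_KEY", "DSN"}
	filePatterns   = []string{"PRIVATE_KEY", "_CERT", "_PEM"}
)

func containsAny(key string, patterns []string) bool {
	for _, p := range patterns {
		if strings.Contains(key, p) {
			return true
		}
	}
	return false
}

// classify applies the heuristics from the table above.
func classify(key, value, env string) (masked, protected, file bool) {
	secretKey := containsAny(key, maskedPatterns)
	// masked: secret-looking key, single-line value, at least 8 chars
	masked = secretKey && !strings.Contains(value, "\n") && len(value) >= 8
	// protected: production environment AND a secret-looking key
	protected = env == "production" && secretKey
	// file: key pattern match, or a PEM-style value
	file = containsAny(key, filePatterns) || strings.Contains(value, "-----BEGIN")
	return
}

func main() {
	m, p, f := classify("DATABASE_PASSWORD", "hunter2-hunter2", "production")
	fmt.Println(m, p, f)
}
```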
You can customize the patterns via config:
```yaml
classify:
  masked_patterns:
    - "_TOKEN"
    - "SECRET"
    - "PASSWORD"
  masked_exclude:
    - "MAX_TOKENS"  # don't mask rate limit settings
    - "PORT"
  file_patterns:
    - "PRIVATE_KEY"
    - "_PEM"
```
## Multi-environment workflows
For projects with multiple environments, a .glenv.yml config file replaces repetitive flags:
```yaml
gitlab:
  token: ${GITLAB_TOKEN}  # env var expansion supported
  project_id: "12345678"

environments:
  staging:
    file: deploy/.env.staging
  production:
    file: deploy/.env.production
```
Then sync all environments at once:
```bash
glenv sync --all
```
glenv processes environments alphabetically, reports results per environment, and aggregates errors so you see the full picture even if one environment fails.
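That `--all` behavior — alphabetical order, per-environment reporting, aggregated errors — can be sketched with `sort` and `errors.Join`. Everything here (`syncEnv`, `syncAll`, the failure scenario) is hypothetical and only illustrates the control flow:

```go
package main

import (
	"errors"
	"fmt"
	"sort"
)

// syncEnv stands in for a real per-environment sync; here it
// simulates one environment failing with an auth error.
func syncEnv(name, file string) error {
	if name == "staging" {
		return fmt.Errorf("staging (%s): 401 unauthorized", file)
	}
	return nil
}

func syncAll(envs map[string]string) error {
	// process environments alphabetically for stable output
	names := make([]string, 0, len(envs))
	for n := range envs {
		names = append(names, n)
	}
	sort.Strings(names)

	var errs []error
	for _, n := range names {
		if err := syncEnv(n, envs[n]); err != nil {
			errs = append(errs, err) // keep going: aggregate instead of aborting
			fmt.Printf("%-12s FAILED: %v\n", n, err)
			continue
		}
		fmt.Printf("%-12s OK\n", n)
	}
	return errors.Join(errs...) // nil if every environment succeeded
}

func main() {
	err := syncAll(map[string]string{
		"production": "deploy/.env.production",
		"staging":    "deploy/.env.staging",
	})
	fmt.Println("aggregate error:", err)
}
```

The point of aggregating rather than failing fast is exactly what the post describes: one broken environment shouldn't hide the status of the others.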
## Rate limiting that actually works
GitLab.com allows ~2,000 API requests per minute. With 5 concurrent workers and no rate limiter, you'll hit that ceiling on any non-trivial project.
glenv uses a token bucket rate limiter shared across all workers. The default is 10 requests/second — well under the limit, but fast enough to sync 100 variables in about 10 seconds. When GitLab returns a 429, glenv reads the Retry-After header, waits, then retries with exponential backoff.
For self-hosted instances you can push it harder:
```bash
glenv sync -f .env -e production --workers 10 --rate-limit 50
```
## CI/CD pipeline integration
glenv can run inside your GitLab pipeline itself — useful for promoting variables between environments:
```yaml
# .gitlab-ci.yml
sync-variables:
  image: golang:1.23-alpine
  script:
    - go install github.com/ohmylock/glenv/cmd/glenv@latest
    - glenv sync -f deploy/.env.${CI_ENVIRONMENT_NAME} -e ${CI_ENVIRONMENT_NAME}
  variables:
    GITLAB_TOKEN: ${DEPLOY_TOKEN}
    GITLAB_PROJECT_ID: ${CI_PROJECT_ID}
```
## What's next
A few things on the roadmap:
- Group-level variables support (not just project-level)
- `glenv import` from an existing GitLab project (clone variables between projects)
- Watch mode: detect `.env` file changes and sync automatically
- GitHub Actions artifact: pre-built binary for pipeline use without `go install`
If you're dealing with GitLab CI/CD variables at any scale beyond a handful of keys, give it a try.
GitHub: github.com/ohmylock/glenv
Feedback, issues, and PRs are all welcome. If it saves you time, a star helps others find it. ⭐