Have you ever joined a project and spent your first day hunting down the magical incantation of environment variables needed to run the application locally? Perhaps someone pointed you to an outdated .env.example file, or worse, a Slack message from six months ago with half the values redacted. "Just ask Sarah for the database password," they say. Sarah left the company in March.
The environment variable problem seems trivial until it isn't. What starts as a handful of configuration values inevitably grows into dozens of settings scattered across developer machines, deployment pipelines, and hastily shared documents. And it's not just secrets: API endpoints, feature flags, service URLs, port numbers, and environment-specific behavior toggles all end up in the mix. I've watched teams lose entire days to configuration drift, where production, staging, and local environments slowly diverge until mysterious bugs appear that "work on my machine."
There's a better way. In this post, I'll walk you through a pattern I've been using that treats a cloud vault as the single source of truth for all environment configuration, populates it automatically during infrastructure provisioning, and provides developers with a single command to sync the right variables to the right projects—no secrets in source control, no copying values through chat, no wondering if your local setup matches everyone else's.
While the examples here use Azure Key Vault and Azure-specific tooling, the pattern itself is cloud-agnostic. Swap in AWS Secrets Manager, Google Cloud Secret Manager, HashiCorp Vault, or any other secret store with a CLI—the architecture remains the same. The key insight isn't about Azure; it's about establishing a single source of truth and automating the flow from that source to everywhere configuration is needed.
The Core Problem: Configuration Has Too Many Homes
Let me paint a picture. A typical application deployment involves several environments: local development, perhaps a shared dev environment, staging, and production. Each environment needs its own set of configuration values, and these aren't just secrets. Yes, there are database connection strings and API keys (whether or not they belong in environment variables, in practice they're there), but there's also:
- Service endpoints: Where does the frontend call the API? Where does the API call the authentication service?
- Feature flags: Is the new checkout flow enabled? Should we show the beta banner?
- Behavioral settings: What's the session timeout? How many retry attempts? What log level?
- Infrastructure coordinates: Which Redis cluster? What storage account? Which message queue?
The traditional approach scatters these values across multiple locations:
- Developer machines: Each developer maintains their own .env files, manually updated whenever something changes
- CI/CD pipelines: Variables configured in GitHub Actions, Azure DevOps, or whatever orchestrates your deployments
- Hosting platforms: App Service configuration, Static Web App settings, Azure Functions local settings
- Documentation: READMEs, wikis, or Notion pages that inevitably fall out of sync
- Human memory: "Oh, you need to set SPECIAL_FLAG=true for that feature to work"
The fundamental problem is that there's no authoritative source. When a value needs to change, someone has to remember everywhere it lives and update each location. They won't. Drift is inevitable.
The Vault as Source of Truth
The pattern I recommend flips this model. Instead of the vault being one of many places configuration lives, it becomes the place. Every other location—local .env files, CI/CD variables, app service configuration—is derived from the vault rather than managed independently.
When you need to change a configuration value, you update it in the vault. Developers run a sync script to refresh their local environments. Deployments pull from the vault directly or through secure variable injection. One change propagates everywhere.
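Concretely, with the Azure CLI (the vault and secret names here match the examples later in this post), a change looks like this:

```bash
# Update the canonical value once, in the vault...
az keyvault secret set --vault-name kv-myapp-dev --name "API-BASE-URL" \
  --value "https://api-dev.mycompany.com" -o none

# ...then each developer refreshes their local files from it
./scripts/load-env.sh local
```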
Populating the Vault During Provisioning
The first piece of this pattern is ensuring the vault gets populated with the right values when infrastructure is provisioned. If you're using Bicep, Terraform, or CloudFormation, this becomes part of your infrastructure-as-code.
Here's an Azure Bicep example that provisions a Key Vault and populates it with generated values:
@description('Environment name (dev, staging, prod)')
param environment string
@description('Base name for resources')
param baseName string
// Derive a string for the JWT secret at deployment time.
// Caveat: uniqueString() is deterministic, not cryptographically random;
// for a real signing key, prefer a deploymentScript resource or external generation.
var jwtSecret = uniqueString(resourceGroup().id, 'jwt', environment)
resource keyVault 'Microsoft.KeyVault/vaults@2023-07-01' = {
name: 'kv-${baseName}-${environment}'
location: resourceGroup().location
properties: {
sku: {
family: 'A'
name: 'standard'
}
tenantId: subscription().tenantId
enableRbacAuthorization: true
enableSoftDelete: true
softDeleteRetentionInDays: 90
}
}
// Secrets that should be generated during provisioning
resource jwtSecretEntry 'Microsoft.KeyVault/vaults/secrets@2023-07-01' = {
parent: keyVault
name: 'JWT-SECRET'
properties: {
value: '${jwtSecret}${uniqueString(deployment().name)}'
}
}
// Secrets derived from other provisioned resources
// (sqlServer and sqlDatabase are assumed to be declared elsewhere in this template)
resource dbConnectionString 'Microsoft.KeyVault/vaults/secrets@2023-07-01' = {
parent: keyVault
name: 'DATABASE-CONNECTION-STRING'
properties: {
value: 'Server=${sqlServer.properties.fullyQualifiedDomainName};Database=${sqlDatabase.name};...'
}
}
// Environment-specific configuration
resource apiBaseUrl 'Microsoft.KeyVault/vaults/secrets@2023-07-01' = {
parent: keyVault
name: 'API-BASE-URL'
properties: {
value: environment == 'prod'
? 'https://api.mycompany.com'
: 'https://api-${environment}.mycompany.com'
}
}
The key insight here is that configuration values fall into categories. Some—like JWT signing keys—should be randomly generated during provisioning and never typed by a human. Others—like database connection strings—are derived from resources created in the same deployment. Still others—like API URLs—follow predictable patterns based on environment. By encoding these rules in your infrastructure templates, you eliminate manual configuration entirely.
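After a deployment, it's easy to confirm what actually landed in the vault. A one-line check with the Azure CLI (assuming baseName is myapp, per the naming in the Bicep above):

```bash
# List every secret name in the dev environment's vault
az keyvault secret list --vault-name kv-myapp-dev --query "[].name" -o tsv
```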
For values that genuinely need human input—third-party API keys, for instance—I recommend a two-phase approach. The infrastructure provisioning creates placeholder secrets, and a separate onboarding script prompts operators to fill in the values that can't be generated:
#!/bin/bash
# post-provision-setup.sh
set -euo pipefail

ENVIRONMENT="${1:?Usage: $0 <environment>}"
VAULT_NAME="kv-myapp-$ENVIRONMENT"

echo "Setting up third-party integrations for $ENVIRONMENT..."

# Check for placeholder values and prompt for real ones
if [[ $(az keyvault secret show --vault-name "$VAULT_NAME" --name "STRIPE-API-KEY" --query "value" -o tsv) == "PLACEHOLDER" ]]; then
  read -rsp "Enter Stripe API key: " stripe_key
  echo
  az keyvault secret set --vault-name "$VAULT_NAME" --name "STRIPE-API-KEY" --value "$stripe_key" -o none
  echo "✓ Stripe API key configured."
fi
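The placeholder seeding itself can live in a provisioning wrapper script. Here's a minimal sketch, written to be idempotent so that re-running provisioning never clobbers a real value (SENDGRID-API-KEY is a hypothetical second integration):

```bash
# Seed placeholders only for secrets that don't exist yet
for name in STRIPE-API-KEY SENDGRID-API-KEY; do
  if ! az keyvault secret show --vault-name "$VAULT_NAME" --name "$name" -o none 2>/dev/null; then
    az keyvault secret set --vault-name "$VAULT_NAME" --name "$name" --value "PLACEHOLDER" -o none
  fi
done
```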
The Three-Layer Configuration Architecture
Here's where the pattern gets interesting. Fetching values from the vault is only part of the puzzle. In a real monorepo, different applications need the same configuration in different formats. Your Vite frontend expects VITE_API_URL in a .env file. Your Azure Functions expect ApiUrl in a local.settings.json file with a specific JSON structure. Your backend services might want API_URL in yet another .env file. Same value, different names, different formats.
The solution is a three-layer architecture:
1. The vault: the canonical values, one set per environment
2. The configuration manifest: a declarative mapping from canonical names to each application's variable names and file formats
3. The generated config files: the .env and local.settings.json files each app actually reads, written by the loader and never edited by hand
The configuration manifest is the crucial middle layer. It's a declarative JSON file that lives in your repository and describes how configuration flows into each application:
{
"$schema": "./env-manifest.schema.json",
"vault": {
"namePattern": "kv-{environmentCode}-{appName}"
},
"globalVariables": [
"jwt-secret",
"database-connection-string",
"api-base-url",
"control-api-url",
"redis-host",
"redis-password",
"feature-flag-new-checkout"
],
"defaults": {
"redis-port": "6380",
"jwt-expiration": "24h"
},
"localOverrides": {
"api-base-url": "http://localhost:7073",
"control-api-url": "http://localhost:7074",
"enable-test-credentials": "true"
},
"applications": {
"portal": {
"configPath": "apps/portal/.env",
"format": "dotenv",
"requiredGlobals": [],
"secretMappings": {
"api-base-url": "VITE_API_URL",
"control-api-url": "VITE_CONTROL_API_URL",
"enable-test-credentials": "VITE_ENABLE_TEST_CREDENTIALS"
},
"specificOverrides": {
"VITE_ENVIRONMENT": "${ENVIRONMENT}"
}
},
"management-api": {
"configPath": "apps/management-api/.env",
"format": "dotenv",
"requiredGlobals": [],
"secretMappings": {
"jwt-secret": "JWT_SECRET",
"database-connection-string": "DATABASE_URL",
"redis-host": "REDIS_HOST",
"redis-password": "REDIS_PASSWORD"
},
"specificOverrides": {
"NODE_ENV": "${NODE_ENV}",
"PORT": "7073",
"JWT_EXPIRATION": "24h"
}
},
"control-functions": {
"configPath": "apps/control-functions/local.settings.json",
"format": "json-settings",
"requiredGlobals": [],
"secretMappings": {
"api-base-url": "ApiBaseUrl",
"jwt-secret": "JwtSecret",
"database-connection-string": "DatabaseConnectionString"
},
"specificOverrides": {
"AzureWebJobsStorage": "UseDevelopmentStorage=true",
"FUNCTIONS_WORKER_RUNTIME": "node"
}
}
}
}
This manifest captures everything the loader script needs to know. The same canonical secret api-base-url becomes VITE_API_URL in the portal and ApiBaseUrl in the Azure Functions project. The localOverrides section automatically redirects URLs to localhost—without polluting the vault with development-specific values. And specificOverrides handles values that are constant for an application but shouldn't live in the vault (like port numbers or runtime settings).
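Because the manifest is plain JSON, ordinary tooling can sanity-check it. Here's a jq sketch that flags any mapped secret with no known source, meaning it appears in no globalVariables entry, no defaults key, and no localOverrides key:

```bash
jq -r '
  (.globalVariables + (.defaults | keys) + (.localOverrides | keys)) as $known
  | .applications | to_entries[]
  | .key as $app
  | (.value.secretMappings | keys[])
  | select(. as $s | $known | index($s) | not)
  | "\($app): secret \(.) has no known source"
' config/env-manifest.json
```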
The Environment Loader Script
With the manifest in place, the loader script becomes surprisingly straightforward. Before diving into implementation, let me walk through the essential flow:
- Read the manifest — what variables exist, where they go
- Fetch secrets from Key Vault — the canonical values
- Apply local overrides — localhost URLs, test credentials
- Map canonical names to app-specific names — api-url → VITE_API_ENDPOINT
- Write config files in each app's expected format — dotenv, JSON, YAML
Here's the logic in pseudocode:
function loadEnvironment(targetEnvironment):
# 1. RESOLVE CONFIGURATION SOURCES
manifest = readJSON("config/env-manifest.json")
infraParams = readJSON("infra/env/{targetEnvironment}.parameters.json")
vaultName = buildVaultName(infraParams.environmentCode, infraParams.region)
# 2. AUTHENTICATE & CONNECT
if not cloudCLI.isLoggedIn():
exit("Please authenticate with cloud provider first")
vault = connectToVault(vaultName)
# 3. FETCH ALL SECRETS FROM VAULT
secrets = {}
for secretName in manifest.globalVariables:
secrets[secretName] = vault.getSecret(secretName)
?? manifest.defaults[secretName]
?? null
# 4. APPLY LOCAL OVERRIDES (when running locally)
if targetEnvironment == "local":
for key, value in manifest.localOverrides:
secrets[key] = value # localhost URLs, test credentials, etc.
# 5. GENERATE CONFIG FILES FOR EACH APPLICATION
for app in manifest.applications:
# Build this app's environment variables
appEnvVars = {}
# Add global variables this app needs
for varName in app.requiredGlobals:
appEnvVars[varName] = secrets[varName]
# Map infrastructure secrets to app-specific names
# e.g., "api-url" → "VITE_API_ENDPOINT" for frontend
for secretName, localVarName in app.secretMappings:
appEnvVars[localVarName] = secrets[secretName]
# Add app-specific overrides
for key, value in app.specificOverrides:
appEnvVars[key] = value
# Write in the correct format for this app type
outputPath = app.configPath
switch app.format:
case "dotenv":
writeAsDotenv(outputPath, appEnvVars)
case "json-settings":
writeAsJsonSettings(outputPath, appEnvVars)
case "yaml":
writeAsYaml(outputPath, appEnvVars)
log("Generated {outputPath} with {count(appEnvVars)} variables")
# 6. VALIDATE & REPORT
detectConflicts(generatedFiles)
reportMissingSecrets(secrets)
log("Environment setup complete for: {targetEnvironment}")
What matters most is that the script separates what from how. The manifest declares what each application needs. The script handles how to fetch, transform, and write those values. This separation means adding a new application is a manifest change, not a script change.
Output Formatters
Different runtimes expect different file formats. The script handles this with format-specific writers:
function writeAsDotenv(path, vars):
content = "# Generated - do not edit\n"
for key, value in vars:
content += "{key}={value}\n"
writeFile(path, content)
function writeAsJsonSettings(path, vars):
settings = {
"IsEncrypted": false,
"Values": vars
}
writeFile(path, toJSON(settings))
In practice, the bash implementation uses jq for JSON parsing and simple string concatenation for dotenv output. The actual script I'm using for a personal project runs about 150 lines, including error handling, colored output, and progress reporting—but the core logic follows the pseudocode above.
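For flavor, here's roughly what two of those pieces look like in bash. This is a sketch rather than the full script, and the function names are my own:

```bash
# Fetch one canonical secret; empty string if it doesn't exist
fetch_secret() {
  local name="$1"
  az keyvault secret show --vault-name "$VAULT_NAME" --name "$name" \
    --query value -o tsv 2>/dev/null || true
}

# Write an associative array of app-scoped variables as a dotenv file
write_dotenv() {
  local path="$1"
  local -n vars="$2"   # nameref, requires bash 4.3+
  {
    echo "# Generated by load-env.sh - do not edit"
    for key in "${!vars[@]}"; do
      printf '%s=%s\n' "$key" "${vars[$key]}"
    done
  } > "$path"
}
```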
The Security Boundary: Frontend vs. Backend
This brings us to an important architectural concern that the manifest encodes: the security boundary between frontend and backend configuration.
In a Vite-based frontend (or Create React App, Next.js, and similar), environment variables are embedded into the JavaScript bundle at build time. Only variables with a specific prefix—VITE_ for Vite, REACT_APP_ for CRA, NEXT_PUBLIC_ for Next.js—are included. This is a security feature. It means you can't accidentally expose your JWT signing secret to every browser that loads your application.
The manifest makes this explicit through the secretMappings:
{
"portal": {
"format": "dotenv",
"secretMappings": {
"api-base-url": "VITE_API_URL",
"enable-test-credentials": "VITE_ENABLE_TEST_CREDENTIALS"
}
},
"management-api": {
"format": "dotenv",
"secretMappings": {
"jwt-secret": "JWT_SECRET",
"database-connection-string": "DATABASE_URL"
}
}
}
The portal only gets variables mapped with VITE_ prefixes—safe for browser exposure. The API gets the actual secrets—never bundled into client-side code. The manifest documents and enforces this boundary.
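A cheap way to make the loader enforce the boundary rather than merely document it is a guard like this (a sketch; it hard-codes the portal, but you could flag browser-exposed apps in the manifest instead):

```bash
# Fail fast if a mapping would leak an unprefixed variable into the browser bundle
check_frontend_var() {
  local app="$1" var_name="$2"
  if [[ "$app" == "portal" && "$var_name" != VITE_* ]]; then
    echo "ERROR: $app is browser-exposed; refusing to write $var_name without a VITE_ prefix" >&2
    return 1
  fi
}
```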
Format-Aware Generation
Different application types expect different configuration formats. The loader script handles this transparently:
| Application Type | Output File | Format |
|---|---|---|
| Vite/React Frontend | .env | Dotenv with VITE_ prefix |
| Node.js Backend | .env | Standard dotenv |
| Azure Functions | local.settings.json | JSON with Values object |
| Database Tools | .env | Connection string only |
For Azure Functions specifically, the output looks like:
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "UseDevelopmentStorage=true",
"FUNCTIONS_WORKER_RUNTIME": "node",
"ApiBaseUrl": "http://localhost:7073",
"JwtSecret": "dev-jwt-secret-value-here",
"DatabaseConnectionString": "Server=localhost;Database=myapp;..."
},
"Host": {
"CORS": "*"
}
}
Same configuration, different shape. The manifest defines the mapping, and the script handles the transformation.
Handling Multiple Environments
Real applications need to support multiple environments, and developers occasionally need to point their local setup at staging or production (carefully, and with appropriate access controls). The loader script handles this through a simple argument:
# Local development (default) - uses dev vault with localhost overrides
./scripts/load-env.sh local
# Point at the dev environment's actual URLs
./scripts/load-env.sh dev
# Point at staging (requires staging vault access)
./scripts/load-env.sh staging
# Production (reserved for release automation; humans should need this only in exceptional cases)
./scripts/load-env.sh prod
For teams that frequently switch environments, I've found it helpful to add convenience wrappers in package.json:
{
"scripts": {
"env:local": "./scripts/load-env.sh local",
"env:dev": "./scripts/load-env.sh dev",
"env:staging": "./scripts/load-env.sh staging"
}
}
A word of caution: developers should never need production secrets locally. If they do, it usually indicates a gap in your staging environment's data or configuration. Grant read access to production vaults sparingly and audit its use. The convenience of the loader script shouldn't become a vector for secret sprawl.
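When someone genuinely does need elevated access, Azure RBAC keeps the grant narrow and auditable. Key Vault Secrets User is the built-in read-only role for secret data; the subscription ID and resource names below are placeholders:

```bash
az role assignment create \
  --assignee "dev@mycompany.com" \
  --role "Key Vault Secrets User" \
  --scope "/subscriptions/<sub-id>/resourceGroups/rg-myapp-staging/providers/Microsoft.KeyVault/vaults/kv-myapp-staging"
```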
Smart Defaults and Graceful Degradation
One pattern I've come to appreciate is building applications that work with minimal configuration in development while requiring explicit configuration in production. The manifest supports this by simply omitting optional services from an application's secretMappings—if a secret isn't mapped, the variable won't be set, and the application can detect this and degrade gracefully:
function getRedisClient(): Redis | null {
const host = process.env.REDIS_HOST;
// Graceful degradation: Redis is optional for local dev
if (!host) {
console.log('REDIS_HOST not configured, using in-memory fallback');
return null;
}
return new Redis({
host,
port: parseInt(process.env.REDIS_PORT || '6380', 10),
password: process.env.REDIS_PASSWORD,
tls: process.env.REDIS_TLS === 'true' ? {} : undefined,
});
}
For local development, you might configure the management API without Redis at all:
{
"management-api": {
"secretMappings": {
"jwt-secret": "JWT_SECRET",
"database-connection-string": "DATABASE_URL"
// Redis mappings omitted for local dev
}
}
}
This means a new developer can clone the repo, run the loader script, and start the application immediately—even if they don't have Redis running locally. The application gracefully degrades to in-memory alternatives. When they deploy to staging or production, the full infrastructure is available and configured.
Integrating with CI/CD
The final piece is connecting this pattern to your deployment pipeline. Rather than maintaining a parallel set of variables in GitHub Actions or Azure DevOps, the pipeline pulls directly from Key Vault.
For GitHub Actions with Azure, this looks like:
name: Deploy
on:
push:
branches: [main]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Azure Login
uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: Fetch secrets from Key Vault
id: keyvault
uses: azure/get-keyvault-secrets@v1
with:
keyvault: kv-myapp-prod
secrets: 'JWT-SECRET, DATABASE-CONNECTION-STRING, STRIPE-API-KEY'
- name: Build frontend
run: npm run build --workspace=portal
env:
VITE_API_URL: https://api.mycompany.com
VITE_ENVIRONMENT: prod
- name: Deploy API
uses: azure/webapps-deploy@v2
with:
app-name: myapp-api-prod
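Note that the azure/get-keyvault-secrets action has been deprecated, so you may prefer fetching values with the Azure CLI in a plain run step after azure/login. A sketch that masks the value before handing it to later steps:

```bash
# Inside a `run:` step, after azure/login
jwt_secret=$(az keyvault secret show --vault-name kv-myapp-prod \
  --name JWT-SECRET --query value -o tsv)
echo "::add-mask::$jwt_secret"                 # keep it out of the job log
echo "JWT_SECRET=$jwt_secret" >> "$GITHUB_ENV" # expose it to subsequent steps
```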
For App Service specifically, Azure supports Key Vault references in application settings. Instead of copying secrets into App Service configuration, you reference them:
resource appService 'Microsoft.Web/sites@2023-01-01' = {
name: 'myapp-api-${environment}'
properties: {
siteConfig: {
appSettings: [
{
name: 'JWT_SECRET'
value: '@Microsoft.KeyVault(VaultName=${keyVault.name};SecretName=JWT-SECRET)'
}
{
name: 'DATABASE_URL'
value: '@Microsoft.KeyVault(VaultName=${keyVault.name};SecretName=DATABASE-CONNECTION-STRING)'
}
]
}
}
}
With Key Vault references, App Service fetches the current secret value at runtime. When you rotate a secret in Key Vault, the application picks up the new value on its next restart—no redeployment required.
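Rotation then becomes a two-command operation (the resource names follow the earlier examples; the resource group is a placeholder):

```bash
# Write a fresh value, then restart so the Key Vault reference re-resolves
az keyvault secret set --vault-name kv-myapp-prod --name "JWT-SECRET" \
  --value "$(openssl rand -base64 48)" -o none
az webapp restart --name myapp-api-prod --resource-group rg-myapp-prod
```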
The .gitignore Imperative
None of this works if secrets accidentally end up in source control. Your .gitignore must be ironclad:
# Environment files - never commit these
.env
.env.*
!.env.example
# Azure Functions local settings
local.settings.json
# Local overrides
*.local
I also recommend a pre-commit hook that rejects any attempt to commit files matching these patterns:
#!/bin/bash
# .git/hooks/pre-commit
# Check for .env files
if git diff --cached --name-only | grep -E '\.env($|\.)' | grep -v '\.example$'; then
echo "Error: Attempting to commit .env file(s). This is not allowed."
exit 1
fi
# Check for Azure Functions local settings
if git diff --cached --name-only | grep -F 'local.settings.json'; then
echo "Error: Attempting to commit local.settings.json. This is not allowed."
exit 1
fi
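Because .git/hooks isn't version-controlled, commit the hook to a tracked directory and point Git at it, so every clone gets the protection (assuming the hook above lives at .githooks/pre-commit):

```bash
chmod +x .githooks/pre-commit
git config core.hooksPath .githooks   # one-time setup, per clone
```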
Onboarding: From Clone to Running in Minutes
The payoff for all this infrastructure is a dramatically simplified onboarding experience. Here's what a new developer's first day looks like:
# 1. Clone the repository
git clone https://github.com/myorg/myapp.git
cd myapp
# 2. Install dependencies
npm install
# 3. Authenticate with Azure
az login
# 4. Load environment configuration
./scripts/load-env.sh local
# 5. Start developing
npm run dev
That's it. No hunting for configuration values. No asking colleagues for files. No wondering whether your setup matches everyone else's. The vault is the source of truth, the manifest defines the mapping, the script generates the files, and the developer is productive in minutes rather than hours.
Tradeoffs and Considerations
This pattern isn't without costs, and it's worth being explicit about them:
Cloud dependency: Developers need CLI authentication and vault access before they can generate local configuration. For open-source projects or teams not already using a cloud vault, this may be prohibitive.
Script complexity: The loader script grows as applications are added to the monorepo. Each new application type might need a new output format. This is manageable but requires maintenance.
Implicit configuration: Developers might not understand where values come from. When something breaks, they need to understand the three-layer model to debug effectively. Good error messages and documentation help.
Stale local configs: If configuration changes upstream, developers must re-run the loader script. Consider adding a reminder to your PR template or making the script part of your standard "pull and update" workflow; a lightweight freshness check (sketched just after this list) also helps.
Offline limitations: First-time setup requires network access to the vault. Once generated, .env files work offline, but you can't onboard a new developer on an airplane.
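The freshness check mentioned under stale configs can be as small as this (hypothetical path; tune the age threshold to taste):

```bash
# Warn when a generated file is more than 7 days old
if [[ -n $(find apps/portal/.env -mtime +7 2>/dev/null) ]]; then
  echo "WARNING: apps/portal/.env is over a week old; re-run ./scripts/load-env.sh local"
fi
```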
Alternatives Not Taken
It's worth briefly mentioning approaches I considered and rejected:
Committed .env.example files: Requires manual copying and filling in values. Drifts from reality. Doesn't solve the "ask Sarah" problem.
Encrypted .env files in repo: Key distribution becomes its own problem. Who has the decryption key? How do you rotate it?
Container-based dev environments: Higher overhead, requires Docker knowledge, less flexible for polyglot development. Good for some teams, but adds complexity.
Manual documentation: Prone to drift, slow onboarding, doesn't scale. The only thing worse than no documentation is wrong documentation.
Alternatives Worth Considering
Hybrid local/dev environments for tier-specific development: In a monorepo, you might want to work only on the frontend while connecting to cloud services deployed in your development environment. You could extend the manifest and script to handle this case—perhaps a local-frontend environment that uses localhost for the portal but dev URLs for backend services. For my needs so far, I pull the variables for development and manually override the endpoints I want to run locally. It's not elegant, but it works until the pattern proves itself enough to warrant the extra complexity.
Where This Leads
Once you've established a vault as the source of truth with a manifest-driven loader, interesting possibilities emerge. Secret rotation becomes a single-point operation. Audit logs show exactly who accessed which configuration and when. Parity across environments is guaranteed rather than hoped for.
The manifest also becomes documentation. New team members can read env-manifest.json to understand what configuration each application needs. When adding a new service, the manifest change in the pull request makes the configuration requirements explicit and reviewable.
Looking forward, this pattern positions you well for more sophisticated configuration management: automatic rotation policies, integration with CI/CD secret scanning, even zero-trust architectures where applications authenticate using managed identities rather than shared secrets.
If you've been battling environment variable chaos, I'd encourage you to start with a single vault and a minimal manifest that covers one application. Write a loader script that generates one .env file. Experience the simplicity of ./scripts/load-env.sh local instead of "ask Sarah for the database password." Then gradually extend the manifest to more applications, add more sophisticated features, and watch onboarding time collapse from days to minutes.
Sarah will thank you. And so will the next developer who joins the team.

