I wanted one clean deployment path for local testing and real cluster rollout, so I packaged the same app for both Docker Compose and Kubernetes (Helm).
This post is the exact runbook I now use for darkedges-entraid-tokenexchange.
Why this setup works
This app is a Next.js token-exchange broker with:
- OIDC sign-in (single or multiple providers)
- Redis-backed sessions
- Entra token exchange support
- Optional Vault integration
The key idea is simple: run the same container image everywhere, then swap environment values per environment.
Local path: Docker Compose
The repository already includes a ready-to-run docker-compose.yaml with:
- `app` on port 3000
- `vault`
- `vault-init` and `vault-data-init`
Step 1: Create your environment file
cp .env.example .env
Populate at minimum:
NEXTAUTH_SECRET=<random-32-byte-secret>
NEXTAUTH_URL=http://localhost:3000
ENTRAID_TENANT_ID=<tenant-guid>
ENTRAID_CLIENT_ID=<app-client-id>
ENTRAID_CLIENT_SECRET=<app-client-secret>
ENTRAID_SCOPE=https://graph.microsoft.com/.default
REDIS_URL=redis://localhost:6379
VAULT_ADDR=http://localhost:18200
VAULT_TOKEN=dev-root-token
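`NEXTAUTH_SECRET` needs to be a strong random value. One way to generate a 32-byte secret, assuming `openssl` is available on your machine:

```shell
# Generate a random 32-byte secret, base64-encoded, suitable for NEXTAUTH_SECRET
openssl rand -base64 32
```

Paste the output into `.env` as the `NEXTAUTH_SECRET` value.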
If you support multiple providers, also set OIDC_PROVIDERS as valid JSON.
OIDC_PROVIDERS: quick configuration pattern
Use a JSON array. Each provider entry needs the same core fields.
OIDC_PROVIDERS='[
{
"id": "pingfederate",
"name": "PingFederate",
"description": "Sign in with corporate credentials",
"icon": "lock",
"baseUrl": "https://id.example.com",
"clientId": "your-client-id",
"clientSecret": "your-client-secret",
"scopes": ["openid", "profile", "email"]
}
]'
For multiple providers, add more objects to the same array (for example Entra, PingFederate, Auth0).
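As a sketch, a two-provider array might look like the following. The `msentraid` id matches the provider name used elsewhere in this setup; the Entra `baseUrl` format and all credential values here are illustrative placeholders, so check them against your own IdP configuration:

```shell
# Illustrative two-provider array; "id" values must be unique.
# The Entra baseUrl shown is an assumption — verify against your tenant.
export OIDC_PROVIDERS='[
  {
    "id": "msentraid",
    "name": "Microsoft Entra ID",
    "description": "Sign in with Entra",
    "icon": "lock",
    "baseUrl": "https://login.microsoftonline.com/<tenant-guid>/v2.0",
    "clientId": "entra-client-id",
    "clientSecret": "entra-client-secret",
    "scopes": ["openid", "profile", "email"]
  },
  {
    "id": "pingfederate",
    "name": "PingFederate",
    "description": "Sign in with corporate credentials",
    "icon": "lock",
    "baseUrl": "https://id.example.com",
    "clientId": "ping-client-id",
    "clientSecret": "ping-client-secret",
    "scopes": ["openid", "profile", "email"]
  }
]'
```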
Tips:
- Keep it valid JSON inside the quotes: double quotes for keys/strings, no trailing commas.
- Keep provider `id` values unique.
- Make sure each `baseUrl` and the client credentials match the IdP app registration.
- Use each provider's correct callback URL in the IdP config.
If login page/provider rendering fails, the first thing to check is JSON validity in OIDC_PROVIDERS.
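A quick way to check validity is to pipe the value through a JSON parser before starting the stack. This sketch assumes `python3` is installed; any JSON validator works the same way:

```shell
# Fail fast if OIDC_PROVIDERS is not parseable JSON
echo "$OIDC_PROVIDERS" | python3 -m json.tool > /dev/null \
  && echo "OIDC_PROVIDERS: valid JSON" \
  || echo "OIDC_PROVIDERS: INVALID JSON"
```

A trailing comma or single-quoted keys will make the check fail, which is exactly the class of mistake that breaks provider rendering.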
Entra app types you should provision
For this project, think in terms of three Entra app registrations (each with an Enterprise Application/service principal):
- Web login app for OIDC sign-in (the `msentraid` provider in `OIDC_PROVIDERS`)
- API/resource app exposing your delegated scope (for example `access_as_user`)
- Backend token-exchange app for OBO/Graph calls (`ENTRAID_CLIENT_ID`/`ENTRAID_CLIENT_SECRET`)
If you support External ID (ciamlogin.com issuer), add a fourth confidential app and map it to ENTRAID_CIAM_* variables.
1-minute env mapping check
| App | Key env vars |
|---|---|
| Web login app | `OIDC_PROVIDERS[*].baseUrl`, `OIDC_PROVIDERS[*].clientId`, `OIDC_PROVIDERS[*].clientSecret` |
| API/resource app | Usually represented in `NEXT_PUBLIC_ENTRAID_SCOPE` |
| Backend token-exchange app | `ENTRAID_TENANT_ID`, `ENTRAID_CLIENT_ID`, `ENTRAID_CLIENT_SECRET`, `ENTRAID_SCOPE` |
| Optional CIAM app | `ENTRAID_CIAM_TENANT_ID`, `ENTRAID_CIAM_CLIENT_ID`, `ENTRAID_CIAM_CLIENT_SECRET`, `ENTRAID_CIAM_SCOPE` |
Step 2: Build and start
docker compose up --build -d
Step 3: Verify runtime
docker compose ps
docker compose logs -f app
Open:
- App: http://localhost:3000
- Vault UI (optional): http://localhost:18200
Step 4: Stop or reset
docker compose down
Full reset including volumes:
docker compose down -v
Cluster path: Kubernetes + Helm
The chart is already in the repo at:
helm/darkedges-entraid-tokenexchange
There is also an environment-flavored values file:
helm/broker.yaml
Step 1: Build and push image
docker build -t <registry>/darkedges-entraid-tokenexchange:<tag> .
docker push <registry>/darkedges-entraid-tokenexchange:<tag>
Step 2: Create namespace
kubectl create namespace broker
Step 3: Add pull secret for private registry
kubectl create secret docker-registry darkedges-registry-credentials \
--docker-server=<registry> \
--docker-username=<username> \
--docker-password=<password> \
--namespace broker
Step 4: Update values
In helm/broker.yaml (or your own values file), set:
- `image.repository` and `image.tag`
- `ingress.hosts` and `ingress.tls`
- `env` for non-sensitive config
- `secretEnv` for secrets
- `REDIS_URL` to a reachable Redis service
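Putting those fields together, a values file along these lines is one plausible shape. The field names come from the list above, but the exact chart schema may differ, so treat this as a sketch and check it against the chart's default `values.yaml`; all hostnames and credentials are placeholders:

```yaml
image:
  repository: registry.example.com/darkedges-entraid-tokenexchange
  tag: "1.0.0"

# Matches the pull secret created in Step 3
imagePullSecrets:
  - name: darkedges-registry-credentials

ingress:
  hosts:
    - host: broker.example.com
  tls:
    - secretName: broker-tls
      hosts:
        - broker.example.com

# Non-sensitive config
env:
  NEXTAUTH_URL: https://broker.example.com
  REDIS_URL: redis://redis.broker.svc.cluster.local:6379

# Secrets (move to a secret manager before go-live; see checklist below)
secretEnv:
  NEXTAUTH_SECRET: <random-32-byte-secret>
  ENTRAID_CLIENT_SECRET: <app-client-secret>
```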
Step 5: Install or upgrade release
helm upgrade --install darkedges-entraid-tokenexchange \
helm/darkedges-entraid-tokenexchange \
-f helm/broker.yaml \
--namespace broker \
--create-namespace
Step 6: Verify rollout
kubectl get pods -n broker
kubectl get svc -n broker
kubectl get ingress -n broker
kubectl rollout status deploy/darkedges-entraid-tokenexchange -n broker
kubectl logs deploy/darkedges-entraid-tokenexchange -n broker --tail=200
Step 7: Access the app
Preferred:
- Ingress host over HTTPS
Quick test from laptop:
kubectl port-forward svc/darkedges-entraid-tokenexchange 3000:3000 -n broker
Then open http://localhost:3000.
Lessons learned
Three things made this reliable:
- Keep one container artifact and promote it across environments
- Keep non-secret and secret config split (`env` vs `secretEnv`)
- Validate rollout every time with both status and logs, not just pod readiness
Production checklist
Before go-live:
- Move secrets from values files to a proper secret manager
- Set `NEXTAUTH_URL` to your real HTTPS URL
- Configure CPU and memory requests/limits
- Use managed or HA Redis
- Harden Vault TLS/auth and remove dev credentials
Handy commands
Upgrade:
helm upgrade darkedges-entraid-tokenexchange \
helm/darkedges-entraid-tokenexchange \
-f helm/broker.yaml \
-n broker
Rollback:
helm rollback darkedges-entraid-tokenexchange 1 -n broker
Uninstall:
helm uninstall darkedges-entraid-tokenexchange -n broker
If you already have Docker Compose and Helm in this repo, this workflow should get you from local proof-of-concept to repeatable Kubernetes deployment with minimal friction.