Most deployment setups do not fail because the tools are weak. They fail because the process drifts: one person edits `nginx.conf`, another tweaks Compose, and soon nobody knows what the real deployment contract is.
SwiftDeploy solves that with a simple principle: declare intent once in `manifest.yaml`, and make everything else derived and verifiable.
It started as a config-generation CLI, but it became more useful when policy and observability were added. The interesting part is not that it creates files; it is that it can block unsafe actions when runtime signals say “don’t proceed.”
## The core idea in one view
From left to right:
- Clients hit Nginx.
- Nginx proxies to the Go app on the internal network.
- The `swiftdeploy` CLI generates configs and asks OPA for policy decisions on loopback (127.0.0.1), not through public ingress.
That separation is intentional. User traffic and control decisions use different paths.
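
A minimal sketch of how that separation can show up in the generated Compose file — service and network names here are assumptions for illustration; the images and ports come from the manifest below:

```yaml
# Sketch only: illustrates the traffic split, not the exact generated file.
services:
  nginx:
    image: nginxinc/nginx-unprivileged:stable-alpine
    ports:
      - "8080:8080"              # public ingress for user traffic
    networks: [app_net]
  app:
    image: swiftdeploy-api-go:latest
    expose:
      - "3000"                   # reachable only on the internal network
    networks: [app_net]
  opa:
    image: openpolicyagent/opa:latest
    ports:
      - "127.0.0.1:8181:8181"    # loopback only: control-plane decisions for the CLI
    networks: [app_net]

networks:
  app_net: {}
```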
## What makes this practical
Real manifest excerpt:
```yaml
services:
  image: swiftdeploy-api-go:latest
  port: 3000
  mode: stable
  app_version: "1.0.0"

nginx:
  image: nginxinc/nginx-unprivileged:stable-alpine
  port: 8080
  proxy_timeout: 30s

opa:
  image: openpolicyagent/opa:latest
  port: 8181

policy:
  data_file: policy-data/thresholds.json
  decision_timeout_seconds: 5
```
From this, `./swiftdeploy init` renders:

- `template-output/nginx.conf`
- `template-output/docker-compose.yml`
From there:

- `validate` confirms preflight health,
- `deploy` starts the policy sidecar + stack,
- `promote` switches stable/canary mode safely,
- `status` and `audit` leave an evidence trail.
## Why this is more than a wrapper script
The CLI enforces two decision points:

1. **Pre-deploy gate.** Host conditions (disk/cpu/memory snapshot) are checked against policy before full startup proceeds.
2. **Pre-promote gate (canary -> stable).** Windowed metrics from `/metrics` are evaluated before promotion completes.
So this is not only “template rendering.” It is decision-aware deployment.
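
Conceptually, each gate is a loopback policy query whose answer decides whether the step proceeds. A minimal shell sketch of the pre-deploy check, assuming `jq` is available (the real CLI does this internally; the input fields mirror the verification appendix below):

```bash
# Ask OPA on loopback whether the pre-deploy gate allows startup.
# Input fields are illustrative; the CLI builds them from a host snapshot.
decision=$(curl -sS -X POST "http://127.0.0.1:8181/v1/data/policy/infrastructure/decision" \
  -H "Content-Type: application/json" \
  -d '{"input":{"context":"pre-deploy","disk_free_gb":50,"cpu_load":0.3}}')

# Proceed only if the policy explicitly allows.
if [ "$(echo "$decision" | jq -r '.result.allow')" = "true" ]; then
  echo "policy allowed: continuing deploy"
else
  echo "policy denied: aborting" >&2
  exit 1
fi
```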
## What I learned while building it
- **Policy paths are easy to get subtly wrong.** `POST /v1/data/policy/infrastructure/decision` is valid. `POST /v1/data/infrastructure/decision` often returns `{}` and can waste debugging time.
- **Ingress isolation should be proven, not assumed.** Hitting OPA-shaped URLs on Nginx should return an app-level `404`/non-OPA response, while loopback OPA returns policy JSON.
- **Canary tests need traffic.** Windowed error-rate checks can look "healthy" if there is no meaningful request volume during the window.
- **Generated files are outputs, not source files.** If you edit generated Compose/Nginx files directly, `init` will overwrite them.
## Outcome
With this setup, deployment becomes:
- declarative (`manifest.yaml` as source of truth),
- reproducible (templates + generated configs),
- observable (`/metrics`, `status`),
- enforceable (OPA gates),
- auditable (`history.jsonl` -> `audit_report.md`).
That is the difference between “it runs on my machine” and “I can prove this rollout is safe.”
## Verification Appendix (commands + expected checks)
Run from `swiftdeploy-project/`.
### A) Path flow
```bash
./swiftdeploy build
./swiftdeploy init
./swiftdeploy validate
./swiftdeploy deploy
./swiftdeploy promote canary
./swiftdeploy promote stable
./swiftdeploy status 5
./swiftdeploy audit
./swiftdeploy teardown --clean
```
Expected highlights:
- `validate` reports all checks PASS.
- `deploy` ends with the stack healthy.
- `promote canary` enables `X-Mode: canary` on `/healthz`.
- `promote stable` removes `X-Mode` after a successful gate (see the header check below).
- `status` appends records to `history.jsonl`.
- `audit` generates `audit_report.md`.
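
One quick way to observe the mode switch is to inspect the `/healthz` response headers through ingress (a simple check, assuming the header behaves exactly as described above):

```bash
# After `promote canary`, expect an `X-Mode: canary` response header;
# after `promote stable`, the header should be gone.
curl -sS -D - -o /dev/null "http://127.0.0.1:8080/healthz"
```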
SCREENSHOT_VALIDATE_PASS
SCREENSHOT_DEPLOY_HEALTHY
SCREENSHOT_PROMOTE_CANARY_AND_HEALTHZ
SCREENSHOT_PROMOTE_STABLE_AND_HEALTHZ
SCREENSHOT_STATUS_OUTPUT
SCREENSHOT_AUDIT_REPORT
SCREENSHOT_TEARDOWN_CLEAN
### B) Confirm metrics and policy endpoints
Check that required Prometheus metric families are exposed through Nginx ingress:
curl -sS "http://127.0.0.1:8080/metrics" | grep -E "http_requests_total|http_request_duration_seconds|app_uptime_seconds|app_mode|chaos_active" | head
Check infrastructure policy decision endpoint (OPA loopback):
```bash
curl -sS -X POST "http://127.0.0.1:8181/v1/data/policy/infrastructure/decision" \
  -H "Content-Type: application/json" \
  -d '{"input":{"context":"pre-deploy","disk_free_gb":50,"cpu_load":0.3}}'
```
Check canary policy decision endpoint (OPA loopback):
```bash
curl -sS -X POST "http://127.0.0.1:8181/v1/data/policy/canary/decision" \
  -H "Content-Type: application/json" \
  -d '{"input":{"context":"pre-promote","window_seconds":30,"error_rate":0.001,"p99_latency_ms":100}}'
```
Expected:
- Metrics families visible.
- OPA responses contain a `result` object and `allow`.
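
An allow decision comes back wrapped in OPA's standard `result` envelope; an illustrative shape (any extra fields under `result` depend on the policy):

```json
{
  "result": {
    "allow": true
  }
}
```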
SCREENSHOT_METRICS_OUTPUT
SCREENSHOT_OPA_INFRA_ALLOW
SCREENSHOT_OPA_CANARY_ALLOW
### C) Confirm OPA is not exposed through ingress
```bash
curl -sS -o /dev/null -w "%{http_code}\n" "http://127.0.0.1:8080/v1/data/policy/infrastructure/decision"
```
Expected:
- `404` or a non-OPA response via the Nginx path.
SCREENSHOT_INGRESS_ISOLATION_CHECK
### D) Test deny and recovery scenarios
Force a deploy deny by tightening thresholds in `policy-data/thresholds.json`, then:
```bash
./swiftdeploy teardown
./swiftdeploy deploy
```
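
For reference, a hypothetical tightened `policy-data/thresholds.json` that should trip the pre-deploy gate; the key names here are illustrative, so mirror whatever the actual policy reads:

```json
{
  "min_disk_free_gb": 10000,
  "max_cpu_load": 0.01
}
```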
Force a promote deny:
```bash
./swiftdeploy promote canary
curl -sS -X POST "http://127.0.0.1:8080/chaos" \
  -H "Content-Type: application/json" \
  -d '{"mode":"error","rate":1.0}'
./swiftdeploy promote stable
```
Recover:
```bash
curl -sS -X POST "http://127.0.0.1:8080/chaos" \
  -H "Content-Type: application/json" \
  -d '{"mode":"recover"}'
./swiftdeploy promote stable
```
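
Before retrying the gate, it is worth confirming the chaos injection is actually cleared; assuming the `chaos_active` gauge drops to `0` after recovery, a quick check through ingress:

```bash
curl -sS "http://127.0.0.1:8080/metrics" | grep "chaos_active"
```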
SCREENSHOT_DEPLOY_DENIED_BY_POLICY
SCREENSHOT_PROMOTE_DENIED_BY_POLICY
SCREENSHOT_PROMOTE_SUCCESS_AFTER_RECOVER