A plain-English walkthrough of building SwiftDeploy — a declarative deployment tool in Go and Python
Imagine handing someone a single index card and saying: everything you need to know about how this system runs is on this card. No digging through config files, no cross-referencing documentation, no wondering whether what's on disk matches what's actually running. Just the card.
That's the idea behind SwiftDeploy. You write one file — manifest.yaml — and a CLI tool called swiftdeploy reads it and generates everything else: the Nginx configuration, the Docker Compose file, the running containers. If you delete the generated files and run swiftdeploy init again, the stack rebuilds identically. The manifest is the only thing that matters.
Here's how every piece of it works.
Why this matters
Most deployment setups have a coordination problem. The Nginx config lives in one place. The Docker Compose file lives in another. The environment variables live somewhere else. They're all supposed to agree on the same port numbers, the same network name, the same image — but nothing enforces that. You change one and forget to update another, and the stack breaks in a way that takes an hour to trace.
Config drift — when multiple config files that are supposed to agree on the same values silently fall out of sync over time.
SwiftDeploy solves this by having only one file that a human ever edits. Everything else is generated from it. You can't have drift between files that are all derived from the same source.
The architecture in one sentence
swiftdeploy reads manifest.yaml, renders Jinja2 templates into nginx.conf and docker-compose.yml, brings up a Go HTTP service and an Nginx reverse proxy as Docker containers, and manages their lifecycle through five subcommands.
The CLI never sits inside a running container — it runs on the host and talks to Docker from the outside. A bug in the CLI cannot take down the running stack.
Piece 1: The manifest — one file to rule them all
Everything starts here. manifest.yaml describes the entire deployment:
services:
  image: swift-deploy-1-node:latest
  port: 3000
  version: "1.0.0"
nginx:
  port: 8080
  proxy_timeout: 30
  contact: "ops@swiftdeploy.internal"
network:
  name: swiftdeploy-net
  driver_type: bridge
mode: stable
The CLI reads this file, resolves all values, and uses them as variables when rendering config templates. Nothing in the generated files is hardcoded.
One design rule that matters: the manifest is immutable during normal operations. The only exception is the mode field, which swiftdeploy promote updates in-place when switching between deployment modes. Every other subcommand reads the manifest and generates from it — never writes back to it.
This also comes with a layered override system. You can pass flags like --nginx.port=9090 at invocation time, and they take effect for that session only without touching the file:
hardcoded defaults → manifest.yaml → CLI flags (highest priority)
Same pattern used by Docker and Kubernetes — predictable, auditable, no surprises.
Why flag overrides exist
The manifest is the source of truth for stored configuration — what the system looks like by default, in its resting state. But deployment is rarely one-size-fits-all.
Twelve-factor app — a methodology for building software that runs cleanly in any environment. One of its core principles: configuration that varies between environments (ports, credentials, endpoints) should come from the environment, not from the codebase.
Consider what changes across environments without the flag system:
- A developer running locally might need a different port because 8080 is already taken by another service
- A CI pipeline might want to override the contact email to point to an automated alert channel
- A staging environment might need a longer proxy_timeout because its upstream dependencies are slower
Without flags, each of these would require editing manifest.yaml — which means either maintaining separate manifest files per environment, or making changes you then have to remember to revert. Both are sources of mistakes.
With flags, manifest.yaml stays clean and environment-agnostic:
# local dev — port conflict on 8080
swiftdeploy deploy --nginx.port=9090
# staging — slower upstreams, different contact
swiftdeploy deploy --nginx.proxy_timeout=60 --nginx.contact=staging-alerts@corp.com
# CI — validate only, don't care about the contact field
swiftdeploy validate --services.image=swift-deploy-1-node:ci-build-42
The critical constraint: flags are session-only. They affect what gets generated and deployed, but they never write back to manifest.yaml. The file stays as the committed, reviewable record of what the system looks like in its canonical state. Flags are the escape hatch for the edges, not a replacement for the centre.
This is the same pattern Helm, Docker, and Kubernetes kubectl all use — a base config with runtime overrides layered on top.
Piece 2: The CLI and layered config resolution
swiftdeploy is a Python executable — no .py extension, just a shebang line at the top (#!/usr/bin/env python3) and chmod +x. The OS reads the shebang and knows which interpreter to use. The language is invisible to whoever calls it.
Shebang — the #! line at the top of a script that tells the operating system which interpreter to use when the file is executed directly. #!/usr/bin/env python3 means "find Python 3 wherever it lives on this machine and use it."
It has five subcommands, each a thin wrapper that orchestrates the other pieces:
- init — generate configs from the manifest
- validate — five pre-flight checks before anything touches Docker
- deploy — init + bring the stack up + wait until healthy
- promote — switch deployment mode with a rolling restart
- teardown — bring everything down; --clean also removes generated files
The config resolution happens in cli/config.py. It uses a deep merge:
- Start from hardcoded Python defaults
- Merge in manifest.yaml values (these win over defaults)
- Merge in any CLI flag overrides (these win over everything)
The result is a ResolvedConfig dataclass — a clean, typed object with no missing fields that every other piece of the CLI works from.
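Here's a minimal sketch of what that merge looks like in Python (the function and variable names are illustrative, not the actual cli/config.py internals):

def deep_merge(base: dict, override: dict) -> dict:
    # Later layers win; nested dicts are merged key by key rather than replaced wholesale
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

DEFAULTS = {"nginx": {"port": 8080, "proxy_timeout": 30}}
manifest = {"nginx": {"port": 8080}, "mode": "stable"}  # loaded from manifest.yaml
flags = {"nginx": {"port": 9090}}                       # parsed from --nginx.port=9090

resolved = deep_merge(deep_merge(DEFAULTS, manifest), flags)
assert resolved["nginx"]["port"] == 9090            # the flag wins
assert resolved["nginx"]["proxy_timeout"] == 30     # the default survives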
Piece 3: Template rendering — generating configs, not writing them
nginx.conf and docker-compose.yml are never written by hand. They are the output of rendering Jinja2 templates with the resolved config values.
Jinja2 — a Python templating engine. You write a file with {{ variable }} placeholders, provide a dictionary of values, and Jinja2 produces the final file with all placeholders substituted. The same engine Flask and Ansible use.
The templates live in templates/ and look like this:
upstream swiftdeploy_app {
server {{ service_host }}:{{ service_port }};
}
server {
listen {{ nginx_port }};
proxy_read_timeout {{ proxy_timeout }}s;
}
One deliberate constraint: templates contain no logic. No if statements, no loops. All decisions happen in cli/generator.py before rendering — the template receives flat, already-resolved values. Logic in templates is a maintenance trap; it belongs in code.
The generator uses StrictUndefined — if a variable is referenced in a template but not provided, rendering fails immediately with a clear error rather than silently producing a config with an empty field. Same philosophy as a compiler refusing to let you use an uninitialised variable.
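As a sketch, strict rendering looks like this (the template filename and variable values are assumptions; the Environment options are standard Jinja2 API):

from jinja2 import Environment, FileSystemLoader, StrictUndefined

env = Environment(loader=FileSystemLoader("templates"), undefined=StrictUndefined)
template = env.get_template("nginx.conf.j2")  # filename assumed for illustration

# Raises jinja2.exceptions.UndefinedError if any placeholder lacks a value,
# instead of silently rendering an empty string into the config
rendered = template.render(
    service_host="app",      # illustrative values; the real ones come
    service_port=3000,       # from the ResolvedConfig object
    nginx_port=8080,
    proxy_timeout=30,
)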
Running swiftdeploy init twice on the same manifest produces byte-identical output. This property — called idempotency — is what ensures you can safely delete generated files at any point and cleanly regenerate them.
Idempotent — an operation you can run multiple times and always get the same result. swiftdeploy init is idempotent: same manifest in, same config files out, regardless of how many times you run it.
Piece 4: The Go service — stable and canary modes
The HTTP service is written in Go and runs the same binary in both modes. The mode is injected as an environment variable (MODE=stable or MODE=canary) by Docker Compose at startup.
Environment variable — a named value set in a process's environment before it starts. Programs read these at startup rather than hardcoding configuration. Docker Compose injects them into containers via the environment: block.
The service exposes three endpoints:
- GET / — welcome message with current mode, version, and server timestamp
- GET /healthz — liveness check returning status and uptime in seconds
- POST /chaos — simulates degraded behaviour (canary mode only)
The architecture inside the service is hexagonal — also called ports and adapters.
Hexagonal architecture (ports and adapters) — a design pattern where the core business logic sits in the middle with no knowledge of the outside world. All external concerns — HTTP, databases, file systems — connect to it through defined interfaces called ports, with concrete implementations called adapters. You can swap an adapter without touching the core.
In practice this means:
- internal/core/service.go — pure logic, zero external dependencies. Knows about modes, chaos states, responses. Knows nothing about HTTP.
- internal/core/ports.go — defines two interfaces: ServicePort (what the HTTP layer calls in) and ChaosStore (what the core calls out to for chaos state)
- internal/adapters/http/handler.go — translates HTTP requests into core calls, and core responses into HTTP responses
- internal/adapters/store/memory.go — stores chaos state in memory, behind the ChaosStore interface
- cmd/main.go — the composition root; the only file allowed to know about all layers at once
The payoff: the /chaos endpoint returning 403 in stable mode is a policy that lives in the core, not in the HTTP layer. The HTTP adapter just translates the domain error into a status code. Swap the HTTP adapter for a gRPC adapter and the policy travels with it automatically.
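To make the ports concrete, here is a rough Go sketch of what ports.go might contain (the exact method names and struct fields are assumptions, not the real file):

package core

// ChaosState describes the current simulated failure, if any (shape assumed)
type ChaosState struct {
    Mode     string  // "slow", "error", or "" for none
    Duration int     // seconds to sleep in slow mode
    Rate     float64 // probability of a 500 in error mode
}

// ChaosStore is the outbound port: the core reads and writes chaos state
// through it without knowing where that state actually lives
type ChaosStore interface {
    Get() ChaosState
    Set(s ChaosState)
}

// ServicePort is the inbound port: adapters (HTTP today, perhaps gRPC
// tomorrow) call into the core only through this interface
type ServicePort interface {
    Mode() string
    TriggerChaos(s ChaosState) error // rejected by core policy in stable mode
}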
Piece 5: Chaos engineering — simulating failure on demand
The POST /chaos endpoint is only active in canary mode. It accepts a JSON body and puts the service into a degraded state:
{ "mode": "slow", "duration": 5 } // sleep 5 seconds before every response
{ "mode": "error", "rate": 0.5 } // return 500 on ~50% of requests
{ "mode": "recover" } // cancel any active chaos
Canary mode — a deployment strategy where a new or experimental version of a service runs alongside the stable version, receiving a subset of traffic. The name comes from "canary in a coal mine" — if something goes wrong, the canary version shows it first before it affects everyone. Here, canary mode is the same binary as stable but with chaos capabilities unlocked.
Chaos state is held in memory in MemoryChaosStore, protected by a sync.RWMutex.
Mutex (mutual exclusion lock) — a mechanism that ensures only one goroutine can access a shared value at a time. The read-write variant used here, sync.RWMutex, allows many goroutines to read simultaneously but gives a writer exclusive access. Without it, two concurrent requests modifying chaos state could corrupt each other's writes — a bug called a race condition.
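A stripped-down version of the idea (type and field names assumed):

package store

import "sync"

// ChaosState mirrors the core's chaos state (fields assumed for illustration)
type ChaosState struct {
    Mode string
    Rate float64
}

// MemoryChaosStore keeps chaos state in memory. The RWMutex lets every
// request read the state concurrently, while POST /chaos takes an
// exclusive write lock to change it.
type MemoryChaosStore struct {
    mu    sync.RWMutex
    state ChaosState
}

func (m *MemoryChaosStore) Get() ChaosState {
    m.mu.RLock() // many readers may hold this at once
    defer m.mu.RUnlock()
    return m.state
}

func (m *MemoryChaosStore) Set(s ChaosState) {
    m.mu.Lock() // writers wait for all readers to finish
    defer m.mu.Unlock()
    m.state = s
}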
One intentional design choice: /healthz is never wrapped by the chaos middleware. Chaos on health checks would cause Docker to think the container is unhealthy and restart it — not the intended behaviour. The health check must always tell the truth about liveness.
Piece 6: Nginx — the only public face of the stack
Nginx sits in front of the Go service and is the only container with a port bound to the host. The service port (3000) is declared with expose: in Docker Compose — visible within the Docker network, never reachable from outside.
Reverse proxy — a server that sits in front of your application and forwards incoming requests to it. External clients talk to the proxy, not to the application directly. This lets you add timeouts, logging, error handling, and SSL termination in one place without touching the application.
Nginx handles three things the Go service doesn't need to know about:
Custom access logs — every request is written to a named volume in a specific format:
2026-05-03T10:00:00+00:00 | 200 | 0.001s | 172.18.0.2:3000 | GET / HTTP/1.1
JSON error bodies — if the upstream service is unavailable, Nginx returns a structured JSON response instead of its default HTML error page:
{
"error": "Bad Gateway - upstream service unavailable",
"code": "502",
"service": "swiftdeploy-app",
"contact": "ops@swiftdeploy.internal"
}
Header forwarding — the Go service sets X-Mode: canary on every response in canary mode. Nginx passes this through to the client with proxy_pass_header X-Mode. Nginx also adds X-Deployed-By: swiftdeploy to every response from its side.
Two things that look trivial but aren't:
- JSON error strings use ASCII only — multi-byte characters (like em dashes) in Nginx return directives can cause the response body to be silently truncated
- add_header directives are not inherited by named @error locations — each error location repeats the header explicitly or it disappears from error responses
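Put together, the error-handling block looks roughly like this (the location name is an assumption; the directives are standard Nginx):

error_page 502 @json_502;

location @json_502 {
    default_type application/json;
    # add_header is not inherited into this named location: repeat it here
    # or it vanishes from error responses
    add_header X-Deployed-By swiftdeploy always;
    return 502 '{"error": "Bad Gateway - upstream service unavailable", "code": "502"}';
}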
Piece 7: The deploy and promote lifecycle
swiftdeploy deploy does three things in sequence:
- Runs init to ensure configs reflect the current manifest
- Runs docker compose up -d to start the containers
- Polls GET /healthz through Nginx (not directly to the service) every 2 seconds until it gets a 200, or fails after 60 seconds
Polling through Nginx matters. External monitoring tools will also go through Nginx — if the service is up but Nginx isn't routing yet, a direct poll would give a false positive.
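A minimal sketch of that polling loop in Python (the function name and URL are illustrative; the 2-second interval and 60-second budget are from the text):

import time
import urllib.request

def wait_until_healthy(url="http://localhost:8080/healthz", interval=2, timeout=60):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # Nginx isn't routing yet; keep polling
        time.sleep(interval)
    return False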
swiftdeploy promote canary is more interesting:
- Updates mode: canary in manifest.yaml in-place
- Regenerates docker-compose.yml only — Nginx config is mode-agnostic
- Restarts the app container only, with --no-deps --force-recreate — Nginx is never touched
- Confirms the switch by hitting GET / and verifying the response body says "mode": "canary" and the X-Mode: canary header is present
- If the restart fails, rolls manifest.yaml back to stable before returning an error
The in-place manifest update uses ruamel.yaml rather than PyYAML.
YAML round-trip — reading a YAML file, modifying a value in memory, and writing it back out. PyYAML does not preserve comments, key ordering, or quote styles on write — it produces a structurally correct but reformatted file. ruamel.yaml preserves all of these, so the manifest looks exactly the same after a promote as it did before, just with one field value changed.
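The round-trip edit itself is short. A sketch (the API is ruamel.yaml's; the file path is from the text):

from ruamel.yaml import YAML

yaml = YAML()  # round-trip mode by default: comments, ordering, quotes survive
with open("manifest.yaml") as f:
    manifest = yaml.load(f)

manifest["mode"] = "canary"  # the one field promote is allowed to change

with open("manifest.yaml", "w") as f:
    yaml.dump(manifest, f)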
Piece 8: Security by default
A few practices baked in rather than bolted on:
- The Go service runs as a non-root user inside the container (uid 1001). The log directory is pre-owned by that user in the Dockerfile before the named volume mounts over it — without this, Docker would create the volume owned by root and the non-root process couldn't write logs.
- Both containers use cap_drop: ALL and add back only the specific Linux capabilities each needs. Nginx needs SETUID/SETGID to drop from its master process to worker processes; the Go service needs none.
- The Docker image is built in two stages: a full Go toolchain in the builder stage, and a bare Alpine base in the runtime stage. Only the compiled binary transfers between them. Final image size: ~12MB, well under the 300MB limit.
Multi-stage Docker build — a Dockerfile technique where you use one image (the builder) to compile your code, then copy only the output into a second, minimal image (the runtime). The build tools, source code, and intermediate files never land in the final image.
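A rough sketch of such a Dockerfile, with assumed image tags, paths, and binary name:

FROM golang:1.22-alpine AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app-binary ./cmd

FROM alpine:3.20
# Create the non-root user and pre-own the log directory before the
# named volume mounts over it
RUN adduser -D -u 1001 app && mkdir -p /var/log/app && chown app /var/log/app
COPY --from=builder /app-binary /usr/local/bin/app
USER app
ENTRYPOINT ["/usr/local/bin/app"]

Nothing from the builder stage survives except the binary, which is how the final image stays around 12MB.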
The ultimate test of the architecture
A core constraint of this system is absolute reproducibility: you must be able to delete all generated files, re-run swiftdeploy init, and verify they regenerate exactly as they were.
Every design decision above serves this constraint:
- Templates are committed; generated files are not (they go in .gitignore)
- init is idempotent — same manifest always produces identical output
- ruamel.yaml keeps the manifest intact through promote so a subsequent init still has a valid source to read from
- StrictUndefined in Jinja2 means a missing template variable fails loudly during init, not silently at runtime
If swiftdeploy init breaks, the entire stack breaks. That single command is the load-bearing piece.