Neural Download

The 12 Factor App in 13 Minutes | Coffee Time

https://www.youtube.com/watch?v=OU-89UaPG-Q

Twelve rules, written in 2011 by Heroku's co-founder Adam Wiggins, that describe every way your backend can break in production. They're called the 12-Factor App methodology, and if you've ever been paged at 2am because your backend does something unexpected, one of these factors was being violated.

Here's each factor with the specific failure mode it prevents.

I. Codebase

One app, one repo, many deploys.

Skip it and you end up with two teams maintaining "their version" of the same service in separate repos. A critical CVE gets patched in repo A, never makes it to repo B. One region gets exploited. The other is fine. Nobody can tell you which commit is running where.

One Git repo per app. Every environment — dev, staging, prod — is a deploy of a known commit.

II. Dependencies

Declare them. Isolate them.

"Works on my machine" is usually this factor being violated. You ran brew install imagemagick two years ago. Your laptop has it. CI has it. Production container doesn't. Image uploads silently return 500.

Every dependency gets declared — package.json, requirements.txt, go.mod — and shipped with the app. No reliance on system-wide packages.
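As a sketch, a pinned Python manifest might look like this (package choices illustrative — e.g. Pillow standing in as a declared image library instead of a brew-installed ImageMagick binary):

```
# requirements.txt — every dependency pinned and shipped with the app
Pillow==10.3.0      # image processing as a declared dep, not a system package
requests==2.32.3
```

CI and production build from this file, so the container has exactly what your laptop has.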

III. Config

Store it in the environment.

The one that ends careers. Hardcode DB_PASSWORD = "hunter2" into your settings file, push to a public repo, and automated scrapers will find it in minutes. There are documented cases of AWS credentials being exploited into five-figure bills before the engineer even woke up.

Config (anything that varies between environments) lives in environment variables. Code and config stay separate. Nothing sensitive gets committed.
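A minimal Python sketch of this factor — config pulled from the environment, with required values failing fast at boot (the variable names are hypothetical):

```python
import os

def load_config(env=os.environ):
    """Pull settings from the environment; nothing sensitive lives in code."""
    return {
        # Required: a missing value raises KeyError at boot, not at 2am.
        "database_url": env["DATABASE_URL"],
        # Optional, with a safe default.
        "debug": env.get("DEBUG", "false").lower() == "true",
    }
```

The settings file never contains a password, so there's nothing to leak when the repo goes public.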

IV. Backing Services

Treat them as attached resources.

Your database, cache, and queue are not part of your app. They're attached to it through URLs. If you've got localhost:5432 burned into forty source files, moving to a managed database becomes a grep-and-replace nightmare. Worse, staging accidentally hits production data.

One env var = one URL. Swap the URL, swap the service. Your code never knew.
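A sketch of the attached-resource idea in Python: the app gets one URL and derives everything from it, so swapping the service means swapping an env var (the URL below is hypothetical):

```python
from urllib.parse import urlparse

def connect_params(database_url):
    """Turn a backing-service URL into connection parameters.
    Change the URL and the app connects somewhere else, unchanged."""
    u = urlparse(database_url)
    return {
        "host": u.hostname,
        "port": u.port,
        "user": u.username,
        "password": u.password,
        "dbname": u.path.lstrip("/"),
    }
```

Point DATABASE_URL at a managed Postgres instead of localhost:5432 and no source file needs editing.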

V. Build, Release, Run

Three stages, strictly separated.

SSH into prod, git pull, restart. Two days later, nobody knows what version is live. A bug hits, you try to roll back — the old code is gone.

Build compiles code into an immutable artifact. Release combines the artifact with this environment's config. Run just executes. Rollback means pointing at a previous release.
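The three stages can be sketched as commands (image names and release numbers hypothetical — a shape, not a definitive pipeline):

```shell
# Build: compile code + dependencies into an immutable artifact,
# tagged with the exact commit it was built from.
docker build -t myapp:3f2c1ab .

# Release: combine the artifact with this environment's config -> release v42
# (this step is platform-specific: Heroku, Kubernetes, etc.)

# Run: execute release v42. Rollback = run v41 again.
# No rebuild, no git pull on prod.
```

Because the artifact is immutable and the release is numbered, "what version is live?" always has an answer.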

VI. Processes

Stateless. Always.

Store user sessions in a Python dict, scale to three instances behind a load balancer, and users start getting logged out at random. The load balancer round-robins requests across instances, and each instance has its own private memory.

Sessions in Redis. Uploads in S3. Queues in a message broker. Your process becomes disposable. Kill it, start another, nothing is lost.
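One way to keep the process disposable is to hide session state behind a store interface — a minimal Python sketch (an in-memory store shown for shape; production would point the same interface at Redis):

```python
class InMemoryStore:
    """Stands in for Redis in this sketch; same get/set shape."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

def handle_login(store, session_id, user):
    # State lives in the store, never in the process.
    store.set(session_id, user)

def current_user(store, session_id):
    return store.get(session_id)
```

Kill any web process and restart it: the session survives, because the process never owned it.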

VII. Port Binding

The app exports itself.

If your app assumes Apache is in front of it parsing requests, you can't move it to Docker. It can't stand alone.

One line: app.listen(3000). The app binds its own port and exports its own service. Any reverse proxy is external.
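The same idea in Python using only the standard library — the app binds its own port, taken from the environment (PORT per the Heroku convention):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

def make_server():
    # The app exports its own service; any reverse proxy is external.
    port = int(os.environ.get("PORT", "3000"))
    return HTTPServer(("0.0.0.0", port), Hello)
```

Nothing here assumes Apache, nginx, or any container runtime in front of it.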

VIII. Concurrency

Scale out via the process model.

One giant process doing HTTP, background jobs, and cron means the image worker can saturate the CPU and time out your checkout pages. You can't scale one without scaling the other.

Split process types. Twenty web processes. Two workers. One scheduler. Scale each independently.
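In Heroku terms this split is a Procfile; a sketch (commands hypothetical):

```
web: gunicorn app:wsgi --workers 4
worker: python worker.py
scheduler: python scheduler.py
```

The image worker can now saturate its own CPU without touching a single checkout page.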

IX. Disposability

Fast startup. Graceful shutdown.

90-second boots mean the autoscaler can't keep up during traffic spikes. Processes killed mid-request mean every deploy drops some in-flight traffic.

Boot in seconds. Catch SIGTERM, stop accepting requests, finish the in-flight ones, return queued jobs to the queue, then exit.
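A minimal Python sketch of the shutdown half (the queue mechanics are assumed — "remaining jobs return to the queue" is left to the broker):

```python
import signal

shutting_down = False

def on_sigterm(signum, frame):
    """Stop accepting new work; in-flight jobs finish first."""
    global shutting_down
    shutting_down = True

def install_handlers():
    # Register at boot; the platform sends SIGTERM before killing the process.
    signal.signal(signal.SIGTERM, on_sigterm)

def drain(jobs):
    """Process jobs until a shutdown is requested, then stop cleanly."""
    done = []
    for job in jobs:
        if shutting_down:
            break  # remaining jobs go back to the queue
        done.append(job())
    return done
```

The process that handles SIGTERM this way loses nothing on deploy; the one that ignores it drops whatever it was holding.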

X. Dev/Prod Parity

Keep them close.

SQLite in dev, Postgres in prod. A query uses SQLite-only syntax. The test passes locally. Prod 500s on deploy.

Same database engine. Same queue. Docker Compose mirrors prod. The further dev drifts from prod, the more Friday-night surprises you get.
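A docker-compose.yml sketch that keeps dev on the same engines as prod (image versions hypothetical):

```yaml
# Dev mirrors prod: same database engine, same queue, just local.
services:
  db:
    image: postgres:16      # not SQLite, even locally
  queue:
    image: rabbitmq:3
```

The SQLite-only query from the failure above never gets written, because dev never runs SQLite.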

XI. Logs

Treat them as event streams.

Write to a log file on disk and eventually the disk fills. Writes block. App hangs. Or the container restarts and all logs vanish.

Write to stdout. Let the platform — Docker, Kubernetes, Heroku — capture, route, and store. Your app emits events. Someone else listens.
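A Python sketch: the app writes one event stream to stdout and does nothing else with it (logger name hypothetical):

```python
import logging
import sys

def configure_logging(stream=sys.stdout):
    """Emit logs as an event stream; no files, no rotation in the app."""
    handler = logging.StreamHandler(stream)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s")
    )
    logger = logging.getLogger("app")
    logger.handlers = [handler]
    logger.setLevel(logging.INFO)
    logger.propagate = False
    return logger
```

Docker, Kubernetes, or Heroku captures the stream; where it ends up is their problem, not the app's.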

XII. Admin Processes

Run them as one-offs.

Migration as an HTTP endpoint — someone triggers it twice and the schema half-applies. Migrations on app boot — ten pods race to ALTER TABLE simultaneously.

Admin tasks run as one-off processes. Same codebase. Same release artifact. Separate lifecycle — a single container, running once, then exiting.
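A sketch of a one-off migration runner in Python, using SQLite so it stays self-contained — applied versions are recorded, so running it twice (the failure above) applies nothing the second time (the migrations themselves are hypothetical):

```python
import sqlite3

def migrate(conn):
    """One-off admin process: apply pending migrations exactly once."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (version INTEGER PRIMARY KEY)"
    )
    applied = {v for (v,) in conn.execute("SELECT version FROM schema_migrations")}
    migrations = {
        1: "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
        2: "ALTER TABLE users ADD COLUMN email TEXT",
    }
    for version in sorted(set(migrations) - applied):
        conn.execute(migrations[version])
        conn.execute("INSERT INTO schema_migrations VALUES (?)", (version,))
    conn.commit()
```

Run it in a single container from the same release artifact, let it exit, then start the app.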

The Twelve Together

The methodology is nearly fifteen years old. It predates Docker, Kubernetes, and the entire microservices wave. And every modern incident report — "config leaked to public repo," "session lost on deploy," "migration deadlocked," "logs went missing" — maps directly to a specific factor being violated.

Follow all twelve and you've eliminated the failure modes that cause most production incidents. Miss one, and you find out which one the hard way.
