The Cloud Exit Strategy: Architecting for Independence

We have normalized a dangerous standard in modern web development: paying hundreds of dollars a month for infrastructure before acquiring a single customer. We build distributed systems for simple problems. We rent our identity, our data, and our compute from vendors who profit from our complexity.

It is time to reject the complexity tax. It is time to return to the $5 VPS.

Here is the architecture of independence.

The Philosophy: The Modular Monolith

The industry tells you to split your application into microservices, serverless functions, and edge computing nodes. This is excellent advice for Netflix. It is terrible advice for you.

For 99% of SaaS applications, the correct architecture is a Modular Monolith running on a Linux server you control. This approach minimizes network latency, eliminates distributed systems failures, and—most importantly—gives you infinite runway.

This isn't just about saving money. It is about architectural sanity.

Layer 1: The Control Plane (Coolify / Dokploy)

The main reason developers flock to Vercel or Heroku is the "git push" deployment experience. Setting up Linux servers, configuring Nginx reverse proxies, and managing SSL certificates manually is tedious.

But we don't need to do that manually anymore. We can run our own Platform as a Service (PaaS).

  • The Choice: Coolify or Dokploy.
  • The Lesson: These tools act as a self-hosted PaaS layer on top of your VPS. You connect your GitHub repository, and they handle the Docker builds, reverse proxies, and SSL certificates automatically.
  • Why: You get the Vercel experience (automated deployments, preview URLs) without the Vercel restrictions (bandwidth limits, function timeouts, vendor lock-in).

Layer 2: The Data Layer (SQLite + Litestream)

We have been gaslit into believing we need a distributed Postgres cluster for a standard CRUD application.

SQLite in WAL (Write-Ahead Logging) mode is not a toy; it is a production-grade database engine. Because it runs in-process with your application, query latency drops from 50ms (cloud roundtrip) to sub-0.1ms (memory access).
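
A minimal sketch of what this looks like in a Node/TypeScript service using the better-sqlite3 driver (the `users` table is purely illustrative; Go, Rust, and Python drivers expose the same pragmas):

```typescript
import Database from "better-sqlite3";

// Open (or create) the database file that lives alongside the app.
const db = new Database("./data/app.db");

// WAL mode: readers no longer block the single writer, and vice versa.
db.pragma("journal_mode = WAL");

// Wait up to 5s for the write lock under contention instead of erroring.
db.pragma("busy_timeout = 5000");

// Queries are in-process function calls -- no network roundtrip.
const user = db.prepare("SELECT id, email FROM users WHERE id = ?").get(1);
```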

The Disaster Recovery Strategy:
The fear of SQLite is, "What if the server disk dies?"

  • The Solution: Litestream.
  • How it works: It hooks into SQLite’s WAL and streams changes to commodity object storage (S3, Cloudflare R2, MinIO) in near-real time. You get an RPO (Recovery Point Objective) measured in seconds for pennies a month.

Migration Path: When to Eject?

A common anxiety with this stack is scalability. "What happens when I grow?"

The truth is, SQLite handles reads effortlessly (it powers billions of mobile devices). The bottleneck is strictly concurrent writes. SQLite allows only one writer at a time (WAL mode lets readers run alongside that writer, but writes are still serialized), so you generally hit a ceiling at around 200–500 write transactions per second, depending on your hardware.

The Reality Check:
If you are processing 500 writes per second, you likely have tens of thousands of concurrent users. At that point, you are no longer a "bootstrapped project"—you are a successful business with revenue.

The Ejection Plan:

  1. Vertical Scaling First: Before you leave SQLite, upgrade your $5 VPS to a $40 VPS. Faster NVMe drives and more CPU cores will extend SQLite's life significantly.
  2. The Switch: If you truly saturate the drive I/O, you switch to Postgres. Because we are using standard SQL (via an ORM or query builder), this migration is often just changing a connection string and a driver, as sketched below.
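
A hedged sketch of that swap using the Knex query builder, with a hypothetical DATABASE_URL environment variable acting as the switch:

```typescript
import knex from "knex";

// If DATABASE_URL is set (hypothetical env var), use Postgres;
// otherwise fall back to the local SQLite file. Application code
// built on the query builder does not change.
const db = process.env.DATABASE_URL
  ? knex({
      client: "pg",
      connection: process.env.DATABASE_URL,
    })
  : knex({
      client: "better-sqlite3",
      connection: { filename: "./data/app.db" },
      useNullAsDefault: true, // Knex's recommended setting for SQLite
    });

export default db;
```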

Reaching the limits of SQLite is a "good problem." Do not optimize for it on Day 1.

Layer 3: Identity & Auth (Better-Auth / Authentik)

Identity is the stickiest layer of the stack. If you use Auth0 or Clerk, you are renting your users. If you stop paying, you lose your customers.

Option A: The Library Approach (Better-Auth)
For TypeScript/Node ecosystems, libraries like Better-Auth allow you to embed robust authentication (Social Logins, 2FA, Passkeys) directly into your monolith.

  • Pros: The user data lives in your database tables. Zero external dependency.
  • Cons: Tightly coupled to your codebase.
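
A minimal sketch of Option A, assuming Better-Auth's TypeScript API and the same SQLite file as the rest of the app (option names are illustrative and may differ between versions):

```typescript
import { betterAuth } from "better-auth";
import Database from "better-sqlite3";

// Auth tables live in the same SQLite database as the application,
// so there is no external identity provider to rent.
export const auth = betterAuth({
  database: new Database("./data/app.db"),
  emailAndPassword: { enabled: true },
  socialProviders: {
    github: {
      // Hypothetical env vars -- supply your own OAuth credentials.
      clientId: process.env.GITHUB_CLIENT_ID!,
      clientSecret: process.env.GITHUB_CLIENT_SECRET!,
    },
  },
});
```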

Option B: The Service Approach (Authentik)
For a language-agnostic solution (Go, Rust, Python), run Authentik as a container in your stack.

  • Pros: Provides a full SSO portal. Decouples auth from your application logic.
  • Cons: Uses more RAM than a library.

Layer 4: Observability (Prometheus + Grafana)

When your app crashes in the cloud, you check the dashboard they provide. When you run your own VPS, you need your own dashboard. Do not pay Datadog $500/month to monitor a $5 server.

We deploy a standard Prometheus + Grafana stack via Docker Compose.

  1. Prometheus: Scrapes metrics from your app and your server (CPU, RAM, Disk I/O).
  2. Grafana: Visualizes that data.
  3. Loki (Optional): Aggregates logs.

This gives you deep visibility into your application's performance. You will understand your system better because you built the dashboard yourself.
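
On the application side, exposing metrics for Prometheus to scrape takes only a few lines. A sketch assuming a Node/TypeScript service and the prom-client library:

```typescript
import http from "node:http";
import client from "prom-client";

// Default process metrics: CPU, memory, event-loop lag, GC, etc.
client.collectDefaultMetrics();

// An example custom counter for business-level visibility.
const httpRequests = new client.Counter({
  name: "http_requests_total",
  help: "Total HTTP requests served",
});

// Serve everything on /metrics; point a Prometheus scrape job at this port.
http
  .createServer(async (req, res) => {
    if (req.url === "/metrics") {
      res.setHeader("Content-Type", client.register.contentType);
      res.end(await client.register.metrics());
      return;
    }
    httpRequests.inc();
    res.end("ok");
  })
  .listen(3000);
```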

The Application Layer: Language Agnostic

This architecture does not care what language you write in. The "blocks" of this stack—Docker, SQLite, Linux—are universal standards.

  • Go: Use Fiber or Echo for blazing fast binaries that sip memory.
  • Rust: Use Axum for type-safe, bulletproof performance.
  • Node/TS: Use Hono or Fastify for rapid iteration.
  • Python: Use Django or FastAPI if data science is your core.

The interface between your code and the infrastructure is a Dockerfile. That is the only contract you need to fulfill.
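
For instance, a minimal Hono service (one of the Node/TS options above) is nothing more than a process listening on a port; wrap it in a Dockerfile and the PaaS layer takes care of the rest:

```typescript
import { Hono } from "hono";
import { serve } from "@hono/node-server";

const app = new Hono();

// A health-check endpoint the PaaS / reverse proxy can probe.
app.get("/healthz", (c) => c.text("ok"));

app.get("/", (c) => c.json({ message: "Hello from a $5 VPS" }));

// Coolify/Dokploy only need a containerized process bound to a port.
serve({ fetch: app.fetch, port: 3000 });
```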

Summary: The Cost of Freedom

Here is the bill for your production stack:

| Service    | Tool                     | Cost        |
| ---------- | ------------------------ | ----------- |
| Compute    | Hetzner / DigitalOcean   | $5.00 / mo  |
| PaaS       | Coolify (Self-hosted)    | Free        |
| Database   | SQLite                   | Free        |
| Backups    | S3 / R2 (via Litestream) | ~$0.02 / mo |
| Auth       | Better-Auth              | Free        |
| Monitoring | Prometheus/Grafana       | Free        |
| Total      |                          | ~$5.02 / mo |

Conclusion

You don't need a venture capital budget to build scalable software. You need a Linux server and the courage to manage your own stack.

When you remove the bloated layers of managed services, you aren't just saving money—you are regaining control. You can host ten failed experiments on this server without paying a cent more. You have bought yourself time.

Stop renting your infrastructure. Own it.

If you have any questions or suggestions, please feel free to comment below or reach out to me on Twitter or LinkedIn. You can also check out my other articles on Dev.to. Thanks for reading!
