DEV Community

Mustafa ERBAY

Posted on • Originally published at mustafaerbay.com.tr

An Engineer's Sustainability Ledger: Why I Run on Less

"Sustainability isn't a sticker. It's an architectural choice."

One morning I SSH'd into my VPS. 96 GB RAM, 18 vCPU. Was it full? No. CPU at 4%, RAM at 30%. Where does the rest go? It sits there in watts, paid out to the world as carbon.

That morning I asked myself: "Am I using too little, or is the industry selling everyone more than they need?"

The answer is: both.

Over-consumption is the modern software default

I've been doing systems architecture for 20 years. My work — from pulling cable to kernel debugging, from a five-port office switch to a 200+ server data center — was always built on "enough resources." But something shifted in the last decade:

The industry lost "enough." It replaced it with "just in case."

The receipts are everywhere:

  • Teams provisioning 12 GB RAM for an API that would happily run on 3.
  • Friends spinning up a 3-node Kubernetes cluster for a blog with 50 visitors a day.
  • Startups splintering a 10-user application across 6 microservices.

The common thread: over-provisioning is fear, not need. The fear of what? "What if it isn't enough?" "What if traffic 10x's overnight?" "What if my boss wants something more 'enterprise'?"

That fear has a cost. Not just dollars — watts and carbon.

The watt math

Let's do the math. A single VPS under normal load draws roughly 40-60 watts. Spread the same workload across a 3-node Kubernetes cluster? 130-180 watts. Triple the compute, triple the electricity, triple the cooling.

The numbers feel small — "What's 100 watts?" — until you stack them up:

  • 100 watts × 24 hours × 365 days = 876 kWh/year
  • The Turkish grid emits roughly 450 g CO₂ per kWh (a coal- and gas-heavy mix). The U.S. grid average is in the same ballpark; much of Europe is lower, but nowhere near zero.
  • That's an extra ~394 kg CO₂/year for one small blog.

An airline passenger on a short domestic flight emits ~150 kg CO₂. So your needless cluster has the footprint of flying that route two to three times a year. For one person. For one application.

Multiply: 100 startups, 1,000 indie hackers, 10,000 hobby projects. That's when the real number shows up.
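The arithmetic above can be checked in a few lines; all inputs are the estimates already stated, not measurements:

```python
# Back-of-the-envelope carbon cost of idle over-provisioning.
# Inputs are the article's estimates: 100 W of extra draw, running
# 24/7, on a grid emitting ~450 g CO2 per kWh.

EXTRA_WATTS = 100
HOURS_PER_YEAR = 24 * 365           # 8760
GRID_G_CO2_PER_KWH = 450            # coal- and gas-heavy grid estimate

extra_kwh = EXTRA_WATTS * HOURS_PER_YEAR / 1000
extra_kg_co2 = extra_kwh * GRID_G_CO2_PER_KWH / 1000

print(f"{extra_kwh:.0f} kWh/year")        # 876 kWh/year
print(f"{extra_kg_co2:.0f} kg CO2/year")  # 394 kg CO2/year
```

Change the grid-intensity constant to match your own grid; the shape of the result doesn't change much.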

What my choices actually mean

I run my self-hosted blog — the site you're reading right now — on a single Contabo VPS. On the same box:

  • This blog's Astro SSR backend
  • Self-hosted analytics (Umami, not Google)
  • 13+ containers (side projects, ERP demos)
  • A self-hosted GitHub Actions runner
  • Mail relay, monitoring, watchdogs

Annual VPS bill: roughly €360. Average power draw: ~50W. About 438 kWh/year. If I did the same work with modern "best practices" — multi-platform (Vercel + Cloudflare Workers + Supabase + Sentry + Mailchimp + …) — the direct watts wouldn't be on my invoice. But every service runs on a data center somewhere. My estimate: 3-4x my current footprint.

I tried Kubernetes — on this blog, on my own VPS. I pulled it out. Not just because of operational complexity: paying to run capacity I didn't use, burning watts for nothing, felt like an engineering mistake and an ethical problem.

"No Kubernetes here" is a small carbon win.

Pragmatic sustainability

I don't think of myself as a tree-hugger. I'm not an eco-activist. But 20 years of engineering taught me one thing: running lean and good architecture lead through the same door.

What's good architecture?

  1. Allocate only what you need. No more. No "maybe."
  2. Multi-tenancy. Let one CPU core serve many small workloads.
  3. Async + queue. The difference between "containers spawning per request" and "a queue absorbing the spikes."
  4. Cache aggressively. Compute it once, not a thousand times.
  5. Self-host what you can. Every SaaS is your data being processed in someone else's data center, with extra network hops and extra power draw.

Those five lines cut the bill and cut the carbon. They're not in tension; they point the same way.
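Point 4 is usually the cheapest win of the five. A minimal sketch using Python's stdlib cache; the 50 ms render is a stand-in for whatever your expensive path is, not a real workload:

```python
from functools import lru_cache
import time

@lru_cache(maxsize=1024)
def render_page(slug: str) -> str:
    # Stand-in for an expensive render (DB query, markdown -> HTML, ...).
    time.sleep(0.05)  # simulate 50 ms of work
    return f"<html>page for {slug}</html>"

start = time.perf_counter()
render_page("sustainability-ledger")          # computed once
for _ in range(999):
    render_page("sustainability-ledger")      # 999 cache hits, near-free
elapsed = time.perf_counter() - start

print(f"1000 requests in {elapsed * 1000:.0f} ms instead of ~50000 ms")
```

Same answer, a thousandth of the CPU time: that is watts you simply never draw.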

The "the cloud is green" lie

The big cloud providers (AWS, GCP, Azure) love to wave around their "100% renewable" certificates. They do. But the truth underneath the marketing is messier:

  • "Renewable" is usually delivered through Power Purchase Agreements — buying wind energy in one location to "offset" the coal-fired consumption in another.
  • Your application is, in physical reality, burning coal, but the spreadsheet has been zeroed out.

This is market-based Scope 2 accounting trickery (certificates and PPAs on a spreadsheet). Legal, technically correct, but not physically real.

Real sustainable architecture doesn't lean on accounting games — it leans on actually consuming less. Fewer watts. Real watts. Whatever the grid happens to be.

What I do in practice

Decisions I've made on this project so far:

  1. One VPS, 13 containers — multi-tenancy gets maximum work out of one piece of metal.
  2. No Kubernetes — at this scale it's pure overhead.
  3. Self-hosted Umami — no third-party analytics hop.
  4. Multi-provider AI fallback — I rotate across 4 providers so no single one gets hammered before its quota refreshes.
  5. Build cache + atomic deploy — every deploy went from 5 minutes to 45 seconds. Fewer CPU-seconds across thousands of builds.
  6. Cloudflare CDN + 1h cache — origin hits dropped ~95%, my server spends most of its time idle.
  7. systemd watchdog — systemd itself restarts dead services. No separate monitoring daemon eating cycles.
  8. Cut Hashnode — one less SaaS = one less data center round-trip per post.

These read as engineering decisions, not climate decisions. But the bill of materials adds up to the same thing.
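The rotation in item 4 fits in a few lines. A sketch, where the provider names and the `call_provider` stub are placeholders (the post doesn't name the four real providers):

```python
import itertools

# Placeholder names -- the real provider list is not published here.
PROVIDERS = ["provider_a", "provider_b", "provider_c", "provider_d"]
_rotation = itertools.cycle(PROVIDERS)

class QuotaExceeded(Exception):
    pass

def call_provider(name: str, prompt: str) -> str:
    """Stub standing in for a real API call; may raise QuotaExceeded."""
    return f"{name}: reply to {prompt!r}"

def ask(prompt: str) -> str:
    # Round-robin across providers so no single one gets hammered;
    # on a quota error, fall through to the next in the rotation.
    for _ in range(len(PROVIDERS)):
        name = next(_rotation)
        try:
            return call_provider(name, prompt)
        except QuotaExceeded:
            continue
    raise RuntimeError("all providers out of quota")

print(ask("hello"))  # first call lands on provider_a; the next rotates on
```

Spreading requests this way keeps each free tier under its refresh window instead of exhausting one provider and paying for headroom elsewhere.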

What this means for you

When you're about to spin up your next project, pause for a few seconds and ask:

  • "Does this cluster solve a real problem, or is it CV decoration?"
  • "Do I need 12 GB of RAM, or did I just not tune the 3 GB I had?"
  • "Are these 6 microservices for me, or for a conference talk?"
  • "Can I self-host this, or am I just willing to keep paying the bill?"

The honest answers usually go: "no, no, no, yes."

Every "no" is ~100 watts. One less short-haul flight on your annual ledger.

Closing

I've been an engineer for 20 years. Early on, I built systems thinking "enough." Then I went through the "just in case" era. Today I'm back: enough.

Not just to save money. Not just for operational simplicity. The other reason — and I've stopped feeling shy about saying it out loud — is the planet's ledger.

Running lean isn't selfishness. It's not laziness. It's not under-engineering.

Running lean is the quiet, mostly invisible part of engineering ethics.

My server will draw 50 watts tonight. If I'd built a cluster it would draw 150. Choosing not to turn those extra 100 watts into coal smoke — that's a decision. I make it every night.

Would you?
