
Mustafa ERBAY

Posted on • Originally published at mustafaerbay.com.tr

The Psychology of Running Production on a Single VPS

23:47, I turned off the lamp, then turned it back on

A typical night. Before going to bed, the phone screen lights up one last time. I check that mustafaerbay.com.tr loads. It does. How much RAM? I run free -h over SSH. 5.6 GB available, good. I can sleep.

Then my mind catches on something else. "When did the Pipeline-health monitor last run?" I open the phone again. Inbox check, no state-change in the last 4 hours. Good.

Then one more. "Will the itoverdose container I migrated yesterday rebuild tonight? If it falls into the same window as the 03:00 disk-cleanup, what happens?"

It's 23:51. Still not in bed, still on the phone.

This is the story of production paranoia and learning to live with it.

"One server, multiple projects" adds emotional weight

I have a 7.6 GB RAM VPS. Five of my own products live on it:

  • mustafaerbay.com.tr (this blog — Astro + Node + SQLite)
  • gercekveri.com (Turkey data platform)
  • islistesi.com (task management — web + iOS + Android)
  • hesapciyiz.com (TR financial calculators — different VPS but I monitor the same panel)
  • spamkalkani.com (Android spam blocker — different infra, same control)

Plus two Next.js Docker containers for client projects, plus a handful of Postgres and Redis instances. 13 containers total.

All of them on the same VPS, by deliberate choice. Because:

  • AWS multi-AZ "production grade" sounds expensive
  • Indie scale = zero users one night, maybe 100 the next
  • Make it work first, split later when richer

But two things I learned quickly:

  1. The "same VPS" decision creates an obligation to manage my own limits
  2. That management isn't only technical — it's psychological

What deploy fear is, and why it's real

Yesterday morning at 06:14, I pushed a commit: a typo fix on the homepage. A quick, tiny change, five lines of code.

The 60 seconds after the push were tense. The deploy timer on the VPS pulls every minute. "Is it pulling now? Did the build start? Will it OOM?" Those questions don't just cross my mind; I feel them physically. On the phone I opened gh run list and looked at the mustafaerbay-deploy.service log.

The build had started. The Astro build takes 4 minutes, and for those 4 minutes the VPS RAM dances around 2.5 GB. If another project enters the deploy lane during that window, swap kicks in, sshd times out again, and the phone message arrives again: "sites are down, dude."

This happened to me 3 times in the last two weeks. After the third, I set up the flock mutex solution, but the pre-deploy unease hasn't gone away. There's still a small voice asking "are we okay?" after every push. Its proper name isn't anxiety; it's closer to care. But after 30 pushes, doesn't the same care extend to the 31st? It does.
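For reference, the flock mutex can be as small as a wrapper script. This is a minimal sketch, not my actual deploy script: the lock path is an assumption, and the echo stands in for the real pull/build/restart steps.

```shell
#!/usr/bin/env bash
# with-deploy-lock: serialize deploys so two builds never compete for RAM.
# Lock path and deploy commands are illustrative.
set -euo pipefail

LOCK="${DEPLOY_LOCK:-/tmp/vps-deploy.lock}"

with_deploy_lock() {
  # flock -n: give up immediately instead of queueing; the per-minute
  # deploy timer simply tries again on its next tick, so nothing is lost.
  if flock -n "$LOCK" -c "$*"; then
    echo "deploy finished"
  else
    echo "another deploy holds the lock; skipped"
  fi
}

# The real version would run: git pull, astro build, systemctl restart.
with_deploy_lock 'echo building'
```

Because the timer retries every minute anyway, skipping on contention is safer than queueing: a queue is exactly how two 2.5 GB builds end up overlapping.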

Watching the RAM graph like a heart rate

A personal secret: I have a terminal app on my phone. Right above the keyboard there's a fixed command: ssh vps 'free -h | head -2'. Paste, enter. Reply in 1 second.

On the bus, on the beach, at the dinner table. "available 5.6" and I relax. "available 800 MB" and I tense up, immediately running ps aux --sort=-%mem | head to find out which container is bloating.
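That two-step check can be collapsed into one script, so the phone needs a single tap. A sketch that reads /proc/meminfo directly; the 1 GiB floor is an arbitrary assumption, not a recommendation.

```shell
#!/usr/bin/env bash
# ram-check sketch: one command instead of free -h plus mental math.
# The 1 GiB threshold is an assumption; tune it to your workload.
set -euo pipefail

# MemAvailable is the kernel's own estimate of reclaimable memory,
# the same figure free -h reports in the "available" column.
avail_kb="$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)"
avail_mb=$(( avail_kb / 1024 ))

if [ "$avail_mb" -lt 1024 ]; then
  echo "LOW: ${avail_mb} MiB available"
  # Top memory consumers: the same view as ps aux --sort=-%mem | head
  ps aux --sort=-%mem | head -5
else
  echo "OK: ${avail_mb} MiB available"
fi
```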

This professional paranoia can look strange to others. "Don't be that obsessed with production" some say. But to me it's the opposite:

"When you're not obsessed, production blows up. When it blows up, you stay awake for hours. Looking for 30 seconds beforehand is cheaper than a 4-hour incident."

That knowledge is rational. But the feeling side is exhausting. It means living with a small constant worry.

The night alert: "if I rebooted now, would it come back?"

My Pipeline-health monitor checks state every 4 hours and emails me when the state changes. In its first week, the inbox received two mails: a DEGRADED and a RECOVERED. The system was working correctly.
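The core of such a monitor is tiny: compare the current state with the last recorded one and notify only on a change, so a healthy system stays silent. A sketch under assumptions; check_health is a placeholder probe, and the mail line is commented out.

```shell
#!/usr/bin/env bash
# pipeline-health sketch: mail only on state *change*.
# Probe, state file path, and mail address are all assumptions.
set -euo pipefail

STATE_FILE="${STATE_FILE:-/tmp/pipeline-health.state}"

check_health() {
  # Placeholder: the real probe would inspect builds, feeds, and disk.
  echo "OK"
}

current="$(check_health)"
previous="$(cat "$STATE_FILE" 2>/dev/null || echo UNKNOWN)"

if [ "$current" != "$previous" ]; then
  # Record the new state, then notify (DEGRADED or RECOVERED).
  echo "$current" > "$STATE_FILE"
  echo "state change: $previous -> $current"
  # mail -s "pipeline-health: $current" me@example.com <<< "$current"
fi
```

Run this from a cron entry or systemd timer every 4 hours; between state changes it produces nothing, which is the whole point.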

But what did I do between those two mails?

At 02:47 I woke up, opened the phone. Saw the DEGRADED mail. "Did it explode? Which one? Is it the build? Is the VPS down?" I got up, walked around in pajamas, SSH'd from the laptop, looked at the log. The reason: dlvr.it had a temporary issue, couldn't poll the RSS, the new post didn't drop into Bluesky. Not important.

When I got back into bed I couldn't sleep for an hour. "If a reboot is needed, will those containers restart? How long will the Postgres instances take to do WAL recovery? The customer arrives at 06:00 and opens baseerp; if it's still recovering at that point..."

Eventually I was hitting curl -sI https://baseerp.../healthz from the phone every minute. It returns 200. Okay. Close it. Five minutes later, open it again.

There was no need. The system was working, no reboot was required. But my brain couldn't get out of the "I'm checking" loop.

I think the name for this is vigilance fatigue. The mental exhaustion of constant monitoring.

Three things I try (haven't fully managed yet)

1) Automation = emotional backup

Pipeline-health monitor, disk-cleanup.timer, kernel-update-check, flock mutex — all of these are systems that watch on my behalf. Set them up correctly and the system protects itself while I sleep. The principle of "automation = locking the worry in a safe."

But setting up that automation produces stress of its own. "Is every 4 hours the right interval for pipeline-health? Should I make it 2?" Questions like that never end. Pick a value, stop, and trust it.
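For what it's worth, systemd itself offers a middle path on the interval question: pick one value and let RandomizedDelaySec add jitter so timers don't pile into the same window. The unit below is a hypothetical sketch, not my actual config.

```ini
# pipeline-health.timer (sketch; names and values are illustrative)
[Unit]
Description=Run pipeline-health every 4 hours

[Timer]
OnUnitActiveSec=4h
# Jitter so this never lands in the same minute as disk-cleanup.timer
RandomizedDelaySec=15min
# Catch up after downtime instead of silently skipping a run
Persistent=true

[Install]
WantedBy=timers.target
```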

2) Worst-case planning reduces anxiety

What would I do if the VPS died completely one day? I rely on these backups:

  • The Postgres databases are cron-backed-up nightly to S3
  • The mustafaerbay SQLite database is archived daily to /var/lib/mustafaerbay/blog.db.bak
  • Code is in git, content collection is in git
  • DNS is at Cloudflare
  • I have a draft Ansible playbook to bring the VPS up at another provider in 30 minutes
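The first two backup items can be collapsed into one nightly script. A sketch with assumed names: the database name, the bucket, and the S3 path are illustrative; only the SQLite paths mirror what the post describes.

```shell
#!/usr/bin/env bash
# nightly-backup sketch. DB name, bucket, and S3 path are assumptions.
set -euo pipefail

backup_postgres() {
  local db="$1" stamp
  stamp="$(date +%F)"
  # -Fc: compressed custom format, selectively restorable via pg_restore
  pg_dump -Fc "$db" > "/tmp/${db}-${stamp}.dump"
  aws s3 cp "/tmp/${db}-${stamp}.dump" "s3://my-backups/pg/"
}

backup_sqlite() {
  # sqlite3's .backup takes a consistent snapshot even mid-write,
  # which a plain cp of the .db file does not guarantee.
  sqlite3 "$1" ".backup $2"
}

# A nightly cron entry would call, for example:
#   backup_postgres gercekveri
#   backup_sqlite /var/lib/mustafaerbay/blog.db /var/lib/mustafaerbay/blog.db.bak
```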

Walking through that plan in my head makes the worst case concrete. Knowing "30 min recovery if the whole system dies" is far more bearable than the panic of "is the whole system going to die tomorrow?"

3) Stoic acceptance: it will explode one day

If I run this system for 5 years, statistically there will be a long outage at some point. The disk will fail, the data center will burn, an exotic kernel bug will appear. Accepting that is freeing:

  • My target isn't "never let it explode", it's "recover quickly when it explodes"
  • My target isn't "never let an incident happen", it's "alert the right person during the incident"

Wanting uptime in production is understandable. Wanting 100% uptime is a sickness.

Practical personal rules

I'm writing this post as a lesson to myself. My notes:

  • Limit phone-terminal use to work hours. After 22:00, don't run ssh vps 'free -h'. If you don't, nothing bad happens; the monitor will tell you anyway.
  • Don't check the inbox outside business hours except for pipeline-health alerts. If there's mail, the monitor will have surfaced it at the right time.
  • Don't push on weekends. This is the hardest. Weekends are my most productive time. But every small deploy on Sunday = early-morning anxiety on Monday.
  • Have a hobby outside the VPS. Music, walking, a book. Hours that don't belong to production.

ℹ️ If you run a single server

If you run a single server, you probably recognized yourself while reading this. Most indie hackers experience this control paranoia but don't write about it. Once written down, it becomes more manageable. If you're living it: you're not alone. Setting up automation is good; trusting the automation is better.

Conclusion: production = a relationship

Running multiple projects on a single VPS now feels like a relationship to me. You want it to be trustworthy, and mostly you trust it. The small things need constant attention, but if you're paranoid about every detail, neither of you is comfortable.

At some point you learn where the letting-go point is. "Okay, this is the monitor's job now. I'll sleep. If it explodes, I'll find out. That's it." Learning to say that sentence took me 6 months. Saying it and doing it are still different things.

I wrote this at 23:55. Soon I'll open the phone and run ssh vps 'free -h' — I accept that now. Just one last time.

Then I'll turn off the light.

Promise.
