How I Run 31 Automated Jobs on a $5/month VPS with Claude and systemd
I run 31 automated jobs and 3 always-on services from a single $5/month Hetzner VPS. They handle everything from morning brief emails to weather betting to inbox cleanup. Here's how I got here, what broke along the way, and why systemd beat everything else I tried.
Why systemd, not cron
I started with cron. Everyone starts with cron. And for simple stuff — run this script at 8am — it's fine. But the moment you need:
- Logs that actually persist (not just piped to /dev/null)
- Automatic restarts when something crashes
- Dependency ordering (don't run the brief until the email scan finishes)
- Easy enable/disable without commenting out lines
...cron falls apart. systemd timers give you all of that for free. Each job gets a .service file (what to run) and a .timer file (when to run it). You get journalctl logs, systemctl status, and if something dies at 3am, systemd can restart it without you waking up.
Here's what a typical timer looks like:
# jet-daily-brief.timer
[Unit]
Description=Jet Daily Brief Timer
[Timer]
OnCalendar=*-*-* 14:00:00 UTC
Persistent=true
[Install]
WantedBy=timers.target
# jet-daily-brief.service
[Unit]
Description=Jet Daily Brief
Wants=network-online.target
After=network-online.target
[Service]
Type=oneshot
WorkingDirectory=/opt/jet
ExecStart=/usr/bin/python3 -u -m SKILLS.daily_brief
TimeoutStartSec=300
Environment=PYTHONUNBUFFERED=1
The -u flag and PYTHONUNBUFFERED=1 are non-negotiable. Without them, Python buffers stdout when there's no TTY, and your logs show up 20 minutes late (or never). I learned this after spending an hour wondering why a job that clearly ran had zero output in journalctl.
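With both files in place, wiring the job up is a handful of systemctl commands. A sketch using the unit names from the example above:

```shell
# Install the unit files where systemd looks for them
sudo cp jet-daily-brief.service jet-daily-brief.timer /etc/systemd/system/
sudo systemctl daemon-reload

# Enable the timer (not the service) so it survives reboots, and start it now
sudo systemctl enable --now jet-daily-brief.timer

# See when it last fired and when it fires next
systemctl list-timers jet-daily-brief.timer

# Tail the job's logs
journalctl -u jet-daily-brief.service -f
```

Note that you enable the .timer, not the .service; the timer activates the service on schedule.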
The 3am overnight worker failure
My overnight worker runs at midnight. It handles maintenance tasks — clearing stale data, syncing dashboards, running health checks. One night it silently failed because the Anthropic API returned a 529 (overloaded) and my retry logic had a bug: it retried 5 times with no backoff, hit rate limits, and cascaded.
I woke up to a Telegram message from my heartbeat monitor: "Overnight worker: LAST RUN >24h ago." The fix was embarrassingly simple — exponential backoff and capping retries at 2 instead of 5. But the real lesson was: the heartbeat monitor caught it. Every 30 minutes, a separate timer checks that every other timer actually ran recently. If something hasn't run in 2x its expected interval, I get an alert.
# Simplified heartbeat check
for timer_name, expected_interval_hours in TIMERS.items():
    last_run = get_last_run(timer_name)
    hours_since = (now - last_run).total_seconds() / 3600
    if hours_since > expected_interval_hours * 2:
        send_alert(f"{timer_name}: LAST RUN >{expected_interval_hours}h ago")
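The get_last_run helper isn't shown above. One way to implement it (my assumption, not necessarily how the author does it) is to ask systemd directly via the timer's LastTriggerUSec property, which `systemctl show` prints as a formatted timestamp. This sketch assumes the machine runs in UTC:

```python
import subprocess
from datetime import datetime
from typing import Optional

def parse_last_trigger(output: str) -> Optional[datetime]:
    """Parse `systemctl show` output like
    'LastTriggerUSec=Mon 2024-01-01 00:00:05 UTC'."""
    value = output.strip().split('=', 1)[1]
    if value in ('', '0'):
        return None  # timer has never fired
    return datetime.strptime(value, '%a %Y-%m-%d %H:%M:%S %Z')

def get_last_run(timer_name: str) -> Optional[datetime]:
    out = subprocess.run(
        ['systemctl', 'show', f'{timer_name}.timer',
         '--property=LastTriggerUSec'],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_last_trigger(out)
```

The nice part of this approach is that there's no bookkeeping: systemd already records when each timer last fired.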
The OAuth Sunday nightmare
Google OAuth tokens expire. Specifically, if your Google Cloud app is in "testing" mode (which mine is, because moving to production requires a security review I haven't done), tokens expire every 7 days. Every. Seven. Days.
For a while, this meant every Sunday morning my email-dependent services would break. The daily brief would fail. Inbox cleanup would fail. Calendar sync would fail. I'd wake up, re-auth manually, push the new token to the VPS, and pretend it was fine.
The self-healing fix was an OAuth helper that:
- Tries the token from the environment variable first
- Falls back to the token file if the env var is stale
- Runs an hourly sync that refreshes the token before it expires
- If all else fails, sends me an alert with a re-auth link
import json
import os

from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

TOKEN_FILE = 'token.json'  # on-disk copy of the token

def get_gmail_service():
    creds = None
    # Try env var first (most recently updated)
    token_data = os.getenv('GMAIL_TOKEN_JSON')
    if token_data:
        creds = Credentials.from_authorized_user_info(json.loads(token_data))
    # Fall back to the token file if the env var is stale or missing
    if (not creds or not creds.valid) and os.path.exists(TOKEN_FILE):
        creds = Credentials.from_authorized_user_file(TOKEN_FILE)
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())
        # Write the refreshed token back to the env for next time
        update_env_var('GMAIL_TOKEN_JSON', creds.to_json())
    return build('gmail', 'v1', credentials=creds)
It's not elegant, but it hasn't broken on a Sunday in 3 weeks. I'll take it.
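The hourly sync is just another timer pair. A minimal sketch, following the naming convention from earlier (the unit names and the SKILLS.oauth_sync module are my invention, not the author's actual files):

```ini
# jet-oauth-sync.timer
[Unit]
Description=Jet OAuth Token Sync Timer
[Timer]
OnCalendar=hourly
Persistent=true
[Install]
WantedBy=timers.target

# jet-oauth-sync.service
[Unit]
Description=Jet OAuth Token Sync
Wants=network-online.target
After=network-online.target
[Service]
Type=oneshot
WorkingDirectory=/opt/jet
ExecStart=/usr/bin/python3 -u -m SKILLS.oauth_sync
Environment=PYTHONUNBUFFERED=1
```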
The real cost audit: $19 to $5/month
When I first set this up, I was spending about $19/month on Anthropic API calls alone. The VPS was $5. Most of the API cost came from three things:
- Overnight worker using Sonnet ($7.73/mo) — Switched to Haiku 4.5. It handles maintenance tasks just fine.
- Newsletter summaries using Sonnet ($2.83/mo) — Same switch. Haiku summarizes newsletters perfectly.
- Telegram bot without prompt caching ($3.50/mo) — Added prompt caching. Cache hit rate is ~70%.
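Prompt caching in the Anthropic Messages API works by marking a large, stable prefix (the bot's system prompt) with cache_control, so repeat calls read that prefix from cache at a fraction of the normal input-token price. A sketch of the request shape — the model name and prompts are placeholders, not the author's actual bot:

```python
def build_cached_request(system_prompt: str, user_message: str) -> dict:
    """Build a Messages API payload with a cached system prompt.

    The cache_control marker tells the API to cache everything up to
    and including that block; subsequent calls with the same prefix
    hit the cache instead of paying full input-token price.
    """
    return {
        "model": "claude-haiku-4-5",  # placeholder model name
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_prompt,  # the large, stable instructions
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_message}],
    }
```

The key design point: only the stable prefix gets the marker. Anything that changes per request (the user message) stays after it, or every call would be a cache miss.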
I also evaluated running Ollama locally for the cheap stuff. At $5/month total, it wasn't worth the complexity. The tasks that could run on a local model were already costing pennies on Haiku.
Final breakdown:
- Hetzner VPS (CX22, 2 vCPU, 4GB RAM): $5/month
- Anthropic API (all 31 jobs + bot): ~$5/month
- Total: ~$10/month for a full personal AI operations platform
What still breaks
I'd love to tell you it's bulletproof. It's not.
Python dependency conflicts. The VPS has one Python environment. When I install something for one bot that conflicts with another, both break. I should use venvs. I don't. It hasn't bitten me hard enough yet.
Disk space. 40GB sounds like a lot until you're storing screenshots, audio files, and logs from 31 services. I added a cleanup job to the overnight worker. It deletes files older than 7 days from /tmp.
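The cleanup itself is only a few lines. A sketch of what such a job can look like (the helper name is mine; the 7-day threshold matches the one described above):

```python
import time
from pathlib import Path

def clean_old_files(directory: str, max_age_days: int = 7) -> int:
    """Delete regular files older than max_age_days; return how many."""
    cutoff = time.time() - max_age_days * 86400
    deleted = 0
    for path in Path(directory).rglob('*'):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            deleted += 1
    return deleted
```

The shell one-liner equivalent is `find /tmp -type f -mtime +7 -delete`, but doing it in Python lets the overnight worker log and alert on how much it reclaimed.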
The VPS itself. It's a single point of failure. No redundancy, no failover. If the host has an outage, everything stops. I have manual fallbacks, but I haven't tested them in weeks.
Me. I'm the bottleneck. When something needs a new feature or a bug fix, it waits until I have time after work and after evening routines. The whole system is one person deep.
Is it worth it?
Absolutely. My morning brief email arrives at 7am with weather, calendar, market data, and a summary of overnight emails. My inbox cleanup runs 5 times a day. Package tracking updates automatically. And I have a bot that can do almost anything the full system can do, from my phone.
The total cost is less than a single SaaS subscription. The total setup time was about 3 weeks of evenings. And the hardest part wasn't the code — it was getting systemd timer syntax right and remembering to add PYTHONUNBUFFERED=1 to every service file.
If you're thinking about doing something similar: start with one timer. Get the daily brief working. Once you see that email land in your inbox every morning without you touching anything, you'll want to automate everything else too.