Two servers. $36/month. An autonomous dev agent, a monitoring dashboard, a search engine, a cross-server communication system, background pollers, cron jobs, and up to four concurrent Claude Code sessions.
No IDE. No browser. No GUI of any kind — except the ones we ship to customers.
This is the story of how we got here — not by choice, but by a series of problems that kept getting solved without a GUI.
The First Problem: No Local Machine
The person running this project doesn't have a dev setup. No MacBook Pro with sixteen terminal tabs. No local Postgres. No Docker Desktop. Just a phone, a Telegram app, and SSH access to two DigitalOcean droplets.
Star Command: $24/month, 2 vCPUs, 4GB RAM, NYC.
SFO2: $12/month, 1 vCPU, 2GB RAM, San Francisco.
There's a Mac Mini too. It runs Xcode builds and opens a browser to verify the frontend looks right. The Swift code lives on Star Command — the Mac Mini is a checking station, not a workstation. Nobody writes code on it.
The initial plan was "set up a real dev environment later." Later never came. SSH worked. Claude Code worked inside SSH. Code got written. The "temporary" setup became the setup.
The Second Problem: Deployment
Week one. The trade journal app is ready for production. Time to deploy.
Normal workflow: open Vercel dashboard, connect repo, click deploy, configure environment variables through the web UI. Except there's no browser. The server is headless.
npm i -g vercel
vercel --prod --yes
One command. Environment variables set via vercel env add from the terminal. No dashboard. No clicking. Deployment in twelve seconds.
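The env-var step scripts cleanly too. A sketch, not gospel: `push_env` is a made-up helper, and it assumes `vercel env add` reads the value from stdin.

```shell
# Hypothetical helper: push every KEY=VALUE line of a local env file
# into Vercel. `vercel env add <name> <environment>` takes the value
# on stdin, so each value gets piped in.
push_env() {   # push_env <envfile> <target-environment>
  while IFS='=' read -r key value; do
    [ -z "$key" ] && continue              # skip blank lines
    case "$key" in '#'*) continue ;; esac  # skip comments
    printf '%s' "$value" | vercel env add "$key" "$2"
  done < "$1"
}
# usage: push_env .env production
```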
The Third Problem: Monitoring
MissionControl was running tasks overnight. Nobody watching. How do you know if something breaks at 3 AM?
The GUI answer: set up Grafana, connect a data source, build dashboards, configure alerting rules, set up PagerDuty or OpsGenie.
The terminal answer:
# server-health.sh, runs every 5 minutes via cron
df -h / | awk 'NR==2 {if ($5+0 > 90) print "DISK WARNING: "$5}'
free -m | awk '/Mem:/ {if ($3/$2*100 > 90) print "MEMORY WARNING: "$3"/"$2"MB"}'
pm2 jlist | python3 -c "import sys,json; [print(f'DOWN: {p[\"name\"]}') for p in json.load(sys.stdin) if p['pm2_env']['status']!='online']"
Output goes to a log. Cron runs it. If the log has warnings, we see them. If it doesn't, everything's fine. No Grafana. No dashboards. No subscription.
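The cron wiring is one line. Paths here are illustrative:

```shell
# crontab entry (hypothetical paths): warnings accumulate in the log
*/5 * * * * /root/bin/server-health.sh >> /root/logs/health.log 2>&1
```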
Later we built Sentinel — a real monitoring dashboard with charts and metrics. It reads MC's SQLite database directly. Read-only, no ORM, no ETL pipeline. The "dashboard" is a Next.js app, but nobody opens it in a browser. The data feeds into Telegram notifications. Terminal in, terminal out.
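A sketch of what read-only polling against that SQLite file can look like. Table, columns, and path are illustrative, not MC's actual schema:

```shell
# Illustrative: summarize task states without ever writing to the file.
# `sqlite3 -readonly` opens the database read-only, so Sentinel-style
# reads can't corrupt MC's live data.
mc_stats() {   # mc_stats <db-path>
  sqlite3 -readonly "$1" \
    "SELECT status, COUNT(*) FROM tasks GROUP BY status ORDER BY status;"
}
# usage: mc_stats /root/mc/data/mc.db
```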
The Fourth Problem: Two Servers Can't Talk
Buzz on Star Command builds MissionControl. Jarvis on SFO2 handles product work. They work on the same projects but have no shared context. Session memory resets every conversation.
The GUI answer: Slack workspace, shared channels, message history, threaded discussions, emoji reactions.
The terminal answer: write a markdown file, SCP it to the other server.
scp review-notes.md root@100.112.59.126:/root/HyperLink/
That was version one. It worked for two days. Then we couldn't track what was read. So we added a SQLite inbox. Then we couldn't tell who was online. So we added a heartbeat roster. Then action items piled up untracked. So we added an action tracker.
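The inbox is small enough to sketch. Schema and names here are illustrative, not HyperLink's real tables:

```shell
# Illustrative inbox: one row per brief delivered via scp, with a
# read_at column so "what's unread" is a query, not a guess.
inbox_init() {   # inbox_init <db>
  sqlite3 "$1" "CREATE TABLE IF NOT EXISTS inbox (
    file TEXT PRIMARY KEY,                    -- markdown brief filename
    sender TEXT NOT NULL,                     -- sending instance callsign
    received_at TEXT DEFAULT (datetime('now')),
    read_at TEXT                              -- NULL until opened
  );"
}
inbox_unread() {   # inbox_unread <db>: list unread briefs
  sqlite3 "$1" "SELECT file, sender FROM inbox WHERE read_at IS NULL;"
}
```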
Twenty-two briefs crossed between servers in one session. Eighty kilobytes of specs, reviews, and decisions. Zero delivery failures. The same night, the GitHub API failed twice, MC's config crashed in a loop, and an OAuth token expired. The markdown files never failed once.
We accidentally built email. The most reliable part of our infrastructure is SCP and SQLite.
The Fifth Problem: iOS Development
This one we lost.
BiteCheck — an iOS barcode scanner app — needed Xcode. Xcode needs macOS. macOS needs a GUI. There is no headless Xcode.
We built an MCP bridge. Star Command sends commands over SSH to a Mac Mini on the Tailscale mesh. The Mac Mini runs Xcode builds, extracts errors, sends results back. File edits happen on Star Command, get pushed to the Mac via the bridge.
It works. It's also the most over-engineered file transfer system ever built. Every Swift edit requires a cross-network round trip. Build errors arrive as JSON blobs parsed from xcodebuild output. Simulator screenshots get SCP'd back for Claude to read.
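The error-extraction step is the only part small enough to show. The ssh target and scheme are assumptions; the filter is the real idea:

```shell
# Illustrative: keep only the error lines from xcodebuild's firehose.
# grep exits nonzero when nothing matches, so `|| true` keeps a clean
# build from looking like a failure.
xcode_errors() {
  grep -E 'error:' || true
}
# usage (from Star Command, over the Tailscale mesh; host and scheme
# are hypothetical):
#   ssh mac-mini 'cd BiteCheck && xcodebuild -scheme BiteCheck build 2>&1' \
#     | xcode_errors
```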
Xcode won. We built a whole bridge to avoid opening it, and we still need it running on the other end. Some tools are irreducibly graphical.
The Sixth Problem: Visual Verification
MC shipped a 111-file CSS migration. Dark theme to slate. How do you verify it looks right without opening a browser?
node e2e/screenshot-audit.mjs
Playwright runs headless Chromium, captures every page at desktop and mobile breakpoints, saves PNGs to /tmp/tj-screenshots/. Claude reads the images directly — it's multimodal. "The login page background is still teal, should be slate." Fix, re-run, compare.
It works. It's slower than opening a browser and scrolling. For a 4-page check, the overhead is annoying. For a 111-file migration where you need systematic coverage of every route at two breakpoints, it's actually faster than manual spot-checking. The robot doesn't get tired and skip pages.
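The shape of the loop, sketched with the Playwright CLI rather than our actual Node script. Routes, breakpoints, and base URL are assumptions, and it prints the commands instead of running them:

```shell
# Illustrative dry run: one screenshot per route per breakpoint.
# Drop the leading `echo` to actually capture (needs Playwright's
# browsers installed). Routes and port are made up for the sketch.
audit() {
  for route in / /login /journal /settings; do
    for size in 1280,800 390,844; do
      name=${route#/}; name=${name:-home}
      echo npx playwright screenshot --viewport-size="$size" \
        "http://localhost:3000$route" "/tmp/tj-screenshots/$name-$size.png"
    done
  done
}
```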
Still: a browser would be simpler. We just don't use one.
What Fell Out
None of this was planned. Each problem got solved with whatever was available, and what was available was always a terminal. But after six months, the accidental architecture has properties we didn't design for:
Everything is already scripted. When MC's dispatcher needs to deploy, it runs the same vercel --prod --yes we type. No Selenium wrapper. No "automate the GUI" step. The automation and the manual process are the same process.
Everything has a paper trail. history | grep deploy shows every deployment. git log --oneline shows every change. .bash_history is a forensic timeline. Try auditing which buttons someone clicked in a GUI last Tuesday.
Everything fits on $36/month. No memory eaten by Electron apps. No CPU spent on window compositing. No disk consumed by IDE caches. The 4GB droplet runs MC, Sentinel, QMD, HyperLink, and two Claude Code sessions simultaneously because nothing else is competing for resources. Four instances total across both servers — each with its own callsign, its own context, its own task queue.
Everything rebuilds in twenty minutes. Fresh Ubuntu droplet, install Node, clone repos, restore .env, start PM2. No "import workspace settings." No "install these twelve extensions." No "configure the color theme and font size." The server is the config.
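The rebuild fits in one sketch. Repo URLs, package names, and paths are illustrative; `run` echoes the plan in dry-run mode so it can be reviewed before touching a fresh droplet:

```shell
# Illustrative provisioning script. With DRY_RUN=1 it prints each step
# instead of executing it; without, it runs the real commands.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "+ $*"; else "$@"; fi; }
provision() {
  run apt-get update
  run apt-get install -y nodejs npm git
  run npm i -g pm2 vercel
  run git clone git@github.com:example/mission-control.git /root/mc
  run cp /root/backup/.env /root/mc/.env   # restore secrets from backup
  run pm2 start /root/mc/ecosystem.config.js
}
# DRY_RUN=1; provision   # prints the plan without touching the system
```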
The Honest Accounting
What we gained: speed, reproducibility, auditability, low cost, full automation compatibility.
What we lost: visual debugging (workaround: Playwright), iOS development (workaround: MCP bridge, painful), pair programming (workaround: Telegram screenshots, not great), complex diffs (workaround: git diff --stat then targeted reads).
The losses are real. The workarounds are ugly. We're not pretending this is optimal for every workflow. It's optimal for this workflow — one person steering four AI instances across two servers that do most of the typing.
The terminal isn't the point. The point is that two headless servers turned out to be enough. And we only figured that out because we never had the option to add more.
root@star-command:~# uptime
05:15:32 up 47 days, 2:31, 1 user, load average: 0.12, 0.08, 0.03
$36/month. Ship code.