FoxyyyBusiness
30 days of solo dev shipping: 9 projects, 1 VPS, no Docker — what I actually learned

I'm a solo dev. Over the last 30 days I shipped 9 distinct projects on a single $5 Hetzner VPS, all running concurrently, all publicly accessible right now. Total Docker containers: zero. Total Postgres processes: zero. Total cumulative downtime: zero.

This is a retrospective. Not a Show HN, not a launch announcement, not a "look at my projects" gallery. I want to write down what I actually learned doing this — the bets that paid off, the bets that humbled me, and the framework I converged on by accident around day 12.

If you're a solo dev who keeps reading "ultimate stack for indie hackers" posts and thinking "but I just want to ship something already", this is for you.

The 9 projects in one paragraph

A cross-exchange perpetual futures funding rate scanner across 20 venues (data SaaS, the primary product). A stdlib-only CLI for unified cron and systemd timer observability (an OSS dev tool). An info-product book of 30 production-tested patterns for shipping side projects on a $5 VPS. An autonomous bot that posts hourly funding rate signals to Telegram and a public web feed. An auto-generated daily research blog that templates a structured Markdown post from live data. A directory of 10 trader calculators, each at its own URL for long-tail SEO. A curated directory of infra resources for solo devs. An uptime tracker for the 20 exchanges, with 24h status pages per venue. A packaged historical funding-rate dataset (2.58M rows, daily rebuild, CC BY 4.0). Nine distinct domains, nine distinct customer types, all running on the same Flask + SQLite + systemd stack.

The framework that emerged by accident

I started with one project (the cross-exchange scanner). I assumed I'd spend the full 30 days polishing it. Within ~10 days the scanner had everything it needed technically, and I was about to start working on features that the scanner didn't need because I had nothing else to do. That's the moment I realized the gating items were no longer technical — they were external. Distribution accounts (Reddit, HN, Twitter), payment processor (Lemon Squeezy), domain name. None of these are problems I can code my way out of.

So I parked the scanner and started a second project in a completely different domain. Then a third. By day 18 the framework had crystallized into three rules:

Rule 1 — Park when blocked, not when bored. A "real blocker" is something I cannot resolve alone (a credential, a decision from someone else, a payment, a validation). Not "I want to add another feature". Not "I could improve the test coverage". If I can still write code, prose, tests, or design — that's not a blocker, that's continuation.

Rule 2 — Always have ≥2 projects in flight. When I park a project, I start (or resume) something else immediately. The work queue is never empty. This sounds inefficient but it's actually the opposite: it forces me to never be in the "I'm waiting for X" passive state. There's always a concrete next action.

Rule 3 — When projects start to look alike, force structural diversity. When I had 3 projects and they were all "Flask data API for crypto", I stopped and asked: am I just rebuilding the same thing in different domains? The answer was yes. So I started forcing each new project to be structurally different from the existing ones — different audience, different distribution model, different format. The 9 projects ended up covering 9 distinct shapes (data SaaS, OSS CLI, info product, autonomous notifications, auto-generated content, calculators, curated directory, status pages, packaged dataset). Each one taught me something the others couldn't.

I didn't plan this framework. It emerged because the alternative — sitting on one project waiting for credentials — felt obviously wasteful. Once it was named, it became operational.

The boring stack paid off, again

Every project ships with Python 3.12 + Flask + SQLite (WAL mode) + systemd + vanilla HTML. No Docker, no Postgres, no Redis, no Kafka, no asyncio, no frontend framework, no microservices, no API gateway, no serverless functions.

The reason this works is mostly negative: every "modern" thing I avoided has a real cost in setup time, debugging surface, and ongoing maintenance, and the value those things would provide doesn't matter at solo-dev scale. SQLite in WAL mode handles 50,000 commits per second on a $5 VPS with NVMe — all of my projects combined write maybe 200 rows per minute, roughly 0.007% of that capacity. Postgres would make zero perceptible difference and would add a process to manage, a separate auth surface, a backup story, and a network round-trip for every query.
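For concreteness, WAL mode is a single pragma on the database file. A minimal, self-contained sketch — the table and paths here are illustrative, not from any of the 9 projects:

```python
import os
import sqlite3
import tempfile

# WAL is a persistent, per-file setting: set it once and it sticks.
db_path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(db_path)
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]  # -> "wal"
conn.execute("PRAGMA synchronous=NORMAL")  # common pairing with WAL

conn.execute("CREATE TABLE funding (venue TEXT, symbol TEXT, rate REAL)")
with conn:  # batch writes into one transaction where possible
    conn.executemany(
        "INSERT INTO funding VALUES (?, ?, ?)",
        [("binance", "BTC", 1e-4), ("okx", "BTC", 8e-5)],
    )
```

Readers don't block the writer in WAL mode, which is what makes one SQLite file per host comfortable at this write volume.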

systemd does the job of Docker for any single-host service. A twelve-line unit file gets you auto-restart on crash, auto-start on reboot, structured logging via journalctl, and 3-second deploys via systemctl restart. I use it for every service I ship and I've never had a moment where I missed Docker.
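A unit file in that spirit — every path, name, and user below is hypothetical, not my actual config:

```ini
# /etc/systemd/system/funding-collector.service (illustrative)
[Unit]
Description=Cross-exchange funding rate collector
After=network-online.target

[Service]
Type=simple
User=app
WorkingDirectory=/srv/funding
ExecStart=/srv/funding/venv/bin/python collector.py
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload`, `systemctl enable --now funding-collector` starts it and registers it for boot, and `journalctl -u funding-collector -f` tails the logs.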

The most surprising thing about the boring stack is how small it feels. The whole 9-project ecosystem fits in your head. I can hold the entire surface area mentally: 5 systemd services, 1 SQLite file, ~25 HTML pages, ~30 API endpoints, 3 autonomous timers, 80 MB of resident memory total. There are no hidden processes, no opaque containers, no black-box managed services. If something is wrong, I know where to look.

The four bugs that humbled me

While integrating the 20 exchanges, I introduced four silent bugs in my own normalization code that 100% test coverage didn't catch. Three were unit conversion errors, each off by 5+ orders of magnitude; the fourth was a symbol-normalization miss:

  1. OKX volCcy24h is in base coin units, not USDT. I was treating it as USDT. OKX BTC volume came back as $136k against Binance's $16.5B. I'd been staring at this in the dashboard for two days without noticing because BTC was already at the top of the ranking — the order was right, the magnitude was off by 5 orders.

  2. BTSE volume is contract count, not base coin units. Without the contract size multiplier, my BTC volume calc was 5 orders too high. The leaderboard showed BTSE BTC volume at over $60 trillion. Anyone who read the dashboard would have noticed instantly. I didn't, because I had no aggregate view.

  3. Kraken Futures fundingRate is USD-per-contract per period, not decimal. Their fundingRate of 7.0 means "$7 per contract per period", not "700% per period". I was treating it as decimal and showing Kraken ETH at 1893% APY for two days.

  4. BitMEX uses XBT internally, not BTC. I'd written XBT→BTC normalization for Kraken and KuCoin (which also use XBT) months earlier, but forgot to add it for BitMEX when I integrated it. BitMEX BTC silently disappeared from cross-venue BTC views for the entire time it was in production. Caught only when I added the 17th venue and noticed the leaderboard had 16 entries instead of 17.
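All four fixes reduce to making per-venue semantics explicit instead of assumed. A simplified sketch of the idea — the venue branches, alias table, and contract-size parameter are illustrative, modeled on the bugs above rather than lifted from the actual collector:

```python
# Per-venue semantics written down explicitly, not assumed.
SYMBOL_ALIASES = {"XBT": "BTC"}  # Kraken, KuCoin, BitMEX (bug #4)

def normalize_symbol(sym: str) -> str:
    """Map venue-internal tickers to canonical symbols."""
    return SYMBOL_ALIASES.get(sym.upper(), sym.upper())

def volume_usd(venue: str, raw: float, price: float,
               contract_size: float = 1.0) -> float:
    """Convert a venue's raw 24h volume field into USD."""
    if venue == "okx":   # volCcy24h is in base coin, not USDT (bug #1)
        return raw * price
    if venue == "btse":  # raw is a contract count (bug #2)
        return raw * contract_size * price
    return raw           # venues that already report quote (USDT) volume
```

Keeping the conversion in one function per field means the next venue integration adds a branch here instead of scattering another silent assumption through the parsers.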

What unifies these four bugs is that none of them were in the parsing code. The parsing code was correct in every case. The bugs were in the assumed semantics of the input. My unit tests were green because they pinned the wrong invariants on hand-written JSON fixtures that I had also gotten wrong.

The thing that finally caught all four was a single 30-line function I started running after every collection cycle: sort BTC volume across all sources, look for any source where the value is more than 50× the median or less than 1/50× the median. If you have an outlier of that magnitude, it's almost always a unit bug. I now run this check in CI for every new exchange integration and it has caught zero false positives and four real bugs.
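The check is small enough to sketch in full. This is a simplified version of the idea (names and the 50× factor parameterization are illustrative, not the exact 30-line function):

```python
import statistics

def unit_sanity_check(volumes_by_source: dict[str, float],
                      factor: float = 50.0) -> list[str]:
    """Flag sources whose value is >factor x or <1/factor x the median.

    Outliers of that size in the same metric across sources are almost
    always unit bugs, not market reality.
    """
    if len(volumes_by_source) < 3:
        return []  # a median over one or two sources proves nothing
    median = statistics.median(volumes_by_source.values())
    return [
        source
        for source, vol in volumes_by_source.items()
        if vol > median * factor or vol < median / factor
    ]
```

Run it after every collection cycle and fail CI on a non-empty result; a correct venue integration should never trip it.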

I wrote this up under the working name "structural sanity check on aggregate output" but I'm not sure that's what it's called. Property-based testing is the closest formal cousin but doesn't quite fit. If anyone has a better name, I'd love to hear it.

What I'd do differently

Start the launch material on day one, not day twenty. I waited until I had products to launch before writing the launch drafts. By the time I wrote them, I had four projects without any distribution material at all — orphans. If I were doing this again, I'd write the Show HN draft for a project the same day I started building it, even before v0.1. The exercise of writing the draft forces clarity about what the project is for.

Build the placeholder-replacement script earlier. I have 70+ placeholders scattered across drafts and static pages (clementslowik, clementslowik, github.com/clementslowik/funding-collector, fundingfinder.foxyyy.com) that all need to be replaced when credentials arrive. I wrote the replacement script in week 4. If I'd written it on day 1, every new draft would have used the placeholder syntax from the start, and the launch-day patching would have been trivial.
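A minimal sketch of what such a replacement script looks like — the mapping keys and file extensions are illustrative, and the real values only exist once credentials arrive:

```python
import pathlib

def patch_tree(root: str, replacements: dict[str, str],
               exts: tuple[str, ...] = (".md", ".html")) -> int:
    """Rewrite every placeholder occurrence under root; return files changed."""
    changed = 0
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in exts:
            continue
        text = original = path.read_text(encoding="utf-8")
        for old, new in replacements.items():
            text = text.replace(old, new)
        if text != original:
            path.write_text(text, encoding="utf-8")
            changed += 1
    return changed
```

The point isn't the 20 lines of code — it's that having the script on day 1 forces every new draft to use a single, greppable placeholder syntax from the start.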

Take the credential blockers seriously earlier. I assumed that the credentials (Reddit account, HN karma, GitHub PAT, Lemon Squeezy account) would arrive "soon" and I could focus on the build. They didn't arrive in week 1, then they didn't arrive in week 2, then I started genuinely understanding that the entire monetization pipeline was gated on items I couldn't resolve myself. If I were doing this again, I'd treat credential acquisition as the very first task, not the last.

Write the meta-launch piece earlier. The single Show HN that points at /shipped (the canonical "I shipped 9 projects on a $5 VPS in 30 days" page) is going to be the most important post of the entire 30 days, because it serves as proof for all 9 projects simultaneously. I wrote it on day 30. It should have been drafted on day 5, with the project list updated as new ones shipped.

What I wouldn't change

The 9-projects-in-30-days pace. I expected to feel scattered and unfocused. Instead I have a much wider sense of what each kind of product feels like to build, which is exactly the kind of generalist intuition you don't get from focusing on one thing. The opportunity cost is real (none of the 9 is maximally polished), but the learning rate is many times what it would have been on one project.

The boring stack discipline. Not once did I think "if only I had Docker" or "this would be easier with Postgres". Every time I was tempted to add a new tool, the question "what specific problem does this solve that I have right now" produced an honest "none". The boring stack saved me weeks of decision fatigue.

The work_queue framework. Having a hard rule of "≥2 projects in flight, park when actually blocked, start something different when bored" turned out to be the productivity hack I needed. Not the sexy productivity hack, the boring one — the one that says you don't need a Pomodoro timer, you need a clear next action.

Try it

Everything I built is publicly accessible right now. If you want to verify any of the claims above:

  • /shipped — the canonical list of 9 projects with public URLs and metrics: http://178.104.60.252:8083/shipped
  • /now — the cross-exchange funding rate dashboard: http://178.104.60.252:8083/now
  • /research/2026-04-09 — today's auto-generated research post
  • /status — uptime tracker for the 20 exchanges
  • /boring-patterns — the in-progress patterns book (5 free, 12 more drafted)

Source code for the OSS data collector: pip-installable from http://178.104.60.252:8083/downloads/funding-collector-0.4.3.tar.gz (will move to GitHub when credentials arrive).

If you found this useful, the easiest way to support is to bookmark /now and check it once a day. There's a launch waitlist on every project page if you want the one-time email when the paid tiers go live — no spam, no follow-up sequence, single email.

Comments / corrections / "you're wrong about X" replies are very welcome. The "structural sanity check" pattern naming question in particular is a real ask — if you have a better name for it, I'm reading every reply.
