Deeshan Sharma


I tried to build a personal SaaS with zero backend. Here's where that strategy hits a wall.

I track my overtime in a spreadsheet. Or I did, until I lost three months of data because my formula references broke when I opened the file on a different device.

That was the nudge. I started looking for an app to replace it — something that would track sessions, calculate my earnings at different rates, show me patterns over time. The usual.

What I found surprised me. Every app that solved this problem stored my data on their server. My hours. My rates. My employer name. My project names. On their infrastructure, behind their security decisions, subject to whatever their terms of service said about data portability when (not if) they pivoted or shut down.

That felt wrong. So I decided to build my own.


The constraint that shaped everything

Before writing a single line of code, I set one hard constraint: work data cannot live on any server I control. Not because I have anything to hide — but because this data (what I work on, for whom, for how many hours, at what rate) is genuinely sensitive, and I didn't want to be the entity responsible for its security on behalf of even one user besides myself.

This constraint ruled out the obvious architectures immediately:

  • Traditional backend + Postgres? Eliminated. I'd own the data.
  • Supabase for everything? Same problem for work data.
  • Firebase? Same.

That left an unusual option: client-side only, with the database living on the user's own Google Drive.

The v1 architecture

What I built in version 1 was a pure React SPA with no backend at all:

Browser
  └── React 18 + Vite
  └── sql.js (SQLite running as WASM)
  └── Zustand (state)
  └── Google OAuth PKCE (no client secret)
  └── Google Drive API v3 (direct from browser)

Storage
  └── localStorage (serialised SQLite buffer)
  └── Google Drive (overtimeiq.db — the user's own file)

Infrastructure
  └── Vercel static hosting
  └── No server. No database. No backend. Nothing.

The database is a real SQLite file. sql.js compiles SQLite to WebAssembly and runs it entirely in the browser. On every write, I serialise the database to a Uint8Array, persist it in localStorage, and debounce an upload to Google Drive 10 seconds after the last change. The user's Drive stores overtimeiq.db. My hosting stores nothing about the user.
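The write path above can be sketched roughly like this (names are illustrative, not the actual code; Buffer stands in for the browser's btoa/atob so the sketch runs anywhere):

```typescript
// sql.js's Database#export() returns a Uint8Array. Base64-encode it so it can
// live in localStorage as a string, and debounce the Drive upload so rapid
// edits collapse into a single sync.

function toBase64(bytes: Uint8Array): string {
  return Buffer.from(bytes).toString("base64");
}

function fromBase64(s: string): Uint8Array {
  return new Uint8Array(Buffer.from(s, "base64"));
}

// Generic trailing-edge debounce: the wrapped function fires only once,
// `ms` milliseconds after the last call.
function debounce<A extends unknown[]>(fn: (...args: A) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// Called on every write; the actual Drive upload happens 10 s after the last one.
const scheduleDriveUpload = debounce((_bytes: Uint8Array) => {
  // uploadToDrive(_bytes) would go here — see the upload safety pattern
}, 10_000);
```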

The auth flow uses PKCE — Proof Key for Code Exchange — the correct OAuth flow for SPAs precisely because it doesn't require a client secret. Everything runs in the browser; no server of mine ever touches a token.
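The core of PKCE (RFC 7636) is a random code_verifier whose SHA-256 hash is sent as the code_challenge; the token exchange later proves possession of the verifier. A minimal sketch of the pair, with node:crypto standing in for the browser's Web Crypto:

```typescript
import { createHash, randomBytes } from "node:crypto";

// 32 random bytes → a 43-character URL-safe code_verifier.
function makeVerifier(): string {
  return randomBytes(32).toString("base64url");
}

// S256 method: code_challenge = base64url(SHA-256(code_verifier)).
function makeChallenge(verifier: string): string {
  return createHash("sha256").update(verifier).digest("base64url");
}
```

In the browser itself you'd use crypto.getRandomValues and crypto.subtle.digest; the math is identical.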


Why this worked well — better than I expected

Honestly, this architecture solved most of what I cared about:

Offline-first, genuinely. The database lives in localStorage. The app loads from the Service Worker cache. When you're on a flight, everything still works. Drive sync queues up and flushes on reconnect.

Sync without a sync server. I compare modifiedTime on the Drive file against settings.last_synced_at in the local DB. Drive newer → download and replace. Local newer → upload. Within 30 seconds of each other → no action (handles the same-device multi-tab edge case). Simple, reliable, and requires no server infrastructure at all.
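The decision logic above is tiny enough to show in full. A sketch with illustrative names (Drive's files.get returns modifiedTime as an RFC 3339 string; here both sides are already milliseconds):

```typescript
type SyncAction = "download" | "upload" | "none";

// Timestamps within this window are treated as "same save" — this absorbs the
// same-device multi-tab case where both clocks saw the same write.
const SKEW_MS = 30_000;

function decideSync(driveModifiedMs: number, lastSyncedMs: number): SyncAction {
  const delta = driveModifiedMs - lastSyncedMs;
  if (Math.abs(delta) <= SKEW_MS) return "none"; // within 30 s: already in sync
  return delta > 0 ? "download" : "upload";      // Drive newer vs. local newer
}
```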

The upload safety pattern. I upload to overtimeiq_tmp.db, then rename atomically to overtimeiq.db via the Drive API. A partial upload or a browser crash mid-upload can never corrupt the live file. One version history entry is kept by Drive automatically as a free backup.
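The two-step shape of that upload looks roughly like this (the DriveClient interface is an illustrative wrapper over the Drive API, not a real client):

```typescript
interface DriveClient {
  upload(name: string, bytes: Uint8Array): Promise<string>; // resolves to a fileId
  rename(fileId: string, name: string): Promise<void>;
}

// A crash or dropped connection between step 1 and step 2 leaves the live
// overtimeiq.db untouched — only the tmp file is ever partially written.
async function safeUpload(drive: DriveClient, bytes: Uint8Array): Promise<void> {
  // 1) write the full payload under a temporary name
  const tmpId = await drive.upload("overtimeiq_tmp.db", bytes);
  // 2) only after the upload fully completed, take over the live name
  await drive.rename(tmpId, "overtimeiq.db");
}
```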

Earnings calculations across midnight. This was the most interesting logic to get right. A shift that starts at 22:00 Friday and ends at 02:00 Saturday spans two calendar days — and potentially two different rate multipliers (weekday vs. weekend). I calculate hours_before_midnight = 24:00 − start_time and hours_after_midnight = end_time, apply the correct multiplier to each segment, and sum them. The December 31 → January 1 case (where the second segment is a public holiday) is handled automatically from the calendar date. No user input required.
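The split itself is a few lines. A sketch with hours as decimal numbers and illustrative names:

```typescript
// A shift that ends "earlier" than it starts must have crossed midnight.
function splitShift(startHour: number, endHour: number): { before: number; after: number } {
  if (endHour >= startHour) return { before: endHour - startHour, after: 0 }; // same day
  return { before: 24 - startHour, after: endHour };                          // crosses midnight
}

// Each segment is billed at its own day's multiplier (weekday vs. weekend vs. holiday).
function earnings(
  startHour: number,
  endHour: number,
  rate: number,
  day1Multiplier: number,
  day2Multiplier: number,
): number {
  const { before, after } = splitShift(startHour, endHour);
  return before * rate * day1Multiplier + after * rate * day2Multiplier;
}
```

So a 22:00 → 02:00 Friday shift at rate 100 with a 1.5× weekday and 2× weekend multiplier bills two hours at each rate.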

The schema design held up perfectly. A jobs table with multipliers, a logs table with a crosses_midnight boolean, a holidays table seeded with public holidays, an active_session singleton for the punch-in state, and a settings singleton for preferences. Clean, extensible, and completely portable as a single file.


Where the ceiling appeared — the first crack

When I started thinking about sharing the app beyond just myself — even just a small invite-only beta with colleagues — I hit the first wall.

Invite-only access requires server-side token validation.

If you want to gate who can sign in, the check has to happen somewhere the user can't bypass. In a pure client-side app, there's nowhere to put that check. I could ship a whitelist in the JavaScript bundle, but a static JS file is fully readable and editable by the user — anyone could add their own email to the list and get in.

For a real invite flow — generate a unique token, send it via email, validate it on claim, mark it used — you need a server-side route handler. There's simply no way around this in a public SPA.
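The server-side claim step reduces to a small state machine. A sketch with an in-memory store standing in for the real database behind the API route (all names are illustrative):

```typescript
interface Invite {
  email: string; // the address the invite was sent to
  used: boolean; // single-use flag
}

type ClaimResult = "ok" | "invalid" | "used" | "wrong_email";

// Runs server-side, where the user can't edit the store or skip the checks.
function claimInvite(store: Map<string, Invite>, token: string, email: string): ClaimResult {
  const invite = store.get(token);
  if (!invite) return "invalid";
  if (invite.used) return "used";
  if (invite.email !== email) return "wrong_email";
  invite.used = true; // mark consumed before granting access
  return "ok";
}
```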

I wanted to control beta access carefully. A beta full of the wrong users (people who weren't actually going to use the app to track overtime) would give me useless feedback. I needed invite-only, and invite-only needed a server.


Where the ceiling appeared — the second crack

The second wall was worse, because I didn't see it until later.

I built a freemium pricing model into v1. Free tier with a 3-month visibility limit, Pro tier with full history and exports. The way I initially planned to gate Pro features: store a is_pro flag in the SQLite settings table, set it when payment is confirmed, check it before rendering locked features.

Then I thought for about five minutes about what that actually means.

The SQLite file lives on the user's own Google Drive. Any user can open overtimeiq.db in a free SQLite browser (there are several), find the settings table, change is_pro from 0 to 1, save the file, and reload the app. The entire paywall disappears after about two minutes of effort.

For a ₹149/month subscription, this is real money. I couldn't ship security theater and call it a business.

A full backend server would solve this — the gating logic lives on a server the user can't modify. But a full backend is disproportionate overhead for a product at this scale. I didn't want to run a database server, manage migrations in production, and pay for infrastructure for something that might have 50 users.

There had to be a middle path.


What forced the rebuild

These two ceilings together — invite access control and cryptographically secure feature gating — forced a v2.

Not a complete rewrite of the app. The core of it — the SQLite schema, the Drive sync, the earnings logic, the entire UI — that's all unchanged. What changed was the addition of a thin server layer for the two specific things that genuinely require it:

  1. Identity and access control. I added Supabase for user management, invite lifecycle, and subscription state. Critically: Supabase holds identity data only. No work data (logs, jobs, earnings) ever enters Supabase. The privacy constraint still holds.
  2. Cryptographically enforced feature gating. I replaced the SQLite is_pro flag with an ECDSA-signed JWT, minted on a server that holds the private key. The public key is hardcoded in the JavaScript bundle. Any edit to the token — including extending the expiry — invalidates the signature. The user cannot forge a valid token without the server's private key.

The architectural insight that made this feel clean rather than like a compromise: work data and identity data have completely different privacy requirements. Work data belongs on the user's own Drive. Identity data (who you are, whether your subscription is active) needs to live on a server that can be trusted. Once I drew that line explicitly, the architecture followed naturally.
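The mint-and-verify core can be sketched in a few lines. This is deliberately simplified — just base64url(payload).base64url(signature) rather than a full JWT, with node:crypto standing in for the browser's Web Crypto — but it shows why tampering fails:

```typescript
import { generateKeyPairSync, createSign, createVerify, KeyObject } from "node:crypto";

// Server-side only: signing requires the private key, which never ships.
function mintToken(claims: object, privateKey: KeyObject): string {
  const body = Buffer.from(JSON.stringify(claims)).toString("base64url");
  const sig = createSign("SHA256").update(body).sign(privateKey); // ECDSA P-256
  return `${body}.${sig.toString("base64url")}`;
}

// Client-side: the public key baked into the bundle can verify but not sign.
function verifyToken(token: string, publicKey: KeyObject): Record<string, unknown> | null {
  const [body, sig] = token.split(".");
  if (!body || !sig) return null;
  const valid = createVerify("SHA256")
    .update(body)
    .verify(publicKey, Buffer.from(sig, "base64url"));
  if (!valid) return null; // any edit to the payload breaks the signature
  const claims = JSON.parse(Buffer.from(body, "base64url").toString());
  if (typeof claims.exp === "number" && claims.exp * 1000 < Date.now()) return null;
  return claims;
}
```

Editing the payload — say, pushing exp into the far future — changes the signed bytes, so the signature check fails and the token is rejected.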

What the ceiling taught me

The "no backend" constraint is genuinely achievable — and genuinely valuable — for a certain category of application feature. Data storage, offline operation, complex calculations, sync: all of these can be done entirely client-side with the right tools.

But there's a ceiling. Two specific things require a server because they fundamentally depend on something the user can't control:

  • Access control (is this user allowed in?)
  • Cryptographic trust (can I issue a token this user can't forge?)

Everything else? Genuinely serverless is a viable architecture. The v1 prototype showed that the core of the application — 90% of what users actually interact with — works without a server.

The server that v2 added is minimal: three API routes and a Supabase project. The work data — the thing that actually matters for privacy — never touches it.


What's next

In the next post in this series, I'll go deep on the SQLite-in-browser setup: how sql.js works, the serialisation pattern, the Drive sync in detail, and the upload safety mechanism that prevents corruption on interrupted uploads.

After that: the PKCE auth flow, the ECDSA feature gating system, and the invite-only beta architecture.

If you're building something similar — a privacy-first personal tool with user data that shouldn't live on your server — I'd be curious what constraints are shaping your architecture. Drop a comment.


I'm building OvertimeIQ — a personal overtime tracker where your data lives on your own Google Drive. This is part of an ongoing series documenting the technical decisions behind the build.
