
RAXXO Studios

Posted on • Originally published at raxxo.shop

Neon Database Branching Saved Me 200 EUR Every Month

  • I run 14 Neon database branches per project for the price of one Postgres instance

  • Neon copy-on-write branching clones a 12 GB database in under 1 second

  • A GitHub Action spins up a fresh branch per pull request and tears it down on merge

  • My Postgres bill dropped from 240 EUR to 40 EUR per month after switching to Neon Launch

I used to pay 240 EUR a month for Postgres because every project had a dev DB, a staging DB, and a prod DB, and I had four projects. Then I switched to Neon, kept 14 environments running, and the bill dropped to 40 EUR. Here is exactly how that math works.

Why I had 14 database environments anyway

Solo dev, four products, one me. The math gets out of hand fast.

Every product needs at least three Postgres environments. Local for me, staging for the GitHub preview deploy, prod for the customers. That is twelve. Add two long-running feature branches I keep around (a content migration and an experimental schema for the Studio backend), and I am at 14.

On the old setup I had a Supabase instance per environment. The biggest ones cost roughly 25 EUR a month on the smallest paid tier. I needed paid tiers because the free tier auto-pauses after a week of inactivity, and a paused dev DB at 9 PM on a Sunday is the worst kind of papercut. Across the fleet the average worked out to about 17 EUR, so 14 instances came to roughly 240 EUR a month.

The painful part: 13 of those 14 databases sat idle 95% of the time. I would touch the dev DB twice a day, push to staging maybe four times a week, and prod hummed along on its own. I was paying for 14 always-on compute boxes to use about one and a half of them.

Two more annoyances pushed me to look elsewhere. First, every time I wanted to test a destructive migration locally, I had to pg_dump prod, restore to dev, and pray the schema lined up. That was 20 minutes per test. Second, when I opened a pull request on the Shopify backend project, the preview deploy pointed at staging, which meant two PRs in flight would step on each other.

I needed something where compute scales to zero when I am not looking, and where forking the database is free. Neon does both.

The other thing I underestimated was how often I avoided risky work because the test loop was slow. If running a destructive migration on a clean copy takes 20 minutes, I do it twice a week. If it takes 1 second, I do it twenty times a day. That changes how I write migrations. I started writing smaller, more reversible ones, because the cost of trying was effectively zero.

How Neon branching actually works under the hood

Neon splits Postgres into two pieces: storage and compute. Storage is a shared, log-structured layer that versions pages by LSN. Compute is a regular Postgres instance that reads and writes against that storage.

A branch is just a pointer. When I create a branch from main, Neon does not copy the data. It marks the current LSN (log sequence number), then any new writes on the branch go to fresh pages, and reads fall through to the parent's pages for anything unchanged. Copy-on-write, the same trick ZFS and BTRFS use for snapshots.

The result: forking a 12 GB production database takes under 1 second. I have timed it. Here is the CLI call I run a few times a day:


neon branches create \
  --project-id rough-sun-12345 \
  --name pr-247-fix-checkout \
  --parent main


That returns a connection string in maybe 800 ms. The branch is a full read-write copy of prod as of the moment I ran the command. I can drop tables, run migrations, seed garbage data, whatever, and main does not notice.

Two more details that matter for the cost story. Each branch gets its own compute endpoint, but compute autosuspends after 5 minutes of inactivity by default. A suspended branch costs zero compute. Storage is billed once for the parent plus only the diff each branch has written. My 14 branches use about 14.3 GB total because the diffs are tiny.

When I actually hit a suspended branch, it cold-starts in around 300 ms. Annoying for a single curl, invisible inside any real app session. Worth it for a 200 EUR a month delta.

One thing that surprised me: the storage layer is genuinely shared, not "shared-ish". I ran a test where I created a branch, dropped a 4 GB table, and checked the parent's storage. No change. The drop only updated the branch pointer. That same 4 GB table existed in 13 other branches at the same time, and Neon stored it once. The branch-per-PR workflow only works because of that property; otherwise 14 branches of a 12 GB DB would cost as much as the 14 separate Postgres instances I started with.
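
The in-database check is crude, because pg_database_size reports logical size rather than what Neon actually bills for, but it was enough to confirm the parent did not balloon. A sketch of what I ran on each side (the authoritative per-branch storage numbers live in the Neon console and API):

```sql
-- Run once against the parent branch and once against the child,
-- before and after the DROP TABLE on the child.
SELECT pg_size_pretty(pg_database_size(current_database())) AS logical_size;
```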

The branch-per-pull-request workflow

The cleanest workflow I built is one branch per GitHub PR, created by CI on PR open and destroyed on merge. The preview deploy points at it. Every reviewer gets an isolated database that mirrors prod schema.

Here is the GitHub Action fragment that does the create step:


on:
  pull_request:
    types: [opened, reopened]
jobs:
  neon-branch:
    runs-on: ubuntu-latest
    steps:
      - uses: neondatabase/create-branch-action@v5
        id: branch
        with:
          project_id: ${{ vars.NEON_PROJECT_ID }}
          branch_name: pr-${{ github.event.number }}
          api_key: ${{ secrets.NEON_API_KEY }}
      - name: Push DB URL to Vercel
        run: |
          vercel env add DATABASE_URL preview \
            --token ${{ secrets.VERCEL_TOKEN }} \
            --git-branch ${{ github.head_ref }} \
            <<< "${{ steps.branch.outputs.db_url }}"


A matching workflow on PR close runs neondatabase/delete-branch-action. The whole loop costs nothing extra because compute autosuspends within 5 minutes of the preview deploy going quiet.
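
The close side is symmetric. A sketch, assuming the delete action's version and input names mirror the create side (check the action's README before copying):

```yaml
on:
  pull_request:
    types: [closed]   # fires on merge and on plain close
jobs:
  delete-neon-branch:
    runs-on: ubuntu-latest
    steps:
      - uses: neondatabase/delete-branch-action@v3
        with:
          project_id: ${{ vars.NEON_PROJECT_ID }}
          branch: pr-${{ github.event.number }}
          api_key: ${{ secrets.NEON_API_KEY }}
```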

For local dev I keep one personal branch per machine, and I switch to it with a tiny env injection block in my shell:


export DATABASE_URL=$(neon connection-string \
  --project-id rough-sun-12345 \
  --branch-name local-acme-ltd-fixtures)
bun run dev


The branch local-acme-ltd-fixtures has my seed data for an Acme Ltd test customer. If I want a fresh copy of prod, I delete that branch and recreate it from main. 1 second, zero ceremony. Compare that to the old pg_dump-and-restore dance that ate 20 minutes every morning I wanted clean data.

This is also the workflow I documented in The 5 Postgres Extensions Every Shopify Backend Needs, because pg_stat_statements and pgvector both need to be enabled per branch.

One useful side effect: every PR review now happens against real-shaped data. I seed each PR branch from prod, scrub the personal info with a single SQL script, and reviewers can poke at the preview deploy with realistic counts and edge cases. Bugs that only show up at 50,000 rows actually show up. Bugs that only happen with three special-character customer names also show up. That alone caught two bugs last month that would have shipped to prod under my old "test against an empty seeded DB" workflow.
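
The scrub script is a single pass of UPDATEs. A sketch, with table and column names that are hypothetical stand-ins for my real schema:

```sql
-- Hypothetical schema: overwrite PII with deterministic fakes
-- so reviewers still see realistic-looking, distinct rows.
UPDATE customers
SET email     = 'customer-' || id || '@example.com',
    full_name = 'Customer ' || id,
    phone     = NULL;

-- Anything that must never leave prod gets emptied outright.
TRUNCATE payment_tokens;
```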

Cost math, the gotchas, and what broke

The Neon Launch plan sits at 50 EUR per month and includes 300 compute hours, 10 GB storage, and unlimited branches. I never hit the branch limit because there is none on Launch. I do flirt with the compute hour cap when I forget to close a long-running psql session against a branch (autosuspend does not kick in while a connection is active).

Real numbers from my last invoice:

  • Compute: 247 of the included 300 hours, so 0 EUR (past the cap it bills at 0.16 EUR per hour)

  • Storage: 14.3 GB total, 4.3 GB over the included 10 = 1.50 EUR

  • Plan base: 50 EUR

  • Total: 51.50 EUR for one project

Wait, I said 40 EUR. The trick is I consolidated three of my four products into one Neon project, each as a separate database within the project. A branch clones every database in the project at once, and each product keeps its schema isolated in its own database, so I get the same isolation without paying for four projects. The fourth product runs on the free tier because it is the Claude Blueprint demo data and barely sees traffic.

Three things broke along the way.

Cold starts on tiny endpoints. A 300 ms cold start is fine inside a request, painful for a serverless cron on a tight schedule. Worse, a cron firing every 30 seconds kept the endpoint awake around the clock and quietly burned compute hours. I stretched those crons to every 5 minutes; each tick lands right at the autosuspend boundary, so the endpoint stays effectively warm without the 30-second drumbeat.

Connection pooler quirks. Neon's PgBouncer-flavored pooler does not support session-level features like prepared statements by default. I had to swap to the unpooled endpoint for a job runner that uses LISTEN/NOTIFY. Worth knowing before you migrate. I covered the same gotcha pattern in The 7 Postgres Indexes That Took My API From 400ms to 40ms.

Branch sprawl. Without a cleanup action, I ended up with 60+ stale PR branches inside a month. The delete-on-merge workflow fixes that, plus a weekly cron that nukes anything older than 14 days.
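
The weekly prune as a scheduled workflow. A sketch: the JSON field names and neonctl flags are from memory, so verify them against the CLI docs before relying on it:

```yaml
on:
  schedule:
    - cron: "0 6 * * 5"        # Fridays, 06:00 UTC
jobs:
  prune-stale-pr-branches:
    runs-on: ubuntu-latest
    steps:
      - name: Delete pr-* branches older than 14 days
        env:
          NEON_API_KEY: ${{ secrets.NEON_API_KEY }}
          PROJECT_ID: ${{ vars.NEON_PROJECT_ID }}
        run: |
          npm install -g neonctl
          cutoff=$(date -d '14 days ago' +%s)
          neonctl branches list --project-id "$PROJECT_ID" --output json \
            | jq -r --argjson cutoff "$cutoff" '
                .[]
                | select(.name | startswith("pr-"))
                | select((.created_at | fromdateiso8601) < $cutoff)
                | .name' \
            | xargs -r -n1 neonctl branches delete --project-id "$PROJECT_ID"
```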

Schema drift between long-lived branches. Two of my 14 branches are not PR-scoped, they are the content migration and the experimental schema I mentioned earlier. After three weeks of parallel work, those branches had drifted from main enough that merging back was painful. The fix was a Friday ritual: rebase each long-lived branch on top of fresh main, run the test suite against it, fix what broke. 30 minutes a week, no surprises at merge time.

Region pinning. I run prod in eu-central-1. The first time CI created a branch it defaulted to us-east-2 and added 90 ms to every preview deploy round trip. Pin the region in the create call.

Bottom Line

I went from 240 EUR a month for 14 always-on Postgres instances to 40 EUR a month for the same 14 environments, plus instant clones of prod whenever I want them. The savings paid for my Shopify plan with room to spare, but the bigger win is the workflow change. Every PR gets a real database. Every destructive migration gets a real test against real data in 1 second instead of 20 minutes.

If you are still running one always-on Postgres per environment as a solo dev or tiny team, the math almost always favors a switch. Start with one project, port a single product, and watch what your dev velocity does when forking a fresh DB is free. The first time a teammate (or future me) opens a PR and sees a green preview deploy with isolated data, the migration pays for itself.
