Joel Milligan

Re-thinking DevOps practices to handle accelerating dev throughput

In the age of AI orchestration, GitFlow and other dev-centric branching models just can't keep up with the throughput of pull requests. "Continuously merging" breaks down quickly when every dev has migrations, rewrites, SOPS edits, renames, new CI pipelines, and agent file updates in flight. I had worked for nearly 10 years under such development flows, and when I started as a Founding Engineer at Neno I already knew we would need a new approach to stay competitive and on the cutting edge.

I'm going to walk you through exactly how we did it, why the old way is killing your velocity, and what I think the future of deployment looks like. We call it PReFlow.


Red Tape and Broken Environments

GitFlow can honestly be a great development approach. The pitch sounds reasonable: you have a dev branch, a staging branch, and main (or prod). Code flows through each stage like a responsible assembly line. Merge to dev, deploy, test. Promote to staging, deploy, test. Promote to prod, deploy, pray.

In practice? I've found it to be a traffic jam with no cops.

Here's what actually happens. Engineer A merges a feature to dev. Engineer B merges a different feature to dev ten minutes later. Engineer A's feature depends on a migration that hasn't been tested against Engineer B's schema change. Dev breaks. Now neither of them can test. Engineer C, who was about to merge something completely unrelated, is blocked too. Everyone stops shipping and starts debugging the shared environment.

Multiply this by a team of four with ambition, the kind of team where everyone has multiple PRs open at once, and you get gridlock. Not the productive kind of friction that catches bugs early, but the soul-crushing kind where engineers spend their mornings in Slack figuring out whose turn it is to deploy to dev.

The merge conflicts alone are brutal. When five branches are all targeting dev, and each one touches overlapping files, you're resolving conflicts that have nothing to do with your actual work. You're doing merge archaeology, except the artifacts are three hours old and already stale.

And the worst part? None of this has anything to do with building the product. It's pure organizational overhead. Red tape dressed up as engineering process.

GitFlow was designed for a different world. Thanks to git worktrees and parallel agent sessions, we don't live in that world anymore.


The Dev Environment Trap

Dev, staging, and prod environments are a logical approach. Engineering teams need some buffer between "I think this works" and "customers are seeing this." But I've come to believe this setup persists out of convenience rather than because it's right for most teams: it's easier to copy-paste a cloud environment's setup a few times than to invest in better pipelines.

It works, up to a point. But I'd caution that shared dev environments don't scale the way you need them to.

Every new developer added to the team increases the contention on that shared environment. Every open PR is another potential conflict. And database migrations? Those are the real killers. When two PRs both need to alter the same table, or even when they alter different tables but in conflicting orders, your shared dev environment turns a simple feature push into meetings, syncs, and other velocity-killing coordination.

You end up with coordination overhead that has nothing to do with code quality:

  • "Hey, can I deploy to dev? I need to test my migration."
  • "Hold on, I'm in the middle of testing something."
  • "Who ran migrate last? The schema doesn't match what I expected."
  • "Dev is broken again. Who deployed last?"

These are not engineering problems. These are scheduling problems. You've turned your development workflow into a shared calendar.

At Neno, we had four engineers all moving fast. AI-assisted development meant we were producing PRs at a rate that would've been unthinkable two years ago. We regularly had 20 to 30 PRs in flight at the same time. GitFlow would have buried us. Keeping a shared dev environment running would have been a full-time job.

I saw this coming because I had lived it before, and I knew there was a better way.


PReFlow: One Environment Per PR

Here's what we built instead, and it's deceptively simple.

When an engineer opens a pull request against main, a unique preview environment spins up automatically. That environment is theirs. It has its own database, its own services, its own URL. It's a complete, isolated copy of the application running their branch — and only their branch.

No conflicts with other engineers. No waiting for someone else to finish testing. No broken shared state. You open a PR, you get an environment. Done.

The full flow looks like this:

  1. Branch from main — always from the latest main, always targeting main. No dev branch. No staging branch. Just good, clean, trunk-based development.
  2. Open a PR — this triggers the creation of a dedicated preview environment. CI runs. The engineer (or their AI agent) tests against a real, running instance of the app with their changes applied.
  3. Merge to main — when the PR is approved and merged, it doesn't go straight to customer-facing production. Instead, it first deploys to an internal sandbox — a production-like environment that our team uses.
  4. Sandbox succeeds → deploy to prod — if the sandbox deployment is healthy, the pipeline automatically promotes the change to production.
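The sandbox gate in steps 3 and 4 is the heart of the pipeline, so here's a minimal sketch of that promotion logic in shell. Everything here is illustrative: the `deploy` and `promote` function names are mine, and the real health check would be something like a `curl` against the sandbox's health endpoint rather than a pluggable command.

```shell
#!/usr/bin/env bash
# Sketch: deploy to sandbox, verify health, and only then promote to prod.
set -euo pipefail

deploy() {
  # Stand-in for a real deploy, e.g. `gcloud run deploy ...`.
  echo "deploying to $1"
}

promote() {
  # $1 is a health-check command; in reality this might be
  # `curl -fsS https://sandbox.example.com/healthz`.
  local health_check="$1"
  deploy sandbox
  if "$health_check"; then
    deploy prod
  else
    echo "sandbox unhealthy; holding back from prod" >&2
    return 1
  fi
}

# Healthy sandbox: the change is promoted straight through to prod.
promote true
```

Because the health check is just a command, a failing probe simply stops the pipeline at the sandbox stage, which is exactly the "canary" behavior described below.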

*(Diagram: the Neno deployment pipeline, from PR preview through sandbox to prod.)*

That's it. No dev branch. No staging branch. No environment scheduling. No merge archaeology.

Every PR is independent. Every engineer is unblocked. And the pipeline from "code written" to "code in production" is as short as it can possibly be without being reckless.

We get the safety of staged deployments without the organizational overhead. The sandbox acts as our canary — if something breaks there, it never reaches customers, and only the offending merge needs to be investigated, not a tangled mess of five developers' changes landing simultaneously.


The Numbers

This isn't theoretical. We've been using this system at Neno for 4 months, with a team that has since grown to 6 devs.

Here's what it looked like:

  • 30-35 PRs merged per week across the team, with an average of 3,000 LoC
  • 30-35 deployments to production per week — each one isolated, tested, and promoted through the pipeline
  • Zero production downtime attributable to deployment issues over the past 4 months
  • 0 broken shared environments — because there are no shared environments to break
  • 6-minute average time from PR merge to production — including the sandbox validation step

The point is that this system removed an entire category of work from our engineers' plates. Nobody at Neno spent time debugging dev. Nobody waited for their turn to deploy. Nobody resolved merge conflicts caused by a shared integration branch. That time went back into building product.

When you're a small team competing against companies with ten times your headcount, that reclaimed time is everything.


Our Setup

Now for the important part: how to replicate this.

Our setup is quite simple; I'll hit the main points:

  • Cloud Provider: GCP
  • Infrastructure as Code: Pulumi in TypeScript, with state self-hosted in a GCS bucket
  • Frontends: React projects deployed as static sites through Cloudflare
  • Git and CI Platform: GitHub, though we aren't thrilled with them

We did follow the same shape as GitFlow, using Pulumi to create preview (dev), sandbox (staging), and production environments. However, we don't deploy PRs to the default Cloud Run instance we set up for preview. Instead, we take another approach.

Here's the blow-by-blow:

Step 1 (~28 seconds): The CI pipeline starts by spinning up a new database within our Cloud SQL instance, named after the PR: api_pr_xxx.
Step 2 (~1-2 minutes): Next, we compute all the domains for the preview environment and inject them into the builds. We then build the API Docker image from our monorepo and, in parallel, build the frontends. Any DB migrations also run during this step.
Step 3 (~1 minute): We deploy the new API image directly from CI via the gcloud CLI. Frontends are pushed to Cloudflare preview deployments.
Step 4 (~4 seconds): We comment on the PR with links to all the domains for the running services.
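To make the steps above concrete, here's a hedged shell sketch of the naming and domain logic. The domain patterns, function names, and PR number 123 are all illustrative assumptions, not our real values; only the api_pr_xxx database naming comes from Step 1, and the commented gcloud invocations show roughly what Steps 1 and 3 run.

```shell
#!/usr/bin/env bash
# Sketch of the per-PR naming used by the preview pipeline.
set -euo pipefail

# Step 1: derive the Cloud SQL database name from the PR number.
pr_db_name() {
  echo "api_pr_${1}"
}

# Step 2: compute the preview domains injected into the builds.
# (Hypothetical domain patterns, for illustration only.)
pr_preview_domains() {
  echo "https://api-pr-${1}.preview.neno.example.com"
  echo "https://pr-${1}.neno-frontend.pages.dev"
}

# The actual provisioning and deploy calls would look roughly like:
#   gcloud sql databases create "$(pr_db_name 123)" --instance=OUR_INSTANCE
#   gcloud run deploy "api-pr-123" --image="$API_IMAGE" --region=OUR_REGION
# with the frontends pushed to Cloudflare preview deployments.

pr_db_name 123
pr_preview_domains 123
```

Because every resource name is a pure function of the PR number, the cleanup action on merge can recompute the same names and tear everything down deterministically.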

And that's IT.

Working on a new feature and you want to share it with someone? Just share the preview frontend link. It's that easy.

We also run a cleanup action on merge, but I'll not cover that now.


Clues We're on the Right Track

If you think this is niche, a cute optimization for a small startup, I'd ask you to look at what's happening in the industry right now.

In February 2026, OpenAI's VP of Engineering Srinivas Narayanan revealed that internal teams had shipped a beta product with zero human-written code. The entire thing was generated by Codex agents. Not scaffolded by AI and polished by humans; fully AI-authored, start to finish.

Around the same time, Fortune reported that engineers at both Anthropic and OpenAI said AI now writes 100% of their code. The humans are reviewing, directing, and architecting — but the actual code production is machine-speed.

Think about what that means for deployment infrastructure.

If your engineers (or their AI agents) can produce code at 10x the previous rate, your deployment pipeline needs to handle 10x the throughput. GitFlow couldn't handle 5 PRs a day gracefully. What happens when AI agents are opening 50?

Shared dev environments that buckle under four engineers will collapse under the weight of AI-generated pull requests. The bottleneck won't be writing code anymore: it already isn't. The bottleneck will be getting that code tested, validated, and into production without everything falling over.

That's exactly the problem PReFlow solves. Every PR gets its own isolated world. It doesn't matter if a human wrote it or an AI agent did. It doesn't matter if there are 5 PRs open or 50. The system scales horizontally because each PR is independent.

This is where software development is headed. The teams that figure out deployment at AI speed will ship circles around the teams still debating whose turn it is to deploy to dev.


Build for Throughput

Your deployment pipeline is your competitive advantage. Not your framework choice. Not your cloud provider. Not whether you're using the latest AI model. The thing that determines how fast you ship — and therefore how fast you learn, iterate, and win — is how quickly a change can go from "written" to "live" without breaking anything.

GitFlow was a good answer to a 2010 problem. Shared dev environments were a reasonable compromise when teams were small and PRs were infrequent. But we're in a different era now. AI is writing code at top speed. Small teams are producing enterprise-level output. And the old deployment models are buckling.

If you're a founder, an engineering lead, or just someone who's tired of spending their mornings debugging a shared dev branch, consider this: kill your dev environment. Kill your staging branch. Give every PR its own world, validate through an internal sandbox, and ship to prod with confidence.

It's simpler than it sounds. And it's faster than you think.

Joel is a Founding Engineer of Neno. He writes about engineering velocity, AI-assisted development, and the systems that make small teams dangerous.
