DEV Community

Warren Cain

I Mass-Produced Spaghetti Code for Three Months Before I Figured Out the Actual Problem

Last October I was sitting in my kitchen at 1am watching Claude Code refactor an auth middleware. Feature number four of twelve. I'd been at this since 9am. Not building anything. Just waiting. Watching an AI work, clicking approve, feeding it the next thing. Like a hall monitor for a robot that's smarter than me.

I don't have a CS degree. I don't have a technical background at all. I'm a founder who builds product with Claude Code and Codex CLI. Before those tools existed I could not ship software. After they existed I could, and for about two weeks that felt like the most important thing that had ever happened to me professionally.

Then it got boring. Then it got really boring. Then it got so boring that I started questioning whether I'd just traded one bottleneck for another.

The bottleneck was me

My workflow every single day for three months: give Claude Code a feature. Watch it think. Watch it code. Review the diff (pretend I understand the diff). Commit. Give it the next feature. Twelve features took about three days. I was the slowest part of my own AI-powered development pipeline. That's a humbling thing to realize about yourself.

The obvious move was running multiple agents at once. Four terminals, four Claude sessions, four features simultaneously. I felt smart for about two hours. Then two agents both rewrote auth.ts in completely different ways and I spent the rest of my afternoon trying to understand a merge conflict that, honestly, I never fully understood. I'm bad at git. I've always been bad at git. I just kind of stared at it until the diff looked plausible and committed. That was my merge strategy. Squinting.

This happened three more times over the next month. I kept telling myself I'd be more careful about which features I ran in parallel. I kept forgetting which files overlapped because I do not have the codebase memorized. I am a person who describes features to an AI and hopes.

The retreat

So I went back to one-at-a-time mode. Three days for a dozen features. Safe. Slow. My entire product roadmap gated on my willingness to sit in a chair and watch a cursor blink.

I think a lot of founders who build with AI agents are quietly living with this exact problem and just accepting it as the cost of doing business. Nobody talks about it because it sounds ridiculous. "I have access to the most advanced coding AI on the planet and my bottleneck is merge conflicts." It sounds like a complaint from the future that nobody would sympathize with.

But it ate months of my life. Actual months. I could have shipped twice the product in that window.

So I built the thing that should have existed already

So I built the coordination layer myself. It took three months. Which means I spent three months building a tool to recover the three months I lost babysitting agents. I try not to do that math.

The tool is called OpenWeft. It's a CLI. You give it a list of features; it figures out which ones would touch the same files, runs the safe ones at the same time in separate git worktrees, merges everything back automatically, re-analyzes the queue, and runs the next batch. It runs on your existing $20/mo Claude or Codex subscription. No API keys, no new accounts.

```shell
npm install -g openweft
openweft add "Add password reset flow"
openweft add "Add audit log export"
openweft add "Refactor auth middleware"
openweft start
```
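For context, here's roughly the manual git-worktree workflow that gets automated here, sketched in a throwaway repo. The branch names, file names, and commits are made up for illustration; the point is that each agent gets its own working directory and branch, and non-overlapping features merge back cleanly.

```shell
# Sketch of the manual git-worktree workflow, run in a throwaway repo.
# Branch and file names are illustrative, not OpenWeft internals.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email "demo@example.com"
git config user.name "Demo"
git commit -q --allow-empty -m "init"

# One isolated worktree + branch per feature, so parallel agents
# never share a working directory.
git worktree add -q ../wt-reset -b feat/password-reset
git worktree add -q ../wt-audit -b feat/audit-export

# Each "agent" commits on its own branch in its own directory.
(cd ../wt-reset && echo reset > reset.ts && git add . && git commit -qm "password reset")
(cd ../wt-audit && echo audit > audit.ts && git add . && git commit -qm "audit export")

# Merge both back. No conflict: the two features touch different files.
git merge -q --no-edit feat/password-reset
git merge -q --no-edit feat/audit-export

# Clean up the worktrees.
git worktree remove ../wt-reset
git worktree remove ../wt-audit
ls    # audit.ts  reset.ts
```

Doing this by hand for twelve features is exactly the bookkeeping that falls apart when you can't remember which files overlap.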

[Screenshot: OpenWeft CLI dashboard showing one feature completed and another actively running in parallel]

You leave. You come back to this:

[Screenshot: OpenWeft CLI showing run complete with two features planned and merged automatically]

Three features. Zero me squinting at diffs at 1am.

How it knows what not to run together

The conflict detection is basically: before any agent starts, the tool builds a list of which files each feature will probably touch. If two features share files, they don't run at the same time. If they don't share files, they run in parallel. It's heuristic. It doesn't catch everything. But it catches the exact kind of collision that kept ruining my afternoons, which was good enough for me.
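As a rough sketch of that scheduling idea (all names and data shapes here are my own illustration, not OpenWeft's actual internals): each feature carries a predicted set of files, and features get greedily packed into batches whose file sets are mutually disjoint.

```typescript
// Illustrative sketch of file-overlap batching (not OpenWeft's real code).
type Feature = { title: string; files: Set<string> };

// True if the two predicted file sets share at least one file.
function overlaps(a: Set<string>, b: Set<string>): boolean {
  for (const f of a) if (b.has(f)) return true;
  return false;
}

// Greedily pack features into batches of mutually non-overlapping file sets.
// Each batch can run in parallel; batches run one after another.
function planBatches(queue: Feature[]): Feature[][] {
  const batches: Feature[][] = [];
  for (const feat of queue) {
    const fit = batches.find(b => b.every(f => !overlaps(f.files, feat.files)));
    if (fit) fit.push(feat);
    else batches.push([feat]);
  }
  return batches;
}

const plan = planBatches([
  { title: "Add password reset flow", files: new Set(["auth.ts", "mail.ts"]) },
  { title: "Add audit log export", files: new Set(["audit.ts"]) },
  { title: "Refactor auth middleware", files: new Set(["auth.ts"]) },
]);
// Batch 1: password reset + audit export (disjoint files).
// Batch 2: auth refactor (collides with password reset on auth.ts).
console.log(plan.map(b => b.map(f => f.title)));
```

The hard part isn't the batching, it's predicting the file sets, which is why the whole thing stays a heuristic.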

It's beta. It works on my projects. It might do something weird on yours. It needs Node 24+. It only supports Claude Code and Codex for now. If you feed it twelve badly scoped features you'll get twelve badly scoped features done faster. I'm not going to oversell it.

The part I can't figure out

Here's the thing I keep turning over in my head though. I don't know if I actually solved a technical problem or if I just automated around my own skill gap. A real developer probably would have handled the merge conflicts in ten minutes. A real developer probably wouldn't have lost three months to sequential babysitting because they'd have known how to coordinate git worktrees by hand. I built a tool to compensate for the fact that I'm not qualified to do the thing the tool does.

And I genuinely can't tell if that's pathetic or if that's the whole point of building with AI in the first place.

Every founder I've talked to who vibe-codes has the same story. The four-terminal moment. The merge conflict that broke them. The retreat back to sequential. Most of them are still there. Still waiting. Still watching one agent at a time.

I don't know if OpenWeft is the right answer to that. But I know the current situation is wrong.

```shell
npm install -g openweft
```

GitHub: NeuraCerebra-AI/openweft

MIT licensed. Beta. Tell me what breaks.
