Last night at 8:50 PM, I sat down to rebuild a product's backend from scratch. By 11 PM, I had a complete Laravel API with auth, CRUD, real-time WebSockets, and chat. By 1:50 AM, the existing React Native frontend was wired up and running against it locally. Deployed to the cloud the next morning.
That's about two hours for the entire backend. Five hours total including frontend integration.
Here's how multi-agent orchestration, Laravel, and solid documentation made that possible.
What I Was Building
The product is a real-time sports analytics platform. Live play-by-play, game scores, chat, coach dashboards, player profiles, the whole deal. The original product was built on a monolithic Python script with a FastAPI backend. It worked, but it wasn't built to scale.
I decided to tear down the backend and rebuild it on Laravel. The existing React Native frontend just needed to be rewired to the new API. And I needed it done fast, because we're shipping in August.
The Three-Tool Workflow
I ended up using different Claude tools for different parts of the process. I'd done some research beforehand on how to work with Claude effectively, and the workflow that emerged worked well enough that I want to break it down.
I used three:
Claude in the browser for strategic planning. I fed it the CEO's product documentation, the existing codebase context, and our technical requirements. It generated a phased build plan, six phases covering everything from auth to deployment prep. This wasn't vague; it was specific enough to hand directly to Claude Code as prompts.
Claude Code for execution. Each phase of the build plan became a prompt. Claude Code scaffolded the Laravel backend phase by phase: migrations, models, controllers, API resources, broadcasting events, chat with profanity filtering, security hardening. Six phases in about two hours.
Multi-agent orchestration for the messy middle. This is where it got interesting.
18 Agents Fixing Errors Simultaneously
After the backend was built and the frontend was wired up, I had 175 TypeScript errors across 33 files. The old Python API returned flat objects with string IDs. The new Laravel API returned nested resources with integer IDs. Every screen that touched game data, team data, or player data was broken.
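To make the shape change concrete, here's a hedged sketch of the kind of adapter each broken screen needed. The field names (`game_id`, `home_team`, and so on) are illustrative assumptions, not the actual API contract:

```typescript
// Old FastAPI shape: flat object, string IDs (illustrative assumption)
interface LegacyGame {
  game_id: string;
  home_team_name: string;
  away_team_name: string;
  home_score: number;
  away_score: number;
}

// New Laravel API Resource shape: nested objects, integer IDs (also assumed)
interface Team {
  id: number;
  name: string;
}

interface Game {
  id: number;
  home_team: Team;
  away_team: Team;
  home_score: number;
  away_score: number;
}

// One-way adapter so a legacy screen can keep rendering while it's migrated
function toLegacyGame(game: Game): LegacyGame {
  return {
    game_id: String(game.id),
    home_team_name: game.home_team.name,
    away_team_name: game.away_team.name,
    home_score: game.home_score,
    away_score: game.away_score,
  };
}
```

In practice each agent was doing the inverse of this: rewriting screens to consume the nested shape directly rather than keeping an adapter layer around.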
Instead of fixing them one file at a time, I spun up multiple Claude Code agents in parallel. At one point, 18 agents were running simultaneously, each tackling a different batch of errors. One agent handled coach dashboard screens. Another fixed player stat field names. Another rewired the scores screens from flat fields to nested team objects.
They all worked against the same codebase without stepping on each other, because each agent had a scoped task. The errors went from 175 to 0 across eight batches.
That kind of parallelism just isn't possible with a single-threaded workflow, human or AI.
Why Laravel Was the Cheat Code
The original backend was a monolithic Python script. One file doing everything. Translating that into Laravel was almost unfairly easy.
Laravel gives you so much out of the box. Sanctum for API auth with token management. Eloquent for database models with relationships. API Resources for consistent JSON response shaping. Broadcasting with Reverb for WebSockets. Built-in rate limiting, validation, and middleware. Even the profanity filter for chat was straightforward with Laravel's pipeline pattern.
What would've been weeks of custom plumbing in the old stack became `php artisan make:model`, define relationships, write a resource class, done. The CRUD patterns practically wrote themselves, and Claude Code was incredibly effective at generating Laravel code because the framework's conventions are so well-defined.
Six phases. Just over two hours. A complete API with auth, CRUD, game stats, play-by-play engine, real-time broadcasting, chat, and deployment prep.
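Laravel's pipeline pattern, which powered the profanity filter, is essentially a chain of composable stages that each transform the message before handing it on. A rough sketch of the idea in TypeScript terms, with the stages invented for illustration (the real filter's word list and rules aren't shown here):

```typescript
type Stage = (message: string) => string;

// Run a message through an ordered list of stages, like Laravel's Pipeline
function pipeline(stages: Stage[], input: string): string {
  return stages.reduce((msg, stage) => stage(msg), input);
}

// Illustrative stages only
const trimWhitespace: Stage = (msg) => msg.trim();
const maskProfanity: Stage = (msg) =>
  msg.replace(/\b(darn|heck)\b/gi, (word) => "*".repeat(word.length));

const cleaned = pipeline([trimWhitespace, maskProfanity], "  well heck  ");
// cleaned === "well ****"
```

The win is that each stage is a small, testable unit, and adding a new chat rule means appending a stage rather than touching existing code.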
Documentation Made the AI Work
The reason Claude's build plan was so accurate on the first pass? The founder's documentation was thorough.
The founder had detailed specs for every feature. Field definitions for game data. User role descriptions. Chat behavior rules. Stat categories and how they should be grouped. When I fed that into Claude in the browser alongside the existing codebase, the AI had enough context to generate a build plan that actually mapped to reality.
Garbage in, garbage out applies to AI workflows too. If the product documentation had been vague, I'd have spent half my time clarifying requirements instead of building.
The Full Timeline
Here's how the night actually went:
| Time | What Happened |
|---|---|
| 8:50 PM | Created the Laravel project, started Phase 1 |
| 9:32 PM | Auth system complete (Sanctum, login, register, roles) |
| 10:00 PM | Core CRUD, search, dashboard, coach notes |
| 10:15 PM | Game stats engine, play-by-play, data export |
| 10:31 PM | Real-time broadcasting, chat, profanity filter |
| 10:54 PM | Security hardening, error tracking, deployment prep |
| 11:00 PM | Backend done. Six phases in ~2 hours. |
| 11:42 PM | Brought in Claude Code for frontend integration |
| 1:50 AM | Frontend wired to backend, full stack running locally |
| Next morning | Deployed backend to Laravel Cloud, frontend to Vercel |
Push-to-deploy on both platforms. The team could pull up the app on their phones by the time church was over.
What I'd Tell Other Developers
Use AI at every layer, but different AI for each layer. Browser-based Claude is great for planning and strategy. Claude Code is great for execution. Multi-agent orchestration is great for parallel problem-solving. Don't try to do everything in one tool.
Pick a framework with strong conventions. Laravel's opinionated structure made it trivially easy for Claude Code to generate correct, idiomatic code. The more predictable your framework, the more effective AI-assisted development becomes.
Invest in documentation before you start building. The single biggest accelerator wasn't the AI itself; it was the quality of the input. Thorough product docs meant the build plan was right on the first try. No back-and-forth. No "actually, I meant this."
Let the agents scope themselves. When I ran 18 agents simultaneously, Claude Code broke the work into batches on its own, each agent handling a specific set of files and error types. They didn't overlap. I didn't have to manually assign work; I just pointed it at the problem and it parallelized the fix.
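The batching idea itself is simple to sketch: group compiler errors by file, then distribute whole files across N agents so no two agents ever edit the same file. A hedged TypeScript sketch; the error type and the round-robin assignment are my assumptions, not how Claude Code actually partitions work:

```typescript
interface TscError {
  file: string;
  message: string;
}

// Group errors by file, then round-robin whole files across agentCount batches
function batchByFile(errors: TscError[], agentCount: number): TscError[][] {
  const byFile = new Map<string, TscError[]>();
  for (const err of errors) {
    const group = byFile.get(err.file) ?? [];
    group.push(err);
    byFile.set(err.file, group);
  }
  // Each batch owns whole files, so parallel agents never touch the same file
  const batches: TscError[][] = Array.from({ length: agentCount }, () => []);
  let i = 0;
  for (const group of byFile.values()) {
    batches[i % agentCount].push(...group);
    i++;
  }
  return batches.filter((b) => b.length > 0);
}
```

File-level ownership is the key property: it's what lets many agents write to one codebase without merge conflicts.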
Don't be afraid of ambitious timelines. A complete backend rebuild in one night sounds absurd. But with the right tools, the right framework, and solid documentation, it's just a series of well-scoped tasks executed in parallel. The ceiling on what one developer can ship in a session has fundamentally changed.
I'm Tessa Kriesel, CEO of Built for Devs and part-time CTO at a startup. By day I help developer tools with product adoption and go-to-market. By night, apparently, I rebuild backends with 18 AI agents running at once. Find me online if you want to talk developer products or multi-agent orchestration.
Top comments (4)
While I love PHP, it feels like you got a case of refactoritis (not a real disease).
This signals to me that all the code was dropped into main.py, and that is why it didn't scale?
FastAPI provides websockets, authentication, and authorisation, all from a small main.py.
You can add database models.
If the only goal was to make the application scale, you'd have been better off spending your Claude money on restructuring the existing application.
The lessons you learned running the application on whatever server you were using are now lost, because you moved to another hosting platform. And every platform comes with its own set of quirks.
This is a problem AI providers try to hide from you. Changing from one codebase to another is cheap, but in the process you lost the accumulated knowledge.
In the days before AI, making incremental changes was not only a time issue but also a strategic one.
Claude moved you from one monolith to another. I find it a bit weird you called a script monolithic and FastAPI the backend. The script is the frontend of the backend and FastAPI is the backend of the backend?
This wasn't a live app in any capacity, it was only used amongst a group of friends. I had multiple reasons for the Laravel decision, including the team that was hired to do the work knows PHP best. Thank you for your opinions though.
While I don't mind self-promotion, we are all writing to get views and engagement.
This story feels too much like a hype story, and that is why I reacted.
With skills, multi-agent orchestration is as easy as: create an agent skill that checks when errors happen, and when the errors go over ten, it starts creating batches to send to sub-agents to work on in parallel.
You don't need advice from a human when you have access to a thinking AI model that knows best how to create the context it understands.
I didn't care about views, I cared about celebrating this win. I hope you have a blessed day, though!