What does Kilo Speed actually look like in practice? Software engineer John Fawcett shares a firsthand look at how he manages an “Agent Team” to ship features in days, not months.
John shared what went into shipping a complex, data-heavy feature: the AI Adoption Score dashboard. Built for engineering managers and executives, this feature aggregates data into high-level metrics (Frequency, Depth, Coverage) to track AI adoption and offer strategies for improvement.
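The post doesn't spell out how those metrics are computed, but a rough sketch of the aggregation helps make the scope concrete. Everything below is assumed for illustration: the telemetry event shape, the threshold, and the metric definitions are not from the original and are not Kilo's actual scoring formulas.

```typescript
// Illustrative only: assumed event shape and metric definitions,
// not Kilo's real telemetry schema or AI Adoption Score logic.
interface TelemetryEvent {
  userId: string;        // engineer who triggered the agent
  sessionId: string;     // one agent session
  linesChanged: number;  // AI-generated lines accepted in this event
}

interface AdoptionScore {
  frequency: number; // avg. agent sessions per active user in the window
  depth: number;     // share of sessions with a substantial accepted change
  coverage: number;  // fraction of the org's engineers who used AI at all
}

function computeAdoptionScore(
  events: TelemetryEvent[],
  totalEngineers: number,
  substantialChangeThreshold = 20, // assumed cutoff for a "deep" session
): AdoptionScore {
  const sessionsByUser = new Map<string, Set<string>>();
  const linesBySession = new Map<string, number>();

  for (const e of events) {
    if (!sessionsByUser.has(e.userId)) sessionsByUser.set(e.userId, new Set<string>());
    sessionsByUser.get(e.userId)!.add(e.sessionId);
    linesBySession.set(e.sessionId, (linesBySession.get(e.sessionId) ?? 0) + e.linesChanged);
  }

  const activeUsers = sessionsByUser.size;
  const totalSessions = linesBySession.size;
  const deepSessions = Array.from(linesBySession.values())
    .filter((lines) => lines >= substantialChangeThreshold).length;

  return {
    frequency: activeUsers ? totalSessions / activeUsers : 0,
    depth: totalSessions ? deepSessions / totalSessions : 0,
    coverage: totalEngineers ? activeUsers / totalEngineers : 0,
  };
}
```

The real dashboard presumably slices this by team and time window; the point is only that each score reduces a stream of telemetry events to a single comparable number.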
Without an agentic engineering platform, he estimates a project of this scale—requiring front-end work, back-end data aggregation, and complex logic—would have taken two to three people about a month.
With Kilo, it took two days from planning to proof of concept (POC). Here is how he used an agent team model to parallelize the work:
Phase 1: Planning and Internal Model Building
John emphasizes that the most crucial step when working at Kilo Speed is pre-planning. You must understand the solution internally before prompting the agent; otherwise, you risk generating unmaintainable code and introducing unmanageable tech debt.
The discovery document: John first creates a discovery document, a "dumping ground" where he collects random ideas, researches competitor solutions, and defines metrics based on the vague initial PRD.
AI for context, not code: He uses Kilo's chat window to critique the initial PRD, asking the agent to identify holes and to suggest questions he should pose to the PM. He then uses it to generate a technical specification. This is purely for building context and finalizing his internal understanding before writing any code.
Phase 2: Parallelized Execution
Once the plan is clear, John switches from a single-threaded approach to a parallelized agent team model.
Deep work (coding agent): John focuses on the hardest, most novel problems (in the case of the dashboard, the data aggregation and core logic) using a single, local Coding Agent. This is a tight, back-and-forth loop where John guides the agent's output for high-quality, maintainable code.
Background work (cloud agents): Simultaneously, he kicks off multiple Cloud Agents for smaller, self-contained units of work identified in the discovery document. In this example, he launched an agent session to create a PR to the Kilo extension to start tracking organization IDs for the telemetry data he would need (sketched after this list).
Final review (review agent): The finished POC is opened as a pull request and reviewed by the Kilo Code Review Agent, which catches issues before final human approval.
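The organization-ID task handed to a Cloud Agent above is a good example of the kind of small, self-contained change that can run in the background. Here is a minimal, purely hypothetical sketch; the type names and the field are assumptions, not the actual Kilo extension code or the contents of that PR.

```typescript
// Hypothetical sketch of the kind of change: attach an organization ID to
// outgoing telemetry so adoption can later be aggregated per organization.
// None of these names come from the Kilo codebase.
interface TelemetryPayload {
  event: string;
  userId: string;
  organizationId?: string; // newly tracked field
  properties: Record<string, unknown>;
}

function withOrganizationId(
  payload: Omit<TelemetryPayload, "organizationId">,
  organizationId: string | undefined,
): TelemetryPayload {
  // Leave the payload untouched when no organization is configured,
  // so personal or unaffiliated usage keeps its existing shape.
  return organizationId ? { ...payload, organizationId } : payload;
}
```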
The feeling of this speed, John says, is "exhilarating."
3 Things To Delegate Entirely to Kilo
To work at Kilo Speed, delegation is essential. Here are three tasks John happily delegates entirely to his agents:
Writing UI code: Though John enjoys UI work, AI agents are significantly faster and produce a solution that is "good enough" for rapid iteration.
Writing tests: Models like Claude Sonnet 4.5 are highly effective at writing automated tests, freeing John to focus only on the overall testing strategy.
Spinning up new projects and boilerplate: The tedious initial configuration work for new projects is perfectly suited to an agent, which can handle setup and configuration faster and more accurately than a human.
Crucially, when John runs into a roadblock or lacks expertise (e.g., in a specific database query), he always consults Kilo first. But for high-stakes decisions, human collaboration is still essential. John seeks out dissenting human opinions because LLMs, he says, are the "ultimate Yes Man/confirmation bias machines".
Kilo Speed is about making developers managers of their own agent teams and accelerating work by using AI to parallelize every effort. Operating at Kilo Speed is a shift that requires different cultural practices and the tech to support it. That's how you turn what would have been a month-long, multi-person project into a two-day delivery for a single engineer.