
Jonas Eriksson

The real problem with agentic coding is ownership

AI can generate code fast.

That part is no longer the hard thing.

What gets difficult is everything that comes after. Someone still has to understand what was built, review it properly, know what an approval actually means, resume the work later and take responsibility for the result.

That is the problem I kept running into.

The question was never really how to get more code from AI. The real question was how to make agentic coding easier to own.

That question is what led me to build AIM, the Agile Iteration Method.

What usually goes wrong

Without a method, agentic development tends to break in familiar ways.

The agent jumps between theories without proving anything. Scope grows quietly. “Progress” turns into a pile of partial edits instead of something coherent and shippable. After a while, nobody is fully sure what the next approval is supposed to mean.

You can get a lot of output from AI. That does not automatically give you a delivery model.

Better prompts are not enough

A lot of teams still try to solve these problems by writing better prompts.

That helps to a point. It does not solve the deeper problem.

What is missing is ownership. Teams need clear approval points, smaller scopes, resumable state and a delivery loop that stays understandable over time.

That is what I wanted AIM to solve.

The basic idea

With AIM, I mainly describe the problem.

I do not need to spend forever planning. I do not need to create a huge PRD before anything useful can happen.

AIM understands the problem and formulates the one thing we actually need first: an Epic.

From that Epic, AIM creates the next single Done Increment.

That means the work can start almost immediately. Instead of spending too much time planning, we get to something concrete fast and start collecting feedback while it is still useful.

That is the part I care about most.

AIM is built to get into a controlled loop quickly, deliver a meaningful increment and learn from the result.

One explicit loop

AIM works through one clear loop:

PO -> TDO -> Dev -> Reviewer -> TDO -> PO

The Epic is owned by PO. The next single Done Increment is owned by TDO. The implementation stays scoped. Review happens before acceptance. The work is judged on end-to-end user value instead of on random partial progress.
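
To make the handoffs concrete, here is a minimal sketch of how the loop and its approval gates could be represented. The role names come straight from the loop above, but the types, field names and the advance helper are illustrative assumptions for this post, not AIM's actual data model.

```typescript
// Illustrative sketch only: these types and advance() are assumptions,
// not AIM's real implementation.

type Role = "PO" | "TDO" | "Dev" | "Reviewer";

// The loop from the article: PO -> TDO -> Dev -> Reviewer -> TDO -> PO
const LOOP: Role[] = ["PO", "TDO", "Dev", "Reviewer", "TDO", "PO"];

interface Increment {
  epicId: string;       // the Epic the PO owns
  description: string;  // the single Done Increment the TDO scoped
  stage: number;        // index into LOOP: who owns the work right now
  approvals: Role[];    // gates that have been explicitly passed
}

// Advance the increment one step, recording the approval of the role
// that is handing the work off.
function advance(inc: Increment): Increment {
  if (inc.stage >= LOOP.length - 1) {
    return inc; // loop complete: the PO has accepted the increment
  }
  return {
    ...inc,
    approvals: [...inc.approvals, LOOP[inc.stage]],
    stage: inc.stage + 1,
  };
}
```

The point of spelling it out like this is that every approval is an explicit gate with a named owner, not an implicit "looks fine" buried somewhere in a chat log.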

The practical effect is simple. I write less manually and I only need to review a smaller, clearer scope each time.

That is a lot easier than trying to approve one big blurry AI-generated change.

Ownership changes the whole workflow

The value is not just that the process looks neat on paper.

The value is that AI-generated work becomes easier to understand, review, approve, resume and take responsibility for.

That is why I think ownership is missing from so many conversations about agentic coding.

People talk a lot about speed. They talk about automation. They talk about how much code AI can produce.

But if nobody really owns the result, the workflow is weaker than it first appears.

Planning less and learning faster

This is one of the parts I care most about.

A lot of teams still behave as if safety comes from spending a long time planning before work begins.

With agentic coding, that can easily become a trap.

You lose time. You create more text than value. You delay the moment when real users, developers or stakeholders can react to something concrete.

AIM takes a different path.

You describe the problem. PO shapes the Epic. TDO proposes the next Done Increment. Then the work starts.

That does not mean less thinking. It means less time spent producing planning artifacts for their own sake and more time moving toward something reviewable.

The important thing is that feedback starts earlier.

From method to operating model

AIM started as a way to bring more discipline to AI-assisted delivery.

Over time it became more than a prompt pattern.

It now has:

- a repo-local runtime workspace
- durable state for start, resume and gate tracking
- adapter support across different environments
- explicit guidance for scope, review and ownership
- a clearer distinction between approval flow and runtime depth
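
As a rough illustration of what repo-local, durable state can mean in practice, here is what a small state file living in the working tree might look like. The path, field names and values below are hypothetical, sketched only to show the idea of start, resume and gate tracking; they are not AIM's actual schema.

```typescript
// Hypothetical shape of a repo-local state file (path and schema are
// assumptions for illustration, not AIM's real format).

interface RuntimeState {
  epic: string;             // what the PO approved
  currentIncrement: string; // the Done Increment in flight
  gatesPassed: string[];    // approvals already given
  nextGate: string;         // who needs to approve next
  resumeHint: string;       // enough context to pick the work up later
}

const example: RuntimeState = {
  epic: "Users can export their data",
  currentIncrement: "Export a single project as JSON",
  gatesPassed: ["PO: epic approved", "TDO: increment scoped"],
  nextGate: "Reviewer",
  resumeHint: "Dev work in progress on the export branch",
};
```

The design point is that the state survives the session: anyone can pick the work up later and see which gates have already been passed and what is still open.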

That is why I describe AIM as an operating model for agentic software delivery.

I call it an AI operating system in a practical sense. It helps structure, run, resume and govern AI-assisted delivery work.

One lesson that mattered a lot

One thing we learned while evolving AIM is that small scope does not mean touching as few files as possible.

That sounds obvious, but many AI workflows drift toward oversized mixed-responsibility files because they treat “fewer files” as proof of discipline.

It is not.

A small increment is about behavioral scope.

Sometimes the clearest and most reviewable result is a few focused files with clean responsibilities.

You only really notice this when you stop treating AI coding as a prompt game and start treating it as delivery.

Why I’m sharing this

I do not think AIM is the only possible answer.

But I do think teams need something in this area.

If we are serious about agentic software delivery, we need a way to create ownership, keep approvals meaningful, work in scoped increments, preserve runtime state and trust what is happening.

That is what AIM is trying to provide.

If that sounds useful, the project is here:

GitHub: Agile Iteration Method (AIM)

I would genuinely like to hear how other teams are handling ownership, approvals and feedback loops in agentic development.

Top comments (1)

Jonas Eriksson

Author here. Happy to answer questions about how AIM works in practice or how it compares to just writing better prompts. The thing I'm most curious about from this community: how are other teams handling ownership and approvals in agentic workflows? I haven't seen a lot written about that side of it.