Problem
A lot of our tasks started the same way:
- an unclear Jira description
- 20–30 minutes just figuring out what is actually required
- then jumping around the codebase trying to find the right entry point
If the area was unfamiliar, easily 1–2 hours were gone before writing anything.
Or you just asked someone who already knew.
That works, but it doesn't scale.
And this wasn’t rare.
This was pretty normal.
What was wrong
We kept paying for the same thing:
- understanding context
- figuring out patterns
- re-learning how things are done
Nothing complex, just repeated work.
And all of it depended heavily on who already knew that part of the system.
Solution
We didn’t try to “generate code faster”.
We focused on:
understanding + execution as a single flow
Not magic, just structure.
1. Give the agent real system context (agent.md)
A structured entry point (~500 lines):
- architecture and patterns
- conventions (code, testing, logging)
- how to extend the system
- risky areas and common pitfalls
This removes the need to “figure things out from scratch”.
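As a rough sketch, such an entry point might be organized like this. The section names and contents below are illustrative, not the team's actual file:

```markdown
# agent.md (illustrative skeleton)

## Architecture and patterns
Services, boundaries, data flow, which patterns are standard here.

## Conventions
Code style, testing approach, logging format.

## How to extend the system
Where new endpoints/handlers go; which base classes or helpers to use.

## Risky areas and common pitfalls
Modules where changes commonly break things; known gotchas.
```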
2. Build a custom prompt that actually thinks
Not just “generate code”.
The prompt does this in steps:
- Understand the task (type, scope, intent)
- Estimate complexity
- Decide if it should be solved by the agent
- Find relevant parts of the codebase
- Propose implementation options
- Build a step-by-step plan
- Highlight risks
- Generate a verification checklist
If the task is too complex, it splits it into smaller parts.
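The steps above can be sketched in code. This is a hypothetical illustration of the triage logic, not the team's actual prompt: the names (`Plan`, `triage`, `estimate_complexity`) and the crude complexity heuristic are all assumptions.

```python
# Illustrative sketch of the triage steps the prompt walks through.
# All names and thresholds here are assumptions, not from the post.
from dataclasses import dataclass, field


@dataclass
class Plan:
    complexity: str                 # "low" | "medium" | "high"
    use_agent: bool                 # should the agent handle this task?
    steps: list = field(default_factory=list)
    checklist: list = field(default_factory=list)


def estimate_complexity(files_touched: int, unfamiliar: bool) -> str:
    # Crude stand-in for the prompt's complexity estimate.
    if files_touched <= 2 and not unfamiliar:
        return "low"
    if files_touched <= 5:
        return "medium"
    return "high"


def triage(files_touched: int, unfamiliar: bool) -> Plan:
    complexity = estimate_complexity(files_touched, unfamiliar)
    plan = Plan(complexity=complexity,
                use_agent=complexity in ("low", "medium"))
    if complexity == "high":
        # Too big for one pass: split into smaller parts instead.
        plan.steps = ["split task into independent sub-tasks",
                      "re-run triage per part"]
    else:
        plan.steps = ["locate relevant code", "propose options",
                      "implement", "verify"]
        plan.checklist = ["tests pass", "matches repo conventions",
                          "no risky area touched without review"]
    return plan
```

The point is not the heuristic itself but the shape: understand first, estimate, decide, and only then plan or split.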
3. Use the agent where it makes sense
If complexity is low or medium, the prompt explicitly suggests using the agent.
At this point the agent already has:
- system context
- correct patterns
- a clear plan
So it can generate:
- code aligned with the repository
- tests for new functionality
- regression tests for existing behavior
Not perfect, but good enough to remove most of the routine work.
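Assembling the agent's input from those pieces could look like this. A minimal sketch under assumed names (`build_agent_prompt` and its parameters are hypothetical, not the team's setup):

```python
# Illustrative sketch: combining system context + plan into one agent request.
# Function name and structure are assumptions for illustration.
def build_agent_prompt(context: str, task: str, plan_steps: list[str]) -> str:
    # context: the contents of agent.md (architecture, conventions, pitfalls)
    steps = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(plan_steps))
    return (
        f"{context}\n\n"
        f"Task: {task}\n"
        f"Plan:\n{steps}\n\n"
        "Generate code aligned with the repository, plus tests for new "
        "functionality and regression tests for existing behavior."
    )
```

Because the context and the plan arrive together, the agent is not guessing at conventions while it writes code.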
Result
- Tasks estimated at 6–8 hours often take 2–3 hours
- Simple tasks go from ~1 hour to minutes
- Less need to “figure things out” before starting
This works because:
- context is given upfront
- implementation is partially handled
- common mistakes are reduced
But the main change:
- less mental overhead
- faster start
- fewer interruptions
Takeaway
Most teams don’t have a coding problem.
They have a “time-to-understanding” problem.
If every task starts with digging through the system
and re-learning patterns, that’s the real bottleneck.
AI helps when:
- it understands your system
- and is used selectively
I see this a lot:
Teams either don’t use AI,
or try to use it everywhere.
Neither works well.
This pattern shows up in many teams; curious how you approach it.