In Part 1, I walked through the architecture: JIRA webhooks → GitHub → Cursor agent. Today I'm covering the process — the five stages that turn a rough ticket into a merged PR without losing human oversight.
Most teams try "ticket → AI → code" and it breaks. The agent misunderstands the requirement, or devs lose trust after one bad PR. The fix isn't better prompts — it's structured handoffs.
The Five Transitions
We built this with two agent stages and three human review gates. Agents never jump from a vague ticket straight to implementation.
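The two agent stages and three human gates form a small state machine. Here's a minimal sketch of the transition table; the status names match the workflow below, but the code itself is illustrative, not our actual automation:

```python
# Ticket state machine: which actor owns each status, and where a
# ticket is allowed to move next. Status names match the workflow.
TRANSITIONS = {
    "Refinement":       {"actor": "human", "next": ["Agent: Refine"]},
    "Agent: Refine":    {"actor": "agent", "next": ["Plan: Review"]},
    "Plan: Review":     {"actor": "human", "next": ["Agent: Refine", "Agent: Implement"]},
    "Agent: Implement": {"actor": "agent", "next": ["Review"]},
    "Review":           {"actor": "human", "next": ["Agent: Implement", "Test"]},
}

def can_move(current: str, target: str) -> bool:
    """True if the workflow allows moving a ticket from current to target."""
    return target in TRANSITIONS.get(current, {}).get("next", [])
```

Note the shape: every agent stage hands off to a human gate, and every human gate can send the ticket backward for another agent pass.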
1. Refinement (human)
A team member (BA, PO, or dev) writes the initial ticket. It can be rough:
"Add error handling for the payment webhook timeout case."
Status: Refinement. Next step: agent formatting.
2. Agent: Refine
A Cursor agent reads the ticket and generates:
- Acceptance criteria (pass/fail conditions)
- Definition of Done (checklist: tests, docs, deploy)
- How to test (manual or automated outline)
- Implementation plan (file-by-file breakdown, like Cursor's plan mode)
The agent saves all of this to a Confluence page linked from the JIRA ticket. It clarifies what we're building, but doesn't write code yet.
Ticket moves to: Plan: Review.
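The four artifacts above can be scaffolded as a fixed template the agent fills in, which keeps every plan page structurally identical. A hypothetical sketch (the section names come from the list above; the function is illustrative, not from our codebase):

```python
# Sections every plan page must contain, in order.
PLAN_SECTIONS = [
    "Acceptance Criteria",
    "Definition of Done",
    "How to Test",
    "Implementation Plan",
]

def plan_skeleton(ticket_key: str, summary: str) -> str:
    """Render the empty Confluence plan skeleton the refine agent fills in."""
    lines = [f"# {ticket_key}: {summary}", ""]
    for section in PLAN_SECTIONS:
        lines += [f"## {section}", "", "_TBD by agent_", ""]
    return "\n".join(lines)
```

A fixed skeleton also makes the plan review faster: reviewers know exactly where to look for acceptance criteria or the file-by-file breakdown.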
3. Plan: Review (human)
Team reviews the Confluence plan. If something's off — wrong approach, missing edge case, unclear criteria — we comment in JIRA or Confluence.
Then we move the ticket back to Agent: Refine. The agent reads feedback, updates the plan. This loop can run multiple times.
Once approved, ticket moves to: Agent: Implement.
4. Agent: Implement
Agent writes code using the Confluence plan. Runs tests (if configured), opens a pull request.
PR links to the JIRA ticket and Confluence plan. Reviewers see the requirement, approach, and code changes — all connected.
If the PR needs changes, devs comment in GitHub and move the ticket back to Agent: Implement. Agent reads PR feedback, updates code, pushes a new commit.
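Feeding PR feedback back to the agent can be as simple as concatenating the open review comments into the re-run prompt. A sketch with a simplified comment shape (real GitHub review comments also carry file path and line via the API, which is what makes them so useful here):

```python
def feedback_prompt(comments: list[dict]) -> str:
    """Turn PR review comments into a re-run prompt for the implement agent."""
    header = "Address the following PR review feedback, then push a new commit:\n"
    items = [
        f"- {c['path']}:{c.get('line', '?')}: {c['body']}"
        for c in comments
    ]
    return header + "\n".join(items)
```

Because each comment is anchored to a file and line, the agent gets targeted, structured feedback rather than a vague "fix the PR".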
5. Review (human)
Standard code review. If it's good, merge. If not, send back to step 4 with feedback.
After merge, ticket moves to Test (outside this workflow), then Done.
Why This Works
Agents work from approved plans. They don't guess. When they get it wrong, they iterate based on structured feedback.
Humans review before code is written. Step 3 (plan review) catches bad approaches early. A 10-minute review saves hours of rework.
Feedback loops are cheap. Sending a ticket back to "Agent: Refine" or "Agent: Implement" takes minutes. Agent re-runs with context. No senior dev escalation needed.
Trust builds gradually. Start with small tickets. Expand to complex work as the team gains confidence.
What We Learned
- Rovo didn't cut it. Atlassian's AI tooling was unusable for our workflow. Cursor agents + GitHub gave us the control we needed.
- Plan review is not optional. Skipping step 3 always backfires. It's the cheapest gate and the highest ROI.
- PR comments > ticket comments for implementation feedback. Devs already write PR comments. The agent reads them natively. No translation layer.
- Confluence as the plan artifact is key. JIRA description fields are too limited. Confluence gives us version history, inline comments, and space for a real implementation roadmap.
What's Next
In Part 3, I'll dive into the Agent: Implement stage — how we configure the Cursor agent, the repo structure (rules, skills, agents), and how it generates PRs that don't need heavy rewrites.
For now, if you're automating dev work with AI: Don't let agents write code until you've reviewed their plan. That one gate will save you more debugging time than any other optimization.