DEV Community

Christopher Groß

Originally published at grossbyte.io

My Agent Workflow – From Idea to Deployment in Minutes

A Bug, a Prompt, Done

The other day I noticed that blog posts weren't showing up in my sitemap. Annoying, especially for SEO.

In the past I'd have opened the sitemap module, read through the config, found the fix, tested, committed, deployed. Maybe 30 minutes, maybe an hour.

Instead I typed one sentence into my terminal:

"Blog posts are not included in the sitemap."

Five minutes later, the fix was live.


The Setup: Three Agents, Clear Roles

I use a multi-agent setup in Claude Code. Three agents, clear responsibilities:

  • Lead Agent – Plans, creates tickets, coordinates
  • Builder Agent – Implements the code
  • Tester Agent – Runs real browser tests with screenshots

The workflow is always the same: I give the lead agent a task – sometimes a sentence, sometimes a paragraph. It analyzes the problem, searches the codebase, creates a YouTrack ticket with a detailed description and proposed solution. I glance at it, say "go" – and the rest happens.

The lead delegates to the builder, which writes the code following my coding rules from CLAUDE.md. Then the tester spins up a real browser, loads the page, checks if the problem is solved. Only when everything is green does the lead come back to me: ready for review.
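To make the role split concrete, here is a minimal sketch of what a builder-agent definition could look like. The file location and frontmatter fields follow Claude Code's subagent convention; the specific instructions are illustrative, not my actual file:

```markdown
<!-- .claude/agents/builder.md -->
---
name: builder
description: Implements the code changes described in a ticket from the lead agent.
tools: Read, Edit, Bash
---

You are the builder agent. Implement exactly the scope described in the
ticket, following the coding rules in CLAUDE.md. Do not create tickets and
do not run browser tests yourself – hand finished work back to the lead.
```

The important part is the last paragraph: explicit boundaries are what keep three agents from stepping on each other.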


A More Impressive Example

The sitemap fix was trivial. But the workflow really shines on bigger tasks.

When I wanted full WCAG accessibility for my website, I essentially told the lead agent:

"The website needs WCAG-compliant accessibility."

What happened:

  1. The lead analyzed the scope – every page, every component, every form
  2. It created a ticket with a prioritized list of all necessary changes
  3. The builder systematically worked through every component – ARIA labels, focus management, contrast, skip links, focus traps
  4. The tester took screenshots after each iteration and verified accessibility
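The changes in step 3 are mostly mechanical and rule-driven, which is exactly where agents shine. Two typical examples, sketched in illustrative markup (not taken from the actual site):

```html
<!-- Skip link: first focusable element, visually hidden until focused -->
<a href="#main" class="skip-link">Skip to main content</a>

<!-- Icon-only buttons need an accessible name -->
<button type="button" aria-label="Open menu">
  <svg aria-hidden="true" focusable="false" width="24" height="24">
    <use href="#icon-menu" />
  </svg>
</button>

<!-- tabindex="-1" lets the skip link move focus into the main region -->
<main id="main" tabindex="-1">…</main>
```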

Result: WCAG 2.1 Level AA in under 2 hours. Estimated manual effort: 2–3 days.

Not because the AI is magic, but because the workflow eliminates context switching and parallelizes the repetitive parts.


What I Actually Love About It

No context loss. I describe the problem once – the context is preserved from ticket through implementation to testing.

Documentation happens automatically. Every change is documented as a YouTrack ticket with description, proposed solution, and screenshots. No writing tickets after the fact.

Small fixes actually get done. You know how you spot a typo or a small visual bug and think "I'll fix it later"? With this workflow, the barrier is so low I just do it immediately.


The Part Nobody Wants to Hear

This is not "AI does everything, I sit back."

In roughly every third or fourth task, I intervene. Sometimes the AI interprets a requirement differently. Sometimes the code is technically correct but stylistically off. Sometimes it's faster to change three lines myself than explain what I want.

That's fine. The workflow doesn't save 100% of my work – it saves 70–80%. The remaining 20–30% is where human judgment actually matters.

I review every single change. Every one. Not because I don't trust the AI, but because it's my code running in production. Read the diff, check the logic, manually test critical paths. That usually takes 2–5 minutes per change – and those minutes are non-negotiable.


Concrete Time Savings (Real Numbers)

Task                            | With agents | Manual estimate
--------------------------------|-------------|----------------
Sitemap bug                     | 5 minutes   | 30–60 minutes
Full WCAG accessibility         | ~2 hours    | 2–3 days
i18n text adjustments (DE + EN) | 3 minutes   | ~20 minutes
New blog section                | 1 day       | ~1 week

The leverage is greatest for tasks that touch many files, follow clear rules, and are repetitive. It's smallest for creative work, complex business logic, and architecture – that's still on me.


What You Need to Get Started

Three things:

  1. A good CLAUDE.md – Your project spec. Design system, coding rules, project structure. The better this file, the better the results.
  2. Clear agent definitions – Each agent has a role, tools, and boundaries. This prevents chaos.
  3. A place for tasks – Doesn't have to be a ticket system. Markdown files in your repo work fine. But personally, I prefer YouTrack – better history, easy referencing, agents comment directly in tickets.
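To make point 1 concrete, here is a trimmed, illustrative sketch of what a CLAUDE.md can contain. Section names and rules are examples, not my actual file:

```markdown
# CLAUDE.md

## Project
Next.js app. Content lives in /content, components in /src/components.

## Coding rules
- TypeScript strict mode; no `any`.
- Styling via design-system tokens only – no raw hex values.
- Every user-facing string goes through i18n (DE + EN).

## Workflow
- Every change references a YouTrack ticket.
- Run lint and tests before marking work as done.
```

The rules don't need to be exhaustive. They need to be unambiguous – anything the file leaves open, the agents will decide for you.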

The setup takes a few hours. The time savings afterward are multiples of that.


The Reality Behind the Hype

AI agents aren't autopilot. They're more like a very fast, very patient junior developer who never gets tired and follows your specs exactly – as long as you state them clearly.

The workflow works because I stay in control. I decide what gets built. I review what was built. I intervene when necessary. The agents accelerate execution – but the responsibility stays with me.

AI agents don't replace developers. They replace the parts of the work that keep developers from focusing on what actually matters.



Curious how the agent setup looks in detail? Happy to write a follow-up on the CLAUDE.md structure and agent definitions if there's interest.
