Mitko Tschimev

Why Atlassian Rovo Failed Us (and What We Built Instead)

The Problem with AI Task Automation

Atlassian Rovo promised seamless AI-driven task automation: understand JIRA tickets, connect to your codebase, ship features faster. For engineering teams, this sounded like the answer to constant context-switching and manual ticket translation.

We tried it. It didn't work.

Rovo is polished in demos but struggles in production. It can't handle nuanced tickets, doesn't understand team conventions, and needs constant supervision. For a tool marketed as "AI automation," it felt like another integration to babysit.

What We Built Instead

After weeks of frustration, I stopped waiting for enterprise tools and built this:

JIRA automation → GitHub webhook → GitHub runner → Cursor agent (with full repo context)

The Flow

  1. JIRA fires a webhook when a ticket hits "Ready for Dev" or gets updated with specs
  2. GitHub receives it and triggers a custom runner
  3. Cursor agent (pre-configured with rules, skills, and context) connects to the repo
  4. Agent reads the ticket, understands the codebase, and ships a PR

No manual handoff. No "AI tried but got confused." Just working automation.

Where the Real Magic Happens

The webhook setup is straightforward. The breakthrough is the repo design.

Cursor and Claude are only as good as the context you provide. Rovo fails because it tries to be everything. Our Cursor agent succeeds because the repo is designed for AI collaboration:

1. Cursor Rules (.cursorrules)

We define:

  • Coding standards
  • Naming conventions
  • Testing requirements
  • Architectural patterns

When the agent writes code, it already knows:

  • What our API responses look like
  • How we structure components
  • Commit message format
  • Which libraries to use (and avoid)
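To make that concrete, a condensed `.cursorrules` might look like this — the specific standards below are illustrative placeholders, not our actual rules:

```text
# Coding standards
- TypeScript strict mode; no `any` without an inline justification comment
- All API responses use the envelope { data, error, meta }

# Naming conventions
- React components: PascalCase files, one component per file
- Branch names: feat/PROJ-123-short-description

# Testing requirements
- Every new endpoint ships with an integration test
- Run the test suite before proposing a commit

# Commit messages
- Conventional Commits (feat(scope): message), referencing the JIRA key
```

The point isn't the individual rules — it's that the agent reads them on every run, so conventions are enforced by context rather than by review comments.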

2. Skills Directory (skills/)

Domain knowledge documentation:

  • Common patterns (auth flows, error handling)
  • Edge cases we've solved
  • Integration quirks (third-party APIs, legacy systems)

The agent references this before touching code—it's not guessing, it's using institutional knowledge.
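In practice that's a directory of short markdown files the agent can pull into context. A sketch of the layout (filenames are hypothetical examples):

```text
skills/
├── auth-flows.md          # OAuth + session handling patterns we follow
├── error-handling.md      # error envelope, retry, and logging conventions
├── payment-provider.md    # quirks of the third-party billing API
└── legacy-reporting.md    # constraints when touching the legacy reporting service
```

Each file answers one question a new engineer would otherwise ask in Slack — which is exactly the knowledge an agent lacks.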

3. Agent Context (ADRs + Architecture)

We include:

  • Architecture decision records
  • Service boundaries
  • Deployment constraints
  • Performance considerations

When evaluating a JIRA ticket, the agent understands why our system is shaped the way it is—not just what the code does.
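An ADR gives the agent the "why" in a few lines. A hypothetical example of the kind of record we keep (the content here is invented for illustration):

```text
# ADR-014: Split billing into its own service

Status: Accepted

Context: Billing load spikes at month end were degrading the main API.

Decision: Extract billing behind an async queue; the main API never
calls the billing service synchronously.

Consequences: Tickets touching invoicing must go through the queue
contract -- the agent should not add direct HTTP calls to billing.
```

Without this, an agent implementing a billing ticket would happily add a synchronous call and pass review on syntax alone.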

Why This Beats Enterprise Tools

Rovo and similar tools try to be everything to everyone. They promise "AI that understands your business" but deliver:

  • Shallow codebase context
  • Generic responses that miss team conventions
  • Product demo polish, not production depth

Our approach inverts the problem: we shaped our repo to work with AI instead of waiting for vendors to catch up.

The result? Cursor agents that:

✅ Understand our architecture from day one

✅ Write code that passes review without major rewrites

✅ Learn from documented patterns instead of reinventing solutions

Key Takeaways

  1. Enterprise AI tools optimize for demos, not production complexity
  2. The breakthrough is repo design, not webhook plumbing
  3. Context (rules + skills + architecture) makes AI useful
  4. Build the glue yourself—don't wait for vendors

What's Next

This is Part 1 of a series. Coming up:

  • Part 2: The JIRA → GitHub webhook architecture (setup, failures, wins)
  • Part 3: GitHub runner + Cursor agent config (rules, skills, agent setup)
  • Part 4: Results, trade-offs, and iterations

If you're building AI automation, the lesson is simple: design your systems to work with AI, not against it.


Have you tried Rovo or built your own automation? What worked (or didn't)? Drop a comment—I'd love to hear what other teams are doing.
