Vitalii Oborskyi

AI Integration and the Traceability Gap: Atlassian vs. Competitors

Introduction

Artificial intelligence is everywhere now. Open almost any project management tool — Jira, Asana, Monday.com, GitLab, Azure DevOps — and you’ll see AI badges on nearly every feature. We get auto-generated summaries, smart assistants that draft test cases, bots offering suggestions around the clock. It all sounds like the future is finally here.

But step away from the product pages and into real delivery work, and a stubborn problem remains: traceability. Can you actually follow a single requirement — from user story, through code and tests, all the way to business value? Does AI really help, or does it just make the local pieces shinier, leaving the big picture as tangled as ever?

This isn’t a lone complaint. Earlier this year, I published an open letter to Atlassian, which sparked a lively discussion among professionals in my network. More importantly, the demand for native, end-to-end traceability in the Atlassian ecosystem has been voiced consistently within the community since at least 2009; I referenced this long-standing need in my one-pager as well. The truth is, while the industry talks about AI driving projects forward, without genuine traceability we’re still just automating isolated steps — not the full journey. [Open letter: https://www.linkedin.com/pulse/traceability-atlassian-missing-ai-enabler-open-letter-oborskyi-z3v4f/].

That’s what pushed me to dig deeper. Here’s my hypothesis: Traceability is the real unlock for modern delivery frameworks in the age of AI. The more visibility and connection AI gets across the software lifecycle — requirements, code, tests, and impact — the more it can do for real teams. Without traceability, every AI feature is still a bit of a sideshow. With it, AI might finally shift from “nice-to-have” to the engine room of project delivery.

So, what’s in this article? I’m cutting through the marketing and diving into hands-on experience, new product releases, and what people are saying in the trenches. We’ll look at questions like:

Are Atlassian and its main competitors really integrating AI into their delivery tools — or just relabeling automation?
Does any platform actually provide traceability from end to end? Or is that just another buzzword?
And most importantly, why are even the biggest vendors struggling to bridge this gap — and what could change the game?
All claims are linked, so you can check sources as you go. Let’s get into it.

Why Traceability Is the Real Enabler for AI in Delivery

Let’s be honest: most “AI features” in today’s project tools are clever assistants, not full-scale transformers. They’ll write a ticket here, summarize a comment there, maybe help with test coverage. But all this is just local optimization. It’s not close to changing how entire delivery chains operate.

Here’s the real issue: for AI to make an impact, it needs the full context — the whole story. Not just a backlog or a code repo, not just isolated test cases, but a complete map of how business goals break down into requirements, flow into code, get tested, and eventually bring real value. That’s end-to-end traceability.

Think of it like this: if AI is the brain, then traceability is the nervous system. No matter how advanced the brain, if the nerves don’t connect the organs, you get twitching muscles, not coordinated movement. Most delivery frameworks today are packed with “muscles” — smart bots and helpers for every little job — but hardly any “nerves” connecting it all together.

This is why traceability isn’t just another checkbox or dashboard. It’s the underlying structure that lets AI connect cause and effect, understand impact, predict risks, and actually drive system-wide improvement.

Consider the numbers. According to Gartner, by 2030, 80% of today’s project management tasks could be handled by AI, as machines take over routine work like data collection, reporting, and tracking [Gartner: https://www.gartner.com/en/newsroom/press-releases/2019-03-20-gartner-says-80-percent-of-today-s-project-management].

Supporting this, the Project Management Institute (PMI) found that organizations using AI deliver 61% of projects on time (versus 47% without AI) and achieve business benefits in 69% of cases (versus 53% without AI) [PMI Pulse: https://www.pmi.org/-/media/pmi/documents/public/pdf/learning/thought-leadership/pulse/ai-innovators-cracking-the-code-project-performance.pdf?rev=acf03326778f4e64925e70c1149f37ea&sc_lang_temp=en].

But these gains aren’t automatic. Industry analysts at Epicflow note that the benefits from AI become real only when project data is connected, structured, and traceable. AI project management expert Paul Boudreau says it plainly in his Epicflow interview: “It’s all about having project data in a form that is accessible, accurate, and connected. AI can provide value only when it has good data to work with. If the data is incomplete, inconsistent, or siloed, you can’t expect to get good results from your AI tools.” [Epicflow/Paul Boudreau: https://www.epicflow.com/blog/ai-in-project-management-is-the-future-already-here/].

In short:

Without traceability, AI stays shallow — a bunch of disconnected features that can’t see the big picture.
With traceability, AI finally has the context to become a real co-pilot, able to guide, predict, and optimize across the whole delivery chain.

Atlassian’s Competitive Landscape

Let’s be real: for most software teams, Atlassian is still the gold standard. Jira, Confluence, Bitbucket, Trello — for years, these tools have formed the backbone of delivery. If you’ve worked in tech, you’ve probably lived inside Jira’s dashboards and plugin menus more than you’d care to admit.

There are plenty of upsides:

Atlassian pours resources into automation, integrations, and custom workflows.
The Atlassian Marketplace is massive. There’s a plugin for almost anything: advanced reporting, custom traceability graphs, process automations, you name it [Atlassian Marketplace: https://marketplace.atlassian.com/].
But here’s the catch: despite all this, true end-to-end traceability — following exactly how a requirement turns into code, tests, and working features — is still not a native capability.

In most real-world Jira setups, traceability means one of three things:

Manual linking — connecting issues and epics by hand.
Plugins — each with their own quirks, learning curves, and costs.
Automations — cobbled together and fragile: one change in the workflow, and you might be debugging broken links for days.
And when it comes to AI, the focus isn’t on traceability. As of 2025, “AI in Jira” means smart field suggestions, ticket summaries, duplicate detection, and a chatbot for search — not an engine that connects delivery across the SDLC [Valiantys: https://valiantys.com/en/blog/agility/understanding-ai-implementation-on-the-atlassian-platform-in-2025/] [Atlassian AI announcement: https://www.atlassian.com/blog/announcements/atlassian-intelligence-ai].

Real-life example: Talk to any delivery manager or Jira admin and you’ll hear a familiar story. Teams spend weeks wiring up custom issue links, setting up add-ons, and building automations — all to keep requirements, code, and test coverage aligned, especially in regulated or complex projects. Sometimes it works… for a while. But as soon as the team structure shifts or processes change, automations break, plugins demand updates, and reporting falls apart. The usual support answer? “Try this new plugin — and don’t forget to update your automation rules.”

It’s a cycle that anyone managing traceability in Jira knows all too well.

Atlassian gives you the Lego bricks — but if you want true traceability (requirements, code, tests, and business value connected), expect to spend serious time and effort piecing it all together.
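
To make that concrete, here’s a minimal sketch of the kind of script Jira teams end up writing by hand: it walks issue links outward from a requirement via the Jira Cloud REST API and prints the chain it finds. The base URL, the credentials, and the REQ-101 key are placeholders, and a real setup would also have to deal with link-type semantics, cross-project permissions, and rate limits.

```python
# A hand-rolled traceability walk over Jira issue links (Jira Cloud REST API v3).
# JIRA_BASE, AUTH, and the starting key "REQ-101" are placeholders for illustration.
import requests

JIRA_BASE = "https://your-domain.atlassian.net"   # placeholder instance
AUTH = ("you@example.com", "API_TOKEN")           # placeholder email + API token

def fetch_issue(key):
    """Fetch a single issue with its summary and issue links."""
    resp = requests.get(
        f"{JIRA_BASE}/rest/api/3/issue/{key}",
        params={"fields": "summary,issuelinks"},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def walk_links(key, depth=0, seen=None):
    """Depth-first walk over issue links: a crude requirement-to-work-item trail."""
    if seen is None:
        seen = set()
    if key in seen:
        return
    seen.add(key)
    issue = fetch_issue(key)
    print("  " * depth + f"{key}: {issue['fields']['summary']}")
    for link in issue["fields"].get("issuelinks", []):
        target = link.get("outwardIssue") or link.get("inwardIssue")
        if target:
            walk_links(target["key"], depth + 1, seen)

walk_links("REQ-101")  # hypothetical requirement key
```

Scripts like this work until the link conventions change, which is exactly the maintenance cycle described above.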

Question for readers: How are you solving traceability in Jira? Have you found a way to truly connect requirements, code, and tests in a way that helps AI? Or is your setup still a patchwork of links, plugins, and scripts?

Monday.com: “AI Vision,” But What About Traceability?

Monday.com loves to show off its “AI Vision,” promising no-code automations and seamless team collaboration [Monday AI: https://monday.com/w/ai]. The marketing is bold: any workflow, any project, supercharged by AI.

Here’s what Monday.com AI actually does as of mid-2025:

AI Assistant: Summarizes tasks and updates, drafts and rewrites descriptions, helps with emails, meeting recaps, and proposals.
AI Formulas: Autofills board fields, creates formulas and calculations from natural language.
AI Insights: Summarizes long threads, suggests action items, spots duplicates.
AI Search & Workflow: Smarter search across boards, and AI steps right into automations.
It’s a real boost for everyday task management and communications.

But let’s talk traceability. What’s missing?

There’s no built-in end-to-end traceability for software delivery. You can’t natively link requirements, code commits, test cases, and releases in a connected flow.
AI does not analyze code changes, test coverage, or pull requests.
No deep, automatic integration with tools like Jira, GitHub, or TestRail to build a true traceability matrix.
Any attempt at deep integration or artifact linkage? Still a manual job — expect scripts, connectors, or third-party services.
What happens in real teams? Monday.com excels for planning, updates, and automating simple workflows. But if your software team needs to track requirements, code, and tests in sync — the kind of traceability needed for compliance or audits — you’ll end up building your own bridges and maintaining them yourself.

The bottom line: If your priority is fast collaboration and automating routine work, Monday AI delivers. If you’re after end-to-end traceability for serious delivery, prepare for extra setup and a lot of manual maintenance.
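
For illustration, here’s a rough sketch of one of those hand-built bridges, assuming monday.com’s GraphQL API and its create_update mutation: push a Git commit reference onto an item as an update. The token, the item ID, and the idea of mapping commits to items are my own assumptions, and monday’s API versions evolve, so treat this as a shape rather than a recipe.

```python
# Post a commit reference onto a monday.com item as an update (GraphQL create_update).
# MONDAY_TOKEN and the item ID are placeholders; the "which item does this commit
# belong to" mapping is assumed to come from your own convention or CI metadata.
import json
import requests

MONDAY_API = "https://api.monday.com/v2"
MONDAY_TOKEN = "YOUR_API_TOKEN"  # placeholder personal token

def post_commit_update(item_id, commit_sha, commit_msg):
    """Attach a traceability note (commit sha + first line of the message) to an item."""
    body_text = f"Commit {commit_sha[:8]}: {commit_msg.splitlines()[0]}"
    query = (
        "mutation { create_update("
        f"item_id: {item_id}, body: {json.dumps(body_text)}"
        ") { id } }"
    )
    resp = requests.post(
        MONDAY_API,
        headers={"Authorization": MONDAY_TOKEN, "Content-Type": "application/json"},
        json={"query": query},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

post_commit_update(1234567890, "a1b2c3d4e5f6", "Fix checkout rounding bug")  # placeholders
```

It works, but every link it creates is something your team now owns and maintains.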

Asana: Clarity and Automation, But Not End-to-End Traceability

Asana is a crowd favorite for clear UI, task ownership, and easy project visuals [Asana Product Overview: https://asana.com/product]. It’s everywhere — from marketing teams to product squads — and increasingly pops up in tech as a lightweight hub.

So, what does Asana AI really bring in 2025?

Smart summaries: Turn long updates into quick highlights.
AI-generated status: Draft and polish progress reports.
Task automation: Suggests next steps, sets up recurring tasks, keeps work moving.
AI search & insights: Finds info fast and helps sort priorities [Asana AI features: https://asana.com/ai].
For team coordination and reporting, it just works.

But here’s the catch — and it’s a big one for tech delivery:

No built-in SDLC traceability. There’s no native way to connect requirements to code changes, test results, or releases.
AI doesn’t watch code commits, test runs, or tie deep into dev tools.
Integrations with GitHub, Jira, and similar? They mostly sync status; they don’t build a traceability matrix.
For full delivery traceability, most teams fall back on spreadsheets, custom scripts, or lots of manual updates.
How does this play out? Asana shines for planning, tasks, and basic reporting. But when you need to follow a business requirement all the way to code and tests, you’re forced to patch together other tools — and rely on people to keep the links alive.

Community tip: Some teams juggle Asana for planning and Jira or GitHub for development and testing. It’s doable, but only works if you’re willing to constantly maintain the connections and processes yourself.
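
As a sketch of what “keeping the connections alive yourself” usually involves, here’s a hypothetical glue script: it scans recent GitHub commits for Asana task IDs (using an invented asana:<gid> convention in commit messages) and posts each match back to the task as a comment via Asana’s stories endpoint. The tokens, the repository, and the naming convention are all placeholder assumptions.

```python
# Hypothetical glue between GitHub commits and Asana tasks: commits that mention
# "asana:<task_gid>" get posted back to that task as a comment (an Asana "story").
# Tokens, the repo, and the commit-message convention are placeholders.
import re
import requests

GITHUB_REPO = "your-org/your-repo"   # placeholder
GITHUB_TOKEN = "ghp_xxx"             # placeholder
ASANA_TOKEN = "asana_pat_xxx"        # placeholder personal access token

def recent_commits():
    resp = requests.get(
        f"https://api.github.com/repos/{GITHUB_REPO}/commits",
        headers={"Authorization": f"Bearer {GITHUB_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def comment_on_task(task_gid, text):
    resp = requests.post(
        f"https://app.asana.com/api/1.0/tasks/{task_gid}/stories",
        headers={"Authorization": f"Bearer {ASANA_TOKEN}"},
        json={"data": {"text": text}},
        timeout=30,
    )
    resp.raise_for_status()

for commit in recent_commits():
    message = commit["commit"]["message"]
    for task_gid in re.findall(r"asana:(\d+)", message):
        comment_on_task(task_gid, f"Linked commit {commit['sha'][:8]}: {message.splitlines()[0]}")
```

That’s a status bridge, not a traceability matrix: it records that a commit mentioned a task, nothing more.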

GitLab: Deep DevOps, AI Everywhere — But Traceability Still Takes Work

GitLab brands itself as the “DevSecOps platform” — one place for code, CI/CD, security, and deployment [GitLab Product Overview: https://about.gitlab.com/solutions/devops-platform/]. In recent years, GitLab has rapidly layered on AI features across the pipeline, with the ambition to turn DevOps into a truly “smart” experience.

What can GitLab AI actually do in 2025?

Code suggestions: Offers completions and refactoring right in the web IDE [GitLab Duo: https://about.gitlab.com/gitlab-duo/].
AI summaries: Instantly condenses long discussions, MR comments, and issue threads.
Test coverage insights: Helps spot gaps in test coverage, flags untested code.
Vulnerability detection: Surfaces security issues earlier in the process.
But here’s where the magic stops:

End-to-end traceability isn’t automatic. Yes, you can manually link issues, commits, and merge requests — but mapping the full path from business requirement to code and tests is still a DIY project.
Most traceability relies on naming conventions, custom tags, or team discipline, not system intelligence.
There’s no “traceability matrix” that automatically connects requirements, user stories, code, tests, and releases out of the box.
For business-level traceability, most teams end up building their own integrations or scripts to sync requirements from Jira, Confluence, or other external systems.
What does this look like in practice? GitLab shines for engineers who want everything under one roof — code, pipelines, collaboration, with AI features saving time along the way. But if you need a verifiable trail from business request to deployed feature (or you’re facing compliance and audit requirements), you’ll still be piecing things together with templates, scripts, or external tools.
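
Here’s a minimal sketch of that stitching, assuming the GitLab REST API: for one issue, list its related merge requests and each MR’s pipeline status to approximate an issue-to-code-to-test trail. The project ID, the issue IID, and the token are placeholders, and note that this still says nothing about the business requirement the issue came from.

```python
# Approximate an issue -> merge request -> pipeline trail with the GitLab REST API.
# PROJECT_ID, ISSUE_IID, and the token are placeholders for illustration.
import requests

GITLAB = "https://gitlab.com/api/v4"
HEADERS = {"PRIVATE-TOKEN": "glpat-xxx"}   # placeholder token
PROJECT_ID = 12345                         # placeholder
ISSUE_IID = 42                             # placeholder

def get(path):
    resp = requests.get(f"{GITLAB}{path}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Merge requests that reference this issue.
mrs = get(f"/projects/{PROJECT_ID}/issues/{ISSUE_IID}/related_merge_requests")
for mr in mrs:
    # Take the first returned pipeline as a rough proxy for the latest test run.
    pipelines = get(f"/projects/{PROJECT_ID}/merge_requests/{mr['iid']}/pipelines")
    status = pipelines[0]["status"] if pipelines else "no pipeline"
    print(f"Issue #{ISSUE_IID} -> MR !{mr['iid']} ({mr['state']}) -> pipeline: {status}")
```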

Heads-up for delivery leads and PMs: If traceability is mission-critical (compliance, security, regulated environments), invest early in process design and integration work. GitLab is powerful, but as a “single source of truth” for requirements-to-code-to-test, it isn’t there yet out of the box.

Azure DevOps: Enterprise Integration, Smart Automation — But Still No Native Traceability Chain

Azure DevOps is Microsoft’s all-in-one platform for source control, build pipelines, test management, and release workflows [Azure DevOps overview: https://azure.microsoft.com/en-us/products/devops/]. It’s popular with enterprises for a reason: seamless integration with Microsoft’s ecosystem, flexible processes, and robust security and permissions.

Here’s what Azure DevOps (and its AI features) can do in 2025:

AI-powered code suggestions: Code completion and pull request reviews (often via GitHub Copilot).
Automated work item creation: Turn customer feedback or incidents into backlog items with built-in automation.
Integrated test management: Plan, run, and track test results alongside code and builds.
Dashboards & analytics: Customizable dashboards and anomaly detection, using AI to surface key insights.
For teams deeply invested in Microsoft, it’s a comfortable hub for the whole DevOps pipeline.

But what about traceability?

No out-of-the-box end-to-end traceability. You can link work items (requirements, user stories) to commits, pull requests, builds, and tests — but it’s a manual process.
There’s no automatic “traceability matrix” connecting requirements to code to tests to deployment — everything relies on team discipline and custom process.
Full traceability often requires extra tools, custom Power BI dashboards, or plugins layered on top of Azure DevOps.
AI features mostly focus on code and workflow automation, not on mapping and validating the entire business-to-code chain.
How does this play out in real life? Azure DevOps is great for organizations that want flexible workflows and integration with the rest of Microsoft’s stack. But for regulated industries or anyone facing complex audit requirements, traceability is still a build-it-yourself experience, not something you can just “switch on.”
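
As an example of that build-it-yourself work, here’s a small sketch against the Azure DevOps REST API: fetch one work item with $expand=relations and print whatever links it carries (child items, commits, pull requests). The organization, project, PAT, and work item ID are placeholders; turning this flat list into a genuine requirements-to-test matrix is exactly the custom effort described above.

```python
# Dump the links hanging off one Azure DevOps work item ($expand=relations).
# ORG, PROJECT, PAT, and WORK_ITEM_ID are placeholders for illustration.
import requests

ORG, PROJECT = "your-org", "your-project"   # placeholders
PAT = "azdo_pat_xxx"                        # placeholder personal access token
WORK_ITEM_ID = 1234                         # placeholder

url = (
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/workitems/{WORK_ITEM_ID}"
    "?$expand=relations&api-version=7.0"
)
resp = requests.get(url, auth=("", PAT), timeout=30)   # basic auth: empty user + PAT
resp.raise_for_status()
item = resp.json()

print(item["fields"]["System.Title"])
for rel in item.get("relations", []):
    # Hierarchy links (parent/child) and artifact links (commits, PRs, builds) both
    # show up here; the human-readable name lives in the attributes when present.
    name = rel.get("attributes", {}).get("name", rel["rel"])
    print(f"  -> {name}: {rel['url']}")
```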

Insider tip: Many enterprise teams use Azure DevOps in tandem with specialized requirements management tools, third-party traceability plugins, or custom scripts. If audit trails matter, plan ahead — the platform gives you flexibility, but real traceability will take extra work.

Other Notable Platforms: The AI Hype, The Traceability Gap

Atlassian, Monday.com, Asana, GitLab, and Azure DevOps might dominate the headlines, but plenty of other tools are adding “AI” to their product pitches.

Notion — well-known for its flexibility with wikis and documentation — now boasts Notion AI, a writing assistant that drafts content, summarizes notes, and answers questions about workspace pages [Notion AI: https://www.notion.so/product/ai]. ClickUp, an all-in-one work hub, introduced an AI assistant that can help generate task descriptions, to-do lists, and even summaries [ClickUp AI: https://clickup.com/features/ai]. Wrike has rolled out AI-based risk prediction and project analytics to highlight schedule or budget issues [Wrike AI: https://www.wrike.com/features/work-intelligence/].

On paper, these features sound like the future. In practice, it’s more about saving time on daily chores: writing, scheduling, and simple reporting.

But when it comes to traceability, the story is the same:

None of these platforms offer true, out-of-the-box traceability that maps requirements to development artifacts and test results.
AI is mostly used to automate the obvious: surface-level tasks, quick summaries, basic automations.
Full delivery traceability — the kind needed for software teams to follow requirements from start to finish — remains a DIY project, usually cobbled together with plugins, spreadsheets, or third-party integrations.
Bottom line: Despite all the buzz about new AI features, end-to-end traceability is still missing across the board. The gap remains, and delivery teams are left to connect the dots on their own.

The Traceability Gap: Atlassian and the Market

Despite rapid progress in AI and automation, the traceability gap is a defining weakness across all leading platforms. Atlassian’s Jira, for example, has been consistently criticized for its limited traceability — it’s nearly impossible to generate a true end-to-end traceability matrix in Jira without relying on third-party add-ons or significant manual work. There’s no built-in requirements management or test case management; linking user stories, code changes, and test results still demands plugins or custom integrations.

This fragmentation isn’t just an inconvenience. It directly limits what AI can do. Siloed data forces teams to maintain separate workflows, increases the cost and complexity of project oversight, and — crucially — means that even the smartest AI is constrained to surface-level insights. When project information is spread across disconnected systems, AI can’t see the full story, only isolated events.

Industry analysts and the Atlassian community have recognized this as a blocker for realizing AI’s full promise. As my open letter to Atlassian put it, “Traceability isn’t a feature — it’s the foundation” for building a truly intelligent delivery platform. Until platforms address this gap at the core, every new AI-powered feature will remain an isolated assistant, not a true system optimizer.

Why Traceability Matters for AI: Expert Insights

AI and machine learning feed on data — but not just any data. They need both quantity and quality. In project delivery, you get plenty of information: requirements, tickets, code commits, test results, production stats. But unless these pieces are properly mapped and connected, even the best AI can only offer local help — a summary here, a faster report there.

The experts are clear: Robust AI in project management is only possible when data is structured, organized, and, above all, traceable. As one AI researcher bluntly put it, “machine learning won’t provide any results without organized and structured data” [AI & Traceability discussion: https://www.epicflow.com/blog/ai-in-project-management-is-the-future-already-here/]. In software delivery, that means showing how top-level requirements connect to design, code, tests, and deployment — all the way to business outcomes.

So why is this such a big deal? If an AI can “walk” this traceability map, it can:

Instantly assess the impact of a change request
Pinpoint where a defect was introduced
Proactively flag which features or code might be affected by a new risk
And the numbers back it up. A PMI survey found that companies using AI-driven tools delivered 61% of projects on time (vs. 47% without AI) and saw business benefits in 69% of cases (vs. 53%). But those benefits only scale when AI works with rich, interconnected data — which means strong traceability under the hood.

The upshot: Traceability isn’t just about governance or oversight. It’s what unlocks the potential for AI to drive real, systemic improvements across delivery. Without it, even advanced features are reduced to a set of digital “helpers” — useful, but working in silos.
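
To make the “walking the map” idea tangible, here’s a toy example with invented artifact names: a tiny directed graph from a requirement down to stories, commits, tests, and a business KPI, plus a breadth-first walk that answers the impact question in a few lines once the links actually exist.

```python
# Toy traceability graph: requirement -> stories -> commits -> tests -> business KPI.
# All names are invented; the point is that impact analysis is trivial once links exist.
from collections import deque

TRACE = {
    "REQ-12 faster checkout": ["STORY-301", "STORY-302"],
    "STORY-301": ["commit a1b2c3", "commit d4e5f6"],
    "STORY-302": ["commit 9f8e7d"],
    "commit a1b2c3": ["test_checkout_total"],
    "commit d4e5f6": ["test_checkout_total", "test_payment_retry"],
    "commit 9f8e7d": ["test_cart_badge"],
    "test_checkout_total": ["KPI: checkout conversion"],
    "test_payment_retry": ["KPI: checkout conversion"],
    "test_cart_badge": ["KPI: cart engagement"],
}

def impact(node):
    """Everything downstream of `node`: the blast radius of a change."""
    seen, queue = set(), deque([node])
    while queue:
        for child in TRACE.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

print(impact("STORY-301"))
# -> the commits under the story, the tests covering them, and the KPI they feed
```

The hard part in real tools isn’t the walk; it’s that the edges of this graph are rarely recorded in the first place.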

The Real Breakthrough: Why Traceability Is the Missing Link for AI in Delivery

Despite all the progress in project management platforms, the true leap forward — AI that transforms delivery from end to end — still hasn’t happened. The main reason is simple: no major tool today provides native, connected traceability across the full delivery lifecycle in a way that’s truly usable for AI.

Consider Atlassian: it covers every major stage — discovery in Jira Product Discovery, development in Jira and Bitbucket, documentation in Confluence, operations in Jira Service Management. But there’s still no true, unified traceability matrix that connects requirements, tickets, code, tests, and business value — and makes these connections accessible for AI-driven analysis. This problem isn’t unique to Atlassian. Asana’s Work Graph, Azure DevOps’ work item links, GitLab’s all-in-one promises: they all take steps in the right direction, yet critical data remains scattered or only loosely joined.

Why Does This Gap Matter?

AI in its current state — whether developer copilots, QA copilots, or requirements generators — remains siloed, only optimizing isolated tasks. What AI really needs is structure and relationships: a connected data model that reveals the whole chain from business need to code to value delivered. I’m not advocating a single rigid traceability model — there are many valid approaches, from strict to lightweight. What matters is end-to-end structure. This is especially important because large language models (LLMs) are, and likely will remain, limited by the size and structure of their input and output. The better structured the data, the more value AI can provide.

When requirements, tickets, code, tests, and business value are mapped and linked, AI can finally move from automating local tasks to delivering system-level insight, guidance, and prediction.
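
As one deliberately lightweight illustration (not a proposed standard), here’s what such a connected model might look like: plain records with explicit links, enough structure for an AI layer to traverse the chain and answer simple questions, such as which commits under a requirement have no linked tests. All field names are illustrative.

```python
# A minimal, illustrative data model for a connected delivery chain:
# requirement -> tickets -> commits -> test runs. Field names are examples only.
from dataclasses import dataclass, field

@dataclass
class TestRun:
    name: str
    passed: bool

@dataclass
class Commit:
    sha: str
    tests: list[TestRun] = field(default_factory=list)

@dataclass
class Ticket:
    key: str
    summary: str
    commits: list[Commit] = field(default_factory=list)

@dataclass
class Requirement:
    key: str
    business_goal: str
    tickets: list[Ticket] = field(default_factory=list)

def untested_commits(req: Requirement) -> list[str]:
    """A question a copilot can answer instantly once the chain is explicit."""
    return [c.sha for t in req.tickets for c in t.commits if not c.tests]
```

The specific schema matters far less than the fact that the links are first-class data rather than tribal knowledge.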

Why the PMO Copilot Is the Real Breakthrough

Talk about AI in software development almost always circles around coding assistants, QA copilots, or tools for requirements management. These helpers are becoming standard, but their impact is fundamentally limited by fragmented, siloed data. Each solves a local problem — none see the whole delivery landscape.

But here’s where the real opportunity lies: a new kind of AI, capable of understanding and orchestrating the entire delivery system. Not just automating code or reporting bugs, but tracking how business needs flow into requirements, turn into code and tests, and ultimately deliver real value.
This is the promise of the PMO Copilot.

We’re on the threshold of this shift. Recent research and early pilot cases — especially in AI-driven risk management — show what’s possible when end-to-end traceability becomes reality. With structured, connected data, AI can move beyond isolated assistants and become a nerve center for delivery: anticipating risks, surfacing bottlenecks, coordinating efforts, and enabling continuous improvement across the lifecycle.
To see these emerging possibilities in action, explore my analysis here: [https://www.linkedin.com/pulse/ai-driven-enhancements-project-risk-management-pmo-vitalii-oborskyi-q5iof/]

And this isn’t limited to risk management. The same approach — structuring and linking delivery data — can transform every aspect of software delivery. The PMO Copilot model offers a glimpse of what’s next: AI that is truly a system-level partner, not just another assistant.

What’s Next?

The next real leap in software delivery won’t come from another local copilot for coding, QA, or requirements. It will come when AI steps into the role of PMO Copilot — seeing the entire delivery chain, connecting every requirement, ticket, commit, and test to business outcomes, and guiding teams as a single adaptive system.

Whoever closes the traceability gap — making end-to-end connections a native, seamless part of the platform — will shape the next era of delivery.
Traceability isn’t just about compliance or reporting. It’s the foundation for true AI-driven transformation.

If you’re building tools or shaping processes, start with traceability. If you’re tackling the same problems, know you’re not alone. Let’s push the industry toward real, end-to-end traceability as the new standard for AI-enabled delivery.

Want to connect or share your own traceability experience? Find me on LinkedIn. [https://www.linkedin.com/in/vitaliioborskyi/]
