Last year, teams using AI shipped slower and broke more things. This year, they're shipping faster, but they're still breaking things. The difference between those outcomes isn't the AI tool you picked—it's what you built around it.
The 2025 DORA State of AI-assisted Software Development Report introduces an AI Capabilities Model based on interviews, expert input, and survey data from thousands of teams. Seven organizational capabilities consistently determine whether AI amplifies your effectiveness or just amplifies your problems.
This isn't about whether to use AI. It's about how to use it without making everything worse.
First, what DORA actually measures
DORA is a long-running research program studying how teams build, ship, and run software. It measures outcomes across multiple dimensions:
- Organizational performance – business-level impact
- Delivery throughput – how fast features ship
- Delivery instability – how often things break
- Team performance – collaboration and effectiveness
- Product performance – user-facing quality
- Code quality – maintainability and technical debt
- Friction – blockers and waste in the development process
- Burnout – team health and sustainability
- Valuable work – time spent on meaningful tasks
- Individual effectiveness – personal productivity
These aren't vanity metrics. They're the lenses DORA uses to determine whether practices help or hurt.
What changed in 2025
Last year: AI use correlated with slower delivery and more instability.
This year: AI use correlates with higher throughput, but instability still hangs around.
In short, teams are getting faster, but the breakage hasn't gone away. The environment and habits around the tools are what make the difference.
The big idea: capabilities beat tools
DORA's 2025 research introduces an AI Capabilities Model. Seven organizational capabilities consistently amplify the upside from AI while mitigating the risks:
- Clear and communicated AI stance – everyone knows the policy
- Healthy data ecosystems – clean, accessible, well-managed data
- AI-accessible internal data – tools can see your context safely
- Strong version control practices – commit often, rollback fluently
- Working in small batches – fewer lines, fewer changes, shorter tasks
- User-centric focus – outcomes trump output
- Quality internal platforms – golden paths and secure defaults
These aren't theoretical. They're patterns that emerged from real teams shipping real software with AI in the loop.
Below are the parts you can apply on Monday morning.
1. Write down your AI stance
Teams perform better when the policy is clear and visible, and when it encourages thoughtful experimentation. A clear stance improves individual effectiveness, reduces friction, and even lifts organizational performance.
Many developers still report policy confusion, which leads to underuse or risky workarounds. Fixing clarity pays back quickly.
Leader move
Publish the allowed tools and uses, where data can and cannot go, and who to ask when something is unclear. Then socialize it in the places people actually read—not just a wiki page nobody visits.
Make it a short document:
- What's allowed: Which AI tools are approved for what use cases
- What's not allowed: Where the boundaries are and why
- Where data can go: Which contexts are safe for which types of information
- Who to ask: A real person or channel for edge cases
Post it in Slack, email it, put it in onboarding. Make not knowing harder than knowing.
2. Give AI your company context
The single biggest multiplier is letting AI use your internal data in a safe way. When tools can see the right repos, docs, tickets, and decision logs, individual effectiveness and code quality improve dramatically.
Licenses alone don't cut it. Wiring matters.
Developer move
Include relevant snippets from internal docs or tickets in your prompts when policy allows. Ask for refactoring that matches your codebase, not generic patterns.
Instead of:
Write a function to validate user input
Try:
Write a validation function that matches our pattern in
docs/validators/base.md. It should use the same error
handling structure we use elsewhere and return ValidationResult.
Context makes the difference between generic code and code that fits.
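If you want a repeatable version of that habit, a tiny shell sketch can bundle the context for you. The paths and file names below are hypothetical stand-ins for wherever your conventions actually live:

```bash
#!/usr/bin/env bash
# Sketch: assemble internal context plus the task into one prompt file.
# docs/validators/base.md and src/validators/email.py are hypothetical paths;
# point these at whatever actually documents your conventions.

task="Write a validation function that returns ValidationResult and matches our error handling."

{
  echo "## Task"
  echo "$task"
  echo
  echo "## Our validator conventions (docs/validators/base.md)"
  cat docs/validators/base.md
  echo
  echo "## An existing validator to mirror (src/validators/email.py)"
  cat src/validators/email.py
} > prompt.txt

# Skim prompt.txt for anything your AI policy says must not leave your
# environment, then paste it into an approved tool.
```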
Leader move
Prioritize the plumbing. Improve data quality and access, then connect AI tools to approved internal sources. Treat this like a platform feature, not a side quest.
This means:
- Audit your data: What's scattered? What's duplicated? What's wrong?
- Make it accessible: Can tools reach the right information safely?
- Build integrations: Connect approved AI tools to your repos, docs, and systems
- Measure impact: Track whether context improves code quality and reduces rework
This is infrastructure work. It's not glamorous. It pays off massively.
3. Make version control your safety net
Two simple habits change the payoff curve:
- Commit more often
- Be fluent with rollback and revert
Frequent commits amplify AI's positive effect on individual effectiveness. Fluency with rollbacks amplifies its effect on team performance. That safety net lowers fear and keeps speed sane.
Developer move
Keep PRs small, practice fast reverts, and do review passes that focus on risk hot spots. Larger AI-generated diffs are harder to review, so small batches matter even more.
Make this your default workflow:
- Commit after every meaningful change, not just when you're "done"
- Know your rollback commands by heart: git revert, git reset, and git checkout
- Break big AI-generated changes into reviewable chunks before opening a PR
- Flag risky sections explicitly in PR descriptions
When AI suggests a 300-line refactor, don't merge it as one commit. Break it into logical pieces you can review and revert independently.
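With plain git, that might look like the sketch below. The chunk boundaries and commit messages are illustrative; the point is that each piece can be reviewed and reverted on its own:

```bash
# Stage and commit the refactor as independent, revertable pieces.
git add -p                      # pick only the hunks for the first logical change
git commit -m "Extract shared validation helpers"

git add -p                      # next logical change
git commit -m "Move callers onto the new helpers"

git add -p                      # final cleanup
git commit -m "Delete the old validation path"

# If one piece turns out to be wrong, undo just that piece:
git revert <sha-of-the-bad-commit>
```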
4. Work in smaller batches
Small batches correlate with better product performance for AI-assisted teams. They turn AI's neutral effect on friction into a reduction. You might feel a smaller bump in personal effectiveness, which is fine—outcomes beat output.
Team move
Make "fewer lines per change, fewer changes per release, shorter tasks" your default.
Concretely:
- Set a soft limit on PR size (150-200 lines max)
- Break features into smaller increments that ship value
- Deploy more frequently, even if each deploy does less
- Measure cycle time from commit to production, not just individual velocity
Small batches reduce review burden, lower deployment risk, and make rollbacks less scary. When AI is writing code, this discipline matters more, not less.
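If you want the soft limit to be more than a gentleman's agreement, a small CI check can flag oversized branches. A minimal sketch, assuming origin/main is your base branch and 200 lines is the threshold you settled on:

```bash
#!/usr/bin/env bash
# Soft PR-size check: fail when a branch's diff gets too large to review well.
set -euo pipefail

limit=200            # assumed soft limit from the team agreement above
base="origin/main"   # assumed default branch; adjust to your repo

# Sum added + deleted lines across all changed files (binary files count as 0).
changed=$(git diff --numstat "$base"...HEAD | awk '{sum += $1 + $2} END {print sum + 0}')

if [ "$changed" -gt "$limit" ]; then
  echo "Diff is $changed changed lines (soft limit: $limit). Consider splitting this PR."
  exit 1
fi
echo "Diff is $changed changed lines. Within the soft limit of $limit."
```

Run it as an advisory comment first; once the team trusts the number, make it a blocking check.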
5. Keep the user in the room
User-centric focus is a strong moderator. With it, AI maps to better team performance. Without it, you move quickly in the wrong direction.
Speed without direction is just thrashing.
Leader move
Tie AI usage to user outcomes in planning and review. Ask how a suggestion helps a user goal before you celebrate a speedup.
In practice:
- Start feature discussions with the user problem, not the implementation
- When reviewing AI-generated code, ask "Does this serve the user need?"
- Measure user-facing outcomes (performance, success rates, satisfaction) alongside velocity
- Reject optimizations that don't trace back to user value
AI is good at generating code. It's terrible at understanding what your users actually need. Keep humans in the loop for that judgment.
6. Invest in platform quality
Quality internal platforms amplify AI's positive effect on organizational performance. They also raise friction a bit, likely because guardrails block unsafe patterns.
That's not necessarily bad. That's governance doing its job.
Leader move
Treat the platform as a product. Focus on golden paths, paved roads, and secure defaults. Measure adoption and developer satisfaction.
What this looks like:
- Golden paths: Make the secure, reliable, approved way also the easiest way
- Good defaults: Bake observability, security, and reliability into templates
- Clear boundaries: Make it obvious when someone's about to do something risky
- Fast feedback: Catch issues in development, not in production
When AI suggests code, a good platform will catch problems early. It's the difference between "this breaks in production" and "this won't even compile without the right config."
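Here is what a guardrail can look like in its simplest form: a pre-merge check that blocks a couple of obviously unsafe patterns. The patterns and paths below are illustrative; a real platform would lean on proper linters, policy checks, and secret scanners rather than ad-hoc greps:

```bash
#!/usr/bin/env bash
# Guardrail sketch: fail the check if a change adds obviously unsafe patterns.
# Patterns and paths are illustrative, not a substitute for real scanners.
set -uo pipefail

base="origin/main"
status=0

# 1. Hard-coded credentials showing up in added lines
if git diff "$base"...HEAD | grep -nE '^\+.*(PRIVATE_KEY|SECRET_KEY|password[[:space:]]*=)'; then
  echo "Possible hard-coded credential in this change."
  status=1
fi

# 2. Plain-HTTP calls added where the platform expects the TLS-only client
if git diff "$base"...HEAD -- src/ | grep -n '^\+.*http://'; then
  echo "Plain http:// URL added; use the platform's default client instead."
  status=1
fi

exit "$status"
```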
7. Use value stream management so local wins become company wins
Without value stream visibility, AI creates local pockets of speed that get swallowed by downstream bottlenecks. With VSM, the impact on organizational performance is dramatically amplified.
If you can't draw your value stream on a whiteboard, start there.
Leader move
Map your value stream from idea to production. Identify bottlenecks. Measure flow time, not just individual productivity.
Questions to answer:
- How long does it take an idea to reach users?
- Where do handoffs slow things down?
- Which stages have the longest wait times?
- Is faster coding making a difference at the business layer?
When one team doubles their velocity but deployment still takes three weeks, you haven't improved the system. You've just made the queue longer.
VSM makes the whole system visible. It's how you turn local improvements into company-level wins.
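You don't need a full tooling suite to get a first signal. A rough sketch of commit-to-deploy lead time, assuming you mark deploys with annotated deploy-* tags (that naming convention is an assumption; swap in whatever you actually use):

```bash
#!/usr/bin/env bash
# Rough lead-time signal: hours from a commit landing to the deploy that shipped it.
# Assumes annotated tags named deploy-*; adjust to however you mark releases.
set -eu

commit="$1"   # e.g. ./leadtime.sh abc1234

# Earliest deploy tag whose history contains the commit.
deploy_tag=$(git tag --list 'deploy-*' --contains "$commit" --sort=creatordate | head -n 1)

if [ -z "$deploy_tag" ]; then
  echo "Commit $commit has not shipped in any deploy-* tag yet."
  exit 1
fi

commit_time=$(git show -s --format=%ct "$commit")
deploy_time=$(git for-each-ref --format='%(creatordate:unix)' "refs/tags/$deploy_tag")

echo "Lead time for $commit: $(( (deploy_time - commit_time) / 3600 )) hours (shipped in $deploy_tag)"
```

Averaged over recent deploys, that number tells you whether faster coding is actually reaching users sooner.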
Quick playbooks
For developers
- Commit smaller, commit more, and know your rollback shortcut.
- Add internal context to prompts when allowed. Ask for diffs that match your codebase.
- Prefer five tiny PRs over one big one. Your reviewers and your on-call rotation will thank you.
- Challenge AI suggestions that don't trace back to user value. Speed without direction is waste.
For engineering leaders
- Publish and socialize an AI policy that people can actually find and understand.
- Fund the data plumbing so AI can use internal context safely. This is infrastructure work that pays compound returns.
- Strengthen the platform. Measure adoption and expect a bit of healthy friction from guardrails.
- Run regular value stream reviews so improvements show up at the business layer, not just in the IDE.
- Tie AI adoption to outcomes, not just activity. Measure user-facing results alongside velocity.
The takeaway
AI is an amplifier. With weak flow and unclear goals, it magnifies the mess. With good safety nets, small batches, user focus, and value stream visibility, it magnifies the good.
The 2025 DORA report is very clear on that point, and it matches what many teams feel day to day: the tool doesn't determine the outcome. The system around it does.
You can start on Monday. Pick one capability, make it better, measure the result. Then pick the next one.
That's how you ship faster without breaking things.
Want the full data? Download the complete 2025 DORA State of AI-assisted Software Development Report.