The question of whether to adopt AI in software development has already been answered. According to the 2025 DORA AI Capabilities Model - based on research from nearly 5,000 technology professionals and over 100 hours of qualitative analysis - close to 90% of developers are already using AI in their day-to-day work. What remains unresolved is not adoption, but effectiveness. Many organizations have equipped their developers with powerful AI tools, yet struggle to translate individual productivity gains into meaningful business outcomes. This disconnect is at the heart of DORA’s latest research.
The report introduces a critical insight: AI is an amplifier. It does not inherently improve systems; instead, it magnifies the strengths and weaknesses that already exist. High-performing teams become faster and more effective, while struggling teams often see their inefficiencies scale. This reframes the entire conversation around AI adoption. Success is no longer about choosing the right tools—it is about building the right foundations. Understanding and investing in these foundations is what determines whether AI becomes a competitive advantage or just another layer of complexity.
The Core Insight: AI Alone Doesn’t Improve Performance
One of the most important findings from DORA is that AI adoption, on its own, has only a modest impact on organizational performance. While developers may experience significant gains in speed and efficiency, these improvements often fail to propagate through the rest of the system. Instead, they are absorbed by bottlenecks in testing, security reviews, approvals, and deployment pipelines. This creates a situation where teams appear to move faster locally, but the overall system remains constrained.
This phenomenon highlights a fundamental truth: software delivery is a system, not a collection of individual tasks. Optimizing one part of the system without addressing the rest leads to imbalances rather than improvements. If data is fragmented, workflows are unclear, or processes are overly complex, AI will simply accelerate these issues. Teams may generate more code, but that code will still face the same downstream friction.
DORA’s research makes it clear that meaningful improvements only emerge when AI is paired with strong technical and cultural capabilities. These capabilities ensure that gains at the individual level can flow through the entire value stream, ultimately impacting organizational performance. Without them, AI remains an isolated productivity tool rather than a transformative force.
Clear and Communicated AI Stance
A clear and communicated AI stance is one of the most foundational capabilities identified by DORA. In many organizations, ambiguity around AI usage creates uncertainty, which in turn slows adoption and increases risk. Developers often fall into two extremes: either they avoid using AI due to fear of violating policies, or they use it freely without understanding the boundaries. Both scenarios lead to suboptimal outcomes.
DORA emphasizes that an effective AI stance must be both comprehensible and communicated. It should clearly define what is expected, what is permitted, and how AI can be safely used within the organization. This clarity provides psychological safety, allowing developers to experiment and adopt AI tools with confidence. Importantly, the stance does not need to be overly restrictive or overly permissive—it simply needs to be well-defined and consistently applied.
The impact of this capability is significant. Organizations with a clear AI stance see improvements in individual effectiveness, organizational performance, and software delivery throughput, while also reducing friction. This is because developers are no longer second-guessing their decisions or navigating uncertainty. Instead, they can focus on using AI effectively within a known framework, which ultimately leads to better outcomes across the board.
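To make this concrete, some organizations capture their stance as a short policy document that tooling and onboarding can reference. The structure and field names below are a hypothetical sketch, not a format DORA prescribes:

```yaml
# Hypothetical AI usage stance, published where every developer can find it.
ai_stance:
  approved_tools:
    - name: code-assistant            # placeholder tool name
      allowed_data: [source-code, internal-docs]
  prohibited:
    - sending customer PII to external models
    - committing generated code without human review
  review: generated changes follow the normal code-review process
  contact: ai-governance@example.com  # where to ask when unsure
```

The exact rules matter less than the fact that they are written down, easy to find, and consistently applied.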
Healthy Data Ecosystems
DORA identifies healthy data ecosystems as one of the most impactful capabilities for successful AI adoption. AI systems rely heavily on data, and the quality of that data directly influences the quality of outcomes. Organizations with high-quality, accessible, and well-integrated data see significantly stronger benefits from AI compared to those with fragmented or unreliable data systems.
A healthy data ecosystem is characterized by three key attributes: data must be trustworthy, easily accessible, and unified across the organization. When these conditions are met, AI can operate with the context it needs to produce meaningful and accurate outputs. However, when data is siloed or inconsistent, AI tends to generate results that reflect those inconsistencies, often leading to confusion and rework.
DORA also highlights that poor data environments lead to what can be described as “localized productivity gains.” Developers may work faster with AI, but their output gets slowed down or corrected later in the process due to data-related issues. This prevents organizations from realizing true end-to-end improvements. Investing in data quality, governance, and accessibility is therefore not just a data initiative—it is a prerequisite for making AI effective at scale. Without it, AI becomes a force multiplier for bad data rather than a driver of better outcomes.
AI-Accessible Internal Data
Closely related to healthy data ecosystems is the concept of making internal data accessible to AI systems. DORA distinguishes between simply having good data and ensuring that AI tools can effectively use that data. This capability focuses on connecting AI systems to internal sources such as codebases, documentation, and organizational knowledge. When AI operates without access to internal context, it remains a general-purpose assistant. It can provide useful suggestions, but those suggestions lack specificity and alignment with the organization’s unique systems and practices. In contrast, when AI is connected to internal data, it becomes significantly more effective, offering insights and outputs that are tailored to the organization’s environment.
DORA’s findings show that this capability has a strong positive impact on both code quality and individual effectiveness. Teams that enable AI to access internal data experience more relevant outputs and fewer errors, which reduces rework and improves overall efficiency.
However, this capability also comes with responsibility. Poor-quality or outdated data can lead to poor AI outputs at scale. Organizations must ensure that the data being exposed to AI is accurate, up-to-date, and well-maintained. This reinforces the importance of strong data governance and continuous data hygiene as part of AI adoption strategies.
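As a rough illustration of the difference internal context makes, the sketch below grounds a prompt in internal documents using naive keyword overlap. The documents, scoring rule, and prompt shape are all illustrative; a real system would use a proper retrieval layer.

```python
# A minimal sketch of grounding an AI assistant in internal context.
# The documents and the scoring rule are illustrative, not a real product API.

def retrieve_context(query: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank internal documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(text.lower().split())), name)
        for name, text in docs.items()
    ]
    scored.sort(reverse=True)
    return [name for score, name in scored[:top_k] if score > 0]

internal_docs = {
    "deploy-runbook": "how to deploy the payments service to production",
    "style-guide": "naming conventions and review checklist for python code",
    "oncall-faq": "escalation policy for production incidents",
}

context = retrieve_context("how do we deploy payments to production", internal_docs)
prompt = f"Answer using these internal sources: {context}\n\nQuestion: ..."
print(context)  # ['deploy-runbook', 'oncall-faq']
```

Without the retrieval step, the assistant would answer from general knowledge alone; with it, the same question is answered against the organization's own runbooks and policies.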
Strong Version Control Practices
As AI increases the speed and volume of code generation, version control becomes more critical than ever. DORA’s research highlights that AI-assisted development introduces a level of unpredictability, as generated outputs can vary in quality and correctness. This makes it essential for teams to have strong version control practices in place to manage risk effectively.
Frequent commits and the ability to roll back changes are particularly important. DORA found that these practices amplify the positive effects of AI adoption. Frequent commits create a clear and traceable history of changes, making it easier to identify issues and isolate problems. Rollback mechanisms provide a safety net, allowing teams to quickly revert changes when something goes wrong.
This capability enables teams to experiment with AI-generated code without compromising system stability. It transforms version control from a passive tool into an active safeguard that supports safe and continuous development. In an AI-assisted environment, version control is not just about tracking changes—it is about enabling controlled experimentation. Teams that invest in strong version control practices are better positioned to harness the benefits of AI while minimizing the associated risks.
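The "small commits plus rollback" pattern can be sketched in a throwaway repository. This assumes the `git` CLI is available; the file names and commit messages are illustrative.

```python
# A minimal sketch of the "frequent commits + rollback" safety net using git
# in a throwaway repository.
import os
import subprocess
import tempfile

def git(*args: str, cwd: str) -> str:
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
git("config", "user.email", "dev@example.com", cwd=repo)
git("config", "user.name", "Dev", cwd=repo)

path = os.path.join(repo, "app.txt")

# Frequent commits: each (possibly AI-generated) change lands on its own,
# so history stays traceable and any single change can be isolated.
for i, content in enumerate(["baseline", "ai change 1", "ai change 2"]):
    with open(path, "w") as f:
        f.write(content)
    git("add", "app.txt", cwd=repo)
    git("commit", "-q", "-m", f"change {i}: {content}", cwd=repo)

# Rollback: revert the last commit without rewriting history.
git("revert", "--no-edit", "HEAD", cwd=repo)
with open(path) as f:
    print(f.read())  # "ai change 1"
```

Because every change is its own commit, reverting a bad AI-generated change is a single operation rather than an archaeology exercise.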
Working in Small Batches
Working in small batches is a long-standing best practice in software development, and DORA reinforces its importance in the context of AI. While AI enables developers to generate large amounts of code quickly, large changes are inherently more difficult to review, test, and integrate. This increases the likelihood of errors and slows down the overall delivery process.
DORA’s research shows that teams working in small batches experience better product performance and reduced friction, even if their perceived individual productivity is slightly lower. Smaller changes are easier to validate, easier to deploy, and less likely to introduce instability into the system.
This capability acts as a counterbalance to the speed introduced by AI. It ensures that rapid code generation does not lead to uncontrolled complexity. Instead, it channels that speed into manageable, incremental improvements. By focusing on small, testable units of work, teams can maintain a steady flow of value while minimizing risk. This approach aligns with the broader goal of turning individual productivity gains into consistent and reliable system-level performance.
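One lightweight way to enforce this counterbalance is a batch-size gate in the review pipeline. The threshold and change records below are illustrative assumptions, not values from the DORA report:

```python
# A rough sketch of a batch-size gate: flag changes whose diff exceeds a
# review-friendly threshold. The threshold and change records are illustrative.

SMALL_BATCH_LIMIT = 200  # max changed lines that reviewers can validate well

def oversized_changes(changes: list[dict], limit: int = SMALL_BATCH_LIMIT) -> list[str]:
    """Return the ids of proposed changes that should be split up."""
    return [c["id"] for c in changes if c["lines_changed"] > limit]

proposed = [
    {"id": "PR-101", "lines_changed": 45},    # easy to review
    {"id": "PR-102", "lines_changed": 1200},  # AI-generated bulk change
    {"id": "PR-103", "lines_changed": 180},
]

print(oversized_changes(proposed))  # ['PR-102']
```

A gate like this does not slow teams down; it redirects AI-generated volume into increments the rest of the system can actually absorb.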
User-Centric Focus
DORA’s findings around user-centric focus are particularly striking. The report shows that AI adoption can have dramatically different outcomes depending on whether teams are aligned with user needs. Teams with a strong user-centric focus see improvements in performance, while those without it can actually experience declines.
This highlights a critical point: AI amplifies direction, not just speed. If teams are focused on delivering user value, AI helps them do it faster and more effectively. However, if teams are focused on output rather than outcomes, AI accelerates the production of features that may not deliver real value. Maintaining a user-centric approach requires continuous alignment with user needs. This includes integrating user feedback into development processes, measuring success based on outcomes rather than outputs, and ensuring that development efforts are guided by clear user goals.
In an AI-driven environment, developers must take on a more active role in ensuring that generated outputs align with user expectations. This requires a shift in mindset from simply building features to delivering meaningful outcomes.
Quality Internal Platforms
The final capability identified by DORA is the presence of high-quality internal platforms. These platforms play a critical role in enabling AI adoption at scale by providing standardized workflows, reducing friction, and ensuring consistency across teams. DORA’s research shows that the impact of AI on organizational performance is heavily influenced by the quality of internal platforms. When platforms are well-designed and provide a seamless developer experience, AI-driven improvements can propagate throughout the organization. When platforms are lacking, these improvements remain isolated.
Internal platforms serve as the infrastructure that supports modern software development. They provide the tools, processes, and guardrails that allow teams to build, test, and deploy software efficiently and safely. In the context of AI, they ensure that generated outputs can move smoothly through the delivery pipeline.
By reducing complexity and standardizing processes, internal platforms enable teams to focus on delivering value rather than managing infrastructure. This makes them a key enabler of successful AI adoption.
From AI Adoption to Agentic Workflows
As organizations mature across these capabilities, a broader shift begins to emerge. AI is no longer limited to assisting developers at the code level—it starts to participate in workflows across the software development lifecycle. Tasks such as generating changes, validating outputs, and triggering processes become increasingly automated. This shift can be understood as a move toward more agent-assisted or semi-autonomous workflows, where AI systems operate within defined guardrails to support end-to-end processes.
However, this evolution is only possible when the foundational capabilities identified by DORA are in place. Without strong data, version control, and platforms, introducing automation at the workflow level increases risk rather than reducing it. With the right foundations, however, it enables a new level of efficiency and consistency in software delivery.
The Final Shift: From AI Adoption to Platform Orchestration
As organizations mature across these capabilities, the challenge shifts from adoption to orchestration. Having the right practices in place is no longer sufficient - teams need a central layer that connects systems, enforces workflows, and maintains consistency across the entire SDLC. This is where the quality of your internal platform becomes the defining variable. AI embedded within a strong platform multiplies output. AI layered on top of a weak one multiplies chaos.
The IDP Imperative: Why Your Platform Is the Make-or-Break Variable
The numbers are hard to ignore. According to the DORA report, 90% of organizations already report using an internal developer platform. Gartner projects that 85% of platform engineering teams will have IDPs by 2028, and 80% of large engineering organizations will have dedicated platform teams by 2026. But here is the critical nuance DORA surfaces: having a platform is not enough. Platform quality is the make-or-break variable for AI ROI. When platform quality is high, AI adoption has a strong and measurable positive impact on organizational performance. When it is low, that impact is negligible — no matter how sophisticated the AI tools in use.
This is where the conversation shifts from platform engineering to agentic engineering. The next generation of IDPs cannot simply manage services and workflows - they need to power a shared environment where humans and AI agents run the software development lifecycle together. That requires four critical capabilities: a rich, holistic context lake that correlates data across all environments, services, tools, and policies in real time; orchestration and automation that supports code, low-code, and AI-enabled workflows with governed execution; embedded guardrails and governance with RBAC, confidence thresholds, and human-in-the-loop approval gates; and unified measurement and optimization across DORA metrics, AI impact, and custom standards.
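The guardrail pattern above, combining confidence thresholds with human-in-the-loop approval, can be sketched in a few lines. The threshold value, action shape, and routing labels are illustrative assumptions, not Port or DORA specifics:

```python
# A hedged sketch of the "confidence thresholds + human-in-the-loop" guardrail;
# the threshold, action shape, and routing labels are all illustrative.

APPROVAL_THRESHOLD = 0.9  # below this, a human must sign off

def route_action(action: dict, threshold: float = APPROVAL_THRESHOLD) -> str:
    """Auto-approve high-confidence agent actions; queue the rest for review."""
    if action["confidence"] >= threshold:
        return "auto-approved"
    return "pending-human-review"

actions = [
    {"id": "restart-service", "confidence": 0.97},
    {"id": "drop-database-index", "confidence": 0.62},
]

print([(a["id"], route_action(a)) for a in actions])
```

The design choice is that autonomy is earned per action, not granted globally: routine, high-confidence operations flow through, while risky or uncertain ones stop at a human gate.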
Port.io is built for exactly this. As an agentic developer portal, Port goes beyond traditional IDP functionality by embedding AI workflows directly into the platform layer - giving developers not just visibility and self-service, but intelligent automation that operates within defined guardrails. The result is not just faster developers. It is a system where humans stay in control, teams consistently ship value, and AI incidents stop derailing delivery.
You can build the DORA dashboard inside your Port account to see your engineering performance. Sign up to Port and start measuring today.


