Software Development in the Agentic Era (2026)

A research-informed guide for developers, teams, and decision-makers

By Mike, in collaboration with Claude (Anthropic)


AI coding tools have moved from autocomplete to autonomous agents that plan, write, test, and iterate on code across entire codebases. The conversation has shifted from "should we use AI?" to "how do we use it without making things worse?"

Most writing about AI-assisted development is either breathless hype ("10x productivity!") or dismissive skepticism ("it's just fancy autocomplete"). Neither is useful. The reality is messier and more interesting than either camp suggests.

This guide synthesizes the available evidence from randomized controlled trials, large-scale telemetry, security audits, and practitioner experience. A central finding runs through all of them: AI doesn't change what good engineering is. It raises the stakes. Teams with strong fundamentals — testability, modularity, clear documentation — are getting real value from agents. Teams without them are generating more code, faster, with more problems.

That's not a reason to avoid AI. It's a reason to invest in the things that make AI useful.

What follows covers the research on productivity and perception (it's not what you think), how codebase design has become the primary "prompt" in the agentic era, where the real security risks are, how skill atrophy works and what to do about it, and how to measure whether any of this is actually helping.


1. Foundational Principle: AI Amplifies, It Doesn't Transform

Core thesis: AI doesn't change what good engineering is. It makes the consequences of good and bad engineering arrive faster. Your codebase is now the interface to the AI — its architecture, testability, and documentation determine whether agents help or create chaos.

  • Dave Farley: "AI won't replace software engineers, but it will expose the ones who never learned to think like engineers. Tools can speed you up, but if your thinking's wrong, AI just gets you to the wrong place faster."
  • The 2025 DORA State of AI-Assisted Software Development report confirms this: teams reporting gains from AI were already high-performing or elite. Teams working in small batches, with tight feedback loops and continuous integration, got a boost. Teams working in large batches saw "downstream chaos" — longer queues, more problems leaking into releases.
  • Jason Gorman's framing: "Same game, different dice." The principles that made teams effective before AI — small steps, testing, code review, modular design — are the same principles that make AI useful. Without them, AI just produces more broken code faster.
  • In the agentic era, this cuts even deeper. An agent operating on a well-structured, well-tested codebase with clear conventions will produce meaningfully better results than the same agent on a tangled monolith with no tests. The AI didn't change the rules — it raised the stakes.

2. The Perception Gap: You Think It's Helping More Than It Is

Subjective productivity reports are unreliable. This is the one finding teams should internalize before anything else.

  • METR RCT (2025): The only randomized controlled trial in this space found a striking perception gap — developers estimated AI sped them up ~20%, while measured results showed the opposite. The specific "19% slower" number should be taken with caveats: n=16 is small, early 2025 models (Claude 3.5/3.7 Sonnet) are already outdated, and the context was narrow (experienced devs on their own large, familiar codebases). METR is redesigning the study to address these limitations. The durable insight isn't the speed number — it's that developers genuinely cannot tell whether AI is helping them on any given task.
  • Faros AI telemetry (10,000+ developers): AI-adoption teams handled 47% more pull requests and 9% more tasks per day, but individual task cycle time didn't improve. The gain was parallelization and multitasking, not speed on any single task. This suggests AI changes how you work more than how fast you work.
  • The Gorman Paradox: If AI delivers the 2x–10x gains people claim, where's the evidence in app stores, business bottom lines, or GDP? The optimistic findings measure what the customer doesn't care about (lines of code, commits, PRs). The less sensational findings measure what matters (lead times, failure rates, cost of change).
  • With agents, the perception gap likely widens. An agent that autonomously completes a task in 10 minutes feels like magic — but if you spend 30 minutes reviewing, debugging, and fixing what it produced, you're net negative and may not even realize it.

Takeaway for practitioners: Track what matters. If your metrics are LoC or PR throughput, you're measuring water pressure at the firehose, not at the shower. And if your evidence for AI ROI is "developers say they feel faster," the METR perception gap — whatever the true speed effect turns out to be — should give you pause.


3. Your Codebase Is the Interface: Architecture for the Agentic Era

The shift from prompting to codebase design is the defining change of 2026. Your code, tests, and documentation are now the primary "prompt" — the agent reads them to understand your system.

3.1 Separation of Concerns as Agent Enablement

What was always good practice is now operationally critical:

  • Separate logic from data. Agents work well with pure functions and clear data boundaries. When business logic is entangled with I/O, framework code, or configuration, agents make cascading changes they don't understand.
  • Clear module boundaries. An agent needs to make isolated changes without breaking unrelated things. Dependency injection, well-defined interfaces, and small modules aren't just clean code — they're the blast radius control for AI-generated changes.
  • Small, composable units. The smaller and more self-contained a unit of code is, the better an agent can reason about it, test it, and modify it without exceeding its effective context.
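To make the pattern concrete, here's a minimal Python sketch — all names (`LineItem`, `order_total_cents`, the `db` handle) are illustrative, not from any particular codebase. The pricing logic is a pure function an agent can test and modify in isolation, while I/O stays at the edge:

```python
from dataclasses import dataclass

# Pure business logic: no I/O, no framework imports.
# An agent can reason about, test, and modify this in isolation.
@dataclass(frozen=True)
class LineItem:
    price_cents: int
    quantity: int

def order_total_cents(items: list[LineItem], discount_pct: float = 0.0) -> int:
    """Compute an order total. Pure function: same inputs, same output."""
    subtotal = sum(i.price_cents * i.quantity for i in items)
    return round(subtotal * (1 - discount_pct / 100))

# I/O lives at the edge. Changes here can't break the logic above,
# and changes to the logic can't silently break persistence.
def save_order_total(db, order_id: str, items: list[LineItem]) -> None:
    db.execute(
        "UPDATE orders SET total_cents = ? WHERE id = ?",
        (order_total_cents(items), order_id),
    )
```

The blast radius of a change to `order_total_cents` ends at the function boundary — a test suite can verify it without a database, which is exactly the property that gives an agent a fast, trustworthy feedback loop.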

3.2 Test Design for Agents

Tests are the agent's verification layer. They're how it knows whether its changes work. This means test design is now an AI collaboration concern, not just a quality concern.

  • Fast and deterministic. If your test suite takes 10 minutes, the agent's feedback loop is 10 minutes. If tests are flaky, the agent can't distinguish its own failures from noise.
  • Signal-rich, concise output. If your test runner dumps 500 lines of stack traces, warnings, and deprecation notices, the agent burns context parsing noise instead of understanding what failed. Clean red/green with clear failure messages is what enables effective self-correction.
  • TDD as agent protocol. Write the test first, let the agent implement to make it pass. This isn't just a development philosophy — it's the tightest feedback loop you can give an agent. The test is the specification.
  • Test the behavior, not the implementation. Agents will refactor and restructure. If your tests are coupled to implementation details, they'll break on every valid change.
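As a sketch of the "test as specification" loop — the `slugify` function and its tests are hypothetical — the behavior tests are written first, and the agent implements until they pass. Note the tests pin down observable behavior, not implementation details like which regex is used:

```python
# The tests are written first and specify behavior, not implementation.
# An agent implementing slugify() gets a tight, unambiguous feedback loop:
# run the tests, read the clean pass/fail signal, iterate.

def test_slugify_lowercases_and_joins_with_hyphens():
    assert slugify("Hello World") == "hello-world"

def test_slugify_strips_punctuation():
    assert slugify("Rock & Roll!") == "rock-roll"

def test_slugify_collapses_repeated_separators():
    assert slugify("a  --  b") == "a-b"

# One possible implementation an agent might produce to go green.
# It can be freely refactored later without breaking the tests above.
import re

def slugify(text: str) -> str:
    text = re.sub(r"[^a-z0-9]+", "-", text.lower())
    return text.strip("-")
```

Because the tests are coupled only to inputs and outputs, the agent (or a human) can swap the regex for a character loop tomorrow and every test still passes — that's the "behavior, not implementation" property in practice.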

3.3 Context Engineering: Documentation as Agent Context

Prompt engineering is dead. Context engineering — structuring the information environment the agent operates in — is what matters now.

  • AGENTS.md / CLAUDE.md / GEMINI.md: These repo-level instruction files encode your conventions, constraints, architectural decisions, and "don't do this" rules. They're the single highest-leverage artifact for AI collaboration. Treat them as living documents, reviewed in PRs like any other code.
  • ADRs (Architecture Decision Records): The "why" and "why not" behind your design choices. Without these, agents will confidently suggest the thing you already tried and rejected. ADRs are now a form of agent guardrail.
  • Inline comments for intent, not mechanics. Agents can read what code does. They can't infer why it does it that way, what constraints drove the decision, or what business rules are implicit. Comments explaining intent are agent context; comments restating the code are noise.
  • Up-to-date API contracts and type definitions. These are the agent's map of your system. Stale types and undocumented APIs are the #1 source of plausible-looking but wrong agent output.
  • Security implication: These config files are now part of your threat model. The "Rules File Backdoor" attack demonstrated that hidden instructions in .cursorrules can manipulate agents into inserting malicious code. Review these files with the same rigor as production code.
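As one illustration of what such a repo-level instruction file might contain — every line here is invented for the example, and there is no single prescribed format:

```markdown
# AGENTS.md — conventions for AI agents working in this repo (illustrative)

## Architecture
- Business logic lives in `core/`; it must stay free of I/O and framework imports.
- All database access goes through the repository layer. Never inline SQL elsewhere.

## Workflow
- Run the full test suite before proposing any change; all tests must pass.
- Keep changes scoped to the files named in the task. Ask before touching others.

## Don't
- Don't add new dependencies without flagging them for human review.
- Don't refactor code unrelated to the current task.
- We tried an ORM and removed it (see ADR-012). Don't reintroduce one.
```

The "Don't" section is where ADR-style "why not" knowledge pays off: it's the cheapest way to stop an agent from confidently re-proposing something the team already rejected.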

4. Plan Review: The Primary Skill

In the agentic era, you're not reviewing code suggestions — you're reviewing plans before execution. This is a different cognitive skill.

  • Nearly every AI coding assistant now has a plan mode. Use it. Letting an agent execute without reviewing its plan is like approving a PR without reading it, except the PR was written by someone who's never seen your system before.
  • What to look for in a plan: Architectural coherence (does this fit how we build things?), missing edge cases, wrong assumptions about dependencies, scope creep (agent adding things you didn't ask for), and unnecessary changes to unrelated files.
  • When to interrupt the agent: If the plan touches areas you didn't expect, if it proposes structural changes for a simple feature, or if you can't understand why it's doing something — stop, clarify, re-scope. This is the agentic equivalent of "knowing when to stop asking AI."
  • The sunk cost trap scales up. An agent that's been working for 5 minutes feels like it's "almost there." You let it keep going. A colleague would've said "I think we're going down the wrong path" after step 3. The agent never will.

5. Cognitive Debt and Skill Atrophy

Agents make this worse, not better. The more the AI does, the less you engage — and the less equipped you become to evaluate what it produces.

  • Anthropic's skill formation RCT (January 2026, n=52): Software developers learning a new Python library with AI assistance scored 17% lower on comprehension tests — nearly two letter grades. The time savings from using AI were not statistically significant; participants spent up to 30% of their allotted time just composing queries. The study used a chat-based assistant, not agentic tools — the authors explicitly note that agentic impacts are "likely to be more pronounced."

The biggest gap was on debugging questions — the ability to recognize when code is wrong and understand why it fails. This is precisely the skill most needed for reviewing agent output in the agentic era.

Interaction pattern was the key variable, not whether you used AI at all:

  • Low-scoring patterns (<40%): Complete AI delegation (fastest but learned nothing), progressive reliance (started independent, ended up delegating everything), iterative AI debugging (using AI to solve problems rather than clarify understanding).
  • High-scoring patterns (65%+): Generation-then-comprehension (generate code, then ask follow-up questions to understand it), hybrid code-explanation (requesting code and explanations together), conceptual inquiry (asking only conceptual questions, coding independently).
  • The "conceptual inquiry" pattern was the fastest high-scoring approach — faster than hybrid or generation-then-comprehension, and second fastest overall after pure delegation. Asking the AI conceptual questions and then coding yourself was both faster and produced better learning than asking it to write code.

  • The "copying vs. pasting" problem (Jason Gorman): Learning by copying code from books in the 1980s forced it through your brain — eyes, brain, fingers. "Copying isn't the problem. The problem is pasting. When we skip the 'through the brain' step, we don't engage with source material anywhere near as deeply." Agents take this to the extreme — you didn't even ask for the code, it just appeared.
  • The "Perpetual Junior" pattern: Developers who appear productive on the surface while foundational skills atrophy. They implement features quickly with AI, but struggle with system-level thinking, complex troubleshooting, and independent problem-solving when tools aren't available.
  • In the agentic era, the atrophy risk shifts up the skill ladder. It's no longer just syntax and boilerplate you forget — it's architectural reasoning, debugging strategy, and system design. If the agent handles multi-file refactors end-to-end, you stop building the mental model of how your system fits together.

Practical mitigations:

  • Use AI for conceptual questions and explanations — the Anthropic study shows this is both faster and better for learning than using it for code generation
  • When you do generate code, ask follow-up questions to build understanding before moving on
  • Alternate AI-assisted and AI-free work deliberately
  • Review agent plans actively — trace through the reasoning, don't just check if tests pass
  • Maintain habits of reading documentation and source code directly
  • Consider learning modes (Claude Code Learning/Explanatory mode, ChatGPT Study Mode) when working in unfamiliar territory
  • Track "skill debt" the way you track technical debt

6. Security: Agents Raise the Stakes

The security research is mostly from the pre-agentic era, but the findings are directionally worse with agents — because agents can execute code, not just suggest it.

  • Veracode 2025 GenAI Code Security Report (100+ LLMs, 80 real tasks): 45% of AI-generated code contains at least one vulnerability. For Java, the rate exceeds 70%.
  • Empirical GitHub analysis (733 Copilot snippets): 29.5% of Python and 24.2% of JavaScript snippets contained security weaknesses across 43 CWE categories.
  • Copilot's own code review can't catch it: A study evaluating Copilot's code review feature found it frequently fails to detect critical vulnerabilities like SQL injection and XSS, instead flagging low-severity style issues.
  • AI config file poisoning: The "Rules File Backdoor" attack allows hidden malicious instructions in .cursorrules or similar config files to manipulate agents into inserting malicious code. Since agents read these files automatically, this is a supply chain attack that requires no user interaction.
  • Hallucinated dependencies: LLMs invent package names that don't exist. Attackers register these names with malicious code. Agents that can run npm install or pip install will execute the attack autonomously.
  • Agent-specific risk: autonomous execution. An agent that can run shell commands, modify files, and commit code can do damage at a scale that a code suggestion tool cannot. Sandbox, constrain, and audit agent actions.
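One cheap control for the hallucinated-dependency risk above is to put an allowlist between the agent and the package manager, so a plausible-but-fake name fails a gate instead of reaching an attacker's payload. A minimal Python sketch — the allowlist contents and package names are illustrative:

```python
# Minimal sketch: gate agent-proposed dependencies behind an approved list
# instead of letting the agent run `pip install` or `npm install` directly.
# A hallucinated or typosquatted name gets flagged for human review.

APPROVED = {"requests", "numpy", "pydantic"}  # illustrative allowlist

def vet_dependencies(proposed: list[str]) -> tuple[list[str], list[str]]:
    """Split agent-proposed packages into approved and needs-human-review."""
    approved, flagged = [], []
    for name in proposed:
        (approved if name.lower() in APPROVED else flagged).append(name)
    return approved, flagged

ok, review_queue = vet_dependencies(["requests", "reqeusts-helpers"])
# "reqeusts-helpers" looks plausible but isn't on the allowlist,
# so it lands in review_queue instead of being installed.
```

A real setup would also check registry age, download counts, and license before a name graduates to the allowlist — the point of the sketch is only that the agent never gets to execute the install decision alone.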

7. Don't Use the Same Tool to Write and Review

No single clean A/B study exists, but the underlying mechanism is well-supported. Using an LLM to review the code it just generated is both mathematically and practically flawed.

  • Self-correction blind spot: LLMs fail to detect their own errors at a rate of ~64.5%, even as they readily correct identical errors in external inputs. Once a model hallucinates, subsequent tokens align with the initial error ("snowball effect"). The model doesn't just miss its mistake — it doubles down on it.
  • Self-preference bias: Evaluator LLMs select their own outputs as superior, and this bias intensifies with fine-tuning.
  • LLM-as-judge gaps: IBM research on production-deployed LLM judges found they detected only ~45% of errors in generated code. Adding an external rule-based checker pushed coverage to 94%.
  • Self-consistency failures: Code LLMs can't reliably generate correct specifications for their own code or correct code from their own specifications.

Practical recommendation: Use a different model, a static analysis tool, or a dedicated review tool as a second pair of eyes. The generation tool should never be the sole reviewer. Tests help here too — they're a model-independent verification layer, which is one more reason TDD is especially valuable in the agentic era.
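To illustrate what a model-independent second check can look like, here's a toy rule-based reviewer in Python. The two rules are deliberately simplistic and the function names are invented for the example; in practice you'd reach for a real static analysis tool such as Bandit or Semgrep. The point is that the check shares no weights, no context, and no blind spots with whichever model generated the code:

```python
import re

# A tiny rule-based second reviewer, independent of the generating model.
# Each rule is (pattern, message); real tools ship hundreds of such rules.
RULES = [
    (re.compile(r"\beval\s*\("), "use of eval() on dynamic input"),
    (re.compile(r"execute\(\s*f[\"']"), "f-string SQL passed to execute() (injection risk)"),
]

def review(source: str) -> list[str]:
    """Return a finding per line that matches any rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append(f"line {lineno}: {message}")
    return findings
```

Run as a CI step after the agent's changes land in a branch, this kind of checker catches a class of errors the generating model tends to wave through — which is the mechanism behind the IBM finding that an external checker lifted error coverage from ~45% to 94%.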


8. Maintainability, Measurement, and the Volume Problem

The "Echoes of AI" study (Borg, Farley et al., 2025) is the first RCT to test whether AI-assisted code is harder to maintain.

  • Result: No significant maintainability difference. Developers who inherited AI-assisted code could evolve it just as easily. Habitual AI users even showed slightly higher CodeHealth scores.
  • But the volume problem is real: The study authors argue maintainability has never been more important because the sheer volume of code will increase rapidly. More code = more to understand, review, and maintain, even if each piece is individually fine.
  • CodeRabbit's 2025 analysis (470 PRs): AI-generated code produces 1.7x more issues per PR — logic errors up 75%, security vulnerabilities 1.5–2x, performance issues nearly 8x.
  • With agents, the volume problem accelerates. Agents generate more code per session than chat-based tools. If your review capacity stays flat while generation throughput grows tenfold, quality will degrade regardless of per-file code health.

Manage the blast radius. Keep agent-generated changes small and scoped. Review proportional to generation speed. The architecture from Section 3 — small modules, clear boundaries, strong tests — is what makes this manageable.

How to Measure What Actually Matters

  • What to measure: Lead time, failure rate, cost of change, time-to-recover. Not lines of code, not commits, not PRs. If your AI metrics are all activity-based (more PRs, more commits, more LoC), you're measuring the firehose, not the shower.
  • The SPACE framework (from Microsoft Research) offers a multi-dimensional view: Satisfaction, Performance, Activity, Communication, Efficiency. Use it to avoid collapsing "productivity" into a single number.
  • CodeScene's CodeHealth metric as a maintainability proxy — validated against human expert assessments, outperforms SonarQube's Maintainability Rating. Consider tracking CodeHealth over time as a leading indicator of whether AI-generated code is accumulating hidden costs.
  • Be skeptical of self-reported gains. The METR perception gap showed developers can't reliably tell whether AI is helping on a given task. If your evidence for AI ROI is "developers say they feel faster," that's a starting point for investigation, not a conclusion.
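As a sketch of outcome-oriented measurement, here's how lead time might be computed from merged-PR records. The record shape is hypothetical — in practice the timestamps would come from your Git host's API — but the principle is the one above: measure elapsed time per change, not the count of changes:

```python
from datetime import datetime
from statistics import median

# Sketch: measure lead time (first commit -> merge) per change rather than
# counting PRs. Unmerged PRs are excluded. Record shape is illustrative.

def lead_time_hours(prs: list[dict]) -> float:
    """Median hours from first commit to merge across merged PRs."""
    durations = [
        (pr["merged_at"] - pr["first_commit_at"]).total_seconds() / 3600
        for pr in prs
        if pr.get("merged_at")
    ]
    return median(durations)

prs = [
    {"first_commit_at": datetime(2026, 1, 5, 9), "merged_at": datetime(2026, 1, 5, 17)},
    {"first_commit_at": datetime(2026, 1, 6, 9), "merged_at": datetime(2026, 1, 8, 9)},
]
# Two merged PRs: 8 hours and 48 hours from first commit to merge.
```

Tracked over time and segmented by AI-assisted vs. not, a metric like this answers the question activity counts can't: is work actually reaching users faster?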

9. Vibe Coding vs. Production Coding

  • Vibe coding is a legitimate workflow for prototypes, scripts, explorations, and throwaway work. Don't fight it — but know the boundary.
  • Farley and the Infosys research both frame it as suitable for hackathons but risky for anything with users, dependencies, or a future.
  • Gorman's dice metaphor: agentic workflows are sequences of probabilistic throws. On a small, isolated problem, you'll hit your number quickly. In a large system with constraints, the probability of getting a valid result on each throw drops fast.
  • The danger is the prototype-to-production pipeline. Vibe-coded prototypes have a way of becoming production systems. If it's going to live, it needs tests, structure, and review — regardless of how it was born.

10. Team and Org Level

  • Shared conventions in agent config files. Team-level AGENTS.md / CLAUDE.md, reviewed in PRs, versioned like code. This is the new "team style guide."
  • Onboarding with AI: The Anthropic skill study suggests using AI for conceptual questions during onboarding is fine; using it to skip understanding the codebase is not.
  • Who reviews the reviewers? If an agent generates code, an AI reviews it, and the developer rubber-stamps — there's no human in the loop. Define where human judgment is non-negotiable.
  • Invest in testability and documentation as team infrastructure. These are no longer "nice to have" — they're what makes the entire team's AI tooling effective. A team with great tests and a thorough CLAUDE.md will outperform a team with better models but a messy codebase.

11. License, IP, and Transparency

  • Training data and code ownership: Know whether your AI tools were trained on open-source code and what that means for the license status of generated output. Establish an org-level policy on which models are approved for use with proprietary code, and whether generated code needs to be flagged in commits or PRs.
  • Disclosure: Define when and how to disclose AI involvement to your team and clients. This is less about legal obligation (which varies) and more about trust and professional integrity. If an agent wrote a significant chunk of a deliverable, the people maintaining it should know.
  • Hallucinated dependencies: AI tools sometimes suggest packages that don't exist or that carry unexpected licenses. Vet every dependency the AI suggests — check it exists, check its license, check its maintenance status. Treat AI-suggested dependencies with the same scrutiny you'd apply to a random Stack Overflow recommendation.
  • Compliance: If you operate in a regulated industry (finance, healthcare, government), understand whether your AI tooling and its outputs meet your compliance requirements. This includes data residency concerns if code or context is sent to external APIs.


Conclusion: AI Is a Multiplier — and a Multiplier Is Only as Good as What It's Multiplying

Everything in this guide points to the same conclusion: developers matter more now, not less. AI doesn't reduce the need for engineering skill — it makes engineering skill the thing that determines whether AI helps or hurts.

The DORA data says only already-high-performing teams benefit. The Anthropic study says the developers who learn are the ones who think, not the ones who delegate. The Gorman Paradox asks where the productivity gains went — and the most likely answer is they got absorbed by the cost of not understanding what was produced. Farley's framing that AI amplifies what you already are is the same insight from a different angle.

Examples exist of agents rebuilding entire systems in hours. But they all share a common trait: strong tests, clear architecture, and developers who understood the system well enough to validate the output. The tests made it possible. Without them, those would be impressive demos that don't actually work.

The trap is that AI makes it look like engineering skill matters less. You get working code faster, features ship, the PR count goes up. But what's actually happening is that the consequences of not understanding your system are deferred, not eliminated. They show up later as bugs you can't diagnose, architecture you can't evolve, and security holes you can't see — because you never built the mental model.

This creates a widening gap. The teams that would benefit most from AI — the ones drowning in legacy code, no tests, unclear architecture — are exactly the teams whose codebases give agents the worst context. The agent reads your codebase to understand your system. If your codebase is a mess, the agent confidently produces more mess, faster, in the same style. Meanwhile, the teams that already have clean architecture, strong tests, and good documentation are the ones getting the most out of it.

AI doesn't close the gap between good and bad teams. It widens it.

So the honest framing is not "here's how AI will make everyone better." It's this: invest in the engineering fundamentals first — testability, modularity, documentation, clear conventions. Those are no longer just good practice. They're the prerequisite for AI to help rather than hurt. If you don't have them, start there before you throw agents at the problem.

The good news is that these investments pay off immediately and compoundingly. A team with solid tests and a well-maintained CLAUDE.md will get more out of any AI tool — current or future — than a team chasing the latest model on a messy codebase. The fundamentals are future-proof in a way that no specific tool or technique is.

The most advanced AI skill in 2026 is not prompting. It's not tool selection. It's knowing how to build systems that are worth amplifying.


Key References

Source | Year | Key Finding
METR RCT | 2025 | Small-n study (16 devs); key finding is the perception gap, not the speed number. Redesign underway.
Anthropic Skill Formation RCT | 2026 | 17% lower comprehension (n=52); debugging hit hardest; interaction pattern is the key variable; agentic impact expected to be worse.
Echoes of AI (Borg, Farley et al.) | 2025 | No maintainability degradation detected; volume risk flagged.
Veracode GenAI Security Report | 2025 | 45% of AI code contains vulnerabilities; Java >70%.
Faros AI Telemetry | 2025 | 47% more PRs, but no individual task speedup.
DORA State of AI Report | 2025 | Only already-high-performing teams benefit from AI.
Self-Correction Blind Spot (Tsui) | 2025 | 64.5% blind-spot rate for models reviewing their own errors.
IBM LLM-as-Judge | 2025 | LLM judges catch ~45% of code errors; adding an external rule-based checker raises coverage to 94%.
Gorman, "Same Game, Different Dice" | 2026 | No macro-economic evidence of AI productivity gains.
CodeRabbit PR Analysis | 2025 | AI code: 1.7x more issues per PR; logic errors +75%.
Pillar Security "Rules File Backdoor" | 2025 | AI config files as a supply chain attack vector.
Farley, "Continuous Delivery" YouTube | 2025 | AI amplifies existing engineering capability, good or bad.
