krlz

The 6-Month Clock Ran Out — Are Companies Full AI Yet?

This year has brought many changes to software development. A big disruption is underway, and new terminology — SDD, IDD, cognitive debt, DORA, the "new" XP, agentic programming, and others — is being promoted, sometimes as little more than a slogan.

Personally, I feel much of it works like an aggressive advertising campaign, preying on the panic of people who fear they are about to miss the train, with severe consequences for their careers if they don't catch up. And to a certain extent, that fear might be justified.

But let's look at the actual numbers: in early 2026, AI coding tools have reached 90% adoption among developers, yet only 1% of companies consider themselves mature in AI usage. Everyone has the tools — almost nobody has figured out the practices.

The debate between "AI enthusiasts" and "fundamentals defenders" misses the point entirely. The real question isn't whether to use AI tools, but how companies must restructure their practices, teams, and culture to use them without accumulating what Thoughtworks now calls cognitive debt.

So let's cut through the noise.


What the Data Actually Says

Google's 2025 DORA State of AI-Assisted Software Development report is probably, for now, the most rigorous report made about the current experience of AI in companies. It surveyed nearly 5,000 technology professionals worldwide and includes over 100 hours of qualitative interviews, combining both quantitative and qualitative data. Some clear conclusions can be taken from it: the real value comes from platform quality, workflow clarity, and team alignment. The organizational capability to use AI well is the differentiator — not the tools themselves.

AI is an amplifier, not a fixer. Strong teams get stronger. Struggling teams get worse faster.

The numbers tell the story:

  • 90% of software professionals now use AI tools
  • 80%+ say AI enhanced their productivity
  • 30% report little or no trust in AI-generated code
  • DORA identified seven organizational capabilities that magnify AI's positive impact

That last point is key. It's not about which AI tool you pick. It's about what your team already looks like before AI enters the picture.


From Vibes to Discipline

So if the tools aren't the answer, what is? To understand where things are heading, it helps to see where they just came from.

In February 2025, Andrej Karpathy — co-founder of OpenAI and former AI leader at Tesla — posted on X about a new way of coding: "fully give in to the vibes, embrace exponentials, and forget that the code even exists." The post got over 4.5 million views. Within weeks, the term "vibe coding" had spread from social media to The New York Times, Ars Technica, and The Guardian. By November, Collins Dictionary named it Word of the Year 2025. By mid-2025, The Wall Street Journal reported professional engineers were adopting it for commercial use cases.

But the honeymoon didn't last. By late 2025, Thoughtworks Technology Radar observed the concept fading in favor of more disciplined approaches — concerns about code quality, security, and maintainability in production couldn't be vibed away.

What's replacing it? Three approaches are gaining traction:

Intent-Driven Development (IDD) flips the focus: the quality of your specification determines the quality of your outcome. Teams restructure around 60% product judgment, 30% engineering architecture, and 10% design precision. The specification becomes the primary artifact, not the code.

Spec-Driven Development (SDD) takes a similar angle but draws a clearer line: humans own the design, AI owns the implementation. The key discipline is writing specifications good enough that AI can execute them correctly.

Agentic Engineering, coined by Karpathy himself in early 2026 as the successor to vibe coding, is about designing systems where AI agents plan, write, test, and ship code under structured human oversight. The risk it addresses is real: an agent writing 1,000 PRs/week with a 1% vulnerability rate creates 10 new vulnerabilities weekly.
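The arithmetic behind that risk is worth making concrete. A back-of-the-envelope sketch in Python (the rates are the illustrative numbers above, not measured data):

```python
# Back-of-the-envelope: vulnerabilities introduced by a high-throughput agent.
prs_per_week = 1_000
vulnerability_rate = 0.01  # assume 1% of PRs ship a new vulnerability

new_vulns_per_week = prs_per_week * vulnerability_rate
print(new_vulns_per_week)        # 10.0 new vulnerabilities per week
print(new_vulns_per_week * 52)   # 520.0 per year if none are ever caught
```

The per-week number looks small; the compounding over a year is what makes structured oversight non-optional.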

All three share a common thread: the human's job shifts from writing code to defining intent clearly and verifying outcomes rigorously.


The Return of XP

Here's something nobody expected: Extreme Programming is back.

Kent Beck describes TDD as a "superpower" when working with AI agents. The reasoning is simple — unit tests are the most reliable way to prevent AI from introducing regressions. When AI generates code, the test suite becomes your verification layer. Not code review. Not manual testing. Tests.
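A minimal sketch of what that verification layer looks like in Python (the function and test names here are hypothetical examples, not from any cited source): tests pin the behavior you care about, so any AI-generated rewrite must keep passing them.

```python
# Tests as the verification layer: if an AI agent rewrites normalize_email,
# these checks must still pass before the change ships.

def normalize_email(raw: str) -> str:
    """Canonicalize an email address (hypothetical example function)."""
    local, _, domain = raw.strip().lower().partition("@")
    return f"{local}@{domain}"

def test_lowercases_and_strips():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

def test_preserves_plus_tags():
    assert normalize_email("bob+dev@example.com") == "bob+dev@example.com"

if __name__ == "__main__":
    test_lowercases_and_strips()
    test_preserves_plus_tags()
    print("all checks passed")
```

The point is not the specific function — it's that the test suite, not a human reading the diff, is what catches a regression.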

The Pragmatic Engineer newsletter and the Thoughtworks Deer Valley retreat both highlight this resurgence. The practices that felt "too rigorous" five years ago are now the safety net that makes AI velocity sustainable. It turns out the answer to "how do we trust AI-generated code?" was already here — we just stopped using it.


What the Industry Leaders Are Saying

In February 2026, approximately 50 tech leaders — including Kent Beck and Martin Fowler — gathered at Thoughtworks' Deer Valley retreat. It was the 25th anniversary of the Agile Manifesto, and the question on the table was direct:

"If AI handles the code, where does the engineering actually go?"

Their conclusions reshape how we think about engineering teams:

Cognitive debt is the new technical debt. When AI generates code that nobody fully understands, you accumulate cognitive debt — and it compounds just like technical debt. "Velocity without understanding is not sustainable."

Agent Topologies. Just as Team Topologies redesigned how human teams interact, organizations now need to design how AI agents fit into their workflows. Who owns what the agent produces? How do agents interact across teams?

Tiered Code Review. Not all code needs the same scrutiny. Reserve human reviews for high-risk components and rely on strong CI/CD guardrails for the rest.

Staff engineers become supervisors. The senior role shifts toward agent orchestration and governance — designing the systems within which AI operates, not writing the code it produces.
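The tiered-review idea above can be sketched as a simple routing rule. This is a hypothetical illustration (the path prefixes and tier names are assumptions, not from the retreat):

```python
# Hypothetical risk-routing rule for tiered code review:
# changes touching high-risk paths get a human reviewer,
# everything else relies on CI/CD guardrails.

HIGH_RISK_PREFIXES = ("auth/", "billing/", "migrations/")  # example paths

def review_tier(changed_files: list[str]) -> str:
    """Return the review tier for a set of changed files."""
    if any(f.startswith(HIGH_RISK_PREFIXES) for f in changed_files):
        return "human-review"
    return "ci-guardrails"

print(review_tier(["auth/login.py", "README.md"]))  # human-review
print(review_tier(["docs/intro.md"]))               # ci-guardrails
```

Real implementations would likely layer in ownership metadata and diff size, but the principle is the same: scarce human attention goes where the blast radius is largest.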


Teams Are Shrinking, Roles Are Shifting

These aren't just theoretical changes. They're already reshaping how companies hire and organize.

Teams are shrinking dramatically. One Series C startup restructured from 12 engineers to 3 using AI tools, with a 40% increase in velocity. New roles are appearing — AI Workflow Engineers, PromptOps Specialists, Agent Orchestration leads — titles that didn't exist two years ago.

The old hiring model — bring in juniors cheaply, train them up — is being replaced by hiring fewer senior people and equipping them with AI tools. AI widens the gap between strong and weaker engineers rather than leveling the playing field.

Interview practices are changing too. Organizations are replacing generative coding tests with Review Simulations — candidates audit pre-generated, flawed codebases. The skill being tested: can you catch what AI gets wrong?

And there's a quiet crisis brewing for mid-level engineers. New graduates are more productive with AI tools — they have no habits to unlearn. Senior engineers benefit from deep experience to guide AI. Mid-career professionals risk being squeezed from both sides. Addy Osmani from Google warns this is one of the most underappreciated shifts happening right now.


Measuring What Matters

With all these changes, the old metrics stop making sense. When AI can generate unlimited code, measuring output becomes meaningless.

| What We Used to Measure | What Actually Matters Now |
| --- | --- |
| Velocity / story points | Cycle time |
| Lines of code | Lead time to production |
| PRs merged | Defect escape rate |
| Story completion rate | Cognitive debt indicators |

The shift is from output to outcomes: how fast does value reach users, and how reliably?
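As a concrete illustration, here is a minimal sketch computing two of those outcome metrics from hypothetical delivery records (the field names and numbers are invented for the example):

```python
from datetime import datetime

# Hypothetical PR records: when work started and when it reached production.
prs = [
    {"started": datetime(2026, 3, 1), "deployed": datetime(2026, 3, 3)},
    {"started": datetime(2026, 3, 2), "deployed": datetime(2026, 3, 6)},
]

# Cycle time: average days from start to production.
cycle_days = [(p["deployed"] - p["started"]).days for p in prs]
avg_cycle_time = sum(cycle_days) / len(cycle_days)

# Defect escape rate: defects found in production / total defects found.
defects_in_prod, defects_total = 3, 20
escape_rate = defects_in_prod / defects_total

print(f"avg cycle time: {avg_cycle_time:.1f} days")  # 3.0 days
print(f"defect escape rate: {escape_rate:.0%}")      # 15%
```

Neither number rewards generating more code — both reward getting working, trustworthy changes in front of users.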


The Fundamentals Debate Is Over

If there's one thing every major source — DORA, Thoughtworks, Gartner, Google — agrees on, it's this: AI makes fundamentals more important, not less.

You need to understand architecture to design scalable systems and guide AI effectively. Developers who skip algorithms, system design, or manual debugging "lose problem-solving muscle memory."

"The best software engineers won't be the fastest coders, but those who know when to distrust AI." — Addy Osmani, Google

Gartner backs this up with a stark warning: by 2028, prompt-to-app approaches by citizen developers will increase software defects by 2,500%. The quality crisis won't come from teams that invest in fundamentals — it will come from those that skip them.

As Satya Nadella put it: "Getting the fundamentals right matters a lot." This isn't a compromise between AI and fundamentals. It's the only approach that works.


Where to Start

If you're leading a team or a company, here's a practical starting point:

  1. Read the DORA 2025 AI Report (free). Assess where your organization actually stands.

  2. Invest in testing culture before AI tooling. TDD, integration tests, strong CI/CD — these are what make AI velocity safe.

  3. Adopt Spec-Driven Development. Better specifications mean better AI output. It's that direct.

  4. Tier your code review. Human review for high-risk paths, automated guardrails for the rest.

  5. Rethink how you hire. Test for review and debugging skills. Can candidates spot what AI gets wrong?

  6. Measure outcomes, not output. Cycle time and defect escape rate over velocity and story points.

  7. Read the Thoughtworks Deer Valley retreat takeaways (free PDF). It's the most forward-looking strategic document on this topic.


The tools are here. The question was never "should we use AI?" — it was always "do we have the engineering culture to use it well?" The answer to that question determines everything.
