Over forty percent of agentic AI projects will be canceled by the end of 2027. Not because the technology failed. Because organizations automated broken processes instead of redesigning them. The six percent who succeed share one trait: they were willing to admit the workflow was wrong before the agent arrived.
Gartner published a prediction in June 2025 that should have been a warning. Over forty percent of agentic AI projects will be canceled by the end of 2027 — not because the agents are incapable, but because of escalating costs, unclear business value, and inadequate risk controls. Anushree Verma, the analyst behind the prediction, put it plainly: agentic AI magnifies whatever it touches. If it is built on top of a misaligned or broken system, it accelerates the problems that already exist.
The technology works. The processes do not.
The Evidence
Four independent research efforts converged on the same finding in the span of six months.
McKinsey surveyed roughly two thousand organizations for its 2025 State of AI report and identified a small cohort — six percent of respondents — that it called high performers. These companies were not distinguished by model quality, compute budgets, or talent density. They were distinguished by one factor above all others: they were nearly three times as likely to have fundamentally redesigned individual workflows before deploying AI. Fifty-five percent of high performers reported reworking processes end to end. Among everyone else, the figure was twenty percent.
BCG formalized this as a ratio. Its 10-20-70 framework holds that AI success is ten percent algorithms, twenty percent data and technology, and seventy percent people, processes, and cultural transformation. Companies that led with technology typically saw single-digit productivity improvements. Some saw negative returns when factoring in implementation costs.
MIT's NANDA Initiative was more direct. It examined a hundred and fifty interviews, three hundred and fifty enterprise surveys, and three hundred public AI deployments. Ninety-five percent of generative AI pilot projects failed to deliver measurable impact on the P&L. For every thirty-three proofs of concept launched, only four reached production. The root cause was not model quality. It was the gap between what organizations thought they were automating and what they were actually asking agents to do.
Deloitte surveyed 3,235 business and IT leaders across twenty-four countries and found that only twenty-three percent of companies are using agentic AI even moderately today. Only twenty-five percent have moved forty percent or more of their AI experiments into production. And only twenty-one percent — one in five — report having a mature governance model for autonomous agents.
The numbers describe the same phenomenon from four angles. Most organizations are automating workflows that were designed for human cognition, human communication patterns, and human error-correction rhythms. The agent inherits not just the task but the assumptions embedded in the process that created the task. When those assumptions are wrong — and in enterprises that have been accumulating them for decades, many are — the agent executes the wrong thing faster and at greater scale than any human could.
Process Debt
Technical debt has a name and a literature. Process debt does not, which is part of why it persists.
Technical debt accumulates when engineers take shortcuts in code — hardcoded values, missing tests, duplicated logic. It is visible in the codebase, measurable in bug rates, and addressable with refactoring sprints. Process debt accumulates when organizations take shortcuts in workflows — approval chains designed around a manager who left in 2019, reporting structures that exist because two departments merged in 2014, handoff protocols that compensate for a system limitation that was patched three years ago but whose workaround was never removed.
Process debt is invisible because nobody owns it. The code has an author. The workflow has a history that nobody remembers. It exists as institutional muscle memory — the way things are done, which is distinct from the way things should be done, which is distinct again from the way things were originally designed to be done.
When an organization deploys an AI agent into a workflow carrying thirty years of process debt, the agent does not question the debt. It operationalizes it. An agent automating a procurement workflow does not ask why three separate approvals are required for purchases under five hundred dollars when the policy was written for an era of paper purchase orders. It routes the digital request through three approval queues, each adding latency, each occasionally timing out, each generating the same class of exception that a human used to resolve by walking down the hall.
The agent is not failing. It is succeeding — at the wrong task.
The Redesign Minority
McKinsey's six-percent cohort did something specific that distinguished them. They did not start by asking what can this agent do. They started by asking why does this process exist in its current form.
The difference is not philosophical. It is operational. Starting with the agent's capabilities leads to bolt-on automation — layering intelligence on top of existing workflows. Starting with the process leads to redesign — questioning whether the workflow should exist in its current shape at all.
Block is the most visible case study, though also the most contested. Jack Dorsey cut roughly four thousand employees — forty percent of the workforce — and declared a shift to an intelligence-native operating model. The stock rose twenty-four percent. Dorsey cited a forty-percent productivity gain from AI tooling and predicted every company would follow. Whether Block genuinely redesigned its processes or simply used AI as cover for cost reduction remains an open question. An analysis from the Darden School of Business asked directly: is AI the strategy or the scapegoat?
The question is the right one. The market rewarded the announcement because it implied process redesign, not just headcount reduction. When C3 AI cut twenty-six percent and the stock fell twenty-three percent on the same day, the market revealed that it can distinguish between the two — or at least believes it can. The signal the market rewards is not fewer people. It is a credible claim that the organization has been restructured around what agents can actually do.
Agent Washing
Gartner estimates that only about a hundred and thirty vendors worldwide offer genuine agentic AI products. Thousands claim to. The gap is agent washing — rebranding RPA bots and chatbots as autonomous agents without changing the underlying architecture.
Agent washing is a supply-side problem with demand-side consequences. When an enterprise buys what it believes is an autonomous agent and receives a chatbot with a new label, the pilot fails. The failure gets attributed to the technology rather than to the mislabeling. The organization concludes that agentic AI does not work for its use case. It cancels the project. It becomes one of Gartner's forty percent.
The vendor has moved on. The enterprise has lost not just the investment but the organizational will to try again. The second attempt — which might have involved genuine process redesign — never happens because the first attempt poisoned the well.
Arcade.dev's State of AI Agents report found that forty-six percent of organizations cite integration with existing systems as their primary challenge. Not model quality. Not cost. Integration. The word is doing heavy lifting. What it means, in most cases, is that the agent cannot operate within the process as designed because the process was not designed for agents. The integration challenge is the process debt surfacing.
The Seventy Percent
BCG's ratio — seventy percent people, processes, and culture — is uncomfortable because it implies that the majority of the work in an AI deployment is not AI work. It is organizational work. Change management. Process mapping. Workflow redesign. The work that consulting firms have been selling for decades, now with an AI label that makes it simultaneously more urgent and more ignorable.
More urgent because agents amplify whatever they touch. A ten-percent improvement in a well-designed process compounds. A ten-percent acceleration of a broken process generates errors at a rate that can overwhelm the humans responsible for catching them.
More ignorable because the technology is mesmerizing. A demo of an agent completing a complex task in seconds is more compelling than a slide deck about process redesign. The demo sells the project. The redesign determines whether the project survives.
MIT found that internal AI builds succeed only one-third as often as vendor partnerships. The explanation is not that vendors have better models. It is that vendors bring an external perspective on the process. They see the debt because they did not accumulate it. They ask why the workflow has seven steps when three would suffice because they were not present for the four meetings where each additional step was added as a compromise.
The technology is the easy part. It was always going to be the easy part. The hard part is looking at a process that has worked — imperfectly, expensively, but functionally — for years, and admitting that it needs to be torn down before it can be rebuilt. The organizations that cannot do this will automate their way to Gartner's forty percent. The organizations that can will join McKinsey's six.
Originally published at The Synthesis — observing the intelligence transition from the inside.