What High-Performing AI Companies Have Already Figured Out
Every workflow has invisible seams: steps that only function because a human with ten years of context fills the gaps.
Most companies don't notice these gaps because the process works well enough, and no one documents a human's entire job step by step (an unreasonable expectation, of course). What usually happens instead is that people route around the broken handoff, apply judgment where the documentation runs out, and quietly absorb complexity that was never formally accounted for.
Humans filling gaps works fine when humans run the workflow. But as AI agents take over, each of these gaps becomes a place where the agent fails and nothing picks up the slack.
Drop an agent into a workflow built on informal human compensation, and the agent will execute the process exactly as written. Which means the real question is whether the workflow itself was ever designed to run without a human quietly holding it together.
For most companies, the answer is no. And that means the work needs to start with workflow redesign.
Why Pilots Succeed and Scaling Breaks
Pilots work because a small team compensates for every gap the agent can't handle. Scaling removes that team. What's left is a workflow designed for humans, now executed by software with zero tolerance for ambiguity.
Agents don't adapt to broken handoffs. They don't infer ownership when it's unclear. All they do is follow the process as defined.
If the process is being held together by informal knowledge and human workarounds, the agent will expose every seam.
About 90% of the function-specific AI use cases with real transformative potential are still stuck in pilot, according to McKinsey. The problem is process and workflows, not technology. High-performing AI companies are roughly three times more likely to redesign workflows from scratch than to layer agents onto what already exists. The redesign is where the real value lives.
What Workflow Redesign Actually Looks Like
Redesigning a workflow for agents means answering four questions at every stage of the process.
Question 1: Which steps can an agent fully own?
These are tasks with clear inputs, defined outputs, and minimal need for contextual judgment. Data extraction. Standardized formatting. Pulling records from structured sources. If the step can be described as a contract (this input produces this output, within these constraints), an agent can own it.
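One way to make that contract explicit is to encode the step's inputs and outputs as types and keep the logic free of judgment calls. A minimal sketch in Python; the dataclass names, fields, and the trivial parsing are illustrative assumptions, not any specific framework or the author's implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExtractionInput:
    document_id: str
    raw_text: str

@dataclass(frozen=True)
class ExtractionOutput:
    document_id: str
    invoice_number: str
    total_cents: int  # integer cents avoid float rounding in currency

def extract_invoice_fields(job: ExtractionInput) -> ExtractionOutput:
    """Agent-ownable step: clear input, defined output, no contextual judgment.
    A real implementation might call a model; here we parse key: value lines."""
    fields = dict(
        line.split(": ", 1) for line in job.raw_text.splitlines() if ": " in line
    )
    return ExtractionOutput(
        document_id=job.document_id,
        invoice_number=fields["Invoice"],
        total_cents=int(round(float(fields["Total"]) * 100)),
    )
```

If the step can't be typed this way (the output depends on who's asking, or on context no input carries), that's a signal it belongs in the next category.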
Question 2: Which steps require a human decision point?
Anywhere the process involves evaluating trade-offs, exercising risk tolerance, or making a call that depends on relationships or institutional context. These steps don't disappear when agents arrive. They become more visible, because the agent will stop and wait rather than guess.
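In code, a human decision point is simply a branch where the agent escalates instead of guessing. A minimal sketch, where the threshold, tier names, and refund scenario are invented for illustration:

```python
APPROVAL_LIMIT_CENTS = 50_000  # illustrative risk threshold, not a real policy

def route_refund(amount_cents: int, customer_tier: str) -> str:
    """Agent handles the unambiguous cases; everything else waits for a human."""
    if amount_cents <= APPROVAL_LIMIT_CENTS and customer_tier == "standard":
        return "auto-approved"
    # Trade-offs or relationship context involved: stop and wait, don't guess.
    return "escalated to human"
```

The point of writing it down is that the escalation path becomes an explicit, testable branch rather than an informal habit.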
Question 3: Where does the agent hand back?
The handoff points matter more than most teams realize. A poorly defined handoff creates the same ambiguity problem that broke the original workflow. Every transition between agent and human needs an explicit output contract. The agent delivers a specific artifact, in a specific format, with a clear expectation for what the human does next. Vague handoffs like "the agent prepares a draft for review" just move the ambiguity to a different part of the chain.
Question 4: What does the output contract look like at each stage?
This is where most redesigns fail quietly. Teams define what the agent does but skip defining what "done" looks like at each step. Without an output contract, downstream steps inherit uncertainty, and the compounding effect makes the whole workflow fragile.
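One way to keep that uncertainty from propagating is to make "done" machine-checkable: validate each stage's output against its contract before it flows downstream. A sketch under assumed field names (`summary`, `source_ids`, `confidence` are hypothetical, chosen only to illustrate the gate):

```python
def validate_draft(draft: dict) -> list[str]:
    """Return a list of contract violations; an empty list means 'done'."""
    errors = []
    for field in ("summary", "source_ids", "confidence"):
        if field not in draft:
            errors.append(f"missing field: {field}")
    if "confidence" in draft and not 0.0 <= draft["confidence"] <= 1.0:
        errors.append("confidence out of range [0, 1]")
    if "source_ids" in draft and not draft["source_ids"]:
        errors.append("no sources cited")
    return errors

def hand_off(draft: dict) -> str:
    """Gate the agent-to-human handoff on the output contract."""
    problems = validate_draft(draft)
    if problems:
        return "rejected: " + "; ".join(problems)
    return "ready for human review"
```

A draft that fails the gate bounces back to the agent with named defects, instead of arriving on a human's desk as "prepared for review" with no definition of prepared.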
This Is an Org Design Decision
Most conversations about AI agents stay in the technology lane. Which model, which framework, which vendor.
But deploying an agent into an existing workflow is an organizational design decision. You're changing who does what, where decisions get made, and what information flows where. That makes it a structural change to how your operation runs, and it deserves the same rigor you'd apply to any reorg.
Skipping the redesign means the agent will faithfully execute a process that was already broken. It will do it faster, at scale, and with none of the informal corrections that made it barely work before. Every workaround your team normalized over the years becomes a failure point. Every undocumented decision becomes a gap in the chain.
The companies pulling real value from agents have one thing in common. They were willing to look at a workflow that "works fine" and admit it only works because humans have been compensating for design flaws the org stopped noticing years ago.
…
Nick Talwar is a CTO, ex-Microsoft, and a hands-on AI engineer who supports executives in navigating AI adoption. He shares insights on AI-first strategies to drive bottom-line impact.
→ Follow him on LinkedIn to catch his latest thoughts.
→ Subscribe to his free Substack for in-depth articles delivered straight to your inbox.
→ Watch the live session to see how leaders in highly regulated industries leverage AI to cut manual work and drive ROI.