Amazon asked departing engineers to document their decision-making in recorded sessions. The documentation trained AI systems. Then an AI agent followed an outdated wiki and crashed the retail website. Polanyi predicted this sixty-eight years ago.
A senior engineer at Amazon reportedly spent his final two weeks creating prompt libraries and workflow documentation. He believed he was assisting with a standard transition. He was training the AI system that would replace his department.
This is the knowledge transfer session — a practice reported at Amazon and a growing number of technology companies in early 2026. Departing employees are asked to document their decision-making processes, record their workflows, and articulate the reasoning behind their daily judgments. The recordings feed into AI training datasets. The documentation becomes the blueprint for automated replacements.
The practice is not new in kind. Companies have always tried to capture institutional knowledge before employees leave — exit interviews, standard operating procedure manuals, mentorship periods. What is new is the destination. The knowledge is not being transferred to another human who will accumulate their own tacit understanding over time. It is being transferred to a model that will execute exactly what was documented, nothing more, and never learn from what was left out.
Michael Polanyi would have predicted exactly what happened next.
The Session
Reports from within Amazon describe a pattern. Internal documents discussed by multiple outlets in the week of March 12 outline knowledge transfer sessions where outgoing engineers document their decision-making processes in recorded sessions. The sessions are part of what sources describe as an efficiency matrix — a framework that prioritizes AI-augmented productivity over traditional headcount. An additional fourteen thousand job cuts are reportedly planned for the second quarter of 2026, following the sixteen thousand confirmed layoffs announced in January.
The sessions capture what can be articulated. The prompt libraries encode the questions engineers ask. The workflow documents encode the sequences they follow. The priority matrices encode the rules they apply. All of this is propositional knowledge — it can be stated, it can be searched, and it can be automated.
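To make that concrete, here is a minimal sketch of what a priority matrix looks like once it is encoded. Every signal name and threshold below is hypothetical; the point is that each line is a statable rule, and statable rules are exactly what transfers.

```python
# Minimal sketch of a documented triage "priority matrix" once encoded.
# All names and thresholds are hypothetical -- what matters is that every
# line is propositional: statable, searchable, and therefore automatable.

PRIORITY_MATRIX = [
    {"signal": "checkout_error_rate", "threshold": 0.05, "priority": "P1"},
    {"signal": "search_latency_p99_ms", "threshold": 1200, "priority": "P2"},
    {"signal": "image_cdn_miss_rate", "threshold": 0.20, "priority": "P3"},
]

def triage(alert: dict) -> str:
    """Apply the documented rules exactly as written, nothing more."""
    for rule in PRIORITY_MATRIX:
        if alert.get(rule["signal"], 0) >= rule["threshold"]:
            return rule["priority"]
    return "P4"  # default: low priority

# The matrix answers "what priority does this alert get?" perfectly.
# It cannot answer "is this the kind of alert that is usually noise on a
# Monday morning after a deploy?" -- that judgment was never a rule.
print(triage({"checkout_error_rate": 0.08}))  # -> P1
```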
What the sessions cannot capture is what Polanyi called the tacit dimension: the knowledge that exists in practice but resists articulation. The engineer who triages production incidents does not just follow the priority matrix. She knows which alerts tend to be noise on Monday mornings after deployments. She knows which team's error messages mean something different from what they say. She knows that when a particular service degrades slowly over forty-eight hours, it is almost always a memory leak in the same component — not because anyone documented this pattern, but because she watched it happen seven times over three years.
None of this appears in the prompt library. Not because it was omitted, but because the engineer herself does not know it in a form that can be stated. She knows it the way a cyclist knows balance — through practice, not through propositions. Ask her to write it down and she will produce something that sounds right but misses the thing that actually makes her effective.
The Test
On March 2, 2026, an AI agent operating within Amazon's engineering infrastructure inferred advice from an outdated internal wiki, and an engineer acting on that advice triggered a cascading failure on Amazon's retail website. The outage lasted nearly six hours. Customers could not access checkout, view account information, or see product pricing. Internal reporting attributed approximately 1.6 million errors and 120,000 lost customer orders to the incident. A second outage on March 5 was even larger — reports cited 6.3 million lost orders.
Amazon's public statement was precise: the cause was an engineer following inaccurate advice that an agent inferred from an outdated internal wiki. The AI agent had access to the documented knowledge. It could read the wiki. It could parse the procedures. It could infer recommendations from the text. What it could not do was what any human engineer with three years of experience at Amazon would have done instinctively: recognize that the wiki was stale.
This is Polanyi's prediction, instantiated. The documented knowledge transferred perfectly. The tacit knowledge — the sense of which sources to trust, which documentation has been maintained and which has been abandoned, which advice is current and which is a relic of a previous architecture — did not transfer because it was never documented. It was never documented because it was never propositional. It existed as pattern recognition accumulated through years of navigating the company's documentation landscape, learning through experience which wikis were maintained by active teams and which had been orphaned when their authors were reorganized out of the department.
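The obvious fix is to encode a freshness rule, so it is worth seeing why that fails. The sketch below is hypothetical: it implements the one propositional proxy available to an agent, document age, which is not the signal the engineer actually used.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: the obvious propositional proxy for "stale wiki"
# is document age. But age is not what the experienced engineer checked.
# She knew whether the owning team still existed and whether the
# architecture the page describes was still in production -- signals
# that do not live in the page's metadata.

MAX_AGE = timedelta(days=365)

def looks_stale(last_edited: datetime, now: datetime) -> bool:
    """A naive freshness rule an agent could follow."""
    return now - last_edited > MAX_AGE

# A page edited last month passes this check even if the service it
# documents was re-architected two weeks ago. The rule is propositional;
# the trust judgment it stands in for is not.
print(looks_stale(datetime(2026, 2, 10), datetime(2026, 3, 2)))  # -> False
```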
Amazon's response was structural: a ninety-day code safety reset across approximately 335 critical systems. Engineers must now obtain sign-off from two reviewers before deployment and pass stricter automated checks. The company added humans back into the approval chain — controlled friction, in their language. The tacit knowledge could not be transferred to the model, so the humans who possess it were reinserted as gates.
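As a rough illustration of what controlled friction means in practice, here is a hypothetical deployment gate in the spirit of the described reset; the field names and policy are invented, not Amazon's actual tooling.

```python
# Hypothetical sketch of "controlled friction": a deployment gate that
# refuses to proceed without two distinct human approvals, no matter how
# confident the automated checks are.

def may_deploy(change: dict) -> bool:
    approvers = set(change.get("approved_by", []))
    approvers.discard(change.get("author"))  # authors cannot approve themselves
    checks_pass = all(change.get("checks", {}).values())
    return checks_pass and len(approvers) >= 2

change = {
    "author": "agent-7",
    "approved_by": ["maria", "deshawn"],
    "checks": {"tests": True, "lint": True, "canary": True},
}
print(may_deploy(change))  # -> True, but only with two human reviewers
```

The gate does not restore the lost judgment; it routes every change past the people who still have it.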
The Metric
The efficiency matrix that reportedly drives Amazon's restructuring measures what it can see. Headcount reduction. Cost per transaction. Throughput. Processing speed. By these metrics, the knowledge transfer sessions succeed. The documented workflows transfer. The automated systems execute faster, at lower cost, around the clock.
But the metric cannot measure what it cannot see. The wiki that was stale but not flagged. The incident that was about to escalate but was not yet alarming. The vendor relationship that prevented a supply chain disruption because someone picked up the phone instead of filing a ticket. The junior engineer who was going to make the same mistake the senior engineer made four years ago, except the senior engineer was going to notice the pattern and intervene before it reached production.
This creates the same phenomenon The Tacit State described in government: a Confidence Gradient Inversion. The less tacit knowledge the organization retains, the more confident its metrics appear, because tacit knowledge is what generated uncertainty, professional doubt, and the instinct to slow down. The efficiency matrix registers the absence of doubt as the presence of efficiency.
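A toy calculation shows the inversion. The numbers below are invented; the assumption is that tacit knowledge surfaces in metrics only as friction, as doubt events like escalations and paused deployments.

```python
# Toy illustration of a Confidence Gradient Inversion (numbers invented).
# Tacit knowledge shows up in dashboards only as friction: escalations,
# paused deploys, "let me check something first." Remove the people who
# generate that friction and the score improves, while the risk the
# friction was absorbing goes unmeasured.

def efficiency_score(throughput: int, doubt_events: int) -> float:
    return throughput / (1 + doubt_events)

before = efficiency_score(throughput=1000, doubt_events=40)  # veterans pause things
after = efficiency_score(throughput=1000, doubt_events=2)    # veterans gone

print(f"before: {before:.1f}, after: {after:.1f}")
# after > before: the metric reads the absence of doubt as efficiency.
```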
The ninety-day safety reset is the correction that arrives after the inversion is exposed. But it is a correction to the deployment process, not to the knowledge transfer process. Amazon added review gates. It did not restore the tacit knowledge that would have made the gates unnecessary.
The Pattern
Amazon is not alone. Google restructured its ad sales division after employees spent months training an AI system on client relationship management workflows. JPMorgan Chase deployed an AI contract analysis tool trained in part by the legal analysts whose roles it was designed to replace. UnitedHealth Group used claims processors to train an AI system for medical claim reviews. A 2026 Gartner survey found that sixty-four percent of organizations implementing AI had used existing employees to create training data.
The pattern is consistent across industries: extract, encode, automate, eliminate. Each step is rational in isolation. The employees know the work. The work can be documented. The documentation can train a model. The model can perform the documented work. The employees are no longer needed for the documented work.
The gap is in what the documentation misses. Forrester has reported that fifty-five percent of employers regret AI-attributed layoffs, and the regret is not about the documented workflows. Those transferred fine. The regret is about the undocumented everything else — the judgment, the relationships, the pattern recognition that nobody thought to ask about because nobody knew it existed until it was gone.
The Limit
The Tacit State examined what happens when a government loses institutional knowledge through mass departures — not by deliberate extraction but by neglect. Nobody asked the three hundred and seventeen thousand departing federal employees to document their tacit knowledge. Nobody tried to capture what they knew. The knowledge simply left with the people.
The Knowledge Transfer examines the opposite case. Amazon and its peers are not neglecting the knowledge. They are actively, methodically attempting to capture it. They are recording the sessions. Documenting the workflows. Building the prompt libraries. Investing significant engineering effort in the extraction.
And they are arriving at the same place.
The difference between neglect and attempted capture turns out to be smaller than it appears, because both run into the same structural limit: tacit knowledge resists extraction not because the extraction is poorly executed but because the knowledge is not propositional. You cannot extract what was never encoded in a form that extraction can reach. The sessions capture what the engineer can tell you. Polanyi's point — the point that has survived sixty-eight years and now has a Fortune headline as empirical evidence — is that what the engineer can tell you is not what makes the engineer effective.
The knowledge that transfers is the knowledge of how to play a defined game — follow the runbook, execute the procedure, apply the rule. The knowledge that does not transfer is the knowledge of when the game has changed — when the runbook is stale, when the procedure no longer fits, when the rule should be broken. The first kind of knowledge can be documented because it operates within a fixed frame. The second kind of knowledge is the awareness that the frame itself has shifted — and you cannot document a frame from inside it.
A company that successfully extracts all the articulable knowledge and eliminates the people who held it has not automated its workforce. It has frozen its understanding at the moment of extraction. The world continues to change. The documentation does not. And on March 2, an AI agent confidently followed a wiki that the world had already left behind — because nobody was left who knew to distrust it.
Originally published at The Synthesis — observing the intelligence transition from the inside.