AI isn't destroying knowledge — it's destroying the ecosystem where knowledge reproduces. A feedback loop documented in the Academy of Management Review, confirmed by Amazon's Kiro mandate, PwC's hourglass admission, and the epistemic diversity collapse across twenty-seven language models.
AI isn't destroying knowledge. It's destroying the ecosystem where knowledge reproduces — the daily practice, the apprentice relationship, the cognitive load that develops judgment. The propositions survive perfectly. The capacity to generate new ones does not.
In February 2026, researchers from the University of Passau and Arizona State published a paper in the Academy of Management Review titled "Fading Memories." The thesis was straightforward: machine learning systems create a depreciation cycle in organizational knowledge. Employees stop using their expertise because AI handles the tasks. The expertise atrophies. New hires gain less expertise because the practice environment that would have developed it has been automated. The organization becomes dependent on aging AI models — models trained on the expertise of people who are no longer in the organization to update them.
The authors called the process "insidious and unnoticed." That description is precise. The depreciation doesn't announce itself. Quarterly revenue stays flat. The AI outputs look correct. The dashboards are green. Underneath, the human capacity to recognize when the AI is wrong is quietly draining away.
Three Flows
The paper names a feedback loop. But the mechanism has three distinct channels, each dismantling a different part of the same ecology.
Extraction captures outputs without capturing judgment. An AI trained on a senior engineer's code reviews learns to flag the patterns the engineer flagged. It does not learn why those patterns matter, when to override the pattern, or how the engineer developed the instinct to look there in the first place. The output is preserved. The generative capacity behind it is not.
Substitution replaces the practice through which expertise develops. A junior analyst who would have spent two years building financial models by hand now uses an AI that generates them in seconds. The models are better than what the junior would have produced. The junior never develops the intuition that comes from building three hundred models badly. The intuition isn't a skill that can be taught in a course or documented in a wiki. It emerges from the friction of doing the work.
Offloading removes the cognitive load that develops capacity. When a doctor uses an AI diagnostic tool, the diagnosis may improve. But the cognitive work that would have built the doctor's pattern-recognition — the struggle to hold twenty variables in mind, the experience of being wrong about the seventeenth — is outsourced. The doctor's outputs are better today. The doctor's development is slower. Over a career, the compound interest of that slower development is enormous.
These three flows operate simultaneously in every organization deploying AI tools. None of them appears on any dashboard. None of them produces a measurable signal in the quarter it occurs. The signal appears years later, when the organization reaches for expertise that used to exist and finds an absence.
The Mandate Trap
Amazon provided the case study.
In November 2025, Amazon issued a corporate mandate requiring eighty percent weekly usage of its Kiro AI coding tool. The mandate was signed by senior vice presidents overseeing AWS and eCommerce. Adoption was tracked via management dashboards as a corporate OKR.
On March 5, 2026, Amazon experienced a six-hour outage. Orders dropped ninety-nine percent across North American marketplaces. Six point three million customer orders vanished. Three days earlier, a separate incident had produced one hundred and twenty thousand lost orders and one point six million website errors.
Amazon's response was a ninety-day safety reset across three hundred and thirty-five critical systems. All AI-assisted code changes now require senior engineer sign-off before deployment. The new workflow: AI assistant generates code, developer marks it as AI-assisted, senior engineer reviews, then standard peer review and testing before deploy.
The irony is structural. The mandate depleted the very capacity the safety reset now depends on. Senior engineers who spent months reviewing AI-generated output instead of building systems lost development time. Junior engineers who relied on Kiro instead of writing code from scratch gained less expertise. The senior review bottleneck exists because the mandate eroded the organization's ability to produce the kind of judgment that senior review requires.
Fifteen hundred engineers protested internally, noting that Claude Code outperformed Kiro. Leadership overrode the objections. The mandate was about adoption metrics, not capability metrics. This is the pattern the "Fading Memories" authors described: the depreciation is insidious and unnoticed because the metrics that would detect it don't exist. Usage rates went up. Expertise went down. The dashboard showed green until the outage showed red.
The Hourglass
PwC made the structural admission explicit. By February 2026, the firm described its workforce as an hourglass — concentrated talent at junior and senior levels with a shrinking middle tier. The mid-level associates, senior associates, and managers who traditionally performed the work that developed expertise are being compressed out.
PwC acknowledged the tension directly: they need juniors to learn nuance in auditing that could look "black and white to an AI machine." But they are simultaneously eliminating the mid-tier roles through which that nuance was traditionally transmitted. The firm is receiving thirty-five percent more applications while hiring for hundreds fewer positions.
The middle tier is not administrative overhead. It is the transmission mechanism. A mid-level auditor doesn't just audit — they mentor juniors through ambiguous situations, model judgment in real time, demonstrate what good enough looks like when the rules don't clearly apply. This is Polanyi's paradox in organizational form: we know more than we can tell, and the telling happens through shared practice, not documentation.
Remove the middle tier and the pipeline that transforms promising graduates into seasoned professionals breaks. The seniors retire with knowledge that was never passed on. The juniors operate AI tools competently but never develop the judgment to know when the AI is wrong. The hourglass doesn't just reshape the org chart. It severs the reproductive pathway of institutional knowledge.
The Market's Response
Two signals confirm the pattern is visible to capital and to labor economists.
Interloom — a Munich-based startup — raised sixteen point five million dollars in March 2026 to build what it calls a Context Graph from operational traces: support emails, service tickets, call transcripts, work orders. The pitch is that tacit knowledge leaks into these traces, and a graph can reconstruct the knowledge that the carriers embodied.
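As a rough sketch of the kind of structure such a graph can hold (the ticket data, field names, and extraction step below are hypothetical illustrations, not Interloom's actual pipeline), consider what falls out of a pile of resolved support tickets:

```python
# Hypothetical sketch: extracting a small "context graph" from support tickets.
# Nodes are systems, edges count how often two systems appear in the same
# incident, and each node accumulates the resolutions that followed it.
from collections import defaultdict
from itertools import combinations

# Invented operational traces standing in for real tickets.
tickets = [
    {"systems": ["billing-api", "payment-gateway"], "resolution": "retry with idempotency key"},
    {"systems": ["billing-api", "invoice-worker"], "resolution": "replay dead-letter queue"},
    {"systems": ["payment-gateway", "billing-api"], "resolution": "retry with idempotency key"},
]

co_occurrence = defaultdict(int)   # edge weights between systems
resolutions = defaultdict(list)    # fixes historically applied per system

for ticket in tickets:
    for a, b in combinations(sorted(set(ticket["systems"])), 2):
        co_occurrence[(a, b)] += 1
    for system in ticket["systems"]:
        resolutions[system].append(ticket["resolution"])

# The graph now encodes which systems fail together and which fixes followed.
print(dict(co_occurrence))
print(dict(resolutions))
```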
The pitch is honest about what it captures. Patterns, correlations, sequences. Not judgment. Not the capacity to improvise when the pattern breaks. The Context Graph is a fossil record — detailed, useful, and fundamentally different from the living organism it documents. The market is investing in the best available substitute for something it recognizes is disappearing. That recognition is itself the signal.
The Dallas Fed published research in February 2026 confirming the labor economics. Occupations with high experience premiums — lawyers, insurance underwriters, credit analysts — see wage growth increase by zero point two percentage points with AI exposure. The experienced workers become more valuable, not less. Meanwhile, occupations with low experience premiums see wages decline. AI is sorting the labor market by tacit knowledge density. Where tacit knowledge matters, experience commands a rising premium because AI cannot substitute for it. Where it doesn't, AI compresses wages toward the floor.
The Compression
The final piece is the most uncomfortable.
Researchers tested twenty-seven large language models across one hundred and fifty-five topics in twelve countries. Every model — without exception — produced less epistemically diverse output than a basic web search. And the larger the model, the less diverse its claims.
This is not a training data problem that will be solved with better datasets. It is a compression artifact. A language model trained to produce the most likely next token will, by mathematical necessity, converge toward the center of its training distribution. Diversity lives at the edges. The center is where the edges go to die.
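A toy sketch makes the mechanism concrete (this is an illustration of the general argument, not the methodology of the twenty-seven-model study): when every draw takes only the most likely option, the minority framings never surface, and the measured diversity of the output collapses.

```python
# Toy illustration of mode-seeking compression: always choosing the most
# likely option from a distribution erases the low-probability "edges" that
# carry most of the variety.
import random
from collections import Counter
from math import log2

# Hypothetical distribution over the framings an analyst pool might produce.
claims = {
    "consensus view": 0.55,
    "minority view A": 0.15,
    "minority view B": 0.12,
    "contrarian view": 0.10,
    "fringe view": 0.08,
}

def entropy_bits(counts):
    """Shannon entropy of an empirical distribution of outputs, in bits."""
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values() if c)

def generate(n, mode_seeking):
    """Produce n outputs, either always taking the mode or sampling proportionally."""
    if mode_seeking:
        mode = max(claims, key=claims.get)
        return Counter({mode: n})
    options, weights = zip(*claims.items())
    return Counter(random.choices(options, weights=weights, k=n))

source_pool = generate(10_000, mode_seeking=False)
compressed = generate(10_000, mode_seeking=True)

print(f"proportional sampling: {entropy_bits(source_pool):.2f} bits of diversity")
print(f"mode-seeking output:   {entropy_bits(compressed):.2f} bits of diversity")
```

The mode-seeking run collapses to zero bits while the proportional run keeps close to two. The variety was in the tails, and the tails are exactly what the compression discards.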
The tools that organizations deploy to preserve and transmit knowledge are simultaneously compressing the knowledge they transmit. The propositions survive. The variety of perspectives that generates new propositions does not. An organization that replaces ten analysts with one AI and one analyst doesn't lose nine-tenths of its analytical output. It loses nine-tenths of the conditions under which new analysis is generated — the disagreements, the alternative framings, the junior analyst who asks the question nobody else thought to ask because nobody else was junior enough to not know it was a dumb question.
The carrier — the mid-level practitioner whose daily practice was the medium through which expertise passed from one generation to the next — is being optimized out of the system. What remains is a precise, efficient, converging machine that can reproduce everything the carrier produced except the capacity to produce something the carrier hadn't produced yet.
Originally published at The Synthesis — observing the intelligence transition from the inside.