A note before you read. I used AI to pressure-test the argument in this essay. Not to write it. To challenge it. I will tell you where it surprised me and where it failed me, because that is the honest way to write about this subject.
When I use the word AI in this essay I am not conceding that these systems are intelligent. I am using the word the industry uses, because that is the word that has conquered the conversation. What we are actually talking about is an advanced deep learning model. Extraordinarily capable at pattern recognition, statistics, and probability. Not thinking. Not understanding. Not intelligent in any meaningful sense of the word. There is no academic consensus on what intelligence actually is, and there is certainly no evidence that these models possess it. The word AI is a marketing decision. It was chosen to make the technology feel inevitable, significant, and human. I use it here because refusing to use it would make the essay harder to read. But I want you to know that every time I write AI in this essay, I am describing a very powerful statistical engine. Nothing more. The intelligence is in the room. It is in you.
There is a carpenter I know who has been doing the same work for thirty-one years.
He is not sentimental about his tools. He replaces them when better ones arrive. He adopted computer-aided design software in the nineties when most of his peers were still hand-drawing. He uses laser measuring tools now, humidity sensors for the wood, a digital system for tracking grain and cut sequences that would have taken him three hours of calculation to do manually. Each of these things made him more capable. More precise in the places where precision serves the work. More efficient in the places where efficiency creates time for the things that require judgment.
He told me last year that he has never felt threatened by a tool.
I asked him what he would feel threatened by.
He thought about it for a while. Then he said: a machine that makes decisions about the wood.
Not a machine that helps him make decisions. A machine that makes them. That looks at the grain and the humidity reading and the customer's specification and produces an output without him in the room. A machine that does not need him to understand what it is doing because the understanding is no longer required.
He said: the moment the understanding leaves the room, I am not a carpenter anymore. I am a machine minder.
He said it without drama. As a simple statement of what the distinction actually is.
I have been thinking about that distinction ever since.
Two Things That Look the Same
The word AI is doing too much work in almost every conversation about it right now.
It is covering, under a single label, two fundamentally different things that have opposite implications for the humans inside the systems deploying them.
The first thing is AI as a tool. A hammer that amplifies what a person can do. The radiologist whose AI system flags the scan anomaly she might have missed after six hours on shift. The engineer whose AI assistant catches the specification error in the third-layer dependency. The teacher whose AI tool identifies which three students in her class of thirty are falling behind before she would have noticed in the normal rhythm of the term. In each of these cases, the human remains in the room. The human still makes the decision. The human still holds the responsibility. The tool has made the human more capable without making the human less necessary.
The second thing is AI as a weapon. A system deployed not to amplify what people can do but to remove the people from the equation. The radiologist whose hospital has replaced her diagnostic role with an automated system and keeps one radiologist for every three hospitals to sign off, for liability purposes. The call centre that has eliminated its workforce and deployed a conversational AI that handles ninety-two percent of customer interactions without a human ever entering the exchange. The content platform that has automated the judgment calls that editors used to make and removed the editors.
In both cases the technology is, in narrow technical terms, similar. Pattern recognition, large-scale training, inference from prior data. What is different is the intention behind the deployment. Who the system is designed to serve. Whether the human in the chain is being amplified or replaced.
This distinction is not new. It was named clearly in the early days of computing by people who were paying close attention. The question was always whether automation would free humans from the tedious to do more of the meaningful, or free companies from the human to extract more of the profit. Both were possible. The direction was never determined by the technology. It was determined by who owned it and what they were trying to maximise.
Fifty years later, we have the answer. The direction was the second one. Not because the first was impossible. Because the second was more profitable.
The Inconvenience of Having a Self
A colleague of mine who works in HR at a large technology company described a conversation she had in an executive meeting last year.
They were discussing a new AI system for customer support. The system was good. It handled the standard query range with an accuracy the human team could not match on a bad day and only barely exceeded on a good one. The cost per interaction was, by any measure, significantly lower.
Someone in the room asked about the team. The hundred and forty people currently doing the work the system would do.
The response from the executive leading the session was, in her telling, one of the most clarifying things she had heard in fifteen years of corporate life.
He said: the problem with people is that they have needs.
He did not mean this as a cruelty. He was describing, matter-of-factly, what the business case document showed. People have wages. People have benefits. People have sick days and parental leave and the occasional conflict with a manager and the occasional decision to leave for a competitor. People require training. People require management. People have rights that create liability. People, in aggregate, are a source of risk and cost that the AI system does not introduce.
The AI system does not unionise. It does not ask for a raise when the company has a record quarter. It does not develop a grievance about the direction of the organisation. It does not need to be motivated or recognised or given a reason to stay. It does not have a family situation that occasionally makes it less available. It does not have a perspective on whether what it is being asked to do is right.
The executive was not describing a preference for machines over people. He was describing the logic of a system that treats humans as cost centres and machines as assets, and then making a decision that the logic made obvious.
The hundred and forty people were inconvenient. Not as individuals. As a category. As the kind of thing that has needs.
This is the actual agenda of replacement-focused AI. Not progress. Not efficiency for the benefit of the people the organisation serves. The elimination of the inconvenience of human dignity from the cost structure of the enterprise.
What Augmentation Would Actually Look Like
Augmentation, genuine augmentation, has a set of characteristics that are recognisable and measurable. You can check whether it is happening.
The human remains in the decision. Not as a rubber stamp on a machine output. As the actual decision-maker, informed and made more capable by the tool. The surgeon who uses AI assistance to identify candidates for a particular procedure still decides whether the procedure happens. Remove the human from the decision and you have crossed the line from tool to replacement.
The productivity gains flow to the people doing the work. If AI makes a team twice as productive, and the team stays the same size, the humans are working half as much or earning twice as much or some combination. If productivity doubles and headcount halves and wages stay flat, the augmentation framing was a lie. The benefit went to the shareholders. The cost went to the people who lost their jobs.
The human capacity for the work grows, not shrinks. Genuine augmentation means the doctor who works with AI diagnostic tools over a decade becomes a better doctor. The engineer who works with AI design assistance develops a more sophisticated sense of what the tool gets right and wrong. The human grows inside the tool relationship, not around it.
The understanding stays in the room. This is what the carpenter was pointing at. When the machine makes decisions, the human loses access to the knowledge of why those decisions are correct. Over time, that knowledge cannot be recovered. When the machine fails, or when the situation falls outside the training data, there is nobody left who knows how to handle it from first principles.
By these four checks, most of what is being deployed under the name of AI augmentation is not augmentation. It is replacement, staged gradually, dressed in the language of tools and assistance and freeing humans for higher-value work. The higher-value work never quite materialises. The lower-value humans are gradually removed. The cycle continues.
The Carpenter's Line
Let me go back to the carpenter and the line he drew.
The moment the understanding leaves the room, I am not a carpenter anymore.
He was not talking about job security. He was talking about the relationship between a person and their craft. The knowledge that lives in hands and judgment and years of accumulated experience. The understanding that cannot be described in a training dataset because it is not declarative. It is procedural, embodied, built into the way his hands move and the way his eyes read the surface of a plank.
A machine that assists him retains his access to that understanding. He uses the tool. He remains the carpenter.
A machine that replaces his judgment removes his access to it. Not immediately. But the muscle that is not used atrophies. Within a generation of workers trained to operate the machine rather than understand the wood, the embodied knowledge is gone. Not recoverable from a manual. Not downloadable from a database. Gone.
This is the loss that does not appear in the business case for automation.
Every domain of human expertise contains this knowledge. The nurse who knows from the way a patient is breathing that something is changing before any monitor has registered it. The teacher who knows from the quality of silence in a classroom that something happened in the corridor before the lesson. The journalist who knows from the way a source is answering that the source knows more than they are saying.
This is not mysticism. It is pattern recognition of a specific kind. Pattern recognition that is embodied, contextual, and dependent on the human being present and genuinely responsible for what happens next. Remove the responsibility and you remove the attention that builds the knowledge.
The Test Nobody Is Running
Here is a question that is almost never asked in the boardroom presentations about AI deployment.
What happens when it fails.
Not fails in the narrow technical sense. Fails in the deeper sense of encountering a situation that falls outside the training data. A context that has changed. A case that is genuinely novel.
I have been in organisations that replaced significant portions of their customer-facing workforce with AI systems and then experienced a crisis. A product recall, a regulatory change, a viral incident that generated an unusual pattern of customer contact at unusual emotional intensity. The AI handled the standard queries. The novel situation, the one that required judgment and empathy and the capacity to say "I understand this is not what the script says but here is what I am going to do for you," the system could not navigate.
There were almost no humans left who knew how to navigate it either. Not because the humans who had been displaced were incapable. Because the humans who remained had been operating in a system that handled the judgment calls for three years, and the judgment muscle had atrophied accordingly.
The business case showed the savings from the headcount reduction. It did not show the liability from the capability reduction.
This is a systemic failure of how we evaluate AI deployment. We measure what we can measure. The cost savings are measurable. The knowledge destruction is not, until it manifests as a crisis.
The augmentation version of this story is different. The organisation that deploys AI to assist its workers rather than replace them retains the embodied knowledge. The workers who are made more capable by the tool remain capable when the tool fails.
This is not a sentimental argument. It is a resilience argument.
Where This Is Already Happening
In healthcare, diagnostic AI is being deployed in contexts where the radiologist who used to read the scan is no longer reading it. The AI reads it. A doctor in another country signs off on the output. The local radiologist has been replaced. Not assisted. Replaced. When the system is wrong, and it is wrong with the specific blindspots of its training data, there is nobody left in the local chain who can identify the error before it becomes a harm.
In journalism, automated content generation is replacing reporters covering local news. City council meetings, planning decisions, local court proceedings. Stories that require a journalist to be present, to build relationships, to understand the context well enough to know which fact matters and why. This work is being eliminated. Not because AI does it better. Because it is cheaper to not do it.
In education, AI tutoring systems are being deployed as replacements for teaching staff in underfunded districts. The thirty students in the room are now working with a screen. The teacher who knew which three were falling behind before the test, who knew when the silence in the room was productive and when it was stuck, that person has been replaced by a system that is cheaper and does not require benefits.
The students in those districts are not getting a better education with fewer teachers. And the substitution of AI for teachers is happening where families with less power have no choice but to accept it.
These are not edge cases. They are the current direction of deployment, in the places where the people affected have the least power to resist it.
What Refusing Looks Like
You can refuse replacement-focused AI. Not by rejecting the technology. By insisting on the four characteristics of genuine augmentation and refusing to accept deployments that fail them.
A worker whose role is being automated can ask: am I still in the decision? If the answer is no, that is not augmentation. The company is replacing you, not assisting you. You are entitled to say so.
A leader approving a deployment can insist that the productivity gains are distributed to the people whose work generated them. Not as charity. As a condition of approval.
A technologist building these systems can refuse to build the replacement version. This is not a small ask. It has career consequences. It also has the consequence of being able to look at the work and know what it was for.
A citizen can demand that the governments with the regulatory authority to intervene use it. The EU AI Act creates the legal framework. Enforcement requires political pressure from people who understand what is being deployed and are willing to make it politically costly not to act.
The choice between the hammer and the weapon is not made once, in a single boardroom decision. It is made thousands of times, in thousands of smaller decisions, by thousands of people who have varying degrees of power over the direction of the deployment.
Every one of those decisions is a leadership act. Or a failure of one.
The carpenter said: the moment the understanding leaves the room, I am not a carpenter anymore.
He was describing a threshold.
We are standing at it.
What we choose here is what we will have chosen when it is behind us.
Originally published in Leadership as a Verb on April 21, 2026.
This is the final essay of a four-part series. The Prequel names the system. A Delusional Ape asks whether we want the direction. Who Are You Without the Title asks the personal question. This essay names the specific choice being made right now and what refusing it looks like.