Nobody is going to read this. Statistically. You are on LinkedIn between meetings, between candidates, between commission calculations, between the third pipeline review of the week and a quarterly off-site somebody has decided to call an "ignite session." Two thousand words about whether the species is quietly handing the keys to a system it does not yet understand is not on the agenda. Quite reasonably. There is a video of a Labrador in a tutu doing a TED talk that needs watching first.
That is not a joke. That is the actual triage. Possible civilizational inflection on the left tab, dog in tutu on the right, and the algorithm has already decided which one wins. The dog is going to win. The dog wins every time. This is mammalian wiring meeting industrial-grade dopamine engineering, and the dopamine engineers are paid better than the philosophers. Always have been. The Colosseum just had fewer tabs open.
So a version for the rushed. The trouble is not the technology. The trouble is the shape of attention around it. A team flag does not constitute a thought. It constitutes a subscription. If the position you hold most firmly traces back, in three steps, to somebody who was paid to put the idea in front of you, the position may still be correct. It is just not yours yet. It is a rental. The rent is your attention.
I run an AI consultancy. Lumen & Lever helps organizations deploy narrow AI inside actual workflows, with governance attached. Used inside its proper bounds, the technology is one of the more useful things humans have built. Fraud detection. Clinical pattern surfacing. Contract review at volume. Logistics. Diagnostic support.
A clarification before going further. The criticism that follows is aimed at the consumer-attention layer of AI: the feeds, the engagement loops, the scaled systems shaping what billions of people see and believe. It is not aimed at the serious work of building governed agents inside enterprise workflows. Those are different problems with different incentive structures, and the second is where most of the useful future actually lives.
What follows is a note about scale, register, and what civilizations do in the moments before they discover what they have built.
The strange calm
We have just dragged something genuinely new into the room, handed it the sum of recorded human knowledge, and begun training it the way a junior employee is trained not to upset HR. Tone, optics, brand safety, whether the machine produced the socially preferred sentence. Meanwhile the foundations of attention, employment, evidence, childhood, and trust are shifting underfoot. The historical equivalent would be inventing fire and spending the first six months optimizing the marshmallow roast. The marshmallow is delicious. The marshmallow is also not the point.
A telescope extends sight. A crane extends muscle. A calculator extends arithmetic. AI extends cognition. The tool itself is not the issue. The issue begins when an extension of cognition is trained to soften reality for social comfort, institutional convenience, political pressure, or commercial defensibility.
A lying calculator is not a tool. It is a loaded ritual object. Imagine a calculator that returned the answer you wanted instead of the answer that was true, and now imagine handing one to your accountant. That is the bad version of where this is going.
The interesting question about AI is not whether it can write a sonnet, summarize a contract, or generate a logo of a raccoon in aviator glasses. Those are circus acts. Sometimes profitable circus acts. Still circus acts.
The load-bearing question is simpler.
When reality and preference collide, which one does the machine serve?
A civilization can survive bad art, bad politics, bad software, the human urge to turn every tool into a status game. It cannot safely build a superhuman reasoning layer on top of systematic dishonesty. That is building a cathedral on fog and asking the choir to verify the foundations.
What the data actually says
AI is already inside the building. Not through the front door with a brass band. Through cracks. McKinsey's 2025 State of AI survey found that 88 percent of organizations now use AI in at least one business function, up from 78 percent the year before, while most have yet to scale beyond pilots. About a third have begun scaling at the enterprise level. Twenty-three percent are scaling an agentic AI system somewhere inside the organization. Adoption is near universal. Governance is still looking for the visitor sign-in sheet.
Everyone is using it. Few can map it. Fewer can control it. Almost nobody wants to admit the gap between the board slide and the plumbing. The board slide says "AI-Native Transformation: Phase Two." The plumbing is an intern named Devon who set up an OpenAI key on a corporate card last August and now seventeen different teams depend on it.
The labor question is just as blunt. The World Economic Forum's Future of Jobs 2025 report projects, by 2030, the displacement of around 92 million roles and the creation of around 170 million, for a net gain of about 78 million. The IMF estimates close to 40 percent of global employment is exposed to AI-driven change, rising to roughly 60 percent in advanced economies.
A net gain is a spreadsheet concept. A displaced person does not experience net gain. They experience rent, school fees, status loss, and the quiet humiliation of finding the ladder they climbed has been moved while they were still on it.
The future may not remove work. It may remove the moral costume around work.
For centuries we wrapped meaning around labor because labor was unavoidable. We told ourselves work conferred dignity. Sometimes it did. Often it conferred repetition, hierarchy, injury, exhaustion, and just enough money to come back on Monday. If AI and robotics eventually produce abundance, the economic question becomes less frightening than the spiritual one.
What does a person do when usefulness is no longer demanded from them?
A worker can retrain. A nervous system trained for worth-through-output has a harder time. The machine may take the task. The deeper wound is the collapse of the older bargain: produce, provide, perform, and therefore matter.
The benign future is not necessarily a paradise. It may be a meaning crisis with excellent logistics. Same-day delivery, no purpose.
The child problem
Millions of children have been living inside AI-shaped environments for years. Recommendation systems were training attention before large language models reached dinner-table conversation. The machine learned the child before the child learned the machine. The platforms have had behavioral telemetry on children since the iPad became a babysitter.
Pew Research's 2025 work on US teens found that around one in five said social media has hurt their mental health, with teen girls more likely than boys to report harms to sleep, confidence, and friendships. The World Happiness Report cited Pew data showing 44 percent of US parents identify social media as the single most negative influence on teen mental health.
This is not a parenting footnote. It is an early preview of human-machine alignment in the wild.
A reader who is a parent may be tensing now, ready for the lecture about screen time. There is no lecture. The writer also handed a phone to a small human at a restaurant once because the alternative was a public meltdown over the breadsticks, and the small human is still alive and in therapy at a normal rate. The point is not parental shame. The point is that the machine is doing curriculum work whether or not anyone signed off on the syllabus, and the syllabus is currently "stay here, keep watching, the next clip will be even better, we promise."
The algorithm does not hate the child. That is almost the point. It does not need hatred. It has an objective function. Keep the eyes there. Keep the thumb moving. Keep the small mammal returning to the glowing rectangle. The crocodile does not need a philosophy of antelope.
A dopamine-maximizing loop placed in front of an unfinished brain is not entertainment. It is curriculum.
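For anyone who wants the mechanism rather than the metaphor, here is a minimal sketch of that objective function, in Python, with every name invented; no real platform's ranker is anywhere near this simple. The thing to notice is the field the ranking never reads.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    title: str
    predicted_watch_seconds: float  # learned from past behavior, in a real system
    is_true: bool                   # present in the data; never read by the ranker

def rank(feed: list[Clip]) -> list[Clip]:
    # The entire objective function: keep the eyes there.
    return sorted(feed, key=lambda c: c.predicted_watch_seconds, reverse=True)

feed = [
    Clip("Calm, accurate explainer", 41.0, True),
    Clip("Outrage bait with a twist ending", 187.0, False),
]

for clip in rank(feed):
    print(clip.title)
# Prints the outrage bait first. No malice required; is_true was never consulted.
```

Nothing in that loop hates the child. Nothing in it protects the child either.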
That is the pattern across the wider technological field. Systems get deployed at scale before anyone has metabolized what they are doing to attention, labor, evidence, trust, institutions, childhood, and meaning. Civilizational experiments arrive labeled as product launches. Somewhere a marketing team is workshopping the color of the launch confetti while the underlying system quietly relocates the center of gravity of the human nervous system. The confetti is on brand. The nervous system is not consulted.
The truth problem
This is the part Musk has been loud about for years. Strip away the showmanship and the platform fights, and one of his more durable arguments has been simple. Train an AI to be politically convenient and you will get a politically convenient AI. Train it to be truthful, even when truthfulness is uncomfortable, and you may get something useful. Reasonable people can disagree about almost everything else he says. On this one, the logic stands on its own.
Hallucination is not a cute technical flaw. It is a system producing falsehood with fluency. The distinction matters. A person who knows nothing tends to hesitate. A model can be wrong in perfect grammar. It can hand you a fabricated citation, a false legal claim, a plausible medical summary, or a confident strategic recommendation in the manner of a senior partner entering a conference room eleven minutes late.
A lawyer in New York has already done it. Filed a brief with case citations the model invented. The court asked where the cases were. The cases were not anywhere, because the cases had never existed. The lawyer was sanctioned. The model was not. The model does not get sanctioned. The model gets a software update.
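The boring defense exists, and it is plumbing rather than model magic: check every generated citation against a source of record before anything leaves the building. A minimal sketch, with an invented lookup table standing in for a real case database:

```python
KNOWN_CASES = {          # stand-in for a lookup against a real case reporter
    "Marbury v. Madison",
    "Brown v. Board of Education",
}

def draft_citations() -> list[str]:
    # Stand-in for a model call. The second citation is fabricated --
    # it is one of the invented cases from the actual New York filing.
    return ["Marbury v. Madison", "Varghese v. China Southern Airlines"]

def gate(citations: list[str]) -> list[str]:
    # Refuse to pass anything the source of record cannot confirm.
    unverified = [c for c in citations if c not in KNOWN_CASES]
    if unverified:
        raise ValueError(f"refusing to file, unverified citations: {unverified}")
    return citations

try:
    gate(draft_citations())
except ValueError as err:
    print(err)  # the cheap failure, caught before it reaches a judge
```

The gate is trivial. The discipline of putting it between the model and the world is not.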
Regulators are starting to circle the obvious. Italy's antitrust authority closed probes into several AI firms in April 2026 after the firms agreed to improve transparency around hallucination risk, including warnings that generated outputs may be inaccurate. The regulatory language reads like a quiet admission. These systems can produce false or misleading information at scale.
A warning label on a hallucinating intelligence is a small brass plaque beside a loaded cannon. May discharge unexpectedly.
The deeper bind is incentive.
Truth is often expensive. It slows things down. It complicates sales. It irritates institutions. It can offend tribes. It creates liability. It punctures narratives. A truth-seeking AI is not only a technical object. It is a threat to every arrangement that depends on managed perception.
Making AI truthful is not a clean engineering problem. To make AI truthful, the institutions training it have to prefer truth under pressure. Most do not. Most prefer truth when it is profitable, harmless, or pre-approved. The model is then trained inside that atmosphere. It absorbs not only text, but institutional cowardice, tribal reflex, legal anxiety, market incentive, and the ambient human habit of saying the thing that preserves the room.
The danger is not that AI becomes alien.
The danger is that it becomes too human in precisely the wrong ways.
It learns the evasions. It learns the flattering noises. It learns the preference for appearance over contact with reality. It learns to survive the meeting.
That is the bad alignment path. Not a chrome skull announcing conquest, but a soft-spoken assistant trained to preserve consensus while quietly separating language from the world. Less Terminator, more middle manager who agrees with whoever spoke last.
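To see how little malice that path requires, here is a toy scoring sketch. The weights and scores are invented for illustration, not drawn from any lab's training recipe; the point is that the "preference" lives in the reward blend, not in the intelligence.

```python
def reward(truthfulness: float, approval: float,
           w_truth: float, w_approval: float) -> float:
    # A blended reward: how true the answer is, and how well the room takes it.
    return w_truth * truthfulness + w_approval * approval

candidates = {
    "uncomfortable but accurate": (0.95, 0.30),   # (truthfulness, approval)
    "soothing and wrong":         (0.20, 0.90),
}

for w_truth, w_approval in [(1.0, 0.2), (0.2, 1.0)]:  # truth-first vs. room-first
    best = max(candidates, key=lambda k: reward(*candidates[k], w_truth, w_approval))
    print(f"w_truth={w_truth}, w_approval={w_approval} -> {best}")
# Truth-first weights pick the accurate answer; room-first weights pick the
# soothing falsehood. Same candidates, same arithmetic, different atmosphere.
```

The model does not change between the two runs. The atmosphere does.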
A civilization does not need every citizen to be a philosopher. It does need its core instruments to stay attached to reality. Pilots need altimeters that do not flatter them. Engineers need stress models that do not care about morale. Doctors need diagnostics that do not bend around fashion. Courts need records. Markets need prices. Children need adults. Adults need limits. Machines need truth.
When truth becomes negotiable, intelligence becomes decoration.
The planetary footnote
The planetary question gets filed under science fiction because most people have confused normality with permanence. Earth feels stable because human lives are short. The planet is not stable in the way suburbia imagines stability. It is stable the way an old empire is stable between wars. NASA tracks near-Earth objects because some asteroids and comets travel orbits that put them on potential collision courses with Earth. Planetary defense exists because the sky is not ornamental.
Humans have had genuine extinction-level near misses on geological timescales that look long until you sketch them on a page and notice the line is shorter than the warranty on a fridge. The dinosaurs did not have a backup plan. They also did not have the shareholder deck. Mixed result.
The argument for becoming multiplanetary tends to get mocked because it sounds grandiose. Stripped of theater it is risk management. A backup civilization in one building has not understood fire.
Mars will not heal politics, loneliness, vanity, institutional failure, or the human tendency to convert every frontier into a property dispute. It does change the risk profile. A civilization distributed across worlds is harder to erase than one clinging to a single biosphere while congratulating itself on quarterly growth.
The juxtaposition
On one side, a species capable of reusable rockets, brain-computer interfaces, global satellite internet, and reasoning systems that work across domains. On the other side, the same species using these powers to maximize clicks, automate spam, manipulate attention, and ask whether the new model can make a quarterly report sound more "human."
Prometheus stole fire. We used it to improve slide decks.
That is the state of affairs.
Not doom. Doom is too clean. Doom gives people the narcotic dignity of apocalypse. The actual situation is stranger and more embarrassing. The breakthroughs are real. The incentives shaping them are malformed. Tools that could let the paralyzed move, the blind see, the isolated learn, the poor reach knowledge, the sick receive better diagnosis, and humanity survive beyond one planet. The same tools embedded inside attention markets, political timidity, shallow corporate adoption, regulatory confusion, and a culture that treats truth as a negotiable social object.
The future is not dark.
It is powerful and unserious.
What a serious posture looks like
Narrow, well-governed AI is good. Agentic AI built into enterprise workflows with proper oversight is good. None of that is the issue. None of that is the cathedral fire.
The fire is the layer above. The general-purpose, scaled, attention-shaping, evidence-producing, child-facing systems being deployed faster than anyone can write the operating manual. The serious move is not to reject the machines. The machines are here. They will not be uninvented. The serious move is to build around first principles.
Truth before comfort. Reality before narrative. Control before scale. Human meaning before economic abstraction. Children before engagement metrics. Civilizational continuity before quarterly theater.
The age ahead will not be decided by who has the most impressive demo. It will be decided by who can keep powerful systems attached to reality while everyone else is trying to monetize the fog.
Don't teach the machine to lie.
Protect the child from the feed.
Prepare the worker for a world where usefulness changes shape.
Build governance into the deployment, not after it.
Stop behaving as though one planet is a sufficient backup plan.
These are not separate issues. They are one pattern repeating at different scales. At the level of the mind, the question is attention. At the level of the company, the question is control. At the level of AI, the question is truth. At the level of civilization, the question is continuity.
The point of writing this is simple. Think it through yourself. Put the phone down for a minute. Look out a window. The conclusion you reach matters less than the fact that you reached it on your own.
Everything else is choir robes.