
Jason (AKA SEM)

Originally published at Medium

I Helped My Client Learn AI. Five Minutes Later, He Didn’t Need to Hire Anyone.

That “wow” is the sound of the economy restructuring in real time. And nobody has a plan.

Last week I showed a client how to use Claude. He runs a three-person team. He’d been struggling with something in Excel for hours.

Five minutes. That’s how long it took. He had exactly what he needed, built inside Google Sheets, functioning perfectly. Work that would have taken him six or seven hours.

He said “wow.”

In that wow was everything. The power. The promise. The problem.

Because here’s what else was in that moment: the realization that his three-person team could now do the work of fifteen. That sounds great for his business. It is great for his business. But those are twelve people he will never hire. Twelve jobs that will never exist.

Scale that across every small business in America. Every mid-market company. Every enterprise. And you start to see the shape of what’s coming.

Oppenheimer Didn’t Build a Bomb. He Solved a Physics Problem.

I’ve been a software developer since 1994. Thirty years of building things. For the last two years, I’ve been building AI-powered SaaS applications, multi-agent systems, autonomous agents that can reason, plan, and execute complex workflows with minimal human oversight.

I am building the thing that replaces people like me.

There’s a moment every serious builder hits where the abstraction collapses. Where you stop seeing the elegant architecture and start seeing the consequences. Oppenheimer had it at Trinity. He and his colleagues were solving fascinating physics problems. Brilliant people doing brilliant work. And then the thing they built worked. And working was the problem.

I’m not comparing myself to Oppenheimer. I’m not comparing AI to a nuclear weapon. But the psychological structure is identical: the moment a builder looks at what they’ve created and realizes the implications extend far beyond the original intent.

That moment is happening right now, across every AI lab, every startup, every developer’s home office. Some people feel it. Most are too busy shipping to notice.

The Numbers Are Already Ugly

This is not speculation. This is not a forecast. This is happening.

In 2025, companies directly attributed 55,000 job cuts to AI. That’s twelve times the number from just two years earlier. In the first two months of 2026 alone, AI was cited in over 12,000 layoff announcements.

The names are the ones you know. Amazon. 16,000 cuts to start 2026. Oracle preparing for up to 30,000 — the largest layoff in its history — while simultaneously spending $100 billion on AI infrastructure. Block, Jack Dorsey’s company, announced it would shrink from 10,000 employees to 6,000. His exact words: a significantly smaller team, using the tools they’re building, can do more and do it better.

Block’s stock surged 15% on the announcement. Wall Street rewarded the destruction of 4,000 jobs with a standing ovation.

And the developer market — my market, the market I’ve spent three decades in — is getting hit in ways that should alarm anyone paying attention. Entry-level developer hiring has dropped 73%. The average tech job search now takes five to six months and requires over 200 applications. Companies are posting “entry-level” roles and quietly filling them with seniors. In 2019, new graduates represented 32% of Big Tech hires. By 2026, that number has cratered to 7%.

I saw an HR recruiter talking about receiving 3,800 applicants for a single developer role. A role that used to pay $200,000, now listed at $140,000. She’d never seen anything like it.

This is early. This is the beginning.

The Pandora’s Box Problem

Can we stop this?

No. And anyone who tells you otherwise is either lying or hasn’t thought it through.

Even if the United States stopped all AI development tomorrow — shut down every lab, pulled every GPU, banned every model — China doesn’t stop. Europe doesn’t stop. The Gulf states don’t stop. The game theory is inescapable. It’s the same logic that drove nuclear proliferation. No single actor can afford to be the one who pauses, because the ones who don’t pause gain an insurmountable advantage.

Countries are charging toward AGI and ASI with everything they have. Massive data centers going up everywhere. SpaceX exploring AI infrastructure in orbit. Hundreds of billions in capital expenditure flowing into a technology whose full consequences nobody can predict.

Pandora’s box is open. It stays open. There is no closing it.

The Abundance Paradox

Here’s where the optimists lose me.

The standard narrative goes like this: AI will make everything cheaper. Production costs approach zero. Abundance for everyone. We’ll live in the world of Star Trek, where people only work if they want to. Utopia through automation.

It sounds beautiful. It also doesn’t survive contact with basic economics.

If AI makes production nearly free, but people have no income because they’ve been displaced, who buys the products? If corporations automate to maximize margins but destroy their customer base in the process, the system eats itself. Henry Ford understood this a hundred years ago — he paid his workers enough to buy his cars. We are heading toward the opposite of that.

The abundance paradox is simple: abundance only works if people can access it. If everything is free or nearly free, but you have no money because your job was automated, “free” is meaningless. You can’t buy the new iPhone with theoretical abundance.

And the cycle gets stranger the more you examine it. Companies automate. They reduce headcount. They pay taxes on what they sell. Those taxes fund the government. The government sends UBI checks to the people who lost their jobs. Those people use UBI to buy products from the companies that automated them. The companies use that revenue to further automate. The cycle tightens. The UBI needs to increase. The taxes need to increase. The automation accelerates.

At what point does that loop become self-sustaining? At what point does it collapse?

Nobody knows. Nobody has modeled this at scale because it’s never happened before.
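For what it's worth, the shape of the loop is easy to sketch even if nobody can model the real thing. Below is a deliberately crude toy simulation in Python. Every parameter in it (wage, UBI level, tax rate, pace of automation) is an invented assumption, not data from this article or anywhere else; the only point is to watch the gap between the tax take and the UBI bill flip sign as displacement compounds.

```python
# Toy model of the automate -> tax -> UBI -> consume loop described above.
# Every number here is an invented assumption for illustration, not data.

WORKERS = 100_000_000        # labor force in this toy economy
WAGE = 50_000                # average annual wage for employed workers
UBI = 20_000                 # annual UBI paid to each displaced worker
TAX_RATE = 0.25              # flat tax on corporate revenue
AUTOMATION_PER_YEAR = 0.05   # share of remaining jobs automated each year

displaced = 0.0              # fraction of the workforce displaced so far

for year in range(1, 21):
    displaced += AUTOMATION_PER_YEAR * (1 - displaced)
    employed = WORKERS * (1 - displaced)
    on_ubi = WORKERS * displaced

    # Consumers can only spend what they receive, and in this toy model
    # they spend all of it, so household income becomes corporate revenue.
    household_income = employed * WAGE + on_ubi * UBI
    corporate_revenue = household_income
    tax_revenue = corporate_revenue * TAX_RATE
    ubi_bill = on_ubi * UBI

    gap = tax_revenue - ubi_bill
    print(f"year {year:2d}: displaced {displaced:5.1%}  "
          f"UBI bill ${ubi_bill/1e9:6.0f}B  "
          f"tax take ${tax_revenue/1e9:6.0f}B  "
          f"gap ${gap/1e9:+7.0f}B")
```

With these made-up numbers the gap turns negative around year twelve. Change the assumptions and the year moves, but the structure of the squeeze stays the same.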

The UBI Fantasy

Universal Basic Income is the answer everyone reaches for. And I get why. When you stare at the displacement numbers long enough, UBI starts to feel inevitable. If people can’t work because the work doesn’t exist, you have to give them something.

But let’s be honest about the math.

The United States is running a $2 trillion annual deficit. The national debt is north of $36 trillion. We can’t fund the programs we already have. The idea that we’re going to layer on meaningful UBI — not “barely survive” UBI, but “live with dignity and agency” UBI — for tens of millions of displaced workers requires either massive new revenue sources or a fundamental restructuring of how we think about government finance.
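To put even rough, purely hypothetical numbers on it (both inputs below are assumptions, not estimates from any source):

```python
# Back-of-envelope only. Both inputs are invented round numbers.
displaced_workers = 30_000_000   # assumed displaced workers needing support
monthly_ubi = 2_000              # assumed "dignity and agency" monthly payment

annual_cost = displaced_workers * monthly_ubi * 12
print(f"${annual_cost / 1e12:.2f} trillion per year")  # -> $0.72 trillion per year
```

That's roughly $720 billion a year layered onto a $2 trillion deficit, before anyone argues about whether $2,000 a month actually buys dignity, and before the displaced population grows.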

Neither happens fast. Neither happens without enormous political will that does not currently exist.

And even if we solve the funding problem, there’s a deeper question: what does a life on UBI actually look like? Is it enough to live? Is it barely enough to live? The vision of doing whatever you want collapses pretty quickly when you realize that “whatever you want” requires resources. Travel costs money. Hobbies cost money. Education costs money. If you’re getting a subsistence check, you’re not living the Star Trek dream. You’re surviving.

People forget something about the Star Trek timeline. Humanity went through World War III and near-total societal collapse before arriving at that post-scarcity utopia. The transition wasn’t smooth. It was catastrophic.

The Bunker Problem

Here’s a detail that should keep you up at night.

Some of the wealthiest people in technology — the people building these systems, the people who see more than you or I see, who have access to information and projections we don’t — are buying property in New Zealand. Building underground bunkers. Investing in survival infrastructure. Reid Hoffman publicly said a significant percentage of Silicon Valley billionaires have done this.

I’m not a conspiracy theorist. But when the people building the future are hedging against it, that tells you something about their private assessment of the risks. These are not stupid people. They’re not paranoid. They’re doing math that the rest of us don’t have access to, and the conclusion they’re reaching involves concrete walls and water filtration systems.

What Actually Destroys Us

Here’s where I land, and I want to be precise about this.

AI will not destroy the human race. Not Terminator-style. Not Skynet. Not a rogue superintelligence deciding humans are inefficient.

What could destroy us — what is already beginning to fracture us — is something much more mundane. A slow-motion economic restructuring that happens faster than our institutions can respond. Wealth and capability concentrating in fewer and fewer hands. Millions of people losing not just their income but their sense of purpose and identity. A growing chasm between the people who own the AI and the people who were replaced by it.

That’s not science fiction. That’s the French Revolution. That’s the fall of Rome. That’s every historical moment where the gap between the elite and everyone else became unsustainable.

A CEO quoted in the Wall Street Journal last week warned about “pitchforks and torches.” He wasn’t being metaphorical. When people lose their livelihoods and see no path forward, history shows us exactly what happens next.

The Speed Problem

Every major technological disruption in history has followed the same pattern: enormous pain during the transition, followed by a new equilibrium that was genuinely better.

The Industrial Revolution produced child labor, sixteen-hour workdays, and Dickensian poverty before it produced the middle class. It took labor movements, regulation, public education, and decades of political struggle to turn industrial productivity into broadly shared prosperity.

We need the equivalent of that now. And we need it faster. Because this transition is moving faster than anything in history.

Technology moves in months. Labor markets adjust in years. Policy moves in decades.

That mismatch is the danger zone. And we are entering it right now. Today. Not in some hypothetical future. Now.

AI capabilities are arriving faster than our institutions can adapt. Faster than our education systems can retrain workers. Faster than our political systems can design safety nets. Faster than our culture can develop new frameworks for meaning and purpose in a world where machines do most of the cognitive work.

What I Can’t Stop Building

Here’s the part I haven’t reconciled.

I know all of this. I see the numbers. I feel the weight of it. I have the Oppenheimer moment at least once a week now.

And I keep building.

Not because I’m in denial. Not because I don’t care. But because the alternative to thoughtful builders isn’t “no AI.” It’s AI built by people who never ask these questions. People who never lie awake thinking about what it means. People who look at their client’s “wow” and feel only the pride, never the dread.

The people who terrify me aren't the ones building bunkers. They're the ones building AI systems without ever having this conversation. Never once sitting with the weight of what they're creating.

I don’t have a solution. I don’t think anyone does yet. But I know that the builders who understand both the power and the danger — who hold both the pride and the dread in the same hand — are the only people with any chance of steering this toward something survivable.

The question isn’t whether AI is good for the human race. That framing is too simple. The question is whether we will make the deliberate political, economic, and moral choices required to distribute the benefits of the most powerful technology ever created. Or whether we’ll just let it happen to us.

Right now, we’re letting it happen to us.

Jason Brashear is a senior software developer and AI systems architect with 30+ years of experience building production systems. He is the creator of ArgentOS, an intent-native multi-agent operating system, and a partner at Titanium Computing. He writes about AI architecture, agentic systems, and what happens when the builders start questioning what they’re building.

Follow him on GitHub: webdevtodayjason
