An economic bubble occurs when asset prices detach from long-term fundamentals and are driven instead by irrational exuberance and social contagion.
Since 2025 we’ve seen that pattern again: mass layoffs, elevated valuations and extreme market volatility, all tied to the AI narrative.
Let’s analyse what is happening and see whether we can define a strategy for ourselves as engineers.
Last year major tech firms, Meta, Amazon, Microsoft and others, announced mass layoffs totalling roughly 200,000 roles. Publicly, the rationale was the rise of AI: many positions would be automated or restructured around AI capabilities.
There is some truth in that claim, but it misses the main driver. Look at where these companies are investing: infrastructure, chips, datacentres and energy capacity.
They’re not just chasing chatbots; they’re preparing for an agentic AI world. Training large models consumes massive energy, while inference (answering a single query) is far cheaper. The real shift is the infrastructure needed to run persistent, autonomous AI agents at scale.
Agentic AI is not like a chatbot: agents such as Claude CoWork operate autonomously, executing tasks without continuous human input, managing email, CRM workflows, scheduling and more. Once created, an agent can remain active and consume resources while completing its tasks.
The capacity required is colossal: servers, accelerators, datacentre space, networking and power provisioning at scale.
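A back-of-envelope calculation shows why persistent agents change the infrastructure math. All numbers below are illustrative assumptions of my own, not measured figures:

```python
# Back-of-envelope: persistent agents vs. chatbots as inference load.
# Every number here is an illustrative assumption, not real usage data.

chat_queries_per_user_per_day = 10      # a person asks a chatbot a few things
agent_calls_per_agent_per_day = 2_000   # an always-on agent polls, plans and acts all day

users = 1_000_000
agents = 1_000_000

chat_load = users * chat_queries_per_user_per_day    # 10 million calls/day
agent_load = agents * agent_calls_per_agent_per_day  # 2 billion calls/day

print(f"agents generate {agent_load // chat_load}x the inference load")
```

Even with cheap per-call inference, an always-on agent multiplies call volume by orders of magnitude, which is exactly the kind of load that demands new datacentre and power capacity.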
To build that capacity, companies must invest heavily: buying accelerators (e.g., NVIDIA GPUs), networking devices and datacentre capacity, and securing power contracts. That capital commitment helps explain why some firms reallocate headcount toward these investments.
This is capital reallocation: laying off people to fund AI capacity. I can’t say it is fair or normal, but it is very rational.
What’s less rational is the market’s reaction to agentic AI in 2026. Agents differ from chatbots: the value proposition is task execution, not prompt finesse. An agent combines an LLM, tool access and autonomy, but that doesn’t mean it instantly replaces entire software categories.
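The combination of an LLM, tool access and autonomy can be sketched as a loop. This is a minimal illustration, not any vendor’s API: the planner function below stands in for a real LLM call, and all names are hypothetical.

```python
# Minimal agent-loop sketch: an LLM "policy" picks tools until the task is done.
# The LLM is stubbed with a deterministic planner; every name is illustrative.

def plan_next_step(goal, history):
    """Stand-in for an LLM call: returns (tool_name, argument) or None when done."""
    steps = [("read_file", "inbox.txt"), ("summarize", "inbox.txt")]
    return steps[len(history)] if len(history) < len(steps) else None

def run_agent(goal, tools):
    history = []
    # Autonomy: the loop keeps going without human input between steps.
    while (step := plan_next_step(goal, history)) is not None:
        tool_name, arg = step
        result = tools[tool_name](arg)  # tool access: the agent acts, not just chats
        history.append((tool_name, arg, result))
    return history

# Toy tools; a real agent would wire in email, files, calendars, etc.
tools = {
    "read_file": lambda path: f"contents of {path}",
    "summarize": lambda path: f"summary of {path}",
}

trace = run_agent("triage my inbox", tools)
```

The point of the sketch is the structure: the model plans, tools execute, and the loop repeats. None of that automatically subsumes the domain logic, data models and compliance workflows that enterprise software encodes.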
In January Anthropic launched Claude CoWork, an agent that handles non-technical tasks locally (initially macOS only): it manages files, email and calendars, and delivers artifacts like spreadsheets and text summaries.
Claude CoWork wasn’t the first agentic solution, but it was the first to capture broad public attention. Anthropic developed it after customers used Claude Code for non-coding tasks such as managing email, expense reports and file organization, demonstrating real demand for delegated AI automation.
Soon after CoWork’s general availability, several SaaS and enterprise software stocks, such as Salesforce, SAP and Adobe, saw large declines, contributing to a broader sell-off in software. The MSCI Software & Services index fell significantly in early 2026.
The market appears to be pricing in a future where AI agents displace some SaaS functionality. That view is optimistic and only partially rational, because agents may complement rather than fully replace many enterprise apps in the near future.
What came next was more surprising: markets moved into irrational territory. On 20 February Anthropic published a blog post suggesting Claude Code could assist with static code analysis. It was just a concept, not a shipped product or roadmap item. Yet cybersecurity stocks, from niche vendors to large firms like Palo Alto and CrowdStrike, tumbled in response.
On 23 February another Anthropic post suggested agents could assist with modernizing COBOL code, again a concept. Consulting firms and legacy technology vendors saw market pressure as investors extrapolated far beyond the announcement.
This is the irrational zone: a static analysis capability cannot replace entire cybersecurity teams, runtime scanners, or the complex processes that govern legacy banking systems. The market is reacting to buzzwords without understanding the engineering and operational realities behind the headlines.
Sound familiar? It echoes the dot-com bubble, when a single buzzword in a press release could inflate a stock. The mechanism is the same: narrative-driven speculation divorced from technical substance.
It’s easy to laugh, but boards will react. A sudden market signal can pressure executives to “do AI”, sometimes by reallocating resources toward visible AI projects at the expense of core capabilities like cybersecurity.
We may be in a feedback loop where irrational market moves trigger irrational corporate responses. As engineers, remember that AI is a tool, a powerful tool, but only a probabilistic predictor that generates text, code or images. It does not make value judgments or replace human decision making.
Strategy for engineers: learn agentic AI, not just prompt engineering, but how agents are built, orchestrated and monitored. Master relevant tools and workflows (e.g., agent orchestration, observability and safety patterns). AI will also create new roles; those who understand agentic systems will be better positioned.
And of course, AI tools that genuinely complement the cybersecurity landscape will arrive soon, but they will be real products, with verifiable, measurable and concrete results.