Software Moats in the Age of AI: What's Actually Defensible?
Why the "AI writes code now" narrative misses the point—and where competitive advantages stubbornly persist
Another pitch deck lands on your desk. Another startup promises to "displace custom software development." Another analyst proclaims coding is now a commodity. Another LinkedIn thought leader announces developers are obsolete.
You've heard variations for eighteen months. And yet—enterprise software spending hasn't collapsed. Professional services firms keep hiring. COBOL programmers are getting paid (handsomely!).
Where's the disconnect?
The Moat Everyone Forgot
Traditional software moats centered on code: proprietary algorithms, accumulated functionality, engineering discipline, the sheer grind required to replicate something. Two years and $20 million to build something equivalent? That was your moat.
AI compresses that timeline—for greenfield development, building from scratch. But most enterprise software is brownfield: decades of accumulated code, undocumented business logic, integration points understood only by the one developer who was there when they were built, or black boxes written by long-retired engineers that nobody fully understands. AI hits some hard limits in brownfield. That's our focus here.
The Context Problem (Temporary)
AI assistants operate within fixed "context windows"—working memory of roughly 200,000 to 1 million tokens (a token is roughly equivalent to 3/4 of a word). This sounds generous until you realize enterprise systems span millions of lines across thousands of files. Context management is the central constraint right now: everything your agent needs for the current task (files, prompts, tool instructions, and so on) must be in its context window at the same time, or the agent simply doesn't know it exists. Worse, models weight the beginning and end of the window more heavily (akin to the primacy and recency effects in human cognition), so material in the muddled middle gets de-emphasized. And the longer a session runs, the more gets evicted to make room or otherwise lost—a condition sometimes called "context rot"—and performance degrades accordingly.
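A back-of-the-envelope sketch makes the scale mismatch concrete. The 4-characters-per-token ratio is a common rule of thumb, not an exact figure, and the file counts are illustrative:

```python
# Rough sketch: estimate whether a codebase fits in a model's context
# window, using the common ~4-characters-per-token heuristic (an
# approximation; real tokenizers vary by language and content).

def estimate_tokens(char_count, chars_per_token=4):
    """Approximate token count for a blob of source text."""
    return char_count // chars_per_token

def fits_in_context(file_sizes_chars, context_window_tokens=200_000):
    """Return (estimated total tokens, whether it fits in one window)."""
    total = sum(estimate_tokens(n) for n in file_sizes_chars)
    return total, total <= context_window_tokens

# A modest enterprise module: 2,000 files averaging 8 KB each.
sizes = [8_000] * 2_000
total_tokens, fits = fits_in_context(sizes)
print(total_tokens, fits)  # 4000000 False -- twenty times over budget
```

Even a mid-sized subsystem blows past a 200K-token window by an order of magnitude, which is why naive "paste the repo into the prompt" approaches fail.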
Now, ask an AI to modify a billing calculation touching seven services, three databases, and two external APIs. You'll get confident suggestions that cheerfully ignore most of the complexity that isn't explicitly called out. The AI doesn't know what it doesn't know—a trait it shares with us all, but it can execute a bad solution faster.
This context challenge is a problem today, but solutions are coming. Recursive Language Models (RLMs) can now decompose arbitrarily large codebases, analyze the pieces, and synthesize understanding across the whole. Within 18-36 months, "codebase too large" stops being a moat. (Despite the name, an RLM isn't a new kind of model—it's a strategy, and one that can be layered on top of any LLM.)
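As a conceptual sketch (not any particular RLM implementation), the strategy resembles a recursive map-reduce over the codebase: summarize chunks that fit in one window, then summarize the summaries. Here `summarize` is a stand-in for a real LLM call:

```python
# Conceptual sketch of the recursive strategy behind RLMs. `summarize`
# is a placeholder for an actual LLM call; the window size and chunking
# are simplified for illustration.

def summarize(text, max_len=80):
    # Placeholder: a real system would send `text` to an LLM here.
    return text[:max_len]

def recursive_summary(chunks, window=200):
    """Summarize each chunk, then merge; recurse if the merge is too big."""
    summaries = [summarize(c) for c in chunks]
    combined = "\n".join(summaries)
    if len(combined) <= window:
        return summarize(combined)
    # Combined summaries still exceed the window: group and recurse.
    mid = len(summaries) // 2
    return recursive_summary([
        "\n".join(summaries[:mid]),
        "\n".join(summaries[mid:]),
    ], window)
```

The key point is that no single call ever sees the whole codebase; understanding is built up hierarchically, which is why the raw size of a repository stops being the binding constraint.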
But even if the size constraint goes away, understanding what code does differs from understanding why it exists. The political archaeology of enterprise systems—why this exception, why that workaround—remains opaque to any model trained on code alone. Nobody documented why the foundation was intentionally built with one crooked wall in 1987.
Implication: Codebase scale buys you 2-3 years. Institutional knowledge may buy you longer.
The Languages AI Hasn't Learned (Yet)
Large language models learned from public code. They excel at solving problems in Python and JavaScript. They may just hallucinate confidently in COBOL.
COBOL still processes 95% of ATM transactions and 80% of in-person financial transactions globally. RPG hums along on IBM midrange systems. ABAP powers SAP implementations. These languages lack the public training data AI needs—and ironically, they run the systems with the largest budgets.
Ask an AI to modify your COBOL billing module. It generates something plausible—and may introduce subtle bugs you'll discover in production six months later.
Implication: Legacy languages create unexpected insulation. Constrained talent pools cut both ways.
Domain Knowledge Doesn't Scale
Software that works isn't software that sells. Software that works correctly for a specific domain sells.
Healthcare billing involves thousands of payer-specific rules changing quarterly. Energy trading handles physical delivery constraints across jurisdictions. Insurance policy administration encodes actuarial logic accumulated over decades. AI generates code that processes data—not code embodying expertise it never learned.
Implication: Vertical software companies with genuine domain expertise have stronger moats post-AI. The easy parts got easier. Encoding specialized knowledge stayed exactly as hard.
Battle Scars Have Value (and Institutional Knowledge Isn't In the Docs)
Enterprise systems connect to ERPs, CRMs, data warehouses, payment processors, and countless internal tools. Each integration point represents hard-won understanding of how systems actually behave—as opposed to how documentation claims they behave.
AI writes integration code. It cannot anticipate that upstream systems send malformed JSON on Tuesdays, that authentication tokens expire differently in production, or that the "deprecated" field is actually required for three specific customer configurations. (Every developer reading this just nodded ruefully.)
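This hard-won knowledge tends to show up as small defensive patches nobody would write on day one. A sketch of the genre—the quirks and field names here are hypothetical examples, not from any real system:

```python
# Illustrative defensive integration code. The double-encoding quirk
# and the "legacy_id" field are hypothetical stand-ins for the kinds
# of undocumented behavior integrations accumulate over years.

import json

def parse_upstream_payload(raw):
    payload = json.loads(raw)
    # Known quirk: one upstream occasionally double-encodes its JSON,
    # so the first parse yields a string instead of a dict.
    if isinstance(payload, str):
        payload = json.loads(payload)
    # The "deprecated" legacy_id field is still required by a few
    # customer configurations -- keep it, whatever the docs say.
    payload.setdefault("legacy_id", None)
    return payload
```

None of this is guessable from documentation or training data; each line exists because something broke once. That is exactly the knowledge a competitor (human or AI) must relearn the hard way.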
This knowledge lives in tribal memory and incident reports. Nobody writes it down because nobody realizes it's unusual until something breaks.
Implication: Deep integrations create sticky moats. Competitors must relearn every lesson the hard way.
Relationships Beat Tech Chops
CIOs choose vendors they trust to still exist in five years and to navigate regulatory inquiries alongside them. A fintech startup with AI-generated code competes against thirty years of relationships and proven reliability. Good luck with that pitch.
AI-generated code also creates compliance questions nobody has answered. Who's liable when it processes protected health information incorrectly? When AI-built credit decisioning introduces unintended discrimination? Until liability frameworks mature, AI faces adoption friction in precisely the industries with the largest software budgets.
Implication: Relationship-intensive businesses in financial services, healthcare, and government hold moats invisible to technical assessments.
The Maintenance Cliff
Every line of code creates maintenance obligations. AI-generated code optimizes for working now, not being understood later—producing solutions humans wouldn't choose, using patterns inconsistently, generating technical debt at accelerated rates.
Organizations rapidly building AI-assisted systems are simultaneously building maintenance liabilities. Developers five years from now will struggle to understand logic no human designed. Black boxes that work but that nobody understands may come to dominate codebases. How good must AI get before you trust it to fix its own mysteries? Sometimes it's already there. In other domains, we may always need humans.
Implication: Discount AI productivity gains by future maintenance costs. Ignoring accelerated technical debt means overestimating returns.
The Pragmatic Bottom Line
AI transforms software development. The competitive landscape for simple ground-up greenfield applications has shifted fundamentally. But enterprise software operates differently. Complexity, integration, domain expertise, relationships, and regulatory dynamics create moats AI hasn't breached.
These moats aren't permanent. RLMs will erode context advantages. Training data will expand. Regulatory frameworks will mature and become more machine-compatible. But defensible positions today require understanding something deeply, integrating thoroughly, and maintaining relationships that transcend technical capability.
Those advantages outlast the headlines—and organizations that distinguish temporary moats from durable ones will allocate capital more effectively than those chasing the narrative.
The question isn't "can AI build this?" It's "can AI build it correctly, integrate it properly, and support it reliably?" The answer determines where you put your money.