Harry Floyd

Originally published at harryfloyd.substack.com

The Model Is Not the Moat

I keep hearing the same assumption underneath AI strategy talk: the winner will be whoever has the strongest model. Smarter model wins. Everything else is secondary.

That sounds plausible right up until you look at how people actually choose tools in real life.

If scaling laws keep holding, local models probably won't beat the frontier on raw intelligence. But users don't adopt "raw intelligence." They adopt something that fits into their day: something fast enough, private enough, legible enough, reliable enough, cheap enough, and integrated enough that using it becomes natural.

The benchmark measures the visible product. The moat forms one layer out, in the surrounding package.

The Product and the Seat

People compare models as if users experience them as isolated intelligence engines. Most of the time they don't. They experience a bundle: interface, defaults, permissions, speed, memory, privacy, cost, and how much the tool asks them to rearrange their behaviour.

That bundle is where trust accumulates. It's also where switching costs quietly form.

A frontier model can win the benchmark and still lose the seat.

By "seat" I mean the position a product earns inside a user's actual workflow. The place where their context lives. The place they trust not to embarrass them, leak data, slow them down, or force them to relearn everything.

The product is the thing they evaluate. The seat is the thing they get used to living in. These are not the same asset.

You Can Already See It in Coding Tools

A model that is slightly worse on a public benchmark can still be the one people prefer if it lives inside the editor, sees the repo, responds instantly, keeps sensitive code local, and fits the way they already work.

This is also why "just use the best model" is often bad product advice. The best model according to a leaderboard may carry the wrong latency, the wrong privacy posture, the wrong integration burden, or the wrong failure mode for the actual job.
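To make that concrete, here is a minimal sketch of "best model for the job" as a selection rule rather than a leaderboard lookup. The model names, scores, and latency numbers are invented for illustration; the point is simply that hard constraints filter candidates before benchmark scores get to rank them.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str             # hypothetical model names, for illustration only
    benchmark: float      # leaderboard score, higher is better
    p50_latency_ms: int   # typical response latency
    runs_locally: bool    # can sensitive data stay on the machine?

def pick_model(candidates, max_latency_ms, needs_local):
    """Return the strongest model that fits the job's constraints,
    not the strongest model in the abstract."""
    viable = [
        c for c in candidates
        if c.p50_latency_ms <= max_latency_ms
        and (c.runs_locally or not needs_local)
    ]
    if not viable:
        return None  # nothing fits: the constraints, not the leaderboard, decide
    return max(viable, key=lambda c: c.benchmark)

if __name__ == "__main__":
    candidates = [
        Candidate("frontier-xl", benchmark=92.0, p50_latency_ms=1800, runs_locally=False),
        Candidate("local-7b",    benchmark=74.0, p50_latency_ms=120,  runs_locally=True),
    ]
    # Inline editor completion: tight latency budget, code must stay local.
    # The leaderboard winner is filtered out before its score ever matters.
    print(pick_model(candidates, max_latency_ms=300, needs_local=True))
```

Under those (made-up) constraints the frontier model never even reaches the ranking step, which is the whole argument in two lines of filtering.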

How Moats Actually Form

Apple is the familiar version of this dynamic outside AI. The moat was never just one visible feature. It was the package: ecosystem fit, convenience, defaults, identity, and the low-grade friction of leaving. The product got attention. The package became hard to leave.

I think a lot of AI products will work the same way. Moats usually form in the residue, not in the headline claim. In habit. In muscle memory. In stored context. In predictable behaviour. In the feeling that this tool understands how you work and doesn't make you pay a tax every time you use it.

What This Means for Builders

Local models don't need to win the intelligence race to matter. They can win a different race entirely: trust, control, governance, latency, privacy, and workflow fit.

Once a tool becomes the place where your context lives, your defaults settle, and your work starts to flow, a better benchmark somewhere else is not enough to dislodge it.

If I were building in this market, I'd treat that as a design rule. Don't ask only, "How do we make the model look stronger?" Ask:

  • Where does the user feel risk right now?
  • What part of the workflow still feels awkward or fragile?
  • What context would make the tool more useful after 30 days than on day 1?
  • What would make leaving this product feel expensive in a good way?

That is how you build the seat. Not by winning one benchmark snapshot, but by creating a package that compounds through use.
