
Sajjad Heydari for Patrick Bot

Why the AI arms race missed the point, and what we built instead.

The Model Isn't the Product. The Context Is.

There's a seductive narrative in AI right now. It goes like this: bigger models produce better results. More parameters, more training data, more compute — more intelligence. The logical conclusion is that the companies with the largest models win, and everyone else is just waiting for the next release from OpenAI or Anthropic or Google to make their product incrementally better.

This narrative is wrong. Not because large models don't matter — they do — but because it confuses the engine with the vehicle. The model is the engine. What determines whether you arrive anywhere useful is everything else: the steering, the suspension, the road, and critically, the map.

We learned this the hard way.

The Lossy Compression Problem

Every time a human writes a prompt, they're performing lossy compression. They're taking the full, messy, interconnected reality of what they know — their context, their constraints, their history — and flattening it into a string of text. The model never sees the original signal. It only sees the compressed version.

This is the real bottleneck of AI adoption, and almost nobody talks about it.

Consider a founder trying to decide which feature to build next. The "right" answer depends on which customers are waiting, what the engineering team's capacity looks like, which partnerships are in motion, what the competitive landscape just did, and how all of those things relate to each other. No human can hold all of that in working memory at once. And if they can't hold it, they certainly can't type it into a prompt.

So what happens? They ask the model a simplified question. They get a simplified answer. They walk away thinking AI is "pretty good but not quite there yet." The model was fine. The context was broken.

What If You Fixed the Input Instead of Chasing a Better Engine?

This is the question that led us to build Patrick.

We're a small medtech company. At any given time, we're managing relationships with over 40 organizations across four provinces, tracking 25+ product features in various stages of development, coordinating 120+ tasks across a team of fewer than ten people, maintaining five distinct products, and navigating a regulatory environment that changes faster than we can document it.

That's not a prompt. That's an ecosystem.

No spreadsheet captures it. No CRM models the actual relationships between a customer's stated need, the feature that addresses it, the tasks required to build that feature, and the strategic initiative that justifies the investment. These connections exist — they're just trapped in meeting notes, Slack threads, email chains, and people's heads.

Patrick is a graph-based intelligence layer that makes those connections explicit and queryable. It captures relationships between entities, performs semantic search across them, and is exposed via MCP (Model Context Protocol) so that any LLM can access the full graph through natural conversation.

The key insight: we didn't build a better model. We built better context.

When someone on our team asks "what should we build next?", the answer doesn't come from a model's general knowledge about product strategy. It comes from Patrick traversing actual dependency chains — from customer needs, through feature requirements, down to implementation tasks — and surfacing the highest-ROI path based on real data. The model is the reasoning engine. Patrick is the map.
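In spirit, that traversal reduces to ranking paths by value over effort. Here is a minimal sketch — the feature names, weights, and the value-over-effort heuristic are our invention for illustration, not Patrick's actual scoring:

```python
# Toy dependency chains: each feature aggregates what the graph traversal
# would surface -- how many customer needs lead to it, and how much
# implementation work remains beneath it. All numbers are invented.
chains = {
    "feat-csv-export": {"waiting_customers": 3, "remaining_effort_days": 4},
    "feat-sso":        {"waiting_customers": 5, "remaining_effort_days": 20},
    "feat-dark-mode":  {"waiting_customers": 1, "remaining_effort_days": 2},
}

def highest_roi(chains):
    """Rank features by a simple demand-over-effort heuristic."""
    return max(chains, key=lambda f: chains[f]["waiting_customers"]
                                     / chains[f]["remaining_effort_days"])

print(highest_roi(chains))  # 'feat-csv-export'
```

The point of the sketch: the model never has to guess at priorities, because the structured context already encodes who is waiting and what the work costs.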

The Architecture of Context

Patrick's design reflects a simple thesis: the unit of useful knowledge isn't a document or a data point. It's a relationship.

Our graph tracks organizations, products, features, needs, tasks, and prospects — and the edges between them. Yours might look completely different. The specific entities don't matter. What matters is that you're modeling the connections that drive decisions, not just the objects.

An organization has needs. A need requires features. A feature is enabled by tasks. A product has features. These aren't arbitrary associations. They're the actual decision-making structure of a company, made explicit.
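As a concrete sketch of that structure — the node kinds and relation names below are our illustration, not Patrick's actual schema — typed nodes plus typed edges are enough to make the decision-making structure explicit and traversable:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    id: str
    kind: str  # e.g. "organization", "need", "feature", "task"

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)  # id -> Node
    edges: list = field(default_factory=list)  # (src_id, relation, dst_id)

    def add(self, node: Node):
        self.nodes[node.id] = node

    def out(self, src: str, relation: str):
        """Follow one edge type outward from a node."""
        return [self.nodes[d] for s, r, d in self.edges if s == src and r == relation]

# "An organization has needs. A need requires features. A feature is
# enabled by tasks." -- the same sentences, as data:
g = Graph()
for node in [Node("acme", "organization"), Node("need-export", "need"),
             Node("feat-csv", "feature"), Node("task-parser", "task")]:
    g.add(node)
g.edges += [("acme", "has_need", "need-export"),
            ("need-export", "requires", "feat-csv"),
            ("feat-csv", "enabled_by", "task-parser")]
```

Nothing here is exotic; the leverage comes from naming the relations that decisions actually follow, so a model can walk them instead of inferring them.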

This means Patrick can answer questions that no single data source could:

"If we deprecate this feature, which customers are affected and which prospects does that jeopardize?" That's an impact analysis that spans the CRM, the product roadmap, and the sales pipeline simultaneously.

"Which customer needs have no features mapped to them?" That's a coverage gap analysis — unmet requirements hiding in plain sight.

"Who on the team is overloaded, and what would shift if we deprioritized this initiative?" That's a capacity analysis that accounts for actual task ownership and effort estimates, not just calendar slots.
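Two of those questions can be sketched directly as graph queries. This is a toy stand-in, not Patrick's implementation — the entity names and relation labels are invented for illustration:

```python
# Edges as (source, relation, target) triples over a tiny example graph.
edges = [
    ("acme", "has_need", "need-export"),
    ("acme", "has_need", "need-sso"),
    ("need-export", "requires", "feat-csv"),
    # deliberately, nothing "requires" maps to "need-sso"
    ("feat-csv", "enabled_by", "task-parser"),
]

def coverage_gaps(edges):
    """Customer needs with no feature mapped to them."""
    needs = {dst for src, rel, dst in edges if rel == "has_need"}
    covered = {src for src, rel, dst in edges if rel == "requires"}
    return needs - covered

def impact_of_deprecating(feature, edges):
    """Organizations affected if this feature is dropped."""
    hit_needs = {src for src, rel, dst in edges if rel == "requires" and dst == feature}
    return {src for src, rel, dst in edges if rel == "has_need" and dst in hit_needs}

print(coverage_gaps(edges))                      # {'need-sso'}
print(impact_of_deprecating("feat-csv", edges))  # {'acme'}
```

Each query is a couple of set operations once the relationships exist as data — which is the claim: the hard part was never the reasoning, it was making the connections queryable at all.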

None of these questions require a more powerful model. They require structured context that the model can traverse. The intelligence was always there in the LLM. What was missing was the map.

Why This Matters Beyond Us

The pattern we stumbled into is generalizable. Every organization that uses AI has the same fundamental problem: the context that would make AI useful is scattered, implicit, and relational. The model can reason. It just can't see.

CRMs store customer data but don't model how a customer's needs connect to your product roadmap. Project management tools track tasks but don't link them to strategic initiatives. Business intelligence platforms visualize data but don't capture the why behind decisions. The relationships between these systems — the connective tissue of actual decision-making — live nowhere.

Patrick is one implementation of a broader idea: the next wave of AI value won't come from bigger models. It will come from structured context layers that give existing models the information they need to reason well.

The companies that figure this out — that invest in the map instead of perpetually upgrading the engine — will extract disproportionate value from AI. The companies that keep waiting for the next model release will keep writing the same underspecified prompts and getting the same mediocre answers.

This Isn't a Replacement. It's a Multiplier.

To be clear: this isn't an argument against any particular approach to AI. If you've built a RAG pipeline, great — it gets better when the retrieval layer understands relationships, not just document similarity. If you're fine-tuning models on domain data, great — that model becomes dramatically more useful when it has structured context to reason over at inference time. If you're running agents with tool access, great — a graph of your actual business state is one of the most powerful tools you can hand them.

The point isn't that existing approaches are wrong. The point is that they're all operating on incomplete context, and the returns from fixing that are larger than the returns from any single model upgrade.

Every approach to AI gets better when the model can see the full picture. Patrick is how we built that picture for ourselves. The specific implementation matters less than the principle: structure your context, and the models you already have become the models you were waiting for.

What We're Building Toward

Patrick started as a scrappy internal tool to help a small team make better decisions. Internally, it's grown into something far more powerful — a full business intelligence layer that touches every decision we make, from which prospect to prioritize to which feature to deprecate to how to prepare for a meeting next Tuesday. It knows our organizations, our pipeline, our strategic initiatives, our capacity constraints, and the thousand invisible threads between them.

We're not releasing all of that. But we are releasing the core of it — a subset that captures the pattern: a graph-based context layer, exposed via MCP, that any team can deploy to give their LLM the structured context it's been missing. Enough to prove the thesis. Enough to build on.

We've detailed the origin story and architecture in a companion post: How We Built Patrick. But the bigger claim here isn't about Patrick specifically. It's about where AI value actually lives.

The model race will continue. Models will get bigger, faster, cheaper. And that's great — a better engine is always welcome. But the teams and organizations that will win with AI are the ones that figure out the context problem first.

Build the map. The engine will follow.
