
Juhani Ränkimies


What AI Needs to Know (and What It Doesn't)

#ai

Ask an AI coding assistant to implement a feature. Give it access to your whole codebase. Watch what happens.

It reads too much. Or too little. It picks up conventions from one module and applies them where they don't belong. It misses architectural boundaries because they're implicit. It generates code that works but doesn't fit because it lacks context about why the system is shaped the way it is.

The problem isn't AI capability. It's context management.

The context problem

AI agents don't understand your system the way a developer who's worked on it for months does. A tenured developer has absorbed product intent, architectural constraints, technology choices, module boundaries, and unwritten conventions through osmosis. They know what matters without being told.

AI has none of this ambient knowledge. Every session starts cold. What the AI can work with is exactly what you give it -- no more, no less.

This creates two failure modes:

Too much context. Dump the entire repo into the prompt and the AI drowns. It can't distinguish product requirements from implementation details, current conventions from legacy code, stable architecture from experimental branches. Signal gets lost in noise.

Too little context. Point the AI at a single file and it works in a vacuum. It makes locally reasonable decisions that violate global constraints. It duplicates utilities that already exist. It breaks architectural rules it doesn't know about.

The sweet spot is selective context loading: give the AI exactly the knowledge it needs for the task at hand, organized so each document answers one clear question.

A layered knowledge model

The approach that works in practice organizes project knowledge into layers, each serving a different purpose. When the AI needs to implement a feature, you don't give it everything -- you load the layers relevant to that task.

Layer 1: Product context

What question it answers: What is this product and who is it for?

This is the document most teams skip and AI most needs. It captures:

  • What the product does
  • Who the users are
  • What problems it solves
  • What's explicitly out of scope

Without this, AI makes product decisions by inference. It guesses at user intent from code patterns. Sometimes it guesses right. Often it doesn't.

A real example from a project bootstrapped with this approach:

SitePilot2 is a SaaS product that helps non-technical business owners create and maintain professional business websites without external implementation help. It combines AI-assisted onboarding, guided content generation, and ongoing site maintenance workflows.

Two sentences. Enough for an AI to know that "simplicity for non-technical users" is the product constraint, not "flexibility for developers."

Layer 2: Engineering context

What question it answers: How do I operate safely inside this repository?

Four documents cover this:

  • structure.md -- where code goes, what modules do, dependency boundaries. When AI needs to add a new handler, it knows to put it in src/infrastructure/ not src/domain/.
  • tech-stack.md -- approved technologies, preferred libraries, forbidden choices. Prevents AI from pulling in a new ORM when one is already chosen.
  • architecture.md -- high-level system shape, components, data flow. Tells AI that the system uses hexagonal architecture before it generates code that couples the domain to the HTTP framework.
  • quality-policy.md -- mandatory automated checks and how to run them. AI knows it needs to pass rustfmt, clippy, and cargo test -- not just make the feature work.
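To make the last of these concrete, here is one plausible shape for quality-policy.md, using the Rust checks named above. The exact commands and ordering are an assumption; the point is that the gates are written down where an agent can read and run them.

```markdown
# Quality Policy

All changes must pass, in order:

1. `cargo fmt --check` -- formatting
2. `cargo clippy -- -D warnings` -- lints treated as errors
3. `cargo test` -- full test suite

A feature is not done until all three pass locally and in CI.
```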

Layer 3: Feature package

What question it answers: What exactly must this feature do?

This is where the spec-driven approach from Part 1 lives:

  • spec.md -- problem, user story, acceptance criteria with stable IDs, invariants, non-goals.
  • verification.yaml -- maps each acceptance criterion to specific tests.
  • design.md (optional) -- explains how a non-trivial solution will satisfy the contract.

The key distinction:

  • spec.md defines what must be true
  • verification.yaml defines how that truth is proven
  • design.md explains how the solution works

These live together because they describe the same feature from different angles. An AI implementing that feature needs all three; an AI working on a different feature needs none of them.
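The article doesn't prescribe a format for verification.yaml, but a minimal sketch might look like this. The feature name, criterion IDs, and test paths are hypothetical; what matters is that every acceptance criterion in spec.md maps to at least one concrete test.

```yaml
# verification.yaml -- hypothetical mapping for an email-OTP feature.
# Criterion IDs come from spec.md; test entries point at real test functions.
feature: email-otp
criteria:
  AC-1:
    description: "A valid OTP logs the user in"
    tests: [tests/auth_otp.rs::valid_otp_creates_session]
  AC-2:
    description: "An expired OTP is rejected"
    tests: [tests/auth_otp.rs::expired_otp_is_rejected]
```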

Layer 4: Technical decisions

What question it answers: Why was this decision made, and what are the consequences?

Architecture Decision Records (ADRs) capture decisions that are expensive to revisit: framework choices, authentication strategies, data model commitments. They record the decision, alternatives considered, rationale, and consequences.

ADRs are not the whole design system. They capture why, not the full current shape. But when AI is about to make a decision that conflicts with an existing ADR, having that context loaded prevents costly mistakes.

A project might have ADRs like:

  • ADR-001: Backend-driven UI with Datastar
  • ADR-002: Hexagonal backend architecture
  • ADR-004: Passwordless email OTP authentication for v1

When AI starts implementing authentication, loading ADR-004 tells it the decision is OTP via email, not passwords. Without that context, it defaults to whatever pattern is most common in its training data.
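A sketch of what ADR-004 might contain, following the common Nygard-style template (status, decision, alternatives, consequences). Everything beyond the title is illustrative, not taken from the actual project:

```markdown
# ADR-004: Passwordless email OTP authentication for v1

## Status
Accepted

## Decision
v1 authenticates users with one-time codes sent by email. No passwords are stored.

## Alternatives considered
- Password + email verification: rejected; adds reset flows and credential-storage risk.
- Third-party OAuth only: rejected; excludes users without supported accounts.

## Consequences
- No password-reset or credential-storage code in v1.
- Login depends on email deliverability; monitor bounce rates.
```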

Layer 5: Change management

What question it answers: What is changing and why?

Change records track how the system evolves. They're separate from specs because:

  • spec.md is current truth -- what the system must do right now
  • change.md is historical delta -- what's changing, why, and what it affects

This distinction matters for AI because it prevents a common failure: AI reading a change record and treating a proposed change as the current contract, or reading an old change record and implementing something that's already been superseded.
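A change record can guard against exactly that failure by carrying an explicit status. A hypothetical sketch (the CR number, feature, and fields are invented for illustration):

```markdown
# CR-007: Shorten OTP expiry from 15 to 5 minutes

Status: proposed        <!-- not yet current truth; spec.md still says 15 -->
Affects: specs/email-otp/spec.md (AC-2), ADR-004
Why: reduce the replay window flagged in security review
```

An agent that reads `Status: proposed` knows spec.md still wins; once the change lands, the spec is updated and the record becomes history.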

Layer 6: Planning

What question it answers: What work is worth doing and in what order?

Planning items describe goals, priorities, and sequencing. AI needs this when selecting or proposing work, but not when implementing an already-defined feature.

Why the split matters

If these concerns are collapsed into one large design document, agents either miss important context or load too much irrelevant material.

The split lets agents load only what they need:

  • Product intent when defining behavior
  • Repo structure when editing code
  • Tech constraints when choosing tools
  • The specific feature package when implementing behavior
  • ADRs when a decision touches an existing architectural commitment
  • Change records when evaluating whether behavior is intentionally changing

This isn't just theory. In practice, the difference between "implement this feature given the whole codebase" and "implement this feature given the spec, the architecture doc, the structure guide, and the tech stack" is the difference between fighting the AI and collaborating with it.

The minimal shape

A repository doesn't need all of this on day one. The minimal useful shape is:

```
docs/
  product.md
  structure.md
  tech-stack.md
  architecture.md
  quality-policy.md
  specs/
    <feature>/
      spec.md
      verification.yaml
  changes/
    CR-001-descriptive-slug/
      change.md
  adr/
    ADR-001.md
  planning/
    PLN-001.md
tests/
src/
```
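Scaffolding this layout is a one-liner with mkdir, but a small script that also stubs the five core documents can be handy. A sketch, assuming the directory and file names above; it deliberately never overwrites existing files.

```python
from pathlib import Path

# Core documents and directories from the minimal shape described above.
CORE_DOCS = ["product.md", "structure.md", "tech-stack.md",
             "architecture.md", "quality-policy.md"]
DIRS = ["docs/specs", "docs/changes", "docs/adr", "docs/planning",
        "tests", "src"]

def bootstrap(root: Path) -> None:
    """Create the minimal docs layout, stubbing core docs without clobbering."""
    for d in DIRS:
        (root / d).mkdir(parents=True, exist_ok=True)
    for doc in CORE_DOCS:
        path = root / "docs" / doc
        if not path.exists():  # never overwrite a document that already exists
            path.write_text(f"# {doc.removesuffix('.md')}\n\nTODO\n")

if __name__ == "__main__":
    bootstrap(Path("."))
```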

Start with the five core documents. Add specs and change records as you build features. Add ADRs when you make significant decisions. The structure grows with the project.

What this doesn't solve

Context management helps AI make better decisions, but it doesn't eliminate the need for verification. A well-informed AI still needs its output checked against contracts and quality gates. The artifact model reduces how often AI goes wrong; the spec and proof model catches it when it does.

The two work together. Without good context, you write specs and AI still struggles. Without specs and proof, you provide context and AI still produces unverifiable output. You need both.

Practical implications

If you're working with AI coding assistants today:

  1. Write a product.md. Even three paragraphs about what your system does and who it's for will improve AI output more than any prompt engineering trick.
  2. Document where code goes. A structure.md that says "handlers go in src/handlers, domain logic goes in src/domain, never import infrastructure from domain" prevents an entire class of AI mistakes.
  3. State your tech choices. If you've chosen Axum over Actix, say so. If you've decided on Turso over Postgres, say so. AI defaults to popularity, not your decisions.
  4. Keep quality policy explicit. Don't rely on AI knowing to run clippy or that you require 80% test coverage. Write it down where the AI can read it.
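For point 2, even a handful of lines is enough. A structure.md sketch (module names are illustrative; the one hard rule is the dependency direction):

```markdown
# structure.md

- src/domain/         -- business rules; depends on nothing else in src/
- src/infrastructure/ -- DB, HTTP, email adapters; may depend on domain
- src/handlers/       -- request handlers; wire infrastructure to domain

Rule: domain never imports from infrastructure or handlers.
```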

These four documents take an hour to write. They pay for themselves in the first week of AI-assisted development.
