
Roman Belov

Posted on • Originally published at futurecraft.pro

Context Engineering: How to Manage Context for AI Models and Agents

Claude's context window holds 200,000 tokens. Gemini's stretches to two million. But response quality starts degrading long before the window fills up. Window size doesn't solve the context problem — it masks it.

Prompt engineering teaches you how to ask. Context engineering teaches you what to feed the model before asking. And the second one shapes the answer more than the first.

Andrej Karpathy put it this way: "Context engineering — the delicate art and science of filling the context window with just the right information for the next step." Tobi Lütke, CEO of Shopify, popularized the term itself, and Gartner declared in July 2025: "Context engineering is in, and prompt engineering is out."

This piece covers concrete techniques, models, and patterns. Things that actually work when you're using AI agents in development every day.

Prompt vs Context: Where the Line Falls

Here's an analogy that works: you're hiring an expert consultant.

Prompt — your question: "What should I do?"
Context — the briefing you hand them before the question.

You can phrase the question perfectly, but if the briefing contains 500 pages of irrelevant documents, even a strong expert will get lost. Flip it around: hand them exactly the 2 pages they need, and even a simple question yields a precise answer.

Prompt engineering answers questions like: how to frame the task, what role to assign the model, what output format to request. Context engineering answers different ones: feed 100 reviews or pick 15 representative ones? The entire 500-line file or just lines 45–80? All the documentation or extract the facts?

A more technical analogy drives it home. The LLM is a CPU. The context window is RAM. You're the operating system deciding what gets loaded into working memory. The goal: load exactly the data needed for the current operation.

Why More Context Is Worse

This is counterintuitive, but backed by research.

Context Rot

Chroma's research (2025) showed that LLM accuracy drops as the token count in context grows — even when the window is far from full.

The mechanism: attention is a fixed resource. Weights always sum to 1. More tokens means less attention per fragment. Think of a flashlight — the wider the beam, the dimmer the light at any point. And the harder the task, the steeper the drop.

Lost in the Middle

The "Lost in the Middle" study (Liu et al., 2023) found a specific pattern: LLM performance drops 30%+ when critical information sits in the middle of a long context. Beginning and end? Fine. The middle is a blind spot.

Practical takeaway: put the important stuff at the beginning or end. System prompt up top. Few-shot examples at the bottom.

Economics

Every token costs money, and the model rereads the entire context on every request (LLMs are stateless):

  • Input: ~$3 per 1M tokens (Claude Sonnet)
  • 100K context × 100 requests/day = ~$30/day = $900/month
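The arithmetic above can be sketched as a tiny cost model (input tokens only; output tokens and prompt caching discounts are ignored, and the $3/1M price is the assumed Claude Sonnet input rate):

```python
# Rough input-token cost model. PRICE_PER_M_INPUT is an assumption
# based on the ~$3 / 1M tokens figure quoted above.
PRICE_PER_M_INPUT = 3.00

def monthly_input_cost(context_tokens: int, requests_per_day: int, days: int = 30) -> float:
    """Input-token cost only; LLMs are stateless, so the whole context is billed every request."""
    per_request = context_tokens / 1_000_000 * PRICE_PER_M_INPUT
    return per_request * requests_per_day * days

print(round(monthly_input_cost(100_000, 100), 2))  # 900.0 — the $900/month from above
```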

Context engineering is budget engineering too.

Hallucinations from Overload

With a bloated context, the model tries to "use everything" and starts inventing connections between unrelated parts. Data about Company A gets attributed to Company B. Functions that don't exist get "recalled" from similar code that drifted into the context twenty screens back.

Six Layers of Context

Structure context like an onion — six layers, each with a specific job. This fights degradation by placing the most important information at the beginning and end, instead of spreading it across the middle.

┌─────────────────────────────────────────┐
│  1. SYSTEM — who you are & how to act   │  ← Permanent (beginning)
├─────────────────────────────────────────┤
│  2. PROJECT — project context           │  ← Semi-permanent
├─────────────────────────────────────────┤
│  3. TASK — the specific task            │  ← Per task
├─────────────────────────────────────────┤
│  4. DIFF / CODE — relevant fragments    │  ← Per task
├─────────────────────────────────────────┤
│  5. ACCEPTANCE CRITERIA — exit criteria │  ← Per task
├─────────────────────────────────────────┤
│  6. EXAMPLES (Few-shot) — samples       │  ← Optional (end)
└─────────────────────────────────────────┘
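A minimal sketch of assembling the six layers into one prompt string. The layer names follow the diagram; the `##` separator format is an assumption, not a required convention:

```python
# Assemble the six layers in order: permanent material first, examples last.
# Empty layers are skipped so optional ones (code, criteria, examples) cost nothing.
def build_context(system: str, project: str, task: str,
                  code: str = "", criteria: str = "", examples: str = "") -> str:
    layers = [
        ("SYSTEM", system),                 # permanent (beginning)
        ("PROJECT", project),               # semi-permanent
        ("TASK", task),                     # per task
        ("DIFF/CODE", code),                # per task
        ("ACCEPTANCE CRITERIA", criteria),  # per task
        ("EXAMPLES", examples),             # optional (end)
    ]
    return "\n\n".join(f"## {name}\n{body}" for name, body in layers if body)
```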

System: Role and Behavior

Who the model is and how it should behave. Always at the very beginning.

You are an experienced backend developer working with Python and FastAPI.
Keep answers concise. Use type hints. Don't add dependencies without asking.

This is where the role, response style, and constraints go. Task details do not belong here — that's layer 3.

Project: Project Context

Tech stack, structure, architecture decisions, code conventions. This layer gets reused across tasks. In Claude Code, it lives in the CLAUDE.md file — the agent reads it automatically on every launch.

Task: What to Do

A clear description of what to do, why, and — this gets forgotten constantly — what not to do.

A good example:

Task: Add rate limiting to /users.
Context: Endpoint is unprotected, bots are overloading it.
Requirements: 100 req/min per IP, Redis for counters, 429 on exceeded.
Out of scope: Changing endpoint logic, adding authorization.

A bad example: "Add rate limiting."

Diff/Code: Only What's Relevant

Provide only the code fragments that relate to the task. Not the entire file. Specify path and lines: app/api/users.py, lines 45–60.

Acceptance Criteria: When to Stop

Clear, verifiable conditions. The model only knows when to stop if you tell it. Skip these and you'll get either a half-finished answer or something wildly overengineered.

- [ ] Return 429 status when limit is exceeded
- [ ] Include Retry-After header in the response
- [ ] Unit tests cover edge cases

Examples (Few-shot): At the End

For nonstandard output formats or a specific style. Place them at the end of the context — the model "sees" the finale better.

What Hurts to Put in Context

A few anti-patterns that will reliably tank your results:

  • The entire project codebase — signal drowns in noise
  • Contradictory instructions — "use Redux" + "use Context API" = the model gets confused
  • Outdated examples — code with deprecated APIs gets reproduced verbatim
  • Vague phrasing — "make it better" gives the model no direction

Four Strategies for Managing Context

LangChain and Anthropic propose a framework: all context work boils down to four actions.

  Strategy   What It Does                    Example
  Write      Persist context externally      Scratchpads, MEMORY.md, progress files
  Select     Extract only what's relevant    RAG, grep, code search
  Compress   Shrink what stays in context    Compaction, summarization, tool result cleanup
  Isolate    Split work across contexts      Subagents with clean context

Everything described below is a specific case of one of these four.

Persistence: Bridging Sessions

Every session with an AI agent starts from scratch. New context window, zero memory of previous work. Anthropic calls it the "shift engineer" problem: each new engineer coming on shift remembers nothing of the previous one's work. No notes left behind? Start over.

Plain Files

The most basic form of memory — markdown notes the agent writes for its future self. Claude Code uses MEMORY.md for this: the agent automatically records project patterns, decisions, and architectural notes.

Git as Memory

Commits with meaningful messages form a changelog and restore points. The agent can experiment freely, knowing it can always roll back.

Structured Notes

Plain files evolve. Instead of a flat log, the agent maintains a structured knowledge base. The pattern: write_to_notes(topic, content) + read_from_notes(topic) — an external hard drive for memory.

An example from Anthropic: an agent playing Pokemon recorded "trained Pikachu 1234 steps, 8 out of 10 levels." After a context reset, it read its own notes and picked up right where it left off.

Scratchpad

Working memory within the current session. The agent "thinks out loud" — storing intermediate results, hypotheses, a plan. Scratchpad is RAM; files are disk.

Simple thought, but it changes everything: stop making the model remember. Give it a notebook.

Context Compaction

When the context fills up, compress it. The model gets the full history and produces a summary. Old conversation gets tossed, compressed version goes at the start of the new context.

Manual compaction at logical breakpoints (after finishing a feature) beats automatic. There's also a lighter variant: cleaning up tool results — strip the verbose command outputs from history, keep just the fact that they ran.
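The mechanics can be sketched as: keep the last few turns verbatim and replace everything older with a single summary message. Here `summarize` is a stand-in for a model call (hypothetical; any LLM summarization endpoint would slot in):

```python
# Compaction sketch: old turns collapse into one summary message
# placed at the start of the new context; recent turns survive verbatim.
from typing import Callable

def compact(history: list[dict], keep_last: int,
            summarize: Callable[[list[dict]], str]) -> list[dict]:
    if len(history) <= keep_last:
        return history                       # nothing worth compressing yet
    old, recent = history[:-keep_last], history[-keep_last:]
    summary = {"role": "system",
               "content": f"Summary of earlier conversation: {summarize(old)}"}
    return [summary] + recent
```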

Task Trackers

For long-running projects, the "Initializer + Executor" pattern works well. The first agent doesn't write code — it creates a structured task list in JSON: description, status, dependencies. Each subsequent agent reads the list, picks a pending task, completes it, updates the status, and commits.

Subagents: Isolation as Strategy

The main agent can delegate a subtask to a subagent — a separate process with its own clean context window. Like a manager asking a database specialist to optimize a query: hand them the schema and the slow query, not the entire month's email thread.

Three wins:

  1. Context purity. The subagent isn't polluted by the main agent's history. The main agent might have 85% of its window occupied — the subagent starts at 5%.
  2. Specialization. You can use different models or system prompts for different subagents.
  3. Parallelization. Multiple subagents can work simultaneously.

In Claude Code, subagents are launched via the Task tool. The main agent describes the task, the subagent receives it in a clean context, does the work, and returns a structured result. The main agent's context cost is minimal.

MCP and the Tool Tax

MCP (Model Context Protocol) is an open standard defining how AI agents discover and call tools. Each MCP server adds its tool descriptions to the context. Every description costs tokens.

You feel it the moment you start working for real: connect 5–10 MCP servers (GitHub, Slack, database, analytics, monitoring) and tens of thousands of tokens in tool descriptions land in every request, even when none of them get called.

The fix is lazy loading. Claude Code uses Tool Search: tool descriptions load on demand, only when the agent decides it might need one. Saves around 85% of tokens. Other agents have similar tricks: lazy-mcp, MetaMCP.

Tool design principles:

  • Self-sufficiency: the description contains everything needed for use. The model doesn't read your README.
  • Unambiguity: user_email instead of data, validate_payment instead of process.
  • Minimalism: one tool = one atomic operation. If the description exceeds 200 words, the tool does too much.
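The three principles, applied to one hypothetical tool. The dict shape loosely follows MCP tool definitions, but the exact fields here are illustrative assumptions:

```python
# A tool description embodying the principles: unambiguous name,
# self-sufficient description, one atomic operation.
TOOL = {
    "name": "validate_payment",  # unambiguous: not "process"
    "description": "Validate a payment request against card and amount rules. "
                   "Returns {'valid': bool, 'errors': [str]}.",
    "input_schema": {
        "type": "object",
        "properties": {"user_email": {"type": "string"},      # not "data"
                       "amount_cents": {"type": "integer"}},
        "required": ["user_email", "amount_cents"],
    },
}

def too_verbose(tool: dict, limit: int = 200) -> bool:
    """Flag descriptions over ~200 words — a sign the tool does too much."""
    return len(tool["description"].split()) > limit

print(too_verbose(TOOL))  # False
```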

Memory Hierarchy

Context in production tools isn't a single file — it's a multi-level system. Claude Code's docs lay out the hierarchy:

  • System prompt — base instructions (always loaded)
  • Settings — user preferences
  • CLAUDE.md — project instructions (loaded from the repository root)
  • Rules — modular instructions, can be path-specific (loaded only when working with certain files)
  • Skills — entire folders of instructions and scripts the agent loads at its own discretion
  • Auto Memory — memory the agent forms for itself

Martin Fowler proposes a useful distinction: Instructions (orders — "write a test for this function") vs Guidance (general rules — "all tests must be independent of each other"). CLAUDE.md and rules are mostly Guidance. Chat prompts are Instructions.

Working with Large Documents

You can't just dump a 50-page PDF into the model. You need a strategy.

Chunking

Break it into pieces of 1,500–3,000 tokens with 10–20% overlap. Semantic chunking (by chapters and sections) works noticeably better than chopping at fixed lengths.
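Fixed-length chunking with overlap can be sketched in a few lines. Whitespace-split words stand in for real tokenizer tokens here (an approximation; a tokenizer like tiktoken would be exact), and the semantic variant would split on section boundaries instead:

```python
# Sliding-window chunking: each chunk repeats the tail of the previous one
# so facts near a boundary aren't ripped out of context.
def chunk(words: list[str], size: int = 2000, overlap: int = 300) -> list[list[str]]:
    step = size - overlap
    return [words[i:i + size] for i in range(0, max(len(words) - overlap, 1), step)]
```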

Contextual Retrieval from Anthropic tackles the ripped-from-context problem: before indexing, each fragment gets a description of where it came from and what the section covers. Result: at least 35% fewer retrieval failures, up to 67% with reranking.

Fact Extraction

Skip the full text. Pull a structured list of facts and figures from each chunk instead. Smaller footprint, better accuracy for analysis.

Map-Reduce

For very large documents: split into chunks, summarize each (MAP), assemble the mini-summaries into a final one (REDUCE). The MAP phase can be parallelized — speedup scales with the number of workers.
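The skeleton fits in one function. `summarize` is a stand-in for a model call; since those calls are I/O-bound, the MAP phase parallelizes naturally with a thread pool:

```python
# Map-reduce summarization: summarize chunks in parallel (MAP),
# then summarize the concatenated partials (REDUCE).
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

def map_reduce_summary(chunks: list[str], summarize: Callable[[str], str],
                       workers: int = 4) -> str:
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(summarize, chunks))  # MAP, parallel per chunk
    return summarize("\n".join(partials))             # REDUCE into one summary
```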

RAG vs Long Context

With windows getting bigger (Gemini 2M), the question keeps coming up: do we still need RAG? Research (arXiv:2501.01880) says it depends on the task.

RAG wins: the corpus is huge (> 1M tokens), freshness matters, budget is limited.

Long context wins: you need synthesis across sections, structural understanding, document < 200K.

Hybrid (the way to go): RAG for selection, long context for analysis. The cost gap is real: full 2M context on every request runs an order of magnitude more than RAG selection + 50K of relevant context.

Where This Doesn't Work

Wouldn't be honest to stop at the upsides.

Context engineering won't fix a bad model

If the model can't write Rust, no amount of context will help. Context engineering works within what the model can already do. If the task is too hard for the current generation, break it into subtasks or try a different angle.

Preparation overhead

Assembling a perfect six-layer context package for every request takes time. For quick questions ("how does this function work?") it's overkill. Context engineering pays off on repeatable tasks and with agents that chain dozens of operations.

Compaction loses information

Compression is a tradeoff. The model picks what to keep and what to toss. Sometimes it tosses what matters. Manual compaction at logical breakpoints is safer, but needs the operator paying attention.

Lost in the Middle works both ways

You can get so focused on "important stuff at the beginning and end" that the middle turns into a junk drawer. Better to cut the context down than hope positioning saves you.

Subagents add latency

Delegating to a subagent means a separate API call with its own context. On a complex task, one subagent fires dozens of requests. For anything real-time, that's too slow.

Lazy tool loading isn't free

Tool Search saves context but adds a search step. If the agent hunts for a tool before every action, that's extra requests and wasted time. Balancing tools-in-context against search frequency takes tuning.

Common mistakes

Three that come up more than anything else:

  1. Copying an entire file instead of the relevant fragment. The model gets 500 lines when it needed lines 45–60. The other 440 lines are pure noise.
  2. Not saying what NOT to do. Without constraints, the model refactors the whole file when you asked it to fix one function.
  3. Skipping acceptance criteria. The model doesn't know when to stop. You get either undercooked or overcomplicated output.

Checklist

Run through this before every serious request to a model.

Before the request:

  • Is there a source of truth (docs, code, data) in the context?
  • Is the task clearly described?
  • Is the output format specified?
  • Is what NOT to do specified?

In the prompt:

  • "Answer only based on the provided context"
  • "If you don't know — say you don't know"

After the response:

  • Are the facts verified?
  • Do the referenced functions and libraries actually exist?
  • Were characteristics of one object attributed to another?

Five takeaways

  1. Less = better. Quality and relevance of context matter more than quantity. The goal is the smallest set of tokens with the strongest signal.
  2. Structure it. Six layers: System, Project, Task, Code, Criteria, Examples. Important stuff at the beginning and end.
  3. Persist it. Persistence = bridge between sessions. State files, structured notes, git.
  4. Isolate it. Subagents with clean context for specialized tasks.
  5. Compress it. Compaction and tool result cleanup when the context grows.

Start small: assemble a six-layer context package for one typical task and compare the result to what you get from pasting code into the chat. The difference tends to be obvious on the first try.

FAQ

At what token count does context rot become practically noticeable, and is there a threshold to monitor?

Multiple benchmarks (including studies by Chroma and others) show measurable accuracy degradation starting around 20–30K tokens for complex reasoning tasks, with a steeper drop past 50K. For simpler extraction tasks the threshold is higher — around 80–100K. A practical monitoring rule: if your average context exceeds 40K tokens per request and you're seeing inconsistent output quality, context size is the first variable to investigate. The $900/month calculation in the article assumes 100K tokens — most production agents can cut that by 60–70% through selective RAG retrieval without measurable quality loss.

How does lazy tool loading in Claude Code achieve 85% token savings, and what is the actual mechanism?

Without lazy loading, every MCP server's full tool schema is injected into the system prompt on every request — 10 servers with 5 tools each at ~200 tokens per tool description equals 10,000 tokens of overhead per call, regardless of which tools actually get used. Tool Search defers schema injection: the agent first sends a semantic search query to find relevant tool names (~50 tokens), then loads only the matching tool descriptions (~400 tokens for 2 tools). The 85% savings comes from eliminating the full schema dump for 8–9 unused tools per typical request.

When should you use manual context compaction versus automatic, and what information is typically lost?

Manual compaction at logical breakpoints (end of a feature, after a passing test suite) is safer because you control what the summary captures. Automatic compaction triggers on window fill and summarizes whatever is current — which may include half-finished reasoning, temporary debugging state, or contradictory instructions from mid-session pivots. The most common loss is architectural decisions made conversationally: "let's not use Redux here because X" survives a manual summary but gets dropped by automatic compaction which treats it as transient chat rather than binding constraint.
