Many AI task failures do not happen because the model cannot modify code. They happen because the model reads the wrong context.
It may trust outdated docs, treat a roadmap as fact, read an AI-assistance note as a product rule, generalize from one failed sample, or reuse a similarly named implementation that is already obsolete.
That is why a project-specific AI delivery pipeline should not simply tell AI to "read the repository." It should prepare a context package for the task.
More context is not always better
It is tempting to give AI everything. In real projects, that often makes the result worse.
Too much context distracts the model. Messy documentation makes temporary decisions look permanent. Long chat history can revive decisions that were already rejected.
The project needs narrow context, not maximum context.
Narrow context means the task receives the facts it needs, and misleading material is not exposed by default.
Source of truth must be explicit
The most important field in a context package is the source of truth.
In a codebase, the current code and tests often outrank old docs. In a product project, public capability claims should be grounded in the README, user docs, release artifacts, and runnable demos. In a knowledge publication system, public content must trace back to sources, not only to summaries.
Without an explicit source of truth, AI tends to treat available materials as if they are equally reliable.
That is dangerous because real project materials have hierarchy:
- code and tests may outrank old documents;
- current product docs may outrank early roadmaps;
- project-local facts may outrank central methodology;
- raw evidence may outrank AI summaries;
- public claims must be more conservative than internal plans.
The context package should state these priorities.
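As one illustrative sketch, those pairwise priorities can be flattened into an explicit ordering that a worker consults whenever materials conflict. The category names and the `resolve_conflict` helper below are hypothetical, not part of any standard schema:

```python
# A minimal sketch of an explicit truth hierarchy. The category names are
# illustrative; a real project would define its own. Earlier entries win
# when two materials disagree.
TRUTH_PRECEDENCE = [
    "code_and_tests",        # may outrank old documents
    "current_product_docs",  # may outrank early roadmaps
    "project_local_facts",   # may outrank central methodology
    "raw_evidence",          # may outrank AI summaries
    "internal_plans",        # public claims must stay more conservative
]

def resolve_conflict(categories: list[str]) -> str:
    """Return the highest-priority category among conflicting materials."""
    return min(categories, key=TRUTH_PRECEDENCE.index)
```

The point is not the specific ordering, which varies by project, but that the ordering exists in writing instead of living implicitly in whoever assembled the context.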
What a context package contains
A useful package can be simple:
- project rules relevant to the task;
- source-of-truth files;
- relevant specs, ADRs, issues, PRs, or runbooks;
- relevant tests, fixtures, screenshots, or traces;
- known failures;
- validation commands;
- files that may be changed;
- files that must not be touched;
- precedence rules when context conflicts.
This is more controlled than letting the agent search the whole repository on its own.
This is where project-specific value appears
General agent tools can provide strong models, shell access, file editing, MCP, subagents, and hooks. They cannot know your project's truth hierarchy by default.
For example, a TidalFi worker must know which paths involve money, trading, KYC, or production release. A SketchUp Agent Harness worker must know how the design model, source evidence, and SketchUp scene relate. A knowledge publication worker must know that the knowledge base owns bilingual candidates while the site owns rendering and deployment.
These are not generic tool facts. They are project context.
Context packages also prevent overgeneralization
AI easily turns one example into a general rule.
In a design tool, one floor plan repair should not become a universal product rule. In a trading system, one issue-specific fix should not redefine global business semantics. In a knowledge system, one project's directory habit should not be copied into every project.
A context package can label the material:
- fact;
- evidence;
- inference;
- project-local rule;
- reusable method;
- source-specific interpretation that must not be generalized.
That reduces the chance of local experience leaking into global rules.
Conclusion
Giving AI context does not mean dumping the whole repository into the conversation.
What works is a task-level context package: small, accurate, prioritized, source-grounded, and bounded.
Much of the value of a project-specific AI delivery pipeline comes from this. It does not make the model magically smarter. It makes the model work on the right layer of truth.
Originally published on my personal site:
https://marlinbian-site.pages.dev/writing/context-package-and-source-truth/