Loose Instructions, Strict History
In the previous article, I explained how I can still consult AI
about decisions made months ago.
The natural follow-up questions are:
- Where is the instruction?
- Where do you control the AI?
- Where is agents.md?
- Where is the system prompt?
The answer is unintuitive.
I don’t control the AI by telling it how to think.
I control it by deciding what is allowed to survive across time.
This article explains how that works in practice.
The Wrong Place to Look for Control
Most people look for control in places like:
- system prompts
- agent definitions
- long instruction files
- strict role descriptions
These feel like control.
But they are fragile.
- They reset with sessions
- They drift as projects evolve
- They require constant maintenance
- They silently conflict with reality
Worse, they give a false sense of safety.
If the structure allows bad context to survive,
no prompt can save you.
The Actual Control Point: History
The real control point is not instruction.
It is history.
Specifically:
What is allowed to become history, and what is not.
In my setup, history has one rule:
Only decisions become history.
Everything else is temporary.
Strict History, Not Strict Instructions
This is the core inversion.
I use:
- Loose instructions
- Strict history
The AI is free to:
- explore ideas
- speculate
- suggest alternatives
- be wrong
- change its mind
But it is not free to decide what becomes canonical.
Only artifacts that pass through the repository structure are allowed to persist across time.
That structure is the control mechanism.
What Becomes History
History is represented as decision diffs.
Each entry captures:
- what was decided
- why it was decided
- what changed
- what remains unresolved
No polish.
No storytelling.
No retrospective rewriting.
If something didn’t result in a decision, it does not enter history.
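To make this concrete, here is a minimal sketch of what one decision entry could look like if modeled as a data structure. The field names (`decided`, `rationale`, `changes`, `open_questions`) are hypothetical, chosen only to mirror the four points above; in practice the entries are plain decision diffs in the repository, not Python objects.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionEntry:
    """One decision diff. Field names are illustrative, not a fixed schema."""
    decided: str           # what was decided
    rationale: str         # why it was decided
    changes: list[str]     # what changed as a result
    open_questions: list[str] = field(default_factory=list)  # what remains unresolved

# The only kind of artifact allowed to persist across sessions.
entry = DecisionEntry(
    decided="Persist only decision diffs across sessions",
    rationale="Session output drifts; recorded decisions do not",
    changes=["added a decisions/ directory", "trimmed agents.md to a few lines"],
    open_questions=["how to mark a decision as superseded"],
)
```

Anything that does not answer those four questions never gets written down as history.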
What Does Not Become History
This is equally important.
The following are explicitly excluded:
- session conversations
- brainstorming notes
- partial designs
- experiments
- dead ends
All of that lives outside history.
Exploration is allowed.
Persistence is not.
The AI can think freely,
but it cannot create facts by accident.
Why I Don’t Rely on agents.md
You might expect a detailed agents.md file here.
I intentionally keep it minimal.
Why?
Because detailed behavioral instructions:
- rot over time
- encode assumptions that later become false
- silently override real project state
Instead, the AI learns the rules by observing what survives and what doesn’t.
History teaches behavior better than instructions.
How Context Is Reconstructed
When I ask:
“Please look at past decision logs and advise on XXX.”
The AI does not remember.
It reconstructs.
It loads:
- surviving decision diffs
- current code
- current contracts
Everything else is invisible.
This makes the reasoning stable.
The AI cannot accidentally revive ideas that were intentionally discarded.
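A rough sketch of that reconstruction step is below, assuming decision diffs live as markdown files under a `decisions/` directory and code under `src/`. Both directory names are assumptions for illustration, not the author's actual layout.

```python
from pathlib import Path

def reconstruct_context(repo_root: str = ".") -> str:
    """Rebuild context from what survived: decision diffs plus current code."""
    root = Path(repo_root)
    parts: list[str] = []

    # Surviving decision diffs, oldest first.
    for diff in sorted((root / "decisions").glob("*.md")):
        parts.append(f"## Decision: {diff.stem}\n{diff.read_text()}")

    # Current code (and contracts); chats, scratch notes, and dead ends
    # are simply never read, so they cannot leak back in.
    for source in sorted((root / "src").rglob("*.py")):
        parts.append(f"## Current code: {source.relative_to(root)}\n{source.read_text()}")

    return "\n\n".join(parts)
```

Whatever this function returns is the entire basis for the answer; nothing outside it exists.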
Minimal Setup Checklist (How-To)
If you want to reproduce this approach, the minimum setup is surprisingly small; a small bootstrap sketch follows the checklist below.
You need:
- a place where decisions are recorded
- a place where in-progress work can exist safely
- Git as the single source of truth
- a habit of asking the AI to consult history, not memory
You do not need:
- complex prompts
- persistent AI memory
- heavy agent frameworks
- strict behavioral scripts
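As a starting point, here is a tiny bootstrap script under those assumptions: `decisions/` holds decision diffs, `scratch/` holds in-progress work that is never treated as history. The directory names and README texts are placeholders, not a prescribed convention.

```python
from pathlib import Path

# Two separate places: one for decisions (history), one for everything temporary.
LAYOUT = {
    "decisions/README.md": "Decision diffs only: what, why, what changed, what is open.\n",
    "scratch/README.md": "Exploration, drafts, dead ends. Never cited as history.\n",
}

def init_structure(repo_root: str = ".") -> None:
    """Create the minimal directory structure if it does not already exist."""
    for relative_path, content in LAYOUT.items():
        target = Path(repo_root) / relative_path
        target.parent.mkdir(parents=True, exist_ok=True)
        if not target.exists():
            target.write_text(content)

if __name__ == "__main__":
    init_structure()
```

Git does the rest: committing a decision diff is what makes it history.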
Why This Scales Over Time
This approach works long-term because it does not depend on:
- human memory
- AI memory
- session continuity
- model-specific features
It depends only on structure.
As long as the structure exists, context can be reconstructed.
Closing
Loose instructions are not a weakness.
They are a prerequisite for exploration.
Strict history is what makes that exploration safe.
That is how AI can reason across time without remembering anything at all.
*This article is part of the **Context as Infrastructure** series, exploring how long-term AI collaboration depends on structure, not memory.*