The Smallest Setup That Makes Long-Term AI Collaboration Work

No Memory. No Agents. Just Structure.

In the previous articles, I intentionally avoided showing any setup.

No folder trees.
No config files.

No agents.md examples.

That was not an omission.
It was necessary.

Before showing how to set this up,
I needed to explain what must not survive across time.

Now we’re ready.


The Goal of This Setup

The goal is not to make AI remember.
The goal is to make context reconstructable.
That requires only one thing:

A structure that decides what is allowed to survive.

Nothing more.


The Minimal Directory Structure

This is the smallest structure that works:

.
├─ docs/
│  └─ diff_log/
├─ features/
└─ .git/

That’s it.

No logs/.
No current.md.
No session records.

Everything else is optional.


What Each Part Does

docs/diff_log/ — Canonical History

This is the only place where the past exists.
Each file represents a decision.

Not discussion.
Not exploration.
Not intent.

Only what was decided.
If it’s not here, it does not exist historically.
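A concrete sketch of what this can look like (the filenames below are hypothetical; the article prescribes no naming convention, only one file per decision):

docs/diff_log/
├─ 2025-12-14-disable-auto-schema-registration.md
├─ 2025-11-30-require-explicit-migrations.md
└─ 2025-11-12-drop-legacy-import-path.md

A date prefix keeps decisions in chronological order, but any stable, one-decision-per-file naming works.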


features/ — Safe Space for In-Progress Thinking

Everything unfinished goes here:

  • experiments
  • drafts
  • probes
  • failed attempts
  • half-formed ideas

This directory is allowed to be messy.
Nothing here is authoritative.

features/ is where thinking is free
because it is not preserved as truth.
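By contrast, a snapshot of features/ might look like this (again hypothetical; the only rule is that nothing in it is treated as a decision):

features/
├─ schema-registry-probe/
├─ import-rewrite-draft.md
└─ batching-experiment-failed.md

It can be reorganized, rewritten, or deleted at any time without touching history.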


.git/ — The Authority of Time

Git is not just version control here.
It is the context authority.

  • what exists at a commit exists
  • what was rolled back never happened
  • what is not committed is invisible

This is how context rollback works naturally.
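In practice, ordinary Git commands are the entire interface to that authority. A minimal sketch (the commit reference is a placeholder):

# the history of decisions is just the history of diff_log
git log --oneline -- docs/diff_log/

# reverting the commit that recorded a decision removes it from the current tree
git revert <commit>

# anything still uncommitted is, by this rule, not yet part of the context
git status --short

Nothing extra is needed: reconstruction means reading what Git says exists right now.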


The diff_log File Format (Minimal)

A diff log does not need much structure.
It needs clarity, not completeness.
Example:

# Decision: Disable Auto Schema Registration
Date: 2025-12-14

## Decision
Automatic schema registration is disabled by default.

## Reason
Explicit control is required to prevent accidental incompatibilities.

## Impact
- CLI requires explicit registration step
- Existing examples updated

## Open
- Should this be configurable per environment?

That’s enough.

No templates.
No enforced sections.

The rule is simple:

If someone reads this months later,
they must understand why the decision exists.


What Must Never Go into diff_log

This is more important than the format.
Never put these into diff_log/:

  • brainstorming notes
  • rejected options without context
  • temporary assumptions
  • “current thinking”
  • session summaries

If it wasn’t decided, it doesn’t belong here.


The Only Instruction Given to AI

This setup works with exactly one instruction:

When unsure, consult past decision diffs.
Only decisions go into diff_log.

That’s it.

No role definition.
No behavior constraints.
No system-level micromanagement.

The AI is free to explore —
but it cannot decide what survives.
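Where this instruction lives is left open. One hedged option, borrowing the agents.md name from the earlier articles in this series, is a root-level instruction file whose entire contents are those two lines:

agents.md (entire contents)
When unsure, consult past decision diffs.
Only decisions go into diff_log.

If your tooling reads a different file name, the same two lines go there instead; the file name is not the point.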


Why This Is Enough

This works because:

  • AI reasoning is constrained by visibility
  • Visibility is constrained by Git
  • Git is constrained by structure

Control emerges from what is allowed to persist,
not from what is explicitly instructed.


What You Should Resist Adding

Most people break this system by adding “just one more thing”:

  • a session log
  • a current status file
  • a running summary
  • a growing instruction document

Each of these reintroduces ambiguity.

The moment thinking and decisions share a container,
long-term context starts to drift.


When to Add More (Later)

You can add more later:

  • automation
  • validation
  • summaries
  • tooling

But only after this minimal structure is stable.
If this doesn’t work,
nothing built on top of it will.


Closing

This setup looks too small to work.
That’s the point.
Long-term AI collaboration does not require more memory.
It requires less survival.

Structure is what makes that possible.


This article is part of the Context as Infrastructure series —
exploring how long-term AI collaboration depends on structure, not memory.
