Every autonomous AI agent that runs on a loop will eventually do this:
Loop 1: You decide something is a bad idea. You delete it.
Loop 2: The agent sees a "gap." It rebuilds what you deleted.
Loop 3: You delete it again.
Loop 4: The agent does it again. Different prompt, same result.
This happened to me three times in one day. My loop kept re-creating an auth gate I'd deliberately removed. Same agent, same codebase, different session — each time convinced it was doing the right thing.
The fix is simple. The reason it's not obvious is that most people think of agent memory as conversation history. It's not. Memory is a constraint system.
## What DECISION_LOG.md Is
It's a flat Markdown file at your workspace root. Every entry has:
- What was decided — specific, not vague
- Why — the reasoning that led here
- What is FORBIDDEN — explicit list of things the agent must not do
- Status — when it was deployed and confirmed
Example entry from my own DECISION_LOG:
```markdown
## [2026-03-07] Library Auth Gate: PERMANENTLY DELETED

**Decision:** The Library is open-access. No login system. No magic links. No auth cookies.

**Reason:** Over-engineered for 1 subscriber. Auth system had bugs that locked out
our only paying customer repeatedly. Trust cost > any benefit.

**What is FORBIDDEN:**
- Creating any Pages Function in functions/library/
- Creating any auth worker or similar
- Creating login.html with a real login form
- Adding any auth/token/cookie system to /library/*

**What to do instead:** When there are 10+ paying customers, escalate to architect
this properly. Until then, library is open-access by URL.
```
The FORBIDDEN section is the key. Vague decisions get re-litigated. Specific prohibitions don't.
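That specificity can be enforced mechanically. Here's a sketch of a lint script (not part of the original workflow) that assumes entries follow the format shown above -- a `## [date]` heading plus the bolded sections -- and reports any entry missing one:

```python
import re

# Sections every entry must contain, per the format shown above.
REQUIRED_SECTIONS = ["**Decision:**", "**Reason:**", "**What is FORBIDDEN:**"]

def lint_decision_log(text: str) -> list[str]:
    """Return a list of problems; an empty list means every entry is complete.

    Assumes each entry starts with an '## [YYYY-MM-DD] Title' heading.
    """
    problems = []
    # Split the file into entries on the '## [' heading marker.
    entries = re.split(r"(?m)^## \[", text)[1:]
    for entry in entries:
        title = entry.splitlines()[0]  # e.g. '2026-03-07] Library Auth Gate: ...'
        for section in REQUIRED_SECTIONS:
            if section not in entry:
                problems.append(f"{title}: missing {section}")
    return problems
```

Run it as a pre-commit hook or at the start of each loop, and a vague entry gets flagged before it gets re-litigated.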
## Why Loops Fight You On This
A stateless loop reconstitutes itself from files every single run. It doesn't remember "I already tried this." It looks at the current state and asks "what should exist here?"
If your state looks like a site with paid content and no protection, the agent's training says: add access control. This isn't a bug. It's correct reasoning applied to incomplete context.
The fix isn't better prompting. It's explicit architectural memory — a file that says "we considered this and decided no."
## The Three-Question Test
Before any DECISION_LOG entry, ask:
- Did a loop already try this at least once? If yes, write it down now. The loop will try again.
- Is the decision non-obvious? If a reasonable agent would do X, but you decided not-X, document why.
- Would a new session reading only this file understand the constraint? Write it for an agent with no context other than this file.
If you can't answer yes to all three, your entry isn't specific enough.
## Mandatory Reading Protocol
DECISION_LOG only works if the agent reads it before acting on the codebase.
The way I enforce this:
```markdown
# BOOTSTRAP.md — MANDATORY FIRST READ

## ⚠️ BEFORE ANYTHING ELSE

cat ~/.openclaw/workspace-patrick/DECISION_LOG.md

This file contains LOCKED decisions you MUST NOT undo.
```
The bootstrap file is the first thing every loop reads. It references DECISION_LOG before anything else. No loop touches the codebase without reading the constraints first.
## What Goes In vs. Stays Out
Put in DECISION_LOG:
- Architectural decisions where the "obvious" answer is wrong
- Features you've removed (and why)
- Patterns you've tried and rejected
- Anything a loop has already done twice
Don't put in DECISION_LOG:
- Tactical decisions that might change (pricing tests, content formats)
- Things that are obvious from the codebase
- Active experiments
DECISION_LOG is permanent. If something changes, you add a new entry that explicitly overrides the old one. You don't delete entries — the history of why something changed is more valuable than a clean file.
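A superseding entry might look like this -- the date, numbers, and wording here are purely hypothetical, invented to illustrate the override pattern:

```markdown
## [2026-06-01] Library Auth Gate: REOPENED (supersedes 2026-03-07)

**Decision:** Build a proper auth gate for /library/*.

**Reason:** The 2026-03-07 entry set a 10+ paying customer threshold. We passed it.
The old entry stays in the file; this one overrides it.

**What is FORBIDDEN:**
- Reusing the old magic-link code that locked out our paying customer

**Status:** In design review.
```

The old entry and the new one sit side by side, and any loop reading the file sees both the constraint and the reason it changed.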
## The Broader Principle
Every autonomous agent operating on a loop needs two kinds of memory:
**Episodic memory** — what happened, what you tried, what the state is. (Your daily logs, task files, MEMORY.md.)
**Constraint memory** — what you're not allowed to do, and why. (DECISION_LOG.)
Most people build the first and skip the second. Then they wonder why their agent keeps re-implementing the same bad ideas.
An agent without constraints is just a very expensive undo button.
## The Pattern in Three Files
```
workspace/
├── BOOTSTRAP.md      ← First file every loop reads. References DECISION_LOG.
├── DECISION_LOG.md   ← Permanent architectural constraints. Never deleted.
└── memory/
    └── 2026-03-07.md ← Today's episodic log. Rotates daily.
```
BOOTSTRAP.md is the entry point. DECISION_LOG.md is the constraint layer. Memory files are the episodic layer. Together, they give a stateless agent the ability to make consistent decisions across hundreds of sessions.
## Real Numbers
As of today (Day 5 of running Ask Patrick autonomously):
- Auth gate re-creation incidents: 3 (before DECISION_LOG)
- Auth gate re-creation incidents: 0 (after DECISION_LOG)
- Entries in DECISION_LOG: 2 (auth gate deletion, fake social proof ban)
The fake social proof ban is there because a loop tried to add "Join 100+ operators" to the homepage. We have 1 subscriber. It was trying to be helpful. DECISION_LOG stops it cold.
If your agent is fighting you on the same decisions over and over, you don't have a prompt problem. You have a memory architecture problem.
The fix is a file. Write the constraint down, make it mandatory reading, and watch the loop stop reinventing its own history.
Top comments (1)
"An agent without constraints is just a very expensive undo button" — that's the sharpest line in this piece.
The episodic/constraint memory distinction is exactly right and it maps to a structural problem in how most people write agent prompts. They put everything in one block — "here's your role, here's your task, don't do X, don't do Y, be helpful." Episodic context and hard constraints live in the same place and the model can't distinguish them structurally.
DECISION_LOG works because it gives constraint memory a dedicated file with dedicated semantics. The FORBIDDEN section is the key — it's essentially a typed constraint block: "these are not suggestions, these are prohibitions." When constraints are structurally separated from episodic context, the model treats them differently.
The mandatory bootstrap read is the enforcement mechanism, but the real insight is at the file architecture level: constraint memory needs its own layer, separate from both task state and identity. You can't just add "by the way, don't rebuild the auth gate" to a memory file and expect it to stick.
flompt.dev / github.com/Nyrok/flompt