There’s a weird feeling when working with AI.
You’ve solved this task before.
You’ve already explained how you want things done.
You’ve already gotten a good result.
And yet today… you start from zero.
Again you write:
- don’t over-engineer
- don’t change the design
- keep it simple
- follow these constraints
After a few messages, you realize:

*I'm basically repeating yesterday's conversation.*
## The real problem
It’s not the model.
It’s not your prompts.
The problem is:
AI does not remember what actually worked
Every session is a reset.
## Why memory is not enough

You might say: "But there's memory."

Yes, but memory stores facts:

> user prefers concise answers

Not behavior:

> in migrations, do not touch the UI and avoid refactoring
That difference matters.
## Where the real value is lost
When you finally get a great result, something important happened.
You discovered a working pattern.
But that pattern disappears with the conversation.
## We tried to store the experience, not the chat
That’s where AEP (Agent Experience Protocol) comes in.
If something worked, save why it worked
Not prompts.
Not chat logs.
Just the essence.
## What AEP actually stores
After a successful task, AEP captures:
- intent
- constraints
- preferences
- workflow
- failure traps
- success checks
Instead of storing conversations, it stores success structure.
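In code, that success structure is just a small record. Here is a minimal sketch in Python — the `AepRecord` class name and the defaults are my own illustration; only the six field names come from AEP:

```python
from dataclasses import dataclass, field

@dataclass
class AepRecord:
    """One stored experience: the structure of a success, not the chat."""
    intent: list[str] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)
    preferences: list[str] = field(default_factory=list)
    workflow: list[str] = field(default_factory=list)
    failure_traps: list[str] = field(default_factory=list)
    success_checks: list[str] = field(default_factory=list)
```

Notice what is *not* in there: no prompts, no transcript, no model name.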
## A familiar example
Task:
convert an HTML template to Next.js
What usually happens:
- AI starts “improving” things
- splits everything into too many components
- changes structure
- breaks styling
You spend 10–20 messages fixing it.
When it finally works, that’s the valuable moment.
AEP stores it like this:
```json
{
  "intent": ["Convert HTML to Next.js while preserving layout"],
  "constraints": ["Do not redesign UI", "Keep CSS intact"],
  "preferences": ["Prefer simplicity over abstraction"],
  "workflow": [
    "Analyze layout",
    "Convert HTML to JSX",
    "Preserve styles",
    "Split only when needed"
  ],
  "failure_traps": [
    "Over-engineering",
    "Unrequested redesign"
  ],
  "success_checks": [
    "Visual parity",
    "Build passes"
  ]
}
```
Next time, you skip the pain.
## What changes in practice
Without this:
- you repeat instructions
- AI drifts
- results are inconsistent
With this:
- the agent starts aligned
- avoids past mistakes
- gets to the result faster
## Why this works
Because:
we stop storing what was said
and start storing what worked
## Where it lives

Right inside your repo:

```
.agent/aep/
```
This means:
- version controlled
- team-visible
- agent-readable
- portable
## How you use it

1. Solve a task with AI.
2. Get a good result.
3. Save it as an AEP record.
4. Reuse it next time.
## This is not another abstraction layer
This is not about:
- more prompts
- complex frameworks
- magic
It’s about something very practical:
stop losing working solutions
## The uncomfortable truth
We already know how to get good results from AI.
But:
we don’t know how to keep them
AEP is an attempt to fix that.
## Status

Experimental and open source.

Install:

```shell
npx @smithery/cli@latest skill add Robi-Labs/aep
```
## TL;DR
AI keeps starting from zero.
AEP helps it remember what actually worked.