Why My Coding Agent Stopped Repeating the Same Hints
While testing our coding mentor, I noticed something frustrating.
The AI wasn’t wrong.
It was just repetitive.
A user fails a problem.
The mentor says: “Try breaking it into smaller parts.”
They fail again.
Same hint.
Third time?
Still the same hint.
At that point, correctness doesn’t matter. The system isn’t helping anymore.
That’s when I realized the real problem wasn’t intelligence.
It was memory.
What We Built
CodeMentor AI is a coding practice platform with an AI mentor that adapts based on user behavior.
The interface is simple:
- Monaco Editor in the center
- Problem list on the left
- AI mentor on the right
The goal was not to make a smarter model.
The goal was to make the system aware of the user.
The Core Problem
Most AI tools are stateless.
They don’t remember:
- previous mistakes
- repeated failures
- learning patterns
So even if the response is correct, it doesn’t evolve.
Every session starts from zero.
The Approach
Instead of increasing model complexity, I introduced a simple memory loop:
retain → recall → reflect
This became the foundation of the system.
retain() — Store Meaningful Events
After every submission:
```javascript
await retain(
  hindsightKey,
  bankId,
  `Failed ${problem.title} due to incorrect loop boundary`,
  {
    type: "failure",
    topic: "arrays",
    difficulty: problem.difficulty,
    timestamp: new Date().toISOString()
  }
);
```
Key decision: do not store raw code. Store the reason behind the failure.
This keeps memory compact and useful.
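To make that decision concrete, here is a minimal sketch of a submission handler that summarizes *why* the tests failed and stores only that summary. `retain` is replaced by an in-memory stub, and `summarizeFailure` plus the test-result shape are hypothetical, not part of the real API:

```javascript
// In-memory stand-in for the real retain() call, for illustration only.
const memoryBank = [];
async function retain(key, bankId, text, metadata) {
  memoryBank.push({ bankId, text, metadata });
}

// Hypothetical helper: turn a failing test result into a short reason,
// instead of storing the user's raw code.
function summarizeFailure(problem, result) {
  if (result.error === "off-by-one") {
    return `Failed ${problem.title} due to incorrect loop boundary`;
  }
  return `Failed ${problem.title}: ${result.error}`;
}

async function onSubmissionFailed(problem, result) {
  await retain("demo-key", "user-123", summarizeFailure(problem, result), {
    type: "failure",
    topic: problem.topic,
    difficulty: problem.difficulty,
    timestamp: new Date().toISOString(),
  });
}
```

The memory entry stays a single sentence plus metadata, so the bank grows slowly even across many attempts.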
recall() — Fetch Relevant Context
Before solving a new problem:
```javascript
const memories = await recall(
  hindsightKey,
  bankId,
  "user mistakes in array problems",
  5
);
```
This retrieves only what is relevant:
- similar past mistakes
- related topics
- repeated patterns
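To show how recalled memories feed the mentor, here is a sketch that folds them into the prompt context. The `recall` stub uses naive keyword overlap as a placeholder for whatever retrieval the real memory service performs; the stored strings are made up for the example:

```javascript
// Example stored failure summaries (illustration only).
const stored = [
  "Failed Two Sum due to incorrect loop boundary",
  "Failed Rotate Array due to missed edge case: empty input",
  "Solved FizzBuzz on first attempt",
];

// In-memory stand-in for recall(): naive keyword-overlap scoring,
// a placeholder for the real service's semantic search.
async function recall(key, bankId, query, limit) {
  const words = query.toLowerCase().split(/\s+/);
  return stored
    .map((text) => ({
      text,
      score: words.filter((w) => text.toLowerCase().includes(w)).length,
    }))
    .filter((m) => m.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((m) => m.text);
}

// Fold the recalled memories into the mentor prompt as context.
async function buildPromptContext(query) {
  const memories = await recall("demo-key", "user-123", query, 5);
  return memories.length
    ? `Relevant history:\n- ${memories.join("\n- ")}`
    : "No relevant history.";
}
```

The mentor's system prompt then carries only the handful of entries that match the current problem, not the whole history.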
reflect() — Convert Memory into Insight
Before generating a response:
```javascript
const reflection = await reflect(
  hindsightKey,
  bankId,
  "user weaknesses in problem solving"
);
```
This produces insights such as:
- misses edge cases
- struggles with loops
- relies on brute force
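The same idea in miniature: a stubbed `reflect` that distills stored failure summaries into recurring weaknesses, which the mentor turns into a targeted hint. The real `reflect` call does this with a model; the keyword counting and `hintFor` helper below are deliberately simplistic illustrations:

```javascript
// Example failure summaries (illustration only).
const failures = [
  "Failed Two Sum due to incorrect loop boundary",
  "Failed Rotate Array due to incorrect loop boundary",
  "Failed Binary Search due to missed edge case: empty input",
];

// Stub reflect(): count repeated failure reasons; only patterns seen
// more than once count as a weakness.
async function reflect(key, bankId, query) {
  const counts = {};
  for (const f of failures) {
    const reason = f.split(" due to ")[1];
    if (reason) counts[reason] = (counts[reason] || 0) + 1;
  }
  return Object.entries(counts)
    .filter(([, n]) => n > 1)
    .map(([reason]) => reason);
}

// Turn a detected weakness into a specific, memory-aware hint.
function hintFor(weaknesses) {
  if (weaknesses.includes("incorrect loop boundary")) {
    return "You've missed boundary conditions in similar problems before. Check your loop limits.";
  }
  return "Check your logic.";
}
```

A one-off mistake falls below the threshold and gets the generic hint; only a repeated pattern triggers the specific one.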
What Changed
Before memory:
“Check your logic.”
After memory:
“You’ve missed boundary conditions in similar problems before. Check your loop limits.”
That shift makes the response specific and actionable.
What I Learned
- Memory is more important than model size.
- Better context leads to better responses.
- Summarized insights are more useful than raw data.
- Query design directly impacts output quality.
- Testing across multiple sessions is necessary to see real value.
Current Behavior
The system now:
- adjusts hints based on past attempts
- avoids repeating the same feedback
- builds a pattern of user weaknesses
It doesn’t feel like a stateless tool anymore.
It feels like a system that tracks progress.
Try It
https://codemento.netlify.app/
Final Thought
AI systems don’t improve just by being more powerful.
They improve by remembering.
Once memory is introduced, the system stops starting over.
And that’s when it becomes useful.
Built at HackWithIndia 2026 by Sarvesh Gajakosh.
Team: Krishna Hasare, Karan Patil, Priya Vhatkar, Siddhi Shinde, Aastha Sawade