<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Agrim Jain</title>
    <description>The latest articles on DEV Community by Agrim Jain (@agrim_jain_a16108ff4972e9).</description>
    <link>https://dev.to/agrim_jain_a16108ff4972e9</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3837172%2F072c7233-173d-4390-923f-dfe085d84236.jpg</url>
      <title>DEV Community: Agrim Jain</title>
      <link>https://dev.to/agrim_jain_a16108ff4972e9</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/agrim_jain_a16108ff4972e9"/>
    <language>en</language>
    <item>
      <title>IterAI: An AI Coding Mentor That Learns From Your Failures Using Memory</title>
      <dc:creator>Agrim Jain</dc:creator>
      <pubDate>Sat, 21 Mar 2026 15:32:14 +0000</pubDate>
      <link>https://dev.to/agrim_jain_a16108ff4972e9/iterai-an-ai-coding-mentor-that-learns-from-your-failures-using-memory-n38</link>
      <guid>https://dev.to/agrim_jain_a16108ff4972e9/iterai-an-ai-coding-mentor-that-learns-from-your-failures-using-memory-n38</guid>
      <description>&lt;p&gt;We Built a Coding Mentor That Remembers Your Mistakes&lt;/p&gt;

&lt;p&gt;“Did it seriously just tell me I always mess up recursion?”&lt;/p&gt;

&lt;p&gt;We paused for a second and just stared at the screen. The model wasn’t just answering the question—it was pointing out a pattern we never explicitly told it about.&lt;/p&gt;

&lt;p&gt;That was the moment IterAI stopped feeling like just another chatbot… and started feeling like something that actually understands how people learn.&lt;/p&gt;

&lt;p&gt;What We Built&lt;/p&gt;

&lt;p&gt;Most AI coding tools today are really good at one thing: giving answers.&lt;/p&gt;

&lt;p&gt;You paste your code, they fix it, maybe explain a bit, and you move on.&lt;/p&gt;

&lt;p&gt;But here’s the problem—most of the time, you don’t actually learn anything.&lt;/p&gt;

&lt;p&gt;So we built IterAI, a coding mentor designed around a simple idea:&lt;br&gt;&lt;br&gt;
learning should come from your mistakes, not just correct answers.&lt;/p&gt;

&lt;p&gt;Instead of treating every interaction as brand new, IterAI remembers what you struggled with and uses that to guide you next time.&lt;/p&gt;

&lt;p&gt;At a high level, it has three main parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A chat interface for coding questions
&lt;/li&gt;
&lt;li&gt;A practice mode for solving problems
&lt;/li&gt;
&lt;li&gt;A memory layer powered by Hindsight that tracks mistakes and weak areas
&lt;/li&gt;
&lt;/ul&gt;
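&lt;p&gt;As a rough sketch of how those parts fit together (the names here are ours for illustration, not actual IterAI internals), the memory layer can be modeled as a small per-user store of learning events that the chat interface and practice mode both write into:&lt;/p&gt;

```typescript
// Illustrative sketch only -- these names are ours, not IterAI's actual code.

// One structured "learning event": what went wrong, and on which topic.
interface LearningEvent {
  type: "mistake";
  topic: string;
  input: string;
  feedback: string;
  timestamp: number;
}

// Minimal in-memory stand-in for the Hindsight-backed memory layer.
class MemoryLayer {
  private byUser: { [userId: string]: LearningEvent[] } = {};

  store(userId: string, event: LearningEvent): void {
    const events = this.byUser[userId] ?? [];
    events.push(event);
    this.byUser[userId] = events;
  }

  // Return only the newest events, capped so prompts stay small.
  recall(userId: string, limit = 3): LearningEvent[] {
    return (this.byUser[userId] ?? []).slice(-limit);
  }
}
```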

&lt;p&gt;The interface feels familiar.&lt;br&gt;&lt;br&gt;
But what’s happening underneath is very different.&lt;/p&gt;

&lt;p&gt;The Problem With Stateless AI&lt;/p&gt;

&lt;p&gt;Our first version was, honestly, pretty basic:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const response = await model.generate({
  prompt: userInput
});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And to be fair, it worked. It gave correct answers, explained concepts clearly, and even generated decent code. But something felt off: every interaction was isolated.&lt;/p&gt;

&lt;p&gt;A user could ask:&lt;br&gt;
“Explain recursion”&lt;br&gt;
…and later:&lt;br&gt;
“Why is my recursive function failing?”&lt;/p&gt;

&lt;p&gt;And the model would treat those as completely unrelated. No memory. No continuity. No awareness of struggle.&lt;/p&gt;

&lt;p&gt;That’s when it clicked for us: without memory, there is no mentorship.&lt;/p&gt;




&lt;p&gt;Adding Memory With Hindsight&lt;/p&gt;

&lt;p&gt;That’s where Hindsight came in. Instead of just storing conversations, we started storing learning events—structured pieces of information about how users fail:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;await hindsight.store({
  user_id: userId,
  event: {
    type: "mistake",
    topic: "recursion",
    input: userCode,
    feedback: aiExplanation,
    timestamp: Date.now()
  }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, whenever a user:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;writes incorrect logic&lt;/li&gt;
&lt;li&gt;repeats a mistake&lt;/li&gt;
&lt;li&gt;struggles with a concept&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…it becomes part of their learning history. At this point, we weren’t just storing text anymore. We were storing patterns of thinking.&lt;/p&gt;




&lt;p&gt;Retrieval Isn’t Just Search&lt;/p&gt;

&lt;p&gt;Initially, we treated memory like a search problem:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const past = await hindsight.search({
  user_id: userId,
  query: currentQuestion
});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;But this didn’t feel right. If someone asks:&lt;br&gt;
“Why is my loop not working?”&lt;br&gt;
we don’t just want similar past questions. We want:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;their previous loop mistakes&lt;/li&gt;
&lt;li&gt;patterns in their logic errors&lt;/li&gt;
&lt;li&gt;known weak areas&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So we changed our approach. Instead of “searching,” we started building context:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const memoryContext = await hindsight.getRelevant({
  user_id: userId,
  filters: ["mistakes", "weak_topics"],
  limit: 3
});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And then used that inside the prompt:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const response = await model.generate({
  prompt: `
User question: ${input}

Past weaknesses:
${memoryContext}

Explain clearly and focus on these weak areas.
`
});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;That one shift made the system feel completely different.&lt;/p&gt;
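&lt;p&gt;A minimal, self-contained version of that context-building step might look like this (formatMemoryContext and buildPrompt are our own illustrative names, not Hindsight APIs):&lt;/p&gt;

```typescript
// Hypothetical helpers -- not Hindsight APIs. They turn retrieved memory
// records into the "Past weaknesses" section of the final prompt.
interface MemoryRecord {
  topic: string;
  mistake: string;
}

// Format at most `limit` records as bullet lines for the prompt.
function formatMemoryContext(records: MemoryRecord[], limit = 3): string {
  return records
    .slice(0, limit)
    .map((r) => "- " + r.topic + ": " + r.mistake)
    .join("\n");
}

// Assemble the full prompt: question first, then known weak areas.
function buildPrompt(input: string, records: MemoryRecord[]): string {
  return [
    "User question: " + input,
    "",
    "Past weaknesses:",
    formatMemoryContext(records),
    "",
    "Explain clearly and focus on these weak areas.",
  ].join("\n");
}
```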

&lt;p&gt;When It Started Feeling Real&lt;/p&gt;

&lt;p&gt;Before adding memory, the AI gave clean, generic explanations. After adding memory, it started saying things like:&lt;/p&gt;

&lt;p&gt;“You often miss base cases in recursion. Let’s fix that step-by-step.”&lt;/p&gt;

&lt;p&gt;That’s not just answering anymore. That’s teaching.&lt;/p&gt;

&lt;p&gt;The Failure Coach&lt;/p&gt;

&lt;p&gt;We also introduced something we call the Failure Coach. Instead of simply saying “incorrect,” the system breaks mistakes down into structured insights:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;function analyzeFailure(code, expected) {
  return {
    error_type: "logic_error",
    missed_concept: "base_case",
    explanation: "...",
    steps_to_fix: [...]
  };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Over time, this builds a profile like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;struggles with recursion base cases&lt;/li&gt;
&lt;li&gt;misses edge conditions in loops&lt;/li&gt;
&lt;li&gt;confuses array indexing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now the system doesn’t just fix your code—it understands how you fail.&lt;/p&gt;
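&lt;p&gt;That kind of profile can be derived by aggregating stored failure analyses. A sketch (our own helper, assuming records shaped like analyzeFailure’s output):&lt;/p&gt;

```typescript
// Illustrative sketch: count how often each missed concept appears across
// stored failure analyses -- this is the raw material of a weakness profile.
interface FailureAnalysis {
  error_type: string;
  missed_concept: string;
}

function buildWeaknessProfile(
  failures: FailureAnalysis[]
): { [concept: string]: number } {
  const profile: { [concept: string]: number } = {};
  for (const f of failures) {
    // Increment the count for this concept, starting from zero.
    profile[f.missed_concept] = (profile[f.missed_concept] ?? 0) + 1;
  }
  return profile;
}
```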

&lt;p&gt;The Feedback Loop&lt;br&gt;
What surprised us most is how naturally this turned into a learning loop:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You make a mistake&lt;/li&gt;
&lt;li&gt;The system stores it&lt;/li&gt;
&lt;li&gt;Future responses adapt&lt;/li&gt;
&lt;li&gt;You improve&lt;/li&gt;
&lt;li&gt;New mistakes refine the system&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At some point, it stops feeling like a tool. It starts feeling like a mentor.&lt;/p&gt;

&lt;p&gt;A Real Example&lt;/p&gt;

&lt;p&gt;First interaction:&lt;/p&gt;

&lt;p&gt;User: “Why does my recursion not stop?”&lt;br&gt;
System: explains the base case.&lt;/p&gt;

&lt;p&gt;Stored:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "topic": "recursion",
  "mistake": "missing base case"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Later interaction:&lt;/p&gt;

&lt;p&gt;User: “Fix this function”&lt;br&gt;
System: “You’ve struggled with recursion base cases before—let’s check that first.”&lt;/p&gt;

&lt;p&gt;It’s a small thing. But it completely changes the experience.&lt;/p&gt;

&lt;p&gt;What Didn’t Work&lt;/p&gt;

&lt;p&gt;Not everything went smoothly.&lt;/p&gt;

&lt;p&gt;Too Much Memory&lt;/p&gt;

&lt;p&gt;At one point, we passed too much past data into the prompt. The result:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;messy responses&lt;/li&gt;
&lt;li&gt;confusion&lt;/li&gt;
&lt;li&gt;irrelevant suggestions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The fix: &lt;code&gt;limit: 3&lt;/code&gt;.&lt;/p&gt;
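&lt;p&gt;In code terms, the fix was just a hard cap applied before prompt assembly; a minimal sketch:&lt;/p&gt;

```typescript
// Keep only the most recent `limit` memories before they reach the prompt.
// A hard cap like this was what cured the noisy, over-stuffed responses.
function capMemories(memories: string[], limit = 3): string[] {
  // Negative slice index keeps the last `limit` entries.
  return memories.slice(-limit);
}
```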

&lt;p&gt;Storing Everything&lt;/p&gt;

&lt;p&gt;We initially tried storing every interaction. That quickly became noise. The fix:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;store meaningful mistakes only&lt;/li&gt;
&lt;li&gt;ignore trivial queries&lt;/li&gt;
&lt;/ul&gt;
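&lt;p&gt;One way to express that filter (a sketch; the field names and the threshold are invented for illustration):&lt;/p&gt;

```typescript
// Decide whether an interaction is worth remembering. Successes and trivial
// queries are skipped; only substantive, classified mistakes become events.
interface Interaction {
  wasMistake: boolean;
  topic: string | null;  // null when the query could not be classified
  inputLength: number;   // e.g. characters of submitted code
}

function isWorthStoring(i: Interaction): boolean {
  if (!i.wasMistake) return false;    // only store mistakes
  if (i.topic === null) return false; // unclassified noise
  return i.inputLength >= 20;         // arbitrary "non-trivial" threshold
}
```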

&lt;p&gt;Weak Categorization&lt;/p&gt;

&lt;p&gt;If you don’t tag mistakes properly, memory becomes useless.&lt;/p&gt;

&lt;p&gt;Bad:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{ "type": "error" }
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Better:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{ "type": "logic_error", "topic": "loops" }
&lt;/code&gt;&lt;/pre&gt;
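&lt;p&gt;In TypeScript, making the tags a closed type catches weak categorization at compile time; a small sketch (the specific tag lists are illustrative):&lt;/p&gt;

```typescript
// Closed tag sets, so an untyped { "type": "error" } can't sneak in.
type ErrorType = "logic_error" | "syntax_error" | "runtime_error";
type Topic = "loops" | "recursion" | "arrays";

interface TaggedMistake {
  type: ErrorType;
  topic: Topic;
}

// Any call with a tag outside the closed sets fails type-checking.
function tagMistake(type: ErrorType, topic: Topic): TaggedMistake {
  return { type: type, topic: topic };
}
```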

&lt;p&gt;What It Feels Like Now&lt;/p&gt;

&lt;p&gt;Most AI tools feel like: “Here’s your answer.”&lt;/p&gt;

&lt;p&gt;IterAI feels more like: “Here’s why you keep getting this wrong—and how to fix it.”&lt;/p&gt;

&lt;p&gt;It’s a small shift in wording. But a huge shift in experience.&lt;/p&gt;

&lt;p&gt;Lessons We Learned&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Memory without structure doesn’t work&lt;/li&gt;
&lt;li&gt;Personalization matters more than raw intelligence&lt;/li&gt;
&lt;li&gt;More context isn’t always better&lt;/li&gt;
&lt;li&gt;Mistakes are the best signal for learning&lt;/li&gt;
&lt;li&gt;Good mentorship requires remembering patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Closing Thought&lt;/p&gt;

&lt;p&gt;We didn’t set out to build a better chatbot. We wanted to build something that actually learns how you fail—and helps you improve because of it.&lt;/p&gt;

&lt;p&gt;Hindsight didn’t just make the system smarter. It made it remember. And once a system remembers, it stops being just a tool… and starts becoming a mentor.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>coding</category>
      <category>learning</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
