How I Made My Study Bot Learn From Past Errors Using Hindsight
Last night I tested my study assistant twice with the same mistakes, and it failed both times to adapt. That was the moment I realized my “AI study companion” wasn’t actually learning anything. By morning, after adding Hindsight, it had started changing its questions based on my past errors.
What I Built
I built an AI Study Companion — a system that helps students revise topics, generate quizzes, and track weak areas over time.
At a high level, the system has three main parts:
- A frontend interface where users attempt quizzes and study topics
- A backend service that generates questions using an LLM
- A memory layer using Hindsight to store and recall past interactions
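Wired together, a stripped-down version of that pipeline might look like the sketch below. All names here are illustrative, and a plain list stands in for Hindsight's actual client API:

```python
class StudyCompanion:
    """Glue between the quiz frontend, the LLM backend, and the memory layer."""

    def __init__(self, llm, memory):
        self.llm = llm        # callable: prompt -> generated questions
        self.memory = memory  # append-only store standing in for Hindsight

    def start_quiz(self, user_id, topic):
        # Pull past mistakes so the prompt can target weak areas
        past = [m for m in self.memory
                if m["user_id"] == user_id and not m["correct"]]
        prompt = f"Generate quiz questions on {topic}."
        if past:
            missed = sorted({m["topic"] for m in past})
            prompt += f" Emphasize areas the student missed before: {missed}."
        return self.llm(prompt)

    def record_answer(self, user_id, topic, question, answer, correct):
        self.memory.append({
            "user_id": user_id, "topic": topic, "question": question,
            "user_answer": answer, "correct": correct,
        })
```

With a stub LLM you can see the prompt change once a mistake is on record: after `record_answer(..., correct=False)`, the next `start_quiz` prompt includes the missed topics.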
The goal wasn’t just to generate questions — it was to make the system adapt based on what the student struggles with.
The Problem: Stateless AI is Useless for Learning
Initially, my system worked like most basic AI apps:
- User selects a topic
- The system generates questions
- User answers them
- Session ends
And that’s it.
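That stateless flow boils down to a loop where nothing survives the session. A minimal sketch, with hypothetical `llm_generate` and `ask_user` callables standing in for the real backend and frontend:

```python
def run_session(topic, llm_generate, ask_user):
    """One quiz session: generate, ask, score, then forget everything."""
    questions = llm_generate(topic)  # fresh questions every time
    score = 0
    for q in questions:
        answer = ask_user(q["prompt"])
        if answer == q["expected"]:
            score += 1
    return score  # nothing about the mistakes survives this return
```

The return value is the only output; which questions were missed, and why, is discarded the moment the function exits.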
The next time the user came back, the system had no idea:
- What they got wrong
- Which topics they struggled with
- Whether they improved
It was basically starting from zero every time.
I tried increasing context size, passing previous answers into prompts, and even storing logs — but none of it worked reliably.
That’s when I realized I needed persistent memory, not just temporary context.
Adding Hindsight Memory
I integrated Hindsight as the memory layer for the agent.
Instead of just generating responses, the agent now:
- Stores past quiz attempts
- Tracks incorrect answers
- Learns patterns over time
Here’s how I structured memory.
Storing User Mistakes
Every time a user answers a question, I store it as an experience:
```python
from datetime import datetime

# Record the attempt as an experience the agent can recall later
hindsight.add_memory({
    "user_id": user_id,
    "topic": "JavaScript Closures",
    "question": question,
    "user_answer": answer,
    "correct": False,
    "timestamp": datetime.now().isoformat(),
})
```
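Once attempts are stored, the recalled memories can be aggregated to find weak topics. The recall call itself depends on Hindsight's API, so this sketch assumes the memories come back as dicts shaped like the one stored above:

```python
from collections import defaultdict

def weak_topics(memories, min_attempts=2, threshold=0.5):
    """Return topics whose error rate is at or above the threshold."""
    stats = defaultdict(lambda: [0, 0])  # topic -> [wrong, total]
    for m in memories:
        stats[m["topic"]][1] += 1
        if not m["correct"]:
            stats[m["topic"]][0] += 1
    return [
        topic
        for topic, (wrong, total) in stats.items()
        if total >= min_attempts and wrong / total >= threshold
    ]

memories = [
    {"topic": "JavaScript Closures", "correct": False},
    {"topic": "JavaScript Closures", "correct": False},
    {"topic": "Promises", "correct": True},
    {"topic": "Promises", "correct": True},
]
print(weak_topics(memories))  # → ['JavaScript Closures']
```

The `min_attempts` floor keeps a single unlucky answer from flagging a topic; the resulting list is what the question generator gets told to emphasize.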