1shrikantsc-spc
My AI Mentor Warned Me About a Bug Before I Even Ran the Code


I used to think personalized learning meant picking your difficulty level.
I was wrong. Real personalization means your tools remember you — your
exact mistakes, your patterns, your blind spots. That's what we built
with Kernel's Slap.

What is Kernel's Slap?

Kernel's Slap is an AI coding mentor that uses Hindsight memory to
remember how you think, fail, and improve — across every session,
every problem, every mistake.

Unlike LeetCode or HackerRank, it never forgets you.

The main moving parts:

  • Frontend: Next.js + Monaco Editor (same editor as VS Code)
  • Backend: FastAPI (Python)
  • Memory: Hindsight — retain(), recall(), learn()
  • LLM: Groq (qwen3-32b) — fast inference, no waiting

The Core Problem We Solved

Every coding platform today is stateless. You fail the same
off-by-one error on Monday. You fail it again on Wednesday.
The platform has no idea. It gives you the same generic hint
both times as if it has never seen you before.

Three specific failures this causes:

  1. Repetition without recognition — same mistake, no memory
  2. Generic help — same hint every user gets, regardless of history
  3. No real trajectory — no way to know if you're actually improving

Hindsight's agent memory solves all three.

How Hindsight Memory Works in Our System

Every single mistake calls retain():

# Log the mistake with full context: the hint given, the style used,
# and whether it worked
await retain(user_id, {
    "event": "mistake",
    "error_type": "missing_base_case",
    "language": "python",
    "problem_category": "recursion",
    "hint_given": hint,
    "hint_style": "direct",
    "resolved": False
})
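The retain()/recall() contract is easiest to see with a toy stand-in. This is not Hindsight's actual implementation (the real service does semantic retrieval); it's a hypothetical in-memory sketch that just matches stored events by error type, to show the shape of the data flowing through:

```python
from collections import defaultdict

# Hypothetical in-memory stand-in for Hindsight's retain()/recall().
# The real service does semantic vector retrieval; this version simply
# returns stored events whose error_type appears in the query string.
_store = defaultdict(list)

async def retain(user_id, event):
    # Append the event to this user's history
    _store[user_id].append(event)

async def recall(user_id, query, top_k=5):
    # Naive "retrieval": keyword match on error_type
    hits = [e for e in _store[user_id]
            if e.get("error_type") and e["error_type"] in query]
    return hits[-top_k:]  # the top_k most recent matches
```

Swapping this stand-in for the real client changes retrieval quality, not the call sites.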

Before every single hint, we call recall():

# Fetch this student's history with the current error before drafting a hint
past_patterns = await recall(
    user_id=user_id,
    query=f"mistakes with {error}",
    top_k=5
)

The agent prompt then becomes:
"This student has failed base case errors 4 times.
Last hint style was code example — it didn't work.
Try analogy instead."

This is the difference between a generic tutor and
a personal mentor.
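Concretely, assembling that prompt from recalled history might look like the sketch below. The style names and their try-order here are my assumptions for illustration, not the project's actual list:

```python
def build_mentor_prompt(patterns, error):
    """Turn recalled mistake history into mentor instructions.
    Hypothetical sketch: the hint styles and rotation order are assumed."""
    same = [p for p in patterns if p.get("error_type") == error]
    # Styles that were already tried on this error and did not land
    failed = {p["hint_style"] for p in same if not p.get("resolved", False)}
    order = ["direct", "code_example", "analogy", "socratic"]
    next_style = next((s for s in order if s not in failed), "socratic")
    return (f"This student has failed '{error}' {len(same)} times. "
            f"Styles that did not work: {sorted(failed) if failed else 'none'}. "
            f"Use the {next_style} style next.")
```

The resulting string is prepended to the system prompt, so the LLM never sees the student as a stranger.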

The Feature That Made Judges Stop — Pre-mortem Warning

Before the student clicks Run, our agent:

  1. Scans their written code
  2. Calls recall() to fetch their error history
  3. Warns them if it detects a pattern match

What the student sees:

⚠️ "Based on your last 3 recursive functions,
you tend to miss the base case.
Want me to check yours before you run?"

The agent warned them before the mistake happened.
That's not possible without persistent memory.
That's Hindsight doing its job.
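A heavily simplified version of that check could look like the following. The real agent presumably asks the LLM to judge the pattern match; this sketch hard-codes a crude heuristic (a recursive function with no `if` branch stands in for "missing base case") purely to show the pre-run flow:

```python
import re

def premortem_warning(code, history, threshold=3):
    """Return a warning string if the student's history predicts a likely
    mistake in `code`, else None. Deliberately crude sketch: recursion
    with no `if` branch stands in for a missing base case."""
    misses = sum(1 for h in history
                 if h.get("error_type") == "missing_base_case")
    if misses < threshold:
        return None
    # Crude recursion check: a def whose name reappears in its own body
    m = re.search(r"def\s+(\w+)\s*\(", code)
    body = code[m.end():] if m else ""
    recursive = bool(m) and m.group(1) in body
    if recursive and "if" not in code:
        return (f"Based on your last {misses} recursive functions, you tend "
                f"to miss the base case. Want me to check yours before you run?")
    return None
```

In the real system the history argument would come from recall() and the pattern judgment from the model, but the before/after ordering is the point: the check fires before execution, not after the traceback.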

What learn() Actually Does

After every hint interaction we call learn():

# Record which hint style actually resolved the problem, and how quickly
await learn(user_id, {
    "event": "solved",
    "hint_style": "analogy",
    "resolved": True,
    "time_to_resolve": minutes
})

If code examples never help you but analogies do —
the agent permanently switches. This is real agent
learning, not just logging.
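That switching policy can be sketched as a simple success-rate tally. This is a hypothetical stand-in for whatever learn() does internally, not Hindsight's algorithm:

```python
from collections import Counter

def best_hint_style(outcomes, default="direct"):
    """Pick the hint style with the best resolution rate.
    `outcomes` is a list of learn()-style payloads; sketch only."""
    wins = Counter(o["hint_style"] for o in outcomes if o.get("resolved"))
    tries = Counter(o["hint_style"] for o in outcomes)
    if not wins:
        return default  # nothing has worked yet; fall back to the default
    # Highest resolved/attempted ratio wins
    return max(wins, key=lambda s: wins[s] / tries[s])
```

Once analogies out-resolve code examples for a given student, every future hint request routes through the analogy style until the data says otherwise.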

What Surprised Us

We expected Hindsight to be just storage.
It turned out to be the entire decision-making layer.

Every agent response is shaped by memory:

  • What hint to give
  • What style to use
  • What problem to generate next
  • How to greet you when you return

Without recall() firing before every response,
the agent is blind. With it, the agent knows you.

Lessons Learned

  • Store rich context in retain() — not just the error, but the hint given, the style used, whether it worked
  • recall() before EVERY response — not just when stuck
  • learn() changes behavior — track what actually helped, then use it to decide next action
  • Memory is not logging — logging is passive, memory is active decision-making input

Try It Yourself

GitHub: https://github.com/1shrikantsc-spc/kernels-slap
