Varun Bamhane
AXION HACKATHON PROJECT

Why Your AI Coding Tool Should Feel Like a Friend, Not an Examiner

I want to tell you about the worst feeling in programming education.
You spend 45 minutes on a problem. You finally submit. The platform flashes red. "Wrong answer." No explanation of what you misunderstood. No memory of the 10 times you tried before. No encouragement. Just — wrong.
This is how most coding platforms make you feel. Like you are failing an exam. Every single time.
When our team built TraceX, we made one core decision that shaped everything else: this platform should feel like a friend who happens to be a great programmer — not an examiner.

The UX Philosophy Behind TraceX
Before I wrote a single line of prompt engineering, we decided on a set of principles:
No pressure language. We replaced "Error detected" with "Opportunity found." We replaced "Wrong" with "Here is what your code is thinking." We removed difficulty ratings, timers, and scores entirely.
Encouraging copy everywhere. The sidebar says "learning every day 🌱" under your name. The challenge page says "No rush — think it through." The feedback page says "every mistake is just a lesson in disguise."
Memory as the core feature. The most important thing a human mentor does is remember you. So we built the entire product around Hindsight memory — a persistent layer that stores every mistake and recalls it in future sessions.
This is not just aesthetic. Research consistently shows that stress and anxiety reduce learning effectiveness. A student who feels safe makes more attempts, takes more risks, and learns faster.

The Prompt Engineering That Makes It Personal
My job on the team was AI prompts and content. The goal was to make Groq's responses feel warm, specific, and genuinely helpful — not like a cold error message.
The key was injecting Hindsight memory directly into the prompt:
```javascript
const memoryContext = pastMistakes?.length
  ? `HINDSIGHT MEMORY — Past mistakes by this student:\n${pastMistakes.join('\n')}\n\n`
  : '';

const prompt = `${memoryContext}
You are TraceX, a warm and encouraging AI coding mentor.
Analyze this ${language} code for ${topic}.

Respond in exactly this format:

[ANALYSIS]
What is wrong. Be specific but kind. 2-3 sentences.

[BETTER APPROACH]
How to improve. Explain the correct logic warmly. 2-3 sentences.

[CORRECTED CODE]
The fixed version.

[HINDSIGHT WARNING]
If past mistakes show a pattern, call it out gently but clearly.
`;
```
The structure matters. By forcing the LLM to respond in specific sections, we ensure the feedback is always clear and actionable — never a confusing wall of text.
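One way to take advantage of that fixed format is to split the raw LLM response back into its labeled sections before rendering. This is a minimal sketch (the parser name and approach are my own, not necessarily what the TraceX repo does):

```javascript
// Split a mentor response into its four labeled sections.
// The [ANALYSIS]-style headers are the ones the prompt enforces.
function parseFeedback(response) {
  const sections = {};
  const pattern =
    /\[(ANALYSIS|BETTER APPROACH|CORRECTED CODE|HINDSIGHT WARNING)\]\s*([\s\S]*?)(?=\n\[(?:ANALYSIS|BETTER APPROACH|CORRECTED CODE|HINDSIGHT WARNING)\]|$)/g;
  let match;
  while ((match = pattern.exec(response)) !== null) {
    sections[match[1]] = match[2].trim();
  }
  return sections;
}
```

Because each section lands in its own key, the UI can style the analysis, the corrected code, and the Hindsight warning independently instead of dumping one blob of text.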

The YouTube Theory Feature
This is the feature I am most proud of.
The insight was simple: sometimes a student does not just have a bug. They have a gap in their understanding. Fixing the code without fixing the concept means they will make the same mistake again next week.
So we built a theory detection layer. After feedback is shown, TraceX asks: "Feeling unsure about the concept? Here is a great video that might help."
We curated the best YouTube videos for each topic:
```javascript
export const theoryLinks = {
  'binary-search': {
    title: 'Binary Search — Full Explanation',
    url: 'https://www.youtube.com/watch?v=P3YID7liBug',
    channel: 'NeetCode',
    concepts: [
      'Always check: left = mid + 1, right = mid - 1',
      'Loop condition should be left <= right',
      'mid = left + (right - left) / 2 to avoid overflow',
    ],
  },
  // ... more topics
};
```
The concepts array is shown as quick reminders alongside every challenge. So even if the student does not watch the video, they see the key points they need to remember.
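The lookup itself can be a one-liner with a safe fallback. This is a hypothetical helper (not from the TraceX repo), shown with a trimmed copy of the map so it runs on its own:

```javascript
// Trimmed stand-in for the theoryLinks map shown above.
const theoryLinks = {
  'binary-search': {
    title: 'Binary Search — Full Explanation',
    concepts: [
      'Always check: left = mid + 1, right = mid - 1',
      'Loop condition should be left <= right',
    ],
  },
};

// Hypothetical helper: return the reminders to render beside a challenge,
// or an empty list when the topic has no curated entry yet.
function conceptsFor(topic) {
  return theoryLinks[topic]?.concepts ?? [];
}
```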

The Challenge Content
Every challenge in TraceX is designed around a specific error pattern. Not random DSA problems — targeted exercises that force the student to confront and fix the exact mistake they just made.
The binary search challenge has an off-by-one error built in. The linked list challenge has a missing null check. The recursion challenge has a wrong base case.
This is deliberate. The goal is not to test you. The goal is to make you practice fixing the specific thing that is holding you back.
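To make the idea concrete, here is roughly what a seeded binary search challenge could look like, with the bug the student is meant to find marked in a comment. This is my own sketch of the pattern; the actual challenge code in the repo may differ:

```javascript
// Seeded challenge: one deliberate off-by-one error.
function binarySearchBuggy(arr, target) {
  let left = 0;
  let right = arr.length; // bug: should be arr.length - 1
  while (left <= right) {
    const mid = left + Math.floor((right - left) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return -1;
}

// What the student should arrive at after fixing the boundary.
function binarySearchFixed(arr, target) {
  let left = 0;
  let right = arr.length - 1;
  while (left <= right) {
    const mid = left + Math.floor((right - left) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return -1;
}
```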

What Real Personalized Learning Looks Like
The before and after of using TraceX tells the story better than anything else.
Before Hindsight memory (Session 1):

"Your binary search has an off-by-one error in the boundary condition."

After Hindsight memory (Session 4):

"You have made this boundary error in binary search 3 times now. Each time, your right pointer is set to arr.length instead of arr.length - 1. This is a consistent pattern in your code. Focus specifically on this before moving to other topics."

Same code. Completely different experience. Because now the AI knows you.
That is what real personalized learning looks like. And it is only possible because of Hindsight.

Build Something That Remembers
If you are building any AI product that interacts with users repeatedly — whether it is a tutor, a coach, a customer service agent, or anything else — the single most impactful thing you can add is persistent memory.
Hindsight makes this surprisingly simple. Two function calls. Retain and recall. And suddenly your AI goes from a goldfish to a mentor.
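The shape of that retain/recall pattern is easy to sketch. This is a minimal in-memory stand-in, not the Hindsight SDK's actual API (see the docs linked below for the real calls):

```javascript
// In-memory stand-in for a persistent mistake store, keyed by student.
const store = new Map();

// retain: record one mistake for a student.
function retain(studentId, mistake) {
  const list = store.get(studentId) ?? [];
  list.push(mistake);
  store.set(studentId, list);
}

// recall: fetch everything remembered about a student.
function recall(studentId) {
  return store.get(studentId) ?? [];
}
```

In TraceX the output of `recall` is what becomes the `pastMistakes` array injected into the prompt shown earlier; the real SDK replaces the Map with durable storage.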

GitHub: https://github.com/Anupam-codes7/TraceX
Hindsight SDK: https://github.com/vectorize-io/hindsight
Hindsight Docs: https://hindsight.vectorize.io/
Agent Memory: https://vectorize.io/features/agent-memory

The best learning happens when you feel safe to make mistakes. TraceX was built on that belief. And Hindsight made it possible.
