By Lakshay | Team Clarion | Code Mentor AI
"Six of us kept hitting the same wall — great platforms like LeetCode and Scrimba that teach you well but forget you the moment you leave. When we decided to build Code Mentor AI, everyone rushed to the AI features. I had a different question: how do you actually show a student what the AI remembers about them, in a way that feels helpful rather than creepy?"
The Frontend Problem Nobody Talks About
Most agent memory demos show you a JSON blob or a terminal output. That's fine for a proof of concept. For a real learning product, it's useless. The hardest part of building Code Mentor AI wasn't the memory system; it was surfacing that memory to a student in a way that feels natural, timely, and actually helps them learn.
I owned the entire frontend: built with Next.js, integrated with the Monaco editor, and wired up to every one of our six AI features through FastAPI.
The Stack
Next.js as the framework gave us server-side rendering where we needed it (the dashboard, the profile page) and fast client-side transitions inside the code editor. Monaco handled the coding environment. Every AI feature (the Bug Fingerprint, the Socratic hints, the context-aware debugger, the session welcome) has a dedicated UI component that I built and connected to the backend.
The Hardest UI Problem: The Memory Panel
The context-aware debugging panel was the trickiest to get right. When a student hits an error, the backend runs a vector similarity search and returns the top-3 most similar past mistakes. The panel needs to show these without breaking the student's focus on their current code.
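The ranking the panel displays can be sketched in a few lines. This is an illustrative reconstruction, not the actual Code Mentor AI backend code; the `PastMistake` shape and function names are assumptions.

```typescript
// Illustrative sketch of a top-k similarity search over past mistakes.
// In the real product this runs server-side over a vector store; the
// shapes and names here are hypothetical.
interface PastMistake {
  id: string;
  summary: string;
  embedding: number[];
}

// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k past mistakes most similar to the current error's embedding.
function topSimilarMistakes(
  current: number[],
  history: PastMistake[],
  k = 3,
): PastMistake[] {
  return history
    .map((m) => ({ m, score: cosineSimilarity(current, m.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map(({ m }) => m);
}
```

The frontend only ever sees the sorted result, which keeps the panel component simple: render a list of three, newest score first.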
First version: a modal. Students hated it; it covered their code. Second version: a sidebar that pushed the editor. Too disruptive. Final version: a collapsible overlay anchored to the bottom of the editor, with a subtle pulse animation when new results come in. That version tested well.
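The key behavior of the final version is that new results never force the panel open; they only trigger the pulse. That logic can be sketched as a small reducer (the state shape and action names here are hypothetical, not our actual component code):

```typescript
// Hypothetical sketch of the overlay's state logic: new similar-mistake
// results set a "pulsing" flag while the panel stays collapsed, so the
// student's focus on the editor is never stolen.
type PanelState = { open: boolean; pulsing: boolean; results: string[] };

type PanelAction =
  | { type: "results"; results: string[] } // new results arrived from the backend
  | { type: "toggle" };                    // student clicks the collapsed bar

function panelReducer(state: PanelState, action: PanelAction): PanelState {
  switch (action.type) {
    case "results":
      // Don't pop open; just pulse if the panel is currently collapsed.
      return { ...state, results: action.results, pulsing: !state.open };
    case "toggle":
      // Opening (or closing) the panel acknowledges the results and stops the pulse.
      return { ...state, open: !state.open, pulsing: false };
  }
}
```

Wiring this into React's `useReducer` keeps the pulse animation a pure function of state, which made the interaction easy to test in isolation.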
The Session Welcome Component
On login, the backend returns the student's most 'surprising' past mistake: the outlier from their history, computed using cosine similarity over CodeBERT embeddings. My job was turning that data into a welcome message that felt personal, not robotic.
The component reads the mistake type and generates a warm, specific message: 'Welcome back! Last time you had an unusual struggle with recursion base cases something you don't usually mess up.' That sentence comes from the AI. My component just renders it in the right place at the right moment.
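The outlier selection itself is simple to describe: the 'surprising' mistake is the one whose embedding is least similar, on average, to the rest of the history. This is an illustrative sketch under that interpretation; the real computation runs server-side over CodeBERT embeddings, and the function names are mine.

```typescript
// Illustrative sketch: pick the outlier mistake as the embedding with the
// lowest mean cosine similarity to all other embeddings in the history.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function mostSurprisingIndex(embeddings: number[][]): number {
  let outlier = 0;
  let lowest = Infinity;
  for (let i = 0; i < embeddings.length; i++) {
    let sum = 0;
    for (let j = 0; j < embeddings.length; j++) {
      if (i !== j) sum += cosine(embeddings[i], embeddings[j]);
    }
    const mean = sum / (embeddings.length - 1);
    if (mean < lowest) {
      lowest = mean;
      outlier = i;
    }
  }
  return outlier;
}
```

From the frontend's point of view, all of this collapses to one field in the login payload; the component's only job is timing and tone.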
Integrating with Hindsight
Every user interaction that involves memory (logging a mistake, requesting a hint, loading past context) flows through endpoints powered by Hindsight. From the frontend, this meant designing clear API contracts with the backend team so the UI always knew what shape of data to expect back from the agent memory layer.
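In practice, those contracts looked like typed shapes with runtime guards on the frontend side. The shapes below are illustrative only, not Hindsight's actual schema; the point is that the UI validates every memory payload before rendering it.

```typescript
// Hypothetical contract for a recalled past mistake. The field names are
// assumptions for illustration, not the real Hindsight API.
interface RecallResult {
  mistakeType: string;
  snippet: string;
  similarity: number; // 0..1, from the vector search
}

// Runtime guard so the UI never renders a malformed memory payload.
function isRecallResult(value: unknown): value is RecallResult {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.mistakeType === "string" &&
    typeof v.snippet === "string" &&
    typeof v.similarity === "number" &&
    v.similarity >= 0 &&
    v.similarity <= 1
  );
}
```

Guards like this turned backend schema drift into a visible, handled error state instead of a blank panel.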
The Lesson
AI memory is only as good as how you present it. The backend team built something genuinely impressive. But if the UI surfaces it at the wrong moment, in the wrong format, or with too much friction, students ignore it. The interface is not decoration. It's the product.
If you're building memory into your product, read the Hindsight documentation; its retain/recall model maps cleanly to what a frontend needs to query.
Team Clarion
• Aanchal & Pranati — Backend architecture & database
• Lakshay — Full frontend integration with Next.js
• Aman, Kinjal & Priyanshu — Dynamic AI models & AI assistance features