By: Chiranjeevi C — React Frontend Module
Hindsight Hackathon — Team 1/0 Coders
The first time the Insights panel updated in real time — showing 'Rushing: 56%' after a user hammered out wrong code in eight seconds — I realized the UI was no longer just a form. It was a mirror.
No one asked for a simple submit-and-wait interface. We needed the frontend to feel alive — reacting to code, reflecting behavioral patterns, surfacing adaptive suggestions the moment the backend produced them. Building that in React across a weekend taught me more about real-time data flow than any tutorial ever had.
That's what wiring a live AI mentor to a React app actually looks like.
What We Built
Our project is an AI Coding Practice Mentor — a system where users write Python solutions to coding problems, submit them, and receive personalized feedback driven by behavioral memory. The system doesn't ask users what kind of learner they are. It watches them code and figures it out.
The stack:
FastAPI backend handling code execution, behavioral signals, and LLM hint generation
Groq (LLaMA 3.3 70B) for generating adaptive, context-aware hints
Hindsight for persistent memory of user patterns across sessions
React frontend — my responsibility — wiring all of it into a single coherent interface
My role was everything the user sees and interacts with. The code editor, the submit flow, the live feedback panel, the Insights section that surfaces behavioral patterns — and the adaptive problem suggestions that change based on how a user has been solving (or failing to solve) problems.
The Problem with Static UIs for Dynamic Systems
The backend was sophisticated from the start. Behavioral signals. Pattern detection. Memory that persisted across sessions. Adaptive problem selection. But none of that matters if the UI treats the interaction like a form submission.
The failure mode I wanted to avoid: user submits code → spinner → text box appears with feedback. That's a FAQ page with extra steps.
The insight was that the frontend had to make three things feel connected:
What the user is doing right now — the code they're typing
What the system knows about them — behavioral patterns from past sessions
What the system recommends next — adaptive problem selection
These three live in completely separate backend modules. My job was making them feel like one fluid experience in the browser.
Building the Code Editor Panel
The centerpiece of the interface is the code editor. I used Monaco Editor — the same engine that powers VS Code — embedded inside a React component. What looks simple here is actually doing something important:
```javascript
import { useState } from 'react';
import Editor from '@monaco-editor/react';

function CodePanel({ problem, onSubmit }) {
  const [code, setCode] = useState(problem.starter_code);
  const [editCount, setEditCount] = useState(0);
  const [startTime] = useState(Date.now());

  const handleChange = (value) => {
    setCode(value);
    setEditCount(prev => prev + 1); // every keystroke counts as an edit
  };

  const handleSubmit = () => {
    const timeTaken = Math.floor((Date.now() - startTime) / 1000);
    onSubmit({ code, editCount, timeTaken });
  };

  return (
    <>
      <Editor language="python" value={code} onChange={handleChange} />
      <button onClick={handleSubmit}>Submit</button>
    </>
  );
}
```
editCount and timeTaken look like UI decorations. They're actually behavioral telemetry — the signals that feed the cognitive analyzer and get stored in Hindsight memory. The frontend was generating real pattern data from the moment the user started typing.
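To make that concrete, here's a rough sketch of how the editor's telemetry could be shaped into a submission payload. The field names and snake_case convention are my assumptions for illustration, not the project's actual API contract:

```javascript
// Hypothetical sketch: shaping frontend telemetry into the payload the
// backend analyzer might expect. Field names are illustrative, not the
// project's real schema.
function buildSubmissionPayload({ code, editCount, timeTaken }, userId) {
  return {
    user_id: userId,
    code,
    // Behavioral telemetry captured by the editor component:
    edit_count: editCount,  // keystroke-level edit events
    time_taken: timeTaken,  // seconds from problem load to submit
  };
}

const payload = buildSubmissionPayload(
  { code: 'def solve(): pass', editCount: 42, timeTaken: 8 },
  'user-123'
);
```

The point is that the payload is assembled entirely from state the component was already tracking — no extra instrumentation pass needed later.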
The Live Feedback Flow
Once the user hits Submit, the API call carries the code, timing data, and edit count to the backend. The backend runs the code, evaluates it, detects patterns, updates Hindsight memory, and returns a structured response. I split the feedback panel into three distinct zones:
Test Results — pass/fail grid for each test case, shown immediately
Hint — the LLM-generated, behaviorally-tailored message, rendered with markdown
Insights — rolling pattern percentages pulled from Hindsight memory
Each zone updates independently via React state. When the API response lands, a single setState call fans out across all three panels simultaneously. No sequential loading. Everything updates at once.
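One way to sketch that fan-out is a single function that splits the response into the three zones before one state update applies all of them. The response shape here is an assumption, not the project's actual schema:

```javascript
// Hypothetical response shape — the real backend schema may differ.
// A single object carries all three zones, so one state update
// refreshes test results, hint, and insights together.
function splitFeedback(response) {
  return {
    testResults: response.test_results ?? [], // pass/fail per test case
    hint: response.hint ?? '',                // LLM-generated markdown
    insights: response.insights ?? {},        // rolling pattern percentages
  };
}

const zones = splitFeedback({
  test_results: [{ name: 'case 1', passed: true }],
  hint: 'Slow down and re-read the prompt.',
  insights: { rushing: 56, overthinking: 40 },
});
```

Because all three slices arrive in one object, there's no window where the hint refers to a submission the test grid hasn't caught up with yet.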
The Insights Panel: Making Memory Visible
The Insights panel was the most technically interesting piece to build. It's not showing the current submission's result. It's showing a user's behavioral history across all sessions — information that lives in Hindsight Cloud memory, not in any local state.
```javascript
const fetchInsights = async (userId) => {
  const res = await fetch(`/api/insights/${userId}`);
  const data = await res.json();
  setInsights(data);
};

useEffect(() => {
  fetchInsights(userId);
}, [userId]);

// After each submission:
const handleSubmitResponse = (response) => {
  setFeedback(response);
  fetchInsights(userId); // refresh memory state
};
```
The rendered Insights block showed pattern percentages — Rushing 56%, Overthinking 40% — as horizontal progress bars. The numbers came directly from Hindsight's stored session data, accumulated across every problem that user had ever attempted.
The Bug That Took Two Hours
We had a peculiar issue during integration: the Insights panel was showing stale data after submission. The API call was firing. The response was arriving. But the state wasn't updating.
The culprit was a closure problem inside the submission handler. The fetch was hitting /insights/undefined because userId was captured in a stale closure before the prop was available.
```javascript
// BROKEN — userId captured in stale closure
const handleSubmit = useCallback(async (payload) => {
  const result = await submitCode(payload);
  fetchInsights(userId); // userId is undefined here
}, []);

// FIXED — userId in dependency array
const handleSubmit = useCallback(async (payload) => {
  const result = await submitCode(payload);
  fetchInsights(userId);
}, [userId]);
```
The fix was adding userId to useCallback's dependency array. One word. Two hours of network-tab inspection to find it. Left unfixed, this pattern would have caused subtle, hard-to-reproduce bugs in production.
Deploying to Vercel
Frontend deployment was the cleanest part of the project. Vercel's GitHub integration triggered automatic deployment on every push to main. The only configuration that mattered was proxying API calls to the backend on Render:
```json
{
  "rewrites": [
    {
      "source": "/api/:path*",
      "destination": "https://ai-coding-mentor.onrender.com/:path*"
    }
  ]
}
```
This meant the React app could call /api/submit and Vercel would proxy it to the Render backend — no CORS issues, no hardcoded backend URLs in the client bundle. Ten lines of config that saved hours of debugging.
What I Learned
_Tracking behavioral signals in the UI is architectural, not cosmetic._
Edit count and time elapsed look like UI details. They're actually backend telemetry. Designing the component to capture and forward them from the start meant zero refactoring later.
_useCallback dependency arrays are not optional._
The stale closure bug was subtle in development and catastrophic in real use. Always list your dependencies — never leave the array empty.
_Real-time feel comes from architecture, not animation._
The actual improvement came from a single API call returning all feedback fields at once, letting all three UI zones update simultaneously. No spinners needed.
_Proxy rewrites in Vercel eliminated an entire class of CORS bugs._
Ten lines of config. Saved hours of debugging cross-origin failures and kept the client bundle clean of environment-specific URLs.
_The UI is the demo._
Backend logic can be brilliant. If the frontend doesn't make it visible, legible, and immediate, judges won't feel it working. Every design decision was in service of making memory and adaptation feel obvious without explanation.
Resources & Links
Hindsight GitHub: https://github.com/vectorize-io/hindsight
Hindsight Docs: https://hindsight.vectorize.io/
Agent Memory: https://vectorize.io/features/agent-memory