By: Jagadeesh R S — API Layer & Models

Hindsight Hackathon — Team 1/0 Coders
The moment I pushed the Pydantic models to GitHub, four teammates started coding simultaneously. That's what it means to unblock a team.
Nobody told me the API layer would be the most depended-on piece of the whole system. I found out when my teammates started asking for my models before I'd even finished writing them.
My job was to define the data contracts that every other module would import, and build the four routes that connected the frontend to the backend pipeline. Without those contracts, nothing else could start. That's a different kind of pressure than building a feature — it's the pressure of knowing that your mistakes ripple everywhere simultaneously.
What We Built
Our project is an AI Coding Practice Mentor — a system where users submit Python solutions to coding problems, get evaluated, and receive personalized hints based on their behavioral patterns. The system doesn't ask users how they learn. It watches how they actually behave — how fast they submit, how many times they edit, what errors they make — and adapts accordingly.
The stack:
- FastAPI backend handling code execution and routing
- Groq (LLaMA 3.3 70B) for generating personalized hints and feedback
- Hindsight for persistent behavioral memory across sessions
- React frontend with a live code editor
My role was the API Layer. I was responsible for defining all the shared Pydantic models and creating the 4 core routes that every other team member depended on. Without my work, the entire team was blocked.
Why the API Layer Goes First
In our team structure, I was the foundation. Before I pushed the Pydantic models, nobody else could properly define their data shapes. Person 2 needed CodeSubmission and EvalResult. Person 3 needed FeedbackOut. Person 4 needed EvalResult and UserProfile. Person 1 needed UserProfile and PatternRecord.
The dependency chain was clear: my models → everyone else's code. So the first thing I did after cloning the repo was create app/models/__init__.py with five Pydantic models that defined the contracts for the entire system.
Every hour I delayed was an hour four teammates spent writing placeholder dicts that would need to be replaced. I pushed models first. Everything else could wait.
Building the Pydantic Models
The five models I built were CodeSubmission, ProblemOut, EvalResult, FeedbackOut, and UserProfile. Each one represents a data contract between different parts of the system. The most critical one was EvalResult:
from typing import List
from pydantic import BaseModel

class EvalResult(BaseModel):
    passed_count: int
    total: int
    all_passed: bool
    error_types: List[str]
    edge_case_results: List[dict]
    execution_time_ms: int
Midway through the project, Person 2 asked me to add two extra fields — edge_case_results and execution_time_ms. I updated the model and pushed. Their execution engine worked immediately. That's the value of a shared schema — one change, propagated everywhere, zero miscommunication.
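The post only shows two of the five models verbatim. For context, here is a minimal sketch of what the other three contracts might look like, with field names inferred from how the article describes their use (the real fields in app/models/__init__.py may differ):

```python
# Hedged sketch of the remaining three contracts (ProblemOut, FeedbackOut,
# UserProfile). Field names are assumptions inferred from the article,
# not the actual repo definitions.
from typing import Dict, List, Optional
from pydantic import BaseModel

class ProblemOut(BaseModel):
    problem_id: str
    title: str
    description: str
    difficulty: str           # assumed: "easy" | "medium" | "hard"
    test_cases: List[dict]

class FeedbackOut(BaseModel):
    hint: str                 # LLM-generated, personalized hint
    encouragement: str
    next_steps: List[str]

class UserProfile(BaseModel):
    user_id: str
    dominant_pattern: Optional[str] = None   # e.g. "overthinking"
    total_sessions: int = 0
    mistake_categories: Dict[str, int] = {}
```

Even as sketches, contracts like these give every consumer a typed surface to import instead of guessing at dict keys.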
The CodeSubmission model was equally important because it captured behavioral signals directly from the frontend:
class CodeSubmission(BaseModel):
    user_id: str
    problem_id: str
    code: str
    language: str
    time_taken: float
    attempt_number: int
    code_edit_count: int
Those last three fields — time_taken, attempt_number, code_edit_count — are not just metadata. They're the behavioral signals that feed the cognitive analyzer and eventually get stored in Hindsight memory. Defining them in the model meant every downstream module could rely on them being present and typed correctly.
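That guarantee comes from Pydantic validation at the API boundary. A small sketch of what it buys you (the model mirrors the one above; the payloads are made up for illustration):

```python
# Pydantic enforces presence and type of every behavioral signal,
# so downstream modules never receive an incomplete submission.
from pydantic import BaseModel, ValidationError

class CodeSubmission(BaseModel):
    user_id: str
    problem_id: str
    code: str
    language: str
    time_taken: float
    attempt_number: int
    code_edit_count: int

# Well-formed payload: Pydantic coerces the string "42.5" into a float.
sub = CodeSubmission(
    user_id="u1", problem_id="p1", code="print('hi')",
    language="python", time_taken="42.5",
    attempt_number=1, code_edit_count=3,
)
assert sub.time_taken == 42.5

# Payload missing code_edit_count fails loudly instead of silently
# propagating a partial dict through the pipeline.
try:
    CodeSubmission(user_id="u1", problem_id="p1", code="x",
                   language="python", time_taken=1.0, attempt_number=1)
except ValidationError as e:
    print(f"rejected: {len(e.errors())} missing field(s)")
```

FastAPI runs this validation automatically on every request body typed as CodeSubmission, which is why the contract only had to be defined once.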
Creating the 4 API Routes
Once the models were done, I created four route files that defined the API surface of the entire application:
- POST /submit_code — receives user code and runs it through the execution engine
- GET /get_problem/{problem_id} — serves problems from the problem store
- POST /get_feedback — calls Groq LLM with user profile injected from Hindsight memory
- GET /user_profile/{user_id} — returns the user's cognitive patterns and behavioral history
One lesson I learned quickly: static routes must always come before dynamic routes in FastAPI. I initially had /get_problem/{problem_id} at the top of the file, which caused FastAPI to swallow /problems and /problems/difficulty/{d} — routing everything to the dynamic handler. Moving the dynamic route to the bottom fixed it immediately.
# CORRECT order — static first, dynamic last
@router.get('/problems')
def list_problems(): ...
@router.get('/problems/difficulty/{difficulty}')
def problems_by_difficulty(difficulty: str): ...
@router.get('/get_problem/{problem_id}') # dynamic — always last
def get_problem(problem_id: str): ...
Building the Visualization Layer
My second major task was the Visualization Layer. This endpoint reads from Person 1's Hindsight memory and returns structured data for the frontend charts. I built it with dummy data first while waiting for the memory system to be complete. Once the Hindsight integration was done, I updated GET /visualizations/{user_id} to call retrieve_memory() and return real pattern trends, mistake categories, and confidence scores.
The real data response after a student submitted code:
{
  "total_sessions": 1,
  "pattern_trends": [
    {"session": 1, "pattern": "overthinking", "confidence": 0.5}
  ],
  "mistake_categories": {"overthinking": 1},
  "dominant_pattern": "overthinking"
}
Dummy data first, real data when ready. This approach kept the team moving without anyone being blocked on anyone else.
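The dummy-first pattern can be sketched like this. retrieve_memory() is a stand-in name for Person 1's Hindsight call, and the fallback branch is what served the frontend before the memory module landed:

```python
# Dummy-first sketch: serve placeholder data until the memory call works,
# then aggregate real sessions into chart-ready structures.
# retrieve_memory() is a stand-in, not the real Hindsight API.

def retrieve_memory(user_id: str):
    # Stand-in returning one stored session, mirroring the real
    # response shown in the article.
    return [{"pattern": "overthinking", "confidence": 0.5}]

def get_visualizations(user_id: str) -> dict:
    try:
        sessions = retrieve_memory(user_id)
    except Exception:
        # Before the Hindsight integration existed, this dummy payload
        # kept the frontend charts rendering.
        return {"total_sessions": 0, "pattern_trends": [],
                "mistake_categories": {}, "dominant_pattern": None}

    trends = [
        {"session": i + 1, "pattern": s["pattern"], "confidence": s["confidence"]}
        for i, s in enumerate(sessions)
    ]
    counts = {}
    for s in sessions:
        counts[s["pattern"]] = counts.get(s["pattern"], 0) + 1
    return {
        "total_sessions": len(sessions),
        "pattern_trends": trends,
        "mistake_categories": counts,
        "dominant_pattern": max(counts, key=counts.get) if counts else None,
    }
```

Because the response shape never changes between the dummy and real branches, swapping in the live memory call required no frontend changes at all.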
Wiring the Continuous Learning Loop
The final piece was connecting all modules into one continuous loop through the /submit_code route:
- User submits code → execution_service runs it against test cases
- signal_tracker captures time_taken, attempt_number, code_edit_count, error_types
- cognitive_analyzer converts signals into pattern labels (overthinking, guessing, rushing)
- hindsight.store_session() persists the pattern to Hindsight Cloud memory
- Next problem adapts based on dominant pattern recalled from memory
This is the core of what makes the system intelligent — not a single clever algorithm, but five modules passing structured data to each other in a loop that gets smarter with every submission. My routes were the connective tissue.
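The five-step loop can be sketched end to end with each module reduced to a stub. The function names mirror the steps above, but the signatures and classification thresholds are assumptions for illustration, not the real modules:

```python
# Sketch of the continuous loop inside /submit_code. Each stub stands in
# for a teammate's module; thresholds in classify_pattern are invented.

def run_tests(code: str) -> dict:                  # execution_service
    return {"all_passed": False, "error_types": ["IndexError"]}

def capture_signals(submission: dict, result: dict) -> dict:  # signal_tracker
    return {
        "time_taken": submission["time_taken"],
        "attempt_number": submission["attempt_number"],
        "code_edit_count": submission["code_edit_count"],
        "error_types": result["error_types"],
    }

def classify_pattern(signals: dict) -> str:        # cognitive_analyzer
    if signals["time_taken"] > 300 and signals["code_edit_count"] > 20:
        return "overthinking"
    if signals["attempt_number"] > 3 and signals["time_taken"] < 30:
        return "guessing"
    return "rushing"

MEMORY = []                                         # stand-in for Hindsight

def store_session(user_id: str, pattern: str) -> None:
    MEMORY.append({"user_id": user_id, "pattern": pattern})

def submit_code(submission: dict) -> dict:
    result = run_tests(submission["code"])
    signals = capture_signals(submission, result)
    pattern = classify_pattern(signals)
    store_session(submission["user_id"], pattern)
    return {"result": result, "pattern": pattern}
```

The point of the sketch is the data flow: each module consumes exactly what the previous one produced, so the route body stays a straight line of four calls.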
Testing All 8 Routes
I wrote tests/test_routes.py with 8 tests covering every endpoint. All 8 passed:
tests/test_routes.py::test_root PASSED
tests/test_routes.py::test_health PASSED
tests/test_routes.py::test_get_problem PASSED
tests/test_routes.py::test_get_all_problems PASSED
tests/test_routes.py::test_submit_code PASSED
tests/test_routes.py::test_get_feedback PASSED
tests/test_routes.py::test_user_profile PASSED
tests/test_routes.py::test_visualizations PASSED
8 passed in 3.99s
One test — test_get_feedback — initially failed because the Groq API key wasn't set. Once I added it to .env, all 8 passed clean. Zero warnings after the Pydantic ConfigDict fix.
What I Learned
- _Define contracts before writing logic._ The Pydantic models were the most valuable thing I built. Once everyone knew what CodeSubmission and EvalResult looked like, parallel development became possible.
- _Route order matters in FastAPI._ Static routes must always come before dynamic ones, or FastAPI will route everything to the dynamic handler. A 10-minute mistake that took 30 minutes to debug.
- _Dummy data is a valid starting point._ Build the interface first, wire in the real data when it's ready. This kept the team unblocked while the memory module was still being built.
- _Merge conflicts are normal, not scary._ I resolved three merge conflicts during the project. Always understand both versions before choosing which to keep; never just accept yours blindly.
- _The API layer is invisible when it works._ Nobody noticed my routes were there; they just worked. The best infrastructure lets everyone else do their job without thinking about it.
Resources & Links
Hindsight GitHub: https://github.com/vectorize-io/hindsight
Hindsight Docs: https://hindsight.vectorize.io/
Agent Memory: https://vectorize.io/features/agent-memory