<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Shreya R Chittaragi </title>
    <description>The latest articles on DEV Community by Shreya R Chittaragi  (@shreyarchittaragi).</description>
    <link>https://dev.to/shreyarchittaragi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3839415%2F6c175230-46ef-4afc-b3f6-8e11d893d87f.jpeg</url>
      <title>DEV Community: Shreya R Chittaragi </title>
      <link>https://dev.to/shreyarchittaragi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shreyarchittaragi"/>
    <language>en</language>
    <item>
      <title>What Happened When My Coding Agent Started Remembering User Mistakes</title>
      <dc:creator>Shreya R Chittaragi </dc:creator>
      <pubDate>Mon, 23 Mar 2026 09:51:11 +0000</pubDate>
      <link>https://dev.to/shreyarchittaragi/what-happened-when-my-coding-agentstarted-remembering-user-mistakes-1345</link>
      <guid>https://dev.to/shreyarchittaragi/what-happened-when-my-coding-agentstarted-remembering-user-mistakes-1345</guid>
      <description>&lt;p&gt;&lt;em&gt;By: Shreya R Chittaragi — Memory &amp;amp; Adaptation Module&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Hindsight Hackathon — Team 1/0 coders&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The first time our mentor called a guessing user "someone who rushes through problems without reading carefully" — using only behavioral signals, no labels — I knew the memory layer was working.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;No one told the system this user was a rusher. No dropdown, no profile form, no manual tag. The agent watched how fast they submitted, counted their edits, saw the syntax errors, and reached that conclusion on its own. Then it adapted its hint accordingly.&lt;/p&gt;

&lt;p&gt;That's what behavioral memory looks like when it actually works.&lt;/p&gt;

&lt;h2&gt;What We Built&lt;/h2&gt;

&lt;p&gt;Our project is an AI Coding Practice Mentor — a system where users submit Python solutions to coding problems, get evaluated, and receive personalized hints. The personalization isn't based on what they tell us about themselves. It's based on how they actually behave while solving problems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The stack:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;FastAPI backend handling code execution and routing&lt;/li&gt;
&lt;li&gt;Groq (LLaMA 3.3 70B) for generating hints and feedback&lt;/li&gt;
&lt;li&gt;Hindsight for persistent behavioral memory across sessions&lt;/li&gt;
&lt;li&gt;React frontend with a live code editor&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My role was the memory and adaptation module — everything that sits between "user submitted code" and "here's a hint tailored to how this specific person thinks."&lt;/p&gt;

&lt;h2&gt;The Problem with Generic Hints&lt;/h2&gt;

&lt;p&gt;Before memory, every user got the same hint for the same wrong answer.&lt;/p&gt;

&lt;p&gt;Submit an empty two_sum function? Here's a generic explanation of hash maps. Doesn't matter if you're someone who overthinks every edge case or someone who submits in 8 seconds without reading the problem. Same hint. Same tone. Same depth.&lt;/p&gt;

&lt;p&gt;That's not mentoring. That's a FAQ page.&lt;/p&gt;

&lt;p&gt;The insight behind our system is that how someone fails tells you more than what they got wrong. Two users can both fail the same test case for completely different cognitive reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One spent 15 minutes overthinking and missed a simple edge case&lt;/li&gt;
&lt;li&gt;One submitted in 5 seconds with a syntax error because they didn't read carefully&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They need different responses. The first needs confidence. The second needs to slow down.&lt;/p&gt;
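&lt;p&gt;As a minimal sketch of that idea, a mapping from detected failure mode to response framing might look like this (the names and strings here are illustrative, not the project's actual code):&lt;/p&gt;

```python
# Hypothetical mapping from a detected failure mode to a mentor response
# style; the keys mirror the patterns discussed in the text.
RESPONSE_STYLE = {
    "overthinking": "Reassure: the approach was close; point at the one missed edge case.",
    "rushing": "Slow down: ask the user to restate the problem before hinting.",
}

def frame_hint(pattern, base_hint):
    # Prepend the style framing when the pattern is known; otherwise
    # fall back to the unmodified base hint.
    style = RESPONSE_STYLE.get(pattern, "")
    return (style + " " + base_hint).strip()

print(frame_hint("rushing", "Re-read the input constraints."))
```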

&lt;h2&gt;Building the Pattern Detection Layer&lt;/h2&gt;

&lt;p&gt;The first thing I built was cognitive_analyzer.py — a rule-based system that takes raw behavioral signals and converts them into cognitive pattern labels.&lt;/p&gt;

&lt;p&gt;The signals come from signal_tracker.py:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def capture_signals(submission: CodeSubmission, result: EvalResult) -&amp;gt; dict:
    return {
        "user_id": submission.user_id,
        "problem_id": submission.problem_id,
        "attempt_number": submission.attempt_number,
        "time_taken_sec": submission.time_taken,
        "code_edit_count": submission.code_edit_count,
        "all_passed": result.all_passed,
        "error_types": classify_errors(result.error_types),
    }
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;These signals feed into pattern detectors. Here's the rushing detector:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def _check_rushing(signals: dict) -&amp;gt; list:
    score = 0.0
    if signals["time_taken_sec"] &amp;lt; 15:
        score += 0.3
    if "syntax_error" in signals["error_types"]:
        score += 0.5
    if signals["code_edit_count"] &amp;lt;= 2:
        score += 0.2
    if score &amp;gt;= 0.4:
        return [{"pattern": "rushing", "confidence": round(score, 2)}]
    return []
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Five patterns total: overthinking, guessing, rushing, concept_gap, boundary_weakness. Each has its own confidence score. The dominant pattern drives everything downstream — the hint tone, the next problem difficulty, the encouragement threshold.&lt;/p&gt;
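&lt;p&gt;The step from per-detector scores to a single dominant pattern can be sketched with a hypothetical helper (the name and data shapes are mine, not the project's):&lt;/p&gt;

```python
def dominant_pattern(detections):
    # detections: list of {"pattern": str, "confidence": float} dicts
    # collected from all detectors; pick the highest-confidence one,
    # or None when no detector fired.
    if not detections:
        return None
    return max(detections, key=lambda d: d["confidence"])["pattern"]

detections = [
    {"pattern": "rushing", "confidence": 0.56},
    {"pattern": "overthinking", "confidence": 0.40},
]
print(dominant_pattern(detections))  # rushing
```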

&lt;h2&gt;The Memory Problem Nobody Warns You About&lt;/h2&gt;

&lt;p&gt;My first implementation of memory was a Python dict:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;_memory_store: dict[str, UserMemoryProfile] = {}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It worked perfectly during testing. Patterns stored, profiles built, adaptive hints generating correctly. Then I restarted the server and every single user profile was gone.&lt;/p&gt;

&lt;p&gt;A dict lives in RAM. RAM clears on restart. For a demo this would be catastrophic — judges submit code, close the tab, come back, and the system has no memory of them at all.&lt;/p&gt;

&lt;p&gt;I moved to file-based persistence first — serializing the memory store to memory_data.json on every write:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def _save_to_disk():
    data = {uid: profile.model_dump() for uid, profile in _memory_store.items()}
    with open(MEMORY_FILE, "w") as f:
        json.dump(data, f, default=str)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;This survived restarts. But it was still local — not scalable, not shareable across instances, and not the real Hindsight integration we needed for the demo.&lt;/p&gt;

&lt;h2&gt;Integrating Real Hindsight Cloud Memory&lt;/h2&gt;

&lt;p&gt;Switching to Hindsight meant moving from a flat JSON file to a proper agent memory system with semantic recall and reflection built in.&lt;/p&gt;

&lt;p&gt;The integration looked clean at first:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from hindsight_client import Hindsight

client = Hindsight(
    base_url=settings.HINDSIGHT_URL,
    api_key=settings.HINDSIGHT_API_KEY
)

def store_session(user_id: str, session_data: dict):
    client.retain(
        bank_id="coding-mentor",
        content=f"User {user_id} showed {session_data['dominant_pattern']} pattern...",
        metadata={"user_id": user_id}
    )
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Then I hit this error the moment a real user submitted code:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;RuntimeError: Timeout context manager should be used inside a task&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;hindsight_client uses async under the hood. FastAPI was running the route handler in a thread via run_in_threadpool. Calling an async client from a sync thread context without a running event loop causes this exact crash — silently, only on real requests, never in unit tests.&lt;/p&gt;

&lt;p&gt;The fix was running the async calls in a fresh event loop:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def _run_in_new_loop(coro):
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(coro)
    finally:
        loop.close()
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def store_session(user_id: str, session_data: dict):
    _run_in_new_loop(client.aretain(
        bank_id="coding-mentor",
        content=content,
        context=f"coding session for user {user_id}",
        metadata={"user_id": user_id}
    ))
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Four lines. Two hours of debugging. Worth every minute.&lt;/p&gt;
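&lt;p&gt;As a standalone check of the pattern, here's a minimal runnable sketch of the same wrapper with a stand-in coroutine (&lt;code&gt;fake_retain&lt;/code&gt; is hypothetical, not part of hindsight_client):&lt;/p&gt;

```python
import asyncio

def run_in_new_loop(coro):
    # Create a private event loop for this thread, run the coroutine to
    # completion, and always close the loop so its resources are released.
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(coro)
    finally:
        loop.close()

async def fake_retain(payload):
    # Stand-in for an async client call; just echoes its input.
    await asyncio.sleep(0)
    return {"stored": payload}

print(run_in_new_loop(fake_retain("session-1")))  # {'stored': 'session-1'}
```

This works from a plain worker thread precisely because no event loop is already running there, which is the situation inside FastAPI's threadpool.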

&lt;h2&gt;The Adaptive Problem Selector&lt;/h2&gt;

&lt;p&gt;Once memory was working, I built adaptive_selector.py — a module that reads a user's dominant pattern from memory and picks the best next problem for them.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PATTERN_STRATEGY = {
    "overthinking": {"difficulty": "easy", "reason": "Simpler problem to build confidence"},
    "guessing": {"difficulty": "easy", "reason": "Easy problem to force deliberate thinking"},
    "rushing": {"difficulty": "medium", "reason": "Harder problem that punishes rushing"},
    "concept_gap": {"difficulty": "easy", "reason": "Back to basics to fill the knowledge gap"},
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;A user who rushes gets a medium difficulty problem — something that actually punishes careless reading. A user who's overthinking gets an easy win to rebuild confidence. The logic is simple, but it's only possible because we have a real history of their behavior across sessions.&lt;/p&gt;
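&lt;p&gt;The selection step itself can be sketched like this (&lt;code&gt;select_next_problem&lt;/code&gt; and the problem-list shape are illustrative assumptions, not the project's actual API):&lt;/p&gt;

```python
# Illustrative strategy table, matching the one described in the text.
PATTERN_STRATEGY = {
    "overthinking": {"difficulty": "easy"},
    "guessing": {"difficulty": "easy"},
    "rushing": {"difficulty": "medium"},
    "concept_gap": {"difficulty": "easy"},
}

def select_next_problem(dominant, problems):
    # Fall back to "easy" for unknown or missing patterns (e.g. new users).
    target = PATTERN_STRATEGY.get(dominant, {"difficulty": "easy"})["difficulty"]
    for p in problems:
        if p["difficulty"] == target:
            return p
    # No problem at the target difficulty: return the first available one.
    return problems[0]

problems = [
    {"id": "two-sum", "difficulty": "easy"},
    {"id": "lru-cache", "difficulty": "medium"},
]
print(select_next_problem("rushing", problems)["id"])  # lru-cache
```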

&lt;h2&gt;What It Looks Like Now&lt;/h2&gt;

&lt;p&gt;After a few submissions, the Insights panel in our UI shows:&lt;/p&gt;

&lt;p&gt;Weak Areas: Rushing (56%), Overthinking (40%)&lt;/p&gt;

&lt;p&gt;Latest: Rushing&lt;/p&gt;

&lt;p&gt;These percentages come directly from pattern confidence scores stored in Hindsight. The mentor hint changes based on this — a rusher gets told to slow down and re-read. An overthinker gets told to trust their instinct and start simple.&lt;/p&gt;
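&lt;p&gt;A minimal sketch of how such percentages could be derived, assuming each stored session contributes a (pattern, confidence) pair (&lt;code&gt;weak_areas&lt;/code&gt; is a hypothetical name):&lt;/p&gt;

```python
def weak_areas(history):
    # history: list of (pattern, confidence) tuples pulled from memory;
    # average each pattern's confidence and report it as a percentage.
    totals, counts = {}, {}
    for pattern, conf in history:
        totals[pattern] = totals.get(pattern, 0.0) + conf
        counts[pattern] = counts.get(pattern, 0) + 1
    return {p: round(100 * totals[p] / counts[p]) for p in totals}

history = [("rushing", 0.60), ("rushing", 0.52), ("overthinking", 0.40)]
print(weak_areas(history))  # {'rushing': 56, 'overthinking': 40}
```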

&lt;p&gt;In the Hindsight Cloud dashboard, entities are being tracked across sessions: hindsight_test, test_user, overthinking, syntax_struggles — Hindsight isn't just storing logs. It's building a semantic understanding of each user's behavior over time.&lt;/p&gt;

&lt;h2&gt;What I Learned&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Persistence is not optional.&lt;/em&gt; An in-memory dict feels fine until the first restart. Design for persistence from day one, even if it's just a JSON file initially.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Async context matters more than you think.&lt;/em&gt; The RuntimeError from running async Hindsight calls inside a FastAPI thread cost two hours. Always check whether your client library is async before wiring it into a sync endpoint.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Behavioral signals beat self-reported data.&lt;/em&gt; Users don't know they're rushing. Watching what they actually do gives you a more honest picture than anything they'd type into a profile form.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;One source of truth for pattern detection.&lt;/em&gt; We initially had two pattern detectors with overlapping but inconsistent logic. Consolidating into one was a small change with a big impact on reliability.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Memory makes the LLM smarter without retraining.&lt;/em&gt; The same Groq model gives dramatically different, more useful hints when it has behavioral context. You don't need a bigger model — you need better memory.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Resources &amp;amp; Links&lt;/h2&gt;

&lt;p&gt;Hindsight GitHub: &lt;a href="https://github.com/vectorize-io/hindsight" rel="noopener noreferrer"&gt;https://github.com/vectorize-io/hindsight&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hindsight Docs: &lt;a href="https://hindsight.vectorize.io/" rel="noopener noreferrer"&gt;https://hindsight.vectorize.io/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Agent Memory: &lt;a href="https://vectorize.io/features/agent-memory" rel="noopener noreferrer"&gt;https://vectorize.io/features/agent-memory&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
