<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nirmala Devi Patel</title>
    <description>The latest articles on DEV Community by Nirmala Devi Patel (@nirmaladevipatel2005).</description>
    <link>https://dev.to/nirmaladevipatel2005</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3835065%2F5a3eaab0-595c-4162-80b9-b2284a4da93a.png</url>
      <title>DEV Community: Nirmala Devi Patel</title>
      <link>https://dev.to/nirmaladevipatel2005</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nirmaladevipatel2005"/>
    <language>en</language>
    <item>
      <title>AI Was Helping Me Prepare for Internships — Until I Realized It Was Forgetting Everything 🤯</title>
      <dc:creator>Nirmala Devi Patel</dc:creator>
      <pubDate>Sat, 21 Mar 2026 16:56:34 +0000</pubDate>
      <link>https://dev.to/nirmaladevipatel2005/i-career-advisor-that-remembers-you-41e6</link>
      <guid>https://dev.to/nirmaladevipatel2005/i-career-advisor-that-remembers-you-41e6</guid>
      <description>&lt;p&gt;I didn’t expect memory to be the problem 🧠&lt;/p&gt;

&lt;p&gt;While building an AI career advisor, I kept running into the same issue.&lt;/p&gt;

&lt;p&gt;Every interaction looked good in isolation.&lt;br&gt;
But the moment I tried to use it across multiple sessions, everything broke.&lt;/p&gt;

&lt;p&gt;I’d ask for resume feedback, get useful suggestions, close the tab—and when I came back, the system had no idea who I was.&lt;/p&gt;

&lt;p&gt;At first, I assumed this was just a limitation of prompts.&lt;/p&gt;

&lt;p&gt;It wasn’t.&lt;/p&gt;

&lt;p&gt;What I was trying to build 🚀&lt;/p&gt;

&lt;p&gt;The goal was straightforward:&lt;/p&gt;

&lt;p&gt;An AI assistant that helps students with:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;📄 Resume feedback&lt;/li&gt;
  &lt;li&gt;📊 Skill gap analysis&lt;/li&gt;
  &lt;li&gt;🎯 Internship tracking&lt;/li&gt;
  &lt;li&gt;🎤 Interview preparation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The stack itself was not unusual:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;React frontend&lt;/li&gt;
  &lt;li&gt;Node.js backend&lt;/li&gt;
  &lt;li&gt;An LLM for response generation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuq942akswu2qozcrtubg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuq942akswu2qozcrtubg.png" alt=" " width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🧠 A persistent memory layer&lt;/p&gt;

&lt;p&gt;The real difference was in how requests were handled.&lt;/p&gt;

&lt;p&gt;Instead of treating each query independently, the system reconstructs user context before generating a response.&lt;/p&gt;
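&lt;p&gt;As a minimal sketch of that flow (a plain in-memory Map stands in for the persistent memory layer; none of these names come from a real API):&lt;/p&gt;

```javascript
// Context-first request handling: before any response is generated,
// the handler pulls the user's stored profile and events and prepends
// them to the prompt. The Map is an illustrative stand-in for real storage.
const memoryStore = new Map(); // userId -> { profile, events }

function saveContext(userId, context) {
  memoryStore.set(userId, context);
}

function buildPrompt(userId, query) {
  const ctx = memoryStore.get(userId);
  if (!ctx) return query; // first-time user: nothing to reconstruct
  const skills = "Known skills: " + ctx.profile.skills.join(", ");
  const events = ctx.events.map(e => "- " + e).join("\n");
  return skills + "\nRecent activity:\n" + events + "\n\nUser: " + query;
}

saveContext("u1", {
  profile: { skills: ["React", "Node.js"] },
  events: ["Applied to an internship", "Built a REST API project"],
});
const prompt = buildPrompt("u1", "How should I improve my resume?");
```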

&lt;p&gt;The problem only shows up in real use 📉&lt;/p&gt;

&lt;p&gt;If you test with single prompts, everything works fine.&lt;/p&gt;

&lt;p&gt;But real usage doesn’t look like that.&lt;/p&gt;

&lt;p&gt;Internship preparation is not one interaction—it’s a sequence:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;📚 Learning new skills&lt;/li&gt;
  &lt;li&gt;🛠 Building projects&lt;/li&gt;
  &lt;li&gt;📨 Applying to companies&lt;/li&gt;
  &lt;li&gt;📈 Improving based on feedback&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the system forgets each step, it can’t provide meaningful guidance.&lt;/p&gt;

&lt;p&gt;Stateless systems don’t work for long-term workflows ⚠️&lt;/p&gt;

&lt;p&gt;Most LLM-based applications follow a simple pattern:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsnmk796qey8niw4ayp5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsnmk796qey8niw4ayp5.png" alt=" " width="539" height="175"&gt;&lt;/a&gt;&lt;/p&gt;
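&lt;p&gt;In miniature, the stateless pattern is just prompt in, response out (here `callLLM` is a placeholder that echoes instead of calling a real model):&lt;/p&gt;

```javascript
// The typical stateless pattern: every request is prompt -> model -> response.
// `callLLM` is a placeholder for a real model client; it just echoes here.
function callLLM(prompt) {
  return "answer for: " + prompt;
}

function handleRequest(query) {
  // No user id, no history, no profile -- the model only sees this query.
  return callLLM(query);
}

const reply = handleRequest("How do I prepare for interviews?");
```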

&lt;p&gt;This works well for isolated questions.&lt;/p&gt;

&lt;p&gt;But it completely ignores history.&lt;/p&gt;

&lt;p&gt;There’s no awareness of:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;progress&lt;/li&gt;
  &lt;li&gt;past mistakes&lt;/li&gt;
  &lt;li&gt;user-specific context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This leads to repetitive, generic answers.&lt;/p&gt;

&lt;p&gt;Rethinking the problem: career prep as a timeline ⏳&lt;/p&gt;

&lt;p&gt;Instead of treating interactions as isolated queries, I started modeling them as a timeline.&lt;/p&gt;

&lt;p&gt;Each user has:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;👤 A profile (skills, projects)&lt;/li&gt;
  &lt;li&gt;📌 Events (applications, interviews)&lt;/li&gt;
  &lt;li&gt;💬 Session context (recent interactions)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the system should behave more like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5eb5uq1c0qiobjoqfwmg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5eb5uq1c0qiobjoqfwmg.png" alt=" " width="677" height="261"&gt;&lt;/a&gt;&lt;br&gt;
That small change shifts the entire system behavior.&lt;/p&gt;
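&lt;p&gt;One way to represent that timeline in code (the field names below are my own sketch, not a required schema):&lt;/p&gt;

```javascript
// A per-user timeline record: long-lived profile facts, dated events,
// and a rolling window of recent session messages. Field names are
// illustrative, not a fixed format.
function createUserTimeline(userId) {
  return {
    userId,
    profile: { skills: [], projects: [] }, // 👤 profile
    events: [],                            // 📌 applications, interviews
    session: [],                           // 💬 recent interactions
  };
}

function recordEvent(timeline, type, detail) {
  timeline.events.push({ type, detail, at: new Date().toISOString() });
}

const timeline = createUserTimeline("u1");
recordEvent(timeline, "application", "Applied to a backend internship");
```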

&lt;p&gt;Why memory changes everything 🔥&lt;/p&gt;

&lt;p&gt;Once the system has access to structured context, even basic features improve.&lt;/p&gt;

&lt;p&gt;📄 Resume feedback becomes contextual: based on actual projects, not generic suggestions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcg7b8l5m45yugi16fqs1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcg7b8l5m45yugi16fqs1.png" alt=" " width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📊 Skill recommendations become targeted: derived from real gaps, not random lists.&lt;/p&gt;

&lt;p&gt;🎯 Internship suggestions become relevant: filtered using past activity, not broad recommendations.&lt;/p&gt;

&lt;p&gt;At this point, memory is no longer a feature—it becomes infrastructure.&lt;/p&gt;

&lt;p&gt;Why I didn’t build memory from scratch 🧩&lt;/p&gt;

&lt;p&gt;Instead of implementing storage and retrieval from the ground up, I used:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/vectorize-io/hindsight" rel="noopener noreferrer"&gt;https://github.com/vectorize-io/hindsight&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It provides a structured way to:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;store interactions&lt;/li&gt;
  &lt;li&gt;retrieve relevant context&lt;/li&gt;
  &lt;li&gt;evolve memory over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The documentation helped shape the retrieval logic:&lt;br&gt;
👉 &lt;a href="https://hindsight.vectorize.io/" rel="noopener noreferrer"&gt;https://hindsight.vectorize.io/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the broader concept of agent memory:&lt;br&gt;
👉 &lt;a href="https://vectorize.io/features/agent-memory" rel="noopener noreferrer"&gt;https://vectorize.io/features/agent-memory&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What changed after adding memory 📈&lt;/p&gt;

&lt;p&gt;Before:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;repeated questions&lt;/li&gt;
  &lt;li&gt;generic suggestions&lt;/li&gt;
  &lt;li&gt;no continuity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;context-aware responses&lt;/li&gt;
  &lt;li&gt;pattern recognition&lt;/li&gt;
  &lt;li&gt;progressive improvement&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A simple example:&lt;/p&gt;

&lt;p&gt;Before: “Add more projects”&lt;/p&gt;

&lt;p&gt;After: “You built a REST API last week—include it in your resume.”&lt;/p&gt;

&lt;p&gt;That difference comes entirely from memory.&lt;/p&gt;
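&lt;p&gt;A toy version of that before/after logic (purely illustrative, nothing here is the real system): with no remembered projects the advice stays generic; with a stored project it references that project directly.&lt;/p&gt;

```javascript
// Generic vs. contextual feedback, driven entirely by what was remembered.
function resumeAdvice(rememberedProjects) {
  if (!rememberedProjects || rememberedProjects.length === 0) {
    return "Add more projects"; // no memory: generic tip
  }
  const latest = rememberedProjects[rememberedProjects.length - 1];
  return "You built " + latest + " recently. Include it in your resume.";
}

const generic = resumeAdvice([]);
const contextual = resumeAdvice(["a REST API"]);
```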

&lt;p&gt;What surprised me the most 🤔&lt;/p&gt;

&lt;p&gt;I initially thought better prompts would solve the problem.&lt;/p&gt;

&lt;p&gt;They didn’t.&lt;/p&gt;

&lt;p&gt;The biggest improvements came from:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;storing structured user data&lt;/li&gt;
  &lt;li&gt;retrieving only relevant context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Memory quality mattered far more than prompt engineering.&lt;/p&gt;
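&lt;p&gt;To sketch “retrieving only relevant context”: real systems usually rank memories by embedding similarity; the crude keyword-overlap scorer below stands in for the same idea.&lt;/p&gt;

```javascript
// Score each memory by word overlap with the query and keep the top hits,
// so only matching context reaches the prompt. Embedding similarity would
// replace this scorer in a real system; this is a stand-in for the idea.
function relevantMemories(memories, query, limit) {
  const words = new Set(query.toLowerCase().split(/\W+/));
  return memories
    .map(m => ({
      text: m,
      score: m.toLowerCase().split(/\W+/).filter(w => words.has(w)).length,
    }))
    .filter(x => x.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit || 3)
    .map(x => x.text);
}

const memories = [
  "Built a REST API in Node.js",
  "Prefers frontend roles",
  "Interview scheduled next week",
];
const hits = relevantMemories(memories, "feedback on my REST API project");
```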

&lt;p&gt;Lessons learned 🧠&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Without memory, personalization is not possible&lt;/li&gt;
  &lt;li&gt;More data is not better—structured data is&lt;/li&gt;
  &lt;li&gt;Retrieval is harder than storage&lt;/li&gt;
  &lt;li&gt;Systems should improve over time, not reset&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Final thought 💡&lt;/p&gt;

&lt;p&gt;Most AI tools today are optimized for single interactions.&lt;/p&gt;

&lt;p&gt;But real problems—like career preparation—are sequences.&lt;/p&gt;

&lt;p&gt;Once you design for that, memory stops being optional.&lt;/p&gt;

&lt;p&gt;👉 It becomes the system.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
