<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rohit Rajvaidya</title>
    <description>The latest articles on DEV Community by Rohit Rajvaidya (@rohitrajvaidya5).</description>
    <link>https://dev.to/rohitrajvaidya5</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F935547%2Fa976d2ca-580f-488c-a424-ff043ce41889.jpg</url>
      <title>DEV Community: Rohit Rajvaidya</title>
      <link>https://dev.to/rohitrajvaidya5</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rohitrajvaidya5"/>
    <language>en</language>
    <item>
      <title>Building a Local AI Assistant with Memory, PostgreSQL, and Multi-Model Support Update</title>
      <dc:creator>Rohit Rajvaidya</dc:creator>
      <pubDate>Tue, 17 Mar 2026 05:00:17 +0000</pubDate>
      <link>https://dev.to/rohitrajvaidya5/building-a-local-ai-assistant-with-memory-postgresql-and-multi-model-support-update-4cod</link>
      <guid>https://dev.to/rohitrajvaidya5/building-a-local-ai-assistant-with-memory-postgresql-and-multi-model-support-update-4cod</guid>
      <description>&lt;p&gt;Most local AI assistants forget everything once the conversation ends.\&lt;br&gt;
While experimenting with locally hosted LLMs, I wanted to solve that&lt;br&gt;
problem by giving my assistant &lt;strong&gt;persistent memory&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;On &lt;strong&gt;16 March 2026&lt;/strong&gt;, I worked on improving the architecture and&lt;br&gt;
reliability of my local AI assistant project. The main focus was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Adding persistent memory&lt;/li&gt;
&lt;li&gt;  Integrating PostgreSQL&lt;/li&gt;
&lt;li&gt;  Improving project structure&lt;/li&gt;
&lt;li&gt;  Running multiple models locally&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This article walks through what I built and what I learned.&lt;/p&gt;




&lt;h1&gt;
  
  
  The Problem: Local AI Assistants Have No Memory
&lt;/h1&gt;

&lt;p&gt;When you run models locally using tools like &lt;strong&gt;Ollama&lt;/strong&gt;, they respond&lt;br&gt;
based only on the current prompt.&lt;/p&gt;

&lt;p&gt;They don't remember:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Your preferences&lt;/li&gt;
&lt;li&gt;  Previous conversations&lt;/li&gt;
&lt;li&gt;  Important user information&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To solve this, I implemented a &lt;strong&gt;memory system backed by PostgreSQL&lt;/strong&gt;.&lt;/p&gt;




&lt;h1&gt;
  
  
  Designing a Memory Storage System
&lt;/h1&gt;

&lt;p&gt;The idea was simple:&lt;/p&gt;

&lt;p&gt;If the user explicitly asks the assistant to remember something, the&lt;br&gt;
system should store that information.&lt;/p&gt;

&lt;p&gt;Instead of storing entire conversations, I designed a &lt;strong&gt;trigger-based&lt;br&gt;
memory detection system&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Trigger Words
&lt;/h2&gt;

&lt;p&gt;The assistant watches for these keywords:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;remember&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;store&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;save&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If a user message contains one of these triggers, the system extracts&lt;br&gt;
and stores the important information.&lt;/p&gt;
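&lt;p&gt;A minimal detection check could look like the sketch below. The trigger list comes straight from the article; the word-boundary matching is my own assumption, added so that words like &lt;code&gt;storehouse&lt;/code&gt; don't fire a false positive.&lt;/p&gt;

```python
import re

# Trigger list from the article; the matching logic is an assumption.
TRIGGER_WORDS = ("remember", "store", "save")

def find_trigger(message: str):
    """Return the first trigger word found in the message, or None.

    Word-boundary matching avoids false positives such as
    "storehouse" or "lifesaver".
    """
    lowered = message.lower()
    for word in TRIGGER_WORDS:
        if re.search(rf"\b{word}\b", lowered):
            return word
    return None
```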




&lt;h1&gt;
  
  
  Memory Extraction Process
&lt;/h1&gt;

&lt;p&gt;The system follows this pipeline:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Detect trigger word in user input&lt;/li&gt;
&lt;li&gt; Remove the trigger word&lt;/li&gt;
&lt;li&gt; Clean the remaining text&lt;/li&gt;
&lt;li&gt; Ask the model to convert it into a concise fact&lt;/li&gt;
&lt;li&gt; Store it in the database&lt;/li&gt;
&lt;/ol&gt;
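&lt;p&gt;The five steps above could be wired together roughly like this. Every name here is illustrative: &lt;code&gt;ask_model&lt;/code&gt; and &lt;code&gt;store_memory&lt;/code&gt; stand in for the real local-LLM call and database helper, and the cleanup regexes are my own guess.&lt;/p&gt;

```python
import re

TRIGGERS = ("remember", "store", "save")

def extract_and_store(user_input: str, ask_model, store_memory) -> bool:
    """Illustrative sketch of the five-step pipeline; not the project's code."""
    lowered = user_input.lower()
    # 1. Detect a trigger word in the user input.
    trigger = next((t for t in TRIGGERS if re.search(rf"\b{t}\b", lowered)), None)
    if trigger is None:
        return False
    # 2. Remove the trigger word.  3. Clean the remaining text.
    remainder = re.sub(rf"\b{trigger}\b", "", lowered, count=1)
    remainder = re.sub(r"^(that|:|,)\s*", "", remainder.strip())
    # 4. Ask the model to convert it into a concise fact.
    fact = ask_model(f"Rewrite as one short third-person fact: {remainder}")
    # 5. Store it in the database.
    store_memory(fact.strip())
    return True
```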

&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;User Input&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Remember that I prefer Python for backend development.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Stored Memory&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User prefers Python for backend development.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This ensures the database contains &lt;strong&gt;clean, structured facts instead of&lt;br&gt;
raw conversation logs&lt;/strong&gt;.&lt;/p&gt;




&lt;h1&gt;
  
  
  PostgreSQL Integration
&lt;/h1&gt;

&lt;p&gt;To store memories persistently, I integrated &lt;strong&gt;PostgreSQL&lt;/strong&gt; with the&lt;br&gt;
assistant.&lt;/p&gt;

&lt;p&gt;Three core database functions were implemented:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;store_memory()
get_memories()
clear_whole_database()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Using PostgreSQL ensures that memories remain available even after&lt;br&gt;
restarting the assistant.&lt;/p&gt;
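&lt;p&gt;Here is a sketch of what those three helpers might look like. The article only names the functions, so the table layout and SQL are assumptions; &lt;code&gt;conn&lt;/code&gt; is expected to be a psycopg2 connection created elsewhere, e.g. &lt;code&gt;psycopg2.connect(dbname="assistant")&lt;/code&gt;.&lt;/p&gt;

```python
# Sketch of the three helpers named in the article.  The memory table
# schema (id, fact) is an assumption, not the project's actual layout.

def store_memory(conn, fact):
    """Insert one clean fact into the memory table."""
    with conn.cursor() as cur:
        cur.execute("INSERT INTO memory (fact) VALUES (%s)", (fact,))
    conn.commit()

def get_memories(conn):
    """Return all stored facts, oldest first."""
    with conn.cursor() as cur:
        cur.execute("SELECT fact FROM memory ORDER BY id")
        return [row[0] for row in cur.fetchall()]

def clear_whole_database(conn):
    """Delete every row in the memory table."""
    with conn.cursor() as cur:
        cur.execute("DELETE FROM memory")
    conn.commit()
```

&lt;p&gt;Because the helpers take the connection as a parameter, they are easy to test with a fake connection and to reuse if the database ever changes.&lt;/p&gt;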




&lt;h1&gt;
  
  
  Improving Reliability with Error Handling
&lt;/h1&gt;

&lt;p&gt;AI systems interacting with databases can fail for many reasons.&lt;/p&gt;

&lt;p&gt;To make the assistant more stable, I wrapped the memory storage logic&lt;br&gt;
inside a &lt;strong&gt;try/except block&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Prevents application crashes&lt;/li&gt;
&lt;li&gt;  Logs errors properly&lt;/li&gt;
&lt;li&gt;  Allows the conversation to continue&lt;/li&gt;
&lt;/ul&gt;




&lt;h1&gt;
  
  
  Implementing a Centralized Logging System
&lt;/h1&gt;

&lt;p&gt;Originally, the project printed logs directly to the terminal.&lt;/p&gt;

&lt;p&gt;As the project grew, this became messy and hard to debug.&lt;/p&gt;

&lt;p&gt;I implemented a &lt;strong&gt;centralized logging configuration&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Logging Structure
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;logs/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Logging configuration lives in:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;config/logging_config.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Cleaner terminal output&lt;/li&gt;
&lt;li&gt;  Persistent logs for debugging&lt;/li&gt;
&lt;li&gt;  Easier monitoring of system behavior&lt;/li&gt;
&lt;/ul&gt;




&lt;h1&gt;
  
  
  Running Local LLMs with Ollama
&lt;/h1&gt;

&lt;p&gt;The assistant runs multiple models locally using &lt;strong&gt;Ollama&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Model Stack
&lt;/h2&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Model&lt;/th&gt;&lt;th&gt;Purpose&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Llama3&lt;/td&gt;&lt;td&gt;General conversation and reasoning&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;DeepSeek-Coder&lt;/td&gt;&lt;td&gt;Programming and technical questions&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Phi3&lt;/td&gt;&lt;td&gt;Lightweight fallback model&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;This setup allows the assistant to choose the most suitable model&lt;br&gt;
depending on the task.&lt;/p&gt;
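&lt;p&gt;The post doesn't show its selection logic, so the keyword heuristic below is purely a guess at what "most suitable" could mean in practice.&lt;/p&gt;

```python
# Hypothetical routing rule; the hint list and the heuristic itself
# are assumptions, not the project's actual logic.
CODE_HINTS = ("code", "bug", "function", "python", "error", "script")

def pick_model(prompt: str) -> str:
    lowered = prompt.lower()
    if any(hint in lowered for hint in CODE_HINTS):
        return "deepseek-coder"   # programming and technical questions
    return "llama3"               # general conversation and reasoning
```

&lt;p&gt;Phi3 is left out of the routing here because the table lists it as a fallback, i.e. a model to try when the preferred one fails rather than a routing target.&lt;/p&gt;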




&lt;h1&gt;
  
  
  Refactoring the Project Structure
&lt;/h1&gt;

&lt;p&gt;As the project expanded, the codebase needed better organization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Updated Project Structure
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;offline_chat_project/

app/
   main.py
   ai/
   database/

config/
   logging_config.py

logs/
project_logs/
scripts/

.env
requirements.txt
README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Key Improvements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Separated AI logic into the &lt;code&gt;ai/&lt;/code&gt; module&lt;/li&gt;
&lt;li&gt;  Isolated database operations inside &lt;code&gt;database/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  Added centralized logging configuration&lt;/li&gt;
&lt;li&gt;  Organized logs and project documentation&lt;/li&gt;
&lt;/ul&gt;


&lt;h1&gt;
  
  
  Version Control Strategy
&lt;/h1&gt;

&lt;p&gt;All database and memory-related work was developed in a dedicated&lt;br&gt;
feature branch.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;feature/database_store
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Using feature branches helps keep the main branch stable while&lt;br&gt;
developing new functionality.&lt;/p&gt;




&lt;h1&gt;
  
  
  Lessons Learned
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Small Models Can Be Unreliable
&lt;/h2&gt;

&lt;p&gt;Smaller models sometimes generate inconsistent structured outputs.&lt;/p&gt;

&lt;p&gt;When building memory systems, it's important to validate the extracted&lt;br&gt;
data before storing it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Memory Systems Need Filtering
&lt;/h2&gt;

&lt;p&gt;Without proper filtering, the assistant might store irrelevant or&lt;br&gt;
incorrect information.&lt;/p&gt;

&lt;p&gt;The system should only store &lt;strong&gt;long-term meaningful facts&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Good Project Structure Matters
&lt;/h2&gt;

&lt;p&gt;As projects grow, maintaining clean architecture becomes critical.&lt;/p&gt;

&lt;p&gt;Separating modules early prevents major refactoring later.&lt;/p&gt;




&lt;h1&gt;
  
  
  What's Next
&lt;/h1&gt;

&lt;p&gt;Planned improvements include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Injecting stored memories into prompts&lt;/li&gt;
&lt;li&gt;  Adding commands like &lt;code&gt;show memories&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  Implementing intelligent model routing&lt;/li&gt;
&lt;li&gt;  Improving memory filtering&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal is to make the assistant behave more like a &lt;strong&gt;personalized AI&lt;br&gt;
system&lt;/strong&gt; than a stateless chatbot.&lt;/p&gt;




&lt;h1&gt;
  
  
  Final Thoughts
&lt;/h1&gt;

&lt;p&gt;Building a &lt;strong&gt;local AI assistant with persistent memory&lt;/strong&gt; is an&lt;br&gt;
interesting engineering challenge.&lt;/p&gt;

&lt;p&gt;Combining:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  PostgreSQL&lt;/li&gt;
&lt;li&gt;  Local LLMs&lt;/li&gt;
&lt;li&gt;  Modular architecture&lt;/li&gt;
&lt;li&gt;  Structured memory storage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;brings us closer to creating &lt;strong&gt;personal AI systems that truly remember&lt;br&gt;
users&lt;/strong&gt;.&lt;/p&gt;




&lt;p&gt;I am tracking all of these updates in my GitHub repo, inside the &lt;code&gt;ProjectLogs&lt;/code&gt; folder.&lt;br&gt;
For code and updates, check out my GitHub repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/RohitRajvaidya5/AI-Assistant-Project.git" rel="noopener noreferrer"&gt;https://github.com/RohitRajvaidya5/AI-Assistant-Project.git&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>postgres</category>
      <category>programming</category>
    </item>
    <item>
      <title>Local AI Chatbot Project Update</title>
      <dc:creator>Rohit Rajvaidya</dc:creator>
      <pubDate>Sun, 15 Mar 2026 18:57:25 +0000</pubDate>
      <link>https://dev.to/rohitrajvaidya5/local-ai-chatbot-project-update-1k34</link>
      <guid>https://dev.to/rohitrajvaidya5/local-ai-chatbot-project-update-1k34</guid>
      <description>&lt;p&gt;This project is a local AI assistant built with Python and Ollama.&lt;/p&gt;

&lt;h2&gt;
  
  
  Latest Development Log
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Project Log — AI Assistant Development
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Date
&lt;/h3&gt;

&lt;p&gt;March 15, 2026&lt;/p&gt;

&lt;h3&gt;
  
  
  Progress Today
&lt;/h3&gt;

&lt;p&gt;Today I worked on improving the memory system of my local AI assistant built with &lt;strong&gt;Python, Ollama, and PostgreSQL&lt;/strong&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  1. Git Workflow Update
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Created a new Git branch for database-related work:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git checkout &lt;span class="nt"&gt;-b&lt;/span&gt; feature/database_store
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;This branch is dedicated to developing and testing &lt;strong&gt;database memory features&lt;/strong&gt; without affecting the main branch.&lt;/li&gt;
&lt;/ul&gt;


&lt;h3&gt;
  
  
  2. PostgreSQL Memory Integration
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Connected the assistant to a PostgreSQL database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Created a &lt;code&gt;memory&lt;/code&gt; table to store important user information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Implemented database helper functions:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;store_memory()&lt;/code&gt; → stores a memory in the database&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;get_memories()&lt;/code&gt; → retrieves stored memories&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;clear_whole_database()&lt;/code&gt; → clears the memory table&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This allows the assistant to &lt;strong&gt;persist information between sessions&lt;/strong&gt;.&lt;/p&gt;


&lt;h3&gt;
  
  
  3. Memory Trigger System
&lt;/h3&gt;

&lt;p&gt;Added logic to detect when the user wants the assistant to remember something.&lt;/p&gt;

&lt;p&gt;The assistant now looks for trigger words such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;remember&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;store&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;save&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;User input:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;remember my name is Rohit Rajvaidya
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The assistant detects the trigger and prepares the information for storage.&lt;/p&gt;


&lt;h3&gt;
  
  
  4. Memory Paraphrasing with LLM
&lt;/h3&gt;

&lt;p&gt;Implemented a small LLM prompt that converts the user sentence into a &lt;strong&gt;clean factual memory&lt;/strong&gt; before storing it.&lt;/p&gt;

&lt;p&gt;Example transformation:&lt;/p&gt;

&lt;p&gt;Input:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;remember my name is Rohit Rajvaidya
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Stored memory:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User name is Rohit Rajvaidya
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This ensures the database stores &lt;strong&gt;structured and consistent information&lt;/strong&gt;.&lt;/p&gt;
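&lt;p&gt;The post doesn't share its exact paraphrasing prompt, so the template below is only a guess at what such a prompt could look like.&lt;/p&gt;

```python
# Hypothetical paraphrasing prompt; the wording is an assumption.
PARAPHRASE_PROMPT = (
    "Rewrite the following user statement as one short factual sentence "
    "in third person. Reply with the fact only, no labels.\n\n"
    "Statement: {statement}"
)

def build_paraphrase_prompt(statement: str) -> str:
    return PARAPHRASE_PROMPT.format(statement=statement.strip())
```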


&lt;h3&gt;
  
  
  5. Output Cleaning
&lt;/h3&gt;

&lt;p&gt;Added a cleanup step to remove unnecessary text returned by the model, such as:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output:
Explanation:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This ensures only the &lt;strong&gt;final fact&lt;/strong&gt; is stored in the database.&lt;/p&gt;
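&lt;p&gt;A possible version of that cleanup step. The label names come from the post; the stripping logic itself is an assumption.&lt;/p&gt;

```python
# Labels the model sometimes adds, taken from the post.
LABELS = ("output:", "explanation:")

def clean_model_output(text: str) -> str:
    """Strip label prefixes and blank lines, keeping only the fact."""
    kept = []
    for line in text.splitlines():
        stripped = line.strip()
        lowered = stripped.lower()
        for label in LABELS:
            if lowered.startswith(label):
                stripped = stripped[len(label):].strip()
                break
        if stripped:
            kept.append(stripped)
    return " ".join(kept)
```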


&lt;h3&gt;
  
  
  6. Assistant Improvements
&lt;/h3&gt;

&lt;p&gt;The assistant now includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Local LLM interaction using &lt;strong&gt;Ollama&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model fallback system&lt;/strong&gt; (&lt;code&gt;tinyllama → phi3 → llama3&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Terminal commands (clear chat, switch models)&lt;/li&gt;
&lt;li&gt;Loading animation during model generation&lt;/li&gt;
&lt;li&gt;PostgreSQL memory storage&lt;/li&gt;
&lt;li&gt;Automatic detection of memory instructions&lt;/li&gt;
&lt;li&gt;Memory paraphrasing before database storage&lt;/li&gt;
&lt;/ul&gt;
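&lt;p&gt;The fallback chain listed above could be implemented roughly like this; &lt;code&gt;ask&lt;/code&gt; stands in for the real Ollama call, and the error handling is my assumption.&lt;/p&gt;

```python
# Fallback order from the post: tinyllama, then phi3, then llama3.
FALLBACK_CHAIN = ("tinyllama", "phi3", "llama3")

def generate_with_fallback(prompt: str, ask):
    """Try each model in order; return (model, reply) for the first success."""
    last_error = None
    for model in FALLBACK_CHAIN:
        try:
            return model, ask(model, prompt)
        except Exception as exc:
            last_error = exc   # remember why this model failed, try the next
    raise RuntimeError("all models failed") from last_error
```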


&lt;h3&gt;
  
  
  Next Steps
&lt;/h3&gt;

&lt;p&gt;Planned improvements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inject stored memories into the system prompt so the assistant can &lt;strong&gt;recall user information across sessions&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Prevent duplicate memory entries in the database.&lt;/li&gt;
&lt;li&gt;Improve memory extraction prompts.&lt;/li&gt;
&lt;li&gt;Introduce structured memory types (name, preferences, location, etc.).&lt;/li&gt;
&lt;li&gt;Implement memory retrieval during conversation to make the assistant more context-aware.&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  Previous Logs
&lt;/h2&gt;

&lt;p&gt;See the full history in the &lt;code&gt;ProjectLogs&lt;/code&gt; folder.&lt;/p&gt;
&lt;h2&gt;
  
  
  Here's the GitHub Link:
&lt;/h2&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/RohitRajvaidya5" rel="noopener noreferrer"&gt;
        RohitRajvaidya5
      &lt;/a&gt; / &lt;a href="https://github.com/RohitRajvaidya5/AI-Assistant-Project" rel="noopener noreferrer"&gt;
        AI-Assistant-Project
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Local AI Chatbot Project&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;This project is a local AI assistant built with Python and Ollama.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Latest Development Log&lt;/h2&gt;
&lt;/div&gt;

&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;📅 Project Log&lt;/h1&gt;
&lt;/div&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Date&lt;/h2&gt;

&lt;/div&gt;

&lt;p&gt;21 March 2026&lt;/p&gt;




&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;🚀 Work Done Today&lt;/h2&gt;

&lt;/div&gt;

&lt;p&gt;Enhanced the assistant into a more production-ready system by improving both &lt;strong&gt;execution architecture&lt;/strong&gt; and &lt;strong&gt;AI intelligence capabilities&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;🔹 System &amp;amp; CLI Improvements&lt;/h3&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Refactored project into a modular and scalable structure&lt;/li&gt;
&lt;li&gt;Implemented &lt;code&gt;__main__.py&lt;/code&gt; to support execution via &lt;code&gt;python -m app&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Built a CLI interface (&lt;code&gt;jarvis run&lt;/code&gt;) using &lt;code&gt;argparse&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Configured &lt;code&gt;setup.py&lt;/code&gt; with entry points for command-based execution&lt;/li&gt;
&lt;li&gt;Enabled running the assistant as a proper CLI tool&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;🔹 Context-Aware AI System&lt;/h3&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Integrated database memory into AI responses&lt;/li&gt;
&lt;li&gt;Built memory retrieval with keyword-based filtering&lt;/li&gt;
&lt;li&gt;Designed a context builder to format memory into structured input&lt;/li&gt;
&lt;li&gt;Injected memory as system-level context before model execution&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;🔹 Architecture Optimization&lt;/h3&gt;

&lt;/div&gt;


&lt;ul&gt;

&lt;li&gt;Prevented message history pollution using temporary message copies&lt;/li&gt;

&lt;li&gt;Maintained efficient and scalable context handling&lt;/li&gt;

&lt;li&gt;Preserved streaming…&lt;/li&gt;

&lt;/ul&gt;
&lt;/div&gt;
&lt;br&gt;
  &lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/RohitRajvaidya5/AI-Assistant-Project" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;





</description>
      <category>python</category>
      <category>postgres</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
