<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Devendra Parihar</title>
    <description>The latest articles on DEV Community by Devendra Parihar (@devparihar5).</description>
    <link>https://dev.to/devparihar5</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1198576%2Fa00a870c-376d-4cdc-86c6-416a1ac80c97.jpeg</url>
      <title>DEV Community: Devendra Parihar</title>
      <link>https://dev.to/devparihar5</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/devparihar5"/>
    <language>en</language>
    <item>
      <title>How to Build Long-Term Memory for LLMs (RAG + FAISS Tutorial)</title>
      <dc:creator>Devendra Parihar</dc:creator>
      <pubDate>Sun, 01 Feb 2026 08:43:39 +0000</pubDate>
      <link>https://dev.to/devparihar5/how-to-build-long-term-memory-for-llms-rag-faiss-tutorial-13md</link>
      <guid>https://dev.to/devparihar5/how-to-build-long-term-memory-for-llms-rag-faiss-tutorial-13md</guid>
      <description>&lt;p&gt;Build a memory system that lets LLMs remember user preferences across conversations using Python, LangChain, FAISS, and SQLite.&lt;br&gt;
tags: ai, python, langchain, machinelearning&lt;/p&gt;

&lt;p&gt;Have you ever had a deep, meaningful conversation with an AI, only to come back the next day and find it has forgotten everything about you? It's the "50 First Dates" problem of modern AI. While Large Language Models (LLMs) are incredibly smart, they suffer from a severe case of amnesia. Once a conversation slides out of the context window, the memory is gone.&lt;/p&gt;

&lt;p&gt;In this post, I'll walk you through how I built a &lt;strong&gt;Long-Term Memory System for LLM agents&lt;/strong&gt; that allows them to extract, store, and recall personalized information across conversations. We'll use LangChain, OpenAI, FAISS for vector search, and SQLite for persistent storage.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Problem: Context Window vs. Long-Term Memory
&lt;/h2&gt;

&lt;p&gt;LLMs have a "context window" — a limited amount of text they can process at once. You can stuff user history into this window, but it gets expensive and eventually runs out of space. Plus, it's inefficient to re-read the entire history of every conversation just to know the user's name or favorite programming language.&lt;/p&gt;

&lt;p&gt;We need a system that acts like a human brain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Short-term memory&lt;/strong&gt;: The current conversation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Long-term memory&lt;/strong&gt;: Important facts stored away and retrieved only when relevant.&lt;/li&gt;
&lt;/ul&gt;
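&lt;p&gt;This split is easy to prototype before adding any LLM machinery: a bounded buffer for the current conversation plus an unbounded store for durable facts. A toy sketch (class and method names here are invented for illustration, not taken from the project):&lt;/p&gt;

```python
from collections import deque

class TwoTierMemory:
    """Toy model: bounded short-term buffer, unbounded long-term store."""

    def __init__(self, window_size=4):
        self.short_term = deque(maxlen=window_size)  # recent turns only
        self.long_term = []                          # durable facts

    def observe(self, turn, fact=None):
        self.short_term.append(turn)     # old turns fall out automatically
        if fact is not None:
            self.long_term.append(fact)  # facts survive indefinitely

mem = TwoTierMemory(window_size=2)
mem.observe("Hi!")
mem.observe("I love Python.", fact="Prefers Python")
mem.observe("What's the weather?")
print(list(mem.short_term))  # only the last 2 turns remain
print(mem.long_term)         # ['Prefers Python']
```

&lt;p&gt;The deque forgets on its own; the real work of the rest of this post is deciding what deserves to go into the long-term list and how to search it.&lt;/p&gt;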
&lt;h2&gt;
  
  
  The Solution: RAG + Semantic Search
&lt;/h2&gt;

&lt;p&gt;We're building a specialized Retrieval-Augmented Generation (RAG) pipeline. Instead of retrieving generic documents, we are retrieving &lt;strong&gt;personal memories&lt;/strong&gt; about the user.&lt;/p&gt;
&lt;h3&gt;
  
  
  Key Components
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Memory Extractor&lt;/strong&gt;: An LLM agent that "listens" to the chat and identifies facts worth saving.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vector Store (FAISS)&lt;/strong&gt;: Stores the "meaning" (embedding) of the memory for fuzzy search.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQL Database&lt;/strong&gt;: Stores the structured data (content, timestamp, category) for reliability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retrieval System&lt;/strong&gt;: Fetches relevant memories based on the current user query.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Step 1: Defining a Memory
&lt;/h2&gt;

&lt;p&gt;First, we need a structure. A memory isn't just text; it has metadata.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dataclasses&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;dataclass&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;typing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Dict&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Optional&lt;/span&gt;

&lt;span class="nd"&gt;@dataclass&lt;/span&gt;
&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Memory&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;  &lt;span class="c1"&gt;# e.g., 'tools', 'personal', 'work'
&lt;/span&gt;    &lt;span class="n"&gt;importance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;float&lt;/span&gt;  &lt;span class="c1"&gt;# 0.0 to 1.0
&lt;/span&gt;    &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Optional&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;float&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;
    &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Optional&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Dict&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
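&lt;p&gt;Instantiating one is straightforward. A quick usage example (the field values are invented for illustration):&lt;/p&gt;

```python
from dataclasses import dataclass, asdict
from typing import Dict, List, Optional

@dataclass
class Memory:
    id: str
    content: str
    category: str
    importance: float
    timestamp: str
    embedding: Optional[List[float]] = None
    metadata: Optional[Dict] = None

m = Memory(
    id="mem-001",
    content="Uses VS Code for Python development",
    category="tools",
    importance=0.8,
    timestamp="2026-02-01T08:43:39Z",
)
row = asdict(m)  # plain dict, ready to write to SQLite or serialize as JSON
print(row["category"], row["importance"])  # tools 0.8
```

&lt;p&gt;Keeping the embedding optional matters: the text fields can be persisted immediately, while the vector is computed (or recomputed with a different model) later.&lt;/p&gt;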

&lt;h2&gt;
  
  
  Step 2: Extracting Memories with LangChain
&lt;/h2&gt;

&lt;p&gt;We don't want to save everything. "Hello" is not a memory. "I use VS Code for Python development" is.&lt;/p&gt;

&lt;p&gt;We use LangChain and a carefully crafted system prompt to extract structured data.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# memory_system.py (Simplified)
&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ChatPromptTemplate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_messages&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;You are an expert at extracting factual information.
    Focus on preferences, tools, personal info, habits.
    Return a list of memories with an importance score (0-1).
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;human&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Message: {message}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="c1"&gt;# Output is structured JSON
# User: "I mostly code in Python but use Rust for side projects."
# Result: [
#   {"content": "Codes primarily in Python", "category": "skills", "importance": 0.9},
#   {"content": "Uses Rust for side projects", "category": "skills", "importance": 0.7}
# ]
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
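&lt;p&gt;Whatever model runs the extraction, what comes back is JSON like the comment above, and post-processing it is plain Python. A sketch of that step (the 0.5 importance cutoff is my own choice here, not the project's):&lt;/p&gt;

```python
import json

# Example raw output from the extractor LLM
raw = '''[
  {"content": "Codes primarily in Python", "category": "skills", "importance": 0.9},
  {"content": "Uses Rust for side projects", "category": "skills", "importance": 0.7},
  {"content": "Said hello", "category": "chitchat", "importance": 0.1}
]'''

candidates = json.loads(raw)
# Keep only facts worth remembering; low-importance chatter is dropped.
memories = [c for c in candidates if c["importance"] >= 0.5]
print([m["content"] for m in memories])
# ['Codes primarily in Python', 'Uses Rust for side projects']
```

&lt;p&gt;Filtering before storage is what keeps "Hello" out of the database in the first place.&lt;/p&gt;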

&lt;h2&gt;
  
  
  Step 3: The "Brain" (Vector Store + Database)
&lt;/h2&gt;

&lt;p&gt;We use a &lt;strong&gt;hybrid storage approach&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why FAISS?&lt;/strong&gt; We need to answer questions like "What tools do I use?" even if the memory was recorded as "I work with NeoVim." Keyword search fails here, but vector search understands that NeoVim is a tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why SQLite?&lt;/strong&gt; Vectors are great for similarity search, but you can't read the original text back out of an embedding. We need a reliable place to store the actual text, timestamps, and IDs so we can handle updates and deletions.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;VectorStore&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;openai_api_key&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;embeddings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAIEmbeddings&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text-embedding-3-small&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;faiss&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;IndexFlatIP&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1536&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Inner Product for Cosine Similarity
&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;add_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;vector&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;embeddings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;embed_query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;vector&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
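&lt;p&gt;The SQLite side of the hybrid store needs nothing beyond the standard library. A minimal sketch (table and column names here are my own; the repo's schema may differ):&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for real persistence
conn.execute("""
    CREATE TABLE IF NOT EXISTS memories (
        id TEXT PRIMARY KEY,
        content TEXT NOT NULL,
        category TEXT,
        importance REAL,
        timestamp TEXT
    )
""")

def save_memory(mem_id, content, category, importance, timestamp):
    # INSERT OR REPLACE keyed on id makes updating an existing memory trivial
    conn.execute(
        "INSERT OR REPLACE INTO memories VALUES (?, ?, ?, ?, ?)",
        (mem_id, content, category, importance, timestamp),
    )

save_memory("mem-001", "Works with NeoVim", "tools", 0.8, "2026-02-01")
rows = conn.execute("SELECT content FROM memories").fetchall()
print(rows)  # [('Works with NeoVim',)]
```

&lt;p&gt;The FAISS index answers "which memories?", and a row lookup by id answers "what do they actually say?".&lt;/p&gt;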

&lt;h2&gt;
  
  
  Step 4: Connecting the Dots
&lt;/h2&gt;

&lt;p&gt;The main loop handles the flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;User sends message.&lt;/li&gt;
&lt;li&gt;System extracts facts (if any).&lt;/li&gt;
&lt;li&gt;System checks for updates (Did the user say "Actually, I switched to Java"?).&lt;/li&gt;
&lt;li&gt;System retrieves relevant history based on the current message.&lt;/li&gt;
&lt;li&gt;LLM generates response using the retrieved memories as context.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;answer_with_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# 1. Search vector DB for similar memories
&lt;/span&gt;    &lt;span class="n"&gt;relevant_memories&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;vector_store&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search_similar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# 2. Construct context
&lt;/span&gt;    &lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;m&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;relevant_memories&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

    &lt;span class="c1"&gt;# 3. Ask LLM
&lt;/span&gt;    &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Based on these memories:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s"&gt;Answer: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
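&lt;p&gt;Step 3, conflict resolution, has no snippet above, so here is one way to sketch it: when a new fact collides with a stored one, replace rather than accumulate. This uses a naive category-plus-topic match that I chose for illustration (the "topic" field is hypothetical); a real system would compare memories semantically:&lt;/p&gt;

```python
def upsert_memory(store, new):
    """Replace a memory on a category+topic collision, else append.

    `store` is a list of dicts with "category", "topic", "content" keys.
    """
    for i, old in enumerate(store):
        if old["category"] == new["category"] and old["topic"] == new["topic"]:
            store[i] = new  # "Actually, I switched to Java" overwrites the old fact
            return store
    store.append(new)
    return store

store = [{"category": "skills", "topic": "primary_language", "content": "Codes in Python"}]
upsert_memory(store, {"category": "skills", "topic": "primary_language", "content": "Switched to Java"})
print([m["content"] for m in store])  # ['Switched to Java']
```

&lt;p&gt;Without this step, contradictory memories pile up and the retriever may feed the LLM stale facts.&lt;/p&gt;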

&lt;h2&gt;
  
  
  Live Demo
&lt;/h2&gt;

&lt;p&gt;I built a Streamlit app to visualize this "brain". You can see the memories forming in real-time, search through them, and even see how the system categorizes your life.&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;&lt;a href="https://llm-long-term-memory.streamlit.app/" rel="noopener noreferrer"&gt;Try the Live Demo&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;This isn't just about remembering names. It's about &lt;strong&gt;Personalization&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A coding assistant that remembers your preferred libraries.&lt;/li&gt;
&lt;li&gt;A tutor that remembers what you struggled with last week.&lt;/li&gt;
&lt;li&gt;A therapist bot that remembers your long-term goals.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Future Improvements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Graph Database&lt;/strong&gt;: Linking memories (e.g., "Paris" is related to "France").&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local LLMs&lt;/strong&gt;: Running Llama 3 for privacy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time Decay&lt;/strong&gt;: Slowly "forgetting" unimportant memories over time.&lt;/li&gt;
&lt;/ul&gt;
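&lt;p&gt;Of these, time decay is the easiest to prototype: scale each memory's stored importance by an exponential of its age, so old trivia fades while recent facts stay sharp. A sketch (the 30-day half-life is an arbitrary choice of mine):&lt;/p&gt;

```python
def decayed_score(importance, age_days, half_life_days=30.0):
    """Importance halves every half_life_days."""
    return importance * 0.5 ** (age_days / half_life_days)

print(decayed_score(0.8, 0))   # 0.8 -- brand new
print(decayed_score(0.8, 30))  # 0.4 -- one half-life old
print(decayed_score(0.8, 90))  # 0.1 -- three half-lives old
```

&lt;p&gt;Multiplying this decayed score into the retrieval ranking is enough to bias the system toward fresher memories without ever deleting anything.&lt;/p&gt;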
&lt;h2&gt;
  
  
  Check out the Code
&lt;/h2&gt;

&lt;p&gt;The full code is available on GitHub. It includes the complete implementation of the memory extractor, vector store management, and the Streamlit UI.&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/Devparihar5" rel="noopener noreferrer"&gt;
        Devparihar5
      &lt;/a&gt; / &lt;a href="https://github.com/Devparihar5/llm-long-term-memory" rel="noopener noreferrer"&gt;
        llm-long-term-memory
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A sophisticated memory storage and retrieval system that provides LLMs with persistent, searchable long-term memory capabilities. This system can extract, store, update, and retrieve memories from conversations, enabling AI agents to maintain context across multiple sessions.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;🧠 LLM Long-Term Memory&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;&lt;a href="https://badge.fury.io/py/llm-long-term-memory" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/20a54a011b87d0afe8426df72afcaa07ce54b00d81f03ba4fb6ad37cd7bb4bd7/68747470733a2f2f62616467652e667572792e696f2f70792f6c6c6d2d6c6f6e672d7465726d2d6d656d6f72792e737667" alt="PyPI version"&gt;&lt;/a&gt;
&lt;a href="https://www.python.org/downloads/" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/bd7bcdc70784bad7073b66850c51f4fed5dc3b2fc782277551b9013c7d27f043/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f707974686f6e2d332e382b2d626c75652e737667" alt="Python 3.8+"&gt;&lt;/a&gt;
&lt;a href="https://opensource.org/licenses/MIT" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/fdf2982b9f5d7489dcf44570e714e3a15fce6253e0cc6b5aa61a075aac2ff71b/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c6963656e73652d4d49542d79656c6c6f772e737667" alt="License: MIT"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Give your AI agents persistent, searchable long-term memory with pluggable storage backends.&lt;/strong&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;✨ Features&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;🧠 &lt;strong&gt;Intelligent Memory Extraction&lt;/strong&gt; - Automatically extracts factual information from conversations using OpenAI GPT&lt;/li&gt;
&lt;li&gt;🔍 &lt;strong&gt;Semantic Search&lt;/strong&gt; - Vector-based similarity search using OpenAI embeddings and FAISS&lt;/li&gt;
&lt;li&gt;💾 &lt;strong&gt;Pluggable Storage Backends&lt;/strong&gt; - SQLite, PostgreSQL, MongoDB, and Redis support&lt;/li&gt;
&lt;li&gt;🔄 &lt;strong&gt;Memory Management&lt;/strong&gt; - Add, update, and delete memories with conflict resolution&lt;/li&gt;
&lt;li&gt;📊 &lt;strong&gt;Category Organization&lt;/strong&gt; - Automatic categorization of memories&lt;/li&gt;
&lt;li&gt;⚡ &lt;strong&gt;Importance Scoring&lt;/strong&gt; - Weighted importance system for memory prioritization&lt;/li&gt;
&lt;li&gt;🔗 &lt;strong&gt;LangChain Integration&lt;/strong&gt; - Built with LangChain for robust LLM interactions&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;📦 Installation&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; Basic installation (SQLite backend)&lt;/span&gt;
pip install llm-long-term-memory
&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; With PostgreSQL support&lt;/span&gt;
pip install llm-long-term-memory[postgresql]

&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt;&lt;/span&gt;&lt;/pre&gt;…
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/Devparihar5/llm-long-term-memory" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;







&lt;p&gt;&lt;strong&gt;How do you handle state in your LLM apps? Drop a comment below!&lt;/strong&gt; 👇&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://medium.com/@dev523/giving-llms-a-brain-building-a-long-term-memory-system-with-python-langchain-and-faiss-7173bc33b1f4" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>llm</category>
      <category>python</category>
      <category>rag</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>As a Very Beginner Open-Source Contributor</title>
      <dc:creator>Devendra Parihar</dc:creator>
      <pubDate>Sun, 05 Nov 2023 17:29:15 +0000</pubDate>
      <link>https://dev.to/devparihar5/as-a-very-beginner-open-source-contributor-1igh</link>
      <guid>https://dev.to/devparihar5/as-a-very-beginner-open-source-contributor-1igh</guid>
      <description>&lt;p&gt;Introduction:&lt;br&gt;
As the leaves turned from green to fiery shades of red and gold, a new chapter unfolded in my life as a software developer. This year, I embarked on my very first Hacktoberfest journey, and it was nothing short of an adventure into the world of open source. In this post, I want to share my experience, the lessons I learned, and the excitement that came with contributing to the open-source community.&lt;/p&gt;

&lt;p&gt;The Road to Hacktoberfest:&lt;br&gt;
Like many, I had heard about Hacktoberfest in the past, but I had never actively participated. This year, I decided to change that and take the plunge. The first step was signing up on the Hacktoberfest website, which was incredibly easy and straightforward. Next, I had to decide which projects I wanted to contribute to.&lt;/p&gt;

&lt;p&gt;Choosing the Right Projects:&lt;br&gt;
One of the most daunting aspects of participating in Hacktoberfest is deciding which projects to contribute to. With countless repositories available on platforms like GitHub, it can be overwhelming. I decided to focus on projects that aligned with my interests and skills, ensuring that my contributions would be meaningful and valuable.&lt;/p&gt;

&lt;p&gt;Getting My Hands Dirty:&lt;br&gt;
Once I had selected a few projects, I started digging into their codebases. It was time to get my hands dirty! The feeling of diving into an unknown codebase was both exciting and intimidating. I faced new challenges and picked up new skills along the way. Reading and understanding the project's documentation and codebase was a vital first step.&lt;/p&gt;

&lt;p&gt;Contributions and Pull Requests:&lt;br&gt;
I started with small bug fixes and documentation improvements. This allowed me to get a feel for the contribution process while also making a positive impact on the projects. I learned how to create a pull request (PR), submit it for review, and engage with maintainers and other contributors.&lt;/p&gt;

&lt;p&gt;The Thrill of Collaboration:&lt;br&gt;
One of the most rewarding aspects of Hacktoberfest was the sense of community and collaboration. I had the opportunity to interact with maintainers and other contributors from around the world. Their feedback and guidance were invaluable, and it was heartwarming to see how open source brings people together.&lt;/p&gt;

&lt;p&gt;Learning and Growing:&lt;br&gt;
Hacktoberfest provided a unique learning experience. I not only contributed to open-source projects but also enhanced my coding skills, learned new technologies, and improved my collaboration and communication abilities.&lt;/p&gt;

&lt;p&gt;Completion and Reflection:&lt;br&gt;
As the month of October drew to a close, I had successfully completed my Hacktoberfest challenge by making four meaningful contributions. The feeling of accomplishment was incredibly satisfying. It was a moment to reflect on the journey, the knowledge gained, and the relationships built within the open-source community.&lt;/p&gt;

&lt;p&gt;Conclusion:&lt;br&gt;
Hacktoberfest was a thrilling and educational experience that I will treasure for a long time. It's a fantastic opportunity for both beginners and seasoned developers to contribute to open source, learn, and connect with like-minded individuals. If you've never participated in Hacktoberfest, I highly recommend giving it a try next year. You won't just be contributing to projects; you'll be contributing to your growth as a developer and to the vibrant world of open source.&lt;/p&gt;

&lt;p&gt;So, here's to my first Hacktoberfest, and here's to the countless adventures that await in the world of open source! 🎉🐱‍💻 #Hacktoberfest #OpenSource #CodingCommunity&lt;/p&gt;

</description>
      <category>hack23contributor</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
