DEV Community

NSFW Coders

Memory in AI Companions: Implementing Vector-Based Long-Term User State

When we build conversational agents that users interact with repeatedly, one of the biggest challenges is long-term memory. Traditional chatbots operate session-to-session, forgetting everything once the conversation ends. But AI companions, assistants, and persistent dialogue agents need to remember details over time — preferences, past conversations, emotional tone, and personal background.

We’ve worked on these systems at NSFW Coders, where maintaining conversational continuity is a core requirement. Over time, we’ve found that the most reliable approach is to separate memory from the model itself, instead of trying to make the model “remember” through fine-tuning.

That’s where vector-based memory storage comes in.

Why Not Just Increase the Context Window?

Modern models allow very large context windows, but pushing everything into the prompt:

  • Is computationally expensive
  • Introduces noise
  • Encourages the model to invent connections that never existed

Instead, we want the model to retrieve only relevant memory fragments when needed.

A vector memory system allows us to store conversation points as embeddings, then retrieve similar ones during future dialogue.

The Core Idea

  • Convert user messages or meaningful conversation summaries into embeddings.
  • Store the embeddings in a vector database.
  • During conversation, search for relevant memory entries based on similarity.
  • Inject only those retrieved memories into the model prompt.

This gives the model just enough information to maintain context — without flooding it.

Basic Architecture

User Message
   ↓
Embedding Model (e.g., sentence-transformers)
   ↓
Vector Store (FAISS, Pinecone, Qdrant, Weaviate)
   ↓
Similarity Search → Retrieve Relevant Memory
   ↓
Add Retrieved Memory to Prompt
   ↓
LLM Generates Response

The important separation is:

The LLM generates responses; the vector store maintains memory.

A Minimal Python Example (FAISS + Sentence Transformers)

This is a simplified example — not production code — but it demonstrates the concept.

```python
from sentence_transformers import SentenceTransformer
import faiss
import numpy as np

# Example embedding model (produces 384-dimensional vectors)
model = SentenceTransformer("all-MiniLM-L6-v2")

# Initialize a flat L2 FAISS index for 384-dimensional embeddings
index = faiss.IndexFlatL2(384)

# Memory storage (parallel list mapping index positions back to text)
memory_texts = []

def store_memory(text):
    embedding = model.encode([text]).astype(np.float32)
    index.add(embedding)
    memory_texts.append(text)

def recall_relevant_memory(query, k=3):
    query_vec = model.encode([query]).astype(np.float32)
    distances, indices = index.search(query_vec, k)
    # FAISS pads results with -1 when fewer than k entries exist
    return [memory_texts[i] for i in indices[0] if i != -1]

# Example usage:
store_memory("User likes conversations about astronomy.")
store_memory("User preferred being addressed in a friendly tone.")
store_memory("User mentioned they enjoy late-night chats.")

query = "Let's talk about space."
print(recall_relevant_memory(query))
```

What this does:

  • Stores key memory phrases from previous interactions.
  • Later retrieves the most relevant ones in real time.

Those retrieved memories are added to the prompt when generating the next response.

What Counts as “Memory”?

We’ve found it useful to store:

  • Preferences (“likes short replies” / “loves sci-fi themes”)
  • Facts shared intentionally by the user (not inferred, not assumed)
  • Ongoing emotional tone (“feels stressed today” — but not forever)

What we don’t store:

  • Guesses
  • Model hallucinations
  • Temporary emotional reactions (unless persistent)

This prevents persona drift and inaccurate relationship projections.
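These rules can be enforced with a small gate in front of the vector store. A minimal sketch, where the keyword lists and the `classify_memory` helper are illustrative assumptions, not a production filter:

```python
# Hypothetical gate run before anything is written to the vector store.
# The keyword lists are illustrative; a real filter would use proper
# tokenization or a small classifier instead of substring checks.
HEDGE_WORDS = {"maybe", "probably", "seems like", "might", "i think"}
TRANSIENT_MARKERS = {"today", "right now", "at the moment"}

def classify_memory(text: str) -> str:
    """Return 'store', 'store_temporary', or 'discard' for a candidate memory."""
    lowered = text.lower()
    if any(word in lowered for word in HEDGE_WORDS):
        return "discard"          # guesses and inferences never persist
    if any(marker in lowered for marker in TRANSIENT_MARKERS):
        return "store_temporary"  # keep, but attach an expiry timestamp
    return "store"

print(classify_memory("User might enjoy jazz."))      # discard
print(classify_memory("User feels stressed today."))  # store_temporary
print(classify_memory("User loves sci-fi themes."))   # store
```

The "store_temporary" path is what lets transient emotional tone inform the next few sessions without becoming a permanent trait.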

Prompting Strategy

When generating a response, we add retrieved memory like this:

```
User Memory (retrieved from vector DB):
- The user prefers calm and reflective conversation styles.
- The user asked yesterday about space exploration.

Current User Message:
"I was thinking more about Mars missions today."

Your Response:
```

This tells the model:

“Use these memories as context, but do not invent new ones.”
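Assembling that prompt is mechanical once retrieval has run. A minimal helper, assuming a plain-text template (adapt the wording and layout to your model's chat format):

```python
def build_prompt(memories: list[str], user_message: str) -> str:
    """Combine retrieved memories and the current message into one prompt."""
    memory_block = "\n".join(f"- {m}" for m in memories)
    return (
        "User Memory (retrieved from vector DB):\n"
        f"{memory_block}\n\n"
        "Use these memories as context, but do not invent new ones.\n\n"
        "Current User Message:\n"
        f'"{user_message}"\n\n'
        "Your Response:"
    )

prompt = build_prompt(
    ["The user prefers calm and reflective conversation styles.",
     "The user asked yesterday about space exploration."],
    "I was thinking more about Mars missions today.",
)
print(prompt)
```

Keeping the instruction inside the memory block means every generation call carries the "do not invent" constraint alongside the memories themselves.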

Challenges We've Encountered

Building memory systems is not a “plug and play” task. Some common issues include:

  • Over-retrieval: pulling too many memories and cluttering prompts.
  • Stale memory: keeping outdated information that no longer matters.
  • Memory bloat: storing everything instead of summarizing meaningfully.

We solve these by:

  • Running periodic memory pruning
  • Converting old repetitive details into summarized embeddings
  • Using timestamp decay scoring (recent memories matter more)
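The decay scoring above can be as simple as multiplying the similarity score by an exponential recency factor. A sketch, where the seven-day half-life is an assumed tuning value:

```python
import time

HALF_LIFE_SECONDS = 7 * 24 * 3600  # assumed half-life: scores halve every week

def decayed_score(similarity: float, stored_at: float, now: float) -> float:
    """Down-weight a memory's similarity score by its age."""
    age = max(0.0, now - stored_at)
    return similarity * 0.5 ** (age / HALF_LIFE_SECONDS)

# A week-old memory at similarity 0.9 ranks the same as a fresh 0.45 one.
now = time.time()
week_old = decayed_score(0.9, now - HALF_LIFE_SECONDS, now)
fresh = decayed_score(0.45, now, now)
print(week_old, fresh)
```

Pruning then falls out naturally: entries whose decayed score stays below a threshold can be deleted, or folded into a summary embedding.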

Why This Approach Works Well for Companions and Social AI

Human conversation relies heavily on shared history.
When AI remembers:

  • Interactions feel stable
  • Personality feels consistent
  • Engagement becomes long-term

The memory system is what makes the AI feel like it “knows” the user — without requiring massive model retraining every time.

Final Thoughts

Long-term memory isn’t created by increasing model size — it’s created by structuring how the model receives context.

At NSFW Coders, separating memory architecture from model behavior has become foundational in building persistent conversational agents. The approach is modular, scalable, and allows the model to remain both fluent and grounded.

If you're building a chatbot, companion agent, or long-running assistant, implementing vector-based memory early will save you from major refactoring later.
