Utkarsh Rastogi for AWS Community Builders

📅 Day 3: Understanding AI Memory in LangChain – A Shimla Travel Analogy 🇮🇳

From short-term recall to deep personalization — let's explore how AI remembers like humans!

Imagine you're chatting with your AI travel assistant, Lexi, and planning a trip to Shimla. Wouldn’t it be amazing if she remembered your favorite hotel, your travel dates, and even your preference for toy trains — just like a human would?

That's exactly what AI Memory in LangChain is all about.


🧠 What is Memory in AI?

In human terms, memory helps us recall:

  • What was said
  • What we like
  • What we did
  • What we’ve learned

For AI agents, memory helps them act smarter, carry over context, and improve over time. LangChain and LangGraph offer robust ways to manage both short-term and long-term memory — just like a human brain.


🔍 Two Types of Memory in LangChain

1. ✨ Short-Term Memory (Thread-Scoped)

This memory lives within a single conversation.

🧳 Example: You told Lexi, “Book a trip to Shimla in December.”

Lexi remembers:

  • Destination: Shimla
  • Timing: December

And that memory stays as long as you're in the same thread — thanks to LangGraph’s checkpointer.

Key Highlights:

  • Thread-specific
  • Stored in agent’s state
  • Loaded at every step
  • Temporary

💡 Like sticky notes on Lexi’s desk — perfect for the current chat.
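
Here's a minimal sketch of what this looks like in code, assuming an OpenAI chat model — the model name, thread ID, and prompts are placeholders for illustration:

```python
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

checkpointer = MemorySaver()  # in-memory checkpointer; saves state per thread

lexi = create_react_agent(
    model=ChatOpenAI(model="gpt-4o-mini"),  # placeholder model
    tools=[],
    checkpointer=checkpointer,
)

config = {"configurable": {"thread_id": "shimla-trip-1"}}

# First turn: the state is saved under this thread ID.
lexi.invoke({"messages": [("user", "Book a trip to Shimla in December.")]}, config)

# Later turn, same thread: the checkpointer reloads the earlier messages,
# so Lexi still knows the destination and the month.
lexi.invoke({"messages": [("user", "Which month did I pick?")]}, config)
```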


2. 🧠 Long-Term Memory (Cross-Thread)

This memory survives across sessions.

In a brand-new session, you say: “Plan something like last time.”

Lexi remembers you meant Shimla in December. Why? Because that was stored in long-term memory, scoped to your user profile.

Key Highlights:

  • Works across conversations
  • Persisted using vector stores or DBs
  • Supports personalization

📓 Like Lexi’s personal diary — useful for lifelong relationships.
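
A hedged sketch of cross-thread memory using LangGraph's store API — the user ID and keys are invented for this example, and a real app would swap InMemoryStore for a persistent backend:

```python
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()  # demo-only; use a persistent store in production

# Namespaces scope memories to the user rather than to a single thread.
namespace = ("user-123", "trips")

store.put(namespace, "last_trip", {"destination": "Shimla", "month": "December"})

# Days later, in a brand-new conversation:
item = store.get(namespace, "last_trip")
print(item.value)  # {'destination': 'Shimla', 'month': 'December'}
```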


🧬 AI Memory Types: Inspired by Human Brain

LangChain’s memory also resembles human memory types:

1. 📖 Episodic Memory

Stores specific events — like your Dec 10 hotel booking in Shimla.

  • Chat logs and user actions
  • Enables time-stamped recall

“Book a toy train to Shimla on the 14th” → remembered exactly as said.
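
One way to sketch episodic storage — the keys and schema here are illustrative, not prescribed — is to save each event verbatim with a timestamp:

```python
from datetime import datetime, timezone

from langgraph.store.memory import InMemoryStore

store = InMemoryStore()

# Each episode is stored exactly as it happened, with a timestamp.
store.put(
    ("user-123", "episodes"),
    "toy-train-booking",
    {
        "event": "Book a toy train to Shimla on the 14th",
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    },
)
```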


2. 📚 Semantic Memory

Stores general knowledge — like facts about Shimla.

  • Snowfall in Kufri
  • Best time to visit
  • Toy train info

Even if you don’t say "Shimla", Lexi might recommend it if you say “snowy hills in North India.”
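
That recall-by-meaning is what a vector store gives you. A small sketch, assuming OpenAI embeddings — the facts below stand in for a real knowledge base:

```python
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

knowledge = InMemoryVectorStore(OpenAIEmbeddings())
knowledge.add_texts([
    "Kufri, near Shimla, gets heavy snowfall in winter.",
    "The Kalka-Shimla toy train is a scenic heritage route.",
    "December to February is the best time for snow in Shimla.",
])

# The query never says "Shimla", yet the match comes back by meaning.
docs = knowledge.similarity_search("snowy hills in North India", k=1)
print(docs[0].page_content)
```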


3. ⚙️ Procedural Memory

Learns routines or behaviors — like always booking a hotel after a train.

  • Learns booking patterns
  • Automates tasks

Lexi starts suggesting your travel steps without being told — like muscle memory.
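
One speculative way to sketch this — the keys, steps, and prompt are all invented for illustration — is to keep learned routines as instructions that get folded into the system prompt:

```python
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()
store.put(
    ("user-123", "procedures"),
    "booking_routine",
    {"steps": ["book toy train", "book hotel", "suggest snowy cafes"]},
)

# Inject the learned routine into Lexi's instructions.
routine = store.get(("user-123", "procedures"), "booking_routine").value["steps"]
system_prompt = (
    "You are Lexi, a travel assistant. "
    "This user's usual routine is: " + " -> ".join(routine)
)
```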


🧠 When Should AI Create Memories?

Unlike humans, AI doesn’t sleep. So when does an agent store new memories?

LangChain offers two approaches:

🔥 1. Hot Path (Real-Time Writing)

  • Happens during the conversation
  • Memories are usable immediately, but each write slows the response

Lexi notes: “User prefers mountain-facing rooms” while chatting.


🌙 2. Background (Post-Task Writing)

  • Happens after the task
  • Batched or summarized memory

After your session, Lexi reflects: “User loves snowy cafes in Himachal.”


🧠 Pro Strategy: Combine Both

  • Use hot path for bookings/preferences
  • Use background for session summarization (both are combined in the sketch below)
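
A combined sketch, not a prescribed pattern: a tool writes preferences in the hot path, while a post-session job writes a summary in the background. The function names, model, and prompt are assumptions:

```python
from langchain_openai import ChatOpenAI
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()
llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model

def save_preference(user_id: str, key: str, value: str) -> str:
    """Hot path: exposed as a tool so Lexi can write memory mid-chat."""
    store.put((user_id, "preferences"), key, {"value": value})
    return f"Noted: {key} = {value}"

def summarize_session(user_id: str, transcript: str) -> None:
    """Background: run after the conversation ends (e.g., from a job queue)."""
    summary = llm.invoke(f"Summarize this travel chat in two lines:\n{transcript}")
    store.put((user_id, "summaries"), "last_session", {"summary": summary.content})
```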

🗂️ Tagging Makes Memory Smarter

To make memory usable, tag it by:

  • Thread ID
  • Location (e.g., Shimla)
  • User ID

Right memory, right moment — just like a thoughtful friend.
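
In store terms, tagging can be as simple as putting the user ID in the namespace and filterable fields in the value — the IDs and notes below are made up:

```python
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()
store.put(("user-123", "memories"), "m1",
          {"location": "Shimla", "note": "prefers mountain-facing rooms"})
store.put(("user-123", "memories"), "m2",
          {"location": "Goa", "note": "prefers beach shacks"})

# Pull back only the Shimla-tagged memories for the current plan.
shimla_items = store.search(("user-123", "memories"), filter={"location": "Shimla"})
```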


🛠️ Memory Management in LangGraph

Many AI applications need memory to share context across multiple interactions. LangGraph provides built-in support for managing memory effectively, enabling agents to stay within the LLM's context window while remaining aware of the conversation history.

LangGraph supports two main types of memory:

🔁 Short-Term Memory

  • Tracks the ongoing conversation within a session
  • Maintains message history during the current flow
  • Critical for contextual follow-ups

🧠 Long-Term Memory

  • Stores user-specific or app-level data across sessions
  • Used for persistent personalization and historic recall

📏 Handling Context Window Limits

With short-term memory enabled, long conversations can exceed the LLM’s token limit. LangGraph offers the following strategies:

✂️ Trimming

  • Remove the first or last N messages
  • Keeps the most relevant and recent messages
  • Ensures the LLM receives a manageable context (see the sketch below)
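
A minimal trimming sketch with LangChain's trim_messages helper. Counting whole messages (token_counter=len) keeps the demo dependency-free; a real app would plug in the model's tokenizer:

```python
from langchain_core.messages import AIMessage, HumanMessage, trim_messages

history = [
    HumanMessage("Book a trip to Shimla in December."),
    AIMessage("Booked! Anything else?"),
    HumanMessage("Add the toy train on the 14th."),
    AIMessage("Toy train added."),
]

trimmed = trim_messages(
    history,
    strategy="last",    # keep the most recent messages
    token_counter=len,  # count messages instead of tokens for this demo
    max_tokens=2,       # illustrative budget: keep the last two messages
)
```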

📝 Summarization

  • Earlier parts of the conversation are summarized
  • Summaries replace full message logs
  • Helps maintain continuity while reducing tokens (sketched below)
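
A hedged sketch of that idea: older turns are collapsed into one summary message that replaces them. The model, prompt, and cutoff are assumptions:

```python
from langchain_core.messages import SystemMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model

def compress_history(messages):
    """Summarize everything except the two most recent messages."""
    old, recent = messages[:-2], messages[-2:]
    transcript = "\n".join(m.content for m in old)
    summary = llm.invoke(f"Summarize this conversation briefly:\n{transcript}")
    return [SystemMessage(f"Conversation so far: {summary.content}"), *recent]
```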

🗑️ Deletion

  • Permanently remove messages from LangGraph state
  • Useful for stateless workflows (see the sketch below)
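
In LangGraph, deletion is typically done by returning RemoveMessage objects from a node. A sketch, assuming the state's messages channel uses the add_messages reducer (as the built-in MessagesState does):

```python
from langchain_core.messages import RemoveMessage
from langgraph.graph import MessagesState

def prune(state: MessagesState):
    messages = state["messages"]
    if len(messages) > 10:
        # Returning RemoveMessage for an ID deletes that message from state.
        return {"messages": [RemoveMessage(id=m.id) for m in messages[:-10]]}
    return {}
```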

🛠️ Custom Strategies

  • Filter messages based on importance
  • Retain specific types (e.g., user queries only)
  • Fully customizable to fit app needs (a tiny example follows)
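
A tiny example of such a filter, keeping only the user's own queries — the importance rule is whatever fits your app:

```python
from langchain_core.messages import HumanMessage

def keep_user_queries(messages):
    # Retain only the human turns before handing context to the model.
    return [m for m in messages if isinstance(m, HumanMessage)]
```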

🎯 Why It Matters

These memory management strategies allow your AI agent to:

  • Operate within LLM limits
  • Stay context-aware
  • Provide coherent responses
  • Enhance long-form conversations

🧾 Summary Table

| Memory Layer | Type | Scope | Shimla Example | Management Strategy |
| --- | --- | --- | --- | --- |
| Short-Term | Episodic | One conversation | "Shimla trip in December" | Trimming, Summarization, Deletion |
| Long-Term | Episodic/Semantic | Multiple chats | Remembers previous trip to Shimla | Stored in DB or vector store |
| Semantic | Knowledge-based | General facts | Knows Shimla is snowy in winter | Stored as knowledge base |
| Procedural | Habitual recall | Behavior patterns | Always books train → hotel → cafe | Pattern learning over time |
| Hot Path | Real-time save | Immediate | Saves hotel preference mid-convo | Stored instantly |
| Background | Post-processing | Deferred | Summarizes entire trip memory | Summarized after conversation |

🧭 Why This Matters for AI Agents

Without memory:

  • AI feels robotic, forgetful, and cold

With memory:

  • AI becomes personal, smart, and useful

Next time you plan a winter trip, Lexi might say:

“Shall I book that toy train and hillside hotel you liked last December?”

That’s the power of AI memory. 🧠✨


🙏 Credits

This article is inspired by and references the official LangChain documentation. Special thanks to the LangChain team for making advanced memory handling so intuitive.


👨‍💻 About Me

Hi, I’m Utkarsh Rastogi, an AWS Community Builder passionate about:

  • 🌩️ Cloud-native apps on AWS
  • 🤖 Building intelligent AI assistants
  • 🧱 Infrastructure-as-Code with Terraform & CloudFormation
  • 📝 Blogging real-world AWS & AI projects on awslearner.hashnode.dev

Let’s connect on LinkedIn!

Happy building! 🚀

Top comments (3)

Nathan Tarbert:

pretty cool seeing memory stuff actually explained in plain words - honestly, i always wonder if real progress comes from better tech or just from making things more human-like

Dotallio:

Love the Shimla analogy! Have you found any tradeoffs in real apps when deciding between hot path and background memory updates?

Utkarsh Rastogi (author):

I'm just learning LangChain concepts right now, so I'll implement this once the series wraps up at Day 10.