Richard Abishai
Building Context-Aware Agents with LangGraph

How to add memory, state, and long-term reasoning to LangGraph agents.

Most AI agents behave like goldfish — they respond only to the last message and forget everything else.

But real intelligence needs memory.

Context changes decisions.

History shapes reasoning.

Today we’ll build a LangGraph agent that:

  • remembers past interactions
  • stores its state
  • adapts reasoning based on context
  • reads/writes persistent memory
  • loops intelligently instead of starting fresh every time

Let’s get started.


⚙️ 1. Setup & Installation

Make sure you have LangGraph installed:

pip install langgraph langchain langchain-openai

No GPU is needed; everything here calls the OpenAI API, so a laptop or Colab works fine.


🧩 2. Idea: Context-Aware Agent Loop

Unlike stateless chatbot calls, a context-aware agent has:

  • State — what it knows so far
  • Memory — persistent information across runs
  • Tools — actions it can take
  • LLM nodes — thinking steps

In LangGraph, this becomes a state graph:

User → Planner → MemoryCheck → Executor → MemoryUpdate → Planner (loop)
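Before wiring this up in LangGraph, the loop is worth sketching in plain Python. This is a toy illustration, not LangGraph API; the stop condition here is simply "the task produced a result":

```python
def run_loop(task, memory, max_iters=5):
    """Toy version of the Planner -> MemoryCheck -> Executor -> MemoryUpdate loop."""
    history = []
    result = None
    for _ in range(max_iters):
        plan = f"plan for: {task}"                    # Planner (an LLM in the real agent)
        history.append(plan)
        known = memory.get(task)                      # MemoryCheck
        result = known or f"[result for '{task}']"    # Executor (skipped if already known)
        memory[task] = result                         # MemoryUpdate
        if result:                                    # stop once we have an answer
            break
    return result, memory, history

result, memory, history = run_loop("Find articles on LangGraph", {})
```

A second call with the same `memory` dict would hit the `known` branch and never touch the executor, which is exactly the behavior the graph below formalizes.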

🧠 3. Define the Agent State

LangGraph states are typically plain TypedDict classes (pydantic models also work). Ours also reserves two scratch keys that later nodes will write:

from typing import List, Optional, TypedDict
from langgraph.graph import StateGraph

class AgentState(TypedDict, total=False):
    history: List[str]        # conversation log
    task: Optional[str]       # current objective
    memory: dict              # persistent knowledge
    memory_matches: list      # scratch: hits found by MemoryCheck
    result: str               # scratch: latest Executor output

This is the entire brain of your agent:

  • history → conversation log

  • task → current objective

  • memory → persistent knowledge


🔧 4. Add a Memory Backend (Simple JSON File)

Let’s create a tiny persistent memory store:

import json
import os

MEMORY_FILE = "agent_memory.json"

def load_memory():
    if not os.path.exists(MEMORY_FILE):
        return {}
    with open(MEMORY_FILE) as f:
        return json.load(f)

def save_memory(memory):
    with open(MEMORY_FILE, "w") as f:
        json.dump(memory, f, indent=2)

You can replace this later with:

  • Redis
  • MongoDB
  • Pinecone vector memory
  • LangChain storage

But for demo purposes, JSON works beautifully.
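If JSON feels too fragile, even the standard library offers a drop-in upgrade. Here's a sketch of the same two functions backed by SQLite; the schema and file name are my own choices, not anything LangGraph prescribes:

```python
import json
import sqlite3

DB_FILE = "agent_memory.db"

def _conn(db_file=DB_FILE):
    # Open the database and make sure the key/value table exists
    conn = sqlite3.connect(db_file)
    conn.execute("CREATE TABLE IF NOT EXISTS memory (key TEXT PRIMARY KEY, value TEXT)")
    return conn

def load_memory(db_file=DB_FILE):
    with _conn(db_file) as conn:
        rows = conn.execute("SELECT key, value FROM memory").fetchall()
    return {k: json.loads(v) for k, v in rows}

def save_memory(memory, db_file=DB_FILE):
    with _conn(db_file) as conn:
        # INSERT OR REPLACE gives upsert semantics keyed on the task string
        conn.executemany(
            "INSERT OR REPLACE INTO memory (key, value) VALUES (?, ?)",
            [(k, json.dumps(v)) for k, v in memory.items()],
        )
```

The interface matches the JSON version, so the rest of the post works unchanged if you swap it in.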


🧠 5. LLM Nodes (Thinking + Planning)

In LangGraph, a node is just a function that takes the state and returns a partial update, so the planner is a plain function wrapping the LLM call:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def planner(state: AgentState):
    prompt = (
        "Given the memory and the current user task, "
        "decide: (1) what the user wants, (2) what steps to take next.\n"
        f"Memory: {state.get('memory', {})}\n"
        f"Task: {state.get('task')}\n"
        f"History: {state.get('history', [])}\n"
    )
    plan = llm.invoke(prompt).content
    return {"history": state.get("history", []) + [plan]}

The planner uses all accumulated context — not just the latest message.


🔧 6. MemoryCheck Node

This step checks whether the agent already knows something relevant:

def memory_check_node(state: AgentState):
    task = state.get("task") or ""
    memory = state.get("memory", {})

    matches = []
    for key, value in memory.items():
        if key.lower() in task.lower():
            matches.append((key, value))

    return {"memory_matches": matches}
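A quick sanity check of the matching rule, written as a standalone helper with a made-up sample memory:

```python
def memory_check(task, memory):
    # Same rule as memory_check_node: a key matches if it appears
    # (case-insensitively) as a substring of the task text
    return [(k, v) for k, v in memory.items() if k.lower() in task.lower()]

sample_memory = {
    "LangGraph": "a framework for stateful agent graphs",
    "Redis": "an in-memory key/value store",
}
matches = memory_check("Find articles on LangGraph", sample_memory)
```

Exact substring matching is deliberately naive; swapping in embedding similarity against a vector store is the natural next step.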

🧰 7. Executor Node (Actions)

A placeholder tool:

def search_tool(query):
    return f"[Search results for '{query}']"

def executor_node(state: AgentState):
    task = state.get("task", "")
    result = search_tool(task)
    return {"result": result}

You can later replace with:

  • web scraping
  • API calls
  • database lookups
  • custom tools


📥 8. MemoryUpdate Node

Store new knowledge after each run:

def memory_update_node(state: AgentState):
    memory = load_memory()
    last_result = state.get("result")

    memory[state.get("task", "")] = last_result
    save_memory(memory)

    return {"memory": memory}

Now your agent gets smarter with every loop.
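To see that persistence in isolation, here's a round-trip using the JSON helpers from step 4, re-declared so the snippet runs on its own (it writes to a temp directory instead of the working directory):

```python
import json
import os
import tempfile

MEMORY_FILE = os.path.join(tempfile.mkdtemp(), "agent_memory.json")

def load_memory():
    if not os.path.exists(MEMORY_FILE):
        return {}
    with open(MEMORY_FILE) as f:
        return json.load(f)

def save_memory(memory):
    with open(MEMORY_FILE, "w") as f:
        json.dump(memory, f, indent=2)

# First "run": the agent learns something and stores it
memory = load_memory()
memory["Find articles on LangGraph"] = "[Search results for 'Find articles on LangGraph']"
save_memory(memory)

# Second "run": a fresh load already contains the earlier result
remembered = load_memory()
```

Everything the agent learns survives the process exiting, which is what separates this from ordinary in-conversation context.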


🔗 9. Build the LangGraph

from langgraph.graph import END

graph = StateGraph(AgentState)

graph.add_node("planner", planner)
graph.add_node("memory_check", memory_check_node)
graph.add_node("executor", executor_node)
graph.add_node("memory_update", memory_update_node)

graph.add_edge("planner", "memory_check")
graph.add_edge("memory_check", "executor")
graph.add_edge("executor", "memory_update")

# Loop back to the planner until we have a result, then stop;
# an unconditional memory_update -> planner edge would never terminate
def should_continue(state: AgentState):
    return END if state.get("result") else "planner"

graph.add_conditional_edges("memory_update", should_continue)

graph.set_entry_point("planner")

agent = graph.compile()

This is a fully context-aware agent loop.


🚀 10. Run the Full Demo

state = agent.invoke({
    "history": [],
    "task": "Find articles on LangGraph",
    "memory": load_memory()
})

print(state)

Run it again — and watch memory kick in:

state = agent.invoke({
    "history": ["Hi again!"],
    "task": "Find articles on LangGraph",
    "memory": load_memory()
})

On the second run, memory_check finds a match for the task, so the agent "remembers" the earlier result instead of treating the request as brand new.
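One way to make that skip explicit is a conditional edge after memory_check: when a match exists, route straight to memory_update instead of re-running the executor. The routing function itself is plain Python (node names are the ones from step 9; the wiring line is a sketch):

```python
def route_after_memory_check(state):
    # If memory already answers the task, skip the expensive executor step
    if state.get("memory_matches"):
        return "memory_update"
    return "executor"

# Wiring sketch:
# graph.add_conditional_edges("memory_check", route_after_memory_check)
```

With this in place, repeated tasks cost one LLM planning call and zero tool executions.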


🧩 Why Context Makes Agents Powerful

  • Fewer hallucinations — the agent doesn't forget past results
  • Action optimization — avoids repeating tasks
  • Long-term workflows — multi-step reasoning over time
  • Personalization — your agent remembers preferences
  • Multi-agent cooperation — context is shared across nodes

Context is the difference between an LLM and an agentic system.


🧠 Final Reflection

Building agents is no longer about chaining prompts.
It’s about orchestrating stateful intelligence.

Give your agent:

  • memory → to recall
  • state → to reason
  • structure → to act

And suddenly you’re not just prompting a model —
you’re designing a mind.
