Today I learned that the Strands Agents SDK has a built-in persistence layer for conversation history.
Pass a SessionManager to the Agent constructor, and every message and state change is persisted automatically through lifecycle hooks. No manual save/load calls.
The Code
Save this as `session_demo.py` and run it with `uv run session_demo.py`. The `# /// script` block is PEP 723 inline metadata: `uv run` reads it to install dependencies automatically, no venv or pip needed. All you need is uv and AWS credentials configured.
```python
# /// script
# requires-python = ">=3.10"
# dependencies = ["strands-agents"]
# ///
from strands import Agent
from strands.session.file_session_manager import FileSessionManager

SESSION_ID = "user-abc-123"
STORAGE_DIR = "./sessions"  # defaults to /tmp/strands/sessions

# First agent instance - ask a question
agent1 = Agent(
    model="global.anthropic.claude-haiku-4-5-20251001-v1:0",
    agent_id="assistant",
    session_manager=FileSessionManager(
        session_id=SESSION_ID, storage_dir=STORAGE_DIR
    ),
)
prompt1 = "What's the capital of France?"
print(f"Prompt: {prompt1}")
agent1(prompt1)
print()

# Second agent instance - same session_id, loads conversation from disk
agent2 = Agent(
    model="global.anthropic.claude-haiku-4-5-20251001-v1:0",
    agent_id="assistant",
    session_manager=FileSessionManager(
        session_id=SESSION_ID, storage_dir=STORAGE_DIR
    ),
)
prompt2 = "What did I just ask you?"
print(f"Prompt: {prompt2}")
agent2(prompt2)
print()
```
What's happening here:

- `agent1` and `agent2` are separate `Agent` instances - they share no memory
- `agent2` can answer "What did I just ask you?" because `FileSessionManager` restored the conversation from disk when the second instance was created
- The `agent_id` identifies which agent's state to save and restore - required when using a session manager
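The restore pattern is easy to see in miniature. This is not the SDK's implementation - just a stdlib sketch of the same idea, with a hypothetical `FileBackedHistory` class standing in for the session manager: two objects that share nothing in memory but read and write the same file on disk.

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

class FileBackedHistory:
    """Toy analogue of a session manager: history lives on disk, not in the object."""

    def __init__(self, session_id: str, storage_dir: Path):
        self.path = storage_dir / f"session_{session_id}.json"
        # Restore on construction - the same moment a session manager rehydrates an Agent.
        self.messages = json.loads(self.path.read_text()) if self.path.exists() else []

    def append(self, role: str, text: str) -> None:
        self.messages.append({"role": role, "text": text})
        self.path.write_text(json.dumps(self.messages))

with TemporaryDirectory() as tmp:
    storage = Path(tmp)
    first = FileBackedHistory("user-abc-123", storage)
    first.append("user", "What's the capital of France?")
    first.append("assistant", "Paris.")

    # A brand-new instance shares no memory with `first`, yet it sees the
    # earlier turns because it reloaded them from disk on construction.
    second = FileBackedHistory("user-abc-123", storage)
    print(second.messages[0]["text"])  # → What's the capital of France?
```

The key design point carries over to the real thing: persistence happens at construction and on every append, so "resuming a session" is nothing more than creating a new instance with the same ID.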
What Gets Persisted
The session manager saves three things:

- **Conversation history** - all user and assistant messages (the `messages/` directory)
- **Agent state** - a JSON-serializable key-value dict you can use for your own data (`agent.json`)
- **Session metadata** - timestamps and session type (`session.json`)
After running the script, here's what's on disk:
```
sessions/
└── session_user-abc-123
    ├── agents
    │   └── agent_assistant
    │       ├── agent.json
    │       └── messages
    │           ├── message_0.json
    │           ├── message_1.json
    │           ├── message_2.json
    │           └── message_3.json
    ├── multi_agents
    └── session.json
```
Each message is a separate JSON file:
```json
{
  "message": {
    "role": "user",
    "content": [
      {
        "text": "What's the capital of France?"
      }
    ]
  },
  "message_id": 0,
  "created_at": "2026-02-17T14:45:31.439081+00:00"
}
```
User and assistant turns alternate through `message_0.json` to `message_3.json`.
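Because each turn is a plain JSON file, you can replay a session offline with nothing but the standard library. A minimal sketch, assuming the file shape shown above - it builds two sample message files in a temp directory rather than pointing at a real `./sessions` folder, and `load_messages` is my name, not an SDK function:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def load_messages(messages_dir: Path) -> list[dict]:
    """Load message_*.json files sorted by their numeric message_id."""
    files = sorted(
        messages_dir.glob("message_*.json"),
        key=lambda p: int(p.stem.split("_")[1]),  # numeric sort: message_10 after message_9
    )
    return [json.loads(p.read_text()) for p in files]

with TemporaryDirectory() as tmp:
    messages_dir = Path(tmp)
    # Two sample files in the same shape the SDK writes.
    turns = [("user", "What's the capital of France?"), ("assistant", "Paris.")]
    for i, (role, text) in enumerate(turns):
        (messages_dir / f"message_{i}.json").write_text(
            json.dumps(
                {"message": {"role": role, "content": [{"text": text}]}, "message_id": i}
            )
        )

    history = load_messages(messages_dir)
    for m in history:
        print(m["message"]["role"], "-", m["message"]["content"][0]["text"])
```

The numeric sort key matters: a plain lexicographic sort would put `message_10.json` before `message_2.json` once a conversation passes ten turns.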
Built-In Backends
The example uses `FileSessionManager`, but the SDK ships three backends:

| Manager | Use Case |
|---|---|
| `FileSessionManager` | Local development, single-process |
| `S3SessionManager` | Production, distributed, multi-container |
| `RepositorySessionManager` | Custom backend (implement `SessionRepository`) |
Tips and Notes
Troubleshooting tips
`uv: command not found`
Install uv: `curl -LsSf https://astral.sh/uv/install.sh | sh` (macOS/Linux) or `powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"` (Windows). See the uv installation docs.
`NoCredentialsError` or `Unable to locate credentials`
AWS credentials aren't configured. Run `aws configure` to set up a default profile, or export `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`. See the AWS CLI configuration docs.
`AccessDeniedException` when calling the model
Your AWS credentials don't have permission to invoke the Bedrock model. Make sure your IAM user or role has `bedrock:InvokeModel` and `bedrock:InvokeModelWithResponseStream` permissions.
Good to know
`FileSessionManager` is not safe for concurrent same-session writes
Two API requests with the same `session_id` writing to `FileSessionManager` concurrently can corrupt the session data. The storage layer has no locking - it reads, appends, and writes the full JSON file without coordination.
For development, this is fine. Single-process, single-user development (CLI, local testing) will never hit this. Sequential requests to the same session are safe.
For production, you have three options:

- **Per-session locking in your API layer** - serialize requests per `session_id` before they reach the `Agent`
- **`S3SessionManager`** - uses atomic S3 operations for safe concurrent writes
- **A custom `SessionRepository`** - implement your own with proper concurrency handling (database-backed, etc.)
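The first option is a few lines in most web frameworks. Here's a framework-agnostic sketch of the idea - `get_session_lock` and `handle_request` are illustrative names, not SDK API: keep one lock per `session_id` and hold it for the duration of the agent call, so the read-modify-write cycle never interleaves.

```python
import threading
import time
from collections import defaultdict

# One lock per session_id, created lazily; a second lock guards the registry itself.
_session_locks: defaultdict[str, threading.Lock] = defaultdict(threading.Lock)
_registry_lock = threading.Lock()

def get_session_lock(session_id: str) -> threading.Lock:
    with _registry_lock:
        return _session_locks[session_id]

# Stand-in for "load session, append a turn, write it back" - the
# read-modify-write cycle that corrupts data when two requests interleave.
store: dict[str, list[str]] = {"user-abc-123": []}

def handle_request(session_id: str, prompt: str) -> None:
    with get_session_lock(session_id):   # serialize requests per session
        turns = list(store[session_id])  # read
        time.sleep(0.001)                # widen the race window on purpose
        turns.append(prompt)             # modify
        store[session_id] = turns        # write back

threads = [
    threading.Thread(target=handle_request, args=("user-abc-123", f"msg-{i}"))
    for i in range(20)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(store["user-abc-123"]))  # → 20 (without the lock, turns would get lost)
```

One caveat: an in-process `threading.Lock` only coordinates a single process. If you run multiple workers or containers, you need a distributed lock, or one of the other two options.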
`agent_id` is required with a session manager
If you omit `agent_id` when using a `SessionManager`, you'll get `ValueError: agent_id needs to be defined`. The session system uses it as a directory key to separate state for different agents within the same session.
`FileSessionManager` defaults to `/tmp`
Without an explicit `storage_dir`, sessions are written to `/tmp/strands/sessions` - which most operating systems wipe on reboot. Set it to a project-local path like `./sessions` or `.data/sessions`.
Top comments (3)
Good timing on this — session persistence is one of the more underrated problems in production agent systems. The file-based approach is a solid starting point, but there are a few things that bite you at scale:
Message ordering: when you're persisting individual files per message, concurrent agent calls can race on session state. Probably fine for single-user scenarios, but the DynamoDB backend likely handles this better if you're running multiple agent instances per session.
Session size management: conversation history grows fast, especially with tool-heavy agents. At some point you'll want to prune or summarize older turns before persisting, or you end up loading 50KB of context just to handle a simple follow-up question.
Cross-session memory vs. within-session state: this handles the latter well, but for longer-lived agents that need to "remember" things across sessions (user preferences, learned patterns), you typically need a separate memory layer on top of raw session storage.
For the file-based backend specifically: does Strands handle lock files or have any protection against two processes writing to the same session concurrently? That's the first failure mode we hit when testing local persistence.
That's a good point. I've added some notes to the Good to Know section to highlight this limitation.
Thanks for the quick update — the Good to Know section now covers exactly the gotchas that get people in production. The per-session API locking option is the one I'd reach for first before moving to S3; it keeps the local dev experience intact while adding the coordination layer only where it matters. The /tmp default is also a classic trap — I've seen more than one staging environment lose session state after a deploy restart because no one noticed the default path.