Most AI agents have amnesia. You explain the same context every session. Here is a five-minute fix that turns any LLM into something with actual continuity.
The trick: use a markdown vault as memory
A vector database is the obvious choice and usually the wrong one. Embeddings give fuzzy recall and lock you into a specific provider. A plain Obsidian vault gives the agent exact, named, editable memory that the human can also read.
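To make "exact, named, editable memory" concrete, here is a minimal sketch in plain Python: notes are ordinary files, recall is a deterministic lookup rather than a similarity search, and a human can open the same files in Obsidian. The vault path and note layout below are hypothetical illustrations, not a Hermes convention.

```python
# Sketch: a markdown vault as agent memory. Recall is exact and
# provider-independent -- a file read, not an embedding lookup.
from pathlib import Path
import tempfile

vault = Path(tempfile.mkdtemp())  # stand-in for ~/vaults/work

# The agent writes what it learned under a name both sides understand.
note = vault / "agent-memory" / "deploy-process.md"
note.parent.mkdir(parents=True)
note.write_text(
    "# Deploy process\n"
    "- Staging first, then prod\n"
    "- Never deploy on Fridays\n"
)

# Recall by name or by plain text scan: no embeddings, no similarity
# threshold, no vendor lock-in -- and `grep` in a terminal works too.
hits = [p.name for p in vault.rglob("*.md") if "Fridays" in p.read_text()]
print(hits)  # ['deploy-process.md']
```

The same property is what makes the memory human-editable: correcting the agent is just editing a markdown file.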
Hermes Agent v0.14 shipped May 16, 2026 with a first-class Obsidian provider. One command wires it up.
Install Hermes
Three paths, pick one:
```shell
# PyPI
pip install hermes-agent && hermes postinstall

# macOS / Linux
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash

# Windows (PowerShell)
irm https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.ps1 | iex
```
Wire in Obsidian
```shell
hermes memory setup --provider obsidian --path ~/vaults/work
hermes memory status
```
That is the whole memory setup. The agent now reads and writes markdown files in that vault every session.
Optional power-ups
- Obsidian Local REST API plugin on `localhost:27123` gives programmatic R/W during agent execution, not just at boot.
- Obsidian MCP server (cyanheads/obsidian-mcp-server) exposes the vault as Model Context Protocol tools — any MCP-aware client can use the same memory.
- Omi captures passive screen and mic context into the vault, which turns the workflow into a true second brain.
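The Local REST API plugin documents simple `GET`/`PUT` `/vault/<path>` endpoints with bearer-token auth; verify the exact routes against your plugin version. The sketch below exercises that request shape against an in-process stub server, so it runs without Obsidian installed — against the real plugin you would target `localhost:27123` with the API key the plugin generates.

```python
# Reading and writing vault notes over HTTP, using the GET/PUT
# /vault/<path> shape of the Obsidian Local REST API plugin.
# A stub stands in for Obsidian so the sketch is self-contained.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

notes = {}  # stub vault: request path -> note bytes

class VaultStub(BaseHTTPRequestHandler):
    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        notes[self.path] = self.rfile.read(length)
        self.send_response(204)
        self.end_headers()

    def do_GET(self):
        body = notes.get(self.path)
        self.send_response(200 if body is not None else 404)
        self.end_headers()
        if body is not None:
            self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass

server = HTTPServer(("127.0.0.1", 0), VaultStub)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"
headers = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder key

# Write a note mid-session, then read it back by exact path.
req = urllib.request.Request(
    f"{base}/vault/agent-memory/context.md",
    data=b"# Context\n- prefers TypeScript\n",
    headers={**headers, "Content-Type": "text/markdown"},
    method="PUT",
)
urllib.request.urlopen(req)

req = urllib.request.Request(f"{base}/vault/agent-memory/context.md",
                             headers=headers)
text = urllib.request.urlopen(req).read().decode()
print(text)
server.shutdown()
```

Because the memory is addressed by path, mid-run reads and writes compose cleanly with the file-based access the agent already has at boot.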
What else landed in v0.14
- 22 messaging platforms (Telegram, Discord, Slack, WhatsApp, Signal, Teams, LINE, more)
- 400+ supported models (xAI Grok SuperGrok OAuth, Nous Portal, OpenRouter, Ollama)
- 68 built-in agent tools
- `hermes proxy` — OpenAI-compatible local endpoint for OAuth-only providers
- `/handoff` — live session transfer between devices
- 19s faster cold start, 180x faster browser CDP
- MIT license, 100K+ stars (fastest project to 100K)
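The point of an OpenAI-compatible endpoint like `hermes proxy` is that existing clients only need a new base URL. The sketch below shows that wire shape — a `POST /v1/chat/completions` round trip — against a canned stub, since the proxy's actual port and model names are not specified here; treat both as placeholders and use whatever your `hermes proxy` reports at startup.

```python
# Demonstrating the OpenAI chat-completions wire format that an
# OpenAI-compatible local proxy speaks. A stub stands in for the proxy.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ProxyStub(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        # Return a minimal chat-completions response for the asked model.
        reply = {
            "object": "chat.completion",
            "model": request["model"],
            "choices": [{
                "index": 0,
                "message": {"role": "assistant", "content": "ok"},
                "finish_reason": "stop",
            }],
        }
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), ProxyStub)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

payload = {
    "model": "placeholder-model",  # hypothetical; not a real Hermes model id
    "messages": [{"role": "user", "content": "ping"}],
}
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
completion = json.loads(urllib.request.urlopen(req).read())
print(completion["choices"][0]["message"]["content"])
server.shutdown()
```

Any tool that lets you override the OpenAI base URL can point at such an endpoint unchanged, which is what makes OAuth-only providers usable from ordinary OpenAI clients.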
60-second walkthrough
Why this matters for builders
If your agent stack is a glue script that pastes the same context into every prompt, you are paying tokens for amnesia. Move the context out of the prompt and into a vault, and prompts shrink while quality goes up. Long breakdowns and reference vaults live at cptdigital.com.