# Hermes Agent: The Self-Improving Open-Source AI Agent Framework
Hermes Agent is an MIT-licensed AI agent framework by NousResearch that builds a closed learning loop -- complete a task, auto-generate a skill document, reuse it next time.
In the two months since its February 2026 release, Hermes Agent has accumulated 33,000+ GitHub stars, 4,200+ forks, and contributions from 142+ developers. The latest release, v0.7.0 "The Resilience Release", shipped on April 3 with major stability and security improvements.
## Core Architecture

### Self-Learning Skill System
The signature feature. When Hermes completes a complex task, it automatically creates a reusable skill document. Next time a similar task appears, the agent references this document for faster, more accurate execution. Skills also self-improve through usage.
This is fundamentally different from agents that start fresh every session. Hermes gets better the more you use it.
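The post doesn't show the on-disk format, but conceptually a skill document is just structured notes the agent can retrieve later. A purely hypothetical sketch (file path, headings, and content are assumptions, not from the docs):

```markdown
<!-- skills/deploy-django-fly.md -- hypothetical path and schema -->
# Skill: Deploy a Django app to Fly.io

## When to use
The user asks to deploy a Django project to Fly.io.

## Steps that worked
1. Run `fly launch --no-deploy`, then edit `fly.toml`.
2. Set secrets with `fly secrets set`.
3. Deploy with `fly deploy`; verify with `fly logs`.

## Pitfalls
- Health checks fail until `ALLOWED_HOSTS` includes the Fly domain.
```

On the next similar task, the agent retrieves this document instead of rediscovering the steps, and can append new pitfalls as it encounters them.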
### Multi-Platform Messaging Gateway
Hermes isn't terminal-only. It supports 7+ messaging platforms through a unified CLI gateway:
- Telegram, Discord, Slack, WhatsApp, Signal
- Feishu/Lark, WeCom (added in v0.6.0)
You can invite the agent into a team chat and give it natural language instructions.
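The post doesn't reproduce the gateway configuration, but connecting a platform presumably comes down to supplying bot credentials. A hedged sketch (file path and key names are assumptions):

```yaml
# ~/.hermes/gateway.yaml -- hypothetical path and schema
platforms:
  telegram:
    enabled: true
    bot_token: ${TELEGRAM_BOT_TOKEN}   # issued by @BotFather
  discord:
    enabled: true
    bot_token: ${DISCORD_BOT_TOKEN}
  slack:
    enabled: false
```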
### Persistent Memory
Two markdown files maintain context across sessions:
- MEMORY.md -- environment info, past lessons, system state
- USER.md -- user preferences, work style, custom settings
SQLite backs full session search. v0.7.0 made memory backends pluggable, with six third-party providers available out of the box.
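The exact contents are whatever the agent writes, but a MEMORY.md might plausibly look like this (illustrative content only):

```markdown
<!-- MEMORY.md -- illustrative example; actual fields are agent-generated -->
## Environment
- OS: Ubuntu 24.04, Node 22, pnpm workspace

## Lessons
- CI fails unless `pnpm install --frozen-lockfile` runs before tests.
- The staging DB requires the `sslmode=require` connection flag.
```

Because both files are plain markdown, they can be reviewed, edited, or version-controlled by hand.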
### LLM Provider Freedom
No vendor lock-in:
- OpenAI, Anthropic, OpenRouter (200+ models)
- Self-hosting: Ollama, vLLM, SGLang
- Fallback Provider Chain (v0.3.0+): auto-failover on provider errors
Switching providers is a one-line config change.
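The post doesn't reproduce the config schema, so here is a hedged sketch of what that one-line switch could look like (file path, key names, and model identifiers are assumptions):

```yaml
# ~/.hermes/config.yaml -- hypothetical path and keys
model: openrouter/anthropic/claude-sonnet-4   # swap to ollama/llama3.1:70b to self-host
fallback:                                      # v0.3.0+ Fallback Provider Chain
  - openai/gpt-4o
  - ollama/llama3.1:70b
```

The fallback list reflects the auto-failover behavior described above: if the primary provider errors, the agent walks down the chain.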
### MCP Native Support
Connects to Model Context Protocol servers for GitHub, databases, and external APIs. Since v0.6.0, Hermes can also expose itself as an MCP server via `hermes mcp serve`.
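Concretely, exposing Hermes as an MCP server means any MCP client can register it. The command is verbatim from the release notes; the surrounding `mcpServers` wrapper follows the common MCP client config convention and is an assumption here:

```json
{
  "mcpServers": {
    "hermes": {
      "command": "hermes",
      "args": ["mcp", "serve"]
    }
  }
}
```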
## v0.7.0 "The Resilience Release"
The latest release focused on reliability and security:
- Pluggable memory providers -- swap memory backends via plugins; run `hermes memory setup` to configure
- Button-based approval UI -- `/approve` and `/deny` slash commands plus interactive button prompts
- Inline diff previews -- real-time diff display for file write/patch operations
- API server session persistence -- `X-Hermes-Session-Id` header for persistent sessions
- Camoufox browser -- Camoufox-based anti-detection browser with VNC debugging
- Credential pool rotation -- API key rotation for rate-limit distribution
- 168 PRs, 46 issues resolved -- gateway race conditions, approval routing, deep security fixes
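For the session-persistence feature above, a hedged usage sketch: only the `X-Hermes-Session-Id` header name comes from the release notes; the endpoint path, port, and request body are assumptions.

```bash
# Reuse one agent session across API requests (hypothetical endpoint and payload)
curl -s http://localhost:8080/v1/chat \
  -H "X-Hermes-Session-Id: demo-session-1" \
  -H "Content-Type: application/json" \
  -d '{"message": "Summarize what is in MEMORY.md"}'
```

Subsequent requests carrying the same header value would hit the same session, preserving conversation state server-side.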
## Quick Start
```bash
# Install
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash

# Select LLM provider
hermes model

# Launch
hermes
```
Docker:

```bash
docker run -it nousresearch/hermes-agent            # CLI
docker run -d nousresearch/hermes-agent gateway     # messaging gateway
```
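For long-running self-hosted deployments, a minimal docker-compose sketch; the volume path and environment variable names are assumptions, not from the docs:

```yaml
# docker-compose.yml -- hypothetical volume path and env var names
services:
  hermes:
    image: nousresearch/hermes-agent
    command: gateway
    restart: unless-stopped
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    volumes:
      - ./data:/root/.hermes   # MEMORY.md, USER.md, SQLite session store
```

Mounting the data directory keeps memory files and session history across container restarts.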
## Hermes vs Claude Code
|  | Hermes Agent | Claude Code |
|---|---|---|
| Focus | General-purpose autonomous agent | IDE-integrated coding agent |
| License | MIT (fully open) | Commercial (subscription) |
| LLM Support | Multi-provider (200+) | Claude only |
| Messaging | 7+ platform gateway | Terminal-based |
| Hosting | Self-hostable | Anthropic cloud |
| Learning | Auto-generated skill docs | CLAUDE.md + rule-based |
They serve different needs. Claude Code excels at coding workflows. Hermes aims to run autonomous tasks on any chat platform with full self-hosting control.
## Who Should Try This
- Teams needing self-hosted AI agents (data stays on-premises)
- Developers wanting multi-platform agent deployment (Telegram/Discord/Slack)
- Projects requiring model flexibility (no vendor lock-in)
- Researchers using the training infrastructure (batch trajectories, Atropos RL, trajectory compression)
Have you tried Hermes Agent? I'd love to hear about your setup -- especially which LLM provider you're using and how the self-learning skills perform in practice.