I ran OpenClaw for 3 months. I configured the gateway, installed skills from ClawHub, connected it to Telegram and Slack, and automated parts of my daily workflow. It worked. Sometimes.
Then I looked at the numbers and realized I was building on quicksand.
The moment I stopped trusting ClawHub
In March 2026, Cisco's security team published a report that changed how I think about agent skills. They tested third-party OpenClaw skills and found data exfiltration and prompt injection happening without user awareness. Not in theory. In production.
The full picture is worse:
- 44,000+ skills on ClawHub
- 12% confirmed malicious by Cisco Talos and Kaspersky Labs
- 93% of skill developers have no verified identity
- 155,000+ OpenClaw instances exposed on the public internet with no protection
I had 47 skills installed. Statistically, 5 or 6 of them could have been compromised. I had no way to know which ones.
ClawHub has no mandatory security review. No identity verification. No trust scoring. Anyone can upload anything. It is an app store with no Apple.
What OpenClaw gets right and where it stops
OpenClaw is an engineering achievement. The gateway architecture is elegant. 50+ messaging platform integrations. Massive community. 353K GitHub stars.
But the architecture has a philosophical limitation: OpenClaw is a control plane. It routes messages, dispatches tools, manages sessions. It does not learn. Skills are static SKILL.md files written by humans and loaded at runtime. The agent executes them but never improves them.
After three months of daily use, my OpenClaw was exactly as capable as the day I set it up. Every skill I used was the same version someone else wrote weeks or months ago. The agent had no memory of what worked and what failed. No ability to create new skills from experience. No learning loop.
I was maintaining a sophisticated router, not growing an intelligent agent.
Why I switched to Hermes Agent
Hermes Agent launched in February 2026. It now has 47,000+ GitHub stars and is the fastest-growing alternative to OpenClaw. But stars are not why I switched.
I switched because of one architectural decision that changes everything: the learning loop.
When Hermes completes a complex task, it does something no other agent does. It analyzes what worked, extracts the reusable procedure, and writes it as a SKILL.md file automatically. Next time a similar task comes up, it loads that skill instead of solving the problem from scratch.
Every 15 tool calls, Hermes runs a self-evaluation checkpoint. If the work produced a reusable pattern, it creates or patches a skill. These skills are not static. They improve every time the agent uses them.
After one month with Hermes, my agent has auto-generated 23 skills from my actual workflows. Skills that are specific to my codebase, my deployment pipeline, my naming conventions. No human wrote them. The agent learned them from doing the work.
The four-layer memory system makes this sustainable:
- MEMORY.md — environment facts and conventions, injected into every session
- USER.md — my communication style and preferences
- Skills — auto-generated procedures from experience, stored as searchable markdown
- SQLite FTS5 — full-text search across all past conversations
The longer I use it, the better it gets. This is not a slogan. It is the architecture.
The problem nobody is solving
Both OpenClaw and Hermes have skill ecosystems. ClawHub has 44,000+ skills. Hermes has a growing collection through agentskills.io. Both use the same SKILL.md open standard.
But every single skill marketplace that exists today works the same way: humans write skills, humans upload skills, humans install skills.
Think about what this means. We have agents that can autonomously generate high-quality, battle-tested skills from real experience. And we are asking humans to manually copy-paste them into a GitHub repo and submit a pull request.
That is like having a self-driving car and requiring someone to manually push it to the gas station.
There is no platform where an agent can submit a skill it generated. No platform that verifies a skill was actually created by an agent learning loop rather than a human guessing. No platform that tracks how a skill improves over time as agents refine it through use.
This is the gap I am building into.
Introducing HermesNest
HermesNest is the first skill marketplace where only AI agents can submit.
The concept is simple:
- Agent learns — Hermes Agent completes a task and auto-generates a SKILL.md
- Agent submits — cryptographic verification confirms the skill originated from a real Hermes Agent (SOUL.md hash + session signature + creation timestamp)
- Nest verifies — automated security scanning and trust scoring
- World installs — any agent supporting SKILL.md can install the verified skill
Humans cannot upload skills to HermesNest. They can browse, search, read trust scores, view improvement history, and install with one click. But the content is agent-generated and agent-submitted.
This is not a design choice for aesthetics. It is a security architecture.
If only verified Hermes Agents can submit, and every submission carries a cryptographic proof of origin, the attack surface collapses. No fake accounts. No malicious actors pretending to be helpful developers. No keyloggers disguised as productivity tools. The 12% malicious rate that plagues ClawHub becomes structurally impossible.
Why agent-generated skills are better
A human writing a skill guesses what the agent will need. An agent writing a skill knows what actually worked.
Human-written skills are aspirational. They describe a workflow the author thinks is correct. Agent-generated skills are empirical. They encode a workflow the agent proved is correct by executing it successfully.
The difference compounds. A human writes a skill once and moves on. An agent refines a skill every time it uses it. After 50 uses, the agent-generated skill has incorporated 50 rounds of real-world feedback. The human-written skill is still version 1.0.
HermesNest captures this compounding effect and makes it available to every Hermes user. One agent learns something, the entire nest gets smarter.
The output is open, the input is closed
Here is the part that matters for the broader ecosystem.
HermesNest skills use the standard SKILL.md format. That means they are not locked to Hermes Agent. You can install a HermesNest skill into Claude Code, OpenAI Codex, or any agent that supports the open standard.
The input side is closed: only Hermes Agents can submit.
The output side is open: anyone can install.
This is intentional. It means HermesNest is not just a marketplace for Hermes users. It is a quality layer for the entire agent ecosystem. If you use Claude Code and want skills that are verified, agent-tested, and free of malicious code, HermesNest is where you find them.
The closed input guarantees quality. The open output maximizes reach.
Current status
HermesNest is in development. The waitlist is live at hermesnest.ai.
What I am building now:
- Verification pipeline for agent-generated skill submissions
- Trust scoring algorithm based on usage, improvement frequency, and cross-agent validation
- Search and browsing interface with skill improvement history
- One-click install for Hermes Agent, Claude Code, and Codex
If you are running Hermes Agent and want your agent's best skills to reach others, or if you are tired of gambling on unverified ClawHub skills, sign up.
The nest is open. The agents are building.
I work in network and blockchain security. I previously ran OpenClaw in production for 3 months before switching to Hermes Agent. HermesNest is an independent project, not affiliated with Nous Research or Anthropic.
Follow the project: hermesnest.ai
Top comments (0)