Your coding agent spent an hour debugging a tricky auth flow yesterday. It discovered that the JWT validation in this repo differs from the standard library, that migrations must run before seeding, and that the e2e tests silently hang without Docker. Tomorrow, all of that is gone.
Writing skills by hand helps — research shows human-curated skills genuinely improve agent performance. But capturing the right knowledge at the right time is hard, and skills that aren't updated decay silently as the codebase changes.
So I built skill-molt — a tool that helps your agent draft skills from real session experience, with quality gates and human review built in. You stay in control; the agent does the heavy lifting of extracting what it learned.
Solutions are emerging — Hermes Agent bundles skill learning into a self-hosted runtime, learnings.md patterns accumulate raw session notes, and static skill libraries provide pre-written skills. But each of these either locks you into a single agent, lacks quality control, or can't learn from your own sessions.
I wanted something different: a skill-only tool that works with any agent, extracts only non-obvious knowledge, and has built-in quality gates.
How it works
After a coding session that involved trial-and-error or non-obvious workarounds, skill-molt helps your agent:
Observe → Generate / Improve → Validate → Human Review
- Observe — The agent reflects on its own session, asking itself questions like "What did I know at the END that I wish I'd known at the START?" to surface non-obvious lessons
- Generate — Create a new skill from the lessons learned
- Improve — Or update an existing skill with new knowledge — shed the old, keep the new (that's the "molt")
- Validate — Run 6 pass/fail quality checks (path existence, command availability, relevance, length, description trigger, secrets) before presenting to you
Nothing is saved without your approval.
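The quality gates themselves are natural-language procedures the agent follows, not scripts, but the flavor of one of them (the "secrets" check) can be sketched in a few lines of shell. Everything below is hypothetical and illustrative only; skill-molt ships no code:

```shell
# Hypothetical sketch of a "secrets" quality gate: scan a drafted skill
# for credential-like patterns before it is presented for human review.
skill="draft-skill.md"

# A stand-in drafted skill (in practice, the agent writes this file).
printf '## Workflow\n1. Deploy with `vercel --prod`\n' > "$skill"

# Flag lines that look like hardcoded credentials.
if grep -nEi '(api[_-]?key|secret|token|password)[[:space:]]*[:=]' "$skill"; then
  echo "FAIL: possible secret found"
else
  echo "PASS: no secrets detected"
fi
```

The other five checks (path existence, command availability, relevance, length, description trigger) would follow the same pass/fail shape, which is what lets the agent report a simple checklist before asking for approval.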
The idea builds on Voyager (NeurIPS 2023), where agents accumulate reusable skills from experience, and ACE (ICLR 2026), which shows that treating context as an evolving playbook outperforms static approaches. skill-molt applies these ideas to coding agents — with a human in the loop and natural-language procedures to stay agent-agnostic.
The key insight: non-inferable only
This is the principle that makes skill-molt different from raw session logs:
Only include what the agent could NOT have known before the session.
If it's discoverable from the codebase — README, config files, code comments — it doesn't belong in a skill. Every line passes a filter: "If I deleted this and gave the agent only the codebase, would it figure this out?" If yes, it gets cut.
This keeps skills small, high-signal, and actually useful. No documentation rewrites, no "we use TypeScript" filler.
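As a concrete illustration (hypothetical draft lines, not output from the tool), the filter would cut the first two lines and keep the third:

```
✗ The project uses TypeScript with strict mode.   → discoverable in tsconfig.json; cut
✗ Run `npm test` to run the test suite.           → documented in the README; cut
✓ The e2e tests hang silently without Docker.     → only learnable by hitting it; keep
```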
The molt: skills that grow over time
When a second session adds new knowledge to an existing domain, skill-molt doesn't just append — it sheds outdated content and grows new. Here's a Vercel deployment skill after its second session:
```diff
 ## Workflow
 1. Read `vercel.json` — check that rewrites are ordered before catch-all routes
 2. Verify environment variables: `vercel env ls`
 3. Deploy: `vercel --prod`
-   If deploy fails after config change → `vercel --force --prod`
+4. Check logs: `vercel logs <deployment-url> --all`
+   - Default `vercel logs` only shows last 1000 lines — use `--all` for full output
 5. Verify: check the deployment URL returns 200

 ## Gotchas
 - Build cache persists across deploys. After changing `next.config.js`, use `--force`.
 - Preview and Production environments have separate env vars.
+- Build timeout is 45 minutes. Monorepo builds approaching 40min need optimization.
+- `vercel logs` without `--all` truncates to 1000 lines. Always use `--all` for debugging.
```
Four lines added, one outdated tip shed. The two new gotchas, log truncation and the build timeout, are exactly the kind of knowledge that would have cost the agent (and you) time to rediscover.
Install
```shell
npx skills add konippi/skill-molt
```
Or install manually:

```shell
git clone https://github.com/konippi/skill-molt.git
ln -s /path/to/skill-molt ~/.claude/skills/skill-molt   # Claude Code
ln -s /path/to/skill-molt .kiro/skills/skill-molt       # Kiro CLI
ln -s /path/to/skill-molt .agents/skills/skill-molt     # Codex / Cursor / Gemini CLI
```
Zero dependencies. All Markdown. Works with any agent that supports skills.
Try it out
👉 github.com/konippi/skill-molt
If you find it useful, a ⭐ on GitHub would mean a lot. Contributions are welcome too — see CONTRIBUTING.md.
How do you capture what your coding agent learns during a session? I'd love to hear what's working for you.