I've been building Claude Code skills full-time for the past month. 24 skills total. Some are trivial — one-trick prompt wrappers. Others are 500+ line systems that genuinely change how I work.
Last week, I open-sourced four of my favorites. Here's what I learned about building skills that actually do something useful, and the architecture patterns that separate real tools from demo toys.
What's a Claude Code Skill?
If you haven't used Claude Code yet: skills are reusable prompt+tool configurations that extend what Claude can do in your terminal. Think of them like shell scripts, but instead of bash commands, they orchestrate Claude's tool calls — file reads, edits, web searches, bash execution — into repeatable workflows.
The key insight: skills are not prompts. A prompt says "review this code." A skill says "read every changed file in this PR, check for SQL injection in any raw query, verify all new endpoints have auth middleware, compare the diff against the base branch, and output a structured report with line numbers."
That's the gap between a toy and a tool.
The 4 Skills I Open-Sourced
1. Dependency Auditor
Repo: claude-dependency-auditor
Runs npm audit, pip-audit, cargo audit, and govulncheck — then actually filters the noise. Most audit tools dump 200 findings where 190 are false positives or dev-only dependencies you'll never ship. This skill:
- Separates production vs dev dependencies
- Filters findings by actual exploitability (not just CVSS score)
- Auto-generates fix PRs for safe upgrades
- Produces an SBOM if you need compliance docs
The architecture pattern here: multi-tool orchestration with filtering. The skill runs 4 different audit tools, normalizes their output into a common schema, then applies heuristic filters before presenting results.
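The normalize-then-filter step can be sketched like this. The field names and input shapes below are simplified assumptions for illustration, not the real JSON schemas that npm audit or pip-audit emit:

```python
# Sketch: normalize findings from multiple audit tools into one schema,
# then filter. Input shapes are invented stand-ins for the tools' real output.

def normalize_npm(finding):
    return {
        "tool": "npm",
        "package": finding["name"],
        "severity": finding["severity"],
        "dev_only": finding.get("dev", False),
    }

def normalize_pip(finding):
    return {
        "tool": "pip-audit",
        "package": finding["name"],
        "severity": finding.get("severity", "unknown"),
        "dev_only": False,  # assume production unless we know otherwise
    }

def filter_findings(findings, include_dev=False):
    """Drop dev-only findings unless explicitly requested."""
    return [f for f in findings if include_dev or not f["dev_only"]]

npm_raw = [{"name": "lodash", "severity": "high", "dev": False},
           {"name": "jest", "severity": "low", "dev": True}]
pip_raw = [{"name": "flask", "severity": "medium"}]

all_findings = [normalize_npm(f) for f in npm_raw] + \
               [normalize_pip(f) for f in pip_raw]
production = filter_findings(all_findings)
```

Once everything shares one schema, the exploitability heuristics only have to be written once instead of four times.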
```shell
# Install
cp dependency-auditor.md ~/.claude/skills/

# Use
claude "audit dependencies in this project"
```
2. Project Init Wizard
Repo: claude-project-init-wizard
Auto-detects your tech stack and generates an optimized CLAUDE.md for any repo. Scans package.json, Cargo.toml, go.mod, pyproject.toml, Dockerfiles, CI configs — then produces a CLAUDE.md with:
- Project architecture overview
- Recommended skills for that stack
- Custom hooks for common operations
- Git conventions matching existing commit history
Architecture pattern: inspection → inference → generation. The skill reads 15-20 config files, builds a mental model of the project, then generates configuration that fits.
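The inspection step boils down to mapping marker files to stack labels. A minimal sketch (the mapping is illustrative; a real skill would also parse the files' contents):

```python
# Sketch: infer a project's stack from which config files exist.
MARKERS = {
    "package.json": "node",
    "Cargo.toml": "rust",
    "go.mod": "go",
    "pyproject.toml": "python",
    "Dockerfile": "docker",
}

def detect_stack(filenames):
    """Map config files found at the repo root to stack labels."""
    return sorted({MARKERS[f] for f in filenames if f in MARKERS})
```

The inference and generation phases then branch on these labels: a `["go", "docker"]` repo gets different recommended skills and hooks than a `["python"]` one.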
3. Session Continuity
Repo: claude-session-continuity
The anti-amnesia system. Claude Code has a context window. When it fills up, older messages get compressed. If you're 3 hours into a complex refactor and the context compresses, you lose your mental model.
This skill auto-checkpoints your working state to a .claude-session/ directory:
- Current task and subtasks
- Files modified and why
- Decisions made and alternatives rejected
- Next steps planned
When context compresses or you start a new session, Claude reads the checkpoint and picks up exactly where it left off.
Architecture pattern: state serialization with semantic compression. The skill doesn't just dump raw data — it captures the reasoning behind your current state.
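A checkpoint like that could be as simple as structured JSON. The layout below is my guess at what a `.claude-session/` file might hold, not the skill's actual format:

```python
# Sketch: serialize working state so a fresh session can resume it.
# The checkpoint layout is an assumption, not the skill's real schema.
import json

def write_checkpoint(path, task, files_modified, decisions, next_steps):
    state = {
        "task": task,
        "files_modified": files_modified,  # {path: why it was touched}
        "decisions": decisions,            # choices made and alternatives rejected
        "next_steps": next_steps,
    }
    with open(path, "w") as f:
        json.dump(state, f, indent=2)

def read_checkpoint(path):
    with open(path) as f:
        return json.load(f)
```

The "semantic compression" part is in what you choose to store: the *why* behind each modified file survives, while the raw conversation transcript does not.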
4. Awesome Claude Code (Meta-List)
Repo: awesome-claude-code
A curated list of skills, hooks, plugins, and agent orchestrators for Claude Code.
Architecture Patterns for Skills That Actually Work
After building 24 of these, patterns emerge:
Pattern 1: Read Before You Write
Bad skills jump straight to generating code. Good skills read the existing codebase first. My code review skill reads every changed file, the test suite structure, the CI config, and recent commit messages before it writes a single line of review.
Implementation: Always start your skill with a discovery phase. Use Glob to find relevant files, Read to understand them, Grep to find patterns. Only then generate output.
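The Glob → Read → Grep sequence maps directly onto stdlib equivalents. A sketch of that discovery phase (the pattern and needle are placeholders):

```python
# Sketch: discovery before generation — find files, read them, locate
# a pattern, and return hits with line numbers.
import glob
import re

def discover(root, file_pattern="**/*.py", needle=r"TODO"):
    hits = {}
    for path in glob.glob(f"{root}/{file_pattern}", recursive=True):
        with open(path, errors="ignore") as f:
            text = f.read()
        matches = [i + 1 for i, line in enumerate(text.splitlines())
                   if re.search(needle, line)]
        if matches:
            hits[path] = matches  # line numbers where the pattern appears
    return hits
```

Everything the skill generates afterward can then cite concrete files and line numbers instead of guessing at the codebase's shape.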
Pattern 2: Verification Loops
The best skills verify their own output. My Security Scanner skill doesn't just find potential vulnerabilities — it tries to construct a proof-of-concept for each finding. Findings that can't be reproduced get downgraded to "informational."
This is the difference between a tool that creates work (triaging false positives) and one that eliminates work (verified findings only).
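The triage loop is simple once verification is a pluggable step. Here `verify_finding` is a stand-in for real PoC construction, which is skill-specific:

```python
# Sketch: downgrade findings that can't be reproduced.
# verify_finding is a placeholder for actual PoC construction.

def triage(findings, verify_finding):
    for f in findings:
        f["confidence"] = "verified" if verify_finding(f) else "informational"
    return findings

findings = [{"id": "SQLI-1"}, {"id": "XSS-2"}]
triaged = triage(findings, lambda f: f["id"] == "SQLI-1")
```

The report can then lead with verified findings and tuck everything else into an appendix the reader can skip.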
Pattern 3: Progressive Disclosure
Don't dump everything at once. My Dashboard Builder skill first shows a summary: "Found 47 metrics across 3 services. Recommend 4 dashboards." Then it asks which to build. Then it builds one at a time, showing previews.
Users want control over complex operations. Let them steer.
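The summarize-first flow can be sketched like this. The metric/dashboard grouping is invented for illustration, not how the actual skill partitions work:

```python
# Sketch: progressive disclosure — summarize everything first, then build
# one dashboard at a time on request. Grouping logic is illustrative.

def summarize(metrics):
    services = sorted({m["service"] for m in metrics})
    return (f"Found {len(metrics)} metrics across {len(services)} services. "
            f"Recommend {len(services)} dashboards.")

def build_dashboard(metrics, service):
    panels = [m["name"] for m in metrics if m["service"] == service]
    return {"service": service, "panels": panels}

metrics = [{"service": "api", "name": "latency_p99"},
           {"service": "api", "name": "error_rate"},
           {"service": "db", "name": "connections"}]
```

The user sees the summary, picks a service, and only then does the skill generate the full dashboard spec.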
Pattern 4: Fail Forward
Skills should handle errors gracefully. If npm audit fails because there's no package.json, the dependency auditor doesn't crash — it skips npm and tries the other ecosystems.
Implementation: Wrap each tool call in error handling. Collect partial results. Report what worked and what didn't.
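The collect-partial-results loop looks like this. The auditor callables are stand-ins for the real tool invocations:

```python
# Sketch: fail forward — run each ecosystem's audit independently,
# collect partial results, and report what was skipped instead of crashing.

def audit_all(auditors):
    results, skipped = {}, {}
    for name, run in auditors.items():
        try:
            results[name] = run()
        except FileNotFoundError as e:  # e.g. no package.json in this repo
            skipped[name] = str(e)
    return results, skipped

def npm_audit():
    raise FileNotFoundError("package.json not found")

def pip_audit():
    return []  # ran fine, no findings

results, skipped = audit_all({"npm": npm_audit, "pip": pip_audit})
```

The final report then has two sections: findings from the ecosystems that ran, and a short note on the ones that were skipped and why.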
The Ones I Sell
Three skills are available on Gumroad. These are more complex — they handle edge cases the open-source versions don't, include better documentation, and get updates:
Security Scanner ($10) — Finds real vulnerabilities with PoC generation. I've used this to find bugs in MLflow, Gradio, and LlamaIndex.
Dashboard Builder ($7) — Generates monitoring dashboards for SigNoz, Grafana, and similar platforms from metrics specs. I shipped 12 SigNoz dashboard PRs using this.
API Connector ($7) — Builds API integration plugins for platforms like Keep, Onyx, Cal.com. Follows existing patterns in the target repo automatically.
The free skills are genuinely useful on their own. The paid ones are for people who do this work professionally and want the edge cases handled.
How to Build Your Own
The fastest path:
- Install Claude Code
- Create ~/.claude/skills/your-skill.md
- Write a system prompt that describes the workflow
- Include concrete examples of input → output
- Test with real projects, not toy examples
The skill format is just markdown with frontmatter. No SDK, no build step, no deployment. Drop a file and it works.
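For orientation, a minimal skill file might look like the following. The frontmatter fields and workflow are illustrative; check the Claude Code documentation for the exact schema:

```markdown
---
name: changelog-drafter
description: Draft a changelog entry from recent commits
---

When the user asks for a changelog entry:

1. Run `git log --oneline` since the last tag to collect commits.
2. Group commits into Added / Fixed / Changed.
3. Output a markdown changelog section, and ask before writing
   anything to CHANGELOG.md.
```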
The hard part isn't the format — it's designing a workflow that actually saves time. Start with something you do manually 3+ times per week. Automate the boring parts. Keep the judgment calls for the human.
What's Next
I'm building skills for Terraform review, Kubernetes debugging, and E2E test generation. If you want to follow along or contribute, the repos are all on my GitHub.
The Claude Code skill ecosystem is early. The best skills haven't been built yet. If you have a repetitive workflow that involves reading code, making decisions, and producing structured output — that's a skill waiting to happen.
If you build something cool, open a PR on awesome-claude-code. The list is small now but growing fast.