thestack_ai
I wrote a tool that diagnoses your Claude Code skills before they break

If you use Claude Code skills, you've probably been there — a skill looks fine, works
sometimes, then silently fails when you actually need it. Missing gotchas section,
description that doesn't trigger properly, 400-line monolith that should've been split
into files.
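For context, a Claude Code skill lives in a SKILL.md file with YAML frontmatter followed by the instructions. A minimal well-formed sketch might look like this (the frontmatter fields follow Anthropic's skills format as I understand it; the skill name and values are invented for illustration):

```markdown
---
name: changelog-writer
description: Generates a changelog entry from recent git commits. Use when
  the user asks to "write a changelog" or "summarize recent changes".
allowed-tools: Read, Grep, Glob
---

# Changelog Writer

Instructions for the model go here.

## Gotchas
- Skip merge commits; they duplicate the underlying changes.
```

The `description` is what the model matches against to decide when to trigger the skill, which is why a missing or human-oriented description fails silently.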

Anthropic published 7 principles for writing good skills but didn't build anything to
enforce them. So I did.

pulser scans your SKILL.md files against 8 rules derived from those principles. But
here's what makes it different from a linter: it doesn't just say "this is wrong." It
tells you why it matters, gives you a ready-to-use template, and auto-fixes it if you
want. And if the fix isn't what you expected, `pulser undo` rolls everything back
instantly.

The whole pipeline is: diagnose, classify the skill type, prescribe with context, fix,
rollback.
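That diagnose-fix-rollback flow can be sketched in a few lines of Python. This is a conceptual illustration, not pulser's actual source; every function and rule name here is hypothetical:

```python
# Conceptual sketch of a diagnose -> fix -> rollback flow.
# (Not pulser's real implementation; all names are invented.)

def diagnose(skill_text: str) -> list[str]:
    """Return a list of rule violations found in a SKILL.md body."""
    issues = []
    if "## Gotchas" not in skill_text:
        issues.append("missing-gotchas-section")
    if len(skill_text.splitlines()) > 300:
        issues.append("monolith-should-be-split")
    return issues

def fix(skill_text: str, issues: list[str], backups: dict) -> str:
    """Apply auto-fixes, snapshotting the original so undo can restore it."""
    backups["SKILL.md"] = skill_text  # save before touching anything
    if "missing-gotchas-section" in issues:
        skill_text += "\n## Gotchas\n- TODO: document known failure modes\n"
    return skill_text

def undo(backups: dict) -> str:
    """Roll back to the pre-fix snapshot."""
    return backups["SKILL.md"]

backups: dict = {}
original = "# My skill\nDoes a thing.\n"
fixed = fix(original, diagnose(original), backups)
assert "## Gotchas" in fixed          # auto-fix applied
assert undo(backups) == original      # rollback restores the exact original
```

The key design point is that fix never mutates in place without a snapshot, which is what makes an instant undo cheap to offer.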
You can run it three ways: in your terminal as a CLI (just type `pulser`), inside Claude
Code as a conversation ("check my skills" or `/pulser` and Claude does the rest), or in
Codex.

I've been running it on my own 54 skills and it caught things I would've never noticed:
skills with overlapping trigger keywords stepping on each other, descriptions written
for humans instead of the model, missing tool restrictions that could let a read-only
skill accidentally write files.
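The overlapping-trigger check can be illustrated with a simple keyword-similarity comparison: two skills whose descriptions share most of their keywords will compete for the same requests. This is a hypothetical sketch of the idea, not pulser's real check:

```python
# Hypothetical sketch: flag skill descriptions whose keyword sets overlap
# heavily, since they may "step on each other" when the model picks a skill.

def keywords(description: str) -> set[str]:
    """Lowercase word set, ignoring short stopword-ish tokens."""
    return {w for w in description.lower().split() if len(w) > 3}

def overlap(desc_a: str, desc_b: str) -> float:
    """Jaccard similarity between two descriptions' keyword sets."""
    a, b = keywords(desc_a), keywords(desc_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

score = overlap(
    "Formats JSON files and validates JSON schemas",
    "Validates JSON payloads against schemas",
)
assert score > 0.3  # high overlap: these two skills may collide
```

A real linter would likely weight rare words more heavily (e.g. TF-IDF), but even plain set overlap surfaces the worst collisions.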

It's free, MIT-licensed, and a single npm install away:

```shell
npm install -g pulser-cli
```

github.com/whynowlab/pulser
