DEV Community

myougaTheAxo

# Claude Code's Hidden Superpower: Custom Slash Commands That Follow Your Exact Workflow

I've been using Claude Code as my primary AI dev tool for several months. Most developers I talk to use it in the obvious way: open a project, describe a problem, get a solution.

That works. But there's a feature that most tutorials completely skip over, and it changes how you interact with the tool entirely.

It's called the skill system. Once you understand it, you'll stop rewriting the same prompts over and over — and you'll start getting consistent, predictable output from Claude instead of different results each session.

## What Are Custom Skills?

A skill is a Markdown file that defines a reusable workflow for Claude Code. Drop it in `.claude/skills/<skill-name>/SKILL.md` and it immediately becomes a slash command you can invoke from the Claude Code prompt.

The structure is minimal:

```markdown
---
name: review
description: "3-axis code review (security, performance, readability)"
allowed-tools: Read, Grep, Glob, Bash
---

## Instructions

[Your workflow here]
```

No configuration files. No build step. No installation. When you type `/review` in Claude Code, it reads this file and executes the instructions exactly as written.

You can add an `argument-hint` field to the front matter to display a usage hint when the skill is invoked, and an `allowed-tools` field to specify which tools Claude may use. That second part matters more than it seems — it prevents Claude from reaching outside the boundaries you've defined.
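For example, front matter using both optional fields might look like this (the values here are illustrative, not prescriptive):

```markdown
---
name: review
description: "3-axis code review (security, performance, readability)"
argument-hint: "[file path or PR number]"
allowed-tools: Read, Grep, Glob, Bash
---
```

The `argument-hint` string is shown next to the command as a prompt for what to pass; `allowed-tools` is the boundary list.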

## The Real Problem This Solves

The core frustration with LLM-based dev tools is inconsistency. Without explicit instructions, Claude might:

  • Skip steps in your review process depending on how you phrase the request
  • Use a different output format each session, making it hard to skim results
  • Add unrequested refactoring on top of what you asked for
  • Miss your specific quality criteria because you forgot to mention them

This isn't a flaw in the model — it's a natural consequence of ad-hoc prompting. When the instructions are different every time, the output will be too.

Skills solve this by locking the process in. Every invocation runs the same checklist, produces the same output format, and stays within the tools you've defined. Claude Code goes from "smart autocomplete" to "programmable workflow executor."

The larger pattern: structured AI workflows outperform ad-hoc prompting for repeatable tasks. A skill file is essentially a runbook. The same reason humans use checklists for high-stakes tasks applies here.

## A Real Review Skill, Full Source

Here's the `/review` skill I actually use in production. This is the exact file, not a simplified version:

````markdown
---
name: review
description: Review a PR or specified file across 3 axes -- Security, Performance, and Readability
argument-hint: "[file path or PR number (omit for current staged/unstaged changes)]"
allowed-tools: Read, Grep, Glob, Bash
---

## Identify Review Target

Determine input from arguments:
- `$ARGUMENTS` is a number -> fetch PR diff with `gh pr diff $ARGUMENTS`
- `$ARGUMENTS` is a file path -> read that file
- `$ARGUMENTS` is empty -> use `git diff --cached` (staged changes). If no staged changes, fall back to `git diff`

## Review Procedure

### 1. Understand the Changes
- List changed files and line counts
- Infer the purpose of the changes (new feature / bug fix / refactor / config change)

### 2. Security Check
Review for the following concerns:
- SQL injection (query built via string concatenation?)
- XSS (user input rendered without escaping?)
- Command injection (external input passed to shell commands?)
- Authentication/authorization gaps (access control appropriate?)
- Hardcoded secrets (API keys, passwords, etc.)
- Path traversal (user input used in file paths?)

### 3. Performance Check
- N+1 query potential
- Unnecessary loops or O(n^2)+ complexity
- Memory leaks (event listener cleanup, large object retention)
- Unnecessary re-renders (React: useEffect deps, useMemo/useCallback)
- Impact on bundle size

### 4. Readability & Maintainability Check
- Do function/variable names convey intent?
- Excessive nesting (3+ levels)?
- Magic numbers that should be constants?
- Duplicated code?
- Appropriate error handling?

## Output Format

```markdown
## Review Results

### Overall Verdict: LGTM / Needs Changes / Blocker Found

### Security
- (Specific issues if found, otherwise "No issues")

### Performance
- (Specific issues if found, otherwise "No issues")

### Readability & Maintainability
- (Improvement suggestions if any)

### Positive Notes
- (Actively highlight good practices)
```

Important: Keep "No issues" brief for clean areas. Provide concrete code examples for areas needing improvement.
````
A few things worth pointing out in this skill:

**The `$ARGUMENTS` dispatch block** at the top handles three different input cases: a PR number, a file path, or nothing (defaults to staged changes). One skill, three use cases. You invoke it as `/review`, `/review path/to/file.ts`, or `/review 42` for a PR number.
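As a rough illustration, the three-way dispatch can be written out as a tiny POSIX shell classifier. This is a hypothetical sketch — Claude interprets the Markdown rules directly, no script is involved — but it makes the branching explicit:

```shell
# Hypothetical classifier mirroring the skill's $ARGUMENTS dispatch rules.
# Claude Code reads the Markdown itself; this only illustrates the logic.
classify_argument() {
  case "$1" in
    "")        echo "staged-diff" ;;  # empty -> git diff --cached (fallback: git diff)
    *[!0-9]*)  echo "file" ;;         # contains a non-digit -> treat as a file path
    *)         echo "pr-number" ;;    # all digits -> gh pr diff "$1"
  esac
}

classify_argument ""          # → staged-diff
classify_argument 42          # → pr-number
classify_argument src/api.ts  # → file
```

The useful property is that the empty case is a sensible default rather than an error, which is what makes `/review` with no arguments the most common invocation.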

**The explicit checklist** is what makes this valuable. "Check for security issues" is vague. "Check for SQL injection via string concatenation, XSS via unescaped user input, command injection via external input passed to shell" is not. The specificity is the point.

**The output format section** means every review result has the same structure. After running this dozens of times, you can skim the output in seconds. You know exactly where to look.

I've been running this on every commit for months. It's caught real issues: a missing auth check in a route I added quickly, an N+1 query inside a loop, a hardcoded API key I dropped in "temporarily." Nothing revolutionary, but consistent coverage I wouldn't have gotten from ad-hoc prompting.

## Why Consistency Changes Everything

Here's the shift in mental model that skills enable: when you run `/review` fifty times, you start to build intuition about what this reviewer catches and what it doesn't.

Ad-hoc prompting doesn't give you that. Each session is different enough that you can't build a reliable mental model of your AI assistant's behavior. Skills make the behavior stable enough to reason about.

This matters for team use too. If `/review` is a shared skill in your repo's `.claude/skills/` directory, every developer on the team runs the same review checklist. Onboarding a new developer means they get your team's quality standards automatically, not after months of code review feedback.

## The Argument System

Skills receive arguments via `$ARGUMENTS`. You can use this to make a single skill handle multiple cases, as the review skill demonstrates.

A few patterns I've found useful:

For a debug skill:

```markdown
- $ARGUMENTS contains a stack trace → analyze the trace, identify the origin, suggest fixes
- $ARGUMENTS is an error message → search the codebase for the likely source
- $ARGUMENTS is empty → ask what the error is before proceeding
```

For a doc-gen skill:

```markdown
- $ARGUMENTS is a file path → generate documentation for that specific file
- $ARGUMENTS is a directory → generate docs for all files in that directory
- $ARGUMENTS is empty → ask which files to document
```

The pattern: handle the obvious cases explicitly, define a sensible fallback. This makes skills robust to slight variations in how you invoke them without making the instructions vague.

## How to Build Your Own

The fastest path to a useful skill:

  1. Think of something you prompt Claude for more than twice a week
  2. Write down the exact steps you'd want Claude to follow — not "review the code," but the specific checklist you actually care about
  3. Create `.claude/skills/<name>/SKILL.md` with that process
  4. Invoke it with `/<name>` in Claude Code
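The steps above can be scaffolded from the command line in a few seconds. The skill body below is a placeholder, not a real checklist:

```shell
# Scaffold a new skill following the path convention from this article.
mkdir -p .claude/skills/review

# Write a minimal SKILL.md; replace the placeholder with your actual checklist.
cat > .claude/skills/review/SKILL.md <<'EOF'
---
name: review
description: "3-axis code review (security, performance, readability)"
allowed-tools: Read, Grep, Glob, Bash
---

## Instructions

[Your checklist here]
EOF
```

After this, `/review` is available the next time you open Claude Code in that project.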

The quality of the skill is directly proportional to the specificity of your instructions. "Review this" → inconsistent output. "Check these six security patterns, then these four performance issues, output in this exact format" → consistent output you can rely on.

A good target: if you can write the skill instructions as a numbered checklist that a competent human developer could follow without asking clarifying questions, Claude can follow it the same way.

Start with one skill. See what the output looks like. Iterate. The review skill above didn't start in its current form — it went through several rounds of refinement based on what it missed and what was noisy.

## Skills vs. CLAUDE.md

There's a related concept worth distinguishing, since both live in the `.claude/` directory:

**CLAUDE.md**: Project-level context that Claude always has loaded in the background. Use it for project architecture, code conventions, "don't modify X," and cross-file relationships. This is your permanent project memory.

**Skills**: Explicit workflows you invoke by name. Use them for multi-step processes, repeatable tasks, anything requiring a consistent output format. These are your named procedures.

A solid setup uses both: CLAUDE.md tells Claude about your project at all times, and skills handle your most common task patterns on demand. They're complementary, not alternatives.
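A combined setup might look like the following layout (a sketch following the article's convention; the `debug` skill is a hypothetical second example):

```
.claude/
├── CLAUDE.md            # always-loaded project context
└── skills/
    ├── review/SKILL.md  # invoked as /review
    └── debug/SKILL.md   # invoked as /debug
```

Context that should apply to every session goes in the top file; anything you want to trigger deliberately gets its own skill directory.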

## Getting Started Today

The fastest experiment:

  1. Create `.claude/skills/review/SKILL.md`
  2. Paste the full review skill from the code block above
  3. Adapt the checklist to your actual concerns (if you're not using React, remove the React-specific items; if you care about a specific security issue not listed, add it)
  4. Run `/review` on code you're about to commit

You'll immediately see whether the output matches your standards. The first run tells you what to adjust. Iterate from there.

The skill system rewards specificity. Vague instructions produce vague output. A precise runbook produces precise, consistent results — and that consistency is what makes AI assistance actually trustworthy for repeatable work.


If you'd rather skip the iteration and start with a tested set, I packaged 7 skills (review, debug, refactor, doc-gen, test-gen, migrate, perf-audit) into a ready-to-use skill pack on Gumroad — copy the folders into `.claude/skills/` and all seven commands are live. But the pattern above gives you everything you need to build your own.

— myouga the axolotl

Building Claude Code tools and writing about what actually works. Find me at gumroad.com/myougaTheAxo.

