Alan West
How to Build Custom Claude Code Skills That Actually Work

Have you ever stared at Claude Code thinking "this thing is amazing at generic coding tasks, but it has no idea how my team actually works"? Yeah, same.

I spent a good chunk of last month trying to get Claude Code to follow our internal diagnostic workflow for debugging production issues. Out of the box, it kept giving me generic advice when I needed it to follow a very specific runbook. Turns out, the answer was custom Skills — but getting them to work properly took some trial and error.

Here's what I learned.

The Problem: Claude Code Doesn't Know Your Workflow

Claude Code is great at general-purpose development tasks. But every team has specialized workflows — business diagnostics, incident response runbooks, code review checklists, deployment procedures. When you try to get Claude Code to follow these, you end up copy-pasting the same giant prompt over and over.

The real frustration hits when you realize:

  • Your prompts drift over time (everyone has a slightly different version)
  • Context gets lost between conversations
  • New team members don't know which prompts to use
  • You can't version control ad-hoc prompts easily

Root Cause: You Need Reusable, Shareable Prompt Packages

Claude Code supports a Skills system that lets you package reusable prompts as slash commands. Think of them as specialized modes you can activate. The underlying mechanism is straightforward — Skills are markdown files with frontmatter that Claude Code loads on demand.

But here's where people get tripped up: the file structure and configuration have to be exactly right, or your skill silently fails to load.

Step-by-Step: Creating Your First Custom Skill

Step 1: Set Up the Directory Structure

Skills live in a .claude/skills/ directory, either in your project root or in your home directory for global skills.

```shell
# Project-level skills (shared with your team via git)
mkdir -p .claude/skills/

# Or global skills (just for you)
mkdir -p ~/.claude/skills/
```

The key thing people miss: project-level skills take precedence over global ones if they share the same name. This is actually useful for overriding defaults per-project.

Step 2: Write the Skill File

Create a markdown file with YAML frontmatter. Here's a real example — a diagnostic skill that walks through a structured debugging process:

```markdown
---
name: diagnose
description: "Run a structured diagnostic on the current project"
user_invocable: true
---

You are now in diagnostic mode. Follow these steps exactly:

1. **Identify the symptom**: Ask the user to describe the issue in one sentence
2. **Check recent changes**: Run `git log --oneline -10` and identify
   any commits that could be related
3. **Verify the environment**: Check Node version, dependency versions,
   and environment variables that might affect behavior
4. **Reproduce**: Attempt to reproduce the issue with a minimal test case
5. **Root cause**: Based on findings, identify the most likely root cause
6. **Fix and verify**: Propose a fix, apply it, and run relevant tests

Always explain your reasoning at each step. If you're unsure about
a root cause, say so and list the top 3 possibilities ranked by
likelihood.
```

Save this as .claude/skills/diagnose.md.

Step 3: Reference It in Your Settings

Skills in .claude/skills/ are normally discovered automatically, so there's usually nothing to register. But if your skill isn't showing up, check your project's .claude/settings.json and make sure nothing there is explicitly excluding the path. A minimal settings file looks like this:

```json
{
  "permissions": {
    "allow": [
      "Read",
      "Glob",
      "Grep"
    ]
  }
}
```

The skill should now be available as /diagnose in your Claude Code session.

Step 4: Test and Iterate

Here's the part nobody tells you: your first version will be too vague. Claude Code interprets skill prompts literally, so you need to be specific about:

  • What tools it should use (and in what order)
  • When to stop and ask for input vs. proceed autonomously
  • What format the output should be in
  • Edge cases it should handle

Here's a second example that puts all of that into practice — a code review skill:

```markdown
---
name: review-pr
description: Structured code review following team standards
user_invocable: true
---

Perform a code review on the current changes. Check for:

## Security
- SQL injection vectors (parameterized queries only)
- XSS in any user-facing output
- Secrets or credentials in code or config

## Performance
- N+1 query patterns in database calls
- Missing indexes for new query patterns
- Unbounded list operations without pagination

## Style
- Follow existing patterns in the codebase — do NOT suggest
  refactors outside the scope of the PR
- Flag any function longer than 50 lines

Output your review as a list of findings, each with:
- **File and line**: exact location
- **Severity**: critical / warning / nit
- **What**: description of the issue
- **Why**: why this matters
- **Fix**: suggested fix (with code if applicable)
```

See how specific that second example is compared to the first? That's the difference between a skill that's actually useful and one that gives you generic fluff.

Common Debugging Issues

Skill doesn't appear as a slash command:

  • Check that user_invocable: true is set in the frontmatter
  • Verify the file extension is .md
  • Make sure the file is in the correct directory
  • Restart your Claude Code session (skills are loaded at startup)
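Before restarting for the fifth time, a quick shell check covers the first three bullets. The path and frontmatter field here match the diagnose example from earlier — adjust them to your own skill:

```shell
# Sanity-check a skill file that isn't showing up as a slash command.
skill=.claude/skills/diagnose.md

[ -f "$skill" ] || echo "missing: $skill"
case "$skill" in
  *.md) ;;                                  # extension is fine
  *) echo "wrong extension: $skill" ;;
esac
grep -q '^user_invocable: true' "$skill" 2>/dev/null \
  || echo "user_invocable: true not found in frontmatter"
```

If all three checks pass silently and the skill still isn't listed, restart the session — that's the fourth bullet.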

Skill loads but ignores instructions:

  • Your prompt is probably too ambiguous. Add explicit constraints
  • Use imperative language: "You MUST" instead of "Try to"
  • Break complex workflows into numbered steps

Skill conflicts with another skill:

  • Check for duplicate names across project and global directories
  • Use unique, descriptive names (not just review or fix)
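One way to spot shadowed names is to compare the two directories directly (assuming the default locations from Step 1):

```shell
# List skill files that exist in BOTH the project and global directories.
# Any name printed here is shadowed: the project copy wins.
ls .claude/skills/ 2>/dev/null | sort > /tmp/project-skills.txt
ls "$HOME/.claude/skills/" 2>/dev/null | sort > /tmp/global-skills.txt
comm -12 /tmp/project-skills.txt /tmp/global-skills.txt
```

Empty output means no collisions.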

Sharing Skills With Your Team

The real power move is committing your .claude/skills/ directory to your repo. Now everyone on the team has the same diagnostic workflows, review checklists, and debugging runbooks.

```shell
git add .claude/skills/
git commit -m "Add team Claude Code skills for diagnostics and review"
```

One thing I'd recommend: add a short README in the skills directory explaining what each skill does and when to use it. Future you will thank present you.
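Something like this works as a starting point — the skill names and blurbs are placeholders for whatever you've actually built:

```shell
# Drop a short index into the skills directory so the next person
# knows what exists and when to reach for it.
mkdir -p .claude/skills
cat > .claude/skills/README.md <<'EOF'
# Team Claude Code Skills

- /diagnose  — structured debugging runbook for production issues
- /review-pr — code review against our security/perf/style checklist

One skill per .md file, one job per skill. See diagnose.md for the format.
EOF
```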

Prevention: Design Skills That Stay Useful

A few principles I've landed on after building about a dozen of these:

  • Keep skills focused. One skill, one job. A "do everything" skill is just a worse version of no skill at all.
  • Version your prompts. Treat skill files like code — review changes, test them, don't just yolo-push prompt tweaks.
  • Include escape hatches. Tell Claude Code when it should stop and ask for clarification rather than guessing.
  • Document assumptions. If your skill assumes a specific project structure or toolchain, say so in the prompt.

Custom skills won't solve every problem with AI-assisted development, but they do solve the "I keep telling it the same thing" problem. And honestly, that alone makes them worth the setup time.

The Claude Code documentation has more details on the skills system and other customization options. Start with one simple skill, get it working, and build from there.
