DEV Community

Oluwawunmi Adesewa
How to Activate Claude Skills Automatically: 2 Fixes for 95% Activation

The point of Claude Skills in Claude Code is autonomous skill activation. You define a skill and Claude automatically triggers it when relevant.

But if you've actually tried building with Skills, you know that's not how it behaves by default.

They don't trigger consistently. Even when the trigger conditions look correct, Claude often ignores them.

While you can use workarounds like slash commands or stacking rules in the CLAUDE.md file, these methods lose the core benefit: skills firing on their own.

If you want to make Claude Code skills activate automatically instead of calling them yourself, this tutorial shows exactly how to set them up so they fire reliably.


Why Claude Skills Don't Trigger

The issue isn’t that Claude Skills don’t work; they do. The problem is when they activate. Activation depends on Claude’s interpretation of your prompt: it decides whether a skill is relevant, and sometimes it guesses wrong. That’s why skills don’t always fire when you expect.

So, instead of leaving activation up to chance, we can guide Claude so skills trigger automatically every time.

Here's how:


Fix 1: Detection Hook + Trigger Rules

Ironically, we owe this fix to Claude hooks.

For this fix, we use the UserPromptSubmit hook, which runs every time you send a prompt to Claude, before it responds. This lets us inject instructions into every prompt Claude sees, giving us reliable control over which skills run.

Here's how it works:

  1. You type a prompt that should trigger a skill.
  2. The UserPromptSubmit hook runs.
  3. The hook reads your prompt and checks it against skill-rules.json.
  4. If there’s a match, the hook adds a small instruction telling Claude which skill to activate.
  5. Claude sees your prompt with the hook’s instruction.
  6. The correct skill runs automatically.
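Concretely, Claude Code passes the submitted prompt to the hook as JSON on stdin, and whatever the hook prints to stdout is appended to the prompt as extra context. A simplified stdin payload (field values are illustrative) looks like this:

```json
{
  "session_id": "abc123",
  "transcript_path": "/tmp/transcript.jsonl",
  "cwd": "/home/user/project",
  "permission_mode": "default",
  "prompt": "validate this user input"
}
```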

This setup lets us reliably activate skills without manually calling them.

Next, we’ll walk through an example using the valibot-usage validation skill to show exactly how to configure the hook and trigger rules.

What We Need the Skill to Do

We need the skill to:

  • Guide Claude to use composable validators like v.pipe()

  • Ensure proper TypeScript type inference

  • Make skill activation reliable whenever validation is mentioned

Why valibot? It only loads the validators Claude actually uses, so token usage stays low and the skill runs fast. This matters when you're triggering skills throughout a coding session.

What We Need to Install

To make this work automatically, we’ll set up:

  1. UserPromptSubmit hook – runs on every prompt you send to Claude

  2. Configuration file – tells the hook when to suggest skills

  3. Settings update – activates the hook


Step 1 - Install the Skill

Note: Ensure you have the valibot library installed.

Copy the valibot-usage skill into your .claude/skills/ directory.

Step 2 - Create the Hook Files

To make the UserPromptSubmit hook work for skill activation, you need two files in your .claude/hooks/ directory. Each file has a clear role:

  1. .claude/hooks/skill-activation-prompt.sh – a shell script that runs automatically whenever you submit a prompt. Its job is to invoke the TypeScript script that handles the hook logic.

  2. .claude/hooks/skill-activation-prompt.ts – contains the logic that reads your prompt and checks it against your skill-trigger rules defined in a JSON file. If a rule matches, it injects instructions into the prompt so Claude can activate the correct skill.

This separation keeps things modular: the shell script handles execution, the TypeScript script handles logic, and the JSON file holds the actual rules. Together, they let the hook run on every prompt and guide Claude reliably.

File 1: .claude/hooks/skill-activation-prompt.sh

#!/bin/bash
set -e

cd "$CLAUDE_PROJECT_DIR/.claude/hooks"
cat | npx tsx skill-activation-prompt.ts

Make it executable:

chmod +x .claude/hooks/skill-activation-prompt.sh

File 2: .claude/hooks/skill-activation-prompt.ts

#!/usr/bin/env node
import { readFileSync } from 'fs';
import { join } from 'path';

interface HookInput {
    session_id: string;
    transcript_path: string;
    cwd: string;
    permission_mode: string;
    prompt: string;
}

interface PromptTriggers {
    keywords?: string[];
    intentPatterns?: string[];
}

interface SkillRule {
    type: 'guardrail' | 'domain';
    enforcement: 'block' | 'suggest' | 'warn';
    priority: 'critical' | 'high' | 'medium' | 'low';
    promptTriggers?: PromptTriggers;
}

interface SkillRules {
    version: string;
    skills: Record<string, SkillRule>;
}

interface MatchedSkill {
    name: string;
    matchType: 'keyword' | 'intent';
    config: SkillRule;
}

async function main() {
    try {
        const input = readFileSync(0, 'utf-8');
        const data: HookInput = JSON.parse(input);
        const prompt = data.prompt.toLowerCase();

        const projectDir = process.env.CLAUDE_PROJECT_DIR || process.cwd();
        const rulesPath = join(projectDir, '.claude', 'skills', 'skill-rules.json');
        const rules: SkillRules = JSON.parse(readFileSync(rulesPath, 'utf-8'));

        const matchedSkills: MatchedSkill[] = [];

        for (const [skillName, config] of Object.entries(rules.skills)) {
            const triggers = config.promptTriggers;
            if (!triggers) {
                continue;
            }

            if (triggers.keywords) {
                const keywordMatch = triggers.keywords.some(kw =>
                    prompt.includes(kw.toLowerCase())
                );
                if (keywordMatch) {
                    matchedSkills.push({ name: skillName, matchType: 'keyword', config });
                    continue;
                }
            }

            if (triggers.intentPatterns) {
                const intentMatch = triggers.intentPatterns.some(pattern => {
                    const regex = new RegExp(pattern, 'i');
                    return regex.test(prompt);
                });
                if (intentMatch) {
                    matchedSkills.push({ name: skillName, matchType: 'intent', config });
                }
            }
        }

        if (matchedSkills.length > 0) {
            let output = '━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n';
            output += '🎯 SKILL ACTIVATION CHECK\n';
            output += '━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n\n';

            const critical = matchedSkills.filter(s => s.config.priority === 'critical');
            const high = matchedSkills.filter(s => s.config.priority === 'high');
            const medium = matchedSkills.filter(s => s.config.priority === 'medium');
            const low = matchedSkills.filter(s => s.config.priority === 'low');

            if (critical.length > 0) {
                output += '⚠️ CRITICAL SKILLS (REQUIRED):\n';
                critical.forEach(s => output += `  → ${s.name}\n`);
                output += '\n';
            }

            if (high.length > 0) {
                output += '📚 RECOMMENDED SKILLS:\n';
                high.forEach(s => output += `  → ${s.name}\n`);
                output += '\n';
            }

            if (medium.length > 0) {
                output += '💡 SUGGESTED SKILLS:\n';
                medium.forEach(s => output += `  → ${s.name}\n`);
                output += '\n';
            }

            if (low.length > 0) {
                output += '📌 OPTIONAL SKILLS:\n';
                low.forEach(s => output += `  → ${s.name}\n`);
                output += '\n';
            }

            output += 'ACTION: Use Skill tool BEFORE responding\n';
            output += '━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n';

            console.log(output);
        }

        process.exit(0);
    } catch (err) {
        console.error('Error in skill-activation-prompt hook:', err);
        process.exit(1);
    }
}

main().catch(err => {
    console.error('Uncaught error:', err);
    process.exit(1);
});

Install dependencies:

cd .claude/hooks
npm init -y  # Creates package.json if you don't have one
npm install tsx  # Required to run TypeScript files

Step 3: Configure Skill Triggers

Create .claude/skills/skill-rules.json:

{
  "version": "1.0",
  "skills": {
    "valibot-usage": {
      "type": "domain",
      "enforcement": "suggest",
      "priority": "high",
      "description": "Valibot schema validation",
      "promptTriggers": {
        "keywords": [
          "validate",
          "validation",
          "schema",
          "type check",
          "verify input",
          "validate data",
          "parse request"
        ]
      },
      "fileTriggers": {
        "pathPatterns": [
          "**/*.ts",
          "**/*.js"
        ]
      }
    }
  }
}

This file tells the hook when to suggest the valibot-usage skill. When you type words like "validate" or "schema" while editing TypeScript or JavaScript files, the hook detects it and suggests the valibot-usage skill.

Each skill you want to auto-trigger needs an entry in this file. Right now we only have valibot-usage, but you can add more skills later by adding more entries under "skills". For example, if you had a React patterns skill, you'd add another entry for "react-patterns" with its own keywords and file patterns.
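A hypothetical react-patterns entry, added alongside valibot-usage under "skills", might look like this (the keywords and patterns are illustrative):

```json
"react-patterns": {
  "type": "domain",
  "enforcement": "suggest",
  "priority": "medium",
  "description": "React component patterns",
  "promptTriggers": {
    "keywords": ["component", "custom hook", "useState", "re-render"]
  },
  "fileTriggers": {
    "pathPatterns": ["**/*.tsx", "**/*.jsx"]
  }
}
```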

Step 4: Activate the Hook

Add this to your .claude/settings.json:

{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "$CLAUDE_PROJECT_DIR/.claude/hooks/skill-activation-prompt.sh"
          }
        ]
      }
    ]
  }
}

If you already have a settings.json with other hooks, just add the UserPromptSubmit section.
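For instance, if your settings.json already contained a (hypothetical) PostToolUse hook, the merged file would look like this; the format-on-save.sh name is illustrative:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "$CLAUDE_PROJECT_DIR/.claude/hooks/format-on-save.sh"
          }
        ]
      }
    ],
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "$CLAUDE_PROJECT_DIR/.claude/hooks/skill-activation-prompt.sh"
          }
        ]
      }
    ]
  }
}
```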

How It Works

Before:

You: "validate this user input"
Claude: *writes manual if-statement validation*

After:

You: "validate this user input"
Hook: detects "validate" + .ts file
Claude: RECOMMENDED SKILLS:
  valibot-usage

ACTION: Use Skill tool BEFORE responding
You: "yes"
Claude: uses valibot-usage skill to generate schema

One important thing to know: The hook doesn't automatically load skills for you. It detects when a skill is relevant and tells Claude about it, but Claude will still ask you "Should I load the Valibot-usage skill?" before actually using it. You have to say yes.

This is by design. The hook handles the detection part automatically so you don't have to remember which skill to use, but you still get final say on whether to load it.

Now when you're building an API endpoint and ask Claude to validate the request body, it uses Valibot's composable validators, guided by the valibot-usage skill, instead of writing manual checks. The validation logic stays readable even as your schemas get more complex.


Fix 2: Forced Skill Evaluation

This method also uses the UserPromptSubmit hook. But instead of just checking rules quietly, it forces Claude to evaluate each skill before writing any code. Claude has to commit to a skill decision instead of skipping straight to implementation.

How Forced Skill Evaluation Works

When you type a prompt like "validate this API request," Claude goes through three steps:

a. Evaluate
Lists every skill and decides YES or NO with a reason:

valibot-usage: YES - need schema validation
react-patterns: NO - not building UI

b. Activate
Calls the skill for each YES decision:

Skill(valibot-usage)

c. Implement
Writes the code after committing to the skill decisions.

This fix makes Claude stop and evaluate each skill before touching any code. It can't skip straight to implementation.

Using words like MANDATORY or CRITICAL makes it harder for Claude to skip the activation step.


How to Set It Up

Create .claude/hooks/skill-forced-eval.sh:

#!/bin/bash
# UserPromptSubmit hook that forces explicit skill evaluation
#
# This hook requires Claude to explicitly evaluate each available skill
# before proceeding with implementation.
#
# Installation: Copy to .claude/hooks/UserPromptSubmit

cat <<'EOF'
INSTRUCTION: MANDATORY SKILL ACTIVATION SEQUENCE

Step 1 - EVALUATE (do this in your response):
For each skill in <available_skills>, state: [skill-name] - YES/NO - [reason]

Step 2 - ACTIVATE (do this immediately after Step 1):
IF any skills are YES → Use Skill(skill-name) tool for EACH relevant skill NOW
IF no skills are YES → State "No skills needed" and proceed

Step 3 - IMPLEMENT:
Only after Step 2 is complete, proceed with implementation.

CRITICAL: You MUST call Skill() tool in Step 2. Do NOT skip to implementation.
The evaluation (Step 1) is WORTHLESS unless you ACTIVATE (Step 2) the skills.

Example of correct sequence:
- research: NO - not a research task
- Valibot-usage: YES - need schema validation 

[Then IMMEDIATELY use Skill() tool:]
> Skill(Valibot-usage)

[THEN and ONLY THEN start implementation]
EOF

Make it executable:

chmod +x .claude/hooks/skill-forced-eval.sh

Add to .claude/settings.json:

{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": ".claude/hooks/skill-forced-eval.sh"
          }
        ]
      }
    ]
  }
}

Claude writes down the decision, then has to follow through. Once it says "YES - need schema validation", it's locked in. It has to activate the skill before coding.

It's more verbose though. You'll see Claude list every skill before it starts. But it works because Claude has to commit to the decision in writing before implementing anything.


Bonus: Settings Tweak Worth Testing

There's one more approach that's simpler than hooks but less consistent. If you don't want to deal with hook files, you can add instructions directly to Claude Code's settings.

Go to Settings > Custom Instructions and add this:

MANDATORY: Before responding to ANY prompt, you MUST:
1. Check ALL available skills in <available_skills>
2. Identify which skills apply to this prompt
3. Use Skill(skill-name) for EACH applicable skill
4. ONLY THEN start your response

Do NOT skip skill activation. Do NOT proceed without checking skills.
This is NON-NEGOTIABLE for every single prompt.

The aggressive language ("MANDATORY", "NON-NEGOTIABLE") makes it harder for Claude to ignore. It becomes part of Claude's system context on every interaction.

This works, but it's not as reliable as the hook approaches because custom instructions sit in the background. Claude can still override them when it thinks it knows better, whereas hooks force the behavior by injecting it into every prompt.

Which Method Should You Use?

The real question isn't which method has better pros and cons; it's how you work and what you're building.

Use Detection Hook (Fix 1) when:

You're managing multiple skills across different contexts

If you have 5+ skills (validation, React patterns, database queries, API design, testing), Fix 1 scales better. The keyword-based detection in skill-rules.json lets you map different triggers to different skills without forcing Claude to evaluate everything every time.

Example: Your prompt mentions "validate user input" while editing a TypeScript API route. The hook detects validate + .ts file context and suggests only the Valibot skill—not your React patterns or database skills. This selective activation keeps Claude focused.

Your skills have clear linguistic or contextual triggers

Fix 1 works best when you can define precise activation conditions. If you can articulate "when the user says X while editing Y file type, suggest Z skill," the detection hook will reliably fire.

The JSON configuration gives you fine-grained control:

  • Keywords: ["validate", "schema", "parse request"]
  • Intent patterns: Regex matching like "verify.*input" or "check.*data"
  • File patterns: **/*.ts, **/api/**/*.js
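The first two trigger types combine as a keyword pass followed by a regex pass, mirroring the hook's matching logic. A standalone sketch (trigger values are illustrative):

```typescript
// Trigger rules as they'd appear for one skill in skill-rules.json.
const triggers = {
  keywords: ['validate', 'schema', 'parse request'],
  intentPatterns: ['verify.*input', 'check.*data'],
};

// Keywords are checked first via substring match; intent patterns
// are only consulted when no keyword hits.
function matches(prompt: string): 'keyword' | 'intent' | null {
  const p = prompt.toLowerCase();
  if (triggers.keywords.some(kw => p.includes(kw))) return 'keyword';
  if (triggers.intentPatterns.some(rx => new RegExp(rx, 'i').test(p))) return 'intent';
  return null;
}

console.log(matches('Please validate this payload'));   // keyword
console.log(matches('Can you verify the user input?')); // intent
console.log(matches('add a console.log here'));         // null
```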

You want to preserve Claude's autonomy for simple tasks

The detection hook suggests skills but still asks permission. This matters when you're doing something straightforward that doesn't need skill intervention. You type "add a console.log here" and no skill triggers; Claude just does it without any evaluation overhead.

You're willing to invest setup time for long-term efficiency

Fix 1 requires more initial work: shell script, TypeScript logic, JSON configuration, npm dependencies. But once it's set up, adding new skills is just editing one JSON file. If you're building a skill library for a team or long-term project, this upfront cost pays off.

Use Forced Evaluation (Fix 2) when:

You have 2-3 critical skills that must never be forgotten

If your project relies on a small set of non-negotiable patterns, say a Valibot validation skill, a custom auth pattern, and a logging standard, Fix 2 guarantees Claude considers them every time. The three-step evaluation sequence (Evaluate, Activate, Implement) makes skill usage mandatory, not optional.

Your skills apply broadly but Claude keeps missing them

Some skills don't have obvious keyword triggers. A "clean code principles" skill or "error handling patterns" skill might be relevant to almost any code task, but there's no single word that captures when to use them. Fix 2 solves this by making Claude explicitly evaluate every skill regardless of the prompt content.

You're okay with verbosity for reliability

Every prompt gets the full evaluation list. If you ask Claude to "fix a typo," it still lists every skill before doing anything. This is the tradeoff. 100% reliability in exchange for seeing the evaluation process every time. If you're working on something where correctness matters more than speed, this is worth it.

Use Custom Instructions (Bonus) when:

You're testing whether skill activation is even your bottleneck

Before investing in hooks, spend 10 minutes adding the instruction to settings. If Claude starts consistently using skills, all you needed was a gentle reminder, not a technical fix. If it doesn't work, you've learned that you need the stronger mechanisms.

You have one dominant skill that should almost always load

If 90% of your work involves one skill (like a specific framework pattern), custom instructions can work as a quick solution. "Always check if the React-patterns skill applies" is simpler than setting up hooks for a single skill.

Your project is temporary or exploratory

Prototyping for a few days? Building a proof-of-concept? Custom instructions give you skill activation without committing to infrastructure. You can remove it just as easily when the project ends.


Conclusion

Claude Code skills have the potential to solve a lot of the AI “slop” that happens when a model guesses or skips steps. Their real power comes when they trigger autonomously, reliably, and in the right context.

With the fixes we’ve covered, using the UserPromptSubmit hook for detection and forced evaluation, you can guide Claude to activate skills consistently, without manual intervention. That means fewer mistakes, cleaner code, and more predictable results.

Skills don’t just exist to sit in the background; when set up right, they become a trusted part of your AI workflow, letting Claude handle complex tasks like validation, type inference, or any custom logic you design.

Autonomous skills aren’t just convenient, they’re the key to unlocking Claude’s full potential.


Quick Troubleshooting

tsx: command not found
Run: npm install tsx in .claude/hooks/

Hook not triggering
Check: chmod +x skill-activation-prompt.sh and restart Claude Code

skill-rules.json not found

Verify path: .claude/skills/skill-rules.json (not .claude/hooks/)


Shoutouts & References

The hook system used in fix 1 was built by diet103 (check his full code infrastructure for more).

The hook system used in fix 2 is based on research by Scott Spence.
