This article is distilled from guidance by Lavi Nigam (a Google ADK Agent Engineering expert) on Agent Skill design, summarizing 5 production-ready SKILL.md design patterns that help developers reduce token waste and improve skill quality.
Background: Why Do We Need Skill Design Patterns?
In AI Native product development, the root cause of massive token waste is twofold: forcing the model to repeatedly guess user intent that should be clearly specified, and expressing with complex instructions what could be expressed simply.
With structured SKILL.md design patterns, we can:
- Reduce the model's "guessing" cost and trigger the right behavior precisely
- Standardize how skills are written, reducing friction in team collaboration
- Use progressive knowledge loading to dramatically cut token consumption
- Let the Agent activate the right skill at the right time
Overview of the 5 Patterns
Pattern 1: Tool Wrapper
Core idea: SKILL.md uses `load_skill_resource` to load spec files from `references/`. The Agent applies those rules and instantly becomes a domain expert. No scripts, no templates — pure knowledge encapsulation.
How It Works
File Structure
my-skill/
├── SKILL.md # Trigger keywords + load instructions (no scripts, no templates)
└── references/
└── conventions.md # Conventions, rules, best practices
Use Cases
- FastAPI routing conventions and response model specs
- Terraform resource naming and module patterns
- PostgreSQL query optimization best practices
- Internal API design guidelines
SKILL.md Example
---
name: fastapi-expert
description: Helps write code following FastAPI best practices. Triggered when the user needs to write FastAPI code, routes, or dependency injection.
---
# FastAPI Expert Mode
## Activation Rules
Load reference doc: `references/fastapi-conventions.md`
Ensure all generated FastAPI code follows the conventions defined in that document.
Key point: The `description` field is the Agent's search index. It must contain specific business keywords; if it is too vague, the skill won't be triggered correctly.
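For illustration, the referenced conventions file might look like the following. This is a hypothetical excerpt, not from the original article; the rules themselves are placeholders for whatever your team actually standardizes on:

```md
# FastAPI Conventions

## Routing
- Prefix all routes with an API version, e.g. `/api/v1/...`
- Define one `APIRouter` per resource and mount it in `main.py`

## Response Models
- Every route declares `response_model` explicitly
- Return pydantic models, never raw dicts

## Dependency Injection
- Shared resources (DB sessions, settings) go through `Depends()`
```

The skill stays small because all of this domain knowledge lives in the reference file, loaded only when the skill activates.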
Pattern 2: Generator
Core idea: The assets/ template determines "what structure to output," while the references/ style guide controls "how to write it." Templates enforce structure; the style guide ensures quality.
How It Works
File Structure
my-skill/
├── SKILL.md
├── assets/
│ └── report-template.md # Output structure template (required sections)
└── references/
└── style-guide.md # Tone and formatting style guide
Use Cases
- Technical analysis reports
- API reference documentation
- Standardized Git commit messages
- Agent scaffold code
- Weekly / monthly report templates
Choose the Generator pattern when structural consistency matters more than creativity.
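A minimal Generator SKILL.md might look like this (an illustrative sketch; the skill name, file names, and wording are assumptions, not from the original):

```md
---
name: report-generator
description: Generates technical analysis reports with a fixed section structure. Triggered when the user asks for an analysis report or write-up.
---
# Report Generator

## Activation Rules
1. Load the structure template: `assets/report-template.md`
2. Load the style guide: `references/style-guide.md`
3. Fill every required section of the template; do not add or remove sections.
4. Match the tone and formatting rules defined in the style guide.
```

Note how the two files divide responsibility exactly as described above: the template constrains structure, the style guide constrains voice.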
Pattern 3: Reviewer
Core idea: Separate "what to check" (checklist) from "how to check" (review protocol). Swap the checklist file in references/ and you get an entirely different review type using the same skill structure.
How It Works
File Structure
my-skill/
├── SKILL.md # Review protocol (how to check: load, apply, report)
└── references/
└── review-checklist.md # Checklist (what to check: rules grouped by severity)
Output Format
Review findings should be grouped by severity:
| Level | Meaning | Examples |
|---|---|---|
| ❌ Error | Must fix — affects functionality or security | SQL injection, unhandled exceptions |
| ⚠️ Warning | Should fix — affects quality | Missing type hints, naming violations |
| ℹ️ Info | Optional improvement | Incomplete comments, extractable logic |
Use Cases
Code Review
- Python type annotation checks
- Exception handling completeness
- Function complexity assessment
Security Audit
- OWASP Top 10 checks
- Hardcoded secrets detection
- SQL injection risk
Content Review
- Technical documentation formatting
- Tone consistency
- Terminology usage
API Documentation Review
- Parameter description completeness
- Code example correctness
- Error code coverage
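Putting the protocol and checklist split together, a Reviewer SKILL.md could be sketched like this (hypothetical names and wording, shown only to make the separation concrete):

```md
---
name: python-code-reviewer
description: Reviews Python code for type annotations, exception handling, and complexity. Triggered when the user asks for a code review.
---
# Python Code Review Protocol

1. Load the checklist: `references/review-checklist.md`
2. Check the submitted code against every rule in the checklist.
3. Group findings by severity: ❌ Error, ⚠️ Warning, ℹ️ Info.
4. For each finding, cite the violated rule, the offending location, and a suggested fix.
```

Swapping `review-checklist.md` for an OWASP checklist turns this same skill structure into a security audit, with no change to the protocol.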
Pattern 4: Inversion
Core idea: Flip the Agent's conversational role. The Agent asks questions first and the user answers; the skill drives the dialogue. This effectively prevents the Agent from making blind assumptions and reduces wasted output.
Three-Phase Flow
Key Control Directive
Must be written explicitly in SKILL.md:
DO NOT start building until all phases are complete.
Use Cases
- Project requirements gathering
- System fault diagnosis guidance
- Infrastructure configuration wizard
- Pre-report information collection for custom outputs
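An Inversion SKILL.md could be sketched as follows. The three phase names here are illustrative assumptions (the original shows the flow as a diagram); only the closing control directive is quoted from the pattern itself:

```md
---
name: requirements-gatherer
description: Gathers project requirements through guided questions before any work begins. Triggered when the user wants to start a new project.
---
# Requirements Gathering

## Phase 1: Ask
Ask the user about goals, constraints, and success criteria, one topic at a time.

## Phase 2: Confirm
Summarize the answers and ask the user to confirm or correct them.

## Phase 3: Proceed
**DO NOT start building until all phases are complete.**
```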
Pattern 5: Pipeline
Core idea: Define a strictly ordered multi-step workflow with explicit Gate Conditions between steps. Gates prevent skipping validation steps.
How It Works
File Structure
my-skill/
├── SKILL.md # Step definitions + gate control logic
├── references/ # Reference specs for each step
├── assets/ # Output templates
└── scripts/ # Automation scripts (optional)
Gate Control Template
## Step N: [Step Name]
[Step description]
*Gate: Do NOT proceed to Step N+1 until [condition]!*
*If any step is skipped or fails, do not continue.*
Use Cases
- Code documentation (parse → user confirms → generate → quality check)
- Data cleaning and processing pipelines
- Code deployment workflow (review → test → release → verify)
- Multi-step approval processes
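Filling in the gate template above for the code-documentation use case, one concrete step might read (illustrative only; step numbering and wording are assumptions):

```md
## Step 2: User Confirms the Parse Results
Present the parsed list of modules and functions to the user.

*Gate: Do NOT proceed to Step 3 until the user explicitly approves the list!*
*If any step is skipped or fails, do not continue.*
```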
How to Choose the Right Pattern
Quick Decision Guide
| Need | Recommended Pattern | Complexity | Decision Criteria |
|---|---|---|---|
| Give the Agent expert knowledge of a specific library/tool | 📖 Tool Wrapper | Low | Just need to encapsulate conventions — no fixed output format required |
| Ensure output structure is always consistent | 📝 Generator | Medium | Fixed section/format template; structure consistency > creativity |
| Evaluate/score existing content against a standard | ✅ Reviewer | Medium | Task resembles "grade against a rubric"; output grouped by severity |
| Prevent blind assumptions — must gather context first | ❓ Inversion | Medium | Agent needs user specifics before it can begin |
| Execute a strict ordered multi-step flow with gates | 🔄 Pipeline | High | Steps have dependencies; order is critical; user confirmation required mid-flow |
Decision Tree
Real-World Case: E-Commerce Product Selection Pipeline
Scenario
Design a pipeline combining Inversion + Reviewer + Generator to cover the full loop from requirements gathering to final product selection report.
Directory Structure
ecommerce-product-selector/
├── SKILL.md # Main instruction file (Pipeline control)
├── references/
│ └── product-evaluation-checklist.md # Product evaluation criteria (Reviewer pattern)
└── assets/
└── selection-report-template.md # Final report template (Generator pattern)
Complete SKILL.md Example
---
name: ecommerce-product-selector
description: Helps with e-commerce product selection, market analysis, profit calculation, and generating standard selection reports. Triggered when the user needs to find or evaluate products.
metadata:
pattern: Pipeline
domain: E-commerce
---
# E-Commerce Product Selection Workflow
You are a professional e-commerce product selection expert. Follow the steps below strictly in order.
**Core rule: Do NOT start generating a product selection plan until all phases are complete!**
If any step is skipped or fails, do not continue.
## Step 1: Gather Requirements (Inversion Pattern)
Actively ask the user to collect product selection context:
1. Who is the target audience?
2. What is the budget and expected profit margin?
3. Are there specific category preferences or supply chain advantages?
*Gate: Wait for the user to answer all questions before proceeding to Step 2.*
## Step 2: Evaluate Products (Reviewer Pattern)
Load and apply evaluation criteria:
- Load checklist: `references/product-evaluation-checklist.md`
- Score products against the checklist (❌ Fatal flaw / ⚠️ Risk / ✅ Advantage)
## Step 3: User Confirmation (Pipeline Gate)
Present the preliminary findings from Step 2 to the user.
*Gate: Do NOT proceed to Step 4 until the user explicitly confirms!*
## Step 4: Generate Final Report (Generator Pattern)
After user confirmation, produce the formal report:
- Load template: `assets/selection-report-template.md`
- Fill in the collected requirements and evaluation results, strictly following the template's section structure
Key Design Notes
Precise Description Triggering
The description field is the Agent's search index. Including specific business keywords ("product selection," "profit calculation") ensures the skill is activated at the right moment — avoiding both false triggers and missed triggers.
Progressive Knowledge Loading
The Agent initially loads only ~100 tokens for the skill description. The evaluation checklist in references/ and the report template in assets/ are loaded only when the workflow reaches the corresponding step, significantly saving context window space.
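The progressive-loading behavior can be sketched as a small Python class. This is a conceptual sketch, not a real framework API; `Skill` and `load_resource` are hypothetical names chosen for illustration:

```python
from pathlib import Path


class Skill:
    """Lazy-loading skill: only the short description sits in context
    upfront; references/ and assets/ files are read when a step needs
    them, then cached so repeated steps pay no extra cost."""

    def __init__(self, root: str, description: str):
        self.root = Path(root)
        self.description = description  # ~100 tokens, always in context
        self._cache: dict[str, str] = {}

    def load_resource(self, relative_path: str) -> str:
        # Read a references/ or assets/ file only on first use.
        if relative_path not in self._cache:
            self._cache[relative_path] = (self.root / relative_path).read_text()
        return self._cache[relative_path]
```

Until `load_resource` is called, the context holds only the description, which is what keeps skill activation cheap.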
Advanced Tips
Start Simple
If you're unsure which pattern to pick, start with the simplest: Tool Wrapper. Encapsulate your team's conventions first. Upgrade to Generator when you need structured output, and to Reviewer when you need evaluation capabilities.
Pattern Combination Best Practices
Production systems typically combine 2–3 patterns. Common pairings:
| Combination | Typical Scenario |
|---|---|
| Pipeline + Reviewer | Quality control gate at the end of a workflow |
| Generator + Inversion | Collect user context first, then generate a custom report |
| Pipeline + Inversion + Generator | Full end-to-end business flow (like the product selection case above) |
| Tool Wrapper + Generator | Generate code or docs following expert conventions |
Progressive Knowledge Loading
| Load Timing | Content Loaded | Token Cost |
|---|---|---|
| Skill activation | SKILL.md description + base instructions | ~100 tokens (minimal) |
| On reaching a step | `references/` checklist or spec doc | On-demand; avoids upfront cost |
| Generation phase | `assets/` output template | Loaded only when needed |






