DEV Community

eddylee

Why I Built a 4,000-Line Agent Skill Instead of Another npm Package

The Problem

I use Claude Code (and sometimes Cursor) for frontend work every day. And every day, I fix the same mistakes:

// AI generates this
const user: User = await res.json()

Looks fine. TypeScript is happy. But the annotation is an unchecked assertion: res.json() resolves to any, so if the API changes shape, this breaks silently in production.

// AI also loves this
const [isLoading, setIsLoading] = useState(false)
const [error, setError] = useState<Error | null>(null)
const [data, setData] = useState<User | null>(null)

Three separate pieces of state that can represent impossible combinations. isLoading: true AND data present? error set but isLoading still true?

And my personal favorite:

'use client' // slapped on the page component

export default function ProductPage() {
  // ...entire page is now client-rendered
}
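The usual fix is to keep the page a Server Component and push 'use client' down to the one interactive leaf. Here is a minimal sketch of that split (my addition, not from the skill; file names are hypothetical, and React imports are omitted by modeling components as plain string-returning functions so the boundary logic is visible on its own):

```typescript
// components/add-to-cart.tsx — the ONLY file that would need 'use client',
// because only this subtree uses state and event handlers.
function AddToCart(props: { productId: string }): string {
  // in a real app: useState, onClick, etc. — only this part ships client JS
  return `[client] AddToCart(${props.productId})`
}

// app/products/[id]/page.tsx — no 'use client': stays a Server Component,
// so data fetching and the rest of the page render on the server.
function ProductPage(props: { id: string }): string {
  return `[server] ProductPage -> ${AddToCart({ productId: props.id })}`
}
```

The point is the direction of the boundary: the directive marks where client rendering starts, so the lower it sits in the tree, the less of the page it drags along.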

These aren't obscure edge cases. They happen constantly because AI agents don't have a structured reference for frontend TypeScript patterns.

Why Not Just Fix It Each Time?

I did. For months. Then I realized:

  • I'm correcting the same patterns over and over
  • My corrections aren't persisted between sessions
  • Every new conversation starts from zero

I needed something the agent could read before generating code — not a tutorial I'd paste into chat, but a structured reference it would consult automatically.

What I Built

typescript-react-patterns — an Agent Skill for Claude Code, Cursor, Codex, and any AI tool that reads SKILL.md.

17 files. 4,000+ lines. Three directories:

typescript-react-patterns/
├── SKILL.md              ← Hub: agent rules, decision guide, checklists
├── rules/                ← 11 pattern modules
│   ├── typescript-core.md
│   ├── react-typescript-patterns.md
│   ├── nextjs-typescript.md
│   ├── component-patterns.md
│   ├── data-fetching-and-api-types.md
│   ├── forms-and-validation.md
│   ├── state-management.md
│   ├── performance-and-accessibility.md
│   ├── debugging-checklists.md
│   ├── code-review-rules.md
│   └── anti-patterns.md
└── playbooks/            ← 3 debugging guides
    ├── type-error-debugging.md
    ├── hydration-issues.md
    └── effect-dependency-bugs.md

What Makes This Different

I've seen a lot of agent skills. Most are collections of code snippets. This one is designed as decision support — helping the agent choose the right pattern, not just showing patterns.

1. Agent Behavior Rules

The skill starts by telling the agent what to check before writing any code:

  • Is this server or client code?
  • Is runtime validation needed? (Yes, if data comes from outside the app)
  • What Next.js version? (params is a Promise in 15+)
  • What assumptions must NOT be made?

2. Decision Flowcharts

Not just "here's a pattern" — but "when to use which":

Is this data from a server/API?
├─ Yes → TanStack Query (NOT useState)
└─ No → Is it shareable via URL?
   ├─ Yes → searchParams
   └─ No → How many components need it?
      ├─ 1 → useState
      └─ Many → Zustand with selectors

3. Rule Classification

Every rule is labeled:

  • [HARD RULE] — Violating causes bugs. No exceptions. "Validate API responses at runtime."
  • [DEFAULT] — Recommended unless you have a documented reason. "Use interface for Props."
  • [SITUATIONAL] — Depends on context. "Polymorphic components — only for design-system foundations."

4. Before/After That Actually Matter

Not toy examples. Real frontend scenarios:

API typing:

// ❌ Before
const user: User = await res.json()

// ✅ After
const userSchema = z.object({
  id: z.string(),
  name: z.string(),
  email: z.string().email(),
})
type User = z.infer<typeof userSchema>
const user = userSchema.parse(await res.json())

Loading state:

// ❌ Before — impossible states are representable
const [isLoading, setIsLoading] = useState(false)
const [error, setError] = useState(null)
const [data, setData] = useState(null)

// ✅ After — impossible states are unrepresentable
type State<T> =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'success'; data: T }
  | { status: 'error'; error: Error }
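A quick usage sketch (my addition, not from the skill's files): the discriminated union pays off when consumed with a switch, where TypeScript narrows each branch and a missed status becomes a compile error. The State type is repeated here so the snippet stands alone:

```typescript
type State<T> =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'success'; data: T }
  | { status: 'error'; error: Error }

function describe<T>(state: State<T>): string {
  switch (state.status) {
    case 'idle':
      return 'waiting'
    case 'loading':
      return 'loading'
    case 'success':
      return `got ${JSON.stringify(state.data)}` // state.data only exists here
    case 'error':
      return `failed: ${state.error.message}`    // state.error only exists here
  }
}
```

Because the switch is exhaustive, adding a new status later makes this function stop compiling until every consumer handles it.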

5. Debugging Playbooks

When something goes wrong, the agent has step-by-step diagnosis guides:

  • Type errors: Read bottom-up, classify, check common React/Next.js-specific errors
  • Hydration mismatches: Flowchart from symptom to fix (useEffect vs dynamic vs Suspense)
  • useEffect bugs: Infinite loops (unstable deps), stale closures (captured state), missing cleanup
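The stale-closure case is worth a plain-TypeScript illustration (my sketch, no React involved): a function value captures what it saw when it was created, which is exactly why an effect callback with a missing dependency keeps reading old state:

```typescript
// A closure freezes the value it was created with.
function makeGetter(value: number): () => number {
  return () => value // captures the argument as it was at creation time
}

let current = 0
const getStale = makeGetter(current) // snapshot of 0
current = 5
// getStale() still returns 0. An effect callback that omits a state value
// from its dependency array behaves the same way: it keeps the snapshot
// from the render in which it was created.
```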

6. Code Review Heuristics

The skill distinguishes risk (flag it) from preference (mention it, don't block):

Risk: as on API data, useEffect with object deps, server-only import in client component

Preference: type vs interface, handler naming convention, import ordering

How to Use It

One command:

git clone https://github.com/leejpsd/typescript-react-patterns ~/.claude/skills/typescript-react-patterns

The agent reads SKILL.md automatically and consults the relevant rules/ file based on context. If you're working on a form, it reads forms-and-validation.md. If there's a type error, it reads playbooks/type-error-debugging.md.
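For readers new to the format: a SKILL.md conventionally opens with YAML frontmatter that names the skill and tells the agent when to load it. The values below are a hypothetical sketch, not the repo's actual file:

```markdown
---
name: typescript-react-patterns
description: TypeScript and React/Next.js pattern reference. Use when writing
  or reviewing frontend TypeScript code.
---

# typescript-react-patterns
...
```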

Works with Claude Code, Cursor, Codex, Gemini CLI — anything that reads the SKILL.md format.

What's Covered

  • TypeScript Core: Narrowing, generics, utility types, as const, satisfies, unknown vs any
  • React Patterns: Props, children, events, hooks, context, forwardRef, generic components
  • Next.js: App Router params, Server Actions, RSC boundaries, Edge, useOptimistic
  • Component Patterns: Discriminated Props, compound components, modal/dialog, polymorphic
  • Data Fetching: Zod validation, TanStack Query, Result<T,E>, pagination, error handling
  • Forms: react-hook-form + Zod, Server Actions, multi-step checkout example
  • State Management: Decision matrix, Zustand (+ middleware), Context, URL state
  • Performance & A11y: Memoization tradeoffs, focus management, aria-live, keyboard navigation
  • Anti-patterns: 12 mistakes with symptoms, root causes, and fixes
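Since the data-fetching module mentions Result<T,E>, here is a minimal sketch of that shape (my addition; the skill's actual definition may differ). The idea is that errors become ordinary values, so the unhappy path cannot be forgotten:

```typescript
// Errors as values: callers must check `ok` before touching `value`.
type Result<T, E = Error> =
  | { ok: true; value: T }
  | { ok: false; error: E }

function safeJsonParse(text: string): Result<unknown, SyntaxError> {
  try {
    return { ok: true, value: JSON.parse(text) }
  } catch (e) {
    return { ok: false, error: e as SyntaxError }
  }
}
```

Usage mirrors the State<T> union above: narrowing on `ok` gives you either `value` or `error`, never both, and never an unhandled throw.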

What I Learned Building This

1. Structure matters more than volume.

Early versions had more files but less structure. The current version has fewer, denser modules with a consistent template: Scope → Key Rules → Examples → Anti-patterns → Review Checklist.

2. Agent skills need decision support, not just examples.

Showing 10 patterns is less useful than helping the agent choose between them. Flowcharts and decision matrices are more valuable than code snippets.

3. Classify your rules.

[HARD RULE] vs [DEFAULT] vs [SITUATIONAL] changed everything. The agent stops treating every guideline as absolute.

4. Cross-references prevent duplication.

Every file has See also links. The agent can navigate between modules without each file repeating everything.

Contributing

The skill is MIT-licensed and PRs are welcome. Priority areas:

  • Testing patterns (Vitest, Testing Library)
  • Internationalization typing
  • More debugging playbooks
  • Accessibility deep dive

If you try it and something is wrong or missing, open an issue. This is built to be iterated on.

Links

If this is useful, a ⭐ on GitHub helps others find it.

Top comments (4)

Jonathan Murray

appreciate the reasoning here. 4000 lines makes sense if the skill is doing real orchestration work rather than just wrapping an API. one honest question though: what happens when the platform changes the skill interface? feels like you've traded npm version hell for platform update hell, curious if that's shown up yet

eddylee

I do see the platform dependency as a real trade-off.
That said, this skill is more of a structured reference (SKILL.md) than executable code, so even if the interface changes, the core content remains reusable.

I also tried to avoid locking it into a single platform and instead keep it compatible across multiple agents (Claude, Cursor, etc.).

So far, I haven’t run into issues caused by platform changes, but it’s definitely something I’m keeping an eye on. Appreciate you calling that out.

Apex Stack

The rule classification system ([HARD RULE] vs [DEFAULT] vs [SITUATIONAL]) is the insight that separates this from 90% of agent skills I've seen. Most skills treat every guideline with equal weight, which means the agent either follows everything rigidly or ignores nuance entirely. Giving the agent explicit permission to deviate on [SITUATIONAL] rules while holding firm on [HARD RULE] ones mirrors how experienced developers actually think about code quality.

The decision flowcharts are the other standout piece. I've been building skills for various domains (SEO auditing, content pipelines, financial analysis) and the pattern I keep rediscovering is exactly what you describe — showing 10 code examples is less useful than one well-structured decision tree. When I switched from "here are patterns" to "here's how to choose between patterns," the output quality jumped noticeably.

One thing I'd add from my experience: a brief "context gathering" section at the top of SKILL.md that tells the agent what questions to ask before writing any code. Something like "Before generating: check package.json for Next.js version, check if Zod is already a dependency, identify server vs client boundary." This front-loads the context that prevents the most common mistakes rather than catching them in review.

The cross-referencing strategy is smart too. Skill files that try to be self-contained end up massive and repetitive. Modular with clear "See also" links is the right architecture for anything above ~500 lines.

Definitely starring this — the TypeScript frontend space needed a skill at this level of depth.

eddylee

Really appreciate the thoughtful feedback 🙏
The context gathering section is a great point — preventing mistakes upfront makes a lot of sense. I’ll definitely try adding that.
Glad the decision-tree approach resonated with you as well. Thanks again!