Reymond

Posted on • Originally published at dev-reymond.vercel.app

How I Set Up My Claude Workflow (Spec-Driven and Easy to Follow)

I used to jump straight into code with Claude. It felt fast, but I kept paying for it later: missed edge cases, messy commits, and "what changed?" moments during review.

So I rebuilt my setup around one idea:

Slow down before coding, so coding goes faster.

This post explains exactly how I set up my workflow from my init.md template, why each part exists, and what alternatives I might try later.

The Workflow in One Picture

User request
   |
   v
spec-architect (creates task spec)
   |
   v
Human approval (required)
   |
   v
agent-router (chooses the right specialist)
   |
   v
specialist agent(s) implement only inside scope
   |
   v
validation (npm run verify)
   |
   v
single-line commit + delete finished spec
   |
   v
PR reviewer checks full branch diff

If you only remember one thing, remember this:

  • No spec, no code.

Why I Changed My Old Approach

My old approach was "ask Claude, get code, patch later." It works for small one-offs, but on real projects it breaks down:

  • Context drift: the assistant forgets intent across long sessions.
  • Scope creep: "small fix" turns into touching five unrelated files.
  • Weak handoffs: if I return the next day, I lose the reasoning trail.
  • Review pain: reviewers see diffs, but not the decision process.

The new setup solves that by forcing an explicit path from request to commit.

The Core Pieces I Set Up

My template has 11 setup tasks. In plain language, they boil down to six pillars:

  1. A project contract (CLAUDE.md)
  2. Agent roles (architect, router, specialists, reviewer)
  3. A strict spec template
  4. Behavioral playbook (.claude/CLAUDE.md)
  5. Hard enforcement (permissions + hooks)
  6. Memory that persists across sessions

1) Project Contract (CLAUDE.md)

This file tells Claude:

  • which commands exist (dev, test, verify)
  • architecture boundaries
  • routing table for domains
  • commit policy and safety rules

I treat this like an engineering contract, not notes.
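To make that concrete, here is a trimmed, illustrative excerpt — the exact sections, commands, and routing entries vary by project, so treat this as a sketch rather than my literal file:

```
# CLAUDE.md (excerpt)

## Commands
- npm run dev      # local dev server
- npm test         # unit tests
- npm run verify   # full validation gate (lint + types + tests)

## Routing
- api/*      -> backend-specialist
- src/admin/* -> admin-domain-specialist

## Commit policy
- one spec = one commit
- single-line message: type(scope): description
- never: git add ., --no-verify, force push
```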

2) Agent Roles Instead of One "Do Everything" Assistant

I split responsibilities into clear workers:

  • spec-architect: turns request into atomic tasks (<= 30 min each)
  • agent-router: dispatches to the right domain specialist
  • specialists: implement only inside owned paths
  • pr-reviewer: reviews full branch diff at the end

This prevents one agent from improvising across the whole repo.

3) Spec Template (the Unit of Work)

Each change gets a TASK-YYYY-MM-DD-###.spec.md file with fields like:

  • goal
  • scope_in
  • scope_out
  • constraints
  • validation
  • status

A spec is tiny, explicit, and reviewable.
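For illustration, a filled-in spec might look like this (the task, paths, and constraints are made up):

```
# TASK-2026-05-04-001.spec.md
goal: add retry logic to the scraper API client
scope_in:
  - src/lib/scraper/client.ts
scope_out:
  - admin UI, auth, billing
constraints:
  - max 3 retries, exponential backoff, no new dependencies
validation: npm run verify
status: draft   # draft -> approved -> done
```

Approval (Step 2 below) is simply flipping status from draft to approved.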

4) Behavioral Playbook

I keep a second file (.claude/CLAUDE.md) that says what to do in real time:

  • start protocol
  • pre-action gate
  • exact sequence for code-change requests
  • what counts as a workflow violation

Think of the root CLAUDE.md as policy and .claude/CLAUDE.md as the execution checklist.

5) Two-Layer Guardrails

This is the part that made the biggest difference.

Layer 1 blocks risky patterns before execution (deny list).
Layer 2 inspects runtime command context with a hook script.

Attempted command
   |
   +--> Layer 1: permissions deny list
   |      - blocks: --no-verify, --force push, git add .
   |
   +--> Layer 2: workflow-guard.sh
          - blocks: commit without spec
          - blocks: artifact staging
          - blocks: multiline or Co-authored-by commit messages

This means I don't rely on memory or discipline alone. The system enforces the behavior.
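Here is a simplified sketch of the Layer 2 logic, written as a shell function so the rules are visible. The real workflow-guard.sh reads the attempted command from the Claude Code hook's stdin and exits non-zero to block it; the patterns and spec path here are illustrative, not my exact script:

```shell
# Sketch of workflow-guard.sh's decision logic (illustrative patterns).
guard() {
  cmd="$1"
  # blanket staging is forbidden; files must be added explicitly
  case "$cmd" in
    *"git add ."*|*"git add -A"*) echo "blocked: blanket staging" >&2; return 2 ;;
  esac
  # bypass flags are forbidden
  case "$cmd" in
    *--no-verify*|*--force*) echo "blocked: bypass flag" >&2; return 2 ;;
  esac
  # a spec file must exist before any commit
  case "$cmd" in
    *"git commit"*)
      ls specs/TASK-*.spec.md >/dev/null 2>&1 \
        || { echo "blocked: commit without spec" >&2; return 2; }
      ;;
  esac
  return 0
}
```

In the real hook, a blocking exit code makes Claude Code refuse the tool call and surface the reason, so the agent gets corrected in the loop rather than after the damage.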

6) Memory System

I keep persistent memory files for:

  • user preferences
  • team conventions
  • project constraints
  • repeated corrections

So each session starts with context, not from zero.
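The memory files themselves are nothing fancy; mine are plain markdown. An illustrative (made-up) entry:

```
# .claude/memory/preferences.md
- commits: single-line only, type(scope): description
- never stage artifacts (dist/, coverage/, *.log)
- repeated correction: no "while I'm here" refactors outside scope_in
```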

The Exact Request-to-Commit Flow

Here is my real flow for every code change.

Step 1: Spec First

I ask Claude to create spec(s) from the request.

Example request:

"Add retry logic to scraper API and surface retry count in admin UI."

Typical output from spec-architect:

  • Spec A (tools-domain): API retry behavior
  • Spec B (admin-domain + design collaborator): retry count UI

If a task can't fit in about 30 minutes, it gets split.

Step 2: Human Approval Gate

Specs stay draft until I approve.

This is where I fix assumptions before code exists, which is much cheaper than fixing code later.

Step 3: Route to the Right Specialist

agent-router reads the approved spec and dispatches it.

  • Sequential by default
  • Parallel only for truly independent tasks with non-overlapping paths

Step 4: Implement Inside Scope

The specialist agent implements only inside scope_in, respects scope_out, and then runs validation.

No "while I'm here" edits.

Step 5: Commit Discipline

One spec equals one commit.

  • run validation
  • explicit git add <file1> <file2>
  • single-line commit format: type(scope): description
  • delete completed spec file

Then move to the next spec.

Step 6: PR-Level Review

After all specs are done, pr-reviewer checks the full branch diff against the base branch.

This catches cross-spec regressions that per-task review can miss.

A Simple End-to-End Example

Let me show one tiny scenario:

Request: "Add CSV import button in catalog page"

Spec 1 (catalog-domain)
- goal: add button + file input + happy-path upload
- scope_in: catalog page + upload component
- scope_out: auth module, billing module
- validation: npm run verify

Spec 2 (backend)
- goal: accept CSV endpoint + validation errors
- scope_in: api route + service
- scope_out: db schema

Flow:
spec-architect -> approval -> router -> catalog specialist + backend specialist -> verify -> commits -> PR review

Because scope is explicit, I avoid accidentally touching unrelated modules.

Why This Workflow Works (for Me)

It separates thinking from typing

Specs force me to design first, code second.

It makes context explicit

I don't depend on "assistant memory vibes." Scope and constraints are written down.

It improves recoverability

If I stop mid-work, I can resume from specs and status instantly.

It reduces blast radius

Owned paths and scope boundaries prevent silent repo-wide changes.

It makes reviews faster

Reviewers can trace: request -> spec -> diff -> validation.

What Usually Breaks and How I Handle It

"This is just a one-liner"

One-liners are where policy violations start. I still create a small spec.

"Can we skip validation this time?"

No. If validation is too slow, optimize validation. Don't skip it.

"The agent touched out-of-scope files"

I stop and correct via new scoped spec. I don't merge "almost correct" behavior.

My Current Folder Layout

project/
├── CLAUDE.md
├── specs/
│   ├── TASK-2026-05-04-001.spec.md
│   └── templates/task.spec.template.md
└── .claude/
    ├── CLAUDE.md
    ├── settings.local.json
    ├── hooks/workflow-guard.sh
    └── agents/
        ├── spec-architect.md
        ├── agent-router.md
        ├── pr-reviewer.md
        ├── backend-specialist.md
        ├── database-specialist.md
        ├── ui-ux-frontend-design-specialist.md
        └── *-domain-specialist.md

Alternatives I Want to Explore Next

This setup works well, but I still want to experiment.

Alternative 1: Lightweight Mode for Tiny Repos

For personal throwaway projects:

  • keep spec + approval
  • collapse router + specialist into one constrained specialist
  • keep hooks mandatory

This may cut overhead while preserving safety.

Alternative 2: Stronger CI-Based Enforcement

Right now guardrails run locally in Claude Code. Next step is mirroring the same checks in CI:

  • reject non-conforming commit messages
  • reject artifact staging patterns
  • reject PRs without validation status

That makes enforcement team-wide, not machine-local.
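As a sketch, the commit-message half of that CI check could be a small shell function; the accepted type list and scope pattern below are my guesses at a reasonable convention, not a fixed standard:

```shell
# Hypothetical CI check: accept only single-line "type(scope): description"
# messages with no Co-authored-by trailers.
check_commit_msg() {
  msg="$1"
  # exactly one line (printf adds no trailing newline, so wc -l must be 0)
  [ "$(printf '%s' "$msg" | wc -l)" -eq 0 ] || return 1
  # conventional type(scope): description
  printf '%s' "$msg" | grep -Eq \
    '^(feat|fix|chore|docs|refactor|test)\([a-z0-9-]+\): .+' || return 1
  return 0
}
```

In CI this would run against `git log --format=%B` for each commit in the PR range and fail the build on the first violation.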

Alternative 3: Test-First Specs

I currently define validation commands in specs. I want to push this further:

  • require failing test case in spec for bug fixes
  • require new test case mapping for new behavior

This makes completion criteria even more objective.

Alternative 4: Domain Risk Scoring

Not all tasks need the same process depth. I want an auto score in spec-architect:

  • low risk: single-spec path
  • high risk: mandatory collaborator + expanded review checklist

Alternative 5: Multi-Tool Strategy

I use Claude as primary orchestrator. I may test a hybrid path where:

  • Claude handles decomposition + routing
  • another tool handles narrow code transforms
  • same spec/hook rules apply

The key is keeping the workflow stable even if tools change.

Practical Advice if You Want to Copy This

Start small. Do this in order:

  1. Add the spec template
  2. Add pre-action gate rules in CLAUDE.md
  3. Add hook + deny list
  4. Add router and one domain specialist
  5. Expand domain specialists only when needed

If you skip hard enforcement, the workflow eventually degrades.

Final Takeaway

My Claude workflow is not "just prompting better." It is a small operating system for code changes:

  • specs define intent
  • agents separate responsibilities
  • hooks enforce rules
  • review closes the loop

It works because it is explicit, constrained, and difficult to bypass.

If you are already shipping production code with AI assistants, this is the shift I recommend first: make the process executable, not just documented.
