DEV Community

Sushil Kulkarni
I Was Setting Up Claude Code Wrong. So I Built CCL.

Every Claude Code project I started followed the same painful ritual.

Open a blank directory. Stare at it. Write a CLAUDE.md from scratch — half guessing at what Claude actually needs to know, half copying from a project I did last month. Then, manually create .claude/settings.json, wonder if my permissions are too loose, google what hooks actually do, forget to add a .claudeignore, realise two days in that I should have set up subagents from the start, and eventually accept that the AI I'm using to write production code is working with a half-configured environment I threw together in fifteen minutes.

Sound familiar?

The setup tax for Claude Code is real. And nobody had built a proper solution for it — so I did.

Meet CCL — Claude Context Loader.

🌐 https://sushilkulkarni1389.github.io/ccl/
https://github.com/sushilkulkarni1389/ccl ← If this resonates, a star means everything for an early-stage open source project.


What Claude Code Actually Needs to Work Well

Before I explain what CCL does, it's worth being precise about what "setting up Claude Code" even means — because it's more than most people realise.

A properly configured Claude Code project has:

  • CLAUDE.md — your project's onboarding document, written for Claude, not for humans. It needs to be concise (under 200 lines — it loads fully into context every session), opinionated, and cover the things Claude would otherwise get wrong: your stack, your commands, your conventions, your absolute prohibitions.
  • .claude/settings.json — permissions and security hooks. What Bash commands are allowed? What's blocked? What runs before every shell command to prevent disasters?
  • .claude/skills/ — lazy-loaded instruction sets that activate when Claude detects it needs them. A deploy skill. A run-migrations skill. Written with precise trigger sentences that produce ~90% auto-activation vs. ~20% for vague ones.
  • .claude/agents/ — subagents pre-configured with the right model (Haiku for bulk reads and security scans, Sonnet for implementation, Opus for architecture decisions), right tools, and read-only scope.
  • .claudeignore — noise exclusions so Claude isn't wasting context on node_modules, build artefacts, and logs.

Getting all of this right, from scratch, for every project? That's the setup tax. And most people either skip it entirely or do it once for one project and never revisit it.
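For concreteness, here's a hypothetical sketch of what a minimal CLAUDE.md in this spirit might contain. Every detail below (project name, stack, commands, rules) is illustrative, not CCL's actual output:

```markdown
# Project: acme-api (illustrative example)

## Stack
- TypeScript, Node 20, Fastify, PostgreSQL

## Commands
- dev: `npm run dev`
- test: `npm test`
- lint: `npm run lint`

## Conventions
- All DB access goes through `src/db/`; never inline SQL in route handlers.

## Never
- Never run `npm publish` or edit `.env` files.
```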


What CCL Does

CCL is an MCP server that plugs directly into Claude Code. You register it once, then type ccl in any project. It does the rest.

Here's the actual flow:

```shell
# Register once — that's it
npx @sushilkulkarni1389/ccl-mcp

# Then in any project, inside Claude Code:
ccl
```

CCL gives you two paths:

Auto-detect — CCL reads your package.json, pyproject.toml, go.mod, Cargo.toml, Dockerfile, and CI config. It infers your stack, your dev/test/build/lint commands, your project type, and builds a complete scaffold plan automatically.

Guided setup — Five focused questions, answered one at a time. CCL fills in everything else with intelligent defaults.
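To make the auto-detect idea concrete, here's a minimal sketch of manifest-based stack inference. The names and default commands are illustrative assumptions, not CCL's actual API — a real detector would also parse manifest contents (the `scripts` block in package.json, tool sections in pyproject.toml) rather than just matching file names:

```typescript
// Hypothetical sketch of the auto-detect step: map well-known manifest
// files found in the project root to a stack guess with default commands.
type StackGuess = { language: string; commands: Record<string, string> };

const MANIFESTS: Record<string, StackGuess> = {
  "package.json":   { language: "TypeScript/Node", commands: { test: "npm test", build: "npm run build" } },
  "pyproject.toml": { language: "Python",          commands: { test: "pytest" } },
  "go.mod":         { language: "Go",              commands: { test: "go test ./..." } },
  "Cargo.toml":     { language: "Rust",            commands: { test: "cargo test" } },
};

// Given the file names present in the project root, return one guess per
// recognised manifest.
function detectStack(rootFiles: string[]): StackGuess[] {
  return rootFiles.filter((f) => f in MANIFESTS).map((f) => MANIFESTS[f]);
}
```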

Either way, before it writes a single file, CCL shows you the exact content of everything it will create. Line by line. You can request changes in plain English. It revises and re-presents. Nothing touches your disk until you say so.

Then it scaffolds everything in one shot:

```text
your-project/
├── CLAUDE.md                        ← Onboarding doc, ≤200 lines, enforced
├── .claudeignore                    ← Noise exclusions
└── .claude/
    ├── settings.json                ← Permissions + security hooks
    ├── settings.local.json          ← Machine-local overrides (gitignored)
    ├── ccl-practices.json           ← Self-updating best practices
    ├── ccl-state.json               ← Scaffold state for safe resume
    ├── skills/
    │   └── [skill-name]/SKILL.md   ← One per detected workflow
    └── agents/
        └── [agent-name].md         ← One per inferred task scope
```

Why It Had to Be an MCP Server (Not a CLI)

This was a deliberate architectural decision, and it matters.

The obvious alternative was a CLI tool — npx @sushilkulkarni1389/ccl-mcp scans your project, writes your files, done. Clean, simple, familiar.

But a CLI breaks the three things that make CCL worth using:

1. The conversational review loop. The plan review flow — "Here's what I'll scaffold, want to change anything?" — works because Claude Code is the conversation engine. A CLI gives you readline prompts. You lose natural language refinement. "Make the deploy skill more cautious about staging environments" becomes a form input. That's a completely different (and worse) product.

2. Web search for best practices. CCL uses Claude Code's native web search — no external API, no API key, no dependency. A CLI has no access to that. You'd need to integrate Brave, Tavily, or similar — which adds cost and complexity that defeats the zero-config premise.

3. Borrowed intelligence. As an MCP server, CCL has zero AI logic of its own. Claude Code is the brain. Every plan generation, every web search interpretation, every conversational nuance is Claude doing the work. A CLI would need to call the Anthropic API directly with its own credentials — which you'd either have to bundle (bad) or force users to provide (friction). The MCP architecture lets CCL be intelligent without owning any intelligence itself.

The ccl command feels native to Claude Code because it is native to Claude Code.


The Thing That Makes It Different: Self-Updating Best Practices

Every seven days, CCL checks whether its built-in best practices are still current.

```text
📦 It's been 7 days since your best practices were last checked.

Would you like me to search for updates?

  [refresh] — refresh now (~30 seconds)
  [later]   — remind me next time
  [never]   — don't ask again
```

If you say refresh, CCL performs a web search, diffs the results against ccl-practices.json, and presents exactly what changed before writing anything:

```text
✦ 2 new practices found
✦ 1 outdated practice to remove
✦ 14 practices unchanged

NEW:
+ [practice title] — [source URL]
+ [practice title] — [source URL]

REMOVE:
- [practice title] — no longer recommended as of [date]

Accept changes? Type 'yes' or 'no'.
```

You can accept everything, reject everything, or review each change one at a time. This is the part no other tool has — your Claude Code configuration doesn't go stale.
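The diff step itself is conceptually simple. Here's an illustrative sketch (names are assumptions, not CCL's actual code) of comparing freshly searched practices against the stored `ccl-practices.json` contents:

```typescript
// Hypothetical sketch: partition practices into added / removed / unchanged
// by comparing titles between the stored set and the freshly fetched set.
type Practice = { title: string; source: string };

function diffPractices(current: Practice[], fetched: Practice[]) {
  const have = new Set(current.map((p) => p.title));
  const seen = new Set(fetched.map((p) => p.title));
  return {
    added:     fetched.filter((p) => !have.has(p.title)),   // new practices
    removed:   current.filter((p) => !seen.has(p.title)),   // outdated practices
    unchanged: current.filter((p) => seen.has(p.title)),
  };
}
```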


How It Compares to What Already Exists

I looked at everything before building CCL. Here's the honest landscape:

| Tool | What it does | What it doesn't do |
| --- | --- | --- |
| Claude Code `/init` | Generates a CLAUDE.md by scanning your codebase | No skills, no agents, no hooks, no best practices, one-shot dumb file dump |
| Mastery Starter Kit | A GitHub repo with template files you clone manually | Static template, no intelligence, no self-updating practices, no resume-from-failure |
| OpenSpec | Spec-driven feature development inside Claude Code | Assumes your environment is already set up — fills in after CCL |
| Google agents-cli | Scaffolds, evaluates, deploys AI agents to Google Cloud | Completely different layer — production deployment pipeline, not workspace setup |
| Community repos | Reference implementations and examples | Manual CLAUDE.md creation, no automation at all |

The gap CCL fills: none of these have self-updating best practices, project detection with intelligent plan generation, resume-from-failure state, or the conversational review loop. The closest competitor is essentially a well-curated zip file. CCL is a live, intelligent setup agent.


Recovering From Interrupted Scaffolds

This one was important to get right. Scaffolding writes multiple files — and things fail. Network drops, permission errors, disk full. Without handling this properly, you end up with a half-written project and no clean way to recover.

CCL tracks state after every file write:

```json
{
  "status": "in_progress",
  "last_completed_step": "skills/deploy",
  "remaining_steps": ["agents/security-auditor", "settings.json"]
}
```

On the next ccl, if interrupted state is detected:

```text
⚠️  It looks like a previous scaffold was interrupted.

[1] Continue from where I left off
[2] Start again from scratch
```

All writes are atomic (temp file + rename) — resuming re-executes the full plan, and already-written files are overwritten with identical content. Idempotent by design.


Security — Not an Afterthought

CCL ships with security baked in from the start, not added on top.

Default settings.json blocks dangerous commands out of the box:

```json
"deny": [
  "Bash(rm -rf:*)",
  "Bash(curl:*)",
  "Bash(wget:*)"
]
```

And runs hooks before every shell command and after every file write:

```json
"hooks": {
  "PreToolUse": [
    { "matcher": "Bash", "hooks": [{ "type": "command", "command": "ccl-validate-bash" }] }
  ],
  "PostToolUse": [
    { "matcher": "Write", "hooks": [{ "type": "command", "command": "ccl-audit-write" }] }
  ]
}
```

Beyond the defaults, the codebase carries nine security fixes added after the initial build:

  • Prompt injection guard on LLM-generated overrides — free-form review-loop responses are validated, field lengths are capped, shell metacharacter and path-traversal patterns are rejected before they touch buildScaffoldPlan
  • File permission hardening — sensitive writes (claude.json, ccl-*.json) are chmod 0o600 before rename
  • Practice candidate validation — web search results are validated against a trusted domain allowlist before they reach the diff engine. Unknown domains are dropped silently
  • Unpredictable temp file names — all atomic writes use randomBytes(8) suffixes — no predictable collision targets
  • Agent tool permission validation — agent YAML frontmatter is parsed and inspected before writing; disallowed tools short-circuit the step as skipped, not failed
  • YAML parser hardening — frontmatter is parsed with { schema: "failsafe" }, blocking tag-driven type coercion like !!js/function
  • Path traversal guard — every planned file path is resolved against the scaffold root before the temp file is written
  • Elicitation audit trail — every user prompt and response is logged via sendLoggingMessage tagged [ccl:elicit]
  • Unicode normalisation — all free-text fields are NFKC-normalised before regex/blocklist checks to prevent homoglyph bypasses

The full trust boundary model is documented in SECURITY.md.
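To illustrate one of these, here's a sketch of a path traversal guard along the lines described above. The function name is an assumption, not CCL's actual export — the idea is simply to resolve every planned path against the scaffold root and reject anything that escapes it:

```typescript
// Hypothetical path-traversal guard: resolve the planned relative path
// against the scaffold root, and refuse to return any path that lands
// outside that root (e.g. "../../etc/passwd").
import * as path from "node:path";

function resolveInsideRoot(root: string, planned: string): string {
  const rootResolved = path.resolve(root);
  const resolved = path.resolve(rootResolved, planned);
  const escapes =
    resolved !== rootResolved &&
    !resolved.startsWith(rootResolved + path.sep);
  if (escapes) {
    throw new Error(`Refusing to write outside scaffold root: ${planned}`);
  }
  return resolved;
}
```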


The Tech Stack

CCL is TypeScript, end to end. MIT licensed. Two packages:

| Package | What it does |
| --- | --- |
| `@sushilkulkarni1389/core` | Shared logic — plan generation, file writing, practices manager, project scanner |
| `@sushilkulkarni1389/mcp` | MCP server + `npx @sushilkulkarni1389/ccl-mcp` registration script + `/ccl` command handler |

Key dependencies:

| Layer | Choice | Why |
| --- | --- | --- |
| MCP server SDK | `@modelcontextprotocol/sdk` | stdio transport + tool registration |
| Anthropic client | `@anthropic-ai/sdk` | LLM calls from the server (`llmCall` wrapper) |
| Distribution | `npx` | Zero global install, always latest |
| State storage | JSON files in `.claude/` | Simple, portable, git-friendly |

Model routing follows Anthropic's current best practice:

  • claude-haiku-4-5 → subagents: bulk reads, security scans, dependency mapping
  • claude-sonnet-4-6 → daily implementation, multi-file edits, orchestrator default
  • claude-opus-4-7 → complex architecture decisions, heavy algorithmic work
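Expressed as code, that routing table is just a lookup. The task categories below are illustrative; the model IDs are copied verbatim from the list above:

```typescript
// Hypothetical sketch of model routing: pick a model ID per task category.
type Task = "bulk-read" | "security-scan" | "implementation" | "architecture";

const MODEL_ROUTE: Record<Task, string> = {
  "bulk-read":      "claude-haiku-4-5",  // cheap, fast bulk work
  "security-scan":  "claude-haiku-4-5",
  "implementation": "claude-sonnet-4-6", // daily driver / orchestrator default
  "architecture":   "claude-opus-4-7",   // heavy reasoning
};

const modelFor = (task: Task): string => MODEL_ROUTE[task];
```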

The test suite is 227 tests across core and MCP, including 27 end-to-end integration tests running against real tmpdir fixtures for eight project types (Node/TS, Python/FastAPI, Go, Rust, Flutter, monorepo, existing scaffold, empty dir). Wall-clock for the full integration suite: ~0.9 seconds.


When to Use CCL

Use CCL if:

  • You're starting a new project and want Claude Code configured correctly from minute one
  • You've been using Claude Code for a while but your CLAUDE.md is a mess of copy-paste
  • You want subagents and skills but don't want to research the right patterns from scratch
  • You care about not accidentally running rm -rf via an AI-suggested shell command
  • You want your Claude Code best practices to stay current without manually tracking Anthropic's docs

CCL is not for:

  • Scaffolding application code (React, NestJS, FastAPI boilerplate) — that's a different tool
  • Deploying AI agents to production infrastructure — that's what Google's agents-cli is for
  • Teams already happy with a manual CLAUDE.md workflow (though you might change your mind after trying it)

Getting Started

```shell
# Step 1 — register CCL once (any terminal)
npx @sushilkulkarni1389/ccl-mcp

# Step 2 — open Claude Code in your project
cd your-project
claude

# Step 3 — type this
ccl
```

That's it. No flags. No config files to write before writing config files. No docs to read before reading docs.

CCL will scan your project, build a plan, show you everything, take your feedback, and scaffold the whole thing in one shot.


The Honest Pitch

The Claude Code ecosystem is evolving fast — skills, subagents, and hooks all matured significantly in late 2025. But the tooling to set it all up correctly hasn't kept pace. Most developers are either starting from scratch on every project or working with a CLAUDE.md they wrote in ten minutes two months ago and never touched since.

CCL is the setup layer the ecosystem needs. It's open source, MIT-licensed, and the entire build — 227 passing tests, full TypeScript strict mode, and nine security fixes — is documented and available.

Star CCL on GitHub

🌐 https://sushilkulkarni1389.github.io/ccl/

If you've felt the Claude Code setup tax, CCL is for you. And if you have thoughts, edge cases, or want to contribute, the issues are open, and the CONTRIBUTING guide is there.


Built with Claude Code, configured by CCL.

