Yesterday, Anthropic published the results of the largest qualitative study ever conducted on AI — 80,508 interviews across 159 countries. The findings are fascinating, but three numbers jumped out at me:
- **27% worry about unreliability.** AI doing unexpected things is the #1 concern, the only category where the negatives outweighed the positives.
- **22% worry about losing autonomy and agency.** People fear not understanding what AI is doing under the hood, or feeling like AI is drawing the lines instead of them.
- **16% worry about cognitive atrophy.** People fear becoming dependent on tools they don't understand, losing the ability to think critically about what their tools are doing.
These are abstract concerns when talking about AI in general. But they become very concrete when you look at how developers configure Claude Code.
I'm one of those 27% and 22%. I use Claude Code daily, and what worries me isn't the model itself; it's that everything is changing so fast that it's genuinely hard to keep up with how AI coding tools should be configured. New features land weekly. Settings get added, deprecated, reorganized. I honestly don't feel fully confident about what impact my configuration has on the performance of the tools I rely on.
And it gets worse. Developers aren't just using one tool — they're experimenting with Claude Code, Cursor AI, Codex, and others, often in the same projects. Your dev environment can get polluted by different configurations that silently overlap and conflict. What's the actual impact? I don't know. And that uncertainty is the problem.
That's why I built ccinspect.
## The configuration problem nobody talks about
Claude Code has a powerful, layered configuration system: settings files, `CLAUDE.md` memory files, rules, agents, skills, commands, MCP servers, hooks, plugins, spread across 7+ locations with precedence-based merging. That's 30+ config files that silently interact with each other.
Here's the thing: most Claude Code users have no idea what their effective configuration actually is after all layers merge. And when things go wrong — an agent gets ignored, a permission doesn't apply, a rule contradicts another — debugging is painful because you're working blind.
Sound familiar? It should. This is exactly what happened with code before linters existed.
## What if we applied the linter model to AI configuration?
Every mature development ecosystem has static analysis. Python has flake8. JavaScript has ESLint. TypeScript has tsc. These tools don't write code for you — they catch mistakes, surface contradictions, and give you visibility into what's actually happening.
I asked: why doesn't this exist for AI coding assistants?
So I built ccinspect — a CLI tool that brings the linter model to Claude Code configuration.
## What it does
`cci lint` is the quality gate: 51 rules checking for dead references, permission contradictions, dangerous allows, oversized files, orphan agents, and more. Think of it as flake8 for your Claude Code setup.
```
$ cci lint

Settings (3 issues)
  ⚠ [settings/sandbox-recommended] Sandbox is not enabled
  ⚠ [settings/deny-env-files] Missing deny rules for .env files

Rules (3 issues)
  ⚠ [rules-dir/contradiction-keywords] Potential contradiction between rules
    │ architecture.md:100 "server components by default"
    │ frontend.md:46 "separate server components from client components"

Agents (2 issues)
  ✖ [agents/frontmatter-valid] Missing required name field
  ℹ [agents/orphan-agent] 3 agents never referenced
```
`cci blame` resolves the merged config and shows you what Claude Code actually sees: which permissions apply, which env vars win, which MCP servers are active, and which file is responsible. It's mypy for configuration: resolving layers of inheritance to tell you the runtime truth.
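The kind of resolution `cci blame` performs can be sketched like this. This is a simplified illustration with made-up layer names, keys, and ordering, not the tool's actual implementation or Claude Code's exact precedence algorithm:

```typescript
// Simplified sketch of precedence-based settings merging with "blame" tracking.
// Layer names, keys, and ordering are illustrative assumptions.

type Settings = Record<string, unknown>;
type Layer = { source: string; settings: Settings };

// Lowest precedence first; later layers win on conflicting keys.
const layers: Layer[] = [
  { source: "~/.claude/settings.json",     settings: { model: "opus", cleanupPeriodDays: 30 } },
  { source: ".claude/settings.json",       settings: { model: "sonnet" } },
  { source: ".claude/settings.local.json", settings: { cleanupPeriodDays: 7 } },
];

// Merge layers and record which file "wins" each key.
function mergeWithBlame(layers: Layer[]): { effective: Settings; blame: Record<string, string> } {
  const effective: Settings = {};
  const blame: Record<string, string> = {};
  for (const layer of layers) {
    for (const [key, value] of Object.entries(layer.settings)) {
      effective[key] = value;   // later (higher-precedence) layer overrides earlier one
      blame[key] = layer.source;
    }
  }
  return { effective, blame };
}

const { effective, blame } = mergeWithBlame(layers);
// effective: model "sonnet" (project wins over user), cleanupPeriodDays 7 (local wins)
// blame.model: ".claude/settings.json"
```

The point is the `blame` map: a merged value on its own tells you *what* is in effect, but debugging needs *which file* put it there.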
`cci audit` goes beyond static analysis. It reads Claude Code's session transcripts to find out what actually happened at runtime. Did your agents get delegated to? Were your rules loaded? Which MCP servers were called? It also detects write-blindness, a subtle bug where path-scoped rules don't load because Claude edited a file without reading it first. ccinspect is the only tool that catches this.
`cci logs stats --cost` shows your token economics: API-equivalent costs, cache efficiency, and a per-model breakdown.
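API-equivalent cost is straightforward arithmetic once you have token counts per category. The prices below are placeholder values per million tokens, not current Anthropic pricing, and the token categories are a simplification:

```typescript
// Sketch of API-equivalent cost from token counts.
// PRICE_PER_MTOK values are placeholders, not real pricing.
const PRICE_PER_MTOK = { input: 3.0, output: 15.0, cacheRead: 0.3 };

function apiEquivalentCost(tokens: { input: number; output: number; cacheRead: number }): number {
  return (
    (tokens.input / 1e6) * PRICE_PER_MTOK.input +
    (tokens.output / 1e6) * PRICE_PER_MTOK.output +
    (tokens.cacheRead / 1e6) * PRICE_PER_MTOK.cacheRead
  );
}

// A session that served 2M tokens from cache instead of paying full input price:
const cost = apiEquivalentCost({ input: 500_000, output: 100_000, cacheRead: 2_000_000 });
// ≈ 3.6 (0.5 * 3 + 0.1 * 15 + 2 * 0.3)
```

Cache efficiency matters because cache reads are typically an order of magnitude cheaper than fresh input tokens, so the same session can have wildly different costs depending on how well the cache is used.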
`cci session-recover` parses interrupted sessions and generates a paste-ready recovery prompt so you can pick up where you left off.
Everything runs fully offline. No API keys. No hooks to install. Just point it at a project.
## The developer tools parallel
If you've used any of these, you already understand ccinspect:
| You know this | ccinspect equivalent | What it does |
|---|---|---|
| flake8 / ESLint | `cci lint` | Static analysis against rules |
| mypy / tsc | `cci blame` | Resolves layers, shows effective state |
| pip list / npm ls | `cci scan` | Inventory of what's configured |
| diff / black --check | `cci compare` | Drift detection across projects |
| coverage.py | `cci audit` | Runtime utilization analysis |
Every mature ecosystem has this tooling layer. The AI coding assistant ecosystem has almost none of it yet. This is the beginning of a new category.
## The unique find: write-blindness
While building the runtime auditor, I discovered a class of bug I hadn't seen documented anywhere.
Claude Code's path-scoped rules (rules with `paths:` frontmatter like `src/api/**`) only load into context when Claude reads a matching file. But if Claude jumps straight to editing a file without reading it first, the rule never loads, and your carefully written instructions aren't in context while Claude is making changes.
I call this write-blindness. It's silent, hard to spot manually, and can explain why Claude "ignores" your rules in some sessions but not others.
cci audit detects this by correlating Read and Write tool calls against your rule paths across all sessions.
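The detection logic can be sketched roughly like this. The tool-call shape, the glob handling, and the session format are all illustrative assumptions, not ccinspect's internals:

```typescript
// Sketch of write-blindness detection: flag Write calls to rule-scoped
// paths that were never preceded by a Read of the same file.
// ToolCall shape and glob support are simplified assumptions.

type ToolCall = { tool: "Read" | "Write"; path: string };

// Convert a simple glob like "src/api/**" to a RegExp (minimal, illustrative).
function globToRegExp(glob: string): RegExp {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*\*/g, "\u0000")           // protect ** with a sentinel
    .replace(/\*/g, "[^/]*")              // single * stays within one path segment
    .replace(/\u0000/g, ".*");            // ** crosses path segments
  return new RegExp(`^${escaped}$`);
}

// Walk a session's tool calls in order; a Write to a path matching the
// rule's glob is "blind" if that exact file was never Read before it.
function findWriteBlindness(ruleGlob: string, calls: ToolCall[]): string[] {
  const pattern = globToRegExp(ruleGlob);
  const read = new Set<string>();
  const blind: string[] = [];
  for (const call of calls) {
    if (call.tool === "Read") read.add(call.path);
    else if (pattern.test(call.path) && !read.has(call.path)) blind.push(call.path);
  }
  return blind;
}

const session: ToolCall[] = [
  { tool: "Read",  path: "src/api/users.ts" },
  { tool: "Write", path: "src/api/users.ts" },  // fine: read first, rule loaded
  { tool: "Write", path: "src/api/orders.ts" }, // blind: edited without reading
];

console.log(findWriteBlindness("src/api/**", session)); // ["src/api/orders.ts"]
```

The real analysis has to run this across every session transcript and every path-scoped rule, which is exactly why it's so hard to spot by eye.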
## The roadmap: from CLI to CI/CD
Right now ccinspect is a CLI you run manually. The natural next step — the same path flake8 and ESLint followed — is CI/CD integration: a pre-commit hook or GitHub Action that validates your Claude Code config on every PR.
If your team is adopting Claude Code, config drift across projects is inevitable. A linter in CI catches it before it causes problems.
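To make the idea concrete, a CI step might look something like this. This is a hypothetical workflow sketch, not an official ccinspect integration, and it assumes `lint` exits non-zero when it finds errors, as linters conventionally do:

```yaml
# Hypothetical GitHub Actions workflow -- not an official ccinspect integration.
name: claude-config-lint
on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npx ccinspect lint   # assumes a non-zero exit code on errors
```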
## Try it
```
npx ccinspect scan    # what do I have?
npx ccinspect lint    # what's wrong?
npx ccinspect blame   # what's actually in effect?
npx ccinspect audit   # is my config working at runtime?
```
GitHub: github.com/fedius01/ccinspect
npm: `npm install -g ccinspect`
It's MIT licensed, fully offline, and has 1,300+ tests. Feedback welcome — especially on which rules are too noisy, which are missing, and what you'd want from a CI integration.
The Anthropic 81k-interview study confirmed what I'd been feeling while building this tool — if you're curious about what 80,000 people hope and fear from AI, it's worth reading.