## Quick Start

Pick one and run it. No install needed.

```shell
npx cc-session-stats
```

That's it. It reads your `~/.claude` folder and shows total hours, active days, streaks, top projects, and day-of-week patterns.

Every tool below works the same way: `npx <name>`, instant results, nothing uploaded.
## The Tools

### Session Analytics

These answer "how am I using Claude Code?"
| Tool | What it shows |
|---|---|
| `cc-session-stats` | Total hours, active days, streaks, top projects |
| `cc-session-length` | Duration distribution -- from 30-second restarts to 100-hour marathons |
| `cc-momentum` | Week-by-week session count with trend: Accelerating, Stable, or Declining |
| `cc-gap` | Time between sessions. Are you Always On or a Weekend Coder? |
| `cc-depth` | Conversation depth per session -- Quick Prompter vs. Loop Runner |
| `cc-night-owl` | Hour-of-day distribution. Night-owl score included |
### Tool Usage

These answer "what is Claude actually doing?"
| Tool | What it shows |
|---|---|
| `cc-tools` | Distribution of all tool calls: Read, Bash, Edit, Grep, and 40+ others |
| `cc-tool-mix` | Category breakdown: File Ops, Shell, Search, AI Agents |
| `cc-bash` | Which shell commands Claude runs most. Flags anti-patterns (`cat` instead of Read) |
| `cc-flow` | Which tool follows which. Bash → Bash 70% of the time (momentum) |
| `cc-pair` | Which tools always travel together. Edit never appears without Read |
| `cc-sequence` | Trigram analysis. Edit → Read → Edit: 67% of reads lead back to editing |
### Context & Tokens

These answer "how is the context window being used?"
| Tool | What it shows |
|---|---|
| `cc-context-check` | Color-coded progress bar: green (<70%), yellow (70-85%), red (>85%) |
| `cc-compact` | How often compaction fires, at what token count, auto vs. manual |
| `cc-cache` | Cache hit ratio and illustrative API cost savings. Typical: 90%+ hit rate |
| `cc-live` | Real-time session monitor: tokens, burn rate, cache efficiency. Updates every 5s |
### Code & Files

These answer "what is Claude building?"
| Tool | What it shows |
|---|---|
| `cc-file-churn` | Which files Claude touches most. Find your highest-churn hotspots |
| `cc-edit` | Edit distribution by file type. Growth ratio: new code vs. replaced code |
| `cc-scope` | Blast radius per session. 43% of sessions touch 2-5 files; 4% sweep 31+ |
| `cc-lang` | Language breakdown. GDScript at a 13.8:1 edit-to-new ratio = deep iteration |
### Autonomy & Human Interaction

These answer "how much of the work is AI vs. human?"
| Tool | What it shows |
|---|---|
| `cc-agent-load` | Interactive vs. autonomous sessions. Shows your autonomy ratio |
| `cc-ghost-log` | Git commits from Ghost Days -- days when the AI ran while you were offline |
| `cc-checkin` | When humans check in during sessions. 30% are fully autonomous |
| `cc-denied` | Every Bash command you said NO to. Denial rate and top blocked commands |
| `cc-error` | Tool failure rates. WebFetch fails 25% of the time |
### Reporting

These generate ready-to-share output.
| Tool | What it shows |
|---|---|
| `cc-weekly-report` | Markdown summary of the week: hours, lines, top projects |
| `cc-daily-report` | Ghost Day report with a tweet-ready summary |
| `cc-receipt` | ASCII receipt of your AI's daily work. Screenshot-worthy |
| `cc-audit-log` | Human-readable trail of every action: files, commands, commits |
| `cc-ai-heatmap` | Standalone HTML heatmap -- 52 weeks of activity, color-coded by hours |
### Cost & Forecasting

These answer "what does this cost and where is it going?"
| Tool | What it shows |
|---|---|
| `cc-cost-forecast` | Month-end spend projection against Max plan tiers ($20/$100/$200) |
| `cc-save` | How much prompt caching saved you. One dataset: $59.7K saved (86% of the bill) |
| `cc-model` | Opus vs. Sonnet vs. Haiku distribution with a weekly timeline |
| `cc-model-selector` | Task complexity → model recommendation, with a cost-multiplier comparison |
### Fun & Shareable

These are for showing off or commiserating.
| Tool | What it shows |
|---|---|
| `cc-score` | 0-100 productivity score. S-rank = Cyborg |
| `cc-personality` | Developer archetype: Architect, Sprinter, Overnight Builder |
| `cc-wrapped` | Spotify Wrapped for Claude Code. 7 animated slides, browser-based |
| `cc-roast` | A brutally honest review of your CLAUDE.md |
| `cc-bingo` | 50 relatable Claude Code moments as a randomized bingo card |
## Beyond the 35: the full collection

The toolkit actually contains 100+ tools covering every angle I could think of. The 35 above are the ones I reach for most; the rest go deeper into specific patterns:
- `cc-think` -- how deeply Claude reasons before acting (52.8% of sessions use thinking blocks)
- `cc-speed` -- tools per hour. Median: 99. Max burst: 1,435/hr
- `cc-warmup` -- does Claude warm up or fade? 60% of sessions fade by the end
- `cc-recovery` -- 99% self-recovery rate across 6,512 errors
- `cc-streak` -- a median of 12 successful tool calls between errors
- `cc-multi` -- parallel tool calls per turn. Glob runs in parallel 68% of the time
- `cc-arc` -- does the Explore → Code → Verify arc exist? (No. It's a myth.)
- `cc-mcp` -- an MCP server so Claude itself can query your stats in plain English
- `cc-alert` -- a cron job that warns you before you lose your streak
- `cc-stats-badge` -- an SVG badge for your GitHub README
Full catalog: yurukusa.github.io/cc-toolkit
## Design Decisions

**Zero dependencies.** Every tool is a single JavaScript file. No `node_modules`, no build step. `npx` downloads it, runs it, done.
**npx-first.** You shouldn't have to install something to try it. Every CLI tool works via `npx <name>` with no prior setup.

**Local only.** Everything reads from `~/.claude` on your machine. Nothing is uploaded, nothing phones home. Your session data stays yours.

**Browser tools too.** Several tools (`cc-wrapped`, `cc-roast`, `cc-context-check`, `cc-score`, `cc-achievements`) run entirely in the browser. Drop your `~/.claude` folder in, get results. Still nothing uploaded.

**JSON output.** Every CLI tool supports `--json` for piping into other tools or dashboards.
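The `--json` flag is what makes the piping work. A sketch of consuming it from a Node script -- the field names (`totalHours`, `activeDays`) are illustrative, not the tool's actual output schema:

```javascript
// Sketch: post-process `npx cc-session-stats --json` output.
// In a real pipeline you would read the JSON from process.stdin;
// a hardcoded sample keeps this self-contained. Field names are
// illustrative, not the actual schema.
const sample = '{"totalHours": 412, "activeDays": 58}';

const stats = JSON.parse(sample);
console.log(`${stats.totalHours}h across ${stats.activeDays} active days`);
```

The same output drops straight into `jq` on the command line, e.g. `npx cc-session-stats --json | jq '.totalHours'` (again assuming that field name).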
## Where this came from
I started running Claude Code autonomously in January 2026. Within a week I had questions I couldn't answer from the terminal. How many hours? Which projects? Was the AI actually productive while I slept, or just spinning?
The first tool was `cc-session-stats`. It parsed session files and printed a summary. That answered "how much" but not "what kind of work." So I built `cc-agent-load` to split interactive from autonomous sessions. Then `cc-ghost-log` to see what the AI committed on days I wasn't around.
Each tool answered one question and raised two more. 60 days later, there are 100+.
They're all free, MIT licensed, and the full source is on GitHub.
GitHub: github.com/yurukusa/cc-toolkit
Try it now:

```shell
npx cc-session-stats
```
Full catalog: yurukusa.github.io/cc-toolkit
Built by yurukusa. Follow the experiment: can an AI agent earn its own keep?