
J Now


Classifying My Own Claude Code Habits Against Anthropic's Baseline

In February, Anthropic published a study of 9,830 Claude conversations measuring 11 observable collaboration behaviors — what they call the AI Fluency Index. I wanted to run that same classification against my own session history, not against an aggregate I couldn't see myself in.

The result is skill-tree. It pulls your Claude Code session files, extracts your messages, sends them to a remote classifier (Claude Haiku on Fly.io), and maps your behavior distribution across the same 11 behaviors from the study. Then it assigns one of seven archetype cards — rendered as tarot cards with museum art — and picks one behavior you haven't touched as a growth quest for your next session.
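The first stage of that pipeline is mechanical: read session transcripts and keep only the human-authored messages. Here is a minimal sketch of that step, assuming a JSONL transcript where each line is an object with a `type` field and a nested `message` dict; the field names are illustrative, not the plugin's actual schema.

```python
import json

def extract_user_messages(jsonl_lines):
    """Pull human-authored messages out of a session transcript.

    Assumes each line is a JSON object shaped roughly like
    {"type": "user", "message": {"content": "..."}} -- an
    illustrative schema, not necessarily what skill-tree parses.
    """
    messages = []
    for line in jsonl_lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines in the transcript
        entry = json.loads(line)
        if entry.get("type") == "user":
            content = entry.get("message", {}).get("content", "")
            # Keep only plain-text user turns; tool results etc. are skipped
            if isinstance(content, str) and content:
                messages.append(content)
    return messages
```

The filtered messages are what gets shipped to the remote classifier; assistant turns never leave the machine in this sketch.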

The behavior taxonomy comes from Dakan & Feller's 4D AI Fluency Framework: Description, Discernment, and Delegation are the three axes visible in chat logs. (The fourth axis, Diligence, isn't recoverable from conversation history.) The archetype assignment uses that distribution to slot you into one of seven profiles. You can see a live rendered example at skill-tree-ai.fly.dev/fixture/illuminator.
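To make the axis-to-archetype step concrete, here is a toy version of how per-behavior counts could roll up into the three visible 4D axes. The behavior names and the grouping below are hypothetical placeholders; the plugin's real eleven behaviors and seven archetype rules are its own.

```python
from collections import Counter

# Hypothetical behavior-to-axis grouping, for illustration only.
AXES = {
    "Description": {"clear_goal", "context_given", "format_specified"},
    "Discernment": {"output_checked", "pushback", "iteration"},
    "Delegation": {"task_split", "tool_granted", "autonomy_given"},
}

def dominant_axis(behavior_counts):
    """Sum per-behavior counts into the three chat-visible axes
    and return the heaviest one."""
    totals = Counter()
    for axis, behaviors in AXES.items():
        totals[axis] = sum(behavior_counts.get(b, 0) for b in behaviors)
    return totals.most_common(1)[0][0]

def growth_quest(behavior_counts):
    """Behaviors with zero observed uses -- candidates for the
    'growth quest' the plugin suggests for your next session."""
    all_behaviors = set().union(*AXES.values())
    return sorted(b for b in all_behaviors if behavior_counts.get(b, 0) == 0)
```

An archetype assignment would then key off the full distribution rather than a single dominant axis, but the shape of the computation is the same: counts in, profile out.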

The part I found most useful wasn't the archetype — it was seeing which behaviors I literally never used. I'd been heavy on Description and light on Delegation for months without realizing it.

Install in Claude Code:

```shell
claude plugin marketplace add robertnowell/ai-fluency-skill-cards
claude plugin install skill-tree-ai@ai-fluency-skill-cards
```

Also works in Cowork via skill-tree-ai.zip, and as an MCP server (npm install skill-tree-ai) for Cursor, VS Code, and Windsurf. Full analysis runs in 30–60 seconds and returns a stable URL per run.

github.com/robertnowell/skill-tree
