J Now
Measuring my Claude habits against 9,830 real conversations

I'd been using Claude Code daily for months and noticed I kept reaching for the same handful of prompting patterns. Faster, maybe. But broader? No idea.

Anthropic published the AI Fluency Index in February 2026 — a study of 9,830 Claude conversations that classifies 11 observable collaboration behaviors. I wanted to run the same classification on my own session history and see where I was thin.

That's what skill-tree does. It reads your Claude Code or Cowork session files, classifies your messages against the same 11-behavior taxonomy, and assigns one of seven archetypes, each rendered as a tarot card with museum art. The live output for the fixture persona is at skill-tree-ai.fly.dev/fixture/illuminator if you want to see what a card looks like before running it on your own history.

The behavior taxonomy comes from Dakan & Feller's 4D AI Fluency Framework: Description, Discernment, Delegation, and Diligence. Diligence doesn't show up in chat logs, so only three axes are scored. Those three still surfaced a pattern that surprised me: I'd been heavy on Description (telling Claude what I want) and had almost never touched Delegation (asking Claude to plan or decompose independently).
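To make the "thin axis" idea concrete, here is a minimal sketch of how classified message labels could be tallied per axis. The behavior names and the behavior-to-axis mapping below are made up for illustration; the real taxonomy has 11 behaviors across the three scoreable axes.

```python
from collections import Counter

# Illustrative mapping only: these behavior labels are hypothetical,
# not the actual 11-behavior taxonomy from the AI Fluency Index.
AXIS_OF = {
    "state_goal": "Description",
    "give_context": "Description",
    "verify_output": "Discernment",
    "ask_for_plan": "Delegation",
}

def thinnest_axis(labels: list[str]) -> str:
    """Return the axis with the fewest classified behaviors."""
    counts = Counter({"Description": 0, "Discernment": 0, "Delegation": 0})
    for label in labels:
        counts[AXIS_OF[label]] += 1
    return min(counts, key=counts.get)
```

A history heavy on goal-stating and light on planning requests would surface Delegation as the axis to work on.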

The growth quest feature is the part I use most. After classification, the tool picks one behavior I haven't tried recently and surfaces it as a named quest that persists into the next session via the SessionStart hook.

There's a detail in the state persistence worth noting if you're building plugins for both Claude Code and Cowork. Claude Code has a stable ~/.skill-tree/ path across sessions. Cowork's $HOME is ephemeral — state written there doesn't survive. The fix was a dual-path design: detect the runtime and write to $CLAUDE_PLUGIN_ROOT/.user-state/ under Cowork instead. One constraint, cleaner abstraction.
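The dual-path design can be sketched in a few lines. The runtime-detection heuristic below (checking for a `COWORK` environment variable) is an assumption for illustration; the grounded parts are the two paths themselves: stable `~/.skill-tree/` under Claude Code, `$CLAUDE_PLUGIN_ROOT/.user-state/` under Cowork.

```python
import os
from pathlib import Path

def state_dir() -> Path:
    """Pick a state directory that survives between sessions.

    Under Cowork, $HOME is ephemeral, so state goes under
    $CLAUDE_PLUGIN_ROOT/.user-state/ instead of the home directory.
    The COWORK env-var check is an assumed detection signal, not
    the plugin's actual runtime-detection code.
    """
    plugin_root = os.environ.get("CLAUDE_PLUGIN_ROOT")
    running_under_cowork = os.environ.get("COWORK") is not None  # assumption
    if plugin_root and running_under_cowork:
        path = Path(plugin_root) / ".user-state"
    else:
        path = Path.home() / ".skill-tree"
    path.mkdir(parents=True, exist_ok=True)
    return path
```

Centralizing the choice in one function means every reader and writer of state goes through the same path logic, which is what makes the constraint produce a cleaner abstraction.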

The 7-step orchestration (find files → extract messages → remote classifier → archetype assignment → narrative synthesis → render → return URL) takes 30–60 seconds end-to-end. The classifier is a Claude Haiku service on Fly.io; the rendered card is stored on a Fly.io volume and returned at a stable URL.
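The orchestration above is a straight-line pipeline, which can be sketched as a chain of functions. Every step body here is a stand-in stub (the real tool globs session files, calls the remote Haiku classifier, and renders on Fly.io); only the step order is taken from the post.

```python
# Hypothetical skeleton of the 7-step orchestration; each function body
# is a stub standing in for the real implementation.

def find_session_files(root: str) -> list[str]:
    return [f"{root}/session.jsonl"]               # stub: discover session files

def extract_messages(files: list[str]) -> list[str]:
    return [f"message from {f}" for f in files]    # stub: parse message records

def classify_remote(messages: list[str]) -> dict[str, int]:
    return {"description": len(messages)}          # stub: remote Haiku classifier

def assign_archetype(behaviors: dict[str, int]) -> str:
    return "illuminator"                           # stub: pick 1 of 7 archetypes

def synthesize_narrative(archetype: str) -> str:
    return f"You are the {archetype}."             # stub: narrative synthesis

def render_card(archetype: str, narrative: str) -> str:
    return f"https://example.invalid/{archetype}"  # stub: render, return stable URL

def run_pipeline(root: str) -> str:
    files = find_session_files(root)
    messages = extract_messages(files)
    behaviors = classify_remote(messages)
    archetype = assign_archetype(behaviors)
    narrative = synthesize_narrative(archetype)
    return render_card(archetype, narrative)
```

The remote classifier call is the dominant cost in the 30–60 second wall time; the remaining steps are local.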

Install in Claude Code:

```shell
claude plugin marketplace add robertnowell/ai-fluency-skill-cards
claude plugin install skill-tree-ai@ai-fluency-skill-cards
```

Also available as the npm package skill-tree-ai for Cursor, VS Code, and Windsurf via MCP.

https://github.com/robertnowell/ai-fluency-skill-cards
