I ran Anthropic's AI fluency study against my own Claude sessions

Anthropic published a study in February that classified 11 observable collaboration behaviors across 9,830 Claude conversations — things like whether users clarify ambiguity before delegating, whether they push back on Claude's framing, whether they decompose tasks or hand over monoliths. I read it and immediately wanted to know what my distribution looked like, not the population's.

So I built skill-tree: a plugin that pulls your Claude Code or Cowork session history, runs it through the same 11-behavior taxonomy (organized across three axes from Dakan & Feller's 4D AI Fluency Framework: Description, Discernment, Delegation), tells you which behaviors you actually use and which you never touch, and picks one you haven't tried as a growth quest for your next session.
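To make the taxonomy concrete, here is roughly the shape of the report it produces. This is a simplified sketch: the axis names come from the framework, but the field names and the example behavior id below are illustrative rather than the plugin's exact schema.

type Axis = "Description" | "Discernment" | "Delegation";

interface BehaviorObservation {
  behaviorId: string;   // one of the 11 taxonomy behaviors, e.g. "clarify-ambiguity" (illustrative id)
  axis: Axis;
  sessionCount: number; // how many analyzed sessions showed this behavior
}

interface FluencyReport {
  behaviors: BehaviorObservation[]; // all 11 behaviors, including the zero-count ones
  archetype: string;                // one of the seven archetype cards
  growthQuest: string;              // a behavior you haven't tried, suggested for your next session
  reportUrl: string;                // stable URL to the rendered card
}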

The classifier runs on Fly.io using Claude Haiku, takes 30–60 seconds end-to-end, and returns a stable URL with the result rendered as a tarot-style archetype card using curated museum art. There are seven possible archetypes. Live example: skill-tree-ai.fly.dev/fixture/illuminator
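Under the hood, the classification step boils down to a model call over the session transcript. Here's a minimal sketch of that call using the Anthropic TypeScript SDK; the prompt, model alias, and function name are illustrative stand-ins, and the deployed service layers batching, retries, and the card rendering on top of something shaped like this.

import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Illustrative sketch: classify one session transcript against the 11-behavior taxonomy.
async function classifySession(transcript: string): Promise<string> {
  const message = await client.messages.create({
    model: "claude-haiku-4-5", // a Haiku-class model keeps per-session latency and cost low
    max_tokens: 1024,
    messages: [
      {
        role: "user",
        content:
          "Classify this Claude session against the 11 collaboration behaviors " +
          "(Description / Discernment / Delegation axes). Return JSON with one entry per behavior.\n\n" +
          transcript,
      },
    ],
  });
  const block = message.content[0];
  return block.type === "text" ? block.text : "";
}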

For me, the result was clarifying in an uncomfortable way. I was fast at prompting. I wasn't actually iterating on how I prompted — I was just doing more of the same behaviors I'd defaulted to in week one. The behavior gaps weren't subtle; some of the 11 categories had zero instances across dozens of sessions.

Install in Claude Code:

claude plugin marketplace add robertnowell/ai-fluency-skill-cards
claude plugin install skill-tree-ai@ai-fluency-skill-cards

Also available as an MCP server (npm install skill-tree-ai) for Cursor, VS Code, and Windsurf.
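If you go the MCP route, registration is the usual JSON server entry. For Cursor that means something like the following in .cursor/mcp.json; the server name is arbitrary, and the exact command and args should match whatever the package README specifies.

{
  "mcpServers": {
    "skill-tree-ai": {
      "command": "npx",
      "args": ["-y", "skill-tree-ai"]
    }
  }
}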

https://github.com/robertnowell/ai-fluency-skill-cards
