DEV Community

J Now


When 'getting faster' isn't the same as 'getting better'

Anthropic's February 2026 study classified 11 observable collaboration behaviors across 9,830 Claude conversations: how often you delegate multi-step reasoning, how much context you describe versus just issuing commands, whether you push back on outputs or accept the first result. They called it the AI Fluency Index.

I'd been using Claude Code daily for months. When I read the study I realized I couldn't honestly say which of those 11 behaviors I was actually using — or which ones I never touched.

So I built skill-tree: a Claude Code plugin that reads your own session history, classifies the same 11 behaviors against that population baseline, and assigns you one of seven archetype cards rendered as tarot cards with curated museum art. Then it picks the one behavior you haven't tried and surfaces it as a growth quest for your next session.

The behavior taxonomy comes from Dakan & Feller's 4D AI Fluency Framework — Description, Discernment, Delegation, and Diligence. The fourth axis (Diligence) isn't visible in chat logs, so skill-tree works across the other three.

The full pipeline — find session files, extract user messages, classify remotely via Claude Haiku on Fly.io, assign an archetype, synthesize a narrative, render, return a stable URL — takes 30–60 seconds. You can see what a result looks like at skill-tree-ai.fly.dev/fixture/illuminator.
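The internals of that pipeline aren't published in this post, but the classify-then-assign step can be sketched like this. Everything here is illustrative: the behavior list is truncated to the three visible axes, and the keyword "classifier" is a stand-in for the remote Claude Haiku call.

```python
from dataclasses import dataclass

# Three of the 4D axes are visible in chat logs (Diligence is not).
BEHAVIORS = ["delegation", "description", "discernment"]

@dataclass
class Report:
    archetype: str  # strongest observed behavior
    quest: str      # behavior to try next session

def classify(messages):
    """Stand-in for the remote classifier: naive keyword counts per message."""
    hits = {b: 0 for b in BEHAVIORS}
    for m in messages:
        text = m.lower()
        if "step" in text or "plan" in text:
            hits["delegation"] += 1
        if "context" in text or "because" in text:
            hits["description"] += 1
        if "wrong" in text or "instead" in text:
            hits["discernment"] += 1
    return hits

def assign(hits):
    # Archetype = strongest behavior; growth quest = one you never used,
    # falling back to your weakest if all were observed.
    strongest = max(hits, key=hits.get)
    untried = [b for b, n in hits.items() if n == 0]
    quest = untried[0] if untried else min(hits, key=hits.get)
    return Report(archetype=strongest, quest=quest)

messages = [
    "Plan the migration step by step",
    "Here's the context: we use Postgres because of JSONB",
]
report = assign(classify(messages))
print(report.archetype, report.quest)  # delegation discernment
```

The real plugin swaps the keyword counter for an LLM call against the population baseline, but the shape — score each behavior, pick the strongest for the card and the untouched one for the quest — is the idea the post describes.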

The growth quest persists across sessions via a SessionStart hook. Claude Code stores state at ~/.skill-tree/; Cowork uses $CLAUDE_PLUGIN_ROOT/.user-state/ because its $HOME is ephemeral.
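Choosing where to persist state when `$HOME` may be ephemeral might look like the helper below. This is a sketch, not the plugin's actual code: the `COWORK` flag is an assumed marker for the ephemeral-`$HOME` host, while the two paths follow the post.

```python
import os

def state_dir(env):
    """Pick a persistent state directory for the current host.

    Cowork's $HOME is wiped between sessions, so state must live under
    the plugin root there; plain Claude Code uses ~/.skill-tree/.
    """
    if env.get("COWORK"):  # assumed flag; detection mechanism is illustrative
        return os.path.join(env["CLAUDE_PLUGIN_ROOT"], ".user-state")
    return os.path.join(env.get("HOME", os.path.expanduser("~")), ".skill-tree")

print(state_dir({"HOME": "/home/me"}))
print(state_dir({"COWORK": "1", "CLAUDE_PLUGIN_ROOT": "/plugin"}))
```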

Install in Claude Code:

```shell
claude plugin marketplace add robertnowell/ai-fluency-skill-cards
claude plugin install skill-tree-ai@ai-fluency-skill-cards
```

Also available as an MCP server (npm install skill-tree-ai) for Cursor, VS Code, or Windsurf.
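For the MCP route, registration typically looks something like the fragment below. This is a plausible sketch assuming the package exposes a stdio server runnable via `npx`; the exact file location and schema vary by editor (e.g. `.cursor/mcp.json` in Cursor), so check the repo for the authoritative config.

```json
{
  "mcpServers": {
    "skill-tree-ai": {
      "command": "npx",
      "args": ["skill-tree-ai"]
    }
  }
}
```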

github.com/robertnowell/skill-tree

Top comments (2)

Marcus

This distinction really resonates. I manage software projects and see this play out constantly — AI makes individual tasks faster, but if we haven't redesigned the workflow, we're just compressing the same broken process.

The example I keep coming back to: AI can generate a status update in 30 seconds, but if the project itself lacks clear goals and accountability, a faster status update just surfaces the chaos more efficiently.

The real unlock isn't speed — it's using AI to build better structures (clearer agendas, more explicit assumptions, better risk documentation) that make the quality of work improve. Then speed is a byproduct.

I've been writing about this in a weekly newsletter focused on AI habits for PMs — if anyone here manages projects and wants to dig deeper on this angle, it's at buttondown.com/marcustillerman

Laura Ashaley

Good point: speed without accuracy or depth can actually hurt outcomes. In engineering and AI systems, “better” usually means more reliable, maintainable, and correct, not just faster.