In my last post I talked about how the workflow is the work, how designing the pipeline matters more than any individual prompt. I still believe that. But I've now hit the next layer of the onion: what happens when the pipeline itself becomes repetitive?
I've been using Claude Code across five projects. A React + Hono book inventory app, a legacy .NET modernization, a Swift macOS menu bar utility, a Rust audio DAW, and a Rust TUI for task visualization. Different languages, different domains, apparently very similar habits. And I didn't notice until I asked.
The Prompt That Started It
Credit where it's due. Chintan Turakhia posted a tweet ("Run this prompt frequently. You're welcome.") with a screenshot of a Claude Code prompt:
scrape all of my claude sessions on this computer. give me a breakdown
of all the things i do, things that are worth making into skills vs
plugins vs agents vs claude.md
So I did. My version had typos, his didn't, but the idea was the same: ask Claude Code to introspect on itself. Read through every session I'd ever had and find the patterns I couldn't see because I was too close to them.
What Claude Found
Claude spawned three subagents in parallel. One explored my global config and plugin structure. Another crawled across all my repos reading every CLAUDE.md and AGENTS.md file. The third dug into specific project architectures and specs. In about a minute, they'd collectively mapped my entire Claude Code ecosystem.
The findings were equal parts validating and embarrassing.
Validating because I clearly had strong workflows. I'd organically developed a feature planning pipeline: openspec proposal, beads decomposition, worktree, implement, merge, clean up. I had a sophisticated agent team pattern with typed agents and context-aware respawning. I had a review pipeline with dedicated expert agents for security, architecture, and code quality. Real workflows. Stuff that worked.
Embarrassing because I was re-stating the same ground rules in virtually every session: never use Python; always use Bun; use Claude Code's native tools instead of sed and awk; agents must commit frequently but not step on each other; all features start with an openspec. These rules were scattered across AGENTS.md files in every project, repeated inline whenever Claude forgot, and occasionally contradicted each other between repos. I was spending real tokens and real time repeating myself to an LLM. The irony of a "workflow engineer" with a messy workshop is not lost on me.
The Pivot
I looked at the analysis and immediately decided to act on it. Same session, no break:
let's do the following:
- add recommended things to global claude.md and remove them from projects
- create a new project in ~/repos/ to host all of the recommended plugins
- implement openspec-and-beads skill
- implement agent-team skill
- write a comprehensive readme doc with implemented skills and a list of todo items
Do not connect any of the new skills, I'll make a github repo and publish them there
This is where the session pivoted from analysis to execution. And here's the thing: the pivot itself followed the pattern Claude had just identified. Analysis, planning, execution. The openspec-and-beads workflow, applied to itself. Recursive in the best possible way.
What Got Built
Claude entered plan mode, loaded two meta-skills (executing-plans and writing-skills), and broke the work into three batches.
First batch was the lowest-hanging fruit: creating a global ~/.claude/CLAUDE.md with all my universal conventions, then surgically removing the duplicated rules from each project's config. Every repo got trimmed to only project-specific information. One file, in one place, read by every future Claude session. No more "never use Python" on repeat.
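For concreteness, here's a sketch of what that global file might contain. This is not my actual config, just the rules from this post paraphrased into the shape such a file takes, plain markdown in ~/.claude/CLAUDE.md:

```markdown
# ~/.claude/CLAUDE.md — read by every session, in every repo

## Tooling
- Never use Python for scripting; always use Bun.
- Prefer Claude Code's native tools over shelling out to sed/awk.

## Agents
- Commit at every logical checkpoint.
- One agent per file; don't step on other agents' work.

## Process
- Every feature starts with an openspec proposal.
```

Each project's own CLAUDE.md then only carries what's genuinely project-specific.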
Second batch was openspec integration. Instead of embedding massive documentation inline in every project, each repo got a one-liner pointing to the new skill.
Third batch was the main event: the claude-skills repository with three complete skills.
The openspec-and-beads skill formalized my most-used workflow: gather project context, scaffold a change proposal with motivation and delta specs, decompose into prioritized beads linked to an epic, track during implementation, archive when done. Before this existed as a skill, I was re-explaining the concept every time I started a new feature. Now it's a single invocation.
The agent-team skill captured something more subtle: the coordination patterns for running multiple agents in parallel. The key insight was the "cover agent" pattern: when an agent hits roughly 50% of its context window, you let it finish its current task, create a new bead for the remaining work, and respawn a fresh agent to pick it up. The skill also codifies rules I'd learned the hard way. One agent per file to avoid merge conflicts. Commit at every logical checkpoint. Shut down in reverse dependency order.
The use-bun skill was just a quick reference card. A lookup table for Bun equivalents of common Node/npm patterns. bunx tsc --noEmit instead of npx tsc, Bun.file() instead of fs.readFile. Tiny, but it eliminated a whole class of "how do I do X with Bun again?" questions.
The TDD Override
Here's a moment that stuck with me. The writing-skills meta-skill recommended full TDD for new skills: write tests, watch them fail, implement. Claude's response was pragmatic: this content was already empirically validated across hundreds of messages and 240 sessions. The "tests" had already been run, organically, over weeks of real usage. It proceeded directly to writing the specification.
That felt like the right call. TDD for a skill isn't the same as TDD for code. The validation had already happened in practice. The whole point of this exercise was to capture what was already working, not to discover new behavior. Sometimes the best test suite is "I did this 50 times and it works."
The Structure
The repo ended up looking like this:
~/repos/claude-skills/
├── README.md
├── plugin.json
├── marketplace.json
├── skills/
│   ├── openspec-and-beads/
│   │   └── SKILL.md
│   ├── agent-team/
│   │   └── SKILL.md
│   └── use-bun/
│       └── SKILL.md
└── plugins/
    └── README.md
There was a brief moment of confusion about plugin manifests (plugin.json vs marketplace.json, and what each needed), but it resolved quickly. The repo was initialized, committed, and pushed to GitHub in the same session.
What This Actually Means
Your LLM conversations are data. 240 sessions contained patterns I couldn't see because I was living inside them. Having Claude analyze its own session history was like running a profiler on your own workflow: the hot paths become obvious. If you haven't done this yet, do it. Chintan was right.
Global config eliminates an entire class of wasted tokens. Every "never use Python" I typed was burning context and attention. A single CLAUDE.md in ~/.claude/ fixed that permanently. If you're using Claude Code across multiple projects, this is probably the highest-ROI thing you can do right now.
Skills are just formalized habits. I didn't settle on the openspec-and-beads workflow during this session. I'd been doing it for weeks. The skill just wrote down what was already true. If you find yourself explaining the same process to Claude more than twice, it belongs in a skill. Not a prompt, not a CLAUDE.md entry, a skill. The distinction matters because skills carry context, structure, and sequencing that a flat config file can't.
The "cover agent" pattern deserves to be a first-class feature. The idea of monitoring an agent's context utilization and strategically respawning it before it degrades, assigning the remaining work as a new task, is something I'm doing manually. In 2026. While Anthropic ships yet another benchmark blog post. The fact that I have to write a skill to manage context windows because the tool won't do it for me is... a choice. Anthropic, if you're reading this: please steal this idea. I'm begging you.
What's Next
The TODO list from this session is still open. A beads MCP plugin so agents can query and update task status natively instead of shelling out to bd commands. A review pipeline skill for spawning expert agents and turning their findings into follow-up beads. A branch cleanup skill because the merge-delete-remove dance is identical every single time and I'm tired of typing it.
The session that produced all of this took about 22 minutes. Three subagents, a lot of pattern recognition, and a pivot from "hmm, I wonder what I actually do" to shipping a plugin repo on GitHub. Not bad for a Saturday morning.