Most Claude-Code portfolios are "here is a project I built with Claude." Mine is "here is the system I use to build any project with Claude — and here is that system, on GitHub, dog-fooded by itself." The difference is structural, not aesthetic. The first shape produces artifacts that demonstrate a specific technique. The second shape produces an artifact that demonstrates the meta-skill of designing systems like it — which is what the role above "senior engineer using AI tools" actually asks for.
The framework is claude-code-agent-skills-framework. The QA-automation surface built on top of that framework, demonstrating the pattern in production shape, is claude-code-mcp-qa-automation. The recursion is the point.
The recursive property
Every component of the framework is the skill the framework teaches. That sentence is not a slogan. It is a checklist:
- Context engineering. The CLAUDE.md in the framework demonstrates context engineering. Reading it is itself the lesson.
- Agent tool design. The .claude/skills/ directory demonstrates skill design. Opening a skill file is the example.
- Evaluation methodology. The progressive-assessment rule demonstrates evaluation design. Running it on yourself is the practice.
- Production thinking. The integrity-check scripts and hooks demonstrate production safeguards. Running them before push is the habit they encode.
- Multi-agent architecture. The HANDOVER + SYNC protocol (in a separate repo) demonstrates multi-agent coordination. Opening the convention is the lesson.
- Memory and retrieval. The two-artifact session capture (narrative + wiki) demonstrates persistent-context design. The wiki is both the example and the retrievable corpus.
Most portfolios have one of these. The framework has all six, and each one is both the teaching artifact and the demonstration artifact simultaneously. A RAG system demonstrates RAG skill. This framework demonstrates the skill of designing systems like RAG — which is the level above.
Why this shape is the strongest hirable signal
Anthropic's careers page says directly: "If you've done interesting independent research, written an insightful blog post, or made substantial contributions to open-source software, put that at the TOP of your resume." Roughly half of their technical staff do not have PhDs.
Eugene Yan's published evaluation criteria for AI engineers ask four questions: Can you handle ambiguity? Can you scope influence? Can you manage complexity? Can you execute under constraints?
The recursive-meta-engineering shape answers all four at once:
- Ambiguity. Designing a framework for coaching AI-engineering work is ambiguous by construction. There is no existing answer to the problem; the artifact is the answer.
- Influence. A framework used across multiple surfaces (the learning lab, the QA-automation pipeline, any downstream adopter) is an influence-scope artifact that cannot be faked.
- Complexity. The framework has 15 rules, 21 skills, 5 repos, a HANDOVER + SYNC protocol, and a two-artifact session-capture pattern — managed coherently, not as accumulation.
- Execution under constraints. The integrity-check scripts are what execution under constraints looks like: the framework cannot push without passing its own gates.
One artifact, four answers. That is the shape an AI Lead role hires for.
The five repos as a single coherent artifact
Public surface:
- claude-code-agent-skills-framework — the rules, skills, and meta-framework. The teaching surface.
- claude-code-mcp-qa-automation — the production-QA pipeline built on top of the framework. Demonstrates the framework in working shape against a real workload.
- claude-multi-agent-protocol — HANDOVER + SYNC convention for multi-agent coordination. The scaling shape.
- llm-rag-knowledge-graph — the session-capture + retrievable-wiki pattern as a standalone artifact.
- ai-engineer-lab (soon) — the lab that generated the prior four.
Each repo stands alone. Read as a set, they describe a coherent system: the framework, a production demonstration of the framework, the multi-agent scaling pattern, the knowledge-retention pattern, and the lab that produced all of it. No repo is decoration. Each one is a surface the pattern ships against.
This is different from a GitHub profile with twenty unconnected weekend projects. Twenty projects communicate breadth. Five interlocking projects communicate that the author builds systems, not snippets — which is the level that matters for the roles that care.
The integrity-check CI pattern
Every repo runs an integrity-check.sh before push. Five gates: claim-evidence mapping, hype-word deny list, fresh-clone demo, private-identifier grep, secret-pattern grep. Plus artifact-specific checks per repo (determinism tests for the QA-automation pipeline, offline render checks for the report generator, and so on).
The pattern is described in detail in an earlier post in this series. The fact worth surfacing here: these checks are a recursive application of the framework to the framework. The rule that says "public artifacts must evidence their claims" is itself evidenced by the integrity-check script in every repo. The rule enforces the rule.
This is what the meta-property looks like in production. The framework does not just teach the discipline. It runs the discipline against itself before shipping.
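A minimal sketch of what such a pre-push gate can look like. This is not the framework's actual script: the file name, deny list, and secret patterns below are illustrative assumptions, showing two of the five gates in runnable form.

```shell
#!/usr/bin/env sh
# integrity_check_sketch.sh -- illustrative, not the framework's real script.
set -eu

README="demo-README.md"
printf 'A framework with tests. Evidence: see tests/ directory.\n' > "$README"

# Gate: hype-word deny list. Fail the push if marketing words appear.
DENY='revolutionary|blazingly|game-changing'
if grep -Eiq "$DENY" "$README"; then
  echo "FAIL: hype word found in $README"
  exit 1
fi

# Gate: secret-pattern grep. Fail on anything that looks like an API key.
if grep -Eq 'sk-[A-Za-z0-9]{20,}|AKIA[A-Z0-9]{16}' "$README"; then
  echo "FAIL: possible secret in $README"
  exit 1
fi

echo "PASS: demo-README.md clears the sketched gates"
```

The other gates (claim-evidence mapping, fresh-clone demo, private-identifier grep) follow the same shape. The design point is that the gate runs locally before every push, so the repo cannot quietly drift past its own rules.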
What the recursion prevents
Portfolios that describe a practice without demonstrating it accumulate claims faster than they accumulate evidence. Six months in, the README says "production-ready" and "scalable" and "comprehensive" and nothing in the repo proves any of those. The drift is quiet because nobody is running a falsifiability check.
Recursive application catches this. If the framework claims "every rule carries a WHY and a retire-when clause," then the framework's own rules are checked against that claim. If the framework claims "every skill is a markdown contract, not code," then the framework's own skills are checked against that claim. The check is automatic because the same discipline the framework teaches is the discipline it is audited against.
An artifact that cannot apply its own rules to itself is an artifact whose rules are aspirational. The recursive shape is what makes them operational.
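A sketch of what that self-audit can look like in practice. The directory layout and field names below are assumptions, not the framework's documented convention; the mechanism is the point: the claim "every rule carries a WHY and a retire-when clause" is enforced by a check, not by trust.

```shell
#!/usr/bin/env sh
# rule_audit_sketch.sh -- hypothetical self-audit; the layout and field
# names are illustrative, not the framework's actual convention.
mkdir -p .claude/rules
printf '# Rule: demo\nWHY: example rationale.\nRETIRE-WHEN: example condition.\n' \
  > .claude/rules/demo-rule.md

status=0
for f in .claude/rules/*.md; do
  # Each rule file must evidence the framework's own claim about rules.
  grep -q 'WHY:'         "$f" || { echo "FAIL: $f has no WHY clause"; status=1; }
  grep -q 'RETIRE-WHEN:' "$f" || { echo "FAIL: $f has no retire-when clause"; status=1; }
done

[ "$status" -eq 0 ] && echo "PASS: all rules carry WHY and RETIRE-WHEN"
exit "$status"
```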
What this shape does NOT give you
Three honest limits:
- It does not replace the specific-domain artifact. If you are applying for an LLM-serving role, you still need an LLM-serving artifact. The meta-framework is the level above, not a substitute for the level. Both are needed.
- It is not a shortcut to seniority. Building a framework that works requires having seen enough failures to know what the failure modes are. The recursion makes the seniority visible; it does not confer it.
- It does not run itself. Someone maintains the framework, runs the audit when a new model ships, and writes the rules that earn their presence. This is work. The recursion is high-payoff; it is not effort-free.
The move you can make this week
If you are building toward an AI-engineering role, the shape I am describing is a concrete pattern you can adopt:
- Pick the one discipline you find yourself repeating across Claude Code projects — prompt hygiene, skill design, trace labeling, whatever.
- Write it down as a rule file in a public repo with a WHY tag and a retire-when clause.
- Build a second repo that uses that rule and cites it explicitly in the README.
- Run an integrity-check script that enforces the rule on both repos.
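As a concrete sketch of the second step, a starter rule file might look like the following. The file name, fields, and wording are hypothetical; only the WHY / retire-when shape comes from the convention described above.

```
<!-- rules/prompt-hygiene.md -- hypothetical example -->
# Rule: prompt-hygiene

WHY: Prompts that hard-code file paths drift silently when the repo
reorganizes; templates must reference files by role, not by path.

RETIRE-WHEN: Context is injected dynamically and templates no longer
carry file references at all.
```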
Two repos with a rule that connects them is the starter shape. Five interlocking repos is the advanced shape. The important thing is that the rule ships and the rule is audited against its own claims.
That is the recursive-meta-engineering move. It is available to anyone. What it asks for is that you take your own practice seriously enough to codify it and then seriously enough to audit it against the codification.
Aman Bhandari. Operator of an AI-engineering research lab running Claude Opus as the coaching partner, plus a QA-automation surface shipping against a real sprint workload. Public artifacts: claude-code-agent-skills-framework and claude-code-mcp-qa-automation. github.com/aman-bhandari.