Every project has its own way of doing things — migration patterns, transaction handling, deployment quirks, that one PDF template workflow nobody wants to touch. I've been writing this stuff down as markdown files with helper scripts so my AI agent can follow along. I've taken to calling this skill-driven development (building on agent skills, a convention that started with Anthropic's Claude Code and has since been adopted by several other AI coding agents).
This post walks through some skills I've put together and shows the feedback loop that makes them worth maintaining.
## What skills add over a README
A project README or an AGENTS.md can capture conventions. The skills I've been writing try to go a bit further:
- Progressive disclosure — a slim main file loads first, with detailed references pulled in only when needed. This keeps the agent's context window focused.
- Composability — skills declare dependencies and load together. A Django skill + a project overlay skill + a TDD skill compose into a complete workflow.
- Scripts — skills can ship executable scripts alongside the instructions. A script the agent calls is more reliable than a 15-step procedure the agent interprets.
- Self-improvement — when something goes wrong, the fix goes into the skill. Next session, the agent follows the updated instructions.
That last point is what makes the upkeep worthwhile: every correction accumulates in the skill, so a mistake you've fixed once rarely recurs.
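The composability point can be made concrete with a small sketch. Assuming each skill's frontmatter carries a `requires` list (as in the SKILL.md example later in this post), load order is just a topological sort. The helper below is hypothetical, not part of any real skill runner:

```python
# Hypothetical sketch: resolve the order in which skills should load,
# given each skill's `requires` list from its frontmatter.
def load_order(skills: dict[str, list[str]]) -> list[str]:
    """Topologically sort skills so dependencies load first."""
    order: list[str] = []
    visiting: set[str] = set()

    def visit(name: str) -> None:
        if name in order:
            return
        if name in visiting:
            raise ValueError(f"dependency cycle at {name}")
        visiting.add(name)
        for dep in skills.get(name, []):
            visit(dep)
        visiting.discard(name)
        order.append(name)

    for name in skills:
        visit(name)
    return order


# A Django skill + a TDD skill + a project overlay compose:
skills = {
    "ac-django": [],
    "ac-tdd": [],
    "my-project": ["ac-django", "ac-tdd"],
}
print(load_order(skills))  # dependencies load before the overlay
```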
## The SDD loop
There's a useful parallel with Test-Driven Development:
| Step | TDD | SDD |
|---|---|---|
| Write | Write the test first | Write the skill first |
| Run | Run the code | Let the agent produce the code |
| Evaluate | Test fails → fix the code | Output is wrong → fix the skill |
| Loop | Until green | Until the agent gets it right |
In TDD, you iterate on code until the tests pass. Here, you iterate on skills until the agent's output is what you wanted. The skill isn't a one-shot handoff — you keep tweaking it as you find gaps.
In practice, the two aren't separate activities — skills encode TDD as part of the workflow. The implementation skill says "write a failing test first, then implement," and the agent does both. Nobody writes tests by hand and then separately invokes a skill; the skill is what makes the agent follow TDD.
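For instance, an implementation skill might encode the loop directly in its instructions (a made-up excerpt, not a verbatim file):

```markdown
## Implementation workflow

1. Write a failing test that captures the requirement.
2. Run the suite and confirm the new test fails.
3. Implement the minimal change that makes it pass.
4. Re-run the suite; if anything fails, fix the code,
   or fix this skill file if the instructions were wrong.
```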
When tests fail, it's worth asking whether the code is wrong or the skill is incomplete. Fix the skill, re-run. Over time it adds up — each fix prevents one class of mistake, and after enough sessions you've built up a decent set of guardrails just from things that went wrong.
If you use teatree (a set of lifecycle skills I put together for multi-repo development), this loop can be automated: t3-retro runs a retrospective after each session and writes fixes into the skill files. But it works just as well manually — whenever you correct the agent, you can put that correction in a skill so it sticks.
## What a skill looks like
A skill is a markdown file (SKILL.md) with YAML frontmatter, often accompanied by scripts for the mechanical parts. Here's a simplified example of the markdown side:
```markdown
---
name: ac-django
description: Django coding conventions and best practices.
requires: []
metadata:
  version: 0.2.0
---

# Django Conventions

## Models

### Fat Models Doctrine

- Business logic belongs in models, not views or serializers.
- Use model managers for complex queries.

### Migrations

- Always use `apps.get_model()` in data migrations — never import directly.
- Set `elidable=True` on data-only migrations.
- Include both `forwards` and `backwards` functions.

## Settings

### Storage Configuration (Non-Negotiable)

- Use `STORAGES` dict (Django 4.2+), not `DEFAULT_FILE_STORAGE`.
- The deprecated setting causes silent failures on deployment.
```
Rules marked (Non-Negotiable) are things I've learned the hard way. "Always verify services respond via HTTP before declaring running" sounds obvious, but without it, the agent will say "servers started" without checking whether anything actually came up.
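The `STORAGES` rule, for example, amounts to a settings fragment like this (the backend paths shown are illustrative defaults; swap in whatever your project actually uses):

```python
# settings.py (Django 4.2+): use STORAGES instead of the deprecated
# DEFAULT_FILE_STORAGE / STATICFILES_STORAGE settings.
STORAGES = {
    "default": {
        "BACKEND": "django.core.files.storage.FileSystemStorage",
    },
    "staticfiles": {
        "BACKEND": "django.contrib.staticfiles.storage.ManifestStaticFilesStorage",
    },
}
```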
These work with any agent that can read files — Claude Code, Codex, Cursor, whatever. The agent reads the skill and follows the instructions.
## What's in the repo
I've put the generic ones in a public repository in case any of them are useful to others. Here are the ones I'd recommend looking at first.
### ac-reviewing-skills — keep your skills in shape
This is probably the most broadly useful one. It does a deep audit of your skill files — architecture, content quality, script correctness, stale cross-references, duplicated guidance. I run it periodically and it consistently finds things I missed: rules that drifted between files, references pointing at renamed sections, scripts with missing error handling. If you maintain more than a handful of skills, the audit quickly pays for itself.
### ac-django — Django conventions that models already "know" but get wrong
The agent knows Django. It doesn't know how you use Django. This skill covers the mistakes I kept correcting: outdated migration patterns (`apps.get_model()` vs direct imports), unsafe transaction handling, the `STORAGES` dict vs the deprecated `DEFAULT_FILE_STORAGE`, `post_migrate` signal timing for permission assignments. It's a reference, not a tutorial — it assumes the agent already understands the framework and just needs guardrails for the non-obvious parts.
ac-python is its companion for generic Python: style, typing, OOP patterns, testing conventions. Less opinionated, but useful as a baseline.
### ac-adopting-ruff — structured linter migration
A step-by-step playbook for replacing `black` + `isort` + `flake8` with `ruff`, one rule category per MR. It handles the things I got stuck on — conflicting formatter settings, rule equivalences between linters, the `unfixable` vs `ignore` distinction. Doing it in one big MR is painful; the skill breaks it into reviewable increments.
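The increments look roughly like this in `pyproject.toml`: start with the rule families the old tools covered, then widen `select` one MR at a time. The exact rule codes below are per-project choices, not part of the skill:

```toml
[tool.ruff]
line-length = 88

[tool.ruff.lint]
# MR 1: pyflakes ("F") + pycodestyle errors ("E"), roughly flake8's defaults
# MR 2: add "I" for import sorting (replaces isort)
# MR 3+: widen one category at a time
select = ["E", "F"]
# Rules to report but never auto-fix, so they stay visible in review
unfixable = ["F401"]
```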
### ac-openclaw — self-hosted AI assistant setup
An interactive guide to install OpenClaw on a VPS or local machine. Covers server provisioning, OS hardening, model configuration (BYOK or local Ollama), messaging channel integration (Signal, WhatsApp, Telegram, etc.), and secure remote access (Cloudflare Tunnel, Tailscale, or Caddy). It walks through every decision point — useful if you want a self-hosted personal AI assistant without piecing together a dozen tutorials.
## Everything else
| Skill | What it covers |
|---|---|
| ac-python | Generic Python: style, typing, OOP design, testing, tooling |
| ac-editing-acroforms | AcroForm PDF templates: widget geometry, content streams, font subsetting |
| ac-auditing-repos | Cross-repo infrastructure audit: harmonize pre-commit, linter, and editor configs |
| ac-writing-blog-posts | Article writing + social media promotion + dev.to publishing pipeline |
| ac-generating-slides | Markdown to presentation slides via Marp |
| ac-scaffolding-skill-repos | Scaffold new skill repos with correct config and structure |
ac-editing-acroforms deserves a mention — it came out of editing PDF form templates by hand. The internals (annotation dictionaries, appearance stream generation, widget flags) are barely documented. The agent can't figure this out from training data alone, so the skill ships with Python scripts that handle the tricky bits.
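To give a flavour of the mechanical parts those scripts deal with: widget flags and geometry come down to bit masks and coordinate boxes from the PDF specification. A stripped-down sketch, using plain dicts as stand-ins for real PDF objects (the flag values are from the spec; the helper itself is hypothetical):

```python
# Form-field flag bits from the PDF spec (/Ff entry of a field dictionary):
READ_ONLY = 1 << 0   # bit position 1
REQUIRED = 1 << 1    # bit position 2
MULTILINE = 1 << 12  # bit position 13


def make_text_widget(x, y, width, height, multiline=False):
    """Build a minimal text-field widget dictionary (plain-dict stand-in)."""
    flags = MULTILINE if multiline else 0
    return {
        "/Subtype": "/Widget",
        "/FT": "/Tx",  # text field
        "/Rect": [x, y, x + width, y + height],  # lower-left, upper-right
        "/Ff": flags,
    }


w = make_text_widget(72, 700, 200, 40, multiline=True)
```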
This blog post was written with ac-writing-blog-posts, for what it's worth.
## How to use them
Install with `npx skills`:

```shell
npx skills add https://github.com/souliane/skills --skill '*' -g -y
```
This installs all skills globally for your default agent. To install for multiple agents at once:
```shell
npx skills add https://github.com/souliane/skills --skill '*' -g -y \
  --agent claude-code codex cursor github-copilot
```
If you want the SDD feedback loop — where retrospective fixes land in files you can commit — clone the repo and symlink it into your agent's skills directory:
```shell
git clone git@github.com:souliane/skills.git ~/workspace/souliane/skills

# Example for Claude Code — adjust the target for your agent runtime
for skill in ~/workspace/souliane/skills/ac-*/; do
  ln -s "$skill" ~/.claude/skills/"$(basename "$skill")"
done
```
This points your agent at the live git checkout directly. When the agent (or you) updates a skill file, the change is immediately available in the next session and can be committed. Don't use `npx skills add` for this — it creates a managed copy that doesn't point back to your clone.
If you use teatree, its setup wizard can suggest these as companion skills for your project overlay — they're loaded automatically when you work in matching repos.
## When it helps
Skills work best when:
- You correct the agent for the same kind of mistake more than once
- Your project has conventions that diverge from common patterns
- You work across sessions and the agent keeps losing context
- You use deterministic tools (PDF editors, linters, deployment scripts) where the agent needs exact steps
- You want to share a recipe with others — a skill is a portable, self-contained package that anyone can install and use with their own agent
They're less useful for one-off tasks or when the model's defaults already match your preferences. But even something you only do once yourself might be worth writing as a skill if it's useful to someone else.
These skills reflect my own workflow — Django, Python, PDF templates, multi-repo infrastructure. They might not match yours at all. The most useful skills are probably ones you'd write yourself for your own project's conventions. These are just examples of what worked for me.