Every time I started a new project with an AI coding agent, I was doing the same thing.
Opening a blank repo. Writing CLAUDE.md from scratch. Explaining my stack again. Explaining my conventions again. Explaining what NOT to do — again. By the time I had the agent actually doing useful work, I'd already spent two hours just setting up context.
Then I'd switch projects, and repeat everything from scratch.
There had to be a better way.
## The problem with AI coding agents and new projects
If you've used Claude Code or Codex CLI, you already know how much these tools depend on good project context. The agent doesn't know your stack. It doesn't know you prefer pnpm over npm. It doesn't know that you never want raw SQL, or that every Prisma query needs a userId filter to prevent IDOR vulnerabilities, or that your commit messages follow Conventional Commits.
Without that context, you spend the first hour of every session correcting the agent instead of building.
The solution everyone discovers eventually is CLAUDE.md for Claude Code and AGENTS.md for Codex — instruction files the agent reads at the start of every session. But writing these well takes time, and the best ones include:
- Your exact tech stack with version numbers
- What NOT to do (as important as what to do)
- Security rules baked in from day one
- A testing strategy covering unit, integration, security, adversarial, and performance tests
- Module specs the agent reads before implementing anything
Writing all of that properly for every new project is genuinely tedious. So I automated it.
## What I built: AI Agents Template Builder
github.com/shadkhan/AI-agents-template-builder
It's a GitHub Template Repository with a shell script that turns the template into a fully configured agent workspace for your specific project in about 2 minutes.
Here's what you do:
```bash
# Use the template on GitHub, then clone your new repo
git clone https://github.com/shadkhan/AI-agents-template-builder
cd AI-agents-template-builder
chmod +x scripts/init-project.sh
./scripts/init-project.sh
```
The script asks you six questions:
- Project name and description
- Language (TypeScript, Python, Go, JavaScript, or custom)
- Framework and database
- Package manager
- Your modules (Notes, Tasks, Auth — whatever your app has)
- Security profile (user-facing web app, API, static site, CLI tool)
Then it generates everything.
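Under the hood, a script like this mostly boils down to collecting answers and substituting placeholders into templates. Here's a minimal, hypothetical sketch of that substitution step — the function name, placeholder tokens, and template contents are illustrative, not the repo's actual code:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for the repo's real template file (illustrative tokens).
printf '%s\n' '# {{PROJECT_NAME}}' \
              'Stack: {{TECH_STACK}}' \
              'Use {{PKG_MANAGER}}.' > CLAUDE.md.template

# Hypothetical helper: substitute the answers collected from the
# interactive prompts into a {{PLACEHOLDER}}-style template.
render_template() {
  local template="$1" output="$2" project="$3" stack="$4" pkg="$5"
  sed -e "s/{{PROJECT_NAME}}/${project}/g" \
      -e "s/{{TECH_STACK}}/${stack}/g" \
      -e "s/{{PKG_MANAGER}}/${pkg}/g" \
      "$template" > "$output"
}

# Answers would normally come from the six `read -p` prompts.
render_template CLAUDE.md.template CLAUDE.md \
  "LifeOps" "TypeScript + Fastify + Prisma" "pnpm"
cat CLAUDE.md
```

The same pass runs over every template in the repo, which is why no `{{placeholders}}` survive into the generated files.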
## What gets generated
### CLAUDE.md and AGENTS.md — filled in for your stack
Both files get your actual project name, tech stack, repo structure, and commands injected. No more {{placeholders}} — just ready-to-use instructions.
CLAUDE.md is read by Claude Code automatically. AGENTS.md is read by Codex CLI, Cursor, Aider, and Amp — it's now an open standard stewarded by the Linux Foundation with 60,000+ open-source projects using it.
### SECURITY.md — security rules the agent enforces
This is the file I'm most proud of. It covers eight layers of security baked into every project from day one:
- Input validation with Zod on every route — patterns and examples included
- IDOR prevention — every Prisma query must include `userId` from the JWT, not from the request body
- JWT verification patterns and refresh token rotation
- HTTP security headers via `@fastify/helmet`
- Rate limiting rules per endpoint type
- File upload security — MIME type allowlists, path traversal prevention, UUID-based storage
- Database security — parameterized queries, field exclusion patterns
- Logging rules — what to log, what never to log
The agent reads this alongside CLAUDE.md and applies these rules on every route it writes. I stopped getting auth-less routes and missing userId filters in PR reviews.
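You can even spot-check that last point mechanically. This is a deliberately naive illustration — not the repo's actual tooling — that flags Prisma `findMany` calls with no `userId` on the same line, as a cheap smell test before review:

```bash
#!/usr/bin/env bash
# Naive IDOR smell test (illustration only): list Prisma findMany calls
# that don't mention userId. Real queries span lines; a linter or the
# adversarial test suite is the proper check.
mkdir -p src
cat > src/notes.ts <<'EOF'
const safe = prisma.note.findMany({ where: { userId, archived: false } });
const risky = prisma.note.findMany({ where: { archived: false } });
EOF

# Prints only the unscoped query for a human to review.
grep -rn 'findMany' src | grep -v 'userId'
```

Crude, but it catches the exact class of bug that used to show up in my PR reviews.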
### docs/testing/ — five-layer testing strategy
Four files covering every testing layer the agent needs to know about:
`TESTING.md` — the master strategy. Unit tests, integration tests, security tests, adversarial tests, and performance evaluation tests. Includes the complete CI/CD pipeline config for GitHub Actions.
`VALIDATION.md` — "validation loops." Every Zod schema gets tested for both valid and invalid inputs, for every field, for every constraint. The pattern that ensures schema drift never silently lets bad data through.
`ADVERSARIAL.md` — deliberately acting like a malicious user. IDOR attacks, mass assignment, SQL injection payloads, file upload attacks, JWT forgery, property-based fuzzing with fast-check. The agent writes these tests for every new module.
`PERFORMANCE.md` — k6 baseline tests that run before every release. Catches N+1 Prisma queries and missing PostgreSQL indexes before they hit production.
### docs/specs/ — one spec stub per module
For every module you listed during setup, you get a spec file with the structure already in place: data model, API endpoints, request/response schemas, business rules, and acceptance criteria. Fill in the content, hand it to the agent, and it implements the whole module end-to-end without asking clarifying questions.
### Everything else
- `docs/PRD.md` — product requirements doc with your modules listed
- `docs/ARCHITECTURE.md` — architecture stub
- `docs/adr/ADR-001.md` — first architecture decision record
- `CONTRIBUTING.md` — with your actual commands
- `.env.example` — skeleton based on your stack (Postgres URL, JWT secrets, Redis, etc.)
- `.gitignore` — standard, generated if not present
- `.github/workflows/validate-agent-files.yml` — CI that fails if you commit `.env`, leave unfilled placeholders, or reference a spec that doesn't exist
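The checks behind a workflow like `validate-agent-files.yml` are simple enough to sketch. This is a rough, illustrative version — the repo's actual workflow may differ in detail:

```bash
#!/usr/bin/env bash
# Rough sketch of CI checks for agent files (illustrative only).
check_agent_files() {
  local status=0
  # A committed .env is an instant failure.
  [ -f .env ] && { echo "FAIL: .env is committed"; status=1; }
  # Unfilled {{PLACEHOLDER}} tokens left in agent files fail the build.
  if grep -qr '{{[A-Z_]*}}' CLAUDE.md AGENTS.md 2>/dev/null; then
    echo "FAIL: unfilled placeholders"; status=1
  fi
  return $status
}

# Demo: simulate a half-finished CLAUDE.md and watch the check fail.
printf '# Demo\n{{PROJECT_NAME}}\n' > CLAUDE.md
check_agent_files || echo "validation failed as expected"
```

The point is that the agent files are treated as real build artifacts: if they drift out of shape, CI says so before the agent ever reads them.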
## Two more scripts for ongoing use

`new-module.sh` — add a new module to an existing project:
```bash
./scripts/new-module.sh "Notes"
# → generates docs/specs/notes.spec.md with full template
# → adds Notes to CLAUDE.md module table automatically
```
Fill in the spec, then tell the agent:
"Read docs/specs/notes.spec.md and implement the Notes module end-to-end"
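The core of a generator like this is just a heredoc with the standard spec sections. A hypothetical sketch (the real script does more, like updating the CLAUDE.md module table):

```bash
#!/usr/bin/env bash
# Hypothetical core of new-module.sh: emit a spec stub with the
# standard sections for a given module name.
module="Notes"
slug=$(echo "$module" | tr '[:upper:]' '[:lower:]')
mkdir -p docs/specs
cat > "docs/specs/${slug}.spec.md" <<EOF
# ${module} Module Spec

## Data model
TODO

## API endpoints
TODO

## Request/response schemas
TODO

## Business rules
TODO

## Acceptance criteria
TODO
EOF
echo "Created docs/specs/${slug}.spec.md"
```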
`update-module-status.sh` — keep CLAUDE.md current as you ship:

```bash
./scripts/update-module-status.sh "Notes" done
./scripts/update-module-status.sh "Tasks" in-progress
The agent reads the module status table at the start of every session. Keeping it accurate prevents it from re-implementing something that already exists.
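Mechanically, a status update like this can be a one-line `sed` rewrite of the markdown table row. An illustrative sketch (GNU sed assumed; the repo's script may work differently):

```bash
#!/usr/bin/env bash
# Illustrative sketch: rewrite one row of the module status table.
# Demo table standing in for the real CLAUDE.md.
printf '%s\n' '| Module | Status |' '|---|---|' '| Notes | in-progress |' > CLAUDE.md

module="Notes" status="done"
# Match "| Notes | <anything> |" and swap in the new status.
sed -i "s/^| ${module} | .* |$/| ${module} | ${status} |/" CLAUDE.md
cat CLAUDE.md
```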
## The global file — the part most people miss
Both Claude Code and Codex support a global instruction file that applies to every project:
```bash
# For Codex — applies to every repo you work in
~/.codex/AGENTS.md

# For Claude Code
~/.claude/CLAUDE.md
```
Put your universal personal preferences there once — pnpm over npm, no `any` in TypeScript, Conventional Commits — and never repeat them in any project file again. Project-level files inherit and can override.
This is the layer that truly makes the workflow feel automatic. New project, run the script, and the agent already knows your personal defaults before it even reads the project files.
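As a concrete illustration, a global file might look like this — the contents are examples of personal defaults, not anything the template generates:

```markdown
# Global agent instructions

## Tooling
- Use pnpm, never npm or yarn.
- Commit messages follow Conventional Commits.

## TypeScript
- Never use `any`; prefer explicit types or `unknown`.

## General
- Ask before adding a new dependency.
```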
## Works with every major coding agent
| Agent | File it reads |
|---|---|
| Claude Code | `CLAUDE.md` (falls back to `AGENTS.md`) |
| Codex CLI | `AGENTS.md` |
| Cursor | `AGENTS.md` + `.cursor/rules` |
| Aider | `AGENTS.md` |
| Amp | `AGENTS.md` |
| GitHub Copilot | `.github/copilot-instructions.md` |
The template generates CLAUDE.md and AGENTS.md. For Copilot, symlink or copy AGENTS.md content into .github/copilot-instructions.md.
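One way to do that without duplicating content is a symlink — a plain copy works too if your tooling dislikes symlinks:

```bash
# Reuse AGENTS.md for Copilot by symlinking it into the path Copilot
# reads. The relative target resolves from inside .github/.
mkdir -p .github
ln -sf ../AGENTS.md .github/copilot-instructions.md
```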
## The workflow in practice
Once this is set up, my per-project flow looks like this:
Day 1:
clone template → run init-project.sh → fill TODO sections → commit
About 20 minutes total.
Per module:
1. `./scripts/new-module.sh "ModuleName"`
2. Fill `docs/specs/modulename.spec.md` (15 min)
3. Tell the agent: "Read docs/specs/modulename.spec.md and implement end-to-end"
4. Review diffs → merge
5. `./scripts/update-module-status.sh "ModuleName" done`
Before release:
```bash
pnpm test:security
pnpm test:adversarial
pnpm test:perf
```
The agent handles implementation. I handle architecture decisions and code review. The instruction files make sure we're always speaking the same language.
## Try it
The repo is at github.com/shadkhan/AI-agents-template-builder.
Click "Use this template" to create your own copy, run the init script, and you're set up in under two minutes.
If you find it useful, a GitHub star helps other developers find it. And if you add support for new stacks or languages in the init script, PRs are very welcome — the more stacks covered, the more useful it gets for everyone.
Built this while setting up LifeOps, my personal organization platform that I'm also open-sourcing. More on that soon.