Think about it. Your AI agent refactors code across 12 files, makes architectural decisions, tries three approaches before settling on one, and fixes a subtle race condition. Then it commits with the message "fix bug" and all that context is gone forever.
We'd never accept this from a human developer. Why do we accept it from the tool that's increasingly writing most of our code?
AI agents need their own project management
I'm not talking about giving Claude a Jira API key. I'm talking about project management that's native to how agents work: plain files, git commits, and context windows.
That's what I built. Claude Project Tracker is Jira for AI agents — a .project/ folder you drop into any repo where your AI agent creates its own tickets, documents every decision, maintains a wiki, and closes issues when done.
No database. No SaaS. No monthly fee. Just files in git.
What it looks like in practice
You: "Fix the login timeout bug"
Claude Code:
→ Creates PROJ-1: "Fix login timeout bug"
→ Status: in-progress
→ Comment: "Investigating auth.ts — timeout at line 42"
→ Comment: "Root cause: token refresh waits for expired session"
→ Comment: "Tried: extending timeout. Failed: masks the real issue"
→ Comment: "Fix: early return on expired tokens. Modified auth.ts, middleware.ts"
→ Updates wiki: Solutions → Authentication
→ Commits: fix(auth): resolve login timeout [PROJ-1]
→ Status: done
That's not me writing comments. That's the agent documenting its own work as it goes.
Three weeks later when the bug resurfaces:
You: "Login is broken again, check PROJ-1"
Claude Code:
→ Reads full ticket history
→ Knows the root cause, what was tried, what worked
→ Picks up with complete context
Why not just use Jira / Linear / GitHub Issues?
Because those tools are designed for humans, not agents.
The mismatch:
| Human PM tools | Agent-native PM |
|---|---|
| Requires API integration | Reads/writes files directly |
| SaaS with auth, permissions, billing | Plain files in your repo |
| Designed for manual updates | Agent updates automatically |
| Separate from code | Lives next to code |
| Merge conflicts on shared state | Append-only, conflict-free |
Claude Code's superpower is that it reads and writes files. It doesn't need a REST API to create a ticket — it creates a folder with a JSON file and a Markdown description. It doesn't need a webhook to add a comment — it writes a new file to the comments directory.
The file system is the API.
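To make that concrete, here's a minimal sketch of what "the file system is the API" means: creating a ticket and commenting on it are just file writes. The folder layout follows the structure described in the next section, but the exact JSON field names are my assumption, not the project's documented schema.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Sketch: a "create ticket" API call is a mkdir plus two file writes.
function createIssue(root: string, key: string, title: string, description: string): string {
  const dir = path.join(root, ".project", "issues", key);
  fs.mkdirSync(path.join(dir, "comments"), { recursive: true });
  // issue.json holds the machine-readable state (illustrative fields)
  fs.writeFileSync(
    path.join(dir, "issue.json"),
    JSON.stringify({ key, title, status: "in-progress", priority: "medium", labels: [] }, null, 2),
  );
  // description.md holds the human-readable request
  fs.writeFileSync(path.join(dir, "description.md"), description);
  return dir;
}

// Sketch: a "comment" API call appends a new file; existing files are never rewritten.
function addComment(issueDir: string, body: string): string {
  const commentsDir = path.join(issueDir, "comments");
  // Zero-padded sequence numbers keep comments in order when sorted lexicographically
  const next = fs.readdirSync(commentsDir).length + 1;
  const file = path.join(commentsDir, `${String(next).padStart(3, "0")}.json`);
  fs.writeFileSync(file, JSON.stringify({ body, at: new Date().toISOString() }, null, 2));
  return file;
}
```

Because each comment is a fresh file rather than an edit to shared state, two agents commenting on the same issue can't clobber each other's writes.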
The architecture
```
.project/
├── config.json
├── issues/
│   └── PROJ-1/
│       ├── issue.json        # status, priority, labels
│       ├── description.md    # what was requested
│       └── comments/
│           ├── 001.json      # "Investigating auth.ts..."
│           ├── 002.json      # "Root cause found..."
│           └── 003.json      # "Fixed. 2 files modified."
├── wiki/
│   ├── _index.json           # page tree structure
│   └── pages/
│       ├── steering.md       # your rules for the agent
│       └── solutions-auth.md # auto-generated docs
└── boards/
    └── default.json          # kanban column config
```
Why this structure:
- One folder per issue — atomic git operations, clean diffs
- Comments as individual files — append-only means zero merge conflicts, even with multiple agents
- Wiki as Markdown — equally readable by humans and AI
- No database — works offline, no setup, no migrations, no backups
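The payoff of this layout is that "reads full ticket history" from the earlier example is just a directory walk. A sketch of what that might look like, again assuming illustrative field names rather than the project's actual schema:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

interface Ticket {
  issue: any;          // parsed issue.json
  description: string; // description.md
  comments: any[];     // parsed comment files, in order
}

// Sketch: reconstruct a ticket's complete history from the .project/ folder.
function readTicket(root: string, key: string): Ticket {
  const dir = path.join(root, ".project", "issues", key);
  const issue = JSON.parse(fs.readFileSync(path.join(dir, "issue.json"), "utf8"));
  const description = fs.readFileSync(path.join(dir, "description.md"), "utf8");
  // Zero-padded filenames (001.json, 002.json, ...) sort into chronological order
  const comments = fs.readdirSync(path.join(dir, "comments"))
    .sort()
    .map((f) => JSON.parse(fs.readFileSync(path.join(dir, "comments", f), "utf8")));
  return { issue, description, comments };
}
```

An agent asked to "check PROJ-1" can load this straight into its context window; no API client, no pagination, no auth.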
Steering files: your agent's operating manual
This is where it gets interesting. You create wiki pages called "steering files" that the agent reads before every task:
**Coding Standards:** Use TypeScript strict mode. Prefer Tailwind. No inline styles.

**Architecture:** All API endpoints return a `{ data, error }` envelope. Use Zod for validation.

**Conventions:** snake_case for DB columns. camelCase for JS. Components in PascalCase.
Your agent follows these automatically. No more repeating yourself every session. No more "I told you yesterday to use strict mode."
The web UI
Agents work in terminals. Humans like boards. So it ships with both.
The web UI gives you:
- Kanban board — drag-and-drop across columns, time filter on completed work
- List view — sortable, filterable, paginated
- Wiki editor — tree navigation, search, Markdown editing
- Skill manager — create slash commands from the browser
```bash
deno run --allow-net --allow-read --allow-write --allow-env server.ts
# → http://localhost:8000
```
Slash commands
The tracker installs as Claude Code skills:
| Command | What it does |
|---|---|
| `/track-work` | Start a task with full audit trail |
| `/create-issue` | Create a new tracked issue |
| `/standup` | Summarize recent activity across all issues |
| `/review-ticket` | Read a ticket's complete history |
| `/wiki-update` | Create or update a wiki page |
| `/document-completion` | Auto-document finished work in the wiki |
One-line install
From inside any git repo:
```bash
curl -sL https://raw.githubusercontent.com/rpostulart/Claude-Project-Tracker/main/init.sh | bash
```
That creates the .project/ folder, installs Claude Code skills, and generates a CLAUDE.md that tells Claude to track everything automatically.
The bigger picture
Here's what I think is happening: AI agents are becoming the primary producers of code. Not assistants. Not copilots. Producers.
And producers need project management. Not human PM tools with AI bolted on — but PM tools designed from the ground up for how agents work.
That means:
- Files over APIs — agents think in files
- Git over databases — the audit trail already exists
- Context over dashboards — agents need ticket history in their context window, not a pretty UI
- Append-only over CRUD — agents working in parallel need conflict-free writes
Claude Project Tracker is a first step. It works with Claude Code today. The file format is open, and there's no reason Cursor, Aider, Codex, or any future agent couldn't adopt the same structure.
Try it
GitHub: github.com/rpostulart/Claude-Project-Tracker
MIT license. Open source. Feedback welcome. Spread the word.
I'd especially love to hear:
- What would you want your AI agent to track that it currently doesn't?
- What's the biggest "I wish I knew what Claude did" moment you've had?
Drop a comment or open an issue on GitHub.