## The problem
Your AI agent just finished a 45-minute coding session.
It edited 15 files, made 4 commits, and called the API 23 times.
What did it cost?
No idea. You’ll find out on your next invoice, aggregated with everything else.
## The fix
I built **codesession-cli** — a CLI that teaches your AI agent to track its own costs.
## Setup (30 seconds)
```bash
npm i -g codesession-cli
clawhub install codesession
```

Start a new OpenClaw session. Done.
## What happens

The agent reads the skill instructions and handles everything:
```bash
cs start "Fix auth bug" --close-stale --json
# → agent works on your task normally…
cs log-ai -p anthropic -m claude-sonnet-4 --prompt-tokens 8000 --completion-tokens 2000
# → logs its own token usage, cost auto-calculated
cs end -n "Fixed bug, added tests" --json
# → Session: 9m • 3 files • 1 commit • $0.15
```
You don’t touch anything. Just review the data when you’re curious:

```bash
cs stats
# Total: 50 sessions • 8h 34m • $47.23 AI cost
```
## What’s tracked
- Token spend per API call (17+ models with built-in pricing)
- File changes and git commits (via `git diff`, not a watcher)
- Session duration and cost summary
- Annotations: the agent leaves breadcrumb notes as it works
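Built-in pricing suggests the cost math is simply token counts times a per-model, per-million-token rate. A minimal sketch of that idea; the rates and table below are placeholders, not the CLI's actual pricing data:

```typescript
// Hypothetical per-model rates in USD per million tokens
// (illustrative numbers, NOT codesession-cli's real pricing table).
const PRICING: Record<string, { prompt: number; completion: number }> = {
  "claude-sonnet-4": { prompt: 3, completion: 15 },
};

// cost = promptTokens/1e6 * promptRate + completionTokens/1e6 * completionRate
function estimateCost(
  model: string,
  promptTokens: number,
  completionTokens: number,
): number {
  const rate = PRICING[model];
  if (!rate) throw new Error(`no pricing for model: ${model}`);
  return (
    (promptTokens / 1e6) * rate.prompt +
    (completionTokens / 1e6) * rate.completion
  );
}

// The log-ai example above (8 000 prompt + 2 000 completion tokens)
// would come out to (0.008 * 3) + (0.002 * 15) = 0.054 at these rates.
```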
## Tech

- TypeScript
- SQLite (WAL mode)
- Local storage (`~/.codesession`)
- JSON output on every command; parse `schemaVersion` for forward compatibility
- Structured errors: `{ error: { code, message } }`, always exit 1
- Sessions scoped by git root, not cwd
- MIT licensed
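Since every command emits JSON and errors always arrive as `{ error: { code, message } }`, a consumer can branch on shape before trusting the payload. A sketch under those two guarantees; the `durationMin` field and the assumption that the current schema version is `1` are mine, not documented:

```typescript
interface CsError {
  error: { code: string; message: string };
}

// Parse output from a `cs … --json` invocation: surface structured
// errors as exceptions, and check schemaVersion before reading fields.
function parseCsOutput(raw: string): Record<string, unknown> {
  const data = JSON.parse(raw);
  if (data && typeof data === "object" && "error" in data) {
    const { code, message } = (data as CsError).error;
    throw new Error(`cs failed [${code}]: ${message}`);
  }
  // Assumed current version; a higher value would mean a newer CLI
  // whose fields this consumer may not understand.
  if (data.schemaVersion !== 1) {
    console.warn(`unexpected schemaVersion: ${data.schemaVersion}`);
  }
  return data;
}
```

Because the error envelope is uniform and the exit code is always 1 on failure, a wrapper script never has to guess which commands can fail or how.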
## Links

- GitHub: brian-mwirigi/codesession-cli
- npm: codesession-cli
- ClawHub: codesession
Looking for early adopters.
What cost queries would you want after a month of data?