AgentZ
How I Track AI Coding Costs Across 4 Platforms with One Tool

Last month I got a surprise. My credit card statement showed I'd spent significantly more on AI coding tools than I thought. The problem wasn't that I was using too much; the problem was that I had no idea how much I was using. I had Claude Code open in one terminal, Gemini CLI in another, and Cursor handling a side project. Three separate tools, three separate billing pages, zero unified view.

So I built one.

The Problem with AI Coding Cost Visibility

Every major AI coding tool stores session data locally. Claude Code writes JSONL files to ~/.claude/projects/. Gemini CLI logs to ~/.gemini/tmp/. Codex uses ~/.codex/sessions/. Cursor keeps everything in a SQLite database.

None of them talk to each other. And if you use more than one, which many developers do these days, you're left doing mental arithmetic across four separate dashboards.

Existing monitoring tools weren't much help either. The most popular ones only support Claude Code. That's fine if you're a Claude-only shop, but not if you're like me and reach for whatever tool fits the task.

The Solution: cc-statistics

cc-statistics is an open-source tool I built to solve exactly this. It reads local session files from all four platforms, aggregates everything into a unified view, and surfaces it through three interfaces: a CLI, a web dashboard, and a native macOS menu-bar app.

The install is a single command:

uv tool install cc-statistics

Zero dependencies. Pure Python standard library. The macOS app ships as a pre-built binary: no Xcode, no local compilation.


πŸ–ΌοΈ Screenshots

πŸ–₯️ macOS App β€” Dark Mode πŸ–₯️ macOS App β€” Light Mode
macOS App Dark macOS App Light
πŸ“Š Usage Quota Predictor πŸ”΄ Max Usage Reached
Usage Quota Predictor Max Usage
πŸ” Session List πŸ”§ Tool Call Analytics
Session List Tool Call Analytics
⚑ Skill / MCP Analytics πŸ’¬ Share Session Messages
Skill Analytics Share Messages
🌐 4-Platform Unified View βš™οΈ Settings
Multi-Platform Settings
πŸ”” Notifications 🌐 Web Dashboard
Notifications Web Dashboard

Multi-Platform Support

The core differentiator is platform breadth. Here's what cc-statistics reads and where:

Platform      Local data path
Claude Code   ~/.claude/projects/<project>/<session>.jsonl
Gemini CLI    ~/.gemini/tmp/<project>/chats/<session>.json
Codex CLI     ~/.codex/sessions/*.jsonl
Cursor        ~/Library/Application Support/Cursor/User/globalStorage/state.vscdb

The depth of data varies by platform. Claude Code gives you the most (input tokens, output tokens, cache read/write, tool calls, model name), while Cursor currently provides session counts and code-change stats. But even partial data is better than no data, and the picture fills in as you use each tool.
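To make the table concrete, here's a minimal sketch of totaling tokens from a Claude Code-style JSONL transcript. The nested field names (`message.usage.input_tokens` and so on) are assumptions for illustration; check the actual files under ~/.claude/projects/ for the real schema:

```python
import json

def sum_session_tokens(jsonl_text):
    """Sum input/output tokens across entries in one session transcript.

    Field names are illustrative, not a documented schema.
    """
    totals = {"input": 0, "output": 0}
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        usage = entry.get("message", {}).get("usage", {})
        totals["input"] += usage.get("input_tokens", 0)
        totals["output"] += usage.get("output_tokens", 0)
    return totals

# Two fake transcript entries standing in for a real session file
sample = "\n".join([
    json.dumps({"message": {"usage": {"input_tokens": 1200, "output_tokens": 300}}}),
    json.dumps({"message": {"usage": {"input_tokens": 800, "output_tokens": 150}}}),
])
print(sum_session_tokens(sample))  # {'input': 2000, 'output': 450}
```

The same loop, run over every session file per platform, is all a unified total really requires.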

Why does a unified view matter? Because your real cost is the sum. If Claude Code is your heavy lifter but Gemini CLI is where you do exploratory research and Cursor handles autocomplete all day, only looking at one bill means you're flying blind on the other two. cc-statistics adds them up.


Native macOS Menu Bar App

This is the feature I use most. Run cc-stats-app once, and it lives in your menu bar permanently.

The status bar shows a Claude logo alongside your current day's token count and estimated cost in real time. When you're approaching your self-set daily limit, it turns red. No checking dashboards, no mental math; it's just there, like a fuel gauge.

Right-clicking the status bar item lets you switch between display modes: Token+Cost, Token only, Cost only, or Session count. Press Cmd+Shift+C from anywhere to open the full dashboard panel.

The dashboard itself is native SwiftUI; it doesn't feel like an Electron app pretending to be native. You get:

  • A source switcher: view stats per platform or aggregate all four
  • Dark/light/system theme
  • Export to JSON or CSV (auto-saves to Desktop and opens immediately)
  • Process manager showing memory usage of all running Claude processes
  • Settings for launch at login, language, and update checks

The Clawd Pixel-Art Mascot

One of the more delightful features is Clawd, a pixel-art mascot that lives alongside your token counter in the status bar. It's not just decorative: Clawd reacts to what Claude Code is actually doing. When an agent task is running, the animation changes. When Claude is idle, it switches to a different state. When a task completes, there's a brief happy animation.

The sprites come from the clawd-on-desk project, which hooks into Claude Code's process state to drive animations. Seeing a tiny pixel character respond to your AI actually working is a small thing, but it makes the experience feel less like staring at a text terminal.


CLI Power Features

The CLI is where cc-statistics earns its keep for power users. A few commands I use every day:

# What did I spend in the last 7 days, across all platforms?
cc-stats --all --since 7d

# List every project cc-statistics has found (all platforms)
cc-stats --list

# Drill into the last 3 sessions for a specific project
cc-stats my-project --last 3

# Compare token spend across projects in the last week
cc-stats --compare --since 1w

# Open the web dashboard in a browser
cc-stats-web

The --compare flag is particularly useful at the end of a sprint. It shows a side-by-side breakdown of token consumption per project, which quickly reveals where the expensive work happened.
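The roll-up behind a comparison like this is simple to sketch: sum tokens per project, then sort descending. The record shape here is hypothetical, not cc-statistics' internal format:

```python
from collections import defaultdict

def compare_projects(session_records):
    """Total token spend per project, highest first,
    from (project, tokens) session records."""
    totals = defaultdict(int)
    for project, tokens in session_records:
        totals[project] += tokens
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

records = [("api-server", 120_000), ("docs-site", 30_000), ("api-server", 80_000)]
print(compare_projects(records))  # [('api-server', 200000), ('docs-site', 30000)]
```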

Session Search and Resume

This one came out of a real frustration: I'd have a productive Claude Code session, close the terminal, and then realize two days later that I needed to pick up exactly where I left off, but I couldn't remember which session it was.

cc-statistics indexes session content and lets you search by keyword:

cc-stats --search "database migration"

It returns matching sessions with timestamps and a ready-to-run resume command:

claude --resume <session-id>

One copy-paste and you're back in context. This works across all historical sessions, not just recent ones.
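Conceptually, the search is a keyword scan over indexed session text. A toy version, with an in-memory map standing in for the real on-disk JSONL index:

```python
def search_sessions(sessions, keyword):
    """Return ids of sessions whose transcript mentions the keyword.

    `sessions` maps session-id -> list of message strings; a real
    implementation would walk the per-platform session files instead.
    """
    kw = keyword.lower()
    return [sid for sid, messages in sessions.items()
            if any(kw in m.lower() for m in messages)]

sessions = {
    "abc-123": ["let's plan the database migration", "ok, writing SQL"],
    "def-456": ["refactor the parser"],
}
for sid in search_sessions(sessions, "database migration"):
    print(f"claude --resume {sid}")  # prints: claude --resume abc-123
```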


Beyond Cost Tracking

Token counts and dollar amounts are the obvious metrics. But cc-statistics surfaces several others that I've found surprisingly useful.

Tool Call Analysis

For Claude Code users, every tool call is logged: file reads, shell commands, web searches, and all your MCP tools. cc-statistics aggregates these and shows you a Top 10 breakdown by tool name, with Skill-type and MCP tools expanded to their specific names rather than grouped together.

This turns out to be a useful debugging signal. If mcp__filesystem__read_file is dominating your tool calls, maybe your agent is doing too many redundant reads. If a specific MCP tool is called 200 times in a session, you have a concrete number to reason about.
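The Top 10 aggregation is essentially a counter over tool names. A minimal sketch (the tool names are examples):

```python
from collections import Counter

def top_tools(tool_calls, n=10):
    """Rank tool names by call count; MCP tools keep their full
    names rather than being grouped under one bucket."""
    return Counter(tool_calls).most_common(n)

calls = ["Read", "Bash", "mcp__filesystem__read_file",
         "mcp__filesystem__read_file", "Read", "Read"]
print(top_tools(calls, 3))
# [('Read', 3), ('mcp__filesystem__read_file', 2), ('Bash', 1)]
```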

Code Change Statistics

Using git log --numstat on your project directories, cc-statistics tracks lines added and deleted by language per session. You get a breakdown like: "This week, 3,400 lines of TypeScript and 800 lines of Python were written in AI-assisted sessions."

This is useful for understanding where AI coding is actually generating output, not just consuming tokens. A session with high token usage but zero code changes probably means you spent the time in discussion or planning, which is fine, but worth knowing.
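Parsing `git log --numstat` output is straightforward: each row is added lines, deleted lines, and a path, tab-separated, with `-` in place of counts for binary files. A sketch that groups by file extension rather than mapping extensions to language names (a simplification of what the tool does):

```python
from collections import defaultdict

def lines_by_extension(numstat_output):
    """Aggregate added/deleted line counts per file extension
    from `git log --numstat` rows (added<TAB>deleted<TAB>path)."""
    totals = defaultdict(lambda: [0, 0])
    for row in numstat_output.splitlines():
        parts = row.split("\t")
        if len(parts) != 3:
            continue
        added, deleted, path = parts
        if added == "-":  # binary files report "-" instead of counts
            continue
        ext = path.rsplit(".", 1)[-1] if "." in path else "(none)"
        totals[ext][0] += int(added)
        totals[ext][1] += int(deleted)
    return dict(totals)

sample = "120\t8\tsrc/app.ts\n40\t2\tscripts/build.py\n15\t0\tsrc/util.ts"
print(lines_by_extension(sample))  # {'ts': [135, 8], 'py': [40, 2]}
```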

AI Time vs. User Time

cc-statistics measures how long Claude is "thinking" versus how long it spends waiting for you to respond. This ratio tells you something about your workflow efficiency. A high AI time percentage in short sessions usually means you're prompt-and-wait; a more balanced ratio might mean you're doing more interactive back-and-forth.

I've found this metric helpful for spotting sessions where I should have batched more work into a single prompt rather than going back and forth ten times.
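One plausible way to compute such a ratio is to attribute each gap between messages to whoever the other side was waiting on: the gap after a user turn is AI thinking time, the gap after an assistant turn is user time. This event schema is illustrative, not cc-statistics' actual internals:

```python
from datetime import datetime

def ai_time_ratio(events):
    """Fraction of session wall-clock time spent waiting on the AI,
    given (timestamp, role) events in chronological order."""
    ai = user = 0.0
    for (t0, role), (t1, _) in zip(events, events[1:]):
        gap = (t1 - t0).total_seconds()
        if role == "user":   # after a user turn, the AI is working
            ai += gap
        else:                # after an assistant turn, the user is typing
            user += gap
    total = ai + user
    return ai / total if total else 0.0

ts = datetime.fromisoformat
events = [(ts("2025-01-01T10:00:00"), "user"),
          (ts("2025-01-01T10:00:30"), "assistant"),
          (ts("2025-01-01T10:02:30"), "user"),
          (ts("2025-01-01T10:03:00"), "assistant")]
print(round(ai_time_ratio(events), 2))  # 0.33
```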


Reports and Notifications

Automatic Weekly and Monthly Reports

cc-stats --report week
cc-stats --report month

These generate Markdown files summarizing your usage across all platforms for the period: total tokens, cost by model, most active projects, top tool calls, and code changes by language. Useful for personal accountability, or for justifying your AI tooling budget to a manager.
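A report like this is ultimately just Markdown rendered from aggregated totals. A minimal sketch, with hypothetical section names and numbers:

```python
def render_report(period, totals):
    """Render a tiny Markdown usage report. The sections mirror the
    ones described above, not the tool's exact output format."""
    lines = [f"# AI Usage Report ({period})", ""]
    lines.append(f"- Total tokens: {totals['tokens']:,}")
    lines.append("- Cost by model:")
    for model, cost in totals["cost_by_model"].items():
        lines.append(f"  - {model}: ${cost:.2f}")
    return "\n".join(lines)

report = render_report("week", {
    "tokens": 2_400_000,
    "cost_by_model": {"claude-sonnet": 12.40, "gemini-pro": 3.10},
})
print(report)
```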

Webhook Push

If you're on a team, you can push reports to a channel:

cc-stats --notify https://hooks.slack.com/services/your-webhook-url

Slack, Feishu, and DingTalk webhooks all work. I have a weekly cron job that pushes a summary to a personal Slack channel so I have a permanent record. You could use this for team-wide visibility into AI tooling costs, or just for your own history.
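Slack's incoming webhooks accept a JSON body with a `text` field; Feishu and DingTalk use slightly different payload shapes, so treat this as a Slack-flavored sketch. The POST helper is defined but left uncalled so the example stays offline:

```python
import json
from urllib import request

def build_payload(summary):
    """A plain-text payload in Slack incoming-webhook format."""
    return {"text": summary}

def push(webhook_url, payload):
    """POST the payload as JSON (not executed in this demo)."""
    req = request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)

payload = build_payload("This week: 2.4M tokens, est. $15.50 across 3 platforms")
print(json.dumps(payload))
```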


Usage Alerts

You can set daily and weekly spending limits, and cc-statistics will fire a macOS system notification when you're close to or over the threshold. This lives in the app's Settings panel.

The notification goes through the standard macOS notification system, so it respects your Focus modes and doesn't require the app to be in the foreground. Combined with the status bar turning red, it's hard to miss.
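On macOS, a notification like this can be posted from Python by shelling out to `osascript`; whether cc-statistics does exactly this is an assumption on my part. A sketch that builds the command without running it by default, so it's safe to inspect on any platform:

```python
import subprocess

def notify(title, message, run=False):
    """Build (and optionally run) an osascript command that posts a
    Notification Center alert. run=False keeps this a dry run."""
    script = f'display notification "{message}" with title "{title}"'
    cmd = ["osascript", "-e", script]
    if run:
        subprocess.run(cmd, check=True)  # only works on macOS
    return cmd

cmd = notify("cc-statistics", "Daily spend is at 90% of your limit")
print(cmd[2])
```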


Web Dashboard

cc-stats-web

This opens a browser-based, dark-themed dashboard, useful on non-Mac platforms or when you want a larger view than the menu bar panel provides. It shows the same data as the native app: sessions, tokens by model, daily trends, tool call breakdowns.


Getting Started in 3 Steps

1. Install

# uv (fastest)
uv tool install cc-statistics

# or pipx
pipx install cc-statistics

# or Homebrew
brew install androidZzT/tap/cc-statistics

2. Run your first report

cc-stats --all --since 7d

This scans all detected platforms and prints the last 7 days of activity. If you only use Claude Code, it'll show Claude Code. If you have Gemini CLI sessions too, those appear automatically.

3. Launch the menu bar app (macOS)

cc-stats-app

Enable "Launch at Login" in Settings once, and you'll have a persistent cost gauge from then on.


What's Next

A few things on the roadmap that I'm working toward:

  • Rate limit progress bar: a visual indicator of how close you are to Claude's session limits, similar to what claude-usage does for Pro/Max subscribers
  • Hourly activity distribution: see which hours of the day you use AI coding tools most, and whether those overlap with peak throttling windows
  • Budget forecasting: project end-of-month spend based on current usage velocity
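A budget forecast of the kind described in the last bullet can be as simple as linearly extrapolating the month-to-date daily average; this is a sketch, not the tool's planned algorithm:

```python
import calendar
from datetime import date

def forecast_month_spend(spent_so_far, today=None):
    """Project end-of-month spend from the current daily average."""
    today = today or date.today()
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    daily_rate = spent_so_far / today.day
    return daily_rate * days_in_month

# $45 spent by March 10 -> $4.50/day * 31 days
print(round(forecast_month_spend(45.0, date(2025, 3, 10)), 2))  # 139.5
```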

Links

If you're using multiple AI coding tools and want a single place to see what they're actually costing you, give it a try. And if something doesn't work or you want a platform added, open an issue β€” the project is actively maintained.
