If you’re building with OpenAI, Anthropic, or other LLM APIs, one thing becomes obvious fast: costs can get out of hand quietly.
A few extra retries, larger prompts, a background job you forgot to stop, or a script looping longer than expected, and suddenly your AI bill is much higher than you thought it would be.
That’s the exact reason I built AI Cost Tracker.
It’s a lightweight CLI tool that estimates how much a prompt will cost before you send it to an AI model.
What it does
AI Cost Tracker is intentionally simple:
It takes a prompt
Roughly estimates the token count
Applies pricing for popular AI models
Prints the estimated cost clearly in the terminal
Keeps a running session total so you can track spend as you work
It’s not meant to replace full billing or analytics platforms. It’s meant to give developers a fast, practical “sanity check” while building.
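The core idea is small enough to sketch in a few lines. Note that the heuristic, model names, and prices below are illustrative assumptions for this post, not the tool's actual tables or live rates:

```javascript
// Illustrative sketch of a pre-send cost estimate.
// Prices are per 1M input tokens and are placeholders, not live rates.
const PRICING = {
  "gpt-4o": 2.50,
  "claude-sonnet": 3.00,
};

// Rough heuristic: English text averages ~4 characters per token.
function estimateTokens(prompt) {
  return Math.ceil(prompt.length / 4);
}

function estimateCost(prompt, model) {
  const tokens = estimateTokens(prompt);
  const pricePerMillion = PRICING[model];
  return { tokens, cost: (tokens / 1_000_000) * pricePerMillion };
}

const { tokens, cost } = estimateCost("Summarize this support ticket...", "gpt-4o");
console.log(`~${tokens} tokens, ~$${cost.toFixed(6)}`);
```

A session total is then just the sum of these per-prompt estimates as you work.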
Why this matters
Most AI cost problems don’t come from one huge request.
They come from:
lots of small requests
prompts growing over time
testing multiple models
scripts running longer than expected
hidden costs during iteration
When you can see the estimated cost before a prompt goes out, you make better decisions immediately.
That changes behavior.
Example use case
Imagine you’re iterating on a customer support workflow, a content pipeline, or an internal AI assistant.
You start small. Then you add:
more context
larger system prompts
more history
retrieval results
extra retries
Now every request is bigger than it was a few hours ago.
AI Cost Tracker gives you a quick checkpoint in the terminal before those changes silently become a billing problem.
Install from npm
You can run it directly with npx:
npx @remova/ai-cost-tracker
Or install it globally:
npm install -g @remova/ai-cost-tracker
ai-cost-tracker
How it works
Once it starts, you can paste or type a prompt and get:
estimated tokens
estimated prompt cost
running session total
It also supports simple commands like:
/models to list supported models
/model to switch models
/total to show your session total
/quit to exit
That makes it useful as a quick local budgeting tool while you’re testing prompts or workflows.
Important note
This tool uses a rough estimation approach, not exact provider-side token accounting.
That’s deliberate.
The goal is speed and visibility, not perfect financial reconciliation. For most developer workflows, a fast pre-send estimate is far better than no estimate at all.
If you need strict enterprise controls like hard budgets, per-user limits, usage governance, or audit trails, you’ll want something more robust around your AI stack.
Where to get it
npm package:
npm install -g @remova/ai-cost-tracker

or

npx @remova/ai-cost-tracker

Final thought
The simplest way to reduce surprise AI bills is to make costs visible earlier.
That’s what this tool does.
If you’re building AI products and want stronger controls around usage, security, and governance, visit www.remova.org
If you need help integrating AI into your product, systems, or growth workflows, check out www.goexpandia.com