DEV Community

JackAltman

I accidentally burned $27 in LLM tokens during a single debugging session

I was debugging an LLM tool a few weeks ago and something weird happened.

Nothing broke.
No errors.
Responses looked normal.

But when I checked usage later, I had burned $27 in tokens during what I thought was a quick debugging session.

The reason was simple: I had no visibility into token usage while I was actually developing.

The typical workflow with LLM APIs looks like this:

write prompt
run request
check response
repeat

But token usage is basically invisible in that loop.

You usually only see it:

• later in dashboards
• inside logs
• after the fact in billing

Which means when you're experimenting, it’s very easy to burn tokens without realizing it.
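One lightweight way to get that visibility back is to read the `usage` object that every API response already carries and keep a running total in-process. Here's a minimal sketch — the field names follow the OpenAI chat completions response shape, and the `record_usage` helper and `RUNNING_TOTAL` dict are my own illustration, not part of any SDK:

```python
# Minimal sketch: surface token usage per request instead of discovering it
# in the billing dashboard later. Field names mirror the OpenAI chat
# completions `usage` object; the running-total helper is hypothetical.

RUNNING_TOTAL = {"prompt": 0, "completion": 0}

def record_usage(usage: dict) -> str:
    """Accumulate token counts and return a one-line summary for the console."""
    RUNNING_TOTAL["prompt"] += usage["prompt_tokens"]
    RUNNING_TOTAL["completion"] += usage["completion_tokens"]
    session_total = RUNNING_TOTAL["prompt"] + RUNNING_TOTAL["completion"]
    return f"this call: {usage['total_tokens']} tok | session: {session_total} tok"

# After each API call you'd do something like:
#   resp = client.chat.completions.create(...)
#   print(record_usage(resp.usage.model_dump()))
print(record_usage({"prompt_tokens": 120, "completion_tokens": 80, "total_tokens": 200}))
# prints: this call: 200 tok | session: 200 tok
```

Anthropic's Messages API exposes the same idea under `usage.input_tokens` and `usage.output_tokens`, so the same pattern works there with renamed keys.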

Especially when you’re:

• testing prompts
• looping requests
• running agents
• debugging API calls
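The looping and agent cases are the worst offenders, because one bug multiplies the cost silently. A small budget guard can act as a circuit breaker for a debug session — this is a rough sketch with made-up placeholder rates, and the `TokenBudget` class is hypothetical, not from any library:

```python
# Hedged sketch: stop a debug loop before it silently burns through a budget.
# The per-1K-token rate below is an illustrative placeholder, not a real price;
# plug in your model's actual rate.

class TokenBudget:
    def __init__(self, max_usd: float, usd_per_1k_tokens: float):
        self.max_usd = max_usd
        self.rate = usd_per_1k_tokens
        self.spent = 0.0

    def charge(self, total_tokens: int) -> float:
        """Add this request's cost; raise once the session budget is blown."""
        self.spent += total_tokens / 1000 * self.rate
        if self.spent > self.max_usd:
            raise RuntimeError(f"budget exceeded: ${self.spent:.2f}")
        return self.spent

budget = TokenBudget(max_usd=1.00, usd_per_1k_tokens=0.01)
# Inside a request loop you'd call budget.charge(resp.usage.total_tokens)
print(f"${budget.charge(2_000):.2f} spent so far")
# prints: $0.02 spent so far
```

A hard exception mid-loop is crude, but crude is exactly what you want while experimenting.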

After running into this a few times, I wanted something extremely simple:

a live token counter while I'm coding.

Something like how your Mac shows CPU usage or battery percentage.

So I built a small macOS menu bar tool that shows token usage in real time while you're working with OpenAI or Claude APIs.

No dashboards.
No analytics platform.
Just a small number in the menu bar showing how many tokens are being used.

It ended up being way more useful than I expected, especially during prompt experiments.

If anyone here builds with LLM APIs and has had similar “wait how many tokens did that use?” moments, you can check it out here:

https://tokenbar.site

Also curious how other people monitor token usage during development. Logs? Custom tooling? Something else?
