John
Why is tracking LLM token usage still so annoying?

If you build anything with the OpenAI or Claude APIs, you've probably run into this at some point.

You're testing prompts, running scripts, tweaking things quickly… and suddenly you realize you have no real sense of how many tokens you're burning in real time.

You can check dashboards later, sure. But while you're actually developing, it's basically invisible. You run something, it works, and only later do you discover the cost.

I kept running into the same problem:

• running prompt experiments
• testing agents or scripts
• debugging API calls

and having no immediate visibility into token usage while coding.
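The frustrating part is that the visibility is already there per call: both the OpenAI and Anthropic APIs return a usage object with every response. Here's a minimal sketch of pulling that out and keeping a running session total — the field names match OpenAI's chat completions usage schema, but `log_usage` and the plain-dict response are illustrative stand-ins, not a real client:

```python
# Sketch: read the `usage` field returned with each chat completion
# and accumulate a running total for the session. A plain dict stands
# in for the real API response object; field names follow OpenAI's
# documented usage schema (prompt_tokens / completion_tokens).

running_total = {"prompt_tokens": 0, "completion_tokens": 0}

def log_usage(response: dict) -> int:
    """Print and accumulate token usage from a single API response."""
    usage = response.get("usage", {})
    prompt = usage.get("prompt_tokens", 0)
    completion = usage.get("completion_tokens", 0)
    running_total["prompt_tokens"] += prompt
    running_total["completion_tokens"] += completion
    total = prompt + completion
    print(f"this call: {total} tokens "
          f"(session: {sum(running_total.values())})")
    return total

# Mocked response, as if returned by the API:
fake_response = {"usage": {"prompt_tokens": 120, "completion_tokens": 45}}
log_usage(fake_response)  # this call: 165 tokens (session: 165)
```

That's the whole problem in miniature: the data exists, but nobody wants to sprinkle logging calls through every experiment script just to see it.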

Most existing tools fall into one of three buckets:

• dashboards after the fact
• logging solutions
• full analytics platforms

But I just wanted something extremely simple: a tiny indicator that shows token usage while I'm working.

So I ended up building a small macOS menu bar tool that shows token usage in real time while you're developing.

No dashboards.
No analytics platform.
Just a token counter sitting in the menu bar.

If anyone else here builds with LLM APIs and finds themselves wondering “how many tokens did that just burn?”, I'm curious if something like this would actually be useful.

You can see it here:
https://tokenbar.site

Would also love feedback from anyone working heavily with the OpenAI or Claude APIs.
