Zley
I Built an AI Token Calculator Because Long Prompts Need a Sanity Check

I keep running into the same small problem when working with LLMs: a prompt looks fine in a text editor, but I still need to know whether it is going to fit inside the model's context window before I send it.

That is why I added a new tool to Tools Online: AI Token Calculator.

It is a browser-based token and context estimator for common AI models. You can paste a prompt, code snippet, document section, or chat draft, pick the provider/model you care about, and see the token count plus context usage immediately.

What it supports

The first version focuses on the models I most often need to plan for:

  • OpenAI / GPT models
  • Claude
  • Gemini
  • Qwen
  • GLM
  • Kimi

For OpenAI models, the tool uses compatible local BPE tokenizers. For the other providers, it shows the result as a local estimate, because their exact billing tokenizers are either proprietary or only exposed through authenticated APIs.

That distinction is intentional. I did not want the UI to pretend every number has the same precision. If the result is exact, it is marked as exact. If it is an estimate for planning, it is marked as an estimate.
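To make the estimate side concrete, here is a minimal sketch. The ~4-characters-per-token ratio is a common rule of thumb for English prose, not the calculator's actual algorithm, and real BPE tokenizers can diverge noticeably on code or non-English text:

```python
def estimate_tokens(text: str) -> int:
    # Rough planning heuristic: ~4 characters per token for English prose.
    # Real BPE tokenizers diverge on code, CJK text, and unusual symbols,
    # so this number should never be presented as a billing-exact count.
    if not text:
        return 0
    return max(1, round(len(text) / 4))
```

Labeling the result ("exact" vs. "estimate") in the UI is then just a matter of tracking which path produced the number.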

No API key, no upload

The part I cared about most was privacy. Prompt drafts often contain product ideas, customer text, logs, or internal notes. I did not want a token calculator that sends that text to another server just to produce a rough number.

So the calculator runs in the browser and does not require an API key. You paste text, choose the model, and get a quick read on whether the prompt is small, close to the limit, or already too large.
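The "small / close to the limit / too large" read reduces to a simple ratio against the model's context window. A sketch, with illustrative limit values and a hypothetical 80% warning threshold (actual limits vary by model version):

```python
# Illustrative context limits in tokens; real values depend on the exact
# model version and are an assumption here, not the tool's internal table.
CONTEXT_LIMITS = {
    "gpt-4o": 128_000,
    "claude-sonnet": 200_000,
}

def classify_usage(token_count: int, model: str, warn_at: float = 0.8) -> str:
    """Classify how much of a model's context window a prompt consumes."""
    limit = CONTEXT_LIMITS[model]
    if token_count > limit:
        return "too large"
    if token_count > warn_at * limit:
        return "close to the limit"
    return "small"
```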

Where AI helped during development

AI was useful in the boring but important parts of the build. I used it to map out edge cases, compare how different providers expose token counting, shape the first version of the copy, and check whether the UI explained exact counting versus estimation clearly enough.

The final product still needed manual decisions: keep the tool simple, avoid implying exact billing numbers for proprietary tokenizers, support multiple languages, and make the context window usage obvious at a glance.

When I use it

This has already been useful for sanity-checking long prompts before sending them to production models, trimming documents for chat, sizing code snippets, and deciding whether a multi-message prompt should be split.

If you work with GPT, Claude, Gemini, Qwen, GLM, or Kimi and want a quick local check before sending a prompt, you can try the AI Token Calculator here.
