Habeeb Rahman
How I accidentally built a cost tracking tool for LLMs

Last month I got an API bill that made me physically flinch. $2,847. I had no idea where it came from.

I was building a side project — a fairly standard app with OpenAI for chat, Anthropic for summarization, Stripe for payments, Supabase for the database, and SendGrid for emails. Five services, each with their own dashboard, their own billing page, their own definition of "usage."

I found myself opening five tabs every morning just to check if something had spiked overnight. It was miserable. So I wrote a quick script to intercept outgoing API calls and log the cost next to each one. Just a console.log with a dollar amount. Nothing fancy.
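In spirit, that first script was nothing more than a wrapper around `fetch`. A minimal sketch of the idea, with a made-up flat price table (real costs depend on the model and token counts, and these hostnames and numbers are purely illustrative):

```javascript
// Illustrative flat cost per call by provider hostname — NOT real pricing.
const COST_PER_CALL = {
  'api.openai.com': 0.01,
  'api.anthropic.com': 0.008,
  'api.sendgrid.com': 0.0001,
};

// Guess a rough cost for an outgoing request from its hostname.
function estimateCallCost(url) {
  const host = new URL(url).hostname;
  return COST_PER_CALL[host] ?? 0;
}

// Patch global fetch so every outgoing call logs a dollar amount.
const originalFetch = globalThis.fetch;
globalThis.fetch = async (input, init) => {
  const url = typeof input === 'string' ? input : input.url;
  const cost = estimateCallCost(url);
  if (cost > 0) console.log(`[cost] $${cost.toFixed(4)} -> ${url}`);
  return originalFetch(input, init);
};
```

That's the whole trick: one choke point that every API call passes through, plus a price lookup.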

But then something interesting happened. I saw that my onboarding flow was making 14 LLM calls per new user. Fourteen. I'd built a multi-step wizard where each step called GPT-4o separately, when a single call could have handled it. That one fix cut my daily OpenAI spend by 60%.

I started showing the script to friends who were building with LLMs. They all had the same reaction: "Wait, I can see the cost per request?" Turns out nobody was tracking this. Everyone just waited for the monthly bill and hoped for the best.

So I cleaned it up and turned it into burn0.

What it does

You add one line to your entry point:

```javascript
import 'burn0'
```

That's it. burn0 auto-detects 50+ services — OpenAI, Anthropic, Stripe, Supabase, Twilio, SendGrid, and more — and tracks costs per request in your terminal. No agents to deploy, no complex setup.
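Under the hood, per-request cost tracking for LLMs boils down to mapping each response's token usage onto a price table. A minimal sketch of that calculation — the model names and per-million-token rates below are illustrative assumptions, not burn0's actual tables:

```javascript
// Illustrative pricing in USD per 1M tokens — check your provider for real rates.
const PRICING = {
  'gpt-4o': { input: 2.5, output: 10.0 },
  'claude-sonnet': { input: 3.0, output: 15.0 },
};

// Estimate the dollar cost of one LLM request from the token usage
// that most provider responses report back.
function estimateLlmCost(model, inputTokens, outputTokens) {
  const p = PRICING[model];
  if (!p) return null; // unknown model: can't price it
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}

// A 1,200-token prompt with a 400-token reply on the assumed 'gpt-4o' rate:
// 1200 * 2.50/1e6 + 400 * 10.00/1e6 = 0.003 + 0.004 = $0.007
```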

Run burn0 scan to see every API service in your codebase. Run burn0 report to get a cost breakdown with model names, endpoints, and a running total. You can even attribute costs to specific features with burn0 track <feature>.
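Feature-level attribution is conceptually simple: accumulate each request's estimated cost under the feature that triggered it. A hypothetical sketch of that bookkeeping (not burn0's internals):

```javascript
// Running totals per feature label.
const costsByFeature = new Map();

// Attribute one request's estimated cost to a feature.
function trackCost(feature, dollars) {
  costsByFeature.set(feature, (costsByFeature.get(feature) ?? 0) + dollars);
}

trackCost('onboarding', 0.012);
trackCost('onboarding', 0.009);
trackCost('summarize', 0.004);

// Report: feature -> running total
for (const [feature, total] of costsByFeature) {
  console.log(`${feature}: $${total.toFixed(3)}`);
}
```

This is how a 14-calls-per-signup problem becomes visible: the "onboarding" line dwarfs everything else.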

Beyond the CLI

What started as a terminal tool kept growing. You can create custom API entries for any internal or third-party service burn0 doesn't recognize yet, and monitor your production APIs' costs from a dashboard. It gives you a single pane of glass for your entire stack — LLMs, payment processors, databases, messaging services, everything — so you can finally answer "what does this user session actually cost?" in real time.

What surprised me

The tool I built for myself turned out to solve a problem almost every developer building with APIs has — especially anyone working with LLMs, where a single bad prompt template can burn through hundreds of dollars overnight.

If your API bill has ever surprised you, give it a try:

```shell
$ npx @burn0/burn0
```

Everything runs locally. No data leaves your machine. It's free and open source.

I'd love to hear what you find — especially the "oh no" moments when you see what a feature actually costs.
