I was tired of checking multiple billing dashboards for my AI API usage. OpenAI has one, Anthropic has another, Google has yet another. No unified view, no alerts when costs spike, and no idea which model was eating my budget.
So I built TokenBurn — a single dashboard that tracks AI API costs across all providers.
In this post, I'll break down the full tech stack, key decisions, and lessons learned from shipping in 5 days.
## What it does
TokenBurn connects to your AI providers via API key and gives you:
- A unified cost dashboard across all providers
- Cost breakdown by model (GPT-4o vs Claude Sonnet vs Gemini)
- Spike detection alerts via email
- Daily spend summaries
- Monthly cost forecasting
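Spike detection and forecasting sound fancy, but the core math is simple. A sketch of the kind of checks involved — the function names and the 2× multiplier are illustrative, not TokenBurn's exact thresholds:

```typescript
// Naive month-end forecast: average daily burn so far, extrapolated
// to the full month. Illustrative, not TokenBurn's exact formula.
export function forecastMonthlySpend(
  spendSoFar: number,
  dayOfMonth: number,
  daysInMonth: number
): number {
  return (spendSoFar / dayOfMonth) * daysInMonth
}

// Spike check: flag when today's spend exceeds a multiple of the
// trailing daily average.
export function isSpike(
  todaySpend: number,
  trailingDailyAvg: number,
  multiplier = 2
): boolean {
  return todaySpend > trailingDailyAvg * multiplier
}
```

Crude, but a linear extrapolation gets you surprisingly far for a monthly forecast.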
## The tech stack
Here's what I used and why each choice mattered.
### Next.js 14 (App Router)
Server components for the marketing pages, client components for the dashboard. The App Router's nested layouts made the dashboard sidebar trivial — define the layout once, every dashboard page inherits it.
```tsx
// app/(dashboard)/layout.tsx
import { Sidebar } from '@/components/Sidebar' // import path assumed

export default function DashboardLayout({
  children,
}: {
  children: React.ReactNode
}) {
  return (
    <div className="flex">
      <Sidebar />
      <main>{children}</main>
    </div>
  )
}
```
### Supabase
This is the secret weapon for solo devs. You get:
- Authentication (Google + GitHub OAuth)
- PostgreSQL database
- Row Level Security (RLS)
- Real-time subscriptions
- Generous free tier
RLS meant I didn't need to write authorization logic in every API route. The database enforces it:
```sql
CREATE POLICY "Users see own data only"
ON usage_records FOR SELECT
USING (auth.uid() = user_id);
```
### AES-256-GCM encryption
API keys are encrypted before storage. I built a small encryption module using Node's crypto library:
```typescript
import { createCipheriv, randomBytes } from 'crypto'

// 32-byte key loaded from an environment variable
const key = Buffer.from(process.env.ENCRYPTION_KEY!, 'hex')

export function encrypt(text: string): string {
  const iv = randomBytes(12) // 96-bit IV, recommended for GCM
  const cipher = createCipheriv('aes-256-gcm', key, iv)
  const ciphertext = Buffer.concat([cipher.update(text, 'utf8'), cipher.final()])
  // prepend IV and auth tag so decryption can verify integrity
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString('base64')
}
```
Keys are decrypted only momentarily during sync operations and never logged.
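For completeness, here's what the full round trip looks like as a self-contained sketch. The stored payload is IV + auth tag + ciphertext, base64-encoded; I use a throwaway demo key below, whereas the real key comes from an environment variable:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'crypto'

const key = randomBytes(32) // demo only; the real key comes from env config

export function encrypt(text: string): string {
  const iv = randomBytes(12)
  const cipher = createCipheriv('aes-256-gcm', key, iv)
  const ciphertext = Buffer.concat([cipher.update(text, 'utf8'), cipher.final()])
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString('base64')
}

export function decrypt(payload: string): string {
  const buf = Buffer.from(payload, 'base64')
  const decipher = createDecipheriv('aes-256-gcm', key, buf.subarray(0, 12))
  decipher.setAuthTag(buf.subarray(12, 28)) // 16-byte GCM tag
  return Buffer.concat([
    decipher.update(buf.subarray(28)),
    decipher.final(),
  ]).toString('utf8')
}
```

The GCM auth tag means a tampered ciphertext fails loudly at `decipher.final()` instead of silently decrypting to garbage.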
### Resend
For transactional emails. Built two HTML email templates:
- Spike alerts — fires when daily spend exceeds threshold
- Daily summaries — sends previous day's costs every morning
The dark theme matches the dashboard for consistent branding.
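The template logic itself is mostly string interpolation. A stripped-down sketch of the spike alert body (`spikeAlertHtml` is a hypothetical helper, not my actual template):

```typescript
export function spikeAlertHtml(
  provider: string,
  todaySpend: number,
  threshold: number
): string {
  const pctOver = Math.round((todaySpend / threshold - 1) * 100)
  // Inline styles only: many email clients strip <style> blocks
  return `
    <div style="background:#111;color:#eee;font-family:sans-serif;padding:24px">
      <h2>Cost spike: ${provider}</h2>
      <p>Today's spend is $${todaySpend.toFixed(2)},
         ${pctOver}% over your $${threshold.toFixed(2)} threshold.</p>
    </div>`
}
```

The resulting string goes out as the `html` field of Resend's `emails.send` call.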
### Recharts
For the cost trend visualizations. Simple API, sensible defaults, works great with React.
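Recharts just wants an array of plain objects for its `data` prop, so the only real work is shaping usage rows into daily points. A sketch, assuming a simplified record shape:

```typescript
interface UsageRecord {
  date: string // ISO day, e.g. '2025-01-15' (assumed shape)
  costUsd: number
}

// Aggregate raw usage records into one point per day, sorted for the X axis.
export function toDailyPoints(
  records: UsageRecord[]
): { date: string; cost: number }[] {
  const byDay = new Map<string, number>()
  for (const r of records) {
    byDay.set(r.date, (byDay.get(r.date) ?? 0) + r.costUsd)
  }
  return [...byDay.entries()]
    .map(([date, cost]) => ({ date, cost }))
    .sort((a, b) => a.date.localeCompare(b.date))
}
```

The result feeds straight into `<LineChart data={points}>` with a `<Line dataKey="cost" />`.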
### Lucide React
Icon library. Started with emojis (don't judge), switched to Lucide for a professional look. Lesson learned — emojis render differently across browsers and look unpolished on a SaaS.
### Dodo Payments
Payment processing as Merchant of Record. They handle taxes, compliance, and payouts globally. Good option for indie devs in India who can't easily integrate Stripe.
### Vercel
Deployment. Works seamlessly with Next.js. The free tier is sufficient for launch; the only paid feature I needed was cron jobs (I'm using cron-job.org as a free alternative for now).
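For reference, Vercel's native cron config lives in `vercel.json` — its limits on the free tier are why I point cron-job.org at an API route instead (the `/api/sync` path here is illustrative):

```json
{
  "crons": [
    { "path": "/api/sync", "schedule": "0 6 * * *" }
  ]
}
```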
## What I'd do differently
1. Start with one provider. I built OpenAI and Anthropic sync from day one. I should have shipped with just OpenAI and added Anthropic after getting feedback. Half the work to validate the same hypothesis.
2. Don't over-engineer encryption. AES-256-GCM is great, but I spent too long perfecting it. A simpler approach would have been fine for MVP — you can always upgrade later.
3. Add payments earlier. I almost launched without a payment system thinking I'd add it after getting traction. Adding Dodo Payments took one afternoon and I got a paying customer the same day. Don't ship without monetization.
4. Pricing matters more than you think. I started at $29/month for Pro and dropped to $9/month before launch. The lower price point removes friction completely. Developers don't think twice about $9. You can always raise prices later.
5. Legal pages aren't optional. Privacy Policy and Terms of Service are required by payment providers and look unprofessional when missing. Generate them before launch.
## What's next
Building a Smart Model Recommender — paste your prompt, test it across multiple LLMs, and see which model gives you the best quality-to-cost ratio.
Early experiments suggest most developers can cut their API costs by 40-60% by switching models for specific use cases. That's the real value proposition — not just tracking costs, but saving them.
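The cost half of that ratio is easy to compute. A sketch with illustrative per-million-token input prices — these numbers are examples only and go stale fast, so always check the providers' current pricing pages:

```typescript
// Illustrative USD prices per 1M input tokens; not authoritative.
const PRICE_PER_M_INPUT_TOKENS: Record<string, number> = {
  'gpt-4o': 2.5,
  'gpt-4o-mini': 0.15,
  'claude-sonnet': 3.0,
}

export function promptCostUsd(model: string, inputTokens: number): number {
  const price = PRICE_PER_M_INPUT_TOKENS[model]
  if (price === undefined) throw new Error(`unknown model: ${model}`)
  return price * (inputTokens / 1_000_000)
}
```

Pair that with a quality score from your own evals and you get a quality-to-cost ranking per use case.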
## Try it
Free tier available — no credit card required: tokenburn.dev
Happy to answer questions about the tech stack or build process in the comments.
What are you using to track your AI API costs? Or do you just hope the bill won't be too bad? 😅