💸 Stop Burning Money on AI Tokens
If you build with LLMs (ChatGPT, Claude, Gemini), you know the pain. Tokens = Money.
Every extra bracket and quote in your JSON payloads costs you cash. I found a way to stop the bleeding.
🚀 The Solution: TOON
TOON (Token-Oriented Object Notation) is a hack for your context window. It strips the structural noise out of JSON and lays uniform arrays out as tight tables. The model still parses it reliably.
See the difference:
❌ JSON (108 tokens)
{
  "hikes": [
    { "id": 1, "name": "Blue Lake", "dist": 7.5, "sunny": true },
    { "id": 2, "name": "Ridge", "dist": 9.2, "sunny": false }
  ]
}
✅ TOON (65 tokens)
hikes[2]{id,name,dist,sunny}:
1,Blue Lake,7.5,true
2,Ridge,9.2,false
That is roughly a 40% reduction. Scale that across millions of requests, and you save serious money.
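The tabular rewrite above is mechanical for a uniform array of flat objects. Here is a minimal sketch in plain JavaScript; the function name is illustrative, not the playground's actual code, and it skips edge cases like nested objects or values containing commas.

```javascript
// Convert a uniform array of flat objects into a TOON-style table:
// header "name[count]{fields}:" followed by one comma-joined row per object.
// (Illustrative sketch only -- no escaping of commas or nested values.)
function toToon(name, rows) {
  const fields = Object.keys(rows[0]);
  const header = `${name}[${rows.length}]{${fields.join(",")}}:`;
  const lines = rows.map((row) => fields.map((f) => row[f]).join(","));
  return [header, ...lines].join("\n");
}

// Reproduces the example above:
const toon = toToon("hikes", [
  { id: 1, name: "Blue Lake", dist: 7.5, sunny: true },
  { id: 2, name: "Ridge", dist: 9.2, sunny: false },
]);
console.log(toon);
```

The field names are emitted once in the header instead of being repeated (with quotes and braces) for every row, which is where the token savings come from.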
🎮 What I Built: TOON Playground
I didn't just want to read about it. I wanted to see the savings live.
So I built TOON Playground.
It is a free, open-source web app that shows the math in real time.
🔥 Features
- Instant Conversion: Type JSON on the left, get TOON on the right.
- Live Token Counter: See the exact token counts and savings for each payload.
- Zero Latency: Built with Vanilla JS and Vite. No bloat.
- Modern UI: Glassmorphism design with a sleek dark mode.
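The live counter boils down to comparing token estimates for both encodings. Here is a rough sketch using the common "~4 characters per token" heuristic; the real playground would use an actual tokenizer, and these function names are illustrative assumptions, not its API.

```javascript
// Rough token estimate: ~4 characters per token is a common rule of thumb
// for English-heavy text. A production counter would use a real tokenizer.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Percentage of tokens saved by switching a payload from JSON to TOON.
function savingsPercent(jsonText, toonText) {
  const jsonTokens = estimateTokens(jsonText);
  const toonTokens = estimateTokens(toonText);
  return Math.round((1 - toonTokens / jsonTokens) * 100);
}
```

Wire `savingsPercent` to an `input` event on the JSON textarea and the "savings" readout updates on every keystroke, no server round trip needed.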
🛠️ Tech Stack
- Vite: For speed.
- Vanilla JS: For performance.
- Netlify: For global deployment.
🚀 Try It Now
Don't guess your token usage. Measure it.
👉 toon.shahdeep.ca
Save tokens. Ship faster. 🚀