If you're feeding JSON into GPT-4, Claude, or Gemini, you're probably wasting tokens on whitespace and verbose keys.
TOON (Token-Optimized Object Notation) is a compact format designed to cut that overhead by 30-60%.
## What TOON looks like
**Standard JSON (verbose)**

```json
{
  "user_identifier": 12345,
  "first_name": "Alice",
  "last_name": "Smith",
  "account_status": "active",
  "subscription_tier": "premium"
}
```

**TOON (compact)**

```
uid:12345|fn:Alice|ln:Smith|status:active|tier:premium
```
Same data. Fraction of the tokens.
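The conversion is easy to sketch. Here is a minimal, illustrative Python version: the key abbreviations (`uid`, `fn`, etc.) simply mirror the example above and are an assumption, not part of any formal spec.

```python
import json

# Hypothetical mapping from verbose JSON keys to short TOON-style keys.
# These abbreviations mirror the example above; they are assumptions,
# not a standard.
KEY_MAP = {
    "user_identifier": "uid",
    "first_name": "fn",
    "last_name": "ln",
    "account_status": "status",
    "subscription_tier": "tier",
}

def to_toon(record: dict) -> str:
    """Serialize a flat dict as pipe-delimited key:value pairs."""
    return "|".join(f"{KEY_MAP.get(k, k)}:{v}" for k, v in record.items())

record = json.loads(
    '{"user_identifier": 12345, "first_name": "Alice", "last_name": "Smith",'
    ' "account_status": "active", "subscription_tier": "premium"}'
)
print(to_toon(record))
# uid:12345|fn:Alice|ln:Smith|status:active|tier:premium
```

Note this sketch only handles flat records; nested objects and values containing `|` or `:` would need escaping rules that a real converter has to define.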
## Why this matters for LLM costs
| Model | Input price | Savings at 30-60% fewer tokens |
|---|---|---|
| GPT-4o | ~$5 / 1M tokens | ~$1.50-3.00 per 1M tokens of JSON |
| Claude Sonnet | ~$3 / 1M tokens | ~$0.90-1.80 per 1M tokens of JSON |
| Gemini 1.5 | ~$1.25 / 1M tokens | ~$0.38-0.75 per 1M tokens of JSON |
If you're processing thousands of records, the savings add up fast.
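To make that concrete, a back-of-envelope estimate (the monthly token volume is a made-up assumption; the price is GPT-4o's ~$5 per 1M input tokens from the table above):

```python
def input_cost(tokens: int, price_per_million: float) -> float:
    """Cost of sending `tokens` input tokens at a given $/1M-token price."""
    return tokens / 1_000_000 * price_per_million

json_tokens = 50_000_000   # hypothetical monthly batch volume (assumption)
savings_rate = 0.40        # midpoint of the claimed 30-60% range
toon_tokens = int(json_tokens * (1 - savings_rate))

gpt4o_price = 5.00         # ~$5 per 1M input tokens
print(f"JSON input cost: ${input_cost(json_tokens, gpt4o_price):.2f}")
# JSON input cost: $250.00
print(f"TOON input cost: ${input_cost(toon_tokens, gpt4o_price):.2f}")
# TOON input cost: $150.00
```

At that volume, a 40% token reduction saves about $100/month on one model alone.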
## When to use TOON
- ✅ Sending structured data to LLMs for analysis
- ✅ RAG pipelines with large context windows
- ✅ Batch processing JSON records through AI APIs
- ❌ Not for APIs expecting standard JSON responses
- ❌ Not for human-readable config files
## Convert your JSON
Use the free JSON to TOON converter — paste your JSON and get the token-optimized output immediately.
Full comparison with real cost calculations: JSON vs TOON — Complete Guide