<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: İsmail Kağan Acar</title>
    <description>The latest articles on DEV Community by İsmail Kağan Acar (@ikaganacar).</description>
    <link>https://dev.to/ikaganacar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3366790%2Fbc7b694b-8d59-463d-b1bf-e182b19fae47.jpg</url>
      <title>DEV Community: İsmail Kağan Acar</title>
      <link>https://dev.to/ikaganacar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ikaganacar"/>
    <language>en</language>
    <item>
      <title>TOON Benchmarks: A Critical Analysis of Different Results</title>
      <dc:creator>İsmail Kağan Acar</dc:creator>
      <pubDate>Fri, 07 Nov 2025 10:07:10 +0000</pubDate>
      <link>https://dev.to/ikaganacar/toon-benchmarks-a-critical-analysis-of-different-results-5h66</link>
      <guid>https://dev.to/ikaganacar/toon-benchmarks-a-critical-analysis-of-different-results-5h66</guid>
      <description>&lt;p&gt;TOON (Token-Oriented Object Notation), a new data format is making brave claims. It says it can cut down on tokens by &lt;strong&gt;30–60%&lt;/strong&gt; compared to JSON, while also helping LLMs understand the data &lt;strong&gt;more accurately&lt;/strong&gt;.[1]&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;However&lt;/strong&gt;, when you start looking at the test results (the benchmarks), you see a difference between the &lt;strong&gt;official scores&lt;/strong&gt; and the results from &lt;strong&gt;independent testing&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So, this article is going to lay out both sets of data, side-by-side, and look into what might explain why they don’t match up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Benchmark Methodology
&lt;/h2&gt;

&lt;p&gt;Before we dive into the results, it’s important to know what these tests actually measure.&lt;/p&gt;

&lt;p&gt;These benchmarks check how well an LLM can &lt;strong&gt;understand and pull information&lt;/strong&gt; from data. Specifically, they measure how accurately it answers questions about data when it’s presented in different formats.[1]&lt;/p&gt;

&lt;p&gt;So, think of it as a &lt;strong&gt;reading test&lt;/strong&gt;, not a test of whether the model can &lt;em&gt;write&lt;/em&gt; in that format.&lt;/p&gt;
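&lt;p&gt;To make that concrete, here is a minimal sketch of what one retrieval-style test item can look like. The helper names and the exact-match scoring rule are illustrative, not taken from the benchmark code.&lt;/p&gt;

```python
# Illustrative sketch of a format-comprehension ("reading") benchmark item.
# The model receives serialized data plus a question and must answer from it.

def build_prompt(serialized_data: str, question: str) -> str:
    """Assemble the retrieval prompt that would be sent to the LLM."""
    return (
        "Answer using only the data below.\n\n"
        f"{serialized_data}\n\n"
        f"Question: {question}\nAnswer:"
    )

def is_correct(model_answer: str, expected: str) -> bool:
    """Exact-match scoring after normalizing whitespace and case."""
    return model_answer.strip().lower() == expected.strip().lower()

toon_data = "users[2]{id,name,role}:\n  1,Alice,admin\n  2,Bob,user"
prompt = build_prompt(toon_data, "What is the role of the user named Bob?")
print(is_correct(" User ", "user"))  # True: normalization makes these match
```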

&lt;h2&gt;
  
  
  Official Repository Benchmarks
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Headline Results
&lt;/h3&gt;

&lt;p&gt;The official TOON repository presents compelling performance metrics based on 201 data retrieval questions across 4 models:[1]&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0iodwdleszfl7ezi3pqg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0iodwdleszfl7ezi3pqg.png" alt="captionless image" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Finding:&lt;/strong&gt; TOON achieves 68.7% accuracy versus JSON’s 65.7% while using 39.5% fewer tokens — a seemingly definitive win for TOON on both efficiency and accuracy metrics.[1]&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Dataset Composition
&lt;/h3&gt;

&lt;p&gt;The official tests used five different types of datasets to see how the models performed:[1]&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Tabular:&lt;/strong&gt; 100 employee records, which were all simple and had the same fields.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Nested:&lt;/strong&gt; 50 e-commerce orders, which were complex and had data tucked inside other data (like a customer object with a list of items).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Analytics:&lt;/strong&gt; 60 days of time-series data (like dates and numbers).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;GitHub:&lt;/strong&gt; Real-world data from 100 popular GitHub repositories.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Event Logs:&lt;/strong&gt; 75 logs, about half of which were simple and the other half had more complex error details.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Across all these datasets, the models were asked a total of 201 questions to retrieve information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Model-Specific Performance
&lt;/h2&gt;

&lt;p&gt;A separate benchmark study using the TOON implementation tested individual model performance, showing varying results across different LLMs:[1]&lt;/p&gt;

&lt;h3&gt;
  
  
  GPT-5-nano
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;TOON:&lt;/strong&gt; 88.6% accuracy (178/201) — &lt;em&gt;Best overall performer&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;JSON compact:&lt;/strong&gt; 88.1% accuracy (177/201)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;CSV:&lt;/strong&gt; 88.0% accuracy (88/100)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;YAML:&lt;/strong&gt; 84.6% accuracy (170/201)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;XML:&lt;/strong&gt; 81.6% accuracy (164/201)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;JSON:&lt;/strong&gt; 80.1% accuracy (161/201)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Claude Haiku 4.5
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;YAML:&lt;/strong&gt; 52.2% accuracy (105/201) — &lt;em&gt;Best performer for this model&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;TOON:&lt;/strong&gt; 50.7% accuracy (102/201)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;JSON:&lt;/strong&gt; 50.2% accuracy (101/201)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;JSON compact:&lt;/strong&gt; 49.8% accuracy (100/201)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;XML:&lt;/strong&gt; 49.3% accuracy (99/201)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;CSV:&lt;/strong&gt; 39.0% accuracy (39/100)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Gemini 2.5 Flash
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;XML:&lt;/strong&gt; 86.1% accuracy (173/201) — &lt;em&gt;Best performer for this model&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;TOON:&lt;/strong&gt; 84.1% accuracy (169/201) — &lt;em&gt;Ranked 2nd&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;CSV:&lt;/strong&gt; 82.0% accuracy (82/100)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;JSON compact:&lt;/strong&gt; 81.1% accuracy (163/201)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;YAML:&lt;/strong&gt; 81.1% accuracy (163/201)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;JSON:&lt;/strong&gt; 81.1% accuracy (163/201)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Grok-4-fast-non-reasoning
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;TOON:&lt;/strong&gt; 51.2% accuracy (103/201) — &lt;em&gt;Tied for 1st&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;JSON:&lt;/strong&gt; 51.2% accuracy (103/201) — &lt;em&gt;Tied for 1st&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;XML:&lt;/strong&gt; 50.2% accuracy (101/201)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;JSON compact:&lt;/strong&gt; 49.8% accuracy (100/201)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;YAML:&lt;/strong&gt; 48.8% accuracy (98/201)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;CSV:&lt;/strong&gt; 40.0% accuracy (40/100)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Note: These results are from a separate 201-question benchmark and may use different datasets than the main official repository’s 201-question benchmark.&lt;/em&gt;[1]&lt;/p&gt;

&lt;h2&gt;
  
  
  Independent Third-Party Benchmarks
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Test 1: Tabular Data with GPT-4.1-nano
&lt;/h2&gt;

&lt;p&gt;An independent evaluation from &lt;a href="https://www.improvingagents.com/blog/is-toon-good-for-table-data" rel="noopener noreferrer"&gt;improvingagents.com&lt;/a&gt; presents dramatically different findings. Testing was conducted using &lt;strong&gt;GPT-4.1-nano&lt;/strong&gt; across 12 formats with statistical confidence intervals:[2]&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmnued8t7psoy2no5qix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmnued8t7psoy2no5qix.png" alt="captionless image" width="800" height="690"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Finding:&lt;/strong&gt; In this evaluation, TOON ranks 9th out of 12 formats with 47.5% accuracy, performing worse than JSON (52.3%) and significantly behind Markdown-KV (60.7%).[2]&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  The Accuracy-Token Efficiency Trade-off
&lt;/h3&gt;

&lt;p&gt;This independent test shows that while &lt;strong&gt;TOON is great at saving tokens&lt;/strong&gt; (using only 21,518 compared to 66,396 for JSON), this efficiency might come at a price.[2]&lt;/p&gt;

&lt;p&gt;It seems that being &lt;em&gt;too&lt;/em&gt; compact might make the data harder for the LLM to understand.&lt;/p&gt;

&lt;p&gt;Here’s the interesting part: the format that got the &lt;strong&gt;best accuracy score&lt;/strong&gt; (Markdown-KV at 60.7%) was actually a lot “heavier,” using &lt;strong&gt;more than double the tokens&lt;/strong&gt; that TOON did.&lt;/p&gt;

&lt;p&gt;This suggests that using more words and a clearer structure (being more “verbose”) might actually give the LLM the context it needs to understand the data better.&lt;/p&gt;

&lt;h2&gt;
  
  
  Test 2: Nested Data with GPT-5-nano
&lt;/h2&gt;

&lt;p&gt;The same independent researchers conducted a second evaluation using GPT-5-nano to test TOON’s performance with nested data structures:[3]&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthhiwz4j74fkrkhq9glg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthhiwz4j74fkrkhq9glg.png" alt="captionless image" width="670" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Finding:&lt;/strong&gt; In this nested data test, TOON ranked last with 43.1% accuracy, performing worse than JSON (50.3%), Markdown (54.3%), and significantly behind YAML (62.1%).[3]&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This second test reveals a critical limitation: TOON’s performance degrades substantially with nested data structures, despite still being relatively token-efficient.[3]&lt;/p&gt;

&lt;h2&gt;
  
  
  Analyzing the Differences
&lt;/h2&gt;

&lt;p&gt;The conflicting results present a puzzle: how can TOON score 68.7% in official benchmarks but only 47.5% in independent testing? Several factors warrant consideration:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Dataset Characteristics
&lt;/h3&gt;

&lt;p&gt;The official tests and the independent tests were not testing the same things.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Official Tests:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The creators of TOON set up their benchmarks using &lt;strong&gt;five different types of data.&lt;/strong&gt; This included “tabular” (flat) employee records, which they openly state is the ideal use case for TOON.[1]&lt;/p&gt;

&lt;p&gt;They basically created different test “tracks” (like flat data vs. mixed-structure data) that were heavily weighted toward scenarios where TOON was designed to win.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Independent Tests:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The independent researchers focused &lt;strong&gt;only on tabular (flat) data&lt;/strong&gt; and used one specific model, GPT-4.1-nano (which is a small, fast model).&lt;/p&gt;

&lt;p&gt;Their write-up suggests that TOON “could be a format to consider if you’re trying to reduce token usage, especially if the limitations of the CSV format make it hard to represent aspects of your data.”[2]&lt;/p&gt;

&lt;p&gt;This implies their tests might have used data or asked questions in a way that didn’t play to TOON’s biggest strengths, giving a different perspective on its performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Question Complexity
&lt;/h3&gt;

&lt;p&gt;The distribution of question types matters significantly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Simple field retrieval favors compact formats like TOON&lt;/li&gt;
&lt;li&gt;  Complex aggregation may benefit from more explicit structure&lt;/li&gt;
&lt;li&gt;  Nested queries could suffer in TOON’s flattened representation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The independent study &lt;strong&gt;doesn’t specify question distribution details,&lt;/strong&gt; making direct comparison difficult.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Model Selection
&lt;/h3&gt;

&lt;p&gt;This represents a critical methodological difference:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Official benchmarks:&lt;/strong&gt; Tested across &lt;strong&gt;4 different LLM models&lt;/strong&gt;, providing an aggregate view of TOON’s performance.[1]&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Independent benchmarks:&lt;/strong&gt; Conducted two separate tests — one with &lt;strong&gt;GPT-4.1-nano&lt;/strong&gt; on tabular data and another with &lt;strong&gt;GPT-5-nano&lt;/strong&gt; on nested data.[2][3]&lt;/p&gt;

&lt;p&gt;The model-specific results from separate testing show dramatic variance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;GPT-5-nano&lt;/strong&gt; achieved &lt;strong&gt;96.1%&lt;/strong&gt; accuracy with TOON[1]&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Claude Haiku 4.5&lt;/strong&gt; scored only &lt;strong&gt;48.7%&lt;/strong&gt; with TOON[1]&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;GPT-4.1-nano&lt;/strong&gt; in the independent study: &lt;strong&gt;47.5%&lt;/strong&gt; with TOON[2]&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Interestingly, the independent tests showed that even &lt;strong&gt;GPT-5-nano achieved only 43.1% accuracy with TOON on nested data[3]&lt;/strong&gt;, despite the &lt;strong&gt;official benchmarks showing GPT-5-nano reaching 96.1% with TOON.[1]&lt;/strong&gt; This dramatic variance suggests the data structure type (tabular vs. nested) may be more significant than model selection alone.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Token Counting Methodology
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Official benchmarks&lt;/strong&gt; used &lt;strong&gt;o200k_base&lt;/strong&gt; encoding (GPT-4o/GPT-5 tokenizer) via gpt-tokenizer.[1]&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Independent benchmarks&lt;/strong&gt; don’t specify their tokenization method.[2]&lt;/p&gt;

&lt;p&gt;Different tokenizers can produce varying token counts for identical text, particularly with formatting characters and whitespace. This could affect both absolute counts and relative efficiency rankings.&lt;/p&gt;
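&lt;p&gt;A quick way to see why the counting method matters: the two crude “tokenizers” below disagree wildly on pretty-printed versus compact JSON, because one is whitespace-sensitive and one is not. (The crude splitters are stand-ins for illustration; the real benchmarks count with an actual tokenizer such as o200k_base.)&lt;/p&gt;

```python
import json
import re

data = {"items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}]}
pretty = json.dumps(data, indent=2)
compact = json.dumps(data, separators=(",", ":"))

def whitespace_tokens(s: str) -> int:
    # Whitespace-sensitive counting: indentation inflates the count.
    return len(s.split())

def symbol_tokens(s: str) -> int:
    # Counts words and punctuation, ignoring whitespace entirely.
    return len(re.findall(r"\w+|[^\w\s]", s))

print(whitespace_tokens(pretty), whitespace_tokens(compact))  # differ a lot
print(symbol_tokens(pretty), symbol_tokens(compact))          # identical
```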

&lt;h2&gt;
  
  
  Model-Specific Considerations
&lt;/h2&gt;

&lt;p&gt;The official benchmarks reveal that TOON’s performance varies dramatically by model:[1]&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Excellent: GPT-5-nano (88.6% on official benchmarks)&lt;/li&gt;
&lt;li&gt;  Good: Gemini 2.5 Flash (86.4%)&lt;/li&gt;
&lt;li&gt;  Moderate: Claude Haiku 4.5 (48.7%), Grok-4-fast (49.4%)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, the independent nested data test showed GPT-5-nano achieving only 43.1% accuracy with TOON.[3] &lt;strong&gt;This suggests the data structure type matters more than model selection alone.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This variance is critical for practical applications. TOON’s performance depends heavily on both the model you’re using AND the type of data you’re working with. &lt;strong&gt;Even top-performing models&lt;/strong&gt; like GPT-5-nano &lt;strong&gt;may struggle&lt;/strong&gt; with TOON when dealing with nested structures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: An Evidence-Based Perspective
&lt;/h2&gt;

&lt;p&gt;The available benchmarks paint an incomplete picture. TOON demonstrably reduces token usage by 30–60% across multiple studies — this finding appears robust.[1][2] However, accuracy results span from strong performance (68.7%) to below-average (47.5%), depending on the evaluation methodology.&lt;/p&gt;

&lt;p&gt;This variance isn’t necessarily damning. It reflects the reality that no data format is optimal for all scenarios. &lt;strong&gt;TOON appears well-suited for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Uniform tabular data&lt;/li&gt;
&lt;li&gt;  Simple field retrieval queries&lt;/li&gt;
&lt;li&gt;  GPT-5 and similar models&lt;/li&gt;
&lt;li&gt;  Applications where token efficiency is paramount&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;It may under-perform with:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Complex nested structures (where it ranked last in independent testing)&lt;/li&gt;
&lt;li&gt;  Deep reasoning queries&lt;/li&gt;
&lt;li&gt;  Claude Haiku and similar models&lt;/li&gt;
&lt;li&gt;  Applications where accuracy cannot be compromised&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  So, what’s the takeaway?
&lt;/h2&gt;

&lt;p&gt;Here’s my recommendation: Treat TOON like a &lt;strong&gt;specialized tool&lt;/strong&gt;, not a one-size-fits-all replacement for JSON.&lt;/p&gt;

&lt;p&gt;If you’re thinking about using it for a real project, &lt;strong&gt;you have to test it yourself&lt;/strong&gt; — with &lt;em&gt;your&lt;/em&gt; data and &lt;em&gt;your&lt;/em&gt; models — before you commit. The token savings look great, but you need to prove they don’t hurt the accuracy of the answers you need.&lt;/p&gt;
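&lt;p&gt;A sketch of what that self-test can look like: run the same questions over your data in each candidate format and compare accuracy. The &lt;em&gt;ask&lt;/em&gt; callable is a stand-in for your real model client; everything here is illustrative.&lt;/p&gt;

```python
import json

def evaluate(format_name, serialize, data, questions, ask):
    """Accuracy of `ask` on `questions`, with `data` rendered by `serialize`.

    `ask` is a stand-in for a real model call (OpenAI, Anthropic, local, ...).
    """
    correct = 0
    for question, expected in questions:
        prompt = f"Data ({format_name}):\n{serialize(data)}\n\nQ: {question}\nA:"
        if expected.lower() in ask(prompt).lower():
            correct += 1
    return correct / len(questions)

# Check the harness with a fake model that always answers "admin":
questions = [("What is user 1's role?", "admin"),
             ("What is user 2's role?", "user")]
data = {"users": [{"id": 1, "role": "admin"}, {"id": 2, "role": "user"}]}
acc = evaluate("json", json.dumps, data, questions, ask=lambda p: "admin")
print(acc)  # 0.5: the fake model only gets the first question right
```

Swap in the same data serialized as TOON, YAML, etc., and compare the resulting accuracies side by side.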

&lt;p&gt;As TOON matures and more independent researchers test it, we’ll all get a better idea of where it really shines and where older formats are still better.&lt;/p&gt;

&lt;p&gt;Until then, &lt;strong&gt;let your own results guide you, not the hype.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Saving tokens is a big deal, and optimizations are important. But good optimization means you have to be clear-eyed about the pros and cons, not just accept the headline numbers.&lt;/p&gt;

&lt;p&gt;TOON is an interesting experiment. It just needs more scrutiny and ongoing testing as the technology and our use cases for it continue to evolve.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;[1]: Johann Schopplich, “TOON — Token-Oriented Object Notation”, GitHub Repository, &lt;a href="https://github.com/johannschopplich/toon" rel="noopener noreferrer"&gt;https://github.com/johannschopplich/toon&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[2]: “Is TOON Good for Table Data?”, Improving Agents, &lt;a href="https://www.improvingagents.com/blog/is-toon-good-for-table-data" rel="noopener noreferrer"&gt;https://www.improvingagents.com/blog/is-toon-good-for-table-data&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[3]: “TOON Benchmarks”, Improving Agents, &lt;a href="https://www.improvingagents.com/blog/toon-benchmarks" rel="noopener noreferrer"&gt;https://www.improvingagents.com/blog/toon-benchmarks&lt;/a&gt;&lt;/p&gt;

</description>
      <category>llm</category>
      <category>rag</category>
      <category>json</category>
      <category>ai</category>
    </item>
    <item>
      <title>TOON: New data format for LLM Applications</title>
      <dc:creator>İsmail Kağan Acar</dc:creator>
      <pubDate>Fri, 07 Nov 2025 06:38:28 +0000</pubDate>
      <link>https://dev.to/ikaganacar/toon-new-data-format-for-llm-applications-37f5</link>
      <guid>https://dev.to/ikaganacar/toon-new-data-format-for-llm-applications-37f5</guid>
      <description>&lt;h3&gt;
  
  
  Understanding the Token Economy Problem
&lt;/h3&gt;

&lt;p&gt;Large Language Models (LLMs) work by “reading” information in small pieces called &lt;strong&gt;tokens&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Think of tokens like building blocks. These tokens are important because they control two main things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Cost:&lt;/strong&gt; How much you pay for API calls or GPU time.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Memory:&lt;/strong&gt; How much information the LLM can handle at one time (this is its “context window”).&lt;/li&gt;
&lt;/ol&gt;
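
&lt;p&gt;A back-of-the-envelope sketch of how token count translates into cost. The rate and volumes below are hypothetical, chosen only to show the arithmetic, not any provider’s actual pricing.&lt;/p&gt;

```python
# Hypothetical numbers, purely to illustrate the arithmetic.
price_per_million_tokens = 0.50   # USD per 1M input tokens (hypothetical rate)
calls_per_day = 10_000
tokens_json = 1_000               # tokens per call with a verbose format
tokens_toon = 600                 # same payload, ~40% fewer tokens

def daily_cost(tokens_per_call: int) -> float:
    return calls_per_day * tokens_per_call * price_per_million_tokens / 1_000_000

print(f"JSON: ${daily_cost(tokens_json):.2f}/day")   # JSON: $5.00/day
print(f"TOON: ${daily_cost(tokens_toon):.2f}/day")   # TOON: $3.00/day
# Fewer input tokens also leave more of the fixed context window free.
```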

&lt;p&gt;When you give the LLM organized data like Tool/API outputs, the &lt;strong&gt;format&lt;/strong&gt; you use (like JSON) changes how many tokens it takes up.&lt;/p&gt;

&lt;p&gt;Common formats like &lt;strong&gt;JSON&lt;/strong&gt; are widely used but aren’t &lt;strong&gt;token-efficient&lt;/strong&gt;. They often use more tokens than necessary, which drives up costs and uses up the LLM’s limited memory.&lt;/p&gt;

&lt;p&gt;This is where &lt;strong&gt;Token-Oriented Object Notation (TOON)&lt;/strong&gt; enters the field. Created in 2025, TOON is a compact data serialization format specifically designed for passing structured data to LLMs, claiming a 30–60% token reduction compared to JSON.[1]&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What is TOON?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;TOON is a text-based data format that prioritizes token efficiency while maintaining human readability. Unlike JSON, which repeats field names for every object, TOON uses a tabular approach where field names are declared once in a header, and subsequent rows contain only values.[1]&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Core Syntax&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;TOON combines familiar patterns from YAML (indentation-based structure) and CSV (tabular format) into a cohesive system that LLMs can parse naturally.[1]&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;users[3]{id,name,role}:
  1,Alice Smith,admin
  2,Bob Jones,user
  3,Carol White,moderator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The syntax breaks down as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;em&gt;users&lt;/em&gt; — Collection name&lt;/li&gt;
&lt;li&gt;  &lt;em&gt;[3]&lt;/em&gt; — Number of items (optional but recommended)&lt;/li&gt;
&lt;li&gt;  &lt;em&gt;{id,name,role}&lt;/em&gt; — Field names declared once&lt;/li&gt;
&lt;li&gt;  Subsequent lines contain comma-separated values&lt;/li&gt;
&lt;li&gt;  2-space indentation for nesting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Comparison with JSON&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;//JSON format (117 tokens):&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;items&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sku&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;A1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Widget&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;qty&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;price&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;9.99&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sku&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;B2&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Gadget&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;qty&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;price&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;14.5&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sku&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;C3&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Doohickey&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;qty&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;price&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;7.25&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// TOON format (49 tokens):
items[3]{sku,name,qty,price}:
  A1,Widget,2,9.99
  B2,Gadget,1,14.5
  C3,Doohickey,5,7.25
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example demonstrates a 58.1% token reduction (68 fewer tokens) for identical information.[1] The efficiency comes from eliminating repeated field names and reducing structural overhead.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How LLMs Understand TOON Without Training&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A critical question emerges: if TOON was created in 2025, how can existing LLMs understand it when they weren’t trained on this format?&lt;/p&gt;

&lt;p&gt;The answer lies in transfer learning. Even without specific training, LLMs can understand TOON because it borrows familiar patterns from formats they were trained on (specifically YAML and CSV).[1]&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“TOON works best when you show the format instead of describing it. The structure is self-documenting — models parse it naturally once they see the pattern. Models treat it like familiar YAML or CSV.”[1]&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This understanding capability means TOON can be deployed immediately without fine-tuning.&lt;/p&gt;
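&lt;p&gt;In practice, “showing the format” can be as simple as prepending one tiny example before the real payload. A minimal sketch (the helper and prompt wording are illustrative):&lt;/p&gt;

```python
# Prepend a tiny TOON example so the model locks onto the pattern
# before it sees the real payload. All names here are illustrative.
FORMAT_EXAMPLE = "users[2]{id,name}:\n  1,Alice\n  2,Bob"

def toon_prompt(payload: str, task: str) -> str:
    return (
        "The data below uses a compact tabular format. Example:\n"
        f"{FORMAT_EXAMPLE}\n\n"
        f"Data:\n{payload}\n\n"
        f"Task: {task}"
    )

demo = toon_prompt("orders[1]{id,total}:\n  7,19.99",
                   "What is the total of order 7?")
print(demo)
```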

&lt;h3&gt;
  
  
  &lt;strong&gt;Using TOON in Practice&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Implementation Libraries&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;TOON has many community-driven, production-ready implementations across multiple programming languages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Python:&lt;/strong&gt; &lt;em&gt;python-toon&lt;/em&gt; by xaviviro[7]&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Python:&lt;/strong&gt; &lt;em&gt;toon-llm&lt;/em&gt; by davidpirogov[8]
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;toon&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;stringify&lt;/span&gt;
&lt;span class="c1"&gt;# Parse TOON to Python objects
&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
products[2]{id,name,price}:
  101,Laptop,999.99
  102,Mouse,29.99
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Access data
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;products&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;  &lt;span class="c1"&gt;# Output: Laptop
# Convert Python objects to TOON
&lt;/span&gt;&lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;users&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Alice&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;admin&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Bob&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="n"&gt;toon_string&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;JavaScript/TypeScript:&lt;/strong&gt; Official reference implementation[1]
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;stringify&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;toon-format&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="c1"&gt;// Parse TOON string&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`
orders[3]{orderId,customer,total}:
  ORD-001,John Smith,150.00
  ORD-002,Jane Doe,275.50
  ORD-003,Bob Wilson,89.99
`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;// Access data&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;orders&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;customer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// Output: John Smith&lt;/span&gt;
&lt;span class="c1"&gt;// Convert to TOON&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;inventory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;sku&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;A1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;stock&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;location&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Warehouse-A&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;sku&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;B2&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;stock&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;23&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;location&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Warehouse-B&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;toonString&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;inventory&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;PHP:&lt;/strong&gt; &lt;em&gt;toon-php&lt;/em&gt; by HelgeSverre[9]
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="nc"&gt;Toon\Parser&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="nc"&gt;Toon\Formatter&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;// Parse TOON&lt;/span&gt;
&lt;span class="nv"&gt;$parser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Parser&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nv"&gt;$data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$parser&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"
employees[2]&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;id,name,department&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:
  1001,Alice Johnson,Engineering
  1002,Bob Smith,Marketing
"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// Access data&lt;/span&gt;
&lt;span class="k"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'employees'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s1"&gt;'name'&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt; &lt;span class="c1"&gt;// Output: Alice Johnson&lt;/span&gt;
&lt;span class="c1"&gt;// Format to TOON&lt;/span&gt;
&lt;span class="nv"&gt;$formatter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Formatter&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nv"&gt;$toonString&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$formatter&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;    &lt;span class="s1"&gt;'tasks'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'id'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'title'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'Review PR'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'status'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'pending'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'id'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'title'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'Deploy'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'status'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'complete'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Dart&lt;/strong&gt;: &lt;em&gt;toon&lt;/em&gt; by wisamidris77[10]
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="s"&gt;'package:toon_dart/toon_dart.dart'&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;'user'&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="s"&gt;'id'&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;123&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s"&gt;'name'&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'Ada'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s"&gt;'tags'&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'reading'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'gaming'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="s"&gt;'active'&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s"&gt;'preferences'&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="n"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;toonEncode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Go&lt;/strong&gt;: &lt;em&gt;gotoon&lt;/em&gt; by alpkeskin[11]
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"fmt"&lt;/span&gt;
    &lt;span class="s"&gt;"log"&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/alpkeskin/gotoon"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="k"&gt;interface&lt;/span&gt;&lt;span class="p"&gt;{}{&lt;/span&gt;
        &lt;span class="s"&gt;"users"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="k"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="k"&gt;interface&lt;/span&gt;&lt;span class="p"&gt;{}{&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"name"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"Alice"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"role"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"admin"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"name"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"Bob"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"role"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"user"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;encoded&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;gotoon&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;encoded&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;When to Use TOON&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; Tabular Data with Uniform Objects&lt;/li&gt;
&lt;li&gt; RAG System Contexts [2]&lt;/li&gt;
&lt;li&gt; Analytics Dashboards&lt;/li&gt;
&lt;li&gt; AI-Powered Classification [2]&lt;/li&gt;
&lt;/ol&gt;
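&lt;p&gt;The first case is where the savings come from: when every object shares the same keys, TOON declares the field names once in a header and emits each object as a bare row. The sketch below illustrates this with a minimal, hypothetical re-implementation of the tabular layout (&lt;code&gt;toon_table&lt;/code&gt; is not the official library API) and compares character counts against &lt;code&gt;json.dumps&lt;/code&gt;:&lt;/p&gt;

```python
# Illustrative sketch of TOON's tabular layout for uniform objects.
# toon_table is a hypothetical helper, not the official encoder.
import json

def toon_table(key, rows):
    # Declare the shared field names once in the header...
    fields = list(rows[0])
    header = f"{key}[{len(rows)}]{{{','.join(fields)}}}:"
    # ...then emit each object as a bare comma-separated row.
    lines = ["  " + ",".join(str(r[f]) for f in fields) for r in rows]
    return "\n".join([header] + lines)

users = [
    {"id": 1, "name": "Alice", "role": "admin"},
    {"id": 2, "name": "Bob", "role": "user"},
]
as_json = json.dumps({"users": users})
as_toon = toon_table("users", users)
# The repeated key names and braces are gone, so the TOON form is
# noticeably shorter; the gap widens as the row count grows.
print(len(as_json), len(as_toon))
```
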

&lt;h3&gt;
  
  
  &lt;strong&gt;When NOT to Use TOON&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Deep Nesting Required&lt;/strong&gt;&lt;/p&gt;
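&lt;p&gt;The intuition: TOON’s savings come from factoring repeated field names out of arrays of uniform objects. A deeply nested structure with a single object per level (the &lt;code&gt;config&lt;/code&gt; dict below is a made-up example) has nothing to factor out, so TOON falls back to YAML-like indentation and the advantage over JSON largely disappears:&lt;/p&gt;

```python
# Hypothetical example of data that TOON handles poorly: one object per
# level, no repeated keys for a tabular header to deduplicate.
import json

config = {
    "server": {
        "tls": {
            "cert": {"path": "/etc/ssl/cert.pem", "renew": {"days": 30}}
        }
    }
}
# JSON spends tokens on braces; TOON would spend roughly as many on
# indentation and per-level keys, so neither format wins clearly here.
print(json.dumps(config))
```
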

&lt;p&gt;&lt;strong&gt;2. Inconsistent Object Structures&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This is awkward in TOON
mixed_data[3]{id,name,email,phone,address}:
  1,Alice,alice@example.com,,            # Missing phone and address
  2,Bob,,555-1234,                       # Missing email and address
  3,Carol,carol@example.com,,123 Main St
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Tool Calling / Function Definitions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;TOON is designed as an input format for data, not for defining tool schemas or function signatures.[3] All major LLM providers (OpenAI, Anthropic, Google) expect tool definitions as JSON, with parameters described using JSON Schema.[4][5]&lt;/p&gt;

&lt;p&gt;Use JSON for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Function calling schemas&lt;/li&gt;
&lt;li&gt;  API tool definitions&lt;/li&gt;
&lt;li&gt;  LLM output when integrating with existing systems&lt;/li&gt;
&lt;/ul&gt;
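&lt;p&gt;For contrast, this is roughly the shape of an OpenAI-style tool definition: plain JSON with the parameters expressed as a JSON Schema object. The function name and fields here are illustrative, not from any real API:&lt;/p&gt;

```python
# Sketch of an OpenAI-style function-calling tool definition.
# The schema must stay JSON; TOON is only for data payloads in the prompt.
import json

tool = {
    "type": "function",
    "function": {
        "name": "get_order_status",          # illustrative name
        "description": "Look up the status of an order by its ID.",
        "parameters": {                       # a JSON Schema object
            "type": "object",
            "properties": {
                "order_id": {"type": "string", "description": "e.g. ORD-001"}
            },
            "required": ["order_id"],
        },
    },
}
print(json.dumps(tool, indent=2))
```
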

&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;TOON represents a targeted optimization for a specific problem: efficiently passing structured, tabular data to LLMs. It doesn’t replace JSON across the board, nor does it try to. Instead, it offers a specialized tool that, when applied to the right use cases, delivers measurable token reduction and cost savings.&lt;/p&gt;

&lt;p&gt;The format’s compatibility with existing LLMs, combined with production-ready implementations across multiple languages, makes it accessible for rapid integration. As the AI ecosystem grows and token efficiency becomes increasingly important, formats like TOON can deliver meaningful operational savings.&lt;/p&gt;

&lt;p&gt;For teams operating LLM-powered systems at scale, particularly those working with tabular data, RAG systems, or analytics applications, TOON deserves consideration as an optimization strategy.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;References&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;[1]: Johann Schopplich, “TOON — Token-Oriented Object Notation”, GitHub Repository, &lt;a href="https://github.com/johannschopplich/toon" rel="noopener noreferrer"&gt;https://github.com/johannschopplich/toon&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[2]: “TOON Format Guide: Reduce LLM Token Usage by 50%”, Nihar Daily, &lt;a href="https://www.nihardaily.com/131-token-oriented-object-notation-toon-your-path-to-50-token-savings" rel="noopener noreferrer"&gt;https://www.nihardaily.com/131-token-oriented-object-notation-toon-your-path-to-50-token-savings&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[3]: Abdulkader Safi, “TOON: The Token-Efficient Data Format for LLM Applications”, &lt;a href="https://abdulkadersafi.com/blog/toon-the-token-efficient-data-format-for-llm-applications-complete-guide-2025" rel="noopener noreferrer"&gt;https://abdulkadersafi.com/blog/toon-the-token-efficient-data-format-for-llm-applications-complete-guide-2025&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[4]: “Function Calling with LLMs”, Prompt Engineering Guide, &lt;a href="https://www.promptingguide.ai/applications/function_calling" rel="noopener noreferrer"&gt;https://www.promptingguide.ai/applications/function_calling&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[5]: “Tool/function calling”, LangChain Documentation, &lt;a href="https://python.langchain.com/v0.1/docs/modules/model_io/chat/function_calling/" rel="noopener noreferrer"&gt;https://python.langchain.com/v0.1/docs/modules/model_io/chat/function_calling/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[6]: Joyal Saji, “The Hidden Cost of Tokens in LLMs”, Medium, November 2025, &lt;a href="https://medium.com/@joyalsaji/the-hidden-cost-of-tokens-in-llms-and-how-toon-smarter-strategies-can-shrink-it-827160c92787" rel="noopener noreferrer"&gt;https://medium.com/@joyalsaji/the-hidden-cost-of-tokens-in-llms-and-how-toon-smarter-strategies-can-shrink-it-827160c92787&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[7]: Xavi Viró, “python-toon”, GitHub Repository, &lt;a href="https://github.com/xaviviro/python-toon" rel="noopener noreferrer"&gt;https://github.com/xaviviro/python-toon&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[8]: David Pirogov, “toon-llm”, GitHub Repository, &lt;a href="https://github.com/davidpirogov/toon-llm" rel="noopener noreferrer"&gt;https://github.com/davidpirogov/toon-llm&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[9]: Helge Sverre, “toon-php”, GitHub Repository, &lt;a href="https://github.com/HelgeSverre/toon-php" rel="noopener noreferrer"&gt;https://github.com/HelgeSverre/toon-php&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[10]: Wisam Idris, “toon”, GitHub Repository, &lt;a href="https://github.com/wisamidris77/toon" rel="noopener noreferrer"&gt;https://github.com/wisamidris77/toon&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[11]: Alp Keskin, “gotoon”, GitHub Repository, &lt;a href="https://github.com/alpkeskin/gotoon" rel="noopener noreferrer"&gt;https://github.com/alpkeskin/gotoon&lt;/a&gt;&lt;/p&gt;

</description>
      <category>llm</category>
      <category>rag</category>
      <category>ai</category>
      <category>json</category>
    </item>
  </channel>
</rss>
