<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bruno Hanss</title>
    <description>The latest articles on DEV Community by Bruno Hanss (@bruno_hanss_6b82bb52cd0fa).</description>
    <link>https://dev.to/bruno_hanss_6b82bb52cd0fa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2082904%2F4083747d-8303-4e1a-a491-fb2f4cde27d0.jpg</url>
      <title>DEV Community: Bruno Hanss</title>
      <link>https://dev.to/bruno_hanss_6b82bb52cd0fa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bruno_hanss_6b82bb52cd0fa"/>
    <language>en</language>
    <item>
      <title>Converting Large JSON, NDJSON, CSV and XML Files without Blowing Up Memory</title>
      <dc:creator>Bruno Hanss</dc:creator>
      <pubDate>Sat, 14 Feb 2026 12:25:13 +0000</pubDate>
      <link>https://dev.to/bruno_hanss_6b82bb52cd0fa/converting-large-json-ndjson-csv-and-xml-files-without-blowing-up-memory-20ao</link>
      <guid>https://dev.to/bruno_hanss_6b82bb52cd0fa/converting-large-json-ndjson-csv-and-xml-files-without-blowing-up-memory-20ao</guid>
      <description>&lt;p&gt;Most of us have written something like this at some point:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;hugeString&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It works.&lt;/p&gt;

&lt;p&gt;Until it doesn't.&lt;/p&gt;

&lt;p&gt;At some point the file grows.&lt;br&gt;
50MB. 200MB. 1GB. 5GB.&lt;/p&gt;

&lt;p&gt;And suddenly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  The tab freezes (in the browser)&lt;/li&gt;
&lt;li&gt;  Memory spikes&lt;/li&gt;
&lt;li&gt;  The process crashes&lt;/li&gt;
&lt;li&gt;  Or worse: everything technically "works" but becomes unusable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn't a JavaScript problem.&lt;/p&gt;

&lt;p&gt;It's a buffering problem.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Real Issue: Buffering vs Streaming
&lt;/h2&gt;

&lt;p&gt;Most parsing libraries operate in &lt;strong&gt;buffer mode&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Read the entire file into memory&lt;/li&gt;
&lt;li&gt;  Parse it completely&lt;/li&gt;
&lt;li&gt;  Return the result&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That means memory usage scales with file size.&lt;/p&gt;

&lt;p&gt;Streaming flips the model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Read chunks&lt;/li&gt;
&lt;li&gt;  Process incrementally&lt;/li&gt;
&lt;li&gt;  Emit records progressively&lt;/li&gt;
&lt;li&gt;  Keep memory nearly constant&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That architectural difference matters far more than micro-optimizations.&lt;/p&gt;
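The streaming model above can be sketched in a few lines of plain JavaScript for NDJSON. This is only an illustration of the chunk-and-carry idea, not convert-buddy-js code:

```javascript
// Chunk-and-carry NDJSON parsing: only the current chunk plus an
// unfinished trailing line ever sit in memory.
function* parseNdjsonChunks(chunks) {
  let carry = "";
  for (const chunk of chunks) {
    carry += chunk;
    const lines = carry.split("\n");
    carry = lines.pop(); // the last piece may be an incomplete record
    for (const line of lines) {
      if (line.trim()) yield JSON.parse(line); // emit records progressively
    }
  }
  if (carry.trim()) yield JSON.parse(carry); // flush the final record
}

// A chunk boundary falling mid-record is handled by the carry buffer.
const ids = [...parseNdjsonChunks(['{"id":1}\n{"id', '":2}\n{"id":3}'])]
  .map(r => r.id);
console.log(ids.join(",")); // → 1,2,3
```

Whether the chunks come from a file stream, a network response, or a Web Worker message, the memory footprint stays proportional to the chunk size, not the file size.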


&lt;h2&gt;
  
  
  Why I Built a Streaming Converter
&lt;/h2&gt;

&lt;p&gt;I've been working on a project called &lt;strong&gt;convert-buddy-js&lt;/strong&gt;, a Rust-based&lt;br&gt;
streaming conversion engine compiled to WebAssembly and exposed as a&lt;br&gt;
JavaScript library.&lt;/p&gt;

&lt;p&gt;It supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  XML&lt;/li&gt;
&lt;li&gt;  CSV&lt;/li&gt;
&lt;li&gt;  JSON&lt;/li&gt;
&lt;li&gt;  NDJSON&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The core goal was simple:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Keep memory usage flat, even as file size grows.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Not "be the fastest library ever."\&lt;br&gt;
Just predictable. Stable. Bounded.&lt;/p&gt;


&lt;h2&gt;
  
  
  What Does "Low Memory" Actually Mean?
&lt;/h2&gt;

&lt;p&gt;Here's an example from benchmarks converting XML → JSON.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;File Size&lt;/th&gt;
&lt;th&gt;Memory Usage&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;xml-large&lt;/td&gt;
&lt;td&gt;convert-buddy&lt;/td&gt;
&lt;td&gt;38.41 MB&lt;/td&gt;
&lt;td&gt;~0 MB change&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;xml-large&lt;/td&gt;
&lt;td&gt;fast-xml-parser&lt;/td&gt;
&lt;td&gt;38.41 MB&lt;/td&gt;
&lt;td&gt;377 MB&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The difference is architectural.&lt;/p&gt;

&lt;p&gt;The streaming engine processes elements incrementally instead of&lt;br&gt;
constructing large intermediate structures.&lt;/p&gt;
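As an illustration of that idea (not the library's implementation), incremental record extraction can be sketched like this, assuming records live in a repeated `<item>` tag without attributes:

```javascript
// Incremental XML record extraction: complete elements are emitted as
// they arrive; only the unfinished tail stays in memory, so usage is
// bounded regardless of file size. Assumes the record tag has no attributes.
function* extractXmlRecords(chunks, tag = "item") {
  const open = `<${tag}>`;
  const close = `</${tag}>`;
  let carry = "";
  for (const chunk of chunks) {
    carry += chunk;
    let end;
    while ((end = carry.indexOf(close)) !== -1) {
      const start = carry.indexOf(open);
      yield carry.slice(start, end + close.length); // one complete element
      carry = carry.slice(end + close.length);      // drop what was emitted
    }
  }
}

const recs = [...extractXmlRecords(["<items><item>a</it", "em><item>b</item></items>"])];
console.log(recs.length); // → 2
```

A real engine tokenizes properly (attributes, CDATA, nesting), but the bounded-memory shape is the same: emit, discard, repeat.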


&lt;h2&gt;
  
  
  CSV → JSON Benchmarks
&lt;/h2&gt;

&lt;p&gt;I benchmarked against:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  PapaParse&lt;/li&gt;
&lt;li&gt;  csv-parse&lt;/li&gt;
&lt;li&gt;  fast-csv&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's a representative neutral case (1.26 MB CSV):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Throughput&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;convert-buddy&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;75.96 MB/s&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;csv-parse&lt;/td&gt;
&lt;td&gt;22.13 MB/s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PapaParse&lt;/td&gt;
&lt;td&gt;19.57 MB/s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;fast-csv&lt;/td&gt;
&lt;td&gt;15.65 MB/s&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In favorable large cases (13.52 MB CSV):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Throughput&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;convert-buddy&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;91.88 MB/s&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;csv-parse&lt;/td&gt;
&lt;td&gt;30.68 MB/s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PapaParse&lt;/td&gt;
&lt;td&gt;24.69 MB/s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;fast-csv&lt;/td&gt;
&lt;td&gt;19.68 MB/s&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In most CSV scenarios tested, the streaming approach resulted in roughly&lt;br&gt;
3x–4x higher throughput, with dramatically lower memory overhead.&lt;/p&gt;


&lt;h2&gt;
  
  
  Where Streaming Isn't Always Faster
&lt;/h2&gt;

&lt;p&gt;For tiny NDJSON files, native JSON parsing can be faster.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Throughput&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;NDJSON tiny&lt;/td&gt;
&lt;td&gt;Native JSON&lt;/td&gt;
&lt;td&gt;27.10 MB/s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NDJSON tiny&lt;/td&gt;
&lt;td&gt;convert-buddy&lt;/td&gt;
&lt;td&gt;10.81 MB/s&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That's expected.&lt;/p&gt;

&lt;p&gt;When files are extremely small, the overhead of streaming infrastructure&lt;br&gt;
can outweigh benefits.&lt;br&gt;
Native &lt;code&gt;JSON.parse&lt;/code&gt; is heavily optimized in engines and extremely&lt;br&gt;
efficient for small payloads.&lt;/p&gt;

&lt;p&gt;The goal here isn't to replace native JSON for everything.&lt;/p&gt;

&lt;p&gt;It's to handle realistic and large workloads predictably.&lt;/p&gt;
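One way to act on that trade-off is to dispatch on input size. This is a sketch, not part of convert-buddy-js, and the 1 MB cutoff is an assumed threshold rather than a measured crossover point:

```javascript
// Size-based dispatch (a sketch, not convert-buddy-js API).
// STREAM_THRESHOLD_BYTES is an assumed cutoff, not a measured crossover.
const STREAM_THRESHOLD_BYTES = 1 << 20; // 1 MB

function ndjsonToObjects(text, streamingParse) {
  if (text.length < STREAM_THRESHOLD_BYTES) {
    // Tiny payload: buffering is cheap and native JSON.parse is fastest.
    return text.split("\n").filter(line => line.trim()).map(JSON.parse);
  }
  // Large payload: hand off to a streaming engine for bounded memory.
  return streamingParse(text);
}

console.log(ndjsonToObjects('{"id":1}\n{"id":2}\n').length); // → 2
```

The point is not the exact threshold but the principle: pick the buffered path when buffering is harmless, and the streaming path when it isn't.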


&lt;h2&gt;
  
  
  NDJSON → JSON Performance
&lt;/h2&gt;

&lt;p&gt;For medium nested NDJSON datasets:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Throughput&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;convert-buddy&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;221.79 MB/s&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Native JSON&lt;/td&gt;
&lt;td&gt;136.84 MB/s&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That's where streaming and incremental transformation shine,&lt;br&gt;
especially when the workload involves structured transformation rather&lt;br&gt;
than just parsing.&lt;/p&gt;


&lt;h2&gt;
  
  
  What the Library Looks Like
&lt;/h2&gt;

&lt;p&gt;Install:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;convert-buddy-js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;convert&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;convert-buddy-js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;csv&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;name,age,city&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s1"&gt;Alice,30,NYC&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s1"&gt;Bob,25,LA&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s1"&gt;Carol,35,SF&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Configure only what you need. Here we output NDJSON.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;buddy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ConvertBuddy&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;outputFormat&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ndjson&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Stream conversion: records are emitted in batches.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;controller&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;buddy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;csv&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;recordBatchSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

  &lt;span class="c1"&gt;// onRecords can be async: await inside it if you need (I/O, UI updates, writes...)&lt;/span&gt;
  &lt;span class="na"&gt;onRecords&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;records&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;total&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Batch received:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;records&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Simulate slow async work (writing, rendering, uploading, etc.)&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

    &lt;span class="c1"&gt;// Report progress (ctrl.* is the most reliable live state)&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="s2"&gt;`Progress: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;ctrl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;recordCount&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; records, &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;throughputMbPerSec&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toFixed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt; MB/s`&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;

  &lt;span class="na"&gt;onDone&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;final&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Done:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;final&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;

  &lt;span class="c1"&gt;// Enable profiling stats (throughput, latency, memory estimates, etc.)&lt;/span&gt;
  &lt;span class="na"&gt;profile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Optional: await final stats / completion&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;final&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;controller&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;done&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Final stats:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;final&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It works in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Node&lt;/li&gt;
&lt;li&gt;  Browser&lt;/li&gt;
&lt;li&gt;  Web Workers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because the core engine is written in Rust and compiled to WebAssembly.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Rust + WebAssembly?
&lt;/h2&gt;

&lt;p&gt;Not because it's trendy.&lt;/p&gt;

&lt;p&gt;Because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Predictable memory behavior&lt;/li&gt;
&lt;li&gt;  Strong streaming primitives&lt;/li&gt;
&lt;li&gt;  Deterministic performance&lt;/li&gt;
&lt;li&gt;  Easier control over allocations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;WebAssembly allows that engine to run safely in the browser without&lt;br&gt;
server uploads.&lt;/p&gt;




&lt;h2&gt;
  
  
  When This Tool Makes Sense
&lt;/h2&gt;

&lt;p&gt;You probably don't need it if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Files are always &amp;lt; 1MB&lt;/li&gt;
&lt;li&gt;  You're already happy with JSON.parse&lt;/li&gt;
&lt;li&gt;  You don't care about memory spikes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It makes sense if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  You process large CSV exports&lt;/li&gt;
&lt;li&gt;  You handle XML feeds&lt;/li&gt;
&lt;li&gt;  You work with NDJSON streams&lt;/li&gt;
&lt;li&gt;  You need conversion in the browser without uploads&lt;/li&gt;
&lt;li&gt;  You want predictable memory footprint&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What I Learned Building It
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  Streaming is not just about speed; it's about stability.&lt;/li&gt;
&lt;li&gt;  Benchmarks should include the cases where you lose, not just the wins.&lt;/li&gt;
&lt;li&gt;  Native JSON.parse is hard to beat for tiny payloads.&lt;/li&gt;
&lt;li&gt;  Memory predictability matters more than peak throughput.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;There are many good parsing libraries in the JavaScript ecosystem.&lt;/p&gt;

&lt;p&gt;PapaParse is mature.&lt;br&gt;
csv-parse is robust.&lt;br&gt;
Native JSON.parse is extremely optimized.&lt;/p&gt;

&lt;p&gt;convert-buddy-js is simply an option focused on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Streaming&lt;/li&gt;
&lt;li&gt;  Low memory usage&lt;/li&gt;
&lt;li&gt;  Format transformation&lt;/li&gt;
&lt;li&gt;  Large file handling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If that matches your constraints, it may be useful.&lt;/p&gt;

&lt;p&gt;If not, the ecosystem already has excellent tools.&lt;/p&gt;

&lt;p&gt;If you're curious, the full benchmarks and scenarios are available in&lt;br&gt;
the repository.&lt;br&gt;
&lt;a href="https://www.npmjs.com/package/convert-buddy-js" rel="noopener noreferrer"&gt;convert-buddy-js — npm&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/brunohanss/convert-buddy" rel="noopener noreferrer"&gt;brunohanss/convert-buddy&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if you have workloads where streaming would make a difference, I’d be interested in feedback.&lt;br&gt;
You can get more information or try the interactive browser playground here: &lt;a href="https://convert-buddy.app/" rel="noopener noreferrer"&gt;https://convert-buddy.app/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>datascience</category>
      <category>node</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
