Introduction: The Challenge of Modern Data
JavaScript has come a long way from its humble beginnings as a scripting language for small web page interactions. Today, it powers everything from massive web applications to high-performance backend servers. But as the scale and complexity of our applications have grown, so too have the demands on how we handle data.
Consider these scenarios:
- Loading a 50MB, or even 1GB JSON file in a browser for visualization.
- Processing a multi-gigabyte log file on a Node.js server.
- Uploading or downloading large files, possibly compressed, over the network.
- Transforming data on the fly — decompressing, decoding, parsing, or even analyzing it — without ever loading it all into memory.
If you’ve ever tried to handle large data in JavaScript, you’ve probably run into the limitations of the classic APIs: callbacks, Promises, and even the venerable `JSON.parse()` and `JSON.stringify()`. They’re great for small data, but quickly fall apart when you hit the big leagues. Out-of-memory errors, UI freezes, and slow, unresponsive apps are all too common — even with files in the tens or hundreds of megabytes.
So how did we get here? And what’s the solution? The answer, as you might suspect, is streams — but not just any streams. In this article, we’ll trace the evolution from callbacks to modern Web Streams, explore how they fit into the broader JavaScript ecosystem, and set the stage for building robust, scalable, and universal data pipelines.
1. Callbacks, Promises, and the Need for Streams
Let’s start with the basics. For years, JavaScript’s answer to asynchronous data was the callback. You’d read a file, fetch data, or process an event, and pass in a function to be called when the data was ready.
const fs = require('fs');

fs.readFile('large-file.json', (err, data) => {
  if (err) throw err;
  processData(data);
});
This works fine for small files, but what if the file is huge? You’re forced to load the entire thing into memory before you can do anything with it. Not only is this inefficient, it’s dangerous — one big file and your process could crash.
Promises made things a bit nicer, but didn’t solve the core problem:
fs.promises.readFile('large-file.json')
.then(data => processData(data));
Still, you’re loading everything at once. For truly large data, you need to process it incrementally — as it arrives, chunk by chunk, without ever holding the whole thing in memory.
2. Node.js Streams: The First Revolution
Node.js was among the first to tackle this problem head-on with its Stream API. Streams let you read or write data piece by piece, as it becomes available. The classic example is reading a file:
const fs = require('fs');
const stream = fs.createReadStream('large-file.json');
stream.on('data', chunk => {
// Process each chunk as it arrives
});
stream.on('end', () => {
// All done!
});
This is a huge improvement. Now, you can process arbitrarily large files without running out of memory. Streams became the backbone of Node.js’s I/O model — files, network sockets, HTTP requests and responses, even process stdin/stdout.
Node.js streams come in several flavors:
- Readable: Data flows out (e.g., reading a file)
- Writable: Data flows in (e.g., writing to a file)
- Duplex: Both readable and writable (e.g., a TCP socket)
- Transform: A duplex stream that transforms data as it passes through (e.g., compression)
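To make the Transform flavor concrete, here is a minimal sketch (not from the official docs, just an illustration) of a custom stream that upper-cases whatever text passes through it:

```javascript
const { Transform } = require('stream');

// A tiny Transform stream: each chunk comes in, gets upper-cased, and flows on.
const upperCase = new Transform({
  transform(chunk, encoding, callback) {
    // chunk is a Buffer by default; convert it to a string before transforming
    callback(null, chunk.toString().toUpperCase());
  }
});

// Pipe stdin through the transform and out to stdout.
process.stdin.pipe(upperCase).pipe(process.stdout);
```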
Streams are event emitters, with events like `data`, `end`, `error`, and `close`. You can also pipe streams together:
const fs = require('fs');
const zlib = require('zlib');

fs.createReadStream('input.txt')
  .pipe(zlib.createGzip())
  .pipe(fs.createWriteStream('output.txt.gz'));
However, Node.js streams aren’t perfect. They have a reputation for being tricky — subtle bugs, confusing modes (flowing vs paused), and a somewhat idiosyncratic API. They’re also Node.js-specific — not available in browsers, and not always easy to polyfill.
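To give a feel for those two modes, here is a small sketch (file name assumed) of reading the same file in flowing mode versus paused mode:

```javascript
const fs = require('fs');

// Flowing mode: attaching a 'data' listener switches the stream on,
// and chunks are pushed to you as fast as they arrive.
fs.createReadStream('input.txt').on('data', chunk => {
  // Process chunk
});

// Paused mode: you wait for 'readable' and pull chunks explicitly with read().
const paused = fs.createReadStream('input.txt');
paused.on('readable', () => {
  let chunk;
  while ((chunk = paused.read()) !== null) {
    // Process chunk
  }
});
```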
3. The Rise of Protocols: Iterables and Async Iterables
Meanwhile, JavaScript itself was evolving. ES6 introduced the iterable protocol — objects that can be looped over with `for...of`. ES2018 brought async iterables, which let you loop over data that arrives asynchronously, using `for await...of`.
// Synchronous iterable
for (const item of [1, 2, 3]) {
console.log(item);
}
// Asynchronous iterable
async function* asyncNumbers() {
yield 1;
await new Promise(r => setTimeout(r, 100));
yield 2;
yield 3;
}
for await (const item of asyncNumbers()) {
console.log(item);
}
This was a game-changer. Now, any object that implements the right protocol (`Symbol.iterator` or `Symbol.asyncIterator`) can be consumed in a standard way, regardless of where the data comes from.
Node.js quickly adopted this pattern. Since Node.js 10 (April 2018), `Readable` streams implement `Symbol.asyncIterator`, so you can do:
const fs = require('fs');
async function processFile(filename) {
for await (const chunk of fs.createReadStream(filename)) {
// Process each chunk
}
}
This is not only more idiomatic, it’s less error-prone — no more juggling event listeners, just a simple loop.
`for await...of` prefers `Symbol.asyncIterator`, but if it’s not present, falls back to `Symbol.iterator`. This means you can use `for await...of` with both async and sync iterables. This subtlety is often overlooked, but it’s what makes protocol-based APIs so flexible.
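For example, the exact same loop can consume a plain array (a sync iterable), and any promises it contains are awaited along the way. A small sketch:

```javascript
async function demo() {
  // [1, Promise.resolve(2), 3] only implements Symbol.iterator,
  // yet for await...of happily consumes it, awaiting each value.
  for await (const n of [1, Promise.resolve(2), 3]) {
    console.log(n); // 1, then 2, then 3
  }
}

demo();
```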
4. Enter Web Streams: The New Standard
But what about the browser? For years, there was no standard streaming API in the browser. You had to use callbacks, Promises, or hacky workarounds like `FileReader` for files, or XHR for network data.
That changed with the introduction of the Web Streams API — a set of standard, promise-based streaming primitives now available in all major browsers (with some features still rolling out). The Web Streams API brings three main classes:
- ReadableStream: For reading data
- WritableStream: For writing data
- TransformStream: For transforming data as it passes through
These are designed to be universal — the same API in browsers, Node.js (since v16.5, July 2021), Deno, and Bun.
Here’s a minimal example of a ReadableStream:
const stream = new ReadableStream({
start(controller) {
controller.enqueue('hello');
controller.enqueue('world');
controller.close();
}
});
const reader = stream.getReader();
reader.read().then(console.log); // { value: 'hello', done: false }
reader.read().then(console.log); // { value: 'world', done: false }
reader.read().then(console.log); // { value: undefined, done: true }
But you rarely use the low-level API directly. Instead, you consume streams with async iteration — if your environment supports it.
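Here is a sketch of what that looks like, assuming the environment supports async iteration of `ReadableStream` (more on support just below). It builds a fresh stream with a hypothetical name, since the one above is already locked by its reader:

```javascript
// A fresh stream, since the stream above is locked by its reader.
const greetings = new ReadableStream({
  start(controller) {
    controller.enqueue('hello');
    controller.enqueue('world');
    controller.close();
  }
});

for await (const chunk of greetings) {
  console.log(chunk); // 'hello', then 'world'
}
```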
Unlike Node.js streams, Web Streams did not support `Symbol.asyncIterator` from the start. This feature was proposed in 2017, but only landed in browsers much later:
- Firefox: February 2023 (v110)
- Chromium: April 2024 (v124)
- Safari: Not yet supported as of mid-2025 (see bug 194379)
Node.js, Deno, and Bun support it, but in browsers, you must check compatibility. If it’s not available, you can always fall back to using `.getReader()`.
// Fallback for environments without Symbol.asyncIterator
const reader = stream.getReader();
while (true) {
const { value, done } = await reader.read();
if (done) break;
// Process value
}
Notice the pattern: everything is built on protocols. If an object implements `Symbol.asyncIterator`, you can `for await...of` it. This is the glue that lets streams, iterators, and generators all work together.
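As a small sketch of that glue, any object that implements the protocol (here a hypothetical `ticker`) can be consumed exactly like a stream:

```javascript
// A hypothetical object that implements Symbol.asyncIterator via an async generator.
const ticker = {
  async *[Symbol.asyncIterator]() {
    for (let i = 1; i <= 3; i++) {
      await new Promise(resolve => setTimeout(resolve, 100));
      yield `tick ${i}`;
    }
  }
};

// It isn't a stream, but for await...of doesn't care.
for await (const tick of ticker) {
  console.log(tick); // 'tick 1', 'tick 2', 'tick 3'
}
```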
5. Timeline: How Streaming Features Arrived
To appreciate how far we’ve come, here’s a timeline of key streaming features in JavaScript environments:
- May 2015: First browser implementation of Web Streams (Chromium 43)
- April 2018: Node.js 10 adds `Readable[Symbol.asyncIterator]`
- January 2019: Firefox completes initial Web Streams support
- May 2020: Deno 1.0 launches with Web Streams support
- July 2021: Node.js 16.5 ships experimental Web Streams, including `ReadableStream[Symbol.asyncIterator]`
- April 2022: Node.js 18.0 stabilizes Web Streams and `fetch()`
- July 2022: Bun 0.1.1 launches with Web Streams support
- February 2023: Firefox 110 adds `ReadableStream[Symbol.asyncIterator]`
- April 2024: Chromium/Edge 124 adds `ReadableStream[Symbol.asyncIterator]`
- 2023–2025: `ReadableStream.from()` lands in Firefox, Node.js 20.6, Deno, and Bun (not yet in Chromium or Safari)
This drawn-out process means that, even today, you must be careful about which features are available in your target environment.
6. Node.js Streams vs Web Streams: A Comparison
At this point, you might be wondering: how do Node.js streams and Web Streams relate? Are they compatible? Which should you use?
Let’s break it down.
| Feature | Node.js Streams (`node:stream`) | Web Streams API |
| --- | --- | --- |
| Class Names | `Readable`, `Writable`, `Duplex`, `Transform` | `ReadableStream`, `WritableStream`, `TransformStream` |
| Duplex | `Duplex` stream (readable & writable) | No `DuplexStream`, but you can compose a `ReadableStream` and a `WritableStream` |
| API Style | EventEmitter-based, protocol-driven (since Node 10) | Promise-based, protocol-driven |
| Iteration | `for await...of` (since Node 10) | `for await...of` (modern browsers, Node.js 16.5+); `getReader()` always available |
| Piping | `.pipe()` | `.pipeThrough()`, `.pipeTo()` |
| Conversion | `Readable.from(iterable)`, `fromWeb()`, `toWeb()` (Node.js 17+) | `ReadableStream.from()` (Firefox, Node.js 20.6+, Deno, Bun); manual polyfill elsewhere |
| Buffering | `highWaterMark`, `objectMode` | `QueuingStrategy`, BYOB (bring your own buffer) |
| Error Handling | Events (`error`) | Promise rejection, `controller.error()` |
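To make the piping row concrete, here is a minimal sketch of the Web Streams style, building a source, a transform, and a sink by hand:

```javascript
// Source: a ReadableStream that emits two strings.
const source = new ReadableStream({
  start(controller) {
    controller.enqueue('hello');
    controller.enqueue('world');
    controller.close();
  }
});

// Transform: upper-case each chunk as it passes through.
const upperCase = new TransformStream({
  transform(chunk, controller) {
    controller.enqueue(chunk.toUpperCase());
  }
});

// Sink: a WritableStream that just logs what it receives.
const sink = new WritableStream({
  write(chunk) {
    console.log(chunk); // 'HELLO', then 'WORLD'
  }
});

// Top-level await: run this in a module context (or wrap in an async function).
await source.pipeThrough(upperCase).pipeTo(sink);
```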
Symmetry and differences:
- If the class name ends with "Stream", it’s a Web Stream; if not, it’s a Node.js Stream.
- Node.js has a built-in `Duplex` stream; Web Streams do not, but you can compose a `ReadableStream` and a `WritableStream` to achieve similar functionality (we’ll cover this in a future article).
- Node.js streams are event-based, but since Node 10, they’re also protocol-driven via `Symbol.asyncIterator`.
- Web Streams are promise-based and protocol-driven in most environments today (except Safari).
Interoperability
Suppose you have a Web Stream and want to use it as a Node.js stream:
const { Readable } = require('stream');
const webStream = getWebStreamSomehow();
const nodeStream = Readable.from(webStream); // or Readable.fromWeb(webStream)
Or, going the other way (Node.js stream to Web Stream):
const fs = require('fs');
const { Readable } = require('stream');

const nodeStream = fs.createReadStream('file.txt');
const webStream = Readable.toWeb(nodeStream); // Node.js 17+
And in environments with `ReadableStream.from()` (Firefox, Node.js 20.6+, Deno, Bun):
const webStream = ReadableStream.from(nodeStream);
The difference between `Readable.from()` and `Readable.fromWeb()` lies in the options passed as the second parameter. `Readable.from()` and `ReadableStream.from()` accept any iterable value, not just streams. If `ReadableStream.from()` isn’t available, you can always write a polyfill (we’ll show how in the next article).
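For instance, here is a small sketch of `Readable.from()` fed with ordinary iterables rather than streams (the values are made up for illustration):

```javascript
const { Readable } = require('stream');

// From a plain array (a sync iterable).
Readable.from(['a', 'b', 'c']).on('data', chunk => {
  console.log(chunk); // 'a', 'b', 'c'
});

// From an async generator (an async iterable).
async function* lines() {
  yield 'first line\n';
  yield 'second line\n';
}
Readable.from(lines()).pipe(process.stdout);
```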
7. Why Protocols Matter: The Universal Abstraction
The beauty of this evolution is that protocols — not classes — are the real foundation. If your object implements the right protocol, it can be consumed as a stream, an iterator, or a generator, regardless of where it came from.
This means you can write code that works everywhere:
- In Node.js, Deno, Bun, or the browser
- With files, network responses, blobs, or custom data sources
- Asynchronously, and synchronously in some cases
For example, a function that processes data from any source:
async function processChunks(iterable) {
for await (const chunk of iterable) {
// Process each chunk
}
}
This will work with:
- A Node.js Readable stream
- A Web ReadableStream (in environments that support async iteration)
- An async generator
- Any object implementing `Symbol.asyncIterator`, or even just `Symbol.iterator` (since `for await...of` falls back to sync iterables)
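A usage sketch, with a made-up file name, showing the same function handling three very different sources:

```javascript
const fs = require('fs');

async function* generated() {
  yield 'one';
  yield 'two';
}

// Top-level await: run in a module context, or wrap in an async function.
await processChunks(fs.createReadStream('data.txt')); // Node.js Readable stream
await processChunks(generated());                     // async generator
await processChunks(['a', 'b', 'c']);                 // plain sync iterable
```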
While this protocol-driven approach is powerful, keep in mind that browser support for `ReadableStream[Symbol.asyncIterator]` is still catching up. As of mid-2025, only Firefox and Chromium-based browsers support it; Safari does not. For maximum compatibility, you may need to use `.getReader()`.
What’s Next
Web Streams, built on protocol-driven abstractions, are now the backbone for scalable, universal data handling in JavaScript — across browsers and servers alike. In the next articles, we’ll move from fundamentals to hands-on practice: consuming data from every source, building transformation pipelines, transferring streams between contexts, and mastering streaming uploads and downloads.
Next up:
Part 2: Consuming Data with Web Streams — Files, Blobs, Responses, and Beyond (coming soon)