JavaScript's Array methods are eager. When you chain .filter().map().reduce(), each operation creates a new array in memory before moving to the next step.
For small datasets, this is fine. For large datasets, you run out of memory.
I built Iterflow to solve this: lazy evaluation with built-in statistics and windowing for data processing in TypeScript.
The Problem with Eager Evaluation
Here's what most of us write:
const results = data
.filter(row => row.status === 'active')
.map(row => transformRow(row))
.slice(0, 1000);
This looks clean, but here's what actually happens:
- filter() iterates through ALL items → creates a new array
- map() iterates through the filtered items → creates another array
- slice() finally takes the items we actually need
For 10 million rows, we're creating multiple arrays with millions of elements. That's why Node throws out-of-memory errors.
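You can watch the eagerness happen by counting callback invocations. A quick sketch (the row shape here is invented for illustration):
// Count how many times each callback runs when we only need 1,000 results
let filterCalls = 0;
let mapCalls = 0;
const rows = Array.from({ length: 1_000_000 }, (_, i) => ({ status: 'active', value: i }));
const firstThousand = rows
.filter(row => { filterCalls++; return row.status === 'active'; })
.map(row => { mapCalls++; return row.value * 2; })
.slice(0, 1000);
console.log(filterCalls, mapCalls); // 1000000 1000000: every row is touched to produce 1,000 results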
Lazy Evaluation with Iterflow
Here's the same operation with lazy evaluation:
import { iter } from '@mathscapes/iterflow';
const results = iter(data)
.filter(row => row.status === 'active')
.map(row => transformRow(row))
.take(1000)
.toArray();
The difference? Nothing executes until toArray().
Iterflow processes one item at a time through the entire pipeline. Need only 1000 items? It stops after finding 1000 matches, not after processing millions.
JavaScript has generators and the iterator protocol, but no fluent, chainable API for this pattern. So I built one.
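For intuition, here's roughly what that pattern looks like with bare generators. This is a sketch of the idea, not Iterflow's actual internals:
// Lazy filter/map/take built from plain generator functions
function* lazyFilter<T>(source: Iterable<T>, pred: (x: T) => boolean) {
for (const x of source) if (pred(x)) yield x;
}
function* lazyMap<T, U>(source: Iterable<T>, fn: (x: T) => U) {
for (const x of source) yield fn(x);
}
function* lazyTake<T>(source: Iterable<T>, n: number) {
if (n <= 0) return;
let taken = 0;
for (const x of source) {
yield x;
if (++taken >= n) return; // stop pulling from upstream once we have n items
}
}
// Each item flows through the whole pipeline before the next one is pulled
const evens = [...lazyTake(lazyMap(lazyFilter([1, 2, 3, 4, 5, 6], n => n % 2 === 0), n => n * 10), 2)];
console.log(evens); // [20, 40], and the 5 and 6 are never read at all
The nested calls are exactly what a fluent API hides: iter(data).filter(...).map(...).take(2) reads in the order the stages actually run.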
What Makes Iterflow Different?
There are other iterator libraries for JavaScript (iter-tools, iterare, lazy.js). Iterflow focuses on:
1. Built-in Statistical Operations
// Each statistical method consumes the iterator, so build a fresh chain per stat:
const avg = iter(measurements).filter(m => m.valid).map(m => m.value).mean();
const spread = iter(measurements).filter(m => m.valid).map(m => m.value).variance();
// Or collect the cleaned values once if you need multiple stats:
const values = iter(measurements)
.filter(m => m.valid)
.map(m => m.value)
.toArray();
const stats = {
avg: iter(values).mean(),
spread: iter(values).variance(),
range: [iter(values).min(), iter(values).max()]
};
No more importing separate stats libraries. Available: sum(), mean(), median(), variance(), min(), max().
2. Windowing and Chunking
const prices = [100, 102, 101, 105, 107, 110];
// Rolling average with sliding windows
const rolling = iter(prices)
.window(3)
.map(w => iter(w).mean())
.toArray(); // [101, 102.67, 104.33, 107.33]
// Process in fixed-size batches
iter(records)
.chunk(100)
.forEach(batch => processBatch(batch));
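To make the batching concrete, here's a tiny sketch using the same chunk() and forEach() calls, with console.log standing in for a real batch handler and assuming each batch arrives as a plain array (as processBatch above suggests):
import { iter } from '@mathscapes/iterflow';
// Six records split into batches of two
iter(['a', 'b', 'c', 'd', 'e', 'f'])
.chunk(2)
.forEach(batch => console.log(batch));
// ['a', 'b']
// ['c', 'd']
// ['e', 'f']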
3. TypeScript-First Design
Full type inference throughout the chain:
const numbers = iter([1, 2, 3, 4, 5])
.filter(n => n > 2) // Iterflow<number>
.map(n => n.toString()) // Iterflow<string>
.toArray(); // string[]
4. Works with Any Iterable
Arrays, Sets, Maps, generators, your custom iterables. All work seamlessly:
// Generators for infinite sequences
function* fibonacci() {
let [a, b] = [0, 1];
while (true) {
yield a;
[a, b] = [b, a + b];
}
}
const first20Fibs = iter(fibonacci())
.take(20)
.toArray();
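Sets and Maps feed the same pipeline with no adapters. A couple of small examples (the data is invented):
import { iter } from '@mathscapes/iterflow';
// A Set deduplicates before the pipeline even starts
const uniqueScores = new Set([72, 88, 88, 95, 72]);
const topScores = iter(uniqueScores)
.filter(score => score >= 80)
.toArray(); // [88, 95]
// A Map iterates as [key, value] pairs
const inventory = new Map([['apples', 4], ['pears', 0], ['plums', 7]]);
const inStock = iter(inventory)
.filter(([, count]) => count > 0)
.map(([name]) => name)
.toArray(); // ['apples', 'plums']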
5. Zero Dependencies, Tiny Bundle
The entire library is 2.2KB gzipped (9KB uncompressed). Zero runtime dependencies.
Example: Processing Large Datasets
Generator functions make this pattern powerful. Here's how to process millions of log entries without loading them all into memory:
import { iter } from '@mathscapes/iterflow';
// Simulate reading from a huge log file (generator = lazy)
// Note: Current version supports sync iterators; async support coming soon
function* readLogLines() {
// In production: stream from file, database cursor, etc.
for (let i = 1; i <= 10_000_000; i++) {
yield `${new Date().toISOString()} [${i % 100 === 0 ? 'ERROR' : 'INFO'}] Response time: ${Math.random() * 1000}ms`;
}
}
const avgErrorTime = iter(readLogLines())
.filter(line => line.includes('ERROR'))
.map(line => {
const match = line.match(/Response time: ([\d.]+)ms/);
return match ? parseFloat(match[1]) : 0;
})
.mean();
console.log(`Average error response time: ${avgErrorTime.toFixed(2)}ms`);
// Non-ERROR lines are dropped by filter() and never reach map() or mean()
The generator yields one line at a time, only the matching entries flow through to mean(), and memory usage stays constant regardless of dataset size.
When Should You Use Iterflow?
Use Iterflow when:
- Processing large datasets that don't fit in memory
- Building ETL pipelines with filter/map/reduce chains
- Computing statistics on data streams (mean, median, variance)
- Working with generators or infinite sequences
- Looking for Python itertools-style patterns in TypeScript
Skip it when:
- You're working with small arrays (<1000 items)
- You need the full array materialized anyway
Get Started
npm install @mathscapes/iterflow
import { iter } from '@mathscapes/iterflow';
// That's it. Start chaining.
const result = iter([1, 2, 3, 4, 5])
.filter(n => n % 2 === 0)
.map(n => n * 10)
.sum(); // 60
Status
Released v1.0.0-rc2 on January 4, 2026. Zero dependencies, 2.2KB gzipped, 128 tests passing.
Looking for feedback from developers working with large datasets, ETL pipelines, or functional patterns in TypeScript.
GitHub: mathscapes/iterflow
Issues/feedback: GitHub Issues