
Dario Mannu

From Plain Functions to Reactive Streams: a mindset change with a thousand benefits

Functions have been at the core of computer programming since the dawn of time: from the simplest arguments-in, results-out pure functions, to async functions that return future values, to reentrant generators that can produce any number of results over time, sync or async.

There is one thing all functions have in common: they lack any sort of event awareness.

That's bad news in a world where literally everything revolves around events and the chains of reactions executed in response to them: a button click, a file upload, a form submission, a page refresh, a ticking timer.

Streams can handle a multitude of events as input, perform some processing, and emit results. They can take the output of other streams as their own input, composing new derived streams.

A great analogy can be drawn with water pipelines or electrical circuits: reactive streams channel data just as pipelines channel water and circuits enable the controlled flow of electricity.

From simple function to simple stream

Let's take a function that returns its argument doubled:

// imperative
function double(n: number) {
  return n * 2;
}

An equivalent reactive stream that takes numbers and emits double the input value looks something like this:

// stream-oriented
import { Subject, map } from 'rxjs';

const double = new Subject<number>().pipe(
  map(n => n * 2)
);

You may have noticed that a stream takes a single input value, which seems a bit of a limitation given that functions can take many arguments. This can be mitigated by passing objects instead of plain values, and in practice, given that a stream should represent a single "thing", it won't even feel like a limitation.

The difference is small, subtle, but fundamental. In the former case it's an imperative call that triggers the function and handles the result; in the latter case, the stream is something that can be fed and subscribed to, ideally by the framework.

From async functions to streams

The difference between sync and async streams is not as critical as the one between functions and async functions. At the end of the day, you combine streams the same way, with the same operators (such as zip, combineLatest, withLatestFrom, etc.), in a seamless way.

Perhaps one little exception is when you use promises inside streams. Say you have a stream that calls an API and propagates its results. You can't just use the map operator, as it would not wait for the fetch() call to resolve. What you want instead is the switchMap operator, which also cancels any in-flight operation before starting a new one.

// imperative
function getData(id: number) {
  return fetch(`/data/${id}`).then(r => r.json());
}
// stream-oriented
import { Subject, switchMap } from 'rxjs';

const dataStream = new Subject<number>().pipe(
  switchMap(id => fetch(`/data/${id}`).then(r => r.json()))
);

Admittedly, this is an overly simplified example: there's much more to using fetch and handling errors, but for now we only want to focus on the structure and on how to get from an async function to a stream. There are collections of examples showing how to fetch data and handle retries and errors; perhaps we'll write about those in more detail some day.

So, what's the key difference, in summary? In the imperative paradigm, you "call" the getData() function and await the result. In the stream-oriented paradigm, you don't call anything: you only create pure streams and declare how you'd like them to be wired up. For example, a click on a button triggers the stream and its output goes into a <div> somewhere on the page. HTML templates serve this exact purpose.

Benefits of thinking in streams

Embracing streams over traditional functions brings several practical and architectural benefits:

1. Event awareness and composability. Streams integrate seamlessly with events, which are the backbone of interactive systems. They let you model complex workflows as a network of dataflows rather than a series of callbacks.

2. True asynchronicity. Streams naturally handle asynchronous data over time without forcing developers into nested .then() chains or async/await boilerplate.

3. Declarative clarity. Instead of describing how to react to each event, you define what should happen when it does. This leads to more maintainable, predictable, and testable code.

4. Error resilience and recovery. Operators like catchError, retry, or switchMap give you built-in tools to manage transient errors or cancellation logic gracefully.

5. Framework synergy. Modern UI and data frameworks—especially those aligned with reactive programming—can automatically connect streams to views, enabling state updates without explicit imperative glue code.

6. Better scalability. As systems grow, adding new event sources or transformations doesn't require changing existing logic. Streams let you plug in new flows like components in a circuit.

By shifting from functions to streams, you're no longer writing programs that merely execute from top to bottom; you're designing living systems that react, evolve, and adapt continuously in response to their environment.


Learn More

Top comments (1)

Martin Klingenberg

Love your work, and this is a great article. I enjoy RxJS, but I see how other devs struggle with it. My vision is that devs can realize that business logic can be separated from frameworks, and that frameworks should only be used for rendering.

If you like, I would love to hear your opinions on my React RxJS bindings. I have thought about making a package for them and writing some articles.