In the previous post, I used a scene from The Eternaut (Favalli starting an old car after an EMP) to introduce the "humble monolith" and how it differs from the "majestic monolith". I also argued that the monolith vs. microservices debate is really about a "hidden" problem: rigidity.
What if the same business logic could adopt not one but many runtime shapes? I realised that the functional and reactive pipelines underpinning The Pipeline Framework, paired with TPF's own architecture, could actually give users a choice: should I go for a monolith? Or deploy steps as microservices? And then I thought: why have to choose at all?
Introducing: runtime topologies in The Pipeline Framework.
The Pipeline Framework treats the business flow as the stable asset, and the runtime topology as something that can change over time and even co-exist across environments (e.g. a monolith locally, something more distributed in production). Let's take a look at the currently supported runtime topology shapes.
None of these shapes is inherently “better.” They are just different ways of balancing:
- how much change you can isolate
- how much infrastructure you want to operate
- how much latency you can afford
- where your security boundaries sit
- how teams are organised
And yet, in most systems, choosing one of these ends up locking you in. TPF avoids that lock-in by adapting the inputs and outputs of each step at build time, in an elegant way, to match the runtime shape of choice.
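To make that concrete, here is a minimal sketch of the idea; the `Step` and `LocalAdapter` names are my own illustration, not TPF's actual API. A step declares only its input/output contract, and the build wraps it in whatever adapter the chosen topology requires. In a monolith, that adapter is a plain in-process call:

```java
// Illustrative sketch only; these names are assumptions, not TPF's API.
// A step declares nothing but its input/output contract...
interface Step<I, O> {
    O apply(I input);
}

// ...and the build projects it into a transport adapter.
// In a monolith, the "adapter" is just a direct method call: no network hop.
final class LocalAdapter<I, O> {
    private final Step<I, O> step;

    LocalAdapter(Step<I, O> step) {
        this.step = step;
    }

    O invoke(I input) {
        return step.apply(input); // in-process, same blast radius
    }
}
```

Swapping the topology means swapping the adapter; the step itself stays untouched.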
Monolith
Everything runs in-process. Steps call each other directly, and plugins (persistence, caching) live in the same runtime.
This is the simplest possible setup. No network hops, minimal operational overhead, very direct debugging.
The trade-off is clear: everything shares the same blast radius.
Pipeline Runtime
Here, the orchestrator is separated, but the pipeline steps still run in a grouped runtime. Plugins are also externalised as shared services.

This tends to be a very practical middle ground.
You get a clear ingress point, reduced exposure of internal components, and some separation of concerns — without fully embracing distributed complexity.
Modular / Distributed
Each step becomes independently deployable, and plugins remain shared services rather than being embedded per step.

This gives you strong isolation, independent scaling, and clearer ownership boundaries.
It also introduces the usual trade-offs: more infrastructure, more network hops, and more operational complexity.
In the framework, the pipeline itself is defined independently of where it runs. Runtime mapping (PipelineRuntimeMapping and its resolver) determines placement, not behavior. In the csv-payments example, the same pipeline can run as a monolith, inside a pipeline runtime, or in a more modular layout — without rewriting the business logic.
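PipelineRuntimeMapping and its resolver are real TPF concepts, but the snippet below is only a simplified sketch of the resolution idea; the enum and class here are illustrative, not the framework's implementation:

```java
// Simplified illustration of topology resolution; not TPF's actual code.
enum Topology { MONOLITH, PIPELINE_RUNTIME, MODULAR }

final class TopologyResolver {
    // Placement is derived from configuration; the business flow never changes.
    Topology resolve(String configured) {
        return switch (configured.toLowerCase()) {
            case "monolith"         -> Topology.MONOLITH;
            case "pipeline-runtime" -> Topology.PIPELINE_RUNTIME;
            case "modular"          -> Topology.MODULAR;
            default -> throw new IllegalArgumentException(
                "unknown topology: " + configured);
        };
    }
}
```

The point is the direction of the dependency: the resolver reads configuration and decides placement, while the pipeline definition never learns which topology it landed in.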
That separation shows up elsewhere too. Step contracts define intent, mappers isolate boundaries, and services remain focused on transformation logic. Transport concerns don’t leak into the core, which means the system doesn’t become hostage to how components communicate.
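As an illustration of that boundary isolation (the types and mapper below are hypothetical, not taken from TPF), the service logic only ever sees domain types, while a mapper translates the wire shape at the edge:

```java
// Hypothetical types for illustration; not part of TPF.
record PaymentDto(String amount, String currency) {} // wire/transport shape
record Payment(long cents, String currency) {}       // domain shape

// The mapper is the only place that knows both shapes,
// so transport concerns stop at the boundary.
final class PaymentMapper {
    Payment toDomain(PaymentDto dto) {
        long cents = Math.round(Double.parseDouble(dto.amount()) * 100);
        return new Payment(cents, dto.currency());
    }
}
```

If the wire format changes (JSON today, protobuf tomorrow), only the DTO and the mapper move; the service keeps transforming `Payment` values.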
Even at the generation level, there's a single semantic model that gets projected into different execution modes. Local calls, gRPC, REST, protobuf-over-HTTP: they're not treated as fundamentally different architectures, but as different ways of expressing the same flow.
And importantly, this isn’t just theoretical. The same reference system is built and tested in more than one topology, so the idea of switching shapes is exercised, not just claimed.
While the three topologies above are actual values in a YAML config, none of this means architecture becomes automatic or point-and-click: you still choose your topology deliberately, and you still maintain the build.
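For a feel of what that looks like (the key names here are my assumption, not TPF's actual schema), the choice could be as small as one line of configuration:

```yaml
# Hypothetical shape, not TPF's real schema.
pipeline: csv-payments
topology: monolith   # alternatives: pipeline-runtime, modular
```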
But that choice stops being irreversible: if the business flow can outlive the topology, then moving from a monolith to something more distributed (or even back again) stops being a rewrite and becomes a transition.
And that’s a very different place to be.