June 27, 2025
Reactive programming has emerged as a powerful paradigm for handling dynamic data flows and complex event-driven applications. But while its declarative style brings clarity and flexibility, it often comes at the cost of performance.
A fascinating approach to bridging this performance gap is explored in the research work “Reactive Ruby” by Moritz Viering (PDF). In this article, we’ll explore the core concepts of reactive programming, summarize this innovative thesis, and show how we can bring its ideas into real-world Ruby development with examples.
What Is Reactive Programming?
Reactive programming enables developers to define what a program should do when data changes — not how to propagate those changes.
A quick example:
a = 1
b = 2
c = a + b # => 3
a = 2
puts c # Still 3 in traditional code
In reactive programming, c would be automatically updated to 4 when a changes.
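A low-tech way to get a taste of this in plain Ruby is to make c a derived computation rather than a stored snapshot — for example, a lambda that recomputes on every read. This is a pull-based sketch, not true change propagation, but it shows the "c stays consistent with a" behavior:

```ruby
a = 1
b = 2
c = -> { a + b } # c is a derivation, not a snapshot

c.call # => 3
a = 2
c.call # => 4 -- the closure sees the updated binding
```

Real reactive systems invert this: instead of recomputing on read, they push changes to dependents when a write happens.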
The Performance Problem
Reactive systems often build dependency graphs between data nodes. When a change happens, the system must propagate updates, ensuring all dependent values stay consistent. Compared to the well-known Observer pattern, this can introduce:
- Additional indirection
- Execution order guarantees (e.g., glitch freedom)
- Runtime overhead
The paper shows that some FRP implementations are up to 50x slower than equivalent Observer-based code.
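To get a rough feel for the indirection cost — this is an illustrative micro-benchmark, not a reproduction of the thesis's measurements, and a callback list is far simpler than a full FRP runtime — we can time the same arithmetic done directly versus dispatched through a subscriber list:

```ruby
require 'benchmark'

N = 200_000

# Direct computation: no indirection at all.
direct = Benchmark.realtime do
  sum = 0
  N.times { |i| sum += i + 1 }
end

# Observer-style: the same work routed through a list of callbacks,
# the kind of indirection a dependency graph adds on every update.
subscribers = []
sum = 0
subscribers << ->(v) { sum += v + 1 }
observed = Benchmark.realtime do
  N.times { |i| subscribers.each { |cb| cb.call(i) } }
end

puts format('direct: %.4fs  via callbacks: %.4fs', direct, observed)
```

Absolute numbers will vary by machine and Ruby version; the point is that every layer of dispatch between a change and its dependents adds up.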
Reactive Ruby: A New Hope
Viering’s thesis proposes Reactive Ruby, a new language layer built using:
- TruffleRuby (Ruby interpreter on the Graal VM)
- Graal JIT compiler for aggressive runtime optimization
By embedding reactive constructs inside TruffleRuby and leveraging just-in-time compilation, the system achieves performance comparable to optimized Observer-based code — a major breakthrough.
Source: Moritz Viering, “Reactive Ruby”, reactiveruby.pdf
Practical Implementations in Ruby
Let’s explore how we can experiment with similar ideas using Ruby and reactive constructs.
1. Reactive Variable Propagation
class ReactiveVar
  attr_accessor :value, :subscribers

  def initialize(value)
    @value = value
    @subscribers = []
  end

  # Assign a new value and push it to every subscriber.
  def set(val)
    @value = val
    notify
  end

  # Register a block to run whenever the value changes.
  def subscribe(&block)
    @subscribers << block
  end

  def notify
    @subscribers.each { |s| s.call(@value) }
  end
end

a = ReactiveVar.new(1)
b = ReactiveVar.new(2)
c = 0

# Recompute c whenever either input changes.
a.subscribe { |val| c = val + b.value }
b.subscribe { |val| c = a.value + val }

a.set(3)
puts c # => 5
2. Building a Dependency Graph
class ReactiveNode
  attr_reader :compute, :dependencies, :dependents
  attr_accessor :value

  def initialize(dependencies = [], &compute)
    @dependencies = dependencies
    @compute = compute
    @dependents = []
    dependencies.each { |dep| dep.add_dependent(self) }
    update
  end

  # Recompute this node's value, then push the change downstream.
  def update
    @value = compute.call(*dependencies.map(&:value))
    @dependents.each(&:update)
  end

  # Overwrite a source node's value directly and propagate.
  def set(val)
    @value = val
    @dependents.each(&:update)
  end

  def add_dependent(dep)
    @dependents << dep
  end
end
This simple structure mirrors how Reactive Ruby tracks changes and updates nodes efficiently.
3. Avoiding Glitches
A glitch happens when an update propagates too early, creating inconsistent intermediate states. This can be avoided using topological sorting of the dependency graph before propagation — an idea also explored in the thesis.
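A minimal sketch of that idea, using a hypothetical Node struct rather than anything from the thesis: in a diamond-shaped graph where b and c both depend on a, and d depends on both, propagating in topological order guarantees d recomputes only after b and c have settled — it never sees a mix of old and new inputs.

```ruby
require 'set'

# Hypothetical node: downstream dependents, current value, recompute proc.
Node = Struct.new(:name, :dependents, :value, :compute)

# Depth-first topological sort over the dependents edges.
def topo_sort(start)
  order = []
  visited = Set.new
  visit = lambda do |n|
    next if visited.include?(n)
    visited << n
    n.dependents.each { |d| visit.call(d) }
    order.unshift(n)
  end
  visit.call(start)
  order
end

# Diamond: a feeds b and c, which both feed d.
a = Node.new(:a, [], 1, nil)
b = Node.new(:b, [], nil, nil)
c = Node.new(:c, [], nil, nil)
d = Node.new(:d, [], nil, nil)
b.compute = -> { a.value + 1 }
c.compute = -> { a.value * 2 }
d.compute = -> { b.value + c.value }
a.dependents = [b, c]
b.dependents = [d]
c.dependents = [d]

a.value = 10
topo_sort(a).each { |n| n.value = n.compute.call if n.compute }
puts d.value # => 31, computed from fully updated b and c
```

With naive recursive propagation, d would be recomputed twice — once after b updates (while c still holds its old value) and again after c — briefly exposing an inconsistent result.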
Takeaways
Reactive Ruby shows that the performance of reactive systems can be drastically improved when the implementation:
- Is paired with a JIT compiler like Graal
- Is built on an AST interpreter framework like Truffle
- Manages update propagation carefully
It also proves that reactive abstractions don’t need to sacrifice speed if the runtime is optimized for them.
Reference
Moritz Viering, “Reactive Ruby”, 2015. http://mviering.de/reactiveruby.pdf
Final Thoughts
Reactive paradigms simplify many problems — UI reactivity, async event handling, and data pipelines. As shown in Reactive Ruby, with the right implementation strategy, we can build powerful and fast reactive systems, even in a dynamic language like Ruby.
Let’s rethink how we implement reactivity, not just for flexibility — but also for performance.