Nithin Bharadwaj
**Mastering Project Reactor: Advanced Techniques for Production-Ready Reactive Java Applications**


Let's talk about what happens when your application needs to handle lots of things happening at once. Imagine a busy restaurant kitchen. Orders come in fast, the grill is sizzling, and plates need to go out. If the chef tries to cook each order completely before starting the next, the dining room would be empty. Everyone gets their food late. A good kitchen works on many orders at different stages simultaneously. That's the heart of reactive programming. It's a way of building software that handles many tasks concurrently, efficiently, and without falling over when things get busy.

In Java, Project Reactor gives us the tools to build this kind of system. You might already know about Flux for multiple items and Mono for zero or one item. But to build a truly robust kitchen—I mean, application—you need more than just the basic recipes. Let's walk through some techniques that make your reactive code ready for the real world.

First, let's address a common problem: what if the waiter brings orders to the kitchen faster than the chef can cook them? The counter gets piled high with tickets, things get lost, and eventually, the system crashes. In reactive streams, this is called backpressure. It's the pressure from a slow consumer (the chef) back onto a fast producer (the waiters). Reactor gives you ways to manage this flow.

You can tell the stream to buffer tickets, but only up to a point. Once the buffer is full, you need a strategy. Maybe you drop the oldest ticket to make room for the newest one. Or maybe you just drop any new ticket that arrives when you're full. Sometimes, you only want to keep the most recent ticket, constantly updating it. And other times, it's better to just signal an error immediately so the system knows something is wrong.

```java
// Buffer with a limit, dropping old items when full
Flux.range(1, 1000)
    .onBackpressureBuffer(100,
        BufferOverflowStrategy.DROP_OLDEST)
    .subscribe(value -> processSlowly(value));

// Just drop items that can't be handled
Flux.interval(Duration.ofMillis(10))
    .onBackpressureDrop(droppedValue ->
        System.out.println("Couldn't handle: " + droppedValue))
    .subscribe(value -> processSlowly(value));

// Keep only the very latest item
Flux.interval(Duration.ofMillis(5))
    .onBackpressureLatest()
    .subscribe(value -> processSlowly(value));

// Fail fast if the consumer can't keep up
Flux.range(1, 1000)
    .onBackpressureError()
    .subscribe(
        value -> processSlowly(value),
        error -> System.err.println("Overflow: " + error)
    );
```

The key is to choose the strategy that fits your scenario. Is it okay to lose some data? Then dropping might be fine. Do you need every piece but can handle bursts? Buffering could work. Understanding backpressure helps you prevent memory issues and build predictable systems.
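Backpressure can also be shaped from the consumer side rather than handled at the producer. A small sketch using `limitRate`, which caps how many items are requested from upstream at a time instead of requesting an unbounded amount:

```java
import reactor.core.publisher.Flux;

public class LimitRateDemo {
    public static void main(String[] args) {
        // limitRate(10) requests items from upstream in batches of 10
        // (replenishing at 75% consumed by default), so a fast source
        // never floods the pipeline with more than it was asked for.
        Flux.range(1, 100)
            .limitRate(10)
            .subscribe(v -> System.out.println("Processed " + v));
    }
}
```

This pairs well with the `onBackpressure*` operators above: `limitRate` controls demand, while the overflow strategies decide what happens when demand still can't keep up.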

Now, think about a news ticker or a live sports score. Many people want to see the same, constantly updating information. If each person who walks up to the scoreboard causes a whole new game to be played from the start just for them, that would be ridiculous. In Reactor, we have "cold" and "hot" publishers to model this difference.

A cold publisher is like a DVD. Every time you hit play (subscribe), you start the movie from the beginning. Each subscriber gets its own independent copy of the data stream. This is great for data you need to replay, like reading from a file or a database query result.

```java
// A cold publisher: each subscriber triggers data generation
Flux<Integer> coldMovie = Flux.defer(() ->
    Flux.fromStream(generateDataFromDatabase()));

coldMovie.subscribe(data -> System.out.println("Viewer 1 sees: " + data));
coldMovie.subscribe(data -> System.out.println("Viewer 2 sees: " + data));
// Two separate database queries happen
```

A hot publisher is like a live broadcast. The event is happening right now, and everyone tunes into the same feed. If you subscribe late, you miss what already happened. This is perfect for live events, sensor data, or user activity feeds.

```java
// A hot publisher: one source, many subscribers
ConnectableFlux<Long> liveBroadcast = Flux.interval(Duration.ofSeconds(1))
    .map(tick -> getCurrentSensorValue())
    .publish(); // It's now a ConnectableFlux

liveBroadcast.subscribe(v -> updateScreen("Screen1", v));
liveBroadcast.subscribe(v -> updateScreen("Screen2", v));

// Nothing happens until we connect
liveBroadcast.connect(); // The sensor starts reading, all screens update together

// A shared hot publisher with a replay cache for late subscribers
Flux<Long> broadcastWithReplay = Flux.interval(Duration.ofSeconds(1))
    .map(tick -> getCurrentSensorValue())
    .replay(5) // Cache the last 5 values
    .autoConnect();

// A subscriber joining now gets the last 5 values immediately, then the live feed
broadcastWithReplay.subscribe(v -> System.out.println("Late joiner: " + v));
```

Choosing between cold and hot changes how your data is shared and how resources are used. It's a fundamental decision in designing your streams.
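A middle ground worth knowing is `share()`, shorthand for `publish().refCount()`: the source starts with the first subscriber, everyone after that joins the same live feed, and the source is cancelled when the last subscriber leaves. A rough sketch:

```java
import java.time.Duration;
import reactor.core.publisher.Flux;

public class ShareDemo {
    public static void main(String[] args) throws InterruptedException {
        // share() turns a cold Flux hot without an explicit connect() call
        Flux<Long> sharedTicks = Flux.interval(Duration.ofMillis(100)).share();

        sharedTicks.subscribe(v -> System.out.println("First screen: " + v));
        Thread.sleep(250); // the second subscriber joins late...
        sharedTicks.subscribe(v -> System.out.println("Second screen: " + v));
        // ...and misses the ticks emitted before it arrived

        Thread.sleep(300);
    }
}
```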

As your data flows through various processing steps—from reading, to transforming, to saving—you often need extra information that's not part of the data itself. Think of a customer's request ID for logging, or their authentication token. Passing these as parameters through every method call would be messy. Reactor provides a neat solution called the Context.

The Context is like a small, invisible backpack that travels with your data subscription. You can put things in it at the start of a chain, and any step in the process can look inside the backpack to read those values, without the main data stream being affected.

```java
// Writing values into the Context
Mono<String> processed = Mono.deferContextual(ctxView ->
        Mono.just("Request ID is: " + ctxView.get("requestId"))
    )
    .contextWrite(Context.of("requestId", "REQ-12345"));

// In a more complex chain, you can access context deep inside operators
Flux.range(1, 5)
    .flatMap(number ->
        Mono.deferContextual(ctx -> {
            String user = ctx.get("currentUser");
            return enrichNumber(number, user);
        })
    )
    .contextWrite(Context.of("currentUser", "alice"))
    .subscribe();

// A practical service method using context
public Mono<Order> fetchOrder(String orderId) {
    return Mono.deferContextual(ctx -> {
        String traceId = ctx.get("traceId"); // Retrieved from the "backpack"
        log.info("[{}] Fetching order {}", traceId, orderId);
        return database.lookup(orderId);
    });
}
// This method can be called without passing traceId explicitly.
// The caller just sets up the context before calling it.
```

I find the Context incredibly useful for cross-cutting concerns like monitoring, logging, and security. It keeps your method signatures clean and your code focused on business logic.
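One gotcha worth calling out: the Context propagates from the bottom of the chain upward, at subscription time. A `contextWrite` only makes its values visible to operators *above* it in the chain, which is why the examples put it last. A minimal sketch:

```java
import reactor.core.publisher.Mono;
import reactor.util.context.Context;

public class ContextOrderDemo {
    public static void main(String[] args) {
        // The read sits ABOVE the contextWrite, so it sees the value.
        Mono<String> works = Mono.deferContextual(ctx ->
                Mono.just("user=" + ctx.get("user")))
            .contextWrite(Context.of("user", "alice"));
        works.subscribe(System.out::println); // prints user=alice

        // Here the contextWrite sits above the read: the flatMap only sees
        // the (empty) downstream context, so the lookup falls back.
        Mono<String> misses = Mono.just("ignored")
            .contextWrite(Context.of("user", "alice"))
            .flatMap(v -> Mono.deferContextual(ctx ->
                Mono.just("user=" + ctx.getOrDefault("user", "unknown"))));
        misses.subscribe(System.out::println); // prints user=unknown
    }
}
```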

Testing asynchronous, non-blocking code can feel tricky. You can't just use Thread.sleep and hope for the best. Reactor provides a library called StepVerifier which is like a dedicated test conductor for your streams. It lets you assert what should happen, in what order, and even control virtual time so tests that would normally take hours run in milliseconds.

```java
@Test
void testMyStreamCompletesSuccessfully() {
    Flux<String> myFlux = Flux.just("Alpha", "Beta", "Gamma");

    StepVerifier.create(myFlux)
        .expectNext("Alpha")
        .expectNext("Beta")
        .expectNext("Gamma")
        .verifyComplete(); // Verifies completion with no errors
}

@Test
void testBackpressureIsRespected() {
    Flux<Integer> fastNumbers = Flux.range(1, 50);

    StepVerifier.create(fastNumbers, 0) // Start by requesting ZERO items
        .expectSubscription()
        .thenRequest(5) // Now ask for 5
        .expectNext(1, 2, 3, 4, 5)
        .thenRequest(10) // Ask for 10 more
        .expectNextCount(10)
        .thenCancel() // Stop the subscription
        .verify();
}

@Test
void testErrorRecoveryPath() {
    Flux<String> fluxWithError = Flux.just("ok", "ok")
        .concatWith(Flux.error(new RuntimeException("Boom!")))
        .onErrorReturn("fallback");

    StepVerifier.create(fluxWithError)
        .expectNext("ok", "ok")
        .expectNext("fallback") // Asserts our recovery worked
        .verifyComplete();
}

// My favorite: testing time-based operators without waiting
@Test
void testSlowOperationWithVirtualTime() {
    StepVerifier.withVirtualTime(() ->
            Flux.interval(Duration.ofDays(1)) // Emits every day!
                .map(day -> "Day " + day)
                .take(3)
        )
        .expectSubscription()
        .thenAwait(Duration.ofDays(3)) // Instantly jump forward 3 days
        .expectNext("Day 0", "Day 1", "Day 2")
        .verifyComplete();
}
```

Using StepVerifier gives you confidence. You can test not just the happy path, but also how your stream behaves under error conditions, with specific backpressure requests, and over time.
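When you need to control exactly *when* signals arrive, reactor-test also ships a `TestPublisher` you can drive by hand in the middle of a verification. A minimal sketch:

```java
import reactor.test.StepVerifier;
import reactor.test.publisher.TestPublisher;

public class TestPublisherDemo {
    public static void main(String[] args) {
        TestPublisher<String> publisher = TestPublisher.create();

        // We decide precisely when values and the completion signal arrive
        StepVerifier.create(publisher.flux().map(String::toUpperCase))
            .then(() -> publisher.next("alpha")) // push one value mid-test
            .expectNext("ALPHA")
            .then(() -> publisher.emit("beta")) // emit = next + complete
            .expectNext("BETA")
            .verifyComplete();
    }
}
```

This is handy for testing operators that react to timing or interleaving, where a pre-built `Flux.just(...)` emits everything at once and hides the behavior you care about.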

Finally, let's discuss running this in production. It's not enough for the code to work on your laptop. You need to think about resources, observability, and resilience. A big part of this is managing threads wisely through schedulers. Blocking operations, like waiting for a database, should use a different pool of threads than CPU-intensive calculations. This prevents one slow task from stalling everything else.

```java
// A scheduler for I/O or other blocking work
Scheduler blockingPool = Schedulers.newBoundedElastic(
    10,  // Max threads
    100, // Task queue limit
    "blocking-io-pool"
);

// A scheduler for CPU-heavy work
Scheduler computationPool = Schedulers.newParallel("cpu-pool", 4);

Mono<Data> databaseCall = Mono.fromCallable(() -> blockingDatabaseQuery())
    .subscribeOn(blockingPool) // Run the blocking call on the right pool
    .timeout(Duration.ofSeconds(5));

Flux<Integer> heavyMath = Flux.range(1, 1000000)
    .parallel()
    .runOn(computationPool) // Run computation in parallel on the CPU pool
    .map(this::veryExpensiveCalculation)
    .sequential();
```
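A related detail worth spelling out: `subscribeOn` determines where the *source* starts emitting, regardless of where it sits in the chain, while `publishOn` switches the thread for everything *downstream* of it. A small sketch (blocking only for demo purposes):

```java
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class SchedulerPlacementDemo {
    public static void main(String[] args) {
        Flux.range(1, 3)
            // subscribeOn: the range itself is generated on boundedElastic
            .subscribeOn(Schedulers.boundedElastic())
            .map(v -> {
                System.out.println("before publishOn: " + Thread.currentThread().getName());
                return v;
            })
            // publishOn: everything below this line hops to the parallel scheduler
            .publishOn(Schedulers.parallel())
            .map(v -> {
                System.out.println("after publishOn: " + Thread.currentThread().getName());
                return v;
            })
            .blockLast(); // fine in a demo; never block in production reactive code
    }
}
```

A common recipe follows from this: `subscribeOn` for a blocking source, `publishOn` just before a CPU-heavy or UI-bound stage.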

You also need to know what's happening inside your running app. Reactor can integrate with metrics libraries like Micrometer. You can add timing, count events, and see where bottlenecks are.

```java
// Adding metrics to a key pipeline (requires Micrometer on the classpath)
Flux<Order> orderStream = orderInputFlux
    .name("orders.incoming") // Name for metrics
    .tag("source", "kafka")  // Tags must be set before metrics() to be picked up
    .metrics()
    .flatMap(this::processOrder, 10); // Concurrency of 10
```

And you must plan for failure. Networks fail, external services go down. Your reactive pipeline should handle this gracefully. You can add timeouts, fallback to cached data, or retry with a delay.

```java
public Flux<Product> getProductsSafe(String category) {
    return externalService.fetchProducts(category)
        .timeout(Duration.ofSeconds(3)) // Don't wait forever
        // Retry must come BEFORE the fallbacks: onErrorResume swallows the
        // error, so a retry placed after it would never see one
        .retryWhen(Retry.backoff(3, Duration.ofSeconds(1))
            .filter(error -> error instanceof IOException)) // Only retry network issues
        .onErrorResume(TimeoutException.class, e ->
            cacheService.getCachedProducts(category)) // On timeout, try the cache
        .onErrorResume(Exception.class, e ->
            Flux.just(getDefaultProduct())); // Any other error: a safe default
}
```

Putting all this together, reactive programming with Project Reactor is about building systems that are aware of their own capacity. They can handle load, manage resources, signal clearly when stressed, and recover from problems. It starts with seeing your data as a stream you can shape and control: managing the push and pull of backpressure, deciding who shares a data source, carrying metadata silently alongside your business data, testing every behavior thoroughly, and finally configuring it all to run reliably under real conditions. It's a powerful model for the interactive, data-intensive applications we build today. The code you write becomes a declaration of how data should flow, how errors should be handled, and how resources should be used, which is both a challenge and a great advantage.

