I’ve spent years writing Java applications that just couldn’t keep up under load. A single slow database query would freeze the whole request, threads would pile up, memory would spike, and my users would stare at loading spinners. Then I discovered Reactive Programming with Project Reactor. It changed how I think about data flows, concurrency, and failure handling. Let me walk you through five techniques that turned my blocking, thread‑per‑request nightmares into non‑blocking, efficient pipelines.
First, you need to get comfortable with the two core types: Flux and Mono. Think of Mono as a container that either holds one value or nothing at all. Flux holds zero to many values, like a stream of events. Almost every business operation maps to one of these. When I fetch a single user from a database, I return Mono<User>. When I get a list of orders for that user, I return Flux<Order>. The mistake beginners make is wrapping everything in a Flux or using Mono for lists. Keep it simple: one or many.
// Fetch a single user
Mono<User> findById(String id) {
    return userRepository.findById(id);
}

// Fetch multiple orders for a user
Flux<Order> findOrders(String userId) {
    return orderRepository.findByUserId(userId);
}

// Combine results
Mono<UserProfile> buildProfile(String userId) {
    Mono<User> userMono = findById(userId);
    Flux<Order> ordersFlux = findOrders(userId);
    return userMono.zipWith(ordersFlux.collectList())
        .map(tuple -> new UserProfile(tuple.getT1(), tuple.getT2()));
}
In the example above, zipWith waits for both the user and the list of orders, then combines them into a UserProfile. I used to write this with CompletableFuture and manual synchronization. Reactor handles the coordination for me. The pipeline is lazy – nothing runs until someone subscribes. That’s another key idea: nothing happens until you pull the trigger.
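That laziness is easy to demonstrate. Here's a minimal sketch (the class name and the side-effect list are just illustration): a doOnNext callback attached at assembly time does nothing until someone subscribes.

```java
import java.util.ArrayList;
import java.util.List;
import reactor.core.publisher.Flux;

public class LazinessDemo {
    public static void main(String[] args) {
        List<String> log = new ArrayList<>();

        // Assembly time: the pipeline is only described, nothing runs
        Flux<Integer> pipeline = Flux.just(1, 2, 3)
                .doOnNext(i -> log.add("saw " + i));

        System.out.println(log); // still empty: no one has subscribed

        pipeline.subscribe(); // only now do the doOnNext callbacks fire
        System.out.println(log); // [saw 1, saw 2, saw 3]
    }
}
```

The same pipeline can also be subscribed to multiple times, and each subscription replays the work from scratch.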
Now, let’s talk about backpressure. Backpressure is just a fancy word for telling the producer “slow down, I can’t keep up.” In a reactive system, the consumer controls the flow. Without backpressure, a fast producer can swamp a slow consumer, filling up memory with buffers or causing timeouts. Reactor gives you three main strategies: buffer, drop, or use the latest. You pick based on your tolerance for data loss.
When I’m processing a stream of events where I cannot miss any, I use onBackpressureBuffer with a cap. If the buffer fills up, I decide what to do – maybe log a warning and drop the oldest.
// Buffer up to 1000 events; on overflow, log and drop the oldest
Flux<Event> eventStream = eventSource.stream()
    .onBackpressureBuffer(1000,
        event -> log.warn("Dropped event: {}", event.getId()),
        BufferOverflowStrategy.DROP_OLDEST);
For stock prices where I only care about the most recent value, onBackpressureLatest is perfect. It drops intermediate values and keeps only the newest one.
// Keep only the latest value, dropping intermediate ones
Flux<StockPrice> priceStream = priceService.stream()
    .onBackpressureLatest();
A common mistake is forgetting to add any backpressure handling at all. Reactor defaults to an unbounded buffer, which can explode your memory. Always add explicit backpressure, especially when you’re dealing with infinite streams or high‑volume data.
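One cheap way to make demand explicit is limitRate, which caps how many elements are requested from the producer at a time. A minimal sketch (the class name is mine; doOnRequest just records what the producer sees):

```java
import java.util.concurrent.atomic.AtomicLong;
import reactor.core.publisher.Flux;

public class LimitRateDemo {
    public static void main(String[] args) {
        AtomicLong maxRequest = new AtomicLong();

        Flux.range(1, 1000)
                // record the largest single request the producer ever sees
                .doOnRequest(n -> maxRequest.accumulateAndGet(n, Math::max))
                // limitRate(100) requests at most 100 elements at a time,
                // topping up as each batch is consumed
                .limitRate(100)
                .blockLast();

        // Without limitRate, blockLast would request Long.MAX_VALUE up front
        System.out.println(maxRequest.get()); // 100
    }
}
```

Even when the final subscriber asks for everything at once, the producer never sees a request larger than the cap.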
Another big shift is understanding how threads work in Reactor. In traditional Java, each request gets its own thread – one thread per connection. With async, you have a small pool of event loop threads that handle many connections. You need to tell Reactor which type of work you’re doing: CPU‑bound or I/O‑bound. Using the wrong scheduler ruins performance.
For CPU‑intensive work – like image processing or cryptographic hashing – I use Schedulers.parallel(). That pool has as many threads as CPU cores.
// CPU‑bound work on the parallel scheduler
Mono<List<ComplexResult>> processData = Flux.fromIterable(data)
    .parallel()
    .runOn(Schedulers.parallel())
    .map(this::expensiveComputation)
    .sequential()
    .collectList();
For blocking I/O – like reading a file from disk or calling a legacy JDBC driver – I use Schedulers.boundedElastic(). It creates a separate thread pool that can handle blocking operations without starving the main event loop.
// Blocking I/O on the bounded elastic scheduler
Mono<byte[]> readFile = Mono.fromCallable(() ->
        Files.readAllBytes(Paths.get("/large/data.txt")))
    .subscribeOn(Schedulers.boundedElastic());
Notice I used subscribeOn to move the blocking call to the bounded elastic thread. Without it, that blocking call would happen on the event loop and freeze everything. I’ve been burned by that more times than I’d like to admit.
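The counterpart to subscribeOn is publishOn: subscribeOn decides where the source runs, while publishOn switches the thread for everything downstream of it. A minimal sketch (class name mine) that records which thread each stage runs on:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class SchedulerDemo {
    public static void main(String[] args) {
        List<String> sourceThreads = new CopyOnWriteArrayList<>();
        List<String> downstreamThreads = new CopyOnWriteArrayList<>();

        Flux.range(1, 3)
                // subscribeOn: the source emits on a boundedElastic thread
                .subscribeOn(Schedulers.boundedElastic())
                .doOnNext(i -> sourceThreads.add(Thread.currentThread().getName()))
                // publishOn: everything below runs on a parallel thread
                .publishOn(Schedulers.parallel())
                .doOnNext(i -> downstreamThreads.add(Thread.currentThread().getName()))
                .blockLast();

        System.out.println("source ran on:     " + sourceThreads);
        System.out.println("downstream ran on: " + downstreamThreads);
    }
}
```

Reactor names its threads after the scheduler ("boundedElastic-1", "parallel-2", and so on), which makes this kind of diagnostic logging cheap to add when you're not sure where your code is actually running.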
Error handling is where reactive programming gets tricky. You can’t just wrap everything in a try‑catch because the code runs asynchronously. Reactor provides operators that handle failures cleanly. My go‑to is onErrorReturn when I can provide a fallback default value.
// Return a fallback value on error
Mono<User> findUserWithFallback(String id) {
    return userRepository.findById(id)
        .onErrorReturn(new User("anonymous", "unknown"));
}
If I want to try another service after the first fails, I use onErrorResume.
// Try another source on failure
Mono<User> findUserResilient(String id) {
    return primaryUserSource(id)
        .onErrorResume(e -> secondaryUserSource(id));
}
For transient failures – network timeouts, database connection drops – I add retry logic with exponential backoff. Reactor’s Retry class makes this easy.
// Retry with exponential backoff
Mono<Order> placeOrder(OrderRequest request) {
    return orderService.create(request)
        .retryWhen(Retry.backoff(3, Duration.ofSeconds(1))
            .maxBackoff(Duration.ofSeconds(10))
            .jitter(0.5)
            .filter(throwable -> throwable instanceof TransientException));
}
The jitter adds randomness to the backoff, avoiding thundering herd problems. I also filter to only retry on transient exceptions, not on permanent failures like “order already exists.”
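It's also worth deciding what happens when the retries run out. By default Reactor wraps the last error in a "retries exhausted" exception; onRetryExhaustedThrow lets you surface the original failure instead. A runnable sketch under assumptions of mine (the always-failing source and the fallback message are illustration):

```java
import java.time.Duration;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

public class RetryExhaustedDemo {
    public static void main(String[] args) {
        // Hypothetical source that fails on every attempt
        Mono<String> flaky = Mono.error(new RuntimeException("timeout"));

        String result = flaky
                .retryWhen(Retry.backoff(2, Duration.ofMillis(10))
                        // surface the original error rather than the
                        // default "retries exhausted" wrapper
                        .onRetryExhaustedThrow((spec, signal) -> signal.failure()))
                // after exhaustion, fall back to a default
                .onErrorResume(e -> Mono.just("fallback: " + e.getMessage()))
                .block();

        System.out.println(result); // fallback: timeout
    }
}
```

Keeping the original exception type matters when downstream error handling filters on it, as the transient-vs-permanent filter above does.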
One pattern I love is onErrorContinue. It’s perfect for batch processing where some items fail but you want to keep going with the rest.
// Skip problematic items in a Flux
Flux<Result> processOrders(Flux<Order> orders) {
    return orders
        .flatMap(this::processOrder)
        .onErrorContinue((error, item) ->
            log.error("Failed to process order {}: {}", item, error.getMessage()));
}
Before I discovered onErrorContinue, I used onErrorResume inside each flatMap and returned an empty Mono. That works but is more verbose. onErrorContinue handles the skipping and logs the failure in one place. Be aware, though, that onErrorContinue only works with operators that explicitly support it (flatMap and map do), so the onErrorResume pattern remains the safer general-purpose tool.
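For reference, the verbose variant looks like this. A self-contained sketch with a hypothetical processing step (the class, the process method, and the failing input are all mine for illustration):

```java
import java.util.List;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class SkipFailuresDemo {
    // Hypothetical processing step: fails on negative input
    static Mono<Integer> process(int i) {
        return i < 0
                ? Mono.error(new IllegalArgumentException("bad input: " + i))
                : Mono.just(i * 10);
    }

    public static void main(String[] args) {
        // The verbose alternative: swallow each failure inside flatMap,
        // replacing the failed inner Mono with an empty one
        List<Integer> results = Flux.just(1, -2, 3)
                .flatMap(i -> process(i)
                        .onErrorResume(e -> {
                            System.err.println("skipping: " + e.getMessage());
                            return Mono.empty();
                        }))
                .collectList()
                .block();

        System.out.println(results); // [10, 30]
    }
}
```

The failing item simply vanishes from the output while the rest of the stream continues.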
Finally, testing reactive code requires a different mindset. You can’t just call block() in unit tests – that defeats the purpose. Reactor’s StepVerifier lets you assert the sequence of events as they happen. You can also simulate time‑based operations using virtual time, which is a lifesaver for testing interval or delayElement.
Here’s a simple test that verifies a Flux with filtering and mapping:
@Test
void shouldProcessStreamCorrectly() {
    Flux<Integer> source = Flux.just(1, 2, 3, 4, 5);
    StepVerifier.create(source
            .filter(i -> i % 2 == 0)
            .map(i -> i * 10))
        .expectNext(20, 40)
        .verifyComplete();
}
Another test checks backpressure behavior. I start with a request of zero, then request five elements, and cancel after that.
@Test
void shouldHandleBackpressure() {
    Flux<Integer> source = Flux.range(1, 100)
        .onBackpressureDrop();
    StepVerifier.create(source, 0)
        .thenRequest(5)
        .expectNext(1, 2, 3, 4, 5)
        .thenCancel()
        .verify();
}
Virtual time is the real magic. If I have a task that emits every hour, I don’t want to wait hours in a test. withVirtualTime lets me advance time instantly.
@Test
void shouldTestWithVirtualTime() {
    StepVerifier.withVirtualTime(() ->
            Flux.interval(Duration.ofHours(1))
                .take(2))
        .expectSubscription()
        .expectNoEvent(Duration.ofHours(1))
        .expectNext(0L)
        .thenAwait(Duration.ofHours(1))
        .expectNext(1L)
        .verifyComplete();
}
I always add an expectSubscription() first. With expectNoEvent, the subscription signal itself counts as an event, so you have to consume it explicitly before asserting that nothing else happened. Small detail, but it catches missing subscribe calls.
These five techniques feel simple once you’ve practiced them a few times. Start small – maybe replace one blocking endpoint with a reactive one. Use Mono.fromCallable wrapped in subscribeOn for your first step. Then add backpressure handling when you notice memory issues. Gradually move to composing multiple reactive sources with zip and flatMap. The shift from blocking to non‑blocking is not just about performance; it’s about building systems that stay responsive even when something goes wrong. I’ve seen applications that used to crash under 100 concurrent users now handle 10,000 with the same hardware, simply because they stopped blocking threads. That’s the real payoff.
So go ahead, open your project, find a piece of code that does a lot of sequential blocking calls, and try replacing it with a reactive pipeline. Use Mono for single values, Flux for streams. Add backpressure to protect your memory. Choose the right scheduler for the work. Handle errors explicitly with fallbacks or retries. And test your pipelines with StepVerifier before you deploy. You’ll be surprised how much simpler and more robust your code becomes.