In my last blog post, we looked at how to publish domain events using Spring Data. However, there is no use in publishing events unless you can also receive and handle them, so that is what we are going to look at in this blog post.
Recall that Spring Data publishes domain events through the standard `ApplicationEventPublisher`. This means we can also handle events in the standard Spring way using `@EventListener`, so let's have a look at that first.
Handling Events with @EventListener
A domain event handler (I sometimes also use the term domain event listener - the two mean the same thing) using `@EventListener` looks something like this:
```java
@Component
class MyDomainEventHandler {

    @EventListener
    public void onMyDomainEvent(MyDomainEvent event) {
        // Handler code here.
    }
}
```
The domain event handler is a Spring bean, which means you can inject other beans into it.
Once an event is published through the `ApplicationEventPublisher`, it is by default handed over to the `ApplicationEventMulticaster`. The default implementation of this - `SimpleApplicationEventMulticaster` - simply loops through all applicable listeners and calls them one at a time. Unless a task executor and an error handler have been explicitly configured, the listeners are called on the same thread that published the event, and if any listener throws an exception, the loop halts and the remaining listeners are not notified at all.
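To make that failure mode concrete, here is a deliberately simplified, framework-free sketch (the class and listener names are my own invention, not Spring's) of how a synchronous dispatch loop behaves when one listener throws:

```java
import java.util.List;
import java.util.function.Consumer;

public class SynchronousMulticastSketch {

    // Simplified stand-in for the multicaster's dispatch loop: each listener
    // runs on the publishing thread, and an uncaught exception stops the
    // loop, so the remaining listeners are never notified.
    static void multicast(Object event, List<Consumer<Object>> listeners) {
        for (Consumer<Object> listener : listeners) {
            listener.accept(event); // an exception here propagates to the publisher
        }
    }

    static String demo() {
        StringBuilder log = new StringBuilder();
        List<Consumer<Object>> listeners = List.of(
                e -> log.append("first;"),
                e -> { throw new RuntimeException("second listener failed"); },
                e -> log.append("third;") // never reached
        );
        try {
            multicast("MyDomainEvent", listeners);
        } catch (RuntimeException ex) {
            log.append("publisher saw: ").append(ex.getMessage());
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // the third listener never runs
    }
}
```

The real multicaster does considerably more (listener resolution, ordering, error handling hooks), but the essential consequence is the same: the exception reaches the code that published the event.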
This means that if you use the default configuration of the event multicaster and use `@EventListener` for your domain event handlers, they will participate in the same transaction that published the event. It also means that throwing an exception from an event handler rolls back the entire transaction.
In some use cases, this may be desired behaviour, in which case `@EventListener` is the way to go. However, it also violates the third guideline of aggregate design, which states that you should modify only one aggregate in one transaction.

There is a reason for this guideline, and I've been bitten by it myself when we used `@EventListener` for our domain event handlers. Handling domain events in the same transaction that published them works fine as long as you have a small number of lightweight domain event handlers that you are aware of. Problems start to show up when other developers attach more heavyweight handlers that themselves trigger domain events, which are handled in the same transaction, and so on. This leads to two significant problems:
First, it leads to longer-running transactions, which may time out or run into locking issues such as deadlocks or optimistic locking failures.
Second, if any one of the domain event handlers in the chain fails, that handler has the power to roll back the entire transaction, essentially undoing all the events. Do you really want to give a single domain event handler the power to change the past? After all, a domain event is published because something has happened, not because something will or might happen.
So how do we then make sure our domain event handlers run inside their own transactions?
Introducing @TransactionalEventListener
Spring has an alternative annotation for event handlers that need to be transaction aware: `@TransactionalEventListener`. It is used in exactly the same way as `@EventListener`:
```java
@Component
class MyTransactionalDomainEventHandler {

    @TransactionalEventListener
    public void onMyDomainEvent(MyDomainEvent event) {
        // Handler code here.
    }
}
```
These event listeners are not invoked directly when an event is published. Instead, they are tied to the lifecycle of the current active transaction (and because of this, they do not work if you use a reactive transaction manager).
The default behaviour of a `@TransactionalEventListener` is to execute after the current transaction has been successfully committed. If the transaction is rolled back, or there is no active transaction to begin with, nothing happens.
You can change this behaviour by passing parameters to the annotation. It can be configured in several ways, but we will only look at two parameters:
- `phase`: The transaction phase to bind the event handler to. The default is `AFTER_COMMIT`, but the other options are `BEFORE_COMMIT`, `AFTER_ROLLBACK` and `AFTER_COMPLETION` (the event handler is executed regardless of whether the transaction was committed or rolled back).
- `fallbackExecution`: Whether the event handler should be executed even if there is no active transaction. The default is `false`.
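As a sketch of how these attributes are used (the handler and event names follow the examples above; `TransactionPhase` is Spring's phase enum, and this fragment assumes a running Spring application context):

```java
import org.springframework.stereotype.Component;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

@Component
class MyConfiguredDomainEventHandler {

    // Runs only if the publishing transaction rolls back.
    @TransactionalEventListener(phase = TransactionPhase.AFTER_ROLLBACK)
    public void onRollback(MyDomainEvent event) {
        // e.g. compensating logic or logging
    }

    // Runs after commit (the default phase) when a transaction is active,
    // and immediately when the event is published outside any transaction.
    @TransactionalEventListener(fallbackExecution = true)
    public void onCommitOrNoTransaction(MyDomainEvent event) {
        // Handler code here.
    }
}
```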
If you use `AFTER_COMMIT` or `AFTER_COMPLETION`, it is very important that any transactional code you invoke from within the event handler starts its own transaction (in other words, it should use the `REQUIRES_NEW` transaction propagation and not `REQUIRED`). You can read about why in this blog post.
So now all of our problems are solved, right? Not quite. When our events were handled inside the same transaction, either all the changes were committed or none were. The data would always be in a consistent state after the transaction was complete.
With `@TransactionalEventListener` this is no longer the case. What if some of the event listeners fail and others succeed? Or what if the entire system goes down after the first transaction has committed, but before any event listener gets executed? This could leave our data in an inconsistent state. How do we recover from this?
I'm going to leave you with a cliffhanger here, because that will be the subject of a future blog post.