In my last two posts about domain-driven design, we looked at how to build repositories and how to use value objects as aggregate IDs. Now we are going to have a closer look at domain events.
From Spring's point of view, a domain event is just another application event that can be published using the built-in ApplicationEventPublisher. In other words, we do not need to worry about building an event bus or some other infrastructure for publishing domain events: you inject the event publisher into your domain service and publish the event. However, in most cases you want to publish domain events directly from the aggregate, without having to go via a domain service just for that purpose. Fortunately, we can do just that.
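To make the first approach concrete, here is a minimal sketch of a domain service publishing an event. All names are hypothetical, and to keep the sketch dependency-free the publisher is modeled as a small local interface; in a real Spring application that role is played by org.springframework.context.ApplicationEventPublisher, which the container injects for you.

```java
// Hypothetical publisher interface; in Spring, this is ApplicationEventPublisher,
// whose publishEvent(Object) method has the same shape.
interface EventPublisher {
    void publishEvent(Object event);
}

// The domain event itself is just a plain, immutable object.
record CustomerRenamed(String customerId, String newName) {}

// A domain service that performs an operation and then publishes the event.
class CustomerService {
    private final EventPublisher publisher;

    CustomerService(EventPublisher publisher) {
        this.publisher = publisher;
    }

    void renameCustomer(String customerId, String newName) {
        // ...perform the actual rename against the repository (elided)...
        publisher.publishEvent(new CustomerRenamed(customerId, newName));
    }
}
```

The drawback the article points out is visible here: the event can only be published from the service, not from the aggregate where the state change actually happens.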
Spring Data provides a mechanism for publishing domain events directly from within aggregates, without having to get hold of the event publisher. We already touched on this mechanism when we looked at BaseAggregateRoot, and now we are going to take a closer look at it.
Under the hood, Spring Data will register a method interceptor for all repository methods whose names start with save, such as save and saveAndFlush. This interceptor will look for two methods in your aggregate: one annotated with @DomainEvents and another annotated with @AfterDomainEventPublication.
The method annotated with @DomainEvents is expected to return a list of events to publish. The interceptor will publish these events using the Spring application event publisher. Once the events have been published, the method annotated with @AfterDomainEventPublication is invoked. This method is expected to clear the list of events, to prevent them from being published again the next time the aggregate is saved.
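The contract between the aggregate and the interceptor can be sketched like this. The Order aggregate and its event are hypothetical, and the Spring Data annotations are shown only in comments so that the sketch stays dependency-free; in a real aggregate, domainEvents() would carry @DomainEvents and clearDomainEvents() would carry @AfterDomainEventPublication (this is essentially what Spring Data's AbstractAggregateRoot and the BaseAggregateRoot from the earlier post do for you):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class Order {
    private final List<Object> domainEvents = new ArrayList<>();

    void complete() {
        // A state change registers an event instead of publishing it immediately.
        domainEvents.add(new OrderCompleted());
    }

    // Would be annotated with @DomainEvents: the events to publish on save.
    List<Object> domainEvents() {
        return Collections.unmodifiableList(domainEvents);
    }

    // Would be annotated with @AfterDomainEventPublication: clear the buffer so
    // the events are not published again the next time the aggregate is saved.
    void clearDomainEvents() {
        domainEvents.clear();
    }
}

class OrderCompleted {}
```

The repository proxy calls the two methods in that order on every save: first collect and publish, then clear.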
There is a caveat to keep in mind when designing and publishing domain events in this way, and it has to do with events that include a reference to the aggregate root itself, for example like this:

```java
public class PotentiallyProblematicDomainEvent {

    private final MyAggregate myAggregate;

    public PotentiallyProblematicDomainEvent(@NotNull MyAggregate myAggregate) {
        this.myAggregate = myAggregate;
    }

    public @NotNull MyAggregate getMyAggregate() {
        return myAggregate;
    }
}
```
Whenever you design events like this, you have to be aware of how Spring Data and JPA work under the hood.
When you save an existing entity (and here I'm talking about the JPA entity concept, not the DDD one), Spring Data will end up calling EntityManager.merge. If the entity is detached, JPA will retrieve the managed entity, copy all the attributes from the detached entity to the managed one, save it, and return it. The managed entity will get its optimistic locking version incremented, while the detached entity remains untouched.
However, since the domain event was registered on the detached entity, the domain event listeners will get a reference to the detached entity. This can lead to optimistic locking errors if a listener tries to perform any operations directly on the entity and then save it.
Here are a few examples of cases where the listeners will end up getting a stale entity with an incorrect optimistic locking version:
```java
public class PotentiallyProblematicApplicationService {

    private final MyAggregateRepository myAggregateRepository;

    public PotentiallyProblematicApplicationService(@NotNull MyAggregateRepository myAggregateRepository) {
        this.myAggregateRepository = myAggregateRepository;
    }

    @Transactional
    public void firstProblematicMethod(@NotNull MyAggregate aggregate) { // <1>
        aggregate.performAnOperationThatRegistersAProblematicDomainEvent();
        myAggregateRepository.saveAndFlush(aggregate);
    }

    public void secondProblematicMethod(@NotNull MyAggregateId aggregateId) { // <2>
        var aggregate = myAggregateRepository.getById(aggregateId);
        aggregate.performAnOperationThatRegistersAProblematicDomainEvent();
        myAggregateRepository.saveAndFlush(aggregate);
    }
}
```
- This method accepts an aggregate as a parameter and performs operations on it directly. This means the aggregate is detached and will become stale once saved.
- This method accepts the aggregate ID as a parameter and looks up the aggregate before performing operations on it. However, the method is not @Transactional, which means that both calls to the repository will run inside their own transactions, detaching the aggregate in between.
Now how do we address this? The second method is quite easy to fix: just make the entire method @Transactional. That way, the aggregate will still be managed when it is saved, and the domain event listeners will get the correct instance.
But what about the first method? An obvious solution would be to use the aggregate ID instead of the aggregate itself in the event. However, this has a problem of its own: if the event is registered before the aggregate has been persisted, the aggregate has no ID. If you never publish any events from unpersisted aggregates, this is not a problem. If you do, however, you can fix it like this:
```java
public class SaferDomainEvent {

    private final MyAggregate myAggregate;

    public SaferDomainEvent(@NotNull MyAggregate myAggregate) { // <1>
        this.myAggregate = myAggregate;
    }

    public @NotNull MyAggregateId getMyAggregateId() { // <2>
        return myAggregate.getIdentifier();
    }
}
```
- We still store a reference to the aggregate inside the event...
- ... but we only expose its ID to the outside world, forcing any listeners to fetch a fresh copy of the aggregate from the repository if they want to do anything with it.
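On the listener side, exposing only the ID forces exactly the behavior we want. Here is a sketch of such a listener; the types and names are hypothetical, and the repository is modeled as a small local interface so the sketch stays dependency-free. In a real Spring application the repository would be a Spring Data repository and the handler method would carry @EventListener (or @TransactionalEventListener):

```java
import java.util.Optional;

record MyAggregateId(String value) {}

class MyAggregate {
    final MyAggregateId id;
    MyAggregate(MyAggregateId id) { this.id = id; }
}

// The event exposes only the aggregate ID, never the aggregate itself.
record SaferEvent(MyAggregateId aggregateId) {}

// Minimal repository interface standing in for a Spring Data repository.
interface MyAggregateRepository {
    Optional<MyAggregate> findById(MyAggregateId id);
}

class SaferEventListener {
    private final MyAggregateRepository repository;

    SaferEventListener(MyAggregateRepository repository) {
        this.repository = repository;
    }

    // Would be annotated with @EventListener in a real Spring application.
    void onSaferEvent(SaferEvent event) {
        // Because the event carries only the ID, the listener must fetch a
        // fresh copy of the aggregate; it can never act on a stale reference.
        MyAggregate fresh = repository.findById(event.aggregateId())
                .orElseThrow();
        // ...operate on the fresh instance (elided)...
    }
}
```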
This is yet again an example of the underlying technology silently sneaking into your domain model: your choice of persistence technology can even affect the design of your domain events. Fortunately, in this case it is not a big deal, but it is still something that may come back and bite you later if you base your early designs on assumptions that later turn out to be incorrect (I have been there and done that, especially when it comes to JPA). The bottom line is that you need to know your tools well: not only how to use them, but also how they work.
In a future post, we are going to look at how to catch the domain events we have published and some caveats related to that.