Thanks for your articles, Keith!

I'm currently implementing an experimental event-sourcing project. It's in a different language, but I use PostgreSQL to build the event store.
I feel the need to lock the event stream for an aggregate that is being updated; otherwise, it is hard to guarantee a correct event order. Based on your experience, what is the way to do it?
I know some people suggest that you just write all events from a single thread. But, scaling aside, it doesn't feel reliable to assume that one instance is the only writer to the event store. Suppose Kubernetes is configured to maintain one instance of a service. An instance is considered dead once it stops responding to the liveness check. But what if packets are lost while the instance is still alive and writing to the event store?
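To illustrate the split-brain I'm worried about (a hypothetical in-memory sketch, not my actual code): a replaced instance can keep appending unless every write carries a monotonically increasing fencing token that the store itself verifies.

```python
# Hypothetical sketch: a store that rejects writes from a stale writer
# by checking a monotonically increasing fencing token (epoch).

class FencedStore:
    def __init__(self):
        self.highest_token = 0
        self.events = []

    def append(self, fencing_token, event):
        # Reject writers holding an older token: they were already replaced.
        if fencing_token < self.highest_token:
            raise RuntimeError("stale writer: token %d < %d"
                               % (fencing_token, self.highest_token))
        self.highest_token = fencing_token
        self.events.append(event)

store = FencedStore()
store.append(1, "CustomerInitialized")   # old instance, token 1
store.append(2, "DeliveryChannelSet")    # replacement instance, token 2
try:
    store.append(1, "MainDeviceSet")     # old instance is still alive...
except RuntimeError:
    print("stale write rejected")        # ...but its write is fenced off
```

The point being: liveness checks alone can't fence off a writer, but the store can, if writes carry an ordering token.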
Anyway, I'm looking for your advice about enforcing correct event ordering.
Thank you for your feedback. I really appreciate it :)
I have used PostgreSQL as an event store before without issues. However, I don't understand why you would need to lock the events for an entire Aggregate (assuming it is identified by a UUID).
Can you show me why you need to do this, please? (with an example) :)
An example domain is push-notification settings in a mobile app. Push notifications are delivered by an omnichannel communications provider. A customer may have a number of mobile devices, all bound to his (single) phone number. A device is bound to a phone number through a process called personalization. One of a customer's devices is considered the main one; push notifications are delivered to that device. A customer may switch the main device as he wishes. Next, there are a few kinds of messages, and a customer may choose a delivery channel for each message type.
I've chosen Customer to be an aggregate, so a single event stream stores state for a particular customer.
To start getting notifications, the following events should occur:
Customer initialized
Delivery channel set
Device personalized
Main device set
Synchronized to omnichannel provider
Because of that last synchronization step, commands are stored and processed asynchronously. The omnichannel provider's API has limited throughput, and it regularly fails and needs retries. But we don't want the customer to wait.
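The retry part itself is nothing fancy (a hypothetical sketch; the real provider call is stood in by a fake here):

```python
import time

def sync_with_retry(call_provider, max_attempts=5, base_delay=0.01):
    """Retry a flaky provider call with exponential backoff.

    `call_provider` is a stand-in for the real omnichannel API call.
    """
    for attempt in range(max_attempts):
        try:
            return call_provider()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # back off before retrying

# Simulate a provider that fails twice, then succeeds.
calls = {"n": 0}
def flaky_provider():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("provider timeout")
    return "synced"

result = sync_with_retry(flaky_provider)  # succeeds on the third attempt
print(result)
```

The interesting part is not the retry loop, but that it runs asynchronously, after the event is already stored.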
A command processor needs to know the current state of a Customer to decide whether to reject or store an event, and with what event data.
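In code, that decision step looks roughly like this (a simplified sketch with made-up event shapes: fold the stream into state, then validate the command against it):

```python
def fold(events):
    """Rebuild a Customer's current state by replaying its event stream."""
    state = {"personalized_devices": set(), "main_device": None}
    for event in events:
        if event["type"] == "DevicePersonalized":
            state["personalized_devices"].add(event["device_id"])
        elif event["type"] == "DeviceDepersonalized":
            state["personalized_devices"].discard(event["device_id"])
        elif event["type"] == "MainDeviceSet":
            state["main_device"] = event["device_id"]
    return state

def decide_set_main_device(events, device_id):
    """Reject the command or emit a MainDeviceSet event, based on state."""
    state = fold(events)
    # Invariant: a device that is not personalized may not be set as main.
    if device_id not in state["personalized_devices"]:
        raise ValueError("device %s is not personalized" % device_id)
    return {"type": "MainDeviceSet", "device_id": device_id}

stream = [
    {"type": "CustomerInitialized"},
    {"type": "DevicePersonalized", "device_id": "A"},
]
new_event = decide_set_main_device(stream, "A")  # accepted
```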
The point is, if I read some state from the database and rely on it, the next moment it may become outdated, and I may get conflicting events for a customer. So I figured I need a way to lock a particular customer, so that his event stream reliably remains in the state I relied on to make the next decision about changing it.
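For completeness, the lock-free alternative I've been reading about is optimistic concurrency: give every event a per-stream version, and let a `UNIQUE (stream_id, version)` constraint reject the loser of a race. A self-contained sketch using SQLite so it runs anywhere; the same constraint works in PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        stream_id TEXT    NOT NULL,
        version   INTEGER NOT NULL,
        type      TEXT    NOT NULL,
        UNIQUE (stream_id, version)  -- two writers cannot claim the same slot
    )
""")

def append(stream_id, expected_version, event_type):
    """Append at expected_version + 1; fails if someone else got there first."""
    conn.execute(
        "INSERT INTO events (stream_id, version, type) VALUES (?, ?, ?)",
        (stream_id, expected_version + 1, event_type),
    )

append("customer-42", 0, "CustomerInitialized")
append("customer-42", 1, "DeliveryChannelSet")

# A second processor that also read version 1 and raced us loses cleanly:
try:
    append("customer-42", 1, "DeliveryChannelSet")
except sqlite3.IntegrityError:
    print("concurrent write detected: reload state and retry the command")
```

Instead of holding a lock while deciding, the processor detects at write time that its decision was based on stale state, rereads the stream, and retries.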
EDIT: a device may also be depersonalized, which is important for security reasons. So, for example, a device that is not personalized may not be set as main.
Please don't assume that my comments are right or that they fit your domain; I'm commenting more as a question than a statement :)
I think that these events are not business-domain events but more of an application concern. They seem to describe technical clients (browsers, devices, etc.) rather than what the Customer can do. For example, user login is not a problem for the Customer aggregate but a matter of application access.
I think Eventual Consistency is key here, and that you should (if possible) try to design the system with it in mind.
Please let me know your thoughts on this. I enjoy such discussions :)