@rhymes is right. This is event sourcing. We use it in one of our products, and will likely use it in future products, because our experience with it has been hugely positive, especially for long-term issues like schema changes and historical reports. These can be very painful in current-state-only databases, but are easy with event sourcing.
Instead of each entity being a row in a relational DB or a document in a document or KV store, each entity has a log (aka stream) of changes. Each individual entry in the log is called an event. (Usually events themselves end up as rows or documents in an event store.) The main operations of an event store are:
- Append events to a stream
- Read events from a stream
- Read events from all streams
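The three operations above can be sketched as a minimal in-memory store. This is purely illustrative (the class and method names are made up, not from any real event store library):

```python
from collections import defaultdict
from typing import Any

class EventStore:
    """Toy in-memory event store supporting the three core operations."""

    def __init__(self) -> None:
        self._streams: dict[str, list[dict[str, Any]]] = defaultdict(list)
        self._all: list[dict[str, Any]] = []  # preserves global order across streams

    def append(self, stream_id: str, event: dict[str, Any]) -> None:
        """Append an event to a stream."""
        self._streams[stream_id].append(event)
        self._all.append(event)

    def read_stream(self, stream_id: str) -> list[dict[str, Any]]:
        """Read events from a single stream, in order."""
        return list(self._streams[stream_id])

    def read_all(self) -> list[dict[str, Any]]:
        """Read events from all streams, in global append order."""
        return list(self._all)

store = EventStore()
store.append("order-42", {"type": "OrderPlaced", "total": 100})
store.append("order-42", {"type": "OrderPaidInFull", "amount": 100})
print(len(store.read_stream("order-42")))  # 2
```

A real event store adds durability, optimistic concurrency on append, and subscriptions, but the surface area really is about this small.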
Be warned that event sourcing does take a bit more instrumentation. Events by themselves are not easy to query; you have to rebuild query data from the events. In simple cases you can do this for every requested query, but some queries, particularly listing entities, are far too costly to rebuild every time. So you usually end up with two databases: an event store for writes and a current-state store for reads, plus a component that updates the current-state store as new events occur. The nice part is that the current-state store is completely disposable: a schema change is as simple as throwing it away, setting up a new store with the new schema, and replaying the events onto it.
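The updater component is essentially a fold over the events. A sketch of such a projector, using hypothetical event shapes (the field names here are invented for illustration):

```python
def project(events: list[dict]) -> dict[str, dict]:
    """Rebuild the read model (order id -> current state) from scratch.

    Because this runs from event zero, a schema change is just: drop the
    old read model, change this function, and replay all events through it.
    """
    current_state: dict[str, dict] = {}
    for event in events:
        order = current_state.setdefault(event["order_id"], {"status": "new", "paid": 0})
        if event["type"] == "OrderPlaced":
            order["status"] = "placed"
            order["total"] = event["total"]
        elif event["type"] == "OrderPaidInFull":
            order["status"] = "paid"
            order["paid"] = event["amount"]
        elif event["type"] == "OrderCanceled":
            order["status"] = "canceled"
    return current_state

events = [
    {"order_id": "42", "type": "OrderPlaced", "total": 100},
    {"order_id": "42", "type": "OrderPaidInFull", "amount": 100},
]
print(project(events))  # {'42': {'status': 'paid', 'paid': 100, 'total': 100}}
```

In production the same function typically runs incrementally, applying each new event to the read store as it arrives rather than replaying from scratch, but the replay path is what makes the read store disposable.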
The design of the events themselves also matters. For example, naming an event OrderUpdated is considered an anti-pattern, because no Order listener can know whether it cares about that event until it opens it and examines the data. For maximum usefulness, events should be named with semantic business meaning, e.g. OrderCanceled or OrderPaidInFull, and each event should contain the data necessary to process that specific event.
So, there is an up-front investment in knowledge and tooling. But the long-term payoff is high.