Sam

Event Sourcing: a one year retrospective

It has been a little over a year since we started building and supporting a production service based on Event Sourcing: instead of designing a normalised relational database schema, we modelled the persistence layer of our domain as an immutable event stream.

With this approach the read and write models are independent: you can create whatever access pattern is useful to your application, be it a relational database schema or something more specialised.

Our simplest example is the "wallet" aggregate, which records events as credits are granted and spent. Below is the wallet event stream and the database table the events are projected into; note that the event stream carries far more information and context than our read model actually cares about (the total number of credits the user has at any one time).

+------------------------+--------+----------------+---------------------------------------------------------+
| Time                   | Stream | Event          | Payload                                                 |
+------------------------+--------+----------------+---------------------------------------------------------+
| 9/02/2022 02:44:19 PM  | wallet | CreditsGranted | {                                                       |
|                        |        |                |     "amount": 1                                         |
|                        |        |                | }                                                       |
| 10/02/2022 08:38:48 AM | wallet | CreditsSpent   | {                                                       |
|                        |        |                |     "amount": 1,                                        |
|                        |        |                |     "type": "jobLaunch",                                |
|                        |        |                |     "typeMetadata": {                                   |
|                        |        |                |         "jobId": "9348e9b3-43b8-4531-b79c-d70d0cb66ba0" |
|                        |        |                |     }                                                   |
|                        |        |                | }                                                       |
+------------------------+--------+----------------+---------------------------------------------------------+
MariaDB [main]> select aggregate_id, credits from account_projection where credits > 0;
+--------------------------------------+---------+
| aggregate_id                         | credits |
+--------------------------------------+---------+
| 911357c8-4b63-4e5a-8b5d-d38eff30a7cb |       7 |
| ff452810-32d3-483b-9d93-064a1df11190 |       9 |
+--------------------------------------+---------+
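
The glue between the two is a projector: a consumer that receives each stored event and folds it into the read table. Here is a minimal sketch of what such a projector can look like, assuming EventSauce's MessageConsumer interface (the exact interface name varies between versions) and a plain PDO connection; the SQL and class name are illustrative, not our actual code:

use EventSauce\EventSourcing\Message;
use EventSauce\EventSourcing\MessageConsumer;

// Sketch of a projector: folds wallet events into the account_projection table
// shown above. Table and column names are illustrative; idempotency and error
// handling are left out.
final class AccountProjector implements MessageConsumer {
    public function __construct(private \PDO $connection) {}

    public function handle(Message $message): void {
        $event = $message->payload();

        if ($event instanceof CreditsGranted) {
            $this->adjustCredits($message, $event->amount);
        } elseif ($event instanceof CreditsSpent) {
            $this->adjustCredits($message, -$event->amount);
        }
    }

    private function adjustCredits(Message $message, int $delta): void {
        // Upsert the running balance; the write model has already validated the change.
        $this->connection->prepare(
            'INSERT INTO account_projection (aggregate_id, credits) VALUES (?, ?)
             ON DUPLICATE KEY UPDATE credits = credits + VALUES(credits)'
        )->execute([$message->aggregateRootId()->toString(), $delta]);
    }
}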

Our Wallet aggregate, responsible for recording the CreditsGranted and CreditsSpent events:

final class Wallet implements AggregateRoot {
    use AggregateRootBehaviourWithRequiredHistory;

    private int $totalCredits = 0;

    public static function create(WalletId $id): static {
        return new static($id);
    }

    public function grantCredits(int $amount): void {
        if ($amount < 1) {
            throw CreditAddException::because('Cannot grant an amount of credits less than 1');
        }
        $this->recordThat(new CreditsGranted($amount));
    }

    protected function applyCreditsGranted(CreditsGranted $event): void {
        $this->totalCredits += $event->amount;
    }

    public function getTotalCredits(): int {
        return $this->totalCredits;
    }

    public function spendCredits(int $amount, CreditSpendTypeInterface $spendType): void {
        if ($amount < 1) {
            throw CreditSpendException::because('Cannot spend an amount of credits less than 1');
        }
        if ($amount > $this->totalCredits) {
            throw CreditSpendException::because('Not enough credits in wallet');
        }
        $this->recordThat(new CreditsSpent($amount, $spendType));
    }

    protected function applyCreditsSpent(CreditsSpent $event): void {
        $this->totalCredits -= $event->amount;
    }
}
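
For completeness, here is roughly how application code drives the aggregate, sketched against an EventSauce-style AggregateRootRepository; the handler name and wiring are illustrative:

use EventSauce\EventSourcing\AggregateRootRepository;

// Sketch of a command handler. Everything other than Wallet, WalletId and
// CreditSpendTypeInterface is made up for illustration.
final class SpendCreditsHandler {
    public function __construct(private AggregateRootRepository $wallets) {}

    public function __invoke(WalletId $walletId, int $amount, CreditSpendTypeInterface $spendType): void {
        /** @var Wallet $wallet */
        $wallet = $this->wallets->retrieve($walletId);  // reconstituted by replaying its event stream
        $wallet->spendCredits($amount, $spendType);     // records CreditsSpent, or throws CreditSpendException
        $this->wallets->persist($wallet);               // appends the newly recorded events to the stream
    }
}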

With the quick tour out of the way, what are some of the pros and cons of this pattern after using it for a year?

Flexible read models 😁

Being able to quickly design a schema and rewrite it later has been extremely useful. When rapidly prototyping features like the wallet above, our first version only validates the user's credit balance when they attempt to launch a job on our platform, but at any time we could expand the read model into a full history of transactions.
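
Adding that history later would just mean attaching another consumer to the same stream. A hypothetical sketch, with an invented wallet_transaction_projection table, to show that the write model stays untouched:

use EventSauce\EventSourcing\Message;
use EventSauce\EventSourcing\MessageConsumer;

// Hypothetical second read model: one row per credit movement, built from the
// exact same events as the balance projection.
final class WalletTransactionProjector implements MessageConsumer {
    public function __construct(private \PDO $connection) {}

    public function handle(Message $message): void {
        $event = $message->payload();

        $delta = match (true) {
            $event instanceof CreditsGranted => $event->amount,
            $event instanceof CreditsSpent => -$event->amount,
            default => null, // not a wallet event, ignore it
        };

        if ($delta === null) {
            return;
        }

        // The message's time-of-recording metadata could be stored alongside for a dated history.
        $this->connection->prepare(
            'INSERT INTO wallet_transaction_projection (aggregate_id, delta) VALUES (?, ?)'
        )->execute([$message->aggregateRootId()->toString(), $delta]);
    }
}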

Another tangible benefit has been doing away with the notion of DRY database design. Our application is a two-sided marketplace, where parties transact around a "job" aggregate. The access each party gets to information about the job over its lifecycle is vastly different. Instead of a single entity that is carefully access-controlled and guarded, we project into two completely different "job owner" and "job participant" schemas, which vastly simplifies access control and reduces the complexity of our application. When working on the experience for either party, you have a read model tailored to that persona, which you can confidently hand over for consumption.

Audit trails 😁

A real struggle with traditional database schemas is figuring out how much information is worth keeping at a given moment in time. Do you want to store when something was created? Probably. What about when it was last updated? Yeah, that seems useful too. What about when a single property changed? Maybe, maybe not. What about timestamps for each change of each property, along with who initiated the change? Certainly not.

The event stream brings time into the equation as a first-class citizen. Instead of timestamp columns or logs that give narrow insight into what's going on, a collection of unchanging events shows you, in every case, when something happened and who initiated it.
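
The "who" typically rides along as message metadata. A sketch of how that can be captured, assuming an EventSauce-style MessageDecorator and an invented CurrentUserProvider service and header name:

use EventSauce\EventSourcing\Message;
use EventSauce\EventSourcing\MessageDecorator;

// Sketch: stamp every message with the id of the user who caused it, at the
// moment it is recorded.
final class InitiatorDecorator implements MessageDecorator {
    public function __construct(private CurrentUserProvider $users) {}

    public function decorate(Message $message): Message {
        // The library already records when; this header adds the who.
        return $message->withHeader('__initiator_id', $this->users->currentUserId());
    }
}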

This is helpful for auditing, but I've also noticed our app increasingly bubbles up information about who did something and when, which I think has measurably improved some of our interfaces. An example from the front-end of our ACL system:

(Screenshot: the front-end of our ACL system, showing who performed an action and when.)

More tooling 😐

With separate read and write models, plus tools to orchestrate rebuilding the read models, there is certainly more tooling and code required to implement this pattern.

There will also be a stage where more investment is required. At the moment we truncate and rebuild all our projections on every deploy, since at our current volume of events this takes under 30 seconds. At some future point we may need to be more selective: rebuild them incrementally, or add build steps to streamline rebuilding into a new schema.

Excerpts of our rebuild command:

    bin/console dpa:rebuild-projections
    Rebuilding projections
    +----------------------+---------------+
    | Aggregate            | Message Count |
    +----------------------+---------------+
    | access_control_list  | 1880          |
...
    | wallet               | 16            |
    | total                | 7758          |
    +----------------------+---------------+

    Finished batch 0 default/batch0: 32.00 MiB - 917 ms
...
    Finished batch 14 default/batch14: 40.00 MiB - 958 ms

    Total events: 7,758 in default/rebuild: 42.00 MiB - 26310 ms
    Projections rebuilt
    bin/console dpa:update-projection-status --done
    Existing status: rebuilding
    New status: up to date
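
Conceptually the rebuild is simple: wipe the read tables, then replay the entire message store through every projector in order. A rough sketch, with the truncate() contract and the message iterator both invented for illustration:

use EventSauce\EventSourcing\Message;
use EventSauce\EventSourcing\MessageConsumer;

// Hypothetical contract for projections that can be wiped and rebuilt from scratch.
interface RebuildableProjection extends MessageConsumer {
    public function truncate(): void;
}

// Sketch of a rebuild: wipe every read table, then replay the whole message
// store through the projectors. The iterable of messages is assumed to come
// from the message store; the real command batches the replay and reports
// progress as in the output above.
final class RebuildProjections {
    /**
     * @param iterable<Message> $allMessages
     * @param RebuildableProjection[] $projections
     */
    public function __construct(private iterable $allMessages, private array $projections) {}

    public function run(): void {
        foreach ($this->projections as $projection) {
            $projection->truncate();
        }

        foreach ($this->allMessages as $message) {
            foreach ($this->projections as $projection) {
                $projection->handle($message);
            }
        }
    }
}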

Poor choice for some domains 😐

There are some domains where I think the pattern hasn't added much value. So far, any time we're attempting to pull, sync or represent information from another service, system or process orchestrated elsewhere, Event Sourcing adds overhead without providing much benefit.

The source of truth already lives somewhere else, and knowing exactly when a piece of information crossed the boundary from one service into another is less useful.

Thankfully, pivoting away from Event Sourcing in such cases is extremely easy: project into a schema, switch your application code to read and write directly against that schema, and stop rebuilding the projection.

With these caveats in mind, I still think the pattern has been extremely useful and has paid back the investment many times over.

If you are interested in implementing this pattern, I found EventSauce to be a fantastic reference implementation, community and learning resource.


Header generated by DALL-E with the prompt: A wallet with digital features, with envelopes orderly stacked within it, digital art
