DEV Community

Discussion on: What is Event Sourcing?

Kasey Speakman • Edited

Assuming you mean distributed/scalable, I have not found this unicorn as yet. Maybe we need to develop it. :)

For more detail, below is an excerpt from a StackOverflow answer I gave about using Kafka as an event store. Note that the EventStore product mentioned below is open source, and they will provide a tuned AMI for AWS if you pay for support.


It seems that most people roll their own event storage implementation on top of an existing database. For non-distributed scenarios, like internal back-ends or stand-alone products, creating a SQL-based event store is well documented, and libraries are available on top of various kinds of databases. There is also EventStore, which is built for this purpose.
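To make the SQL pattern concrete, here is a minimal sketch using SQLite: an append-only events table with a per-stream version number, where the composite primary key doubles as an optimistic concurrency check. Table and column names here are illustrative, not from any particular library.

```python
import json
import sqlite3

# Minimal append-only event store sketch; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        stream_id TEXT    NOT NULL,
        version   INTEGER NOT NULL,      -- per-stream sequence number
        type      TEXT    NOT NULL,
        data      TEXT    NOT NULL,      -- JSON payload
        PRIMARY KEY (stream_id, version) -- doubles as concurrency check
    )
""")

def append(stream_id, expected_version, event_type, data):
    """Append one event; raises IntegrityError if another writer won."""
    conn.execute(
        "INSERT INTO events (stream_id, version, type, data) VALUES (?, ?, ?, ?)",
        (stream_id, expected_version + 1, event_type, json.dumps(data)),
    )

def read(stream_id):
    """Load a stream's events in order, to rebuild its state."""
    rows = conn.execute(
        "SELECT type, data FROM events WHERE stream_id = ? ORDER BY version",
        (stream_id,),
    )
    return [(t, json.loads(d)) for t, d in rows]

append("account-1", 0, "Opened", {"owner": "alice"})
append("account-1", 1, "Deposited", {"amount": 100})
```

A concurrent writer that appends with a stale `expected_version` violates the primary key and gets an `IntegrityError`, which is the whole concurrency story in this pattern.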

In distributed scenarios, I've seen a couple of different implementations. Jet's Panther project uses Azure CosmosDB, with the Change Feed feature to notify listeners. A similar implementation I've heard about on AWS uses DynamoDB with its Streams feature to notify listeners. The partition key should probably be the stream id, for the best data distribution (to lessen the amount of over-provisioning). However, a full replay across streams in Dynamo is expensive, both in reads and in cost. So that implementation was also set up to have Dynamo Streams dump events to S3. When a new listener comes online, or an existing listener wants a full replay, it reads S3 to catch up first.
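The reason the stream id makes a good partition key can be shown with a toy hash-partitioning model (illustrative only, not DynamoDB's actual algorithm): every event of one stream hashes to the same partition, so per-stream reads hit a single partition, while many independent streams spread the write load.

```python
import hashlib

# Toy model of hash-partitioning by stream id; partition count is arbitrary.
NUM_PARTITIONS = 4

def partition_for(stream_id):
    # Stable hash: the same stream id always maps to the same partition.
    digest = hashlib.md5(stream_id.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

partitions = {p: [] for p in range(NUM_PARTITIONS)}

def write(stream_id, event):
    partitions[partition_for(stream_id)].append((stream_id, event))

# All of this stream's events land on one partition...
for i in range(3):
    write("order-42", f"event-{i}")
```

The flip side, as noted above, is that a replay *across* streams has to touch every partition, which is why dumping events to S3 for catch-up reads is attractive.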

My current project is a multi-tenant scenario, and I rolled my own on top of Postgres. Something like Citus seems appropriate for scalability, partitioning by tenant+stream.