Discussion on: Event Storage in Postgres

 
Kasey Speakman

I haven't tried an advisory lock. My basic understanding is that advisory locks are lighter-weight than row/table locks, so they should reduce lock/unlock overhead. But how much, I haven't measured.
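For anyone curious, here is a minimal sketch of what that could look like (the table, column, and stream names are hypothetical, not from the article):

```sql
-- Sketch: serialize appends per stream with a transaction-scoped
-- advisory lock instead of a table/row lock. hashtext() maps the
-- stream id onto the integer key the lock function expects.
BEGIN;
SELECT pg_advisory_xact_lock(hashtext('account-123'));

INSERT INTO event (stream_id, version, type, data)
VALUES ('account-123', 5, 'FundsDeposited', '{"amount": 100}');
COMMIT;  -- the advisory lock is released automatically here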

My observation in testing is that write perf hits a degradation breakpoint when the indexes can no longer fit in memory. The bigger they get, the more cache misses, and the more disk IO has to be done. The first fix is scaling up db mem/cpu. When that limit is hit or becomes impractically expensive, it's time to move to multiple nodes, e.g. sharding.
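If you want to see that breakpoint coming, something like this (assuming an events table named `event`) compares index footprint against the table and the configured cache:

```sql
-- Sketch: once pg_indexes_size outgrows what shared_buffers
-- (plus OS cache) can hold, expect the write degradation above.
SELECT pg_size_pretty(pg_relation_size('event')) AS table_size,
       pg_size_pretty(pg_indexes_size('event'))  AS index_size;

SHOW shared_buffers;
```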

The ultimate solution for perf is to have no global position at all. The only order that matters is within a stream; cross-stream ordering is best effort, so you could use something unstable (sometimes wrong) like a timestamp. Readers should tolerate some out-of-order events across streams. For example, no strict foreign key enforcement in tabular views (aka projections), because the causal event from one stream might mistakenly be ordered after its effect event in another stream. But the events within each stream are still totally ordered correctly. A stable ordering across streams can be very convenient to have, so I probably wouldn't take this step until perf was a critical constraint.
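A hypothetical shape for that (not the article's schema), where total order is only guaranteed per stream:

```sql
-- Sketch: no global position column to contend over. Order exists
-- only within (stream_id, version); recorded_at gives a best-effort
-- (sometimes wrong) cross-stream ordering.
CREATE TABLE event (
    stream_id   text        NOT NULL,
    version     int         NOT NULL,
    type        text        NOT NULL,
    data        jsonb,
    recorded_at timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (stream_id, version)
);
```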

ilija

At how many rows did you see the performance degradation? Is it in the range of 1, 10, or 100 million?

Kasey Speakman • Edited

I can't remember the exact breakpoints, and it would vary anyway based on your hardware and index structures (3 fields vs 1 field, etc). This is anecdotal memory rather than a scientific experiment. I wrote a chess simulation program that generated unique boards and looked for checkmates. It would start off saving millions of boards per second, but it very quickly stabilized in the tens of thousands of writes per second. I was using COPY to insert rows in batch sizes of 1024. I ran it for 2-3 months. When I stopped, the db had billions of records and a write rate in the single digits. DB size was getting close to 3TB, and the indexes were bigger than the data table. I learned a lot from that. I was very impressed with postgres.
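For reference, the batched COPY pattern looks roughly like this (table and column names here are hypothetical). In practice the client driver streams each batch of 1024 rows over the COPY protocol; shown in psql form with one inline row:

```sql
-- Sketch: bulk insert via COPY rather than row-by-row INSERTs.
COPY boards (fen, is_checkmate) FROM STDIN (FORMAT csv);
rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1,false
\.
```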

Also, the performance drop-off was accelerated by duplicate checking, which also got more expensive (more IO) as time went on. I was tracking the dupe rate as well, but I don't remember it off the top of my head.

ilija

Interesting. Did you also test your implementation of the event store with large amounts of data?

Kasey Speakman • Edited

Frankly, no. We are using it for business ops rather than something like sensor data. If our write rate crossed even 1 event per second as a monthly average, we would have already grown the organization dramatically. The goal is not to ride this solution out to a million ops per second, where it becomes very complicated to use, but to evaluate all the options when we reach the next inflection point. Though I constructed this and continue to iterate on it, I realize it is useful and instructive within a particular context, and I am open to other options to meet different needs.

I am still looking for perf gains with this, as that also means better efficiency with the same resources. And that means less cost for cloud services, which is especially helpful early in a product life cycle. So I very much appreciate the code you posted and this discussion. :) And any testing anyone wants to do. I still want to post my updates and the F# libraries I use on top of this at some point. They're still not quite where I want them.

ilija • Edited

Yes, you are correct. I am currently using roughly the same event store implementation in postgres for a SaaS app, and the event rate is really low. However, migrating the data to another system could be painful, so you want to make the best decisions as early as possible while keeping the infra manageable without using many different systems. My worries for postgres are mostly about table size. A rate of 1 event per second already produces about 32 million events per year, and after a year or two you still want decent performance for fetching an event stream or inserting an event.

I also found another event store implementation for postgres, used with elixir: github.com/commanded/eventstore. They use a different table structure, as can be found here: github.com/commanded/eventstore/bl...

Kasey Speakman • Edited

The nice thing about event sourcing is that events are the source of truth and they are pretty portable. We have copies of the system for multiple environments as I'm sure you do. So it would not be too much of a stretch to work out the integration details with a different solution, spin up a new system with it, save the existing events onto that copy, and validate it.

32 million rows is easy for postgres as far as just storage and insertion perf. One of our products is getting close to a million events after 6 years and it still runs lightly on micro instances. So I have lots of scale-up room before I need to reevaluate. That one used full consistency, so the read models are in the same db and there is write amplification. I used different schemas per tenant so it's not one set of indexes. If that does make a difference, the same could be accomplished with table partitioning.
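If anyone wants the partitioning variant, here is a sketch (hypothetical schema; the point is that each tenant partition carries its own smaller indexes, just like schema-per-tenant does):

```sql
-- Sketch: list-partition events by tenant instead of schema-per-tenant.
-- Note the partition key must be part of the primary key.
CREATE TABLE event (
    tenant_id text NOT NULL,
    stream_id text NOT NULL,
    version   int  NOT NULL,
    type      text NOT NULL,
    data      jsonb,
    PRIMARY KEY (tenant_id, stream_id, version)
) PARTITION BY LIST (tenant_id);

CREATE TABLE event_acme PARTITION OF event FOR VALUES IN ('acme');
```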

The length of a stream (and therefore replay time) is more in the design camp than the perf camp. I've been meaning to make a post summarizing things I've learned and rules of thumb I use for event stream design. For example I will use unbounded streams for repeated processes. Like a yearly audit. But I don't replay the whole thing. I replay from the last completed audit. So replay sees a bounded number of events every time. And perf doesn't significantly change with years of history.
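In query terms, replay of such a stream stays bounded like this (schema hypothetical; $2 is the version recorded at the last completed audit):

```sql
-- Sketch: replay only the events since the last completed audit,
-- so replay cost stays flat regardless of total stream length.
SELECT version, type, data
  FROM event
 WHERE stream_id = $1
   AND version   > $2
 ORDER BY version;
```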

I will check that implementation out and see what I can learn. Thanks for linking it! :)

Kasey Speakman

I only looked at the table structures. It looks like it's meant to be eventstore.com on postgres. Interesting idea.

One of the things I didn't mention with scaling limits in postgres is connection limits. Each connection eats a non-trivial amount of server resources. Our AWS t2.micro instances can handle about 85 before they can't make new ones. (Ask me how I know.) And resources allocated to sessions are resources not allocated to running SQL, so we want to stay well below the limit. This is why things like PgBouncer exist.
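A quick way to see how close an instance is to that ceiling (standard catalog views, nothing specific to this event store):

```sql
-- Sketch: current connections vs the configured maximum.
SELECT count(*)                           AS open_connections,
       current_setting('max_connections') AS max_connections
FROM pg_stat_activity;
```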

I want to explore creating my own event store service that accepts ES commands and uses SignalR or WebSockets for listeners, on top of my postgres ES. It could maintain its own limited number of connections to give the db as many resources as possible, assign Positions from memory to alleviate that bottleneck, and use opportunistic batching for yet more perf. Practically though, this creates many new downsides: an extra network hop (+latency), new failure modes and recovery models, pub/sub handling. This is all potentially fun stuff I might like to do for my own learning. But when approaching the limits of the original solution, for work I'd be evaluating eventstore.com instead, since they have already solved most of these problems. And it's highly available and free to use. I'm sure it has its own issues / workarounds, but probably so would my service. :)