
Peter Grman

Originally published at peter.grman.at

Repeatable Read vs Serializable Isolation Level in Postgres

The question I want to answer here is the one I've been asking myself for the last 1.5 years:

When to use which isolation level and what is the difference in real world scenarios?

What's wrong with read committed?

First off, you might wonder why I'm comparing these two strict isolation levels at all instead of just using read committed. Throughout my professional career, and as a student, I had always used the default transaction isolation level, read committed, and only knew about the other ones in theory. (OK, one company that I joined actually also used read uncommitted, but let's not talk about that.)

So, what's wrong with the read committed isolation level? Depending on your use case, maybe nothing. However, it does allow behaviors that can cause bugs which are hard to test for and fix. In short, those behaviors are:

  1. Lost update - two transactions read the old state, both perform changes, and the last write wins;
  2. Non-repeatable reads and phantom reads - when you execute the same query a second time, you might get fewer or more rows back than the first time;

Yes, you can prevent both of them. You can get around lost updates with explicit locking, and non-repeatable reads aren't an issue if you only read once. But this introduces additional complexity in your application, which we didn't want to deal with.
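To make the lost update concrete, here is a minimal sketch of two psql sessions running at the default read committed level; the items table and its single row are made up for illustration:

```sql
-- hypothetical table: items(id, in_stock), containing one row: (1, 5)

-- session A
BEGIN;
SELECT in_stock FROM items WHERE id = 1;      -- reads 5

-- session B
BEGIN;
SELECT in_stock FROM items WHERE id = 1;      -- also reads 5

-- session A sells one item and writes back the value it computed
UPDATE items SET in_stock = 4 WHERE id = 1;
COMMIT;

-- session B sells one item too; its UPDATE waited for A's commit,
-- then simply overwrites A's value
UPDATE items SET in_stock = 4 WHERE id = 1;
COMMIT;

-- two items were sold, but in_stock only dropped by one: A's write is lost
```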

Serializable vs Repeatable Read - documentation

After reading the online documentation, the Postgres wiki and even mailing lists, I still wasn't quite sure what the big difference between these isolation levels was. According to the SQL standard, repeatable read can allow phantom reads, while serializable can't. But in Postgres, they don't differ in this regard. In fact, up to Postgres 9.0, there wasn't a distinct repeatable read isolation level at all. Go check the documentation if you don't believe me.

With Postgres 9.1, SSI arrived: the old serializable isolation level effectively became today's repeatable read, and a new, stricter serializable level emerged. From the wiki:

[...] Serializable Snapshot Isolation (SSI) in PostgreSQL compared to plain Snapshot Isolation (SI). These correspond to the SERIALIZABLE and REPEATABLE READ transaction isolation levels, respectively, in PostgreSQL beginning with version 9.1.

I won't go into deep technical detail about what SSI is and how it works, mostly because I'm not sure I understood it fully. Instead, this post will focus on the practical differences between the two isolation levels as I experienced them when switching from serializable to repeatable read in a real world application.

When you are running transactions at the repeatable read (or serializable) isolation level, you should expect them to fail from time to time with:

ERROR: could not serialize access due to concurrent update

However, thanks to SSI and its fancy predicate locks (at least I think that's how it's implemented), Postgres can now also abort a serializable transaction with another error:

ERROR: could not serialize access due to read/write dependencies among transactions

If you want to know more about SSI, then you should definitely check out the explanation on the Postgres wiki. I'll try to stick to practical examples for the rest of this post.
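To make that concrete, this is roughly the shape of such a transaction from the client's point of view (the bookings table is made up). As far as I can tell, both errors above are reported as SQLSTATE 40001, serialization_failure, which is the signal to retry:

```sql
BEGIN ISOLATION LEVEL SERIALIZABLE;   -- or: BEGIN ISOLATION LEVEL REPEATABLE READ;

-- the reads and writes of one request, e.g. against a hypothetical bookings table
SELECT count(*) FROM bookings WHERE car_id = 7;
INSERT INTO bookings (car_id, from_ts, to_ts)
VALUES (7, '2019-05-01 10:00', '2019-05-03 18:00');

-- any statement above, or the COMMIT itself, may fail with SQLSTATE 40001;
-- the client is then expected to retry the whole transaction
COMMIT;
```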

Serializable all the way

At the beginning of 2017, together with several other co-founders, I started building an ERP system for the travel industry. Because we had previously experienced the pain of inconsistent databases due to the lost update problem, we decided to trade performance for stability (at first). The idea was simple:

  • Every request is packed into a serializable transaction to guarantee that concurrent requests wouldn't screw up the system, and we wouldn't need to think about it (a minimal way to set this up is sketched right after this list).
  • Once we actually have many customers and this becomes an issue, we'll analyze our options and fix it. But we'd think about it only when we have that problem and not right from day one.
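One way to get this behavior without touching every single query is to change the default isolation level for the application's database; a minimal sketch, with a made-up database name:

```sql
-- make serializable the default for every new transaction in this database
ALTER DATABASE travel_erp SET default_transaction_isolation TO 'serializable';

-- or set it only for the current session, e.g. from the connection pool setup
SET default_transaction_isolation TO 'serializable';
```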

This worked great, with one tiny exception. Remember those errors I mentioned before? During our CI builds we were also testing concurrent requests and pretty soon started getting quite a few "could not serialize access due to read/write dependencies among transactions" errors. That shouldn't have come as a surprise, though, since the Postgres documentation clearly states that you need to be ready to retry transactions when using either of these two isolation levels. We were able to solve this fairly easily and in a global manner, so we wouldn't have to deal with it anymore. (The implementation details of how we did it will follow in a separate post.)

read/write dependencies in the real world

We were running our application in production with the serializable isolation level for almost a year. We closely monitored how many transaction retries we had during this time. Our assumption was that it would get better once we had more data. The reason for this assumption was our understanding of how SSI supposedly works. Part of it is based on predicate locks, which lock down index pages. If you only have a few rows in your table (as you do during integration tests), all the data in a table will most likely fit into one or a few index pages. This means you'll run into serialization conflicts very easily. That seemed fine to us, since in real world scenarios you wouldn't have little data and a lot of requests. In fact, we almost never had concurrent write requests. You should also keep in mind that these were internal retries and only led to slightly longer response times. So even in the unlikely case of concurrent writes, the end users didn't notice anything.

While everything was running as expected in production, we also had an integration environment. This environment contained countless partners with their own test accounts and data. Additionally, we started doing performance tests for bigger prospects. All of this led to quite a lot of data in the integration environment. That's when we realized that things weren't getting better with more data. We were trying to come up with a new model explaining what we saw (as I mentioned, just by reading the documentation on SSI and the serializable isolation level, it was quite hard to understand the real world implications for our use case). We realized that the majority of failures happened during one particular request. Additionally, these were almost exclusively read/write dependency errors; only a tiny fraction (less than 0.1%) were concurrent update errors.

read+insert tables become the bottleneck

We had by far the highest number of collisions on one specific table. We would first read data from this table to verify whether an additional row could be inserted. Why would we do that? Imagine tracking inventory. Instead of keeping a count of how many items you have, you keep track of how many you bought and sold. Every time you want to sell an item (imagine it's an online shop), you need to verify that you still have it. If somebody else bought it in the meantime, the data you read from the table, and based your decision on, has changed. If you then also try to write at the same time, Postgres decides that this isn't serializable anymore. That's why we were getting so many of those read/write dependency errors.
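Sketched in SQL, the pattern on that table looked roughly like this (table and column names are made up). Two transactions doing this concurrently for the same item create exactly the read/write dependency that Postgres refuses to serialize:

```sql
BEGIN ISOLATION LEVEL SERIALIZABLE;

-- read: how many units of item 42 are left, based on the bought/sold events?
SELECT COALESCE(SUM(quantity), 0) AS in_stock
FROM inventory_events
WHERE item_id = 42;

-- the application checks in_stock > 0 and then records the sale
INSERT INTO inventory_events (item_id, quantity)
VALUES (42, -1);

-- if a concurrent transaction performed the same read+insert for item 42,
-- one of the two fails with the read/write dependencies error
COMMIT;
```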

Doing this might seem like a stupid idea, since keeping a count is so easy. However, instead of selling items, imagine you're renting them out. You have, for instance, rental cars, which can be picked up at any hour of the day and brought back the same day or two weeks later. In that case it becomes really difficult to figure out how many cars are still available for lease, whereas counting how many you have already rented out is trivial.

Given how (I think) Postgres SSI works, it might have been possible to improve the predicate locking by improving the index structure on that table. However, it's not only the predicate that is considered, but the whole index page, with an additional random factor. Tweaking this to get things right seemed like an uphill battle, so we chose a different path.

Moving on to repeatable read

First we switched the tests in our integration pipeline to repeatable read. Instead of being restricted to 2 concurrent processes, we could now easily run more. The highest I tested with was 64 concurrent agents bombarding our application. Of course, the performance was poor on the build agent, but it worked and led to almost no transaction retries. So our expectations were met. Finally we had a model and it fit!

The problems, however, started shortly after. Because repeatable read doesn't throw read/write dependency errors, we knew we were going to have an issue with our read+insert table. We thought of two possible solutions:

  1. Serialize the access ourselves.
  2. Hope for the best, assuming that it is really unlikely to cause any real world problems.

Depending on your use case, option 2 (hope for the best) might actually be quite viable. Sure, you might oversell a bit, but orders also get canceled, so maybe nothing bad will ever happen. Unfortunately, for us, this wasn't an option. Instead, we had to serialize the access manually in those few cases and be smarter about it than Postgres' serializable isolation level.

Manually serialize access

You might think: this is easy, just run SELECT [...] FOR UPDATE. That only works at the read committed isolation level, because ...

  • ... for repeatable read transactions it actually isn't necessary, since they would fail with a concurrent update error anyway. (However, it does make the locking more aggressive, if that's what you're looking for.)
  • ... repeatable read transactions can only see data that was committed before the transaction started. So even though a concurrent transaction T2 would have to wait for T1 to finish before continuing past the SELECT [...] FOR UPDATE lock, it still wouldn't be able to read the new data. (This actually causes another interesting issue, which I'll explain in detail in another post.)
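For contrast, under read committed the classic pattern does do the job, because the second transaction blocks on the row lock and then sees the freshly committed value; a minimal sketch with a hypothetical counter table:

```sql
BEGIN;  -- default read committed

-- lock the row; a concurrent transaction running the same SELECT ... FOR UPDATE
-- blocks here until we commit, and then sees our updated value
SELECT in_stock FROM inventory_counters WHERE item_id = 42 FOR UPDATE;

-- the application checks in_stock > 0, then writes the decrement back
UPDATE inventory_counters SET in_stock = in_stock - 1 WHERE item_id = 42;

COMMIT;
```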

Keep in mind that, in repeatable read, this serialization happens automatically the moment you update a shared row. So if, in addition to storing the events of bought and sold items, you keep a count of how many items you have in your inventory, that count is a shared resource, and updating it in one transaction will cause all concurrent ones to fail with a concurrent update error. If you don't have such a shared row naturally (even if it is just an optimization with calculated values), you can create one.
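As a minimal sketch (table names made up again): the counter row serves purely as a conflict trigger, and the concurrent update error plus a retry does the serialization for us:

```sql
BEGIN ISOLATION LEVEL REPEATABLE READ;

-- the availability check against inventory_events happened earlier
-- in this same transaction; now store the actual event
INSERT INTO inventory_events (item_id, quantity) VALUES (42, -1);

-- touch the shared row: if a concurrent transaction updated it and commits
-- first, this UPDATE fails with "could not serialize access due to
-- concurrent update" and the whole transaction gets retried
UPDATE inventory_counters
SET sold = sold + 1
WHERE item_id = 42;

COMMIT;
```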

Take the rental cars as an example. Although it would be hard to correctly keep track of how many cars are reserved on which days, especially since one car could be reserved for just a few hours and therefore multiple times per day, we can still use that information for serialization. Every time a car is reserved, we update a counter in a separate table for every day that intersects the reservation, including the first and last day, even though we shouldn't count those as full days. This is enough to trigger a concurrent update error in the unlikely event of two requests coming in at (almost) exactly the same time for overlapping periods. If those periods didn't actually overlap, because we aren't considering the time component in our simple locking table, it is only a false positive: the transaction retries and nobody notices anything.
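A sketch of such a locking table update (the table name and the assumption that one row per car and day already exists are mine): any two reservations for the same car with overlapping date ranges touch at least one common row, so the later one fails with a concurrent update error and is retried:

```sql
BEGIN ISOLATION LEVEL REPEATABLE READ;

-- ... availability check and INSERT of the actual reservation go here ...

-- bump the per-day counters for every day the reservation touches,
-- including the (possibly partial) first and last day
UPDATE car_day_locks
SET reservations = reservations + 1
WHERE car_id = 7
  AND day BETWEEN DATE '2019-05-01' AND DATE '2019-05-03';

COMMIT;
```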

Conclusion

In the end, I was surprised by how well the serializable isolation level worked for us. However, if you want a bit more predictability and can account for read/write inconsistencies yourself, I'd favor repeatable read. Now that I know how to deal with retrying transactions (a blog post will follow), I don't think I want to go back to read committed any time soon. Of course, this greatly depends on the RDBMS you are using, and this article was Postgres specific. If you have a highly concurrent database where a lot of the rows are shared and you don't care too much about consistency when reading data multiple times, read committed might be the best option for you. But as a default, repeatable read seems just fine.
