Lucas Barret

Optimistic Locking (Rails internals again)

Recently, I had the opportunity to interview Ben Sheldon, the creator and maintainer of GoodJob.
I often like to write an article to understand more about the gem we discover or rediscover on GemRubyShow.

To tell you more about the gem: GoodJob is an Active Job backend for your background jobs. It works only with Rails and is backed by Postgres. It uses an interesting Pub/Sub mechanism of Postgres called LISTEN/NOTIFY.
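To give you a taste of what LISTEN/NOTIFY looks like from Ruby, here is a minimal sketch using the pg gem's raw connection. This is not GoodJob's actual code; the channel name good_job_demo and the payload are made up for the example.

# Listener: subscribe to a channel and wait (up to 10 seconds) for a notification.
conn = ActiveRecord::Base.connection.raw_connection
conn.exec("LISTEN good_job_demo")
conn.wait_for_notify(10) do |channel, pid, payload|
  puts "Got #{payload.inspect} on #{channel} from backend #{pid}"
end

# Publisher, from another process: send a notification on the same channel.
ActiveRecord::Base.connection.execute("NOTIFY good_job_demo, 'hello'")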

If you want to learn more about GoodJob, you can listen to the episode on YouTube. It was an excellent discussion with Ben Sheldon.

Aside from the LISTEN/NOTIFY mechanism, another essential mechanism in GoodJob is advisory locks. In this article, we will try to better understand what locks are and how to use them in Rails.

Advisory Locks in Rails

According to the Postgres documentation, advisory locks are locks that have application-defined meanings. They are called advisory locks because it is up to you to use them correctly. They have a lot of benefits: they avoid table bloat and are automatically cleaned up by the server at the end of the session.
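Just to give a taste, here is a minimal sketch (not GoodJob's actual implementation) of taking a session-level advisory lock from Rails with raw SQL; the key 42 is an arbitrary, application-defined number:

# Take a session-level advisory lock identified by the application-defined key 42.
ActiveRecord::Base.connection.execute("SELECT pg_advisory_lock(42)")
begin
  # ... work that must not run concurrently for this key ...
ensure
  # Release it explicitly; otherwise Postgres releases it when the session ends.
  ActiveRecord::Base.connection.execute("SELECT pg_advisory_unlock(42)")
end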

There are two locking options in Rails:

  • Optimistic Locking
  • Pessimistic Locking

Let's go with Optimistic Locking in this article!

Optimistic Locking

Let's say you have an application with users.
Imagine you and your colleague want to update the name of a user at the same time, in different processes.

To avoid ending up with an inconsistent database state, you can use Optimistic Locking.
We call it Optimistic Locking because we start from the principle that conflicts won't happen often. But if they do happen, remember that handling them is up to you.

Rails will support Optimistic Locking as soon as you add a lock_version column to your table.
For example, following our example with users, if you create a users table:

ActiveRecord::Schema.define do
  create_table :users, force: true do |t|
    t.string :name
    t.integer :lock_version
  end
end

Rails will automatically 'understand' that it has to use Optimistic Locking.

Later, you want to update a user's name while doing several other operations in the same transaction (simulated here by the sleep(20)).

u1 = User.first

ActiveRecord::Base.transaction do
  u1.update(name: 'Lulu')
  sleep(20)
end

But your colleague also wants to update the name and does it in another request at the same time.

u2 = User.first
u2.update(name: 'Lucky')

Since you have added the lock_version column, your colleague will end up with an ActiveRecord::StaleObjectError.
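Handling that conflict is your job. Here is a minimal sketch of one possible strategy, reloading the record and retrying once (the retry policy is my assumption, not something Rails imposes):

begin
  u2.update(name: 'Lucky')
rescue ActiveRecord::StaleObjectError
  # Someone else changed the row since we read it: reload to pick up the
  # fresh attributes and the current lock_version, then retry the write.
  u2.reload
  u2.update(name: 'Lucky')
end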

Locking a row?

It can be nice to know how Rails deals with this. Let's look at the code in activerecord/lib/active_record/locking/optimistic.rb.

For the update mechanism, well, you have a method called... _update_row(attribute_names, attempted_action = "update").

A guard condition checks if locking is enabled and calls super if not: return super unless locking_enabled?.

How do you know if locking is enabled? Simple: we check the lock_optimistically flag and, as we said before, whether there is a lock_version column.

This time the function speaks for itself:

  # Returns true if the +lock_optimistically+ flag is set to true
  # (which it is, by default), and the table includes the
  # +locking_column+ column (defaults to +lock_version+).
  def locking_enabled?
    lock_optimistically && columns_hash[locking_column]
  end
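Both knobs mentioned in that comment can be tuned. A small sketch, on a hypothetical LegacyUser model, of renaming the locking column or turning the feature off:

class LegacyUser < ApplicationRecord
  # Use a column with a different name as the locking column.
  self.locking_column = :row_version

  # Or opt this model out of optimistic locking entirely.
  # self.lock_optimistically = false
end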

That's nice, but we need to find out how it knows that the object is stale.

Down in the rabbit hole

I have been down the rabbit hole for you, and here is the (almost) complete call stack for the update of a row:

_update_row
  _update_record
    update
      exec_update (alias of exec_delete)
        execute_and_clear
          exec_no_cache

Be careful, there are two exec_update methods (at least in the Postgres adapter code).

For the next part of the article, suppose that the isolation level of your transaction/database allows concurrent reads.

What exec_no_cache returns is an instance of PG::Result. This instance exposes cmd_tuples, which contains the number of rows affected by your statement; execute_and_clear yields the PG::Result, and that is how cmd_tuples travels back up the stack.
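Condensed to its essence (a simplified sketch, not the exact Rails source), the Postgres adapter does something like this:

def exec_delete(sql, name = nil, binds = [])
  # Run the statement, then hand back how many rows it touched.
  execute_and_clear(sql, name, binds) { |result| result.cmd_tuples }
end
alias exec_update exec_delete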

Let's say you read this row for the first time: lock_version was 0 for the two concurrent reads. But whenever you save a change to your row, lock_version is incremented, via self[locking_column] += 1 in _update_record.
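To make this concrete, here is roughly the SQL the two concurrent writes produce; column order and quoting are approximations, and the id 1 is assumed from the example above:

u1.update(name: 'Lulu')
# UPDATE "users" SET "name" = 'Lulu', "lock_version" = 1
#   WHERE "users"."id" = 1 AND "users"."lock_version" = 0
# => 1 row matched, cmd_tuples == 1

u2.update(name: 'Lucky')
# UPDATE "users" SET "name" = 'Lucky', "lock_version" = 1
#   WHERE "users"."id" = 1 AND "users"."lock_version" = 0
# => 0 rows matched once the first transaction commits, cmd_tuples == 0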

Since the old lock_version in the WHERE clause no longer matches after the other write, the UPDATE affects zero rows for one of the two operations. That raises an ActiveRecord::StaleObjectError, according to this snippet of code:

if affected_rows != 1
  raise ActiveRecord::StaleObjectError.new(self, attempted_action)
end

There is no SQL locking mechanism here, as there is with Pessimistic Locking, which we will cover in another article. Everything happens at the application level.

Conclusion

Now you are aware of Optimistic Locking and of what happens inside. With Optimistic Locking, you take the bet that conflicts won't happen often.

And if they do happen, you can (hopefully) resolve them efficiently. You have to handle them yourself, though, so be careful when using Optimistic Locking.

What is nice about Optimistic Locking is that it is database-independent. But this can also be dangerous: if you update a row with custom SQL without incrementing the lock_version, you end up with the very problem you were trying to solve with your Optimistic Lock.
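As a hedged illustration of that pitfall (the table and id are assumed from the earlier example), this raw SQL changes the row without touching lock_version, so a concurrent ActiveRecord update based on data read beforehand will still succeed and silently overwrite it, instead of raising a StaleObjectError:

# Bypasses the lock_version bookkeeping entirely.
ActiveRecord::Base.connection.execute(
  "UPDATE users SET name = 'Lucky' WHERE id = 1"
)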
