
Implementing Concurrency Control in MongoDB

MongoDB is my favorite database. As a document-oriented database, it offers great flexibility in how data is modeled and stored. I often use MongoDB together with Mongoose. Today, I’ll share how to implement concurrency control in MongoDB.

An Example of Concurrency Control

Let’s start with a simple example.

Suppose we have a bookmark stored as JSON:

{ "title": "dev.to", "url": "https://dev.to/" }

User A and User B both want to edit this bookmark.
A wants to update the title to "DEV Community",
B wants to update the URL to "https://dev.to/dashboard".

When updates happen sequentially:

  1. A reads { title: "dev.to", url: "https://dev.to/" }
  2. A updates the title and saves { title: "DEV Community", url: "https://dev.to/" }
  3. B reads { title: "DEV Community", url: "https://dev.to/" }
  4. B updates the URL and saves { title: "DEV Community", url: "https://dev.to/dashboard" }

Everything works fine.

When updates happen at (almost) the same time:

  1. A reads { title: "dev.to", url: "https://dev.to/" }
  2. B reads { title: "dev.to", url: "https://dev.to/" }
  3. A updates the title and saves { title: "DEV Community", url: "https://dev.to/" }
  4. B updates the URL and saves { title: "dev.to", url: "https://dev.to/dashboard" }

Problem:
B accidentally overwrites A’s title change because both read data before either update happened.
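
In code, this problematic pattern is a plain read-modify-write that writes the whole document back. A minimal sketch of what B effectively does, assuming a Mongoose Bookmark model like the one used later in this post:

// B reads, edits locally, then writes the entire document back
const item = await Bookmark.findById(_id);
await Bookmark.replaceOne(
  { _id },
  {
    title: item.title, // stale title read before A's update
    url: "https://dev.to/dashboard",
  }
);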

The correct behavior should be:
When B tries to update, the system should detect that the original data has changed, reject B’s update, and prevent data loss.

Mongoose Solution with findOneAndUpdate()

findOneAndUpdate() is atomic, meaning MongoDB performs the match + update in one indivisible step. This allows us to implement optimistic concurrency control by adding a condition in the filter.

I prefer using an updateTime field for this check.
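
For reference, a minimal Bookmark schema carrying such a field might look like this (a sketch; your actual schema setup may differ):

import mongoose from 'mongoose';

const bookmarkSchema = new mongoose.Schema({
  title: String,
  url: String,
  // millisecond timestamp used as the concurrency check value
  updateTime: { type: Number, default: () => new Date().getTime() },
});

export const Bookmark = mongoose.model('Bookmark', bookmarkSchema);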

A's operation:

// read original data
const item = await Bookmark.findById(_id);
// retrieve updateTime value
const updateTime = item.updateTime;
// check updateTime and update
await Bookmark.findOneAndUpdate(
  { _id, updateTime },
  {
    title: "DEV Community", // new title
    updateTime: new Date().getTime(), // new updateTime
  }
);


B's operation:

// read original data
const item = await Bookmark.findById(_id);
// retrieve updateTime value
const updateTime = item.updateTime;
// if updateTime has been modified (by A), this will fail
await Bookmark.findOneAndUpdate(
  { _id, updateTime },
  {
    url: "https://dev.to/dashboard", // new url
    updateTime: new Date().getTime(), // new updateTime
  }
);


Both A and B include updateTime in the filter.
If A updates first, B’s findOneAndUpdate() will fail to match any document because the updateTime has already been modified by A, which means the update is rejected safely.
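
In practice, the caller also needs to know whether the update actually happened. One option (a sketch) is to check the return value of findOneAndUpdate(), which is null when the filter matched nothing; the later examples use .orFail() for the same purpose:

const updated = await Bookmark.findOneAndUpdate(
  { _id, updateTime },
  {
    url: "https://dev.to/dashboard",
    updateTime: new Date().getTime(),
  },
  { new: true } // return the updated document instead of the pre-update one
);
if (!updated) {
  // nothing matched { _id, updateTime }: someone else changed the bookmark first
  throw new Error('Conflict update');
}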

Why findOneAndUpdate() Is Not Enough

The previous example updates only a single document in a single operation.
But what if the operation needs to:

  • update multiple documents
  • write logs
  • ensure cross-collection consistency
  • roll back if any step fails

This is where transactions become essential.

A transaction ensures that a group of operations either all succeed or all fail. If any operation fails, all previous operations are rolled back, so the data is never left in a partially updated state.

Using Transactions in Mongoose

Here’s a helper function for running any code inside a transaction:

import mongoose from 'mongoose';

export async function runInTransaction(job) {
  const session = await mongoose.startSession();
  session.startTransaction();

  try {
    // run the caller's operations, passing the session along
    const result = await job(session);
    await session.commitTransaction();
    return {
      result,
    };
  } catch (error) {
    // any error thrown inside job() aborts the whole transaction
    await session.abortTransaction();
    throw error;
  } finally {
    session.endSession();
  }
}

If anything inside job() throws an error, all operations are rolled back.
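
Note that MongoDB multi-document transactions require the server to run as a replica set or sharded cluster. A minimal usage sketch, assuming the Bookmark model from earlier:

const { result } = await runInTransaction(async (session) => {
  // every operation inside must be given the session,
  // otherwise it runs outside the transaction
  return Bookmark.create(
    [{ title: 'dev.to', url: 'https://dev.to/' }],
    { session }
  );
});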

Combining findOneAndUpdate() + Transactions

Suppose A and B both also write a log entry while updating the bookmark.
We want:

  • bookmark update to succeed only if the updateTime matches
  • log entry to roll back if the update fails

A's operation:

await runInTransaction(async (_session) => {
  // save log
  await LogRecord.create([{ content: 'Update title' }], { session: _session });
  // operation on Bookmark data
  const item = await Bookmark.findById(_id).session(_session);
  const updateTime = item.updateTime;
  await Bookmark.findOneAndUpdate(
    { _id, updateTime },
    {
      title: "DEV Community", // new title
      updateTime: new Date().getTime(), // new updateTime
    },
    { session: _session }
  ).orFail(new Error('Conflict update')); // throw Exception on fail
});

B's operation:

await runInTransaction(async (_session) => {
  // save log
  await LogRecord.create([{ content: 'Update url' }], { session: _session });
  // operation on Bookmark data
  const item = await Bookmark.findById(_id).session(_session);
  const updateTime = item.updateTime;
  await Bookmark.findOneAndUpdate(
    { _id, updateTime },
    {
      url: "https://dev.to/dashboard", // new url
      updateTime: new Date().getTime(), // new updateTime
    },
    { session: _session }
  ).orFail(new Error('Conflict update')); // throw Exception on fail
});

If the bookmark update fails due to a conflict:

  • .orFail() throws an error
  • the transaction catches it
  • all previous operations (including logs) roll back

This prevents inconsistencies between LogRecord and Bookmark.
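
On the caller side, the conflict simply surfaces as the rejected promise from runInTransaction(), so it can be handled like any other error. A sketch (how you respond to the user is up to your application):

try {
  await runInTransaction(async (_session) => {
    // same body as B's operation above
  });
} catch (error) {
  if (error.message === 'Conflict update') {
    // the bookmark changed under us: ask the user to reload and retry
    console.log('Update rejected, please reload the latest bookmark');
  } else {
    throw error;
  }
}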

Summary

Concurrency control ensures that multiple clients can read and write data safely without overwriting each other’s changes.

In this post, we explored a practical approach using:

  • findOneAndUpdate() for optimistic concurrency checks
  • transactions for keeping multi-operation workflows consistent

Together, they help prevent race conditions and maintain correct, consistent and reliable data even when many operations happen at the same time.


I just developed a browser extension called Bookmark Dashboard, a local-first bookmark management tool, and I’d love to share it with fellow bookmark enthusiasts!

Thanks for reading!
