
Franck Pachot for MongoDB


MongoDB High Availability: Replica Set in a Docker Lab

MongoDB guarantees consistent and durable write operations through write-ahead logging, which protects data from instance crashes by flushing the journal to disk upon commit. It also protects against network partitions and storage failures with synchronous replication to a quorum of replicas. Replication and failover are built in and do not require external tools or extensions. To set up a replica set, start three mongod instances as members of the same replica set, using the --replSet option with the same name on each. To initiate the replica set, connect to one of the nodes and specify all members along with their priorities for the Raft-like election of the primary.

To experiment with replication, I run it in a lab with Docker Compose, where each node is a container. However, the network and disk latencies there are much smaller than in real deployments, so I use the Linux utilities tc and strace to inject artificial latency and test the setup in terms of latency, consistency, and resilience.

For this post, I write to the primary and read from each node to explain the write concern and its consequences for latency. Take this as an introduction. The examples don't show all the details, which also depend on read concerns, sharding, and resilience to failures.

Replica Set

I use the following Dockerfile to add some utilities to the MongoDB image:

FROM mongodb/mongodb-community-server
USER root
RUN apt-get update && apt-get install -y iproute2 strace

I start 3 replicas with the following Docker Compose service:

  mongo:
    build: .
    volumes:
      - .:/scripts:ro
    # inject 100ms network latency and 50ms disk sync latency 
    cap_add:
      - NET_ADMIN   # for tc
      - SYS_PTRACE  # for strace
    command: |
     bash -xc '
     tc qdisc add dev eth0 root netem delay 100ms ;
     strace -e inject=fdatasync:delay_enter=50000 -f -Te trace=fdatasync -o /dev/null mongod --bind_ip_all --replSet rs0 --logpath /var/log/mongod
     '
    deploy:
      replicas: 3

The command injects 100ms of network latency with tc qdisc add dev eth0 root netem delay 100ms (which requires the NET_ADMIN capability). The MongoDB server is started under strace (which requires the SYS_PTRACE capability), which injects a delay of 50,000 microseconds (delay_enter=50000), i.e. 50ms, on each call to fdatasync.
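
To sanity-check the injected network latency, you can time a simple round trip from mongosh running in another container (a minimal sketch; the ~100ms expectation assumes the tc rule above is active on the server's eth0):

// Quick check of the injected network latency: time one ping round trip.
// Run from mongosh in another container, so the reply crosses the delayed eth0.
const start = Date.now();
db.adminCommand({ ping: 1 });   // lightweight server round trip
print(`ping round trip: ${Date.now() - start} ms`);   // expect roughly 100ms here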

I declared a service to initiate the replica set:

  init-replica-set:
    build: .
    depends_on:
      mongo:
        condition: service_started
    entrypoint: |
      bash -xc '
        sleep 3 ; 
        mongosh --host mongo --eval "
         rs.initiate( {_id: \"rs0\", members: [
          {_id: 0, priority: 3, host: \"${COMPOSE_PROJECT_NAME}-mongo-1:27017\"},
          {_id: 1, priority: 2, host: \"${COMPOSE_PROJECT_NAME}-mongo-2:27017\"},
          {_id: 2, priority: 1, host: \"${COMPOSE_PROJECT_NAME}-mongo-3:27017\"}]
         });
        ";
        sleep 1
      '
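
Once this init service has completed, you can check the topology from mongosh on any member (a quick verification; member 1 should win the election because it has the highest priority):

// Print each member's name, state, and health as seen by the replica set.
rs.status().members.forEach(m =>
  print(`${m.name}  ${m.stateStr}  health=${m.health}`)
);
// Expected: <project>-mongo-1 PRIMARY, the other two members SECONDARY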

Read-after-Write Application

I use a service to run the client application:

  client:
    build: .
    depends_on:
      init-replica-set:
        condition: service_completed_successfully
    volumes:
      - .:/scripts:ro
    entrypoint: |
      bash -xc '
        mongosh --host mongo -f /scripts/read-and-write.js
      '

The read-and-write.js script connects to each node with a direct connection, labeled 1️⃣, 2️⃣, and 3️⃣, and also connects to the replica set, labeled 🔒, which writes to the primary and can read from secondary nodes:

const connections = {
  "🔒": 'mongodb://rs-mongo-1:27017,rs-mongo-2:27017,rs-mongo-3:27017/test?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=true&w=majority&journal=true',
  "1️⃣": 'mongodb://rs-mongo-1:27017/test?directConnection=true&connectTimeoutMS=900&serverSelectionTimeoutMS=500&socketTimeoutMS=300',
  "2️⃣": 'mongodb://rs-mongo-2:27017/test?directConnection=true&connectTimeoutMS=900&serverSelectionTimeoutMS=500&socketTimeoutMS=300',
  "3️⃣": 'mongodb://rs-mongo-3:27017/test?directConnection=true&connectTimeoutMS=900&serverSelectionTimeoutMS=500&socketTimeoutMS=300',
};

After defining the connection strings, the script attempts to establish separate connections to each MongoDB node in the replica set, as well as a connection using the replica set URI that can send reads to secondaries. It continuously retries connections until at least one node responds and a primary is detected. The script keeps references to all active connections.

Once the environment is ready, the script enters an infinite loop to perform and monitor read and write operations. On each iteration, it first determines the current primary node. It then writes a counter value, a simple incrementing integer, to the primary by updating a document identified by the client's hostname. After the write call, it reads the same document from all connections (primary, secondaries, and the replica set URI), recording the value returned by each and the time the read took.

For every read and write, the script logs details, including the value read or written, the node that handled the operation, the time it took, and whether the results match expectations. It uses checkmarks to indicate success and issues mismatch warnings if a value is stale. If an operation fails (such as when a node is temporarily unavailable), the script automatically attempts to reconnect to that node in the background for future operations.
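
The exact code is in the repository linked below; the core of the loop looks roughly like the following sketch (connection retries and error handling are omitted, and the collection name demo and the id are just examples):

// Simplified sketch of read-and-write.js: open one connection per URI,
// then write a counter to the primary and read it back from every node.
const conns = {};
for (const [label, uri] of Object.entries(connections)) {
  conns[label] = new Mongo(uri).getDB("test");   // the real script retries until each node answers
}

let counter = 0;
const id = "demo-client";   // the real script uses the client hostname

while (true) {
  counter++;
  // write through the replica set connection (goes to the primary)
  let t = Date.now();
  conns["🔒"].demo.updateOne({ _id: id }, { $set: { value: counter } }, { upsert: true });
  print(`Write ${counter} in ${Date.now() - t} ms`);
  // read the same document back from each connection and flag stale values
  for (const [label, dbc] of Object.entries(conns)) {
    t = Date.now();
    const doc = dbc.demo.findOne({ _id: id });
    print(` Read ${doc ? doc.value : "none"} from ${label} ${doc && doc.value === counter ? "✅" : "🚫"} (${Date.now() - t} ms)`);
  }
  sleep(1000);   // mongosh built-in sleep, in milliseconds
}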

I made all this available in the following repo:

https://github.com/FranckPachot/lab-mongodb-replicaset/tree/blog-202507-mongodb-high-availability-replicaset-in-a-docker-lab

Just start it with:


docker compose up --build


Write Concern: majority - wait for network and disk

The replica set connection string specifies w=majority.

Once initialized, each line shows the value that is written to the replica set connection 🔒 and read from each connection 🔒, 1️⃣, 2️⃣, 3️⃣:


Here is a sample output:

client-1            | 2025-07-08T20:19:01.044Z Write 19 to 🔒 ✅(  358ms) Read 19 from 🔒 ✅(  104ms) 19 from 1️⃣ ✅(  105ms) 19 from 2️⃣ ✅(  105ms) 19 from 3️⃣ ✅(  105ms) client e0edde683498
client-1            | 2025-07-08T20:19:02.111Z Write 20 to 🔒 ✅(  357ms) Read 20 from 🔒 ✅(  104ms) 20 from 1️⃣ ✅(  104ms) 20 from 2️⃣ ✅(  105ms) 20 from 3️⃣ ✅(  104ms) client e0edde683498
client-1            | 2025-07-08T20:19:03.179Z Write 21 to 🔒 ✅(  357ms) Read 21 from 🔒 ✅(  103ms) 21 from 1️⃣ ✅(  104ms) 21 from 2️⃣ ✅(  103ms) 21 from 3️⃣ ✅(  104ms) client e0edde683498
client-1            | 2025-07-08T20:19:04.244Z Write 22 to 🔒 ✅(  357ms) Read 22 from 🔒 ✅(  103ms) 22 from 1️⃣ ✅(  103ms) 22 from 2️⃣ ✅(  104ms) 22 from 3️⃣ ✅(  104ms) client e0edde683498
client-1            | 2025-07-08T20:19:05.310Z Write 23 to 🔒 ✅(  357ms) Read 23 from 🔒 ✅(  105ms) 23 from 1️⃣ ✅(  105ms) 23 from 2️⃣ ✅(  104ms) 23 from 3️⃣ ✅(  104ms) client e0edde683498
client-1            | 2025-07-08T20:19:06.377Z Write 24 to 🔒 ✅(  357ms) Read 24 from 🔒 ✅(  105ms) 24 from 1️⃣ ✅(  105ms) 24 from 2️⃣ ✅(  104ms) 24 from 3️⃣ ✅(  104ms) client e0edde683498
client-1            | 2025-07-08T20:19:07.443Z Write 25 to 🔒 ✅(  357ms) Read 25 from 🔒 ✅(  104ms) 25 from 1️⃣ ✅(  104ms) 25 from 2️⃣ ✅(  104ms) 25 from 3️⃣ ✅(  104ms) client e0edde683498
client-1            | 2025-07-08T20:19:08.508Z Write 26 to 🔒 ✅(  357ms) Read 26 from 🔒 ✅(  104ms) 26 from 1️⃣ ✅(  104ms) 26 from 2️⃣ ✅(  104ms) 26 from 3️⃣ ✅(  105ms) client e0edde683498

The program verifies that the read returns the latest write (✅), but keep in mind that this is not guaranteed. The default write concern is 'majority', which is a durability guarantee: it ensures that a write operation is saved to persistent storage, in the journal, on a majority of replicas. However, it does not wait for the write to be applied to the database and become visible to reads. The goal here is to measure the latency involved in acknowledging durability.

With an artificial latency of 100ms on the network and 50ms on the disk, we observe a round trip of about 100ms to a node for both read and write operations.
For writes, the majority write concern adds another 250ms:

  • 100ms for a secondary to pull the write operation (oplog)
  • 50ms to sync the journal to disk on the secondary
  • 100ms for the secondary to update the sync state to the primary

The total duration is 350ms. It also includes syncing to disk on the primary, which occurs in parallel with the replication.

MongoDB replication differs from many databases in that it employs a mechanism similar to Raft to achieve consistency across multiple nodes. However, unlike Raft, changes are pulled by the secondary nodes rather than pushed by the primary. The primary waits for a commit state, indicated by a Hybrid Logical Clock timestamp, reported back by the secondaries.
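
Note that the write concern does not have to come from the connection string; it can also be set per operation. A minimal sketch (the demo collection and filter are just examples):

// Ask this single update to be acknowledged only after a majority of
// voting members have journaled it (same effect as w=majority&journal=true
// in the connection string, scoped to this one call).
db.demo.updateOne(
  { _id: "counter" },
  { $inc: { value: 1 } },
  { upsert: true, writeConcern: { w: "majority", j: true, wtimeout: 5000 } }
);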

Write Concern: 0 - do not wait for durability

Another difference from traditional databases is that the client driver is part of the consensus protocol. To demonstrate this, I changed w=majority to w=0 so that the write call does not wait for any acknowledgment, and restarted the client with five replicas of it:


 docker compose up --scale client=5


The write is faster because it does not wait on the network or the disk, but the values read are stale:

client-5            | 2025-07-08T20:48:50.823Z Write 113 to 🔒 🚫(    1ms) Read 112 from 🔒 🚫(  103ms) 113 from 1️⃣ ✅(  103ms) 112 from 2️⃣ 🚫(  103ms) 112 from 3️⃣ 🚫(  103ms) client e0e3c8b1bafd
client-3            | 2025-07-08T20:48:50.824Z Write 113 to 🔒 🚫(    1ms) Read 112 from 🔒 🚫(  104ms) 113 from 1️⃣ ✅(  104ms) 112 from 2️⃣ 🚫(  104ms) 112 from 3️⃣ 🚫(  104ms) client 787c2676d17e
client-2            | 2025-07-08T20:48:51.459Z Write 114 to 🔒 🚫(    1ms) Read 113 from 🔒 🚫(  105ms) 114 from 1️⃣ ✅(  104ms) 113 from 2️⃣ 🚫(  105ms) 113 from 3️⃣ 🚫(  104ms) client 9fd577504268
client-1            | 2025-07-08T20:48:51.520Z Write 114 to 🔒 🚫(    1ms) Read 113 from 🔒 🚫(  105ms) 114 from 1️⃣ ✅(  105ms) 113 from 2️⃣ 🚫(  104ms) 113 from 3️⃣ 🚫(  104ms) client e0edde683498
client-4            | 2025-07-08T20:48:51.522Z Write 114 to 🔒 🚫(    1ms) Read 113 from 🔒 🚫(  103ms) 114 from 1️⃣ ✅(  103ms) 113 from 2️⃣ 🚫(  103ms) 113 from 3️⃣ 🚫(  103ms) client a6c1eaab69a7
client-5            | 2025-07-08T20:48:51.530Z Write 114 to 🔒 🚫(    0ms) Read 113 from 🔒 🚫(  103ms) 114 from 1️⃣ ✅(  103ms) 113 from 2️⃣ 🚫(  103ms) 113 from 3️⃣ 🚫(  103ms) client e0e3c8b1bafd
client-3            | 2025-07-08T20:48:51.532Z Write 114 to 🔒 🚫(    1ms) Read 113 from 🔒 🚫(  104ms) 114 from 1️⃣ ✅(  103ms) 113 from 2️⃣ 🚫(  103ms) 113 from 3️⃣ 🚫(  103ms) client 787c2676d17e
client-2            | 2025-07-08T20:48:52.168Z Write 115 to 🔒 🚫(    1ms) Read 114 from 🔒 🚫(  103ms) 115 from 1️⃣ ✅(  103ms) 114 from 2️⃣ 🚫(  103ms) 114 from 3️⃣ 🚫(  103ms) client 9fd577504268
client-4            | 2025-07-08T20:48:52.230Z Write 115 to 🔒 🚫(    1ms) Read 114 from 🔒 🚫(  103ms) 115 from 1️⃣ ✅(  103ms) 114 from 2️⃣ 🚫(  103ms) 114 from 3️⃣ 🚫(  103ms) client a6c1eaab69a7
client-1            | 2025-07-08T20:48:52.229Z Write 115 to 🔒 🚫(    1ms) Read 114 from 🔒 🚫(  104ms) 115 from 1️⃣ ✅(  104ms) 114 from 2️⃣ 🚫(  103ms) 114 from 3️⃣ 🚫(  103ms) client e0edde683498
client-5            | 2025-07-08T20:48:52.237Z Write 115 to 🔒 🚫(    2ms) Read 114 from 🔒 🚫(  103ms) 115 from 1️⃣ ✅(  103ms) 114 from 2️⃣ 🚫(  103ms) 114 from 3️⃣ 🚫(  103ms) client e0e3c8b1bafd
client-3            | 2025-07-08T20:48:52.240Z Write 115 to 🔒 🚫(    1ms) Read 114 from 🔒 🚫(  103ms) 115 from 1️⃣ ✅(  103ms) 114 from 2️⃣ 🚫(  103ms) 114 from 3️⃣ 🚫(  103ms) client 787c2676d17e
client-2            | 2025-07-08T20:48:52.876Z Write 116 to 🔒 🚫(    1ms) Read 115 from 🔒 🚫(  103ms) 116 from 1️⃣ ✅(  104ms) 115 from 2️⃣ 🚫(  104ms) 115 from 3️⃣ 🚫(  103ms) client 9fd577504268
client-4            | 2025-07-08T20:48:52.936Z Write 116 to 🔒 🚫(    1ms) Read 115 from 🔒 🚫(  103ms) 116 from 1️⃣ ✅(  104ms) 115 from 2️⃣ 🚫(  103ms) 115 from 3️⃣ 🚫(  103ms) client a6c1eaab69a7

The write returns immediately, succeeding as soon as it is buffered by the driver. While this doesn't guarantee the durability of acknowledged writes, it avoids the cost of any network latency. In scenarios such as IoT ingestion, prioritizing throughput can be crucial, even if it means accepting potential data loss during failures.

Because the write is acknowledged immediately but still has to be replicated and applied on the other nodes, I read stale values (indicated by 🚫), except when the read took longer than the replication and apply; there is no guarantee of that.
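
The same fire-and-forget behavior can be requested per operation rather than in the connection string (a sketch; the demo collection and values are just examples, and the returned acknowledgment carries no durability information):

// w:0 is fire-and-forget: the driver reports success as soon as the write
// is sent, without waiting for any acknowledgment from the server.
db.demo.updateOne(
  { _id: "demo-client" },
  { $set: { value: 113 } },
  { writeConcern: { w: 0 } }
);
// A read issued right after may still return the previous value,
// as the 🚫 lines above show.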

Write Concern: 1, journal: false

I adjusted the write concern to w=1, which means that the system will wait for acknowledgment from the primary node. By default, this acknowledgment ensures that the journal recording the write operation is saved to persistent storage. However, I disabled it by setting journal=false, allowing the write latency to be reduced to just the network time to the primary, which is approximately 100ms:

client-2            | 2025-07-08T20:50:08.756Z Write 10 to 🔒 ✅(  104ms) Read 10 from 🔒 ✅(  105ms) 10 from 1️⃣ ✅(  105ms) 10 from 2️⃣ ✅(  104ms) 10 from 3️⃣ ✅(  104ms) client 9fd577504268
client-4            | 2025-07-08T20:50:08.949Z Write 10 to 🔒 ✅(  103ms) Read 10 from 🔒 ✅(  105ms) 10 from 1️⃣ ✅(  105ms) 10 from 2️⃣ ✅(  106ms) 10 from 3️⃣ ✅(  105ms) client a6c1eaab69a7
client-1            | 2025-07-08T20:50:08.952Z Write 10 to 🔒 ✅(  103ms) Read 10 from 🔒 ✅(  104ms) 10 from 1️⃣ ✅(  104ms) 10 from 2️⃣ ✅(  104ms) 10 from 3️⃣ ✅(  105ms) client e0edde683498
client-3            | 2025-07-08T20:50:08.966Z Write 10 to 🔒 ✅(  103ms) Read 10 from 🔒 ✅(  104ms) 10 from 1️⃣ ✅(  105ms) 10 from 2️⃣ ✅(  104ms) 10 from 3️⃣ ✅(  104ms) client 787c2676d17e
client-5            | 2025-07-08T20:50:08.970Z Write 10 to 🔒 ✅(  103ms) Read 10 from 🔒 ✅(  105ms) 10 from 1️⃣ ✅(  105ms) 10 from 2️⃣ ✅(  105ms) 10 from 3️⃣ ✅(  105ms) client e0e3c8b1bafd
client-2            | 2025-07-08T20:50:09.569Z Write 11 to 🔒 ✅(  103ms) Read 11 from 🔒 ✅(  104ms) 11 from 1️⃣ ✅(  104ms) 11 from 2️⃣ ✅(  104ms) 11 from 3️⃣ ✅(  104ms) client 9fd577504268
client-4            | 2025-07-08T20:50:09.762Z Write 11 to 🔒 ✅(  104ms) Read 10 from 🔒 🚫(  105ms) 11 from 1️⃣ ✅(  106ms) 11 from 2️⃣ ✅(  105ms) 11 from 3️⃣ ✅(  105ms) client a6c1eaab69a7
client-1            | 2025-07-08T20:50:09.765Z Write 11 to 🔒 ✅(  103ms) Read 11 from 🔒 ✅(  107ms) 10 from 1️⃣ 🚫(  104ms) 11 from 2️⃣ ✅(  105ms) 11 from 3️⃣ ✅(  106ms) client e0edde683498
client-3            | 2025-07-08T20:50:09.778Z Write 11 to 🔒 ✅(  105ms) Read 11 from 🔒 ✅(  104ms) 11 from 1️⃣ ✅(  105ms) 11 from 2️⃣ ✅(  105ms) 11 from 3️⃣ ✅(  104ms) client 787c2676d17e
client-5            | 2025-07-08T20:50:09.782Z Write 11 to 🔒 ✅(  103ms) Read 11 from 🔒 ✅(  105ms) 11 from 1️⃣ ✅(  104ms) 11 from 2️⃣ ✅(  105ms) 11 from 3️⃣ ✅(  105ms) client e0e3c8b1bafd
client-2            | 2025-07-08T20:50:10.381Z Write 12 to 🔒 ✅(  103ms) Read 11 from 🔒 🚫(  105ms) 11 from 1️⃣ 🚫(  105ms) 12 from 2️⃣ ✅(  105ms) 12 from 3️⃣ ✅(  105ms) client 9fd577504268
client-1            | 2025-07-08T20:50:10.578Z Write 12 to 🔒 ✅(  104ms) Read 12 from 🔒 ✅(  106ms) 12 from 1️⃣ ✅(  105ms) 12 from 2️⃣ ✅(  105ms) 12 from 3️⃣ ✅(  106ms) client e0edde683498
client-4            | 2025-07-08T20:50:10.579Z Write 12 to 🔒 ✅(  104ms) Read 12 from 🔒 ✅(  106ms) 12 from 1️⃣ ✅(  106ms) 12 from 2️⃣ ✅(  105ms) 12 from 3️⃣ ✅(  105ms) client a6c1eaab69a7
client-5            | 2025-07-08T20:50:10.594Z Write 12 to 🔒 ✅(11751ms) Read 11 from 🔒 🚫(  106ms) 12 from 1️⃣ ✅(  106ms) 11 from 2️⃣ 🚫(  106ms) 11 from 3️⃣ 🚫(  105ms) client e0e3c8b1bafd
client-3            | 2025-07-08T20:50:10.592Z Write 12 to 🔒 ✅(11753ms) Read 11 from 🔒 🚫(  105ms) 12 from 1️⃣ ✅(  105ms) 11 from 2️⃣ 🚫(  105ms) 11 from 3️⃣ 🚫(  105ms) client 787c2676d17e

It is important to understand the consequences of failure. The change is written to the filesystem buffers but may not have been fully committed to disk since fdatasync() is called asynchronously every 100 milliseconds. This means that if the Linux instance crashes, up to 100 milliseconds of acknowledged transactions could be lost. However, if the MongoDB instance fails, there is no data loss, as the filesystem buffers remain intact.
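
Per operation, this corresponds to the following sketch (the demo collection is just an example). The 100 milliseconds mentioned above is the journal commit interval, configurable with storage.journal.commitIntervalMs, which bounds the window of acknowledged-but-unsynced writes:

// w:1, j:false: wait for the primary to apply the write in memory,
// but not for the journal to be flushed to disk.
db.demo.updateOne(
  { _id: "counter" },
  { $inc: { value: 1 } },
  { writeConcern: { w: 1, j: false } }
);
// If the primary's host crashes before the next journal flush,
// this acknowledged write can be lost (a mongod-only crash loses nothing).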

Write Concern: 1, journal: true

Still with w=1, but with the default journal=true, an fdatasync() is run before the write is acknowledged, to guarantee durability on that node. With my injected latency, this adds 50 milliseconds:

client-1            | 2025-07-08T20:52:34.922Z Write 48 to 🔒 ✅(  155ms) Read 48 from 🔒 ✅(  105ms) 48 from 1️⃣ ✅(  105ms) 47 from 2️⃣ 🚫(  105ms) 48 from 3️⃣ ✅(  105ms) client e0edde683498
client-3            | 2025-07-08T20:52:35.223Z Write 50 to 🔒 ✅(  154ms) Read 50 from 🔒 ✅(  104ms) 50 from 1️⃣ ✅(  105ms) 49 from 2️⃣ 🚫(  105ms) 50 from 3️⃣ ✅(  105ms) client 787c2676d17e
client-2            | 2025-07-08T20:52:35.276Z Write 49 to 🔒 ✅(  155ms) Read 49 from 🔒 ✅(  104ms) 49 from 1️⃣ ✅(  105ms) 48 from 2️⃣ 🚫(  105ms) 49 from 3️⃣ ✅(  105ms) client 9fd577504268
client-5            | 2025-07-08T20:52:35.377Z Write 49 to 🔒 ✅(  155ms) Read 49 from 🔒 ✅(  105ms) 49 from 1️⃣ ✅(  104ms) 48 from 2️⃣ 🚫(  105ms) 49 from 3️⃣ ✅(  104ms) client e0e3c8b1bafd
client-4            | 2025-07-08T20:52:35.430Z Write 50 to 🔒 ✅(  154ms) Read 50 from 🔒 ✅(  104ms) 50 from 1️⃣ ✅(  105ms) 49 from 2️⃣ 🚫(  105ms) 50 from 3️⃣ ✅(  105ms) client a6c1eaab69a7
client-1            | 2025-07-08T20:52:35.785Z Write 49 to 🔒 ✅(  154ms) Read 49 from 🔒 ✅(  103ms) 49 from 1️⃣ ✅(  103ms) 48 from 2️⃣ 🚫(  103ms) 49 from 3️⃣ ✅(  103ms) client e0edde683498
client-3            | 2025-07-08T20:52:36.086Z Write 51 to 🔒 ✅(  154ms) Read 51 from 🔒 ✅(  104ms) 51 from 1️⃣ ✅(  105ms) 50 from 2️⃣ 🚫(  104ms) 51 from 3️⃣ ✅(  104ms) client 787c2676d17e
client-2            | 2025-07-08T20:52:36.140Z Write 50 to 🔒 ✅(  154ms) Read 50 from 🔒 ✅(  105ms) 50 from 1️⃣ ✅(  104ms) 49 from 2️⃣ 🚫(  104ms) 50 from 3️⃣ ✅(  105ms) client 9fd577504268
client-5            | 2025-07-08T20:52:36.241Z Write 50 to 🔒 ✅(  155ms) Read 50 from 🔒 ✅(  104ms) 50 from 1️⃣ ✅(  103ms) 49 from 2️⃣ 🚫(  103ms) 50 from 3️⃣ ✅(  104ms) client e0e3c8b1bafd
client-4            | 2025-07-08T20:52:36.294Z Write 51 to 🔒 ✅(  154ms) Read 51 from 🔒 ✅(  102ms) 51 from 1️⃣ ✅(  103ms) 50 from 2️⃣ 🚫(  103ms) 51 from 3️⃣ ✅(  103ms) client a6c1eaab69a7
client-1            | 2025-07-08T20:52:36.645Z Write 50 to 🔒 ✅(  154ms) Read 50 from 🔒 ✅(  103ms) 50 from 1️⃣ ✅(  103ms) 49 from 2️⃣ 🚫(  103ms) 50 from 3️⃣ ✅(  103ms) client e0edde683498
client-3            | 2025-07-08T20:52:36.950Z Write 52 to 🔒 ✅(  154ms) Read 52 from 🔒 ✅(  104ms) 52 from 1️⃣ ✅(  103ms) 51 from 2️⃣ 🚫(  103ms) 52 from 3️⃣ ✅(  104ms) client 787c2676d17e
client-2            | 2025-07-08T20:52:37.003Z Write 51 to 🔒 ✅(  154ms) Read 51 from 🔒 ✅(  105ms) 51 from 1️⃣ ✅(  105ms) 50 from 2️⃣ 🚫(  105ms) 51 from 3️⃣ ✅(  104ms) client 9fd577504268
client-5            | 2025-07-08T20:52:37.103Z Write 51 to 🔒 ✅(  155ms) Read 51 from 🔒 ✅(  103ms) 51 from 1️⃣ ✅(  104ms) 50 from 2️⃣ 🚫(  104ms) 51 from 3️⃣ ✅(  104ms) client e0e3c8b1bafd
client-4            | 2025-07-08T20:52:37.155Z Write 52 to 🔒 ✅(  155ms) Read 52 from 🔒 ✅(  104ms) 52 from 1️⃣ ✅(  104ms) 51 from 2️⃣ 🚫(  104ms) 52 from 3️⃣ ✅(  103ms) client a6c1eaab69a7
client-1            | 2025-07-08T20:52:37.508Z Write 51 to 🔒 ✅(  154ms) Read 51 from 🔒 ✅(  104ms) 51 from 1️⃣ ✅(  104ms) 50 from 2️⃣ 🚫(  104ms) 51 from 3️⃣ ✅(  104ms) client e0edde683498

In summary, MongoDB allows applications to balance performance (lower latency) and durability (resilience to failures) rather than relying on a one-size-fits-all configuration that waits even when the business requirements do not demand it. For any given setup, the choice must consider the business requirements as well as the infrastructure: the resilience of compute and storage services, local or remote storage, and the network latency between nodes. In a lab, injecting network and disk latency helps simulate scenarios that illustrate the consequences of reading from secondary nodes or recovering from a failure.

To fully understand how this works, I recommend reading the documentation on Write Concern and verifying your understanding in a lab. The defaults may vary per driver and version, and the consequences may not be visible without a high load or a failure. In current versions, MongoDB favors data protection, with the write concern defaulting to "majority" and journaling to true (writeConcernMajorityJournalDefault), but if you set w: 1, journaling defaults to false.
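
From mongosh you can check what your deployment actually applies (a quick check; the output shape may differ per server version):

// Cluster-wide default read and write concern (MongoDB 4.4+)
db.adminCommand({ getDefaultRWConcern: 1 });
// Whether w:"majority" also waits for the journal on a majority of voting members
rs.conf().writeConcernMajorityJournalDefault;   // true by default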
