
Takuya Matsuyama

Making PouchDB's initial sync with CouchDB 3x faster

Update (2020/08/07)

The hack below is not the correct way to improve replication performance.
After some research, it turned out that the actual bottleneck was my network topology on EC2.
My cluster nodes were deployed across different regions in order to make them disaster-tolerant.
That created a network bottleneck between nodes, where the RTT was around 166 ms.
I moved the nodes into a single region, but in different availability zones, where the RTT is less than 1 ms.
Now it works very fast!
You don't need to hack PouchDB; check your network performance first.
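To get a feel for why the RTT mattered so much, here is a back-of-envelope sketch. The number of sequential inter-node round trips is a made-up illustrative assumption, not a measured property of CouchDB's internals:

```javascript
// Back-of-envelope: why inter-node RTT dominated the initial sync.
// `roundTrips` is an illustrative assumption, not a profiled value.
function estimateReadLatencyMs(docCount, rttMs, roundTripsPerDoc) {
  return docCount * roundTripsPerDoc * rttMs;
}

// Cross-region nodes: ~166 ms RTT between cluster members
const crossRegionMs = estimateReadLatencyMs(100, 166, 1); // 16600 ms

// Same region, different availability zones: ~1 ms RTT
const sameRegionMs = estimateReadLatencyMs(100, 1, 1); // 100 ms

console.log(crossRegionMs, sameRegionMs);
```

Even with only one sequential inter-node hop per document, the cross-region latency alone is in the same ballpark as the slow sync times I observed, so the network topology plausibly explains the whole problem.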

Problem

I was not happy with the replication performance between PouchDB and CouchDB, especially during the initial sync.
I'm building a note-taking app called Inkdrop, which syncs data between devices using a clustered CouchDB. A user complained about the slow sync speed; I had been aware of it, so I decided to work on it.
There is a related issue on PouchDB's GitHub repository:

But it's inactive, and it seems nobody has solved it yet.

So I tried to find the actual bottleneck in the replication process. It turned out that bulkGet (CouchDB: _bulk_get) takes a very long time: around 6 seconds to fetch 100 documents, which is way slower than allDocs (CouchDB: _all_docs). I suspected this was causing the problem.
I found that PouchDB specifies the revs: true and latest: true parameters when calling the bulkGet API:

It turned out that the response time becomes significantly faster when calling it without those params but with the r: 1 option added instead.
The r=1 parameter means you read data from only one replica node.
It lets the cluster avoid reading the same data from multiple nodes.
Getting revisions with revs: true or latest: true is slow because the database has to walk the document history. But after looking into PouchDB's source code, it appears (if I read it correctly) not to use the _revisions field in the replication process. The latest param is there to avoid a race condition where another client updates the doc while syncing. Since my app uses the "one database per user" pattern, I expect that race condition to be rare.
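As an illustration of what changes on the wire, here is a simplified sketch of how the _bulk_get query string differs with and without those parameters. paramsToStr below is a stand-in helper I wrote for this post, not PouchDB's actual internal function:

```javascript
// Simplified stand-in for PouchDB's internal query-string helper.
function paramsToStr(params) {
  const pairs = Object.keys(params).map(
    (k) => `${encodeURIComponent(k)}=${encodeURIComponent(params[k])}`
  );
  return pairs.length ? '?' + pairs.join('&') : '';
}

// Default replication request: server resolves revision history + latest revs
const before = '_bulk_get' + paramsToStr({ revs: true, latest: true });

// Modified request: skip the revision work, read from a single replica
const after = '_bulk_get' + paramsToStr({ r: 1 });

console.log(before); // _bulk_get?revs=true&latest=true
console.log(after);  // _bulk_get?r=1
```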

In conclusion, I made the sync 3x faster by removing the revs and latest params and adding r: 1 to the internal bulkGet calls in PouchDB, patching the core modules as follows.

In pouchdb-replication/src/getDocs.js#L46:

function createBulkGetOpts(diffs) {
  var requests = [];
  Object.keys(diffs).forEach(function (id) {
    var missingRevs = diffs[id].missing;
    missingRevs.forEach(function (missingRev) {
      requests.push({
        id: id,
        rev: missingRev
      });
    });
  });

  return {
    docs: requests,
    /* DELETE
    revs: true,
    latest: true
    */
  };
}
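For reference, here is what the modified helper produces for a sample diffs object. The diffs shape mirrors what the replicator builds from CouchDB's _revs_diff response; the IDs and revisions are made up for illustration:

```javascript
// createBulkGetOpts as modified above: one {id, rev} request per missing
// revision, and no revs/latest flags in the returned options.
function createBulkGetOpts(diffs) {
  var requests = [];
  Object.keys(diffs).forEach(function (id) {
    var missingRevs = diffs[id].missing;
    missingRevs.forEach(function (missingRev) {
      requests.push({ id: id, rev: missingRev });
    });
  });
  return { docs: requests };
}

// Sample input (doc IDs and revision strings are made up)
const diffs = {
  'note:1': { missing: ['2-abc', '3-def'] },
  'note:2': { missing: ['1-xyz'] }
};

console.log(createBulkGetOpts(diffs));
// docs: three {id, rev} entries, one per missing revision
```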

In pouchdb-adapter-http/src/index.js#L341:

  api.bulkGet = coreAdapterFun('bulkGet', function (opts, callback) {
    var self = this;

    function doBulkGet(cb) {
      var params = {};
      if (opts.revs) {
        params.revs = true;
      }
      if (opts.attachments) {
        /* istanbul ignore next */
        params.attachments = true;
      }
      if (opts.latest) {
        params.latest = true;
      }
      // ADD THIS
      params.r = 1;
      fetchJSON(genDBUrl(host, '_bulk_get' + paramsToStr(params)), {
        method: 'POST',
        body: JSON.stringify({ docs: opts.docs})
      }).then(function (result) {
        if (opts.attachments && opts.binary) {
          result.data.results.forEach(function (res) {
            res.docs.forEach(readAttachmentsAsBlobOrBuffer);
          });
        }
        cb(null, result.data);
      }).catch(cb);
    }

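The parameter-mapping part of the patched doBulkGet can be pulled out and sanity-checked on its own. This is just the option-to-query-param logic from the snippet above, isolated as a pure function:

```javascript
// Option-mapping logic extracted from the patched doBulkGet above.
function buildBulkGetParams(opts) {
  var params = {};
  if (opts.revs) params.revs = true;
  if (opts.attachments) params.attachments = true;
  if (opts.latest) params.latest = true;
  params.r = 1; // always read from a single replica
  return params;
}

// With the createBulkGetOpts change, replication no longer passes
// revs/latest, so only r=1 ends up in the query string:
console.log(buildBulkGetParams({ docs: [] })); // { r: 1 }
```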

Now it takes only around 2 seconds for 100 docs, which is 3x faster than before.
It works fine with my app so far.

Top comments (2)

Arwy Syahputra Siregar • Edited

Glad you made this one and explained it clearly. Let me try it; I've actually been experiencing this issue when replicating from server to local. Replicating from local to server seems faster than going the other way. Maybe the amount of data has an impact too. But yes, the initial sync also runs slowly and depends on the user's network connection, even though I set it up without the live and retry options and retry manually with my own function when the replication gets an error response. Thank you

Dave Stewart

Hey Takuya!

Would love to talk to you about PouchDB + side hustles.

I think we are very similar developers! Can you follow me on Twitter so I can DM you?