How to mock ALL external data sources and completely isolate the codebase while testing

In this blog post, I want to share an idea my friend and I had: isolating an entire backend codebase from all external data sources in order to create tests.

The idea is this: if all external sources (such as databases and third-party APIs) provide the exact same data to your codebase, then your JavaScript code should always behave in the same way. So, to create a test, you make API requests and capture the data coming from those sources while the requests are being processed. Later, when you want to run the test, the system replays the same API request while mocking all the external sources that were captured.

To achieve this, we came up with a system called Pythagora. It's an npm package that captures all the external sources your codebase interacts with, isolates your code, and later restores those sources so it can run your server and test whether everything is working correctly.

Using Pythagora, my friend and I were able to get public repos from 0 to 80% code coverage in just 30 minutes, which we think is pretty amazing!

In this post, I'll share with you how we created Pythagora, the challenges we faced, and how we overcame them. So, let's dive in!

Btw, here’s a demo video showcasing how it works.


Current state of integration tests

In the current state of automated integration testing for web apps, the most common approach is to make an API call to an endpoint and assert on the response data: payload, status code, and so on. However, this approach has a major limitation: it doesn't provide enough information to determine whether everything is working correctly.

For instance, even if the response from the server is correct, there could still be issues with the database or other data sources. To address this, we could create an integration test that directly queries the database and then checks if the database was properly updated. But let’s face it, this can be a lot of work! We would have to create an entire backend API that retrieves the raw data from the database and other sources, just so we can query and assert them.


Mocking problem

The next problem we face is with mocking. For tests to be meaningful, most of them require some kind of server state, such as a user in the database. To create an integration test that works with a server state, we need to restore this state each time we run a single test. For example, with the database state, we need to populate the database before a test, run the test, and then drop the database after the test is completed. This can quickly become a complex and time-consuming task.

Now, imagine having to deal with all the different possible combinations of user states. We would need to export hundreds or even thousands of user states and create code that populates the database with each user state before running the appropriate test. This integration test suite can easily become a full-blown codebase and would obviously require hours and hours of development.

To illustrate this, here’s a super simple example of an integration test that involves setting up and restoring the database state. As you can see, even for the simplest test that asserts very little data, the code can quickly become lengthy and cumbersome.

const request = require("supertest");
const app = require("../app");
const User = require("../models/user");
const mongoose = require("mongoose");

describe("POST /purchase", () => {
  let user;
  beforeAll(async () => {
    await mongoose.connection.dropDatabase();
    const viewedProductIds = ["productId1", "productId2"];
    user = new User({
      name: "testuser",
      viewedProducts: viewedProductIds,
    });
    await user.save();
  });

  test("should purchase a product and update the user", async () => {
    const productId = "productId2";
    const response = await request(app)
      .post("/purchase")
      .send({ productId });

    expect(response.statusCode).toBe(200);
    expect(response.body).toEqual({ message: "Product purchased successfully" });

    const updatedUser = await User.findOne({ name: "testuser" });
    expect(updatedUser.purchasedProducts).toContain(productId);
  });

  afterAll(async () => {
    // close the Mongoose connection so the test runner can exit cleanly
    await mongoose.connection.close();
  });
});

Blueprint for isolation

So, we came up with the idea of creating a Node module that acts as a server wrapper with two states: capturing and testing. In the capturing state, it starts the server and captures all requests coming into the server as well as all requests the server makes to external sources. Later, in the testing state, it mocks all of those external sources to isolate the entire JS codebase.

To achieve this, we needed to implement two main concepts:

  • Capturing data
  • Restoring data

We experimented with several methods to capture all the data coming into the server. Initially, we wanted to record TCP packets at a low level. While this approach worked great for Redis, it became problematic for encrypted data sources, since we would have had to decrypt and re-encrypt the traffic.

I created a TCP proxy that listens on the Redis port, forwards packets when capturing, and returns mocked data when testing. Interestingly, I created this entire proxy in just two hours with the help of ChatGPT. It was quite amazing what it created just from prompting – I might even create another post with the prompts I used to create this.
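For the curious, here's a minimal sketch of that kind of capturing proxy built on Node's built-in net module: it listens on a separate port, forwards traffic to the real Redis instance, and records both directions. recordOutgoing and recordIncoming are hypothetical capture functions, and the real proxy also had to understand the Redis protocol, so treat this as an illustration rather than the actual implementation.

const net = require('net');

const REDIS_PORT = 6379;  // the real Redis instance
const PROXY_PORT = 6380;  // point the app at this port while capturing

net.createServer((clientSocket) => {
    const redisSocket = net.connect(REDIS_PORT, '127.0.0.1');

    // forward and record traffic in both directions
    clientSocket.on('data', (chunk) => {
        recordOutgoing(chunk); // hypothetical capture function
        redisSocket.write(chunk);
    });
    redisSocket.on('data', (chunk) => {
        recordIncoming(chunk); // hypothetical capture function
        clientSocket.write(chunk);
    });

    clientSocket.on('close', () => redisSocket.end());
    redisSocket.on('close', () => clientSocket.end());
}).listen(PROXY_PORT);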

Anyway, this proxy couldn’t work for encrypted data sources, which are basically all sources other than Redis. So, we looked for another way to intercept all incoming data and mock it.

Finally, we settled on module patching.


Module patching

Module patching is a powerful technique that overrides a standard Node.js module (such as the “http” module) so that whenever the code calls let http = require("http"), our patched module is returned instead. In a patch, you can execute whatever you need to execute and then call the original module’s function that was supposed to be called. Here’s how you patch a module in Node.js (note that this has to run before the rest of the codebase requires the module, since code that has already loaded it keeps its reference to the original):

require.cache[require.resolve('http')] = {
    exports: require('./src/patches/http.js')
};

Ok, now let’s create a patch for the “http” module that intercepts all HTTP requests processed by the server. First, we create a patch file in which we patch the createServer function. When the codebase requires the “http” module and calls createServer, our function will be called first, while the caller still receives exactly the same result as if it had called the original “http” module directly.

const originalHttp = require('http');

const originalCreateServer = originalHttp.createServer;
// we replace the "createServer" function with our own function which will call originalCreateServer.apply(this, arguments) so that the HTTP server is created
originalHttp.createServer = function (app) {
    // here, we can do anything we need for our app and finally we just return the call to the original createServer function with the same context and arguments
    return originalCreateServer.apply(this, arguments);
}


module.exports = originalHttp;

As you can see, this technique is quite simple but super powerful. It basically consists of 3 parts:

  1. Saving the original method we want to patch (e.g., createServer)
  2. Replacing the method with our own function
  3. Calling the original method inside our function with the same arguments and context, so that it does exactly what it's supposed to do, as if our code didn't exist (via originalMethod.apply(this, arguments))

After playing around with this technique, we realized we could use it for every external data source. We patched “http” for third-party APIs, “mongodb” for intercepting mongo queries, “jwt” for authentication, etc.
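To give a feel for what this looks like beyond “http”, here's a hedged sketch of a patch for a single method of the mongodb driver, following the same three-part recipe. logCapture is a hypothetical capture function, and the real patch covers far more of the driver's surface:

const mongodb = require('mongodb');

const originalInsertOne = mongodb.Collection.prototype.insertOne;

// wrap insertOne so every insert is captured before the real driver handles it
mongodb.Collection.prototype.insertOne = function (...args) {
    logCapture('insertOne', this.collectionName, args[0]); // hypothetical capture function
    return originalInsertOne.apply(this, args);
};

module.exports = mongodb;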

At this point, we could technically capture the data from external sources and inject it during testing. However, when we inject the data, we need to know what data to inject. For example, if two requests arrive in parallel, we need to know which request is triggering which external source.


Tracking Async Context

We were faced with the challenge of tracking which API request triggered a specific Mongo query. After some research, we discovered the powerful Async Context and Async Local Storage features in Node.js, which can track an application’s context across asynchronous calls. To use them, you instantiate an AsyncLocalStorage and run your code inside a context with an ID, which can be a string, a number, or a JSON object. Whenever you call asyncLocalStorage.getStore() within that context, you get back the context ID you set.

import { AsyncLocalStorage } from 'node:async_hooks';

const asyncLocalStorage = new AsyncLocalStorage();

const contextId = { requestId: 1 }; // a string, a number, or an object

asyncLocalStorage.run(contextId, async () => {
    asyncLocalStorage.getStore(); // returns the contextId
    await callSomeOtherFunction(); // getStore() also returns contextId in here
});
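In Pythagora's case, each incoming request gets its own context so that a patched module can later ask which request it's serving. Here's a minimal sketch of that idea; wrapRequestHandler and currentRequestId are illustrative names rather than Pythagora's actual API:

const { AsyncLocalStorage } = require('node:async_hooks');
const { randomUUID } = require('node:crypto');

const asyncLocalStorage = new AsyncLocalStorage();

// run every incoming request inside its own async context, so any
// patched module (mongodb, http, jwt) can find out which request it serves
function wrapRequestHandler(originalHandler) {
    return function (req, res) {
        const contextId = { requestId: randomUUID() };
        return asyncLocalStorage.run(contextId, () => originalHandler(req, res));
    };
}

// called from inside a patched module, e.g. the mongodb patch
function currentRequestId() {
    const store = asyncLocalStorage.getStore();
    return store && store.requestId;
}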

We were amazed at how well this technique worked. Many thanks to Trevor Norris for helping us figure out whether Async Local Storage was the right choice; he actually created this feature in Node.js core.

With Async Context and Async Local Storage, we had everything we needed for our architecture:

  1. We patched the http and https modules to intercept incoming requests to the server and capture all data.
  2. We patched the mongodb module to intercept all Mongo queries. Here we went one step further: when you run a test, Pythagora doesn’t just mock the responses from Mongo. Instead, it restores the captured documents into a temporary pythagoraDb, so that when an API request updates the database, Pythagora can check the actual database to verify it was updated correctly.
  3. We patched the jwt module to ensure that authentication works even in the future when running tests.
  4. To capture external data, we ran the server wrapped with our module and made API requests to the server. Our module captured the API request and all data during the request processing.
  5. Finally, when we wanted to test our codebase, we ran the testing command, which again started the server wrapped with our module and replayed the captured API requests. While each request was being processed, our module mocked the data from external sources (sketched below) and expected the response from the server to be exactly the same as it was during capturing.
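To make step 5 a bit more concrete, here's a hedged sketch of how one patched driver method could branch between the two modes. PYTHAGORA_MODE, recordQuery, capturedResultFor, and currentRequestId are assumptions for illustration, not Pythagora's actual internals:

const mongodb = require('mongodb');

const originalFindOne = mongodb.Collection.prototype.findOne;

mongodb.Collection.prototype.findOne = async function (...args) {
    if (process.env.PYTHAGORA_MODE === 'capture') {
        // capture mode: run the real query and record its result
        const result = await originalFindOne.apply(this, args);
        recordQuery(currentRequestId(), 'findOne', this.collectionName, args, result); // hypothetical
        return result;
    }
    // test mode: return the data captured for this request
    // instead of hitting the real database
    return capturedResultFor(currentRequestId(), 'findOne', this.collectionName, args); // hypothetical
};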

Final product

Once we had our MVP, we were surprised at how well it worked. So, we decided to turn it into an npm package and named it Pythagora!

If you want to try it out and see how it works, simply install it with npm i pythagora and run capturing and testing commands. It should work well for MEAN/MERN stacks.

After open-sourcing Pythagora and sharing it on Reddit (link to discussion), we received positive feedback, so we decided to continue working on it.


Conclusion

To sum up, in this post we explored how to completely isolate a server from external data sources, allowing us to capture integration tests with all their mocks and rerun them from any environment. The techniques we discussed, such as module patching and async context, open up a lot of possibilities for testing and development.

I hope you enjoyed reading this post and that you found the tech as cool as we did. If you have any questions or feedback, please let us know. And if you read this far and would like to support us, please consider starring the Pythagora GitHub repository – it would mean a lot to us.

Also, I created this tech deep dive video if you want to dig deeper into the details of Pythagora's implementation.
