Tomasz Wegrzanowski

Electron Adventures: Episode 20: IPC Benchmark

How fast is IPC communication between Electron frontend and backend? Let's do a quick benchmark.

Backend

The backend will simply have one channel, increment, which returns the number passed to it plus one:

let { app, BrowserWindow, ipcMain } = require("electron")

// reply to "increment" requests from the renderer with the number plus one
ipcMain.handle("increment", (event, x) => (x+1))

function createWindow() {
  let win = new BrowserWindow({
    webPreferences: {
      nodeIntegration: true,
      contextIsolation: false,
    }
  })
  win.loadFile("index.html")
}

app.on("ready", createWindow)

app.on("window-all-closed", () => {
  app.quit()
})
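From the renderer, this handler is called with ipcRenderer.invoke, which returns a Promise resolving to whatever the handler returned. Here's a quick sketch of a single call (the full benchmark script is below):

let { ipcRenderer } = require("electron")

// invoke sends the arguments to the main process and resolves with the reply
ipcRenderer.invoke("increment", 41).then((result) => {
  console.log(result) // 42
})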

Benchmark

The index.html is just a placeholder for the results, so let's skip it for now; a minimal sketch of it follows after the script. Here's the benchmark app.js:

let { ipcRenderer } = require("electron")

let localIncrement = (x) => (x+1)

let benchmarkLocal = async () => {
  let startTime = new Date()
  let x = 0
  while (x < 100_000_000) {
    x = localIncrement(x)
  }
  let endTime = new Date()
  return endTime - startTime
}

let benchmarkIPC = async () => {
  let startTime = new Date()
  let x = 0
  while (x < 10_000) {
    x = await ipcRenderer.invoke("increment", x)
  }
  let endTime = new Date()
  return endTime - startTime
}

let runBenchmark = async () => {
  let results = document.querySelector("#results")
  results.textContent = `
    10k IPC calls took: ${await benchmarkIPC()}ms
    100M local calls took: ${await benchmarkLocal()}ms
  `
}

runBenchmark()
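For reference, the skipped index.html is just a shell that provides the #results element and loads app.js. A minimal sketch (reconstructed from what app.js expects, not copied verbatim from the episode's repository) could look like this:

<!DOCTYPE html>
<html>
  <body>
    <!-- placeholder the benchmark writes its results into -->
    <pre id="results">Running benchmark...</pre>
    <script src="app.js"></script>
  </body>
</html>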

Results

And here are the benchmark results:

[Screenshot: Episode 20 benchmark results]

As you can see, calling another process is much slower than calling a local function. For a trivial function, a local call took about 1.7ns, while an IPC call took about 80,000ns. So you should definitely consider which code goes into which process, and whether you can achieve the same result with fewer round trips.
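For example, if the frontend needed many increments, a hypothetical batched channel (my sketch, not something from the episode's code) could do all the work in a single round trip:

// main process: a hypothetical "incrementMany" channel, one round trip for n increments
ipcMain.handle("incrementMany", (event, x, n) => x + n)

// renderer: a single invoke replaces 10,000 separate IPC calls
ipcRenderer.invoke("incrementMany", 0, 10_000).then((result) => {
  console.log(result) // 10000
})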

On the other hand, IPC was still very fast! If your UI runs at 60 frames per second, you have about 16ms per frame, so 0.08ms of latency per IPC call is plenty fast.

By comparison, HTTP calls over the internet take something like 100ms, and even a localhost HTTP server would likely take more than 1ms.

This isn't meant as a "serious" benchmark, but it should give you some ballpark figures for what kind of latencies to expect from the different approaches.

As usual, all the code for the episode is here.

