
Idan Elhalwani for krud.dev

Spring Boot + Electron, a case study

Background

We recently finished writing our desktop app for Spring Actuator, Ostara. Initially, we decided to rely on Electron's main process and write our "backend" in Node. We quickly hit roadblock after roadblock, until we decided to ditch that backend entirely, rewrite it in Kotlin and Spring Boot on the JVM, and have the renderer communicate with it over REST.

We did this for a multitude of reasons:

  • First and foremost, we wanted the versatility of being able to run the project on and off Electron, and possibly with a remote backend. Our initial Node implementation relied heavily on IPC between the main and renderer processes, and we knew that, for the most part, it had to go.
  • It's a mature and robust ecosystem, and especially one where we knew we would be able to make rapid progress.
  • We would be able to leverage our own JVM libraries, namely ShapeShift and a yet-unreleased framework we use for CRUD operations with traditional ORMs and ODMs.

Ecosystem Overview

Ostara is based on electron-react-boilerplate and uses electron-builder to package the application.

The daemon uses JDK 17 with Kotlin, Spring Boot, and Gradle.

Project Daemon

Before we set out to research and implement this daemon for Ostara, we laid out a list of requirements:

  1. A user should not need to install Java.
  2. No significant downgrades to the developer experience.
  3. The daemon process should be robust and resilient.
  4. Support two-way communication in a manner that is similar to IPC, but doesn't tie us to Electron.
  5. The startup time for the daemon should be under 20 seconds at all times.

Let's break these down.

A user should not need to install Java

The rationale here is simple, right? The user installed your app, they don't really care about your app's dependencies, and they certainly don't want to be given (either automatically or otherwise) a shopping list before they can use it.

Now, this may sound like a silly argument to make when you realize Ostara's primary target audience is JVM developers, who certainly have a JDK on their workstation. But which JDK? Spring Boot developers come in all shapes and sizes; some will still run JDK 8 on their workstation while others will be on the cutting edge.

Overall, the goal of this requirement was to reduce our footprint and (visible) overhead as much as possible while allowing maximum versatility.

Now, right off the bat, let me just say that an alternative to this process exists in the form of jlink. We opted not to use it, so I will not elaborate on it further.

The app is packaged four times: once for Windows (x64), once for Linux (x64), and twice for Mac (ARM and x64). Each one needs its own JDK.

The goal was to download the respective JDK for each operating system and architecture so that it would eventually find its way into the finished package.

An additional goal was to avoid bloating the repository with archives we could easily download on the spot, and to avoid being forced onto Git LFS, so committing the JDKs was not an option.

To achieve the procurement part, we wrote jdkutil, a pure Python tool with no external dependencies.

The tool is very simple in its operation. Given a list of categorized URLs (in our case, this CSV), it downloads them, verifies that the checksums match, and unzips them into a directory tree matching the pattern {platform}/{arch}/jdk.

As a visual example, for the four variants above we will receive the following tree:

```
jdks/
├── linux
│   └── x64
│       └── jdk
├── mac
│   ├── arm64
│   │   └── jdk
│   └── x64
│       └── jdk
└── windows
    └── x64
        └── jdk
```

Once the JDKs were ready in this format, all we had to do was add the following extraResources block to the build section of our package.json:

```json
{
  "build": {
    "extraResources": [
      {
        "from": "jdks/${os}/${arch}",
        "filter": [
          "**/*"
        ]
      }
    ]
  }
}
```

When electron-builder runs, it populates ${os} and ${arch} according to the operating system and architecture being built, so at the end of this process every package we create contains an OS- and arch-specific folder called jdk, with the correct JDK for the designated platform.
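At runtime, the main process still needs to locate that bundled JDK. As a rough sketch of what such a lookup can look like (the folder layout and helper name here are assumptions for illustration, not Ostara's actual code):

```typescript
// main/daemonPaths.ts, a hypothetical helper (not Ostara's actual code)
import path from 'path';

// electron-builder copies extraResources into the app's resources directory,
// so in a packaged app the bundled JDK should sit under process.resourcesPath.
export function getBundledJavaPath(): string {
  const jdkRoot = path.join(process.resourcesPath, 'jdk');
  // Note: some macOS JDK archives nest the real home under Contents/Home,
  // which would need an extra path segment here.
  return process.platform === 'win32'
    ? path.join(jdkRoot, 'bin', 'java.exe')
    : path.join(jdkRoot, 'bin', 'java');
}
```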

No significant downgrades to the developer experience

While a developer is expected to have JDK 17 installed, someone who is only working on the Electron side should have no additional steps to perform.

Setup

A "frontend" developer shouldn't need to setup a working environment to work on the daemon, but they will still need to be able to run the current development version and build it from scratch.

Likewise, a "backend" developer should be able to run their daemon from their IDE of choice and be able to hook it up to the process.

Finally, in a packaged state we will have a jar as well as a custom JDK that we will need to use.

This led us to split the way we start the daemon in two.

The first is the "Packaged" approach, where Electron already knows the location of a prebuilt JAR and simply runs the process.

The second is the "Remote" approach. In this approach, Electron doesn't actually start the process, but simply receives a predetermined port and address (Usually localhost) and creates a connection with the daemon over HTTP.

The remote approach covers both development use cases. In order to achieve this, we split our start command in two:

    "start": "concurrently \"npm run start:daemon\"  \"ts-node ./.erb/scripts/check-port-in-use.js && npm run start:renderer\"",
    "start:thin": "ts-node ./.erb/scripts/check-port-in-use.js && npm run start:renderer"
}

A "frontend" developer in this case would run start, which would then in turn compile and run the daemon locally. Since the app isn't packaged, the app will automatically determine that it needs to use a Remote daemon with the default host and port.

A "backend" developer would run start:thin which essentially does everything except start the daemon, with the expectation that the daemon is already running. Like the previous example, the app will automatically determine that it needs to use a Remote daemon with the default host and port.

Implementing health checks

Since the daemon, like any process, could crash or freeze, the next step was to implement health checks. We opted for a simple polling HTTP health check from the Electron main process to the daemon.

For this process we created 3 events in the Electron main process:

  • daemon-ready
  • daemon-healthy
  • daemon-unhealthy

The flow created around these events is simple.

Upon app start, the user is sent to the splash screen. Once the daemon-ready event is fired, the splash screen is closed and replaced by the main screen. If at any point during this time we receive the daemon-unhealthy event, the user's screen is replaced with the following:

(Screenshot: example of a crashed daemon)

If, during this time, daemon-healthy is fired, then the screen returns to normal.
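A stripped-down sketch of that polling loop might look like the following; the health endpoint, interval, and event plumbing here are illustrative assumptions rather than the actual implementation.

```typescript
// main/daemonHealth.ts, an illustrative sketch (endpoint and interval are assumed)
import { EventEmitter } from 'events';

export const daemonEvents = new EventEmitter();

export function startHealthCheck(address: string, intervalMs = 3000): void {
  let ready = false;
  let healthy = false;

  setInterval(async () => {
    try {
      // Spring Boot Actuator exposes /actuator/health by default; we assume the
      // daemon does the same. Requires a runtime with global fetch (Node 18+).
      const response = await fetch(`${address}/actuator/health`);
      if (!response.ok) throw new Error(`status ${response.status}`);
      if (!ready) {
        ready = true;
        daemonEvents.emit('daemon-ready');
      }
      if (!healthy) {
        healthy = true;
        daemonEvents.emit('daemon-healthy');
      }
    } catch {
      if (healthy) {
        healthy = false;
        daemonEvents.emit('daemon-unhealthy');
      }
    }
  }, intervalMs);
}
```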

Support two-way communication in a manner that is similar to IPC, but doesn't tie us to Electron

To achieve two-way communication between the Electron renderer and the daemon, we opted to use WebSockets with the STOMP protocol. This approach decoupled the communication between the renderer and the daemon from the Electron framework, allowing for greater flexibility in the future and the ability to break away from Electron entirely if needed.
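On the renderer side, a STOMP connection over a plain WebSocket can be set up along these lines (this sketch uses @stomp/stompjs; the endpoint URL and topic name are assumptions, not Ostara's actual ones):

```typescript
// renderer/stompClient.ts, a sketch using @stomp/stompjs; URL and topic are assumed
import { Client } from '@stomp/stompjs';

const client = new Client({
  brokerURL: 'ws://localhost:12222/ws', // assumed STOMP endpoint exposed by the daemon
  reconnectDelay: 5000,                 // reconnect automatically if the daemon restarts
});

client.onConnect = () => {
  // Server-pushed updates replace what IPC events from the main process used to carry.
  client.subscribe('/topic/instance-health', (message) => {
    console.log('Instance health update', JSON.parse(message.body));
  });
};

client.activate();
```

Because the channel is just STOMP over a WebSocket, nothing about it is Electron-specific: the same client code works whether the daemon is local or remote.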

Conclusion

Despite the unorthodox choice, we successfully developed a robust and resilient daemon for our Electron app using Spring Boot, all while addressing our primary goals.

By doing so, we have created a flexible and scalable app that can run on and off Electron with the potential for remote backend functionality. Additionally, we have opened the door for further expansion and improvements in the future, should the need arise.
