Down the Rabbit Hole Debugging Node.js Cipher Support

Ulisses Albuquerque ・ 7 min read

TL;DR: While most documentation on node.js and OpenSSL ciphers seems to indicate that cryptographic algorithms are implemented in userland by OpenSSL, your Linux kernel version might affect the availability of some specific ciphers.

Recently, while testing some code that leverages more recent cryptographic ciphers, we discovered that node.js support for those ciphers depends on the node.js version rather than relying entirely on the underlying OpenSSL support.

With node.js 8.x this is what we get:

$ node -v

$ node -e 'console.log(JSON.stringify(require("crypto").getCiphers()))'

$ node -e 'console.log(require("crypto").getCiphers().length)'

However, when running the same code against node.js 10.x this is what we get:

$ node -v

$ node -e 'console.log(JSON.stringify(require("crypto").getCiphers()))'

$ node -e 'console.log(require("crypto").getCiphers().length)'

Because we were writing code on our local systems under node.js 10.x, we were getting adequate coverage from our unit tests. However, once we started running the tests in our CI environment, we got some errors. It turns out our CI environment does not have node.js 10.x available, only node.js 8.x.

Leveraging nodenv, we were able to run our code under node.js 8.x and identified the discrepancy shown above. We added some logic to our tests to skip those that touched node.js 10.x-specific ciphers. That made our tests pass in the CI environment, but the subsequent Sonarqube quality gate, which enforces test coverage, now failed -- skipping unavailable ciphers affected our coverage. Without a later version of node.js to use for testing in CI, we needed to change the way the tests were run to ensure all code was tested adequately.

Leveraging Docker

This is a somewhat common problem -- how to keep test conditions as consistent as possible so you do not run into errors caused by environmental differences. The solution is also pretty obvious -- we decided to use Docker images built on top of the official node base images. Our Dockerfile was quite simple:

ARG base_image
FROM ${base_image}

WORKDIR /opt/my-app-path
COPY . /opt/my-app-path
RUN npm install

CMD [ "npm", "test" ]

While there is definitely room for improvement (like using a non-root user, optimising for layer caching, and more), it solves the key problem for us -- we can now build different versions of the image based on different versions of node.js by providing the base_image argument, with all other libraries and binaries being the same across versions:

$ docker build --build-arg base_image=node:8.16.0-stretch-slim \
  -t my-app:8.16.0-stretch-slim-latest .

$ docker build --build-arg base_image=node:10.16.0-stretch-slim \
  -t my-app:10.16.0-stretch-slim-latest .
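For reference, the improvements mentioned above (non-root user, layer caching) could look like this variant of the Dockerfile -- a sketch, not our actual file; the `node` user ships with the official node images:

```dockerfile
ARG base_image
FROM ${base_image}

WORKDIR /opt/my-app-path

# Copy the manifests first so the `npm install` layer stays cached
# until the dependencies actually change.
COPY package*.json /opt/my-app-path/
RUN npm install

# Copy the rest of the sources and drop root privileges.
COPY . /opt/my-app-path
USER node

CMD [ "npm", "test" ]
```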

There were some additional hoops to jump through -- because the tests were now being executed inside a Docker container rather than directly on the build host, we needed to mount an external path when running the tests and generate the results in a format our CI could parse.

$ docker run --rm -v $(pwd)/test-output:/opt/my-app-path/test-output \
  my-app:8.16.0-stretch-slim-latest
We created a shell script which built test images for all the supported versions of node (8.x, 10.x, and 12.x) and confirmed the correct ciphers were being skipped for version 8.x, but correctly used when running against 10.x and 12.x. We also stored test results in JSON files which included the version information alongside the test results, which could then be fed into plugins for our CI tool so we could get per-node-version test results. Everything looked good.
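That script looked roughly like the loop below (the version numbers are illustrative, not our exact list; `DOCKER` defaults to a dry run that just prints the commands -- set `DOCKER=docker` to perform real builds):

```shell
# Dry-run sketch of the per-version build loop; set DOCKER=docker for real builds.
DOCKER="${DOCKER:-echo docker}"

for version in 8.16.0 10.16.0 12.4.0; do
  $DOCKER build \
    --build-arg base_image="node:${version}-stretch-slim" \
    -t "my-app:${version}-stretch-slim-latest" .
done
```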

After committing the code, however, Sonarqube was still complaining about test coverage, even on later versions of node.js. Clearly the test-skip criteria were not behaving as expected in the CI environment -- something other than a node.js 10.x-specific cipher was failing.

[Screenshot: Sonarqube coverage results]

Digging Deeper

After adding some debugging code to the tests, including capturing the cipher lists from both node.js and OpenSSL, we were able to pinpoint which algorithm was not available in the CI environment: aes-128-cbc-hmac-sha256, which was being used with pbkdf2. Confusingly, though, when checking the node.js cipher list inside the Docker image on our local systems, aes-128-cbc-hmac-sha256 was indeed included:

$ node -e 'console.log(JSON.stringify(require("crypto").getCiphers().filter(c => c.match(/aes-128-cbc/))))'

OpenSSL also indicated it was supported:

$ openssl list -cipher-algorithms | grep -i aes-128 
aes128 => AES-128-CBC

Since Docker images are meant to abstract away environment differences, we were surprised to get different results when running the same commands in our CI environment -- aes-128-cbc-hmac-sha256 was indeed missing when our tests ran on the build agents.

When running containers, unless the user explicitly exports host resources (like filesystem paths or ports), the only component shared between a Docker host and a container is the Linux kernel. That should not affect the availability of ciphers, as OpenSSL implements all of its algorithms in userland code in the library... or does it?
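This kernel sharing is easy to verify: `uname -r` reports the same kernel release whether it runs on the host or inside a container (the image name below is illustrative):

```shell
# The kernel release seen by any process, containerised or not, is the host's.
uname -r
# Running the same command inside a container returns the same value:
#   docker run --rm node:10.16.0-stretch-slim uname -r
```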

That's when we came across the changelog for OpenSSL 1.1.0l, which includes the following tidbit:

  *) Added the AFALG engine. This is an async capable engine which is able to
     offload work to the Linux kernel. In this initial version it only supports
     AES128-CBC. The kernel must be version 4.1.0 or greater.
     [Catriona Lucey]

So it turns out the Linux kernel version can indeed impact the availability of ciphers -- or, more specifically, of aes-128-cbc-hmac-sha256. That said, the AFALG engine should be offered as an optimised implementation of the algorithm, not as the only one.

For now, we are continuing our investigation to determine whether this is expected behaviour for OpenSSL under Linux when using a pre-4.1.0 kernel.
