My name is Sergey and I’m the author of dclint, a CLI tool for linting and formatting Docker Compose files.
In this article, I’ll show how to turn a Node.js CLI tool into a flexible utility that:
- Works as a standalone binary, so it doesn’t require Node.js to be installed.
- Supports multiple architectures (arm64/amd64) and operating systems (Alpine/Ubuntu).
- Can be integrated into CI/CD pipelines using Docker images.
We’ll cover the key steps: using Node.js Single Executable Applications (SEA), setting up Rollup for bundling, building optimized Docker images, and automating the publishing process with GitHub Actions.
A Bit of Context
Dclint is written in TypeScript because it’s the language I’m most comfortable with, and the usage model I initially had in mind was quite simple:
Since we’re dealing with linting Docker Compose files, Docker is already installed in the target environment. So distributing the tool as a Docker image solves the question of which language it’s written in, as Docker becomes the only dependency. And for Node.js projects, users can also run it via npx.
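For example, assuming the published package name matches the tool (the file path here is illustrative):
npx dclint docker-compose.yml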
However (and this is the beauty of open source), one of the users suggested another approach:
"In my case, we collect tools into specialized images that we build specifically with collapsed layers so that our CI/CD runners do not need to store many layers and can easily cache the intended tools with the least amount of image size."
— Ádám Liszkai in GitHub Discussion
And this got me thinking about creating an executable version of my tool that doesn’t depend on Node.js at all.
So my goals were:
- A clear and straightforward build process.
- A binary as small as possible.
- Compatibility with at least Ubuntu and Alpine.
- Support for both arm64 and amd64 architectures.
What Options Do I Have
There are several tools for creating standalone binaries:
- https://github.com/vercel/pkg (in public archive since 2024)
- https://github.com/nexe/nexe
Both tools come with good documentation and usage examples. However, in my case, not everything worked as I wanted.
However, Node.js itself relatively recently introduced its own API for creating Single Executable Applications (as of Node.js 21).
Currently, this feature is at stage 1.1, meaning it’s “Experimental. Active development.” But I enjoy exploring new approaches, so I decided to give it a try.
Up next, I’ll explain how to set it up. If you prefer to dive straight into the code, check out the repository, and don’t forget to leave a star if you like the project!
Single Executable Applications API
In general, this is a Node.js API that allows you to package your application into a single executable file.
This feature allows the distribution of a Node.js application conveniently to a system that does not have Node.js installed.
The single executable application feature currently only supports running a single embedded script using the CommonJS module system.
Node.js Documentation
The documentation is great and provides a step-by-step guide on how to create an executable file.
To simplify the process, I created a shell script named generate-sea.sh. This script makes it easier to manage and run the necessary commands in different environments.
Here’s the script:
#!/bin/sh
# Check that the path to the output binary is passed as an argument
if [ -z "$1" ]; then
  echo "Usage: $0 <path_to_generation_file>"
  exit 1
fi
GENERATION_PATH="$1"
# Generate binary
rm -rf "$GENERATION_PATH" && rm -rf sea-prep.blob && \
mkdir -p "$(dirname "$GENERATION_PATH")" && \
node --experimental-sea-config sea-config.json && \
cp "$(command -v node)" "$GENERATION_PATH" && \
npx -y postject "$GENERATION_PATH" NODE_SEA_BLOB sea-prep.blob --sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2
To generate the executable binary, simply run the script and specify the output path, for example:
./scripts/generate-sea.sh ./bin/dclint
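The script expects a sea-config.json file in the project root, which tells Node.js which script to embed and where to write the blob. A minimal example following the format from the Node.js documentation (the main path here assumes the bundle produced in the next section):
{
  "main": "pkg/dclint.cjs",
  "output": "sea-prep.blob",
  "disableExperimentalSEAWarning": true
}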
As mentioned in the documentation, SEA only works with a single embedded script using the CommonJS module system. Therefore, to make this work, you’ll need a bundler to compile your project into a single CommonJS file, including all dependencies from node_modules.
Rollup
I chose Rollup as the bundler for this project. When compiling code into a single file, it’s essential for the bundler to support tree-shaking (removing unused code). Rollup has this functionality enabled by default.
Rollup is a module bundler for JavaScript which compiles small pieces of code into something larger and more complex, such as a library or application.
Rollup Documentation
To achieve the desired result, I added the following configuration to Rollup:
export default {
...baseConfig('pkg', false, false), // Import a shared base config
input: 'src/cli/cli.ts',
output: {
file: 'pkg/dclint.cjs',
format: 'cjs',
inlineDynamicImports: true,
exports: 'auto',
},
context: 'globalThis',
};
The shared base config handles TypeScript, JSON files, and other project-specific configurations.
Unlike other build setups, here’s what’s different:
- inlineDynamicImports: true - all logic is bundled into a single file, even if the code uses dynamic imports.
- format: 'cjs' - the output bundle format is CommonJS.
- No external field - all dependencies are bundled into the same file.
The result was a 10 MB JavaScript file. After creating the binary with SEA, the file size grew to 100 MB. That’s quite large for a relatively simple utility, but it's fine for me.
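Put together, the local build looks like this (assuming build:pkg is the npm script that runs Rollup with the config above, as it is in the Dockerfile later in this article; the last line is just an illustrative smoke test):
npm run build:pkg                        # bundle src/cli/cli.ts into pkg/dclint.cjs
./scripts/generate-sea.sh ./bin/dclint   # embed the bundle into a copy of the node binary
./bin/dclint --version                   # illustrative smoke test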
And now it’s finally completely self-contained. Or is it?
Docker
Since SEA doesn’t natively support building for different platforms and architectures, and instead produces a binary for the environment in which it runs, Docker is essential for cross-platform builds.
Docker is an open platform for developing, shipping, and running applications.
Docker provides the ability to package and run an application in a loosely isolated environment called a container.
Docker Documentation
Generate Binary with Docker
So in my case, the generate-sea.sh script must be run in the same environment where the binary is intended to work.
For example, to build a binary for Ubuntu (arm64), I can use the following command:
docker run --rm --platform linux/arm64 -v "$PWD":/app -w /app node:20.18.0-bullseye ./scripts/generate-sea.sh ./sea/dclint-bullseye-arm64
Explanation:
- --platform linux/arm64 specifies the target architecture for the build.
- node:20.18.0-bullseye is a Node.js Docker image based on Debian Bullseye; since it uses glibc, the resulting binary is also compatible with Ubuntu.
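The same pattern covers the other targets in my build matrix; for example, the Alpine amd64 binary:
docker run --rm --platform linux/amd64 -v "$PWD":/app -w /app node:20.18.0-alpine ./scripts/generate-sea.sh ./sea/dclint-alpine-amd64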
Creating Docker Image
In addition to generating binaries, the utility is distributed as a Docker image, which requires a Dockerfile to build the final container. I use a multi-stage build to minimize the final image size. The first stage generates the binary using the generate-sea.sh script, and the final stage copies only the generated binary, leaving unnecessary dependencies behind.
For the final stage, I use two types of images: Alpine and Scratch.
Alpine is a minimal base image (~5 MB), ideal for applications requiring a small footprint and enhanced security. Alpine on Docker Hub
Scratch is an empty base image for ultra-lightweight containers, suitable for standalone executables with minimal dependencies. Scratch on Docker Hub
Dockerfile Example:
# First stage (builder)
# -------------
FROM node:20.18.0-alpine3.19 AS builder
# Create working directory
WORKDIR /dclint
# Copy package.json and install dependencies
COPY package*.json ./
RUN npm ci
# Copy the rest of the project
COPY . .
# Build the binary with Rollup and SEA script
RUN npm run build:pkg && ./scripts/generate-sea.sh /bin/dclint
# Final stage (alpine)
# -------------
FROM alpine:3.19 AS alpine-version
# Suppress experimental warnings
ENV NODE_NO_WARNINGS=1
# Copy the binary from the builder stage
COPY --from=builder /bin/dclint /bin/dclint
# Create working directory
WORKDIR /app
# Define the entry point
ENTRYPOINT ["/bin/dclint"]
# Final stage (scratch)
# -------------
FROM scratch AS scratch-version
# Suppress experimental warnings
ENV NODE_NO_WARNINGS=1
# Copy the binary from the builder stage
COPY --from=builder /bin/dclint /bin/dclint
# Create working directory
WORKDIR /app
# Define the entry point
ENTRYPOINT ["/bin/dclint"]
Handling Library Dependencies
But running a container from the generated image will produce errors like these:
Error loading shared library libstdc++.so.6: No such file or directory (needed by /bin/dclint)
Error relocating /bin/dclint: _ZNSt7__cxx1119basic_ostringstreamIcSt11char_traitsIcESaIcEEC1Ev: symbol not found
...
This happens because, even though Node.js is bundled into the binary, it still requires the libstdc++ library, as shown by the ldd /bin/dclint command:
ldd /bin/dclint
/lib/ld-musl-aarch64.so.1 (0xffffaeac8000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0xffff9fe00000)
libc.musl-aarch64.so.1 => /lib/ld-musl-aarch64.so.1 (0xffffaeac8000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0xffffaea97000)
Of course, you can copy these dependencies into the final stage like this:
# Copy library dependencies
COPY --from=builder /lib/ld-musl-aarch64.so.1 /lib/ld-musl-aarch64.so.1
COPY --from=builder /usr/lib/libgcc_s.so.1 /usr/lib/libgcc_s.so.1
COPY --from=builder /usr/lib/libstdc++.so.6 /usr/lib/libstdc++.so.6
However, since dependencies differ across architectures (e.g., arm64 vs. amd64), I use the output of ldd /bin/dclint to identify dependencies dynamically, copy them into a separate folder, and then include them in the final stage. The ${lib#/usr/lib/} expansion below strips the /usr/lib/ prefix; if the result differs from the original path, the library lives in /usr/lib (and likewise for /lib):
# Collect platform-specific dependencies
RUN mkdir -p /dependencies/lib /dependencies/usr/lib && \
ldd /bin/dclint | awk '{print $3}' | grep -vE '^$' | while read -r lib; do \
if [ -f "$lib" ]; then \
if [ "${lib#/usr/lib/}" != "$lib" ]; then \
cp "$lib" /dependencies/usr/lib/; \
elif [ "${lib#/lib/}" != "$lib" ]; then \
cp "$lib" /dependencies/lib/; \
fi; \
fi; \
done
With this approach, the final Dockerfile looks like this:
# First stage (builder)
# -------------
FROM node:20.18.0-alpine3.19 AS builder
WORKDIR /dclint
COPY package*.json ./
RUN npm ci
COPY . .
# SEA Builder
RUN npm run build:pkg && ./scripts/generate-sea.sh /bin/dclint
# Collect platform-specific dependencies
SHELL ["/bin/ash", "-o", "pipefail", "-c"]
RUN mkdir -p /dependencies/lib /dependencies/usr/lib && \
ldd /bin/dclint | awk '{print $3}' | grep -vE '^$' | while read -r lib; do \
if [ -f "$lib" ]; then \
if [ "${lib#/usr/lib/}" != "$lib" ]; then \
cp "$lib" /dependencies/usr/lib/; \
elif [ "${lib#/lib/}" != "$lib" ]; then \
cp "$lib" /dependencies/lib/; \
fi; \
fi; \
done
# Final stage (alpine)
# -------------
FROM alpine:3.19 AS alpine-version
ENV NODE_NO_WARNINGS=1
# Install c++ dependencies
RUN apk update && apk upgrade && \
apk add --no-cache \
libstdc++=~13.2 \
&& rm -rf /tmp/* /var/cache/apk/*
COPY --from=builder /bin/dclint /bin/dclint
WORKDIR /app
ENTRYPOINT ["/bin/dclint"]
# Final stage (scratch)
# -------------
FROM scratch AS scratch-version
ENV NODE_NO_WARNINGS=1
# Copy dependencies
COPY --from=builder /dependencies/lib /lib
COPY --from=builder /dependencies/usr/lib /usr/lib
# Copy binary
COPY --from=builder /bin/dclint /bin/dclint
WORKDIR /app
ENTRYPOINT ["/bin/dclint"]
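To build and test a specific final stage locally, you can select it with Docker’s --target flag (the last line and its lint argument are illustrative):
docker build --target scratch-version -t dclint:scratch .
docker run --rm -v "$PWD":/app dclint:scratch .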
GitHub Actions
With my GitHub Actions pipeline I wanted to achieve two goals:
- Publish alpine and scratch versions (supporting both amd64 and arm64) to Docker Hub.
- Attach executable binaries for Alpine/Ubuntu (also amd64 and arm64) as assets to GitHub releases.
Publishing to Docker Hub
To publish images to Docker Hub, I use the docker/build-push-action@v6, where I specify:
- target: which final stage to publish.
- platforms: the platforms to build for.
- tags: the tags under which the image will be published.
This action is invoked twice: once for the alpine version and once for the scratch version. Here's an example for the scratch version:
jobs:
release:
runs-on: ubuntu-latest
steps:
- ...
- name: Build and push Scratch version
uses: docker/build-push-action@v6
with:
context: .
push: true
platforms: linux/amd64,linux/arm64
tags: |
${{ secrets.DOCKERHUB_USERNAME }}/dclint:latest
${{ secrets.DOCKERHUB_USERNAME }}/dclint:${{ env.BUILD_VERSION }}
target: scratch-version
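Multi-platform builds like this also require QEMU and Buildx to be set up earlier in the job (via docker/setup-qemu-action and docker/setup-buildx-action). The alpine invocation is identical apart from its target and tags; the -alpine tag suffix below is my illustration, not necessarily the published naming:
      - name: Build and push Alpine version
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          platforms: linux/amd64,linux/arm64
          tags: |
            ${{ secrets.DOCKERHUB_USERNAME }}/dclint:latest-alpine
            ${{ secrets.DOCKERHUB_USERNAME }}/dclint:${{ env.BUILD_VERSION }}-alpine
          target: alpine-version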
Building Binaries
For binary builds, I use matrix builds (GitHub Actions Matrix Guide) in the workflow. This allows simultaneous handling of different platforms and architectures:
jobs:
build_sea:
runs-on: ubuntu-latest
strategy:
matrix:
os: [alpine, bullseye]
arch: [amd64, arm64]
steps:
- ...
- name: Build binary
run: |
docker run --rm --platform linux/${{ matrix.arch }} -v "$PWD":/app -w /app node:20.18.0-${{ matrix.os }} ./scripts/generate-sea.sh ./sea/dclint-${{ matrix.os }}-${{ matrix.arch }}
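Each matrix job then needs to hand its binary over to the job that creates the release; a typical way to do that is an artifact upload step (the artifact name here is illustrative):
      - name: Upload binary
        uses: actions/upload-artifact@v4
        with:
          name: dclint-${{ matrix.os }}-${{ matrix.arch }}
          path: sea/dclint-${{ matrix.os }}-${{ matrix.arch }}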
You can view the full pipeline here:
GitHub Workflow File
Adding Binaries to Releases
The binaries are automatically added to releases via semantic-release
, though it can be done in other ways as well.
Here’s the part of release.config.js responsible for attaching files:
export default {
...
plugins: [
...
[
'@semantic-release/github',
{
assets: [
{
path: 'README.md',
label: 'Documentation',
},
{
path: 'CHANGELOG.md',
label: 'Changelog',
},
{
path: 'sea/dclint-alpine-amd64',
label: 'DClint Alpine Linux Binary (amd64)',
},
{
path: 'sea/dclint-bullseye-amd64',
label: 'DClint Bullseye Linux Binary (amd64)',
},
{
path: 'sea/dclint-alpine-arm64',
label: 'DClint Alpine Linux Binary (arm64)',
},
{
path: 'sea/dclint-bullseye-arm64',
label: 'DClint Bullseye Linux Binary (arm64)',
},
],
},
],
],
};
View release.config.js on GitHub
Final Thoughts
While working on dclint, I faced tasks that seemed simple at first but quickly turned into interesting challenges.
These challenges gave me valuable experience and helped make dclint a more practical tool: it runs without Node.js, supports multiple architectures, and can be easily installed via Docker or as a standalone file.
After all these optimizations, I managed to significantly reduce the size of the Docker images:
- The compressed size of the Alpine-based Docker image shrank from 93 MB to 43 MB.
- The new Scratch-based version comes in at 38 MB.
If you want to see how it all works, check out the repository. I’d appreciate your stars and any suggestions for improvement.
If you liked this article, you can support me via PayPal or follow me for more.