Bjørn T. Dahl

Posted on • Originally published at blog.apario.net

Sending stdin into a container using nothing but kernel primitives


When a containerised process needs input via stdin, delivering it from outside the container is often done in a more complex way than necessary. This article describes a lightweight, robust approach using a host-created FIFO mounted into the container, providing a simple, near-zero-overhead, atomic, scriptable, multi-source stdin channel with no extra daemons, no docker exec, and no PTY involvement.

The problem

Some processes use stdin as their primary input channel. When such a process runs inside a Docker container, the conventional options for sending input to it from the host are:

  • docker exec -i container command — spawns a new process inside the container rather than reaching the existing one's stdin, introduces extra pipes and process overhead, and is unsuitable for automation when ordering or atomicity matters.
  • docker attach, or docker run -i with a held-open stdin — attaches a stdin stream to the container's primary process, but is fragile, blocks the caller, and is coupled to the lifetime of the writer.
  • A multiplexer inside the container (tmux, screen) — adds a persistent process and complexity inside the container, requires a separate attach mechanism, couples the container image to the tooling, and displaces the target process from PID 1.
  • socat — in the LISTEN,fork EXEC: pattern, forks a fresh service process per connection, ruling out stateful services; without fork, it handles exactly one connection and exits. socat itself occupies PID 1 rather than the service process, except in the EXEC:...,nofork form, which execs the service into PID 1 but remains limited to a single connection.
  • A custom socket or network interface — requires the process to support it, which many do not.

None of these are satisfactory when the goal is simple, reliable, scriptable input delivery to a process that already reads from stdin. The Linux kernel, however, already provides a cleaner path.

The approach

All it takes is a single, well-established Linux primitive: the named pipe (FIFO), and a few lines of standard shell tooling.

A named pipe (a FIFO in POSIX terminology) is a kernel-managed special file that provides a unidirectional byte stream for inter-process communication, allowing unrelated processes to exchange data using normal file I/O without storing it on disk. It is created with mkfifo and exists in the filesystem. Unlike regular files, it does not support seeking and has no persistent contents, and unlike anonymous pipes it can be opened by unrelated processes. Because the kernel manages FIFO data entirely in memory, transfers are typically orders of magnitude faster than disk-based I/O and use far fewer resources. FIFOs are one of the 7 standard POSIX file types.

A Linux FIFO created on the host can be mounted directly into a container as a bind mount. Inside the container, a minimal wrapper script opens the FIFO and redirects it to the target process's stdin. From that point, writing to the FIFO delivers input directly to the containerised process's stdin, with kernel-guaranteed atomicity.

The additional processes and abstraction layers introduced by the conventional approaches are avoided entirely with this method. The full mechanism requires only three standard components:

  1. A FIFO created on the host before the container starts.
  2. The FIFO mounted into the container.
  3. A wrapper script inside the container that opens the FIFO and execs the target process with stdin redirected from it.

This works because Docker on Linux runs containers as processes directly on the host kernel, isolated via namespaces and cgroups. A FIFO created on the host is therefore the same kernel object inside the container as outside. The bind mount simply makes it visible at a chosen path in the container's filesystem namespace, with no copying, translation, or intermediary layers involved. The in-memory nature of the FIFO makes this input path as efficient as it can practically be.

Implementation

1. Create the FIFO on the host and set access permissions

mkfifo /var/run/myservice/command_pipe
chmod 0666 /var/run/myservice/command_pipe

The FIFO must exist before the container starts. The permissions shown are permissive for simplicity. Tighten the permissions and ownership to suit your environment, but ensure the process inside the container can still open the FIFO.
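As a sketch of what tightening might look like, the FIFO can be restricted to a dedicated group instead of being left world-writable. The `myservice` group name is hypothetical, and a temporary directory stands in for `/var/run/myservice` here:

```shell
# Sketch: group-restricted FIFO instead of chmod 0666.
# "myservice" is a hypothetical group; the temp dir stands in for /var/run/myservice.
PIPE_DIR=$(mktemp -d)
mkfifo "$PIPE_DIR/command_pipe"
# chown root:myservice "$PIPE_DIR/command_pipe"  # requires the group to exist
chmod 0660 "$PIPE_DIR/command_pipe"              # owner and group read/write only
stat -c '%a %F' "$PIPE_DIR/command_pipe"         # prints: 660 fifo
```

Whatever ownership you choose, the UID/GID mapping must line up with the user the containerised process runs as, since the bind-mounted FIFO keeps its host ownership inside the container.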

2. Mount the FIFO into the container

docker run -d \
  -v "/var/run/myservice/command_pipe:/service/command_pipe" \
  my-image

The container now sees the same FIFO file, and writes to it are immediately available inside the container. The kernel mediates the transfer entirely in memory.

3. Wrapper script inside the container

#!/bin/bash
# FIFO-based stdin channel for containerised processes
# See: https://blog.apario.net/sending-stdin-into-a-container

CMD_FIFO="/service/command_pipe"

if [[ ! -p "$CMD_FIFO" ]]; then
    >&2 echo "ERROR: FIFO $CMD_FIFO missing. Aborting."
    exit 1
fi

# Open the FIFO read/write (<>) on fd 3 so the open returns immediately
# instead of blocking for a peer. An exec with only redirections applies
# them to the current shell without replacing it.
exec 3<> "$CMD_FIFO"

# Exec the target process with stdin redirected from the FIFO.
# This exec replaces this shell, so the target process becomes PID 1.
exec /usr/bin/myservice <&3

This script is the container's entrypoint. It is where the FIFO, the bind mount, and process handoff converge into a working stdin channel.
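One way to wire the wrapper in — a minimal sketch, assuming the script above is saved as `entrypoint.sh` alongside the Dockerfile; the base image and service path are placeholders:

```dockerfile
# Illustrative wiring; base image and binary path are placeholders.
FROM debian:stable-slim
COPY myservice /usr/bin/myservice
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# Exec-form ENTRYPOINT, so the wrapper (and, after its final exec, the
# service) runs as PID 1 rather than under an intermediate shell.
ENTRYPOINT ["/entrypoint.sh"]
```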

4. Sending input via the FIFO

# Single command
echo "reload config" > /var/run/myservice/command_pipe

# Multiple commands from a file
cat commands.txt > /var/run/myservice/command_pipe

# From another script
printf "save\nquit\n" > /var/run/myservice/command_pipe

No special tooling required. Standard shell redirection and piping work as-is.
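The whole mechanism can be observed on a single host without Docker, since the container side is just a process holding the FIFO open. A minimal sketch, with a mock service standing in for `/usr/bin/myservice`:

```shell
# Host-only simulation of the FIFO stdin channel.
FIFO=$(mktemp -u)
mkfifo "$FIFO"

# Mock "containerised service": opens the FIFO read/write, redirects it
# to stdin, reads two commands, and records what it received.
( exec 3<> "$FIFO"
  exec 0<&3
  read -r first
  read -r second
  echo "service got: $first, $second" > "$FIFO.out"
) &

# "Host side": plain shell redirection, exactly as in step 4.
echo "reload config" > "$FIFO"
echo "save"          > "$FIFO"
wait

result=$(cat "$FIFO.out")
echo "$result"                 # prints: service got: reload config, save
rm -f "$FIFO" "$FIFO.out"
```

Note that the order is deterministic: both lines pass through the single kernel pipe buffer, so the mock service reads them in the order they were written.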

Key details

FIFO opening and file descriptors

Inside the container, the FIFO is opened read/write rather than read-only because a FIFO opened for reading blocks until a writer opens the other end, and vice versa. Opening the FIFO with <> (read/write) on a single file descriptor sidesteps this: the open call returns immediately because the same fd satisfies both ends. The process then holds the FIFO open continuously, meaning subsequent host writes never block waiting for a reader.
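The non-blocking property is easy to verify in isolation — a quick sketch using a throwaway FIFO:

```shell
# A read/write open of a FIFO returns immediately, with no peer required.
FIFO=$(mktemp -u)
mkfifo "$FIFO"
exec 3<> "$FIFO"    # returns at once: the same process holds both ends
echo "hello" >&3    # write succeeds immediately (buffer has space)
read -r line <&3    # and the data can be read back from the same fd
echo "$line"        # prints: hello
exec 3>&-           # close the descriptor
rm -f "$FIFO"
```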

File descriptors 0, 1, and 2 are reserved for stdin, stdout, and stderr respectively. Any number from 3 up to the process's file descriptor limit is available for arbitrary use. fd 3 is chosen here by convention, but any unused descriptor would work.

It is technically possible to open the FIFO directly on stdin with exec 0<> "$CMD_FIFO", but this modifies the shell's stdin before the target process takes over. In a simple wrapper this may work, but any startup logic between that line and the final exec, e.g. sourcing files, running checks, or reading configuration, would have its stdin redirected from the FIFO. Using a custom fd as an intermediary keeps the shell's stdin untouched until the handoff.

Why exec instead of a subshell

Using exec to replace the wrapper shell with the target process means the target process inherits PID 1 inside the container. This is correct behaviour for Docker: PID 1 receives signals directly, including SIGTERM on docker stop, allowing clean shutdown. A subshell sitting between the FIFO and the process would intercept or swallow signals.

The shell's exec is just a thin POSIX wrapper around the kernel's execve system call; the process replacement itself is a kernel operation that leaves no intermediate process behind.
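That the PID survives the handoff can be seen with two nested shells — a small sketch:

```shell
# exec replaces the current shell in place, so $$ is identical before and after.
sh -c 'echo "before exec: $$"; exec sh -c "echo after exec:  \$\$"'
```

Both lines print the same PID: the second shell is the same process as the first, just running a new program image.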

Atomicity

The Linux kernel guarantees that writes to a FIFO up to PIPE_BUF bytes are atomic. On Linux, PIPE_BUF is 4096 bytes. Multiple processes can write to the same FIFO concurrently without interleaving, as long as each write is within this limit. For line-oriented input, this is almost always satisfied.

The FIFO buffer's total capacity typically defaults to 64KB on Linux, configurable up to a system-wide maximum (typically 1MB). If writers produce data faster than the target process consumes it and the buffer fills, further writes block until space becomes available.
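Both limits can be inspected from the shell on a typical Linux host:

```shell
# Atomic-write limit for pipes on this filesystem (4096 on Linux).
getconf PIPE_BUF /

# System-wide ceiling on a pipe's total capacity (commonly 1048576, i.e. 1MB).
cat /proc/sys/fs/pipe-max-size
```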

One-way only

This mechanism delivers input to the target process. Output is not returned through the FIFO. Use docker logs or a separate log aggregation mechanism to observe the process's output.

Approach comparison

| Property | This approach | docker exec | In-container multiplexer |
| --- | --- | --- | --- |
| Scriptable from host | Yes, native shell | Yes, with overhead | Requires attach tooling |
| Extra processes | No | At least 2 per command | Multiplexer server with IPC layer, plus attach client and docker exec processes |
| Service process is PID 1 | Yes | Should be | No |
| Atomicity | Kernel-guaranteed (per PIPE_BUF) | Not guaranteed | Depends on implementation |
| Overhead | Minimal | Per-command process spawn cost | Constant resource footprint plus per-interaction exec and spawn cost |

Applicability

This approach works for most, if not all, containerised processes that read from stdin; input can originate from the host, from within the same container, or from a different container (including sidecars, via a shared mount or volume), or any combination of these.

It may not be suitable for processes that require a PTY (terminal emulation), though many processes that appear to require a PTY in interactive use will accept plain stdin input when running non-interactively.

Host-to-container communication via this mechanism is not available on Windows or macOS, because the approach relies on a shared POSIX kernel between host and container. On those platforms, Docker Desktop runs Linux containers inside a lightweight Linux VM, isolating them from the host kernel. Native Windows containers run directly on the Windows kernel, which does not expose a POSIX-compatible interface to containers.

Summary

A host FIFO mounted into a container, opened read/write to prevent blocking, and passed to a target process via exec and stdin redirection provides a clean, low-overhead, kernel-mediated data channel into any containerised stdin-reading process. No extra daemons, no network sockets, no docker exec overhead. Data can be written to it with standard tools from the host, from within a container, or from any other process with access. The kernel handles atomicity and buffering. Together these properties make it a practical, low-complexity solution for scriptable stdin control in containerised environments, with no moving parts outside the kernel.

Provenance and scope

This approach emerged while working around stdin control limitations in a proprietary server binary. The original implementation is available in the Bedfeather project.

Each component is a decades-old POSIX or Linux primitive. Using FIFOs in container data pipelines is not new, and the shell methods involved are well established. Their combination in this pattern, however, appears to be rarely documented.

This article exists to distill the method into a clear, referenceable form. Security hardening, failure recovery, and behaviour under sustained, concurrent write load are deliberately outside this article's scope, but should be considered before production use.


Licence

Article text is licensed under CC BY 4.0 — share and adapt freely with attribution. Code samples are released under the MIT licence.

Full licence text including copyright notice is available on the original post.
