If you missed Episode 1, we established the goal: Investigate whether native Kubernetes support in Coolify is actually impossible.
Now, the investigation moves from the "Why" to the "How." I spent the last few days inside the Coolify source code, trying to map exactly how it moves code from a repository into a running container.
Here is the technical reality of the engine.
Part 1: Finding the Heartbeat
To understand how Coolify works, you have to find its "Engine Room." In this codebase, that room is located at app/Jobs/ApplicationDeploymentJob.php.
It is a massive, 4,000-line procedural job.
In some circles, a 4k-line file is a "code smell." But in an orchestrator, it's actually a map. Because it's written procedurally, you can read it like a script. I spent hours tracing the flow:
- The Setup: Cloning the repo and establishing the build environment.
- The Network: Creating the Docker bridge networks.
- The Deployment: Building the images and running `docker compose up`.
The audit confirmed my first hunch: The logic isn't hardcoded to Docker. It's a sequence of commands. If we can swap those commands, we can change the engine.
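To make the shape of that flow concrete, here is a sketch of the deployment sequence reduced to what it fundamentally is: an ordered list of shell commands. This is a hypothetical simplification for illustration (the function name, repo URL, and paths are mine, not Coolify's; the real job is a 4,000-line Laravel class), but it captures the linear, script-like structure.

```python
# Hypothetical sketch: the deployment job as a linear sequence of shell
# commands, generated once and then executed in order on the target server.
def build_deployment_commands(repo_url: str, app_name: str) -> list[str]:
    return [
        # The Setup: clone the repo into a build directory
        f"git clone {repo_url} /artifacts/{app_name}",
        # The Network: create the Docker bridge network (ignore if it exists)
        f"docker network create {app_name}-network || true",
        # The Deployment: build the images and start the stack
        f"docker compose -f /artifacts/{app_name}/docker-compose.yml up -d",
    ]

commands = build_deployment_commands("https://github.com/acme/app.git", "acme-app")
for cmd in commands:
    print(cmd)
```

Nothing in this structure cares *which* CLI the strings invoke, which is exactly the property the rest of this post leans on.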
The Map of the Territory
To find the path to Kubernetes, I first had to map the Labyrinth. Here is the simplified structure of the Coolify engine:
```
coolify/
├── app/
│   ├── Actions/    # Reusable deployment logic
│   ├── Jobs/       # The heart: ApplicationDeploymentJob.php (4k lines)
│   └── Models/     # Data structures (Server, Destination, Service)
├── bootstrap/
│   └── helpers/    # The heavy lifters: remoteProcess.php & proxy.php
├── config/         # Global platform settings
└── docker-compose.dev.yml
```
Key Discovery Points:
- `app/Jobs`: This is where the linear deployment sequence lives.
- `bootstrap/helpers/remoteProcess.php`: This is the "SSH Tunnel" that makes everything possible.
- `app/Models`: This is where we'll define the new `KubernetesDestination`.
Part 2: The Fedora Sidequest
Before I could dive deeper, I had to fix my own "Engine Room."
I develop on Fedora, which means I'm running a security-hardened stack with SELinux. As soon as I tried to spin up a basic service like Dashy or Homepage in my local Coolify dev environment, I hit a stone wall.
Permission Denied.
The proxy container (Traefik/Caddy) couldnβt talk to the Docker socket. Everything was 404ing.
I spent a few hours patching bootstrap/helpers/proxy.php to handle this "hardened" reality. The fix required two key adjustments:
- Adding the `:z` flag for volume relabeling (`/var/run/docker.sock:/var/run/docker.sock:z`).
- Setting `privileged: true` for the local proxy.
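In compose terms, the two adjustments look roughly like this. This is a sketch of a proxy service definition, not Coolify's exact config; the service name and image are placeholders:

```yaml
# Sketch of a local proxy service with the SELinux adjustments applied.
services:
  coolify-proxy:
    image: traefik          # placeholder; the real config pins a version
    privileged: true        # let the proxy operate on a hardened host
    volumes:
      # :z asks Docker to relabel the socket so SELinux permits access
      - /var/run/docker.sock:/var/run/docker.sock:z
```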
The Lesson: Local dev is never as simple as `docker compose up`. But solving these "gatekeeper" bugs gave me a deeper understanding of how Coolify handles its proxy logic, knowledge I'll need when we move to K8s Ingress.
Part 3: The SSH Backdoor
While auditing the engine, I found the most important piece of the puzzle: remote_process.
Coolify doesn't rely on complex, vendor-locked SDKs to manage your servers. It does something much simpler and more powerful: it uses SSH to run shell commands.
This is the "Kubernetes Backdoor."
Right now, the ApplicationDeploymentJob sends strings like:

```
docker compose up -d
```

But because it's just a CLI pipeline over SSH, there is no architectural reason it can't send:

```
kubectl apply -f manifest.yaml
```
The engine treats servers as SSH-ready shell endpoints. If your server has kubectl installed, Coolify can already talk to it. The "impossible" barrier isn't the architecture, it's just a translation problem.
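To show how small the swap really is, here is a hypothetical sketch of a deploy step that only differs in the command string it hands to the SSH channel. The function name and engine labels are my own illustration, not code from Coolify:

```python
# Hypothetical sketch: the deploy step differs only in the command string
# it ships over SSH; everything around it stays identical.
def deploy_command(engine: str, target: str) -> str:
    if engine == "docker":
        return f"docker compose -f {target} up -d"
    if engine == "kubernetes":
        return f"kubectl apply -f {target}"
    raise ValueError(f"unknown engine: {engine}")

print(deploy_command("docker", "docker-compose.yml"))
print(deploy_command("kubernetes", "manifest.yaml"))
```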
The Phase 2 Conclusion: It's a Translation Problem
They said Kubernetes isn't coming. I've found that the door is already wide open.
The challenge ahead isn't rewriting the core engine. It's building the Translator. We need to take the configuration you provide in the Coolify UI and turn it into Kubernetes YAML instead of Docker Compose labels.
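As a first pass at what that Translator could look like, here is a minimal sketch that turns a handful of UI-style settings into a Kubernetes Deployment manifest. Everything here is an assumption for illustration (function name, parameters, and the Dashy example values are mine); a real implementation would live in the Coolify codebase and cover Services, Ingress, and volumes too:

```python
# Hypothetical sketch of the "Translator": UI settings in, K8s manifest out.
def to_k8s_deployment(name: str, image: str, port: int, env: dict[str, str]) -> dict:
    """Build a minimal Kubernetes Deployment manifest as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                        "env": [{"name": k, "value": v} for k, v in env.items()],
                    }],
                },
            },
        },
    }

manifest = to_k8s_deployment("dashy", "lissy93/dashy:latest", 8080,
                             {"NODE_ENV": "production"})
```

Serialize that dict to YAML and pipe it through the existing SSH channel as `kubectl apply -f -`, and the engine never has to know it stopped speaking Docker.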
Next in the Investigation:
I'm moving on to building the `KubernetesDestination` model, the foundation for a cluster-native Coolify experience.
Follow along as we start building the bridge.
GitHub Issue: https://github.com/coollabsio/coolify/issues/2390
Connect with me: Twitter/X, LinkedIn, Telegram
This is the second post in a series documenting my investigation into Kubernetes support for Coolify. Next up: Building the first Kubernetes Destination model.