drtobbyas
EP2: Mapping the Labyrinth: How Coolify Deploys Your Apps (and Why K8s Fits)

If you missed Episode 1, we established the goal: Investigate whether native Kubernetes support in Coolify is actually impossible.

Now, the investigation moves from the "Why" to the "How." I spent the last few days inside the Coolify source code, trying to map exactly how it moves code from a repository into a running container.

Here is the technical reality of the engine.


πŸ—οΈ Part 1: Finding the Heartbeat

To understand how Coolify works, you have to find its "Engine Room." In this codebase, that room is located at app/Jobs/ApplicationDeploymentJob.php.

It is a massive, 4,000-line procedural job.

In some circles, a 4k-line file is a "code smell." But in an orchestrator, it’s actually a map. Because it's written procedurally, you can read it like a script. I spent hours tracing the flow:

  1. The Setup: Cloning the repo and establishing the build environment.
  2. The Network: Creating the Docker bridge networks.
  3. The Deployment: Building the images and running docker compose up.

The audit confirmed my first hunch: The logic isn’t hardcoded to Docker. It’s a sequence of commands. If we can swap those commands, we can change the engine.
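That hunch can be sketched in miniature. Here is a hypothetical model of the linear job the audit traced (function names and paths are mine for illustration, not Coolify's actual API): each step is just a shell-command string handed to an executor.

```python
# Hypothetical sketch of Coolify's linear deployment flow.
# Names and paths are illustrative, not the real codebase's.

def deployment_steps(app: str, repo: str) -> list[str]:
    return [
        # 1. The Setup: clone the repo into the build environment
        f"git clone {repo} /artifacts/{app}",
        # 2. The Network: create the Docker bridge network
        f"docker network create --driver bridge {app}-net",
        # 3. The Deployment: build the images and bring the stack up
        f"docker compose --project-directory /artifacts/{app} up -d --build",
    ]

# The key observation: swap the strings and you swap the engine.
steps = deployment_steps("dashy", "https://example.com/dashy.git")
```

Nothing in the sequencing logic knows it is talking to Docker; it only knows it is emitting commands in order.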


πŸ—ΊοΈ The Map of the Territory

To find the path to Kubernetes, I first had to map the Labyrinth. Here is the simplified structure of the Coolify engine:

coolify/
β”œβ”€β”€ app/
β”‚   β”œβ”€β”€ Actions/        # Reusable deployment logic
β”‚   β”œβ”€β”€ Jobs/           # The heart: ApplicationDeploymentJob.php (4k lines)
β”‚   └── Models/         # Data structures (Server, Destination, Service)
β”œβ”€β”€ bootstrap/
β”‚   └── helpers/        # The heavy lifters: remoteProcess.php & proxy.php
β”œβ”€β”€ config/             # Global platform settings
└── docker-compose.dev.yml

Key Discovery Points:

  • app/Jobs: This is where the linear deployment sequence lives.
  • bootstrap/helpers/remoteProcess.php: This is the "SSH Tunnel" that makes everything possible.
  • app/Models: This is where we’ll define the new KubernetesDestination.

πŸ‰ Part 2: The Fedora Sidequest

Before I could dive deeper, I had to fix my own "Engine Room."

I develop on Fedora, which means I’m running a security-hardened stack with SELinux. As soon as I tried to spin up a basic service like Dashy or Homepage in my local Coolify dev environment, I hit a stone wall.

Permission Denied.

The proxy container (Traefik/Caddy) couldn’t talk to the Docker socket. Everything was 404ing.

I spent a few hours patching bootstrap/helpers/proxy.php to handle this "hardened" reality. The fix required two key adjustments:

  • Adding the :z flag for volume relabeling (/var/run/docker.sock:/var/run/docker.sock:z).
  • Setting privileged: true for the local proxy.

The Lesson: Local dev is never as simple as docker compose up. But solving these "gatekeeper" bugs gave me a deeper understanding of how Coolify handles its proxy logic, knowledge I'll need when we move to K8s Ingress.


πŸšͺ Part 3: The SSH Backdoor

While auditing the engine, I found the most important piece of the puzzle: remote_process.

Coolify doesn't rely on complex, vendor-locked SDKs to manage your servers. It does something much simpler and more powerful: it uses SSH to run shell commands.

This is the "Kubernetes Backdoor."

Right now, the ApplicationDeploymentJob sends strings like:
docker compose up -d

But because it’s just a CLI pipeline over SSH, there is no architectural reason it can't send:
kubectl apply -f manifest.yaml

The engine treats servers as SSH-ready shell endpoints. If your server has kubectl installed, Coolify can already talk to it. The "impossible" barrier isn't the architecture; it's just a translation problem.
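Stripped of its Laravel plumbing, the pattern is simple enough to sketch. This is a Python illustration of the *idea* behind remote_process, not the actual helper (the real one is PHP, with connection multiplexing and log capture):

```python
import subprocess

def remote_process(host: str, command: str) -> list[str]:
    """Build the SSH invocation for a raw shell command.

    Illustrative sketch of the concept behind Coolify's remote_process
    helper: the server is just a shell endpoint reached over SSH.
    """
    return ["ssh", "-o", "BatchMode=yes", host, command]

def run(argv: list[str]) -> str:
    """Execute the invocation and return its stdout."""
    return subprocess.run(argv, capture_output=True, text=True, check=True).stdout

# Today the job sends Docker strings; nothing stops it sending kubectl ones:
docker_call  = remote_process("deploy@server", "docker compose up -d")
kubectl_call = remote_process("deploy@server", "kubectl apply -f manifest.yaml")
```

The executor is identical in both calls; only the payload string changes.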


πŸš€ The Phase 2 Conclusion: It’s a Translation Problem

They said Kubernetes isn't coming. I've found that the door is already wide open.

The challenge ahead isn't rewriting the core engine. It's building the Translator. We need to take the configuration you provide in the Coolify UI and turn it into Kubernetes YAML instead of Docker Compose labels.
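As a first approximation of what that Translator must do, here is a hypothetical sketch (the input field names are invented for illustration) mapping a UI-style app config to a minimal Kubernetes Deployment:

```python
def to_deployment(app: dict) -> dict:
    """Translate a UI-style app config into a minimal K8s Deployment manifest.

    Illustrative only: today, the same inputs become Docker Compose
    services and proxy labels instead.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": app["name"]},
        "spec": {
            "replicas": app.get("replicas", 1),
            "selector": {"matchLabels": {"app": app["name"]}},
            "template": {
                "metadata": {"labels": {"app": app["name"]}},
                "spec": {"containers": [{
                    "name": app["name"],
                    "image": app["image"],
                    "ports": [{"containerPort": app["port"]}],
                }]},
            },
        },
    }

manifest = to_deployment({"name": "myapp", "image": "nginx:alpine", "port": 80})
```

Serialize that dict to YAML, hand it to the SSH pipeline as `kubectl apply -f -`, and the existing engine does the rest.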

Next in the Investigation:
I’m moving on to building the KubernetesDestination model, the foundation for a cluster-native Coolify experience.

Follow along as we start building the bridge.


GitHub Issue: https://github.com/coollabsio/coolify/issues/2390

Connect with me: Twitter/X, LinkedIn, Telegram

This is the second post in a series documenting my investigation into Kubernetes support for Coolify. Next up: Building the first Kubernetes Destination model.
