CI/CD automates the build and deployment process — push code, pipeline runs, new version deployed on the cluster. Here's how I set it up for ASTRING using GitLab CI/CD, and why I ended up switching from Docker-in-Docker to Kaniko.
## The Initial Pipeline
The first version used Docker-in-Docker (DinD) — a standard approach where the CI job spins up a Docker daemon inside a container to build the image.
```yaml
stages:
  - dockerize
  - deploy

variables:
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

dockerize:
  stage: dockerize
  image: docker:24.0.5
  services:
    - docker:dind
  script:
    - docker build -t $IMAGE_TAG .
    - docker tag $IMAGE_TAG $CI_REGISTRY_IMAGE:latest
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY
    - docker push $IMAGE_TAG
    - docker push $CI_REGISTRY_IMAGE:latest

deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:1.31
    entrypoint: [""]
  script:
    - mkdir -p ~/.kube
    - echo "$KUBECONFIG_BASE64" | base64 -d > ~/.kube/config
    - kubectl apply -f deployment/astring/deployment.yaml
    - kubectl apply -f deployment/astring/service.yaml
    - kubectl apply -f deployment/astring/ingress.yaml
    - kubectl rollout restart deployment/astring-backend -n astring
```
This worked locally but broke as soon as I moved the GitLab runner onto the on-premise Kubernetes cluster.
## The Problem with DinD on k3s
My GitLab runner runs as a pod on the same k3s cluster that hosts everything else. k3s uses containerd as its container runtime, not Docker. DinD assumes a Docker daemon is available — it tries to talk to /var/run/docker.sock, which doesn't exist in a containerd environment. The build stage just failed.
Beyond the compatibility issue, DinD also requires privileged containers to run the nested Docker daemon, which is a security concern on a shared cluster.
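For context on what "privileged" means here: getting DinD to work at all on a Kubernetes-hosted runner means flipping a switch in the runner's configuration. A sketch of the relevant `config.toml` section for GitLab Runner's Kubernetes executor (values illustrative):

```toml
# GitLab Runner, Kubernetes executor — what DinD would require (sketch)
[[runners]]
  executor = "kubernetes"
  [runners.kubernetes]
    # Runs every build pod as privileged so the nested Docker daemon can start.
    # This grants the pod near-root access to the node — risky on a shared cluster.
    privileged = true
```

Every job on that runner inherits the privilege, not just the image builds, which is exactly what I wanted to avoid.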
## Switching to Kaniko
Kaniko is a tool that builds container images from a Dockerfile without needing a Docker daemon. It runs entirely in userspace, reads the Dockerfile layer by layer, and pushes the result directly to a registry. No privileged container, no Docker socket, works fine on containerd.
Here's the updated pipeline:
```yaml
stages:
  - dockerize
  - deploy

variables:
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

dockerize:
  stage: dockerize
  image:
    name: gcr.io/kaniko-project/executor:v1.23.2-debug
    entrypoint: [""]
  script:
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
      --destination "${CI_REGISTRY_IMAGE}:latest"
      --cache=true

deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:1.31
    entrypoint: [""]
  script:
    - mkdir -p ~/.kube
    - echo "$KUBECONFIG_BASE64" | base64 -d > ~/.kube/config
    - kubectl apply -f deployment/astring/deployment.yaml
    - kubectl apply -f deployment/astring/service.yaml
    - kubectl apply -f deployment/astring/ingress.yaml
    - kubectl rollout restart deployment/astring-backend -n astring
```
The dockerize stage now uses the Kaniko executor image directly. No services block, no Docker socket, no privileged mode needed. --cache=true reuses unchanged layers across builds, which makes subsequent builds significantly faster.
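One detail worth calling out: there's no docker login equivalent in the job above. Kaniko reads registry credentials from /kaniko/.docker/config.json instead, so pushes need that file to exist. A common pattern (sketched here from GitLab's predefined CI variables; adapt to your registry) is to generate it in a before_script:

```yaml
dockerize:
  # ... same stage/image as above
  before_script:
    - mkdir -p /kaniko/.docker
    - >-
      echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf '%s:%s'
      "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}"
      > /kaniko/.docker/config.json
```

The auth value is just `user:password` base64-encoded, which is the standard Docker config.json format.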
## How the Deploy Stage Works
The deploy stage uses kubectl to apply the manifests and trigger a rolling restart. For now, this setup is purely for testing purposes.
Kubeconfig is stored as a base64-encoded GitLab CI variable (KUBECONFIG_BASE64). The pipeline decodes it at runtime and writes it to ~/.kube/config.
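Creating that variable is a one-liner locally. A minimal sketch of the round trip (the file content below is a stand-in, not a real kubeconfig):

```shell
#!/bin/sh
set -eu
# Stand-in kubeconfig — in practice this is ~/.kube/config from the cluster.
printf 'apiVersion: v1\nkind: Config\n' > /tmp/kubeconfig
# Encode it for storage as the KUBECONFIG_BASE64 CI variable (single line).
KUBECONFIG_BASE64=$(base64 < /tmp/kubeconfig | tr -d '\n')
# What the deploy job does at runtime: decode and write it back out.
mkdir -p /tmp/kube
echo "$KUBECONFIG_BASE64" | base64 -d > /tmp/kube/config
cat /tmp/kube/config
```

Masking the variable in GitLab's CI settings keeps it out of job logs.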
Image tagging uses the commit SHA ($CI_COMMIT_SHORT_SHA) so every build produces a uniquely tagged image. The deployment manifest references the SHA tag, not latest — this ensures kubectl rollout restart actually pulls the new image. If we use latest without imagePullPolicy: Always, Kubernetes may skip the pull and restart with the cached image, which is not what we want.
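Since the SHA changes every commit, the manifest can't hard-code it ahead of time. One common approach is to render the tag into the manifest in the deploy job before applying it — a sketch, where the __IMAGE__ placeholder is hypothetical and not in my actual manifests:

```shell
#!/bin/sh
set -eu
# Stand-in for deployment/astring/deployment.yaml with a tag placeholder.
printf '    image: __IMAGE__\n' > /tmp/deployment.yaml
# In CI this would be "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA".
IMAGE_TAG="registry.example.com/astring:abc1234"
# Substitute the placeholder; the rendered file is what kubectl apply would receive.
sed "s|__IMAGE__|${IMAGE_TAG}|g" /tmp/deployment.yaml > /tmp/deployment.rendered.yaml
cat /tmp/deployment.rendered.yaml
```

With the tag baked into the applied manifest, `kubectl apply` alone triggers the rollout and the separate restart becomes a belt-and-suspenders step.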
Rolling restart means Kubernetes updates pods one at a time — new pod comes up, old pod goes down. Zero downtime deployment without any extra configuration.
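That behavior comes from the Deployment's default RollingUpdate strategy. If I wanted to pin it down explicitly instead of relying on the defaults (a sketch — these values are stricter than Kubernetes' 25%/25% defaults and are not what my manifest currently sets):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # start at most one extra pod during the rollout
      maxUnavailable: 0    # never take an old pod down before its replacement is Ready
```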
## Secrets
Config files and keys don't go into the image. They're stored as Kubernetes secrets and mounted into the pod at runtime. The pipeline itself only needs registry credentials and the kubeconfig — both stored as GitLab CI variables, never in the repo.
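As a sketch of what "mounted at runtime" looks like in the Deployment spec (secret and path names here are illustrative, not my actual manifest):

```yaml
# Hypothetical fragment of a pod spec — names are illustrative.
spec:
  volumes:
    - name: app-config
      secret:
        secretName: astring-config   # created with `kubectl create secret`
  containers:
    - name: astring-backend
      volumeMounts:
        - name: app-config
          mountPath: /app/config     # files appear here, read-only
          readOnly: true
```

The image stays generic; swapping environments means swapping the secret, not rebuilding.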
## What's Next
This pipeline handles the backend. I'll write about the full cluster setup — k3s configuration, networking, ingress, and how everything is organized — in the next post.