Table of Contents:
1. Introduction: The Helm Fatigue
2. Why Kustomize? Key Advantages Over Helm
3. Our Approach: GitOps and Declarative Management
4. The Plan: A Simple Application with Environment Overlays
5. Explaining Kustomize: Bases, Overlays, and Patches
6. Example Walkthrough: From Code to Full Deployment
6.1. Automating Everything: A CI/CD Pipeline with GitHub Actions
7. Time Complexity and Performance
8. Example Output
9. Conclusion: Embracing Simplicity
For years, Helm was the undisputed king of Kubernetes deployment. We were told it was the best way to manage complexity, but let's be honest: dealing with templated YAML, debugging --dry-run outputs, and managing Tiller (before Helm 3 removed it) often felt more complicated than the problem it was solving.
I was on the verge of Kubernetes fatigue until I discovered a better way. A method that is simpler, more declarative, and feels native to the Kubernetes ecosystem. That method is Kustomize.
This Kustomize Kubernetes deployment guide will show you a cleaner, more maintainable path for managing your applications across different environments.
1. Introduction: The Helm Fatigue
Helm's "templating" approach injects dynamic values into YAML files. This creates a layer of abstraction that can be hard to debug. What does the final manifest actually look like? You have to run a command to see. This breaks the core Kubernetes principle of declarative configuration.
Kustomize takes a different, purely declarative approach. Instead of templating, you define a base set of resources and then patch them for different environments (like dev, staging, prod). It's kubectl-native, simple to understand, and incredibly powerful.
2. Why Kustomize? Key Advantages Over Helm
| Feature | Helm | Kustomize |
|---|---|---|
| Philosophy | Templating & Packaging | Patching & Overlays |
| Learning Curve | Steeper | Gentle |
| Debugging | Requires --dry-run | Direct file inspection |
| Native Integration | External tool | Built into kubectl apply -k |
| Configuration | Dynamic with values.yaml | Declarative with kustomization.yaml |
3. Our Approach: GitOps and Declarative Management
We will adopt a GitOps-style approach. Our entire configuration—base and environment-specific patches—will live in Git. This gives us a single source of truth, full audit trails, and easy rollbacks. Kustomize is a perfect fit for this methodology.
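One concrete payoff of this setup: because the rendered manifests are deterministic, you can preview exactly what a Git change would do to a live cluster before merging it. We'll build the overlays/prod directory referenced here in the walkthrough below:
kubectl diff -k overlays/prod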
4. The Plan: A Simple Application with Environment Overlays
We'll deploy a simple Nginx application. Our goal is three distinct environments, all built from a single shared base:
- Base: The common definition for all environments.
- Dev: A low-resource, low-replica setup.
- Staging: A medium-resource setup, closer to production.
- Prod: A high-availability, high-resource setup.
5. Explaining Kustomize: Bases, Overlays, and Patches
- Base: The common, shared configuration for your application (e.g., deployment.yaml, service.yaml).
- Overlay: A directory containing a kustomization.yaml file that points to a base and defines patches to apply.
- Patches: Targeted changes to the base. You can change anything from image tags and replica counts to environment variables and resource limits (see the minimal example below).
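For instance, a strategic merge patch carries only the fields you want to change, plus enough metadata for Kustomize to match it to the right base resource by kind and name. A minimal sketch (the replica count here is just an illustration):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app  # must match the name in the base
spec:
  replicas: 5      # the only field this patch changes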
6. Example Walkthrough: From Code to Full Deployment
Let's build our Kustomize Kubernetes deployment structure.
Project Structure
my-app/
├── base/
│ ├── kustomization.yaml
│ ├── deployment.yaml
│ └── service.yaml
└── overlays/
├── dev/
│ ├── kustomization.yaml
│ └── patch_replicas.yaml
├── staging/
│ ├── kustomization.yaml
│ └── patch_resources.yaml
└── prod/
├── kustomization.yaml
└── patch_resources.yaml
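If you want to follow along, this skeleton is one command away (directory names are of course up to you):
mkdir -p my-app/base my-app/overlays/{dev,staging,prod}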
The Base Configuration
base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-app
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.25 # We will override this in overlays
ports:
- containerPort: 80
resources: {} # Defined in overlays
base/service.yaml
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
selector:
app: nginx
ports:
- port: 80
targetPort: 80
base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
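The base is already a valid Kustomization on its own, so you can sanity-check it before writing any overlay:
kubectl kustomize base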
The Dev Overlay
overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
  - path: patch_replicas.yaml
namePrefix: dev-
namespace: dev-namespace
overlays/dev/patch_replicas.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-app
spec:
replicas: 1
template:
spec:
containers:
- name: nginx
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "128Mi"
cpu: "100m"
The Prod Overlay
overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
  - path: patch_resources.yaml
images:
- name: nginx
newTag: 1.25-alpine # More secure base image
namePrefix: prod-
namespace: prod-namespace
overlays/prod/patch_resources.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-app
spec:
replicas: 3
template:
spec:
containers:
- name: nginx
resources:
requests:
memory: "512Mi"
cpu: "250m"
limits:
memory: "1Gi"
cpu: "500m"
To deploy to production, you simply run:
kubectl apply -k overlays/prod
This command tells Kustomize to build the final manifest from the base and the prod overlay, then apply it directly to your cluster.
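A nice side effect of the images transformer: nobody has to hand-edit the tag. From a CI job or your shell, kustomize edit rewrites kustomization.yaml in place:
cd overlays/prod
kustomize edit set image nginx=nginx:1.25-alpine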
6.1. Automating Everything: A CI/CD Pipeline with GitHub Actions
Here is a complete GitHub Actions pipeline that automatically lints, validates, and deploys these manifests to different clusters based on the branch.
.github/workflows/k8s-deploy.yml
name: 🚀 Deploy to Kubernetes
on:
push:
branches: [ main, staging, dev ]
env:
KUSTOMIZE_VERSION: "v5.4.1"
jobs:
lint-and-deploy:
name: "🔍 Lint & Deploy"
runs-on: ubuntu-latest
    permissions:
      contents: read
steps:
- name: "📥 Checkout Code"
uses: actions/checkout@v4
- name: "🔧 Setup Kustomize"
run: |
curl -s -L https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2F${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_amd64.tar.gz | tar xz
sudo mv kustomize /usr/local/bin/
- name: "🛠️ Build & Lint Manifests"
run: |
echo "Building manifests for all environments..."
for env in dev staging prod; do
if [ -d "./overlays/$env" ]; then
echo "--- $env ---"
kustomize build ./overlays/$env
# kubeval can be added here for strict validation
# kustomize build ./overlays/$env | kubeval --strict
fi
done
- name: "🔐 Deploy to Development"
if: github.ref == 'refs/heads/dev'
uses: azure/k8s-deploy@v4
with:
action: deploy
namespace: dev-namespace
manifests: |
./overlays/dev/kustomization.yaml
kustomize: true
# Connection to your dev cluster (configure these in GitHub Secrets)
k8s-url: ${{ secrets.DEV_K8S_URL }}
k8s-secret: ${{ secrets.DEV_K8S_SECRET }}
- name: "🔐 Deploy to Staging"
if: github.ref == 'refs/heads/staging'
uses: azure/k8s-deploy@v4
with:
action: deploy
namespace: staging-namespace
manifests: |
./overlays/staging/kustomization.yaml
kustomize: true
k8s-url: ${{ secrets.STAGING_K8S_URL }}
k8s-secret: ${{ secrets.STAGING_K8S_SECRET }}
- name: "🔐 Deploy to Production"
if: github.ref == 'refs/heads/main'
uses: azure/k8s-deploy@v4
with:
action: deploy
namespace: prod-namespace
manifests: |
./overlays/prod/kustomization.yaml
kustomize: true
k8s-url: ${{ secrets.PROD_K8S_URL }}
k8s-secret: ${{ secrets.PROD_K8S_SECRET }}
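Before wiring up the cluster secrets, you can verify the exact same deploy from your own kubeconfig. A server-side dry run shows what the cluster would accept without changing anything:
kubectl apply -k overlays/prod --dry-run=server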
7. Time Complexity and Performance
From an operational perspective, Kustomize is highly efficient.
- Build Time: The kustomize build command is O(n), where n is the number of resource files and patches. It's incredibly fast, often sub-second.
- Cognitive Load: This is the major win. Understanding and debugging a configuration is O(1): you just look at the files. This is a dramatic improvement over tracing values through complex Helm template hierarchies.
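The speed claim is easy to verify on this very project:
time kustomize build overlays/prod > /dev/null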
8. Example Output
When you run kustomize build overlays/prod, the final, combined manifest output would look like this:
apiVersion: v1
kind: Service
metadata:
name: prod-nginx-service
namespace: prod-namespace
spec:
ports:
- port: 80
targetPort: 80
selector:
app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: prod-nginx-app
namespace: prod-namespace
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx:1.25-alpine
name: nginx
ports:
- containerPort: 80
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 250m
memory: 512Mi
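And because every environment renders to plain YAML, comparing two environments is an ordinary shell diff (process substitution here assumes a bash-like shell):
diff <(kustomize build overlays/dev) <(kustomize build overlays/prod)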
9. Conclusion: Embracing Simplicity
Switching from Helm to Kustomize felt like lifting a veil. My Kustomize Kubernetes deployment workflow is now transparent, maintainable, and perfectly aligned with the GitOps philosophy. The configuration is just plain YAML, and what you see in your Git repository is exactly what gets deployed to your cluster.
While Helm still has its place for packaging and distributing third-party software (e.g., from Artifact Hub), for deploying and managing your own applications, Kustomize is the superior choice. It reduces complexity, eliminates an entire class of debugging headaches, and just works.
Give Kustomize a try. Your future self will thank you.
Contact Links
If you found this series helpful, please consider giving the repository a star on GitHub or sharing the post on your favorite social networks 😍. Your support would mean a lot to me!

If you want more helpful content like this, feel free to follow me.