Most developers know how to write code.
Fewer understand what actually happens after clicking “Merge Pull Request.”
And almost nobody truly understands what Kubernetes is doing during a production deployment — until something breaks.
This post is a practical walkthrough of how code moves from PR → CI/CD → Helm → Kubernetes → Production, especially in a microservices setup.
If you’ve ever shipped something that worked locally but behaved differently in production, this is for you.
First: Kubernetes Doesn’t “Deploy” Your Code
This mental model changes everything.
Kubernetes doesn’t deploy applications the way we think about deployments.
It maintains desired state.
You declare:
- “I want 3 replicas.”
- “I want image order-service:1.4.2.”
- “I want these resource limits.”
Kubernetes continuously checks:
Does current state match desired state?
If not, it fixes it.
That’s the entire system.
Everything else — rolling updates, restarts, scaling — is just reconciliation.
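That declared desired state lives in a Deployment manifest. A minimal sketch (names and numbers here are illustrative, not from any real chart):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3                        # desired state: "I want 3 replicas"
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: order-service:1.4.2   # desired state: "I want this image"
          resources:
            limits:                    # desired state: "I want these limits"
              memory: 256Mi
              cpu: 250m
```

The Deployment controller compares what is running against this spec and reconciles any difference, whether the gap came from a crash, a node failure, or you editing the manifest.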
What Happens After You Merge a PR
Let’s break this down in a real production workflow.
Step 1 — PR is merged
You merge code into main or develop.
No one logs into the cluster.
No one runs kubectl.
Step 2 — CI/CD pipeline runs
The pipeline:
- Runs tests (fail fast)
- Builds a Docker image
- Tags it (usually with commit SHA)
- Pushes it to a container registry
Example image tag:
order-service:1.4.2-a8f3d21
One-line rule: Never use latest in production.
You want immutable, traceable builds.
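Those four pipeline steps can be sketched as a CI job. This is a hypothetical GitHub Actions workflow; the registry hostname, the `make test` target, and the version prefix are assumptions, not a prescribed setup:

```yaml
# Hypothetical CI workflow: test → build → tag with commit SHA → push.
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests (fail fast)
        run: make test
      - name: Build image tagged with the short commit SHA
        run: docker build -t registry.example.com/order-service:1.4.2-${GITHUB_SHA::7} .
      - name: Push to the container registry
        run: docker push registry.example.com/order-service:1.4.2-${GITHUB_SHA::7}
```

Because the tag embeds the commit SHA, every running pod can be traced back to the exact code that produced it.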
Step 3 — Deployment is triggered
Two common approaches:
Option A – CI runs Helm directly
helm upgrade order-service ./charts/order-service --set image.tag=1.4.2-a8f3d21
This updates the Kubernetes Deployment spec.
Option B – GitOps (ArgoCD / Flux)
CI updates a config repo → GitOps controller detects change → syncs cluster.
Either way, Kubernetes receives an updated Deployment object.
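In the GitOps flow, the “deploy” is often a one-line commit in the config repo. A sketch of what CI might change, assuming a hypothetical per-service values file (the layout is an assumption, not an ArgoCD requirement):

```yaml
# values/order-service.yaml in the config repo (hypothetical layout).
# CI commits the new tag; ArgoCD/Flux detects the diff and syncs the cluster.
image:
  repository: registry.example.com/order-service
  tag: 1.4.2-a8f3d21   # CI updates only this line on each merge
```

The cluster never pulls from CI directly; the config repo is the single source of truth, which also gives you an audit trail of every deployment.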
What Kubernetes Actually Does Next
This is where things get interesting.
If the image tag changed inside .spec.template, Kubernetes:
- Creates a new ReplicaSet
- Starts new pods with the new image
- Waits for readiness probe to succeed
- Gradually terminates old pods
This is a rolling update.
No downtime if configured correctly.
No Helm magic.
Just the Deployment controller doing its job.
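“Configured correctly” mostly means a sane rollout strategy plus a readiness probe. A sketch of the relevant Deployment fields (the probe path, port, and timings are illustrative):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod during the rollout
      maxUnavailable: 0    # never drop below the desired replica count
  template:
    spec:
      containers:
        - name: order-service
          image: order-service:1.4.2-a8f3d21
          readinessProbe:            # old pods keep serving until this succeeds
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
```

With maxUnavailable: 0, Kubernetes will not terminate an old pod until a new one is ready, which is what makes the rollout zero-downtime.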
Important: What Actually Triggers a Rollout
Only changes inside .spec.template trigger new ReplicaSets.
Examples that trigger rollout:
- Image change
- Environment variable change
- Resource limit change
- Container command change
Examples that do NOT:
- Changing replica count (only scales)
- Updating a Service
- Editing comments in the manifest (they never reach the API server)
Understanding this prevents confusion in production.
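Annotated against the manifest, the boundary looks like this (values are illustrative):

```yaml
spec:
  replicas: 5               # outside .spec.template → scales only, no new ReplicaSet
  template:                 # everything below this line triggers a rollout when changed
    spec:
      containers:
        - name: order-service
          image: order-service:1.4.3-b2c4e90    # image change → rollout
          command: ["./server", "--port=8080"]  # command change → rollout
          env:
            - name: LOG_LEVEL                   # env var change → rollout
              value: debug
          resources:
            limits:
              memory: 512Mi                     # resource limit change → rollout
```

The controller hashes the pod template; a new hash means a new ReplicaSet, and anything outside the template is reconciled in place.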
Small System vs Microservices: What Changes?
Small System (Single Service)
- One Deployment
- One Service
- One Helm chart
- One pipeline
Simple mental model.
Easier debugging.
Larger blast radius.
Microservices (Real World)
- auth-service
- order-service
- payment-service
- gateway
Each:
- Has its own Deployment
- Has its own Helm release
- Scales independently
- Deploys independently
When order-service updates, only that Deployment rolls.
Other services remain untouched.
This isolation is the real advantage of microservices.
But it comes with operational complexity.
Real Production Mistakes I’ve Seen
1. Memory limit reduced “just a little”
Pods started getting OOM-killed under load during the rollout.
Traffic dropped.
Rollback saved the night.
Lesson: Resource changes are real deployments.
2. ConfigMap updated but pods didn’t restart
ConfigMap changes do not restart pods automatically.
You must trigger rollout manually or use checksum annotations.
Lesson: Not everything behaves how you expect.
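The checksum-annotation trick is a standard Helm chart idiom: hash the rendered ConfigMap into a pod-template annotation so that any config change alters .spec.template and forces a rollout. A sketch, assuming the ConfigMap lives at templates/configmap.yaml in your chart:

```yaml
# In the chart's Deployment template:
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```

The manual alternative is kubectl rollout restart deployment/order-service, which bumps a restart annotation on the pod template for the same effect.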
3. Using latest tag
Rollback becomes unpredictable.
You lose traceability.
Lesson: Immutable image tags are non-negotiable.
Security Awareness in Deployment
Things that matter more than people admit:
- No cluster-admin access for developers
- Proper RBAC
- Secrets stored securely (not in plain values.yaml)
- Resource limits on every container
- Liveness and readiness probes configured
A deployment is also a security event.
Treat it that way.
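A minimal RBAC sketch of “no cluster-admin for developers”: a namespaced Role that allows rollouts and read-only debugging, but grants nothing on Secrets (the role name and namespace are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: orders
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "patch"]   # enough to roll out, not to delete
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list"]                     # read-only debugging
  # No rule for secrets → access is denied by default.
```

Bind it to a developer group with a RoleBinding and the blast radius of a leaked credential shrinks from the whole cluster to one namespace.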
The Real Engineering Mindset
Don’t think:
“How do I deploy?”
Think:
“What changed in .spec.template?”
If you understand that, you understand Kubernetes deployments.
Helm just renders YAML.
CI just builds images.
Kubernetes enforces desired state.
That’s the real system.
Final Thought
Kubernetes deployments feel complex until you understand the reconciliation loop.
Once you do, they become predictable.
And predictability is what keeps production stable.
If you want a deeper breakdown with diagrams, failure scenarios, and production war stories, I’ve written the full long-form version here:
👉 https://nileshblog.tech/from-code-to-kubernetes-production-deployment-guide/
