Kubernetes Deployment

A Kubernetes Deployment is a higher-level abstraction used to manage and scale containerized applications while ensuring they remain in the desired operational state. It provides a declarative way to specify how many Pods should run, which container images they should use, and how updates or rollbacks should occur — all without downtime.

Key Capabilities of a Deployment
With a Deployment, you can:

Scale applications dynamically based on workload.

Maintain availability by ensuring the specified number of Pods are always healthy and running.

Perform rolling updates to deploy new versions seamlessly.

Roll back easily if a deployment introduces issues.

Automate self-healing, ensuring that failed Pods are recreated automatically.

Think of a Deployment as both a blueprint and a controller for Pods — it simplifies and automates most aspects of application lifecycle management in Kubernetes.

Common Use Cases
Kubernetes Deployments are widely used for managing application lifecycles. Common scenarios include:

Rolling out new applications: Create a Deployment that launches a ReplicaSet, which in turn provisions Pods. You can monitor rollout progress using deployment status commands.

Seamless application updates: Modify the PodTemplateSpec to trigger a new ReplicaSet. The Deployment automatically scales up the new version while gradually scaling down the old one — ensuring zero downtime.

Rollback to previous versions: If an update introduces instability, roll back to an earlier revision easily.

Dynamic scaling: Adjust the replica count manually or automatically using autoscalers to handle traffic fluctuations.

Pausing and resuming rollouts: Pause a rollout to batch multiple updates together and resume when ready.

Monitoring rollout progress: Check rollout status to confirm whether updates are progressing smoothly or stuck.

Resource cleanup: Automatically remove obsolete ReplicaSets to maintain cluster efficiency.
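
Cleanup of superseded ReplicaSets is governed by the Deployment's revisionHistoryLimit field (it defaults to 10). As a minimal sketch, you could set it explicitly in the spec:

spec:
  revisionHistoryLimit: 5   # retain only the 5 most recent old ReplicaSets for rollback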

Core Components of a Deployment
A Kubernetes Deployment consists of three key parts:

Metadata: Includes the name and labels. Labels establish relationships between Deployments, ReplicaSets, and Services.

Specification (spec): Defines:

Number of replicas (Pods)

Selector labels

Pod template (template), which includes container specifications such as:

Container name

Image to use

Ports to expose

Resource limits (CPU, memory)

Status: Automatically maintained by Kubernetes. It reflects the current state of the Deployment and enables self-healing. If the actual and desired states differ, Kubernetes reconciles them automatically.

Example: Nginx Deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
Commands:

Create the Deployment

kubectl apply -f nginx.yaml

Check status

kubectl get all
This will successfully create and deploy an Nginx application on your cluster.
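
To double-check that the Pods belong to this Deployment, you can also query by name and by the app: nginx label from the manifest (a quick verification sketch):

kubectl get deployment nginx
kubectl get pods -l app=nginx
kubectl rollout status deployment/nginx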

Updating a Deployment
You can update Deployments in two main ways:

Method 1: Edit Using kubectl
kubectl edit deployment <deployment-name>
This opens the Deployment's configuration in your default editor. Make the changes (in vi, press i to insert), then save and exit (Esc, then :wq).

Method 2: Update the YAML File
Edit the YAML file directly (e.g., change container port from 80 to 8000), and reapply:

kubectl apply -f nginx.yaml
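
For image-only changes, an alternative worth noting (not one of the two methods above, so treat it as an optional aside) is the imperative kubectl set image command; the 1.25 tag here is purely illustrative:

kubectl set image deployment/nginx nginx=nginx:1.25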
Rolling Back a Deployment
If an update causes problems, you can easily revert to a previous version.

Steps:
List all revisions: kubectl rollout history deployment/nginx-deployment

Roll back to a previous revision: kubectl rollout undo deployment/nginx-deployment --to-revision=1

Validate: Always test your rollback strategy to ensure minimal downtime during real incidents.

Viewing Rollout History
kubectl rollout history deployment/<deployment-name>
To view details for a specific revision:

kubectl rollout history deployment/web-app-deployment --revision=3
This helps track configuration changes and revert if needed.
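
To make each revision easier to identify, you can record a human-readable reason via the kubernetes.io/change-cause annotation, which then appears in the CHANGE-CAUSE column of the history output (the message below is just an example):

kubectl annotate deployment/web-app-deployment kubernetes.io/change-cause="update image to v3"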

Scaling a Deployment
You can scale Deployments manually or automatically:

Manual Scaling:
kubectl scale deployment/tomcat-deployment --replicas=5
Autoscaling:
kubectl autoscale deployment/tomcat-deployment --min=5 --max=8 --cpu-percent=75
Here:

--min=5: Minimum Pods always running

--max=8: Maximum Pods during high load

--cpu-percent=75: Scaling threshold based on CPU usage
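
The kubectl autoscale command creates a HorizontalPodAutoscaler object behind the scenes. If you prefer a fully declarative setup, a roughly equivalent manifest (a sketch using the autoscaling/v2 API, targeting the tomcat-deployment from the command above) would look like this:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: tomcat-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tomcat-deployment
  minReplicas: 5
  maxReplicas: 8
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75   # scale out when average CPU usage exceeds 75%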

Pausing and Resuming a Rollout
Pause a rollout:
kubectl rollout pause deployment/webapp-deployment
Resume the rollout:
kubectl rollout resume deployment/webapp-deployment
You can also update the container image during a paused rollout:

kubectl set image deployment/webapp-deployment webapp=webapp:2.1
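
Put together, a typical pause-and-batch workflow looks like the sketch below; the resource values are illustrative assumptions, while the deployment, container, and image names come from the commands above:

kubectl rollout pause deployment/webapp-deployment
kubectl set image deployment/webapp-deployment webapp=webapp:2.1                              # queue an image change
kubectl set resources deployment/webapp-deployment -c=webapp --limits=cpu=500m,memory=256Mi   # queue a resource change
kubectl rollout resume deployment/webapp-deployment                                           # both changes roll out as a single revision
kubectl rollout status deployment/webapp-deployment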
Deployment Status Phases
Kubernetes reports various statuses for a Deployment:

Pending: Deployment is initializing or waiting for resources.

Progressing: Deployment is rolling out changes or creating ReplicaSets.

Succeeded: Deployment completed successfully.

Failed: Deployment failed due to configuration or environment errors.

Unknown: API server cannot determine the status, or the connection was lost.
Check rollout progress:

kubectl rollout status deployment/<deployment-name>
Common Deployment Failures and Causes
Failed probes: Readiness or liveness probe misconfigured.

Image pull errors: Incorrect image name or tag.

Insufficient resources: Resource quota limits exceeded.

Dependency issues: Service dependencies (such as databases) unavailable.
For detailed debugging:

kubectl describe deployment <deployment-name>
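
It often helps to drill down from the Deployment to the Pods it manages; the commands below are a generic debugging sketch (the app=nginx label and <pod-name> are placeholders for your own values):

kubectl get pods -l app=nginx
kubectl describe pod <pod-name>
kubectl logs <pod-name>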
Canary Deployments
A Canary Deployment gradually introduces new versions of an application to a subset of users or Pods, allowing testing under real workloads before a full rollout.

Example Approach:
Deploy a new version to 50% of Pods while the rest continue serving the old version.

Based on feedback or monitoring results, either:

Roll out to all Pods, or

Roll back to the stable version.

Implementation Methods:

Traffic Splitting using Istio or other service mesh tools.

Blue-Green Deployment – maintain two environments (old and new) and switch traffic when ready.
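
A minimal way to approximate the 50/50 split described above, without a service mesh, is to run two Deployments whose Pods share a common label selected by a single Service; traffic is then spread roughly in proportion to the replica counts. This is a sketch, and the names, labels, and image tags are illustrative assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-stable
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
      track: stable
  template:
    metadata:
      labels:
        app: webapp        # shared label, matched by the Service
        track: stable
    spec:
      containers:
      - name: webapp
        image: webapp:2.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-canary
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
      track: canary
  template:
    metadata:
      labels:
        app: webapp        # shared label, matched by the Service
        track: canary
    spec:
      containers:
      - name: webapp
        image: webapp:2.1
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp            # selects Pods from both Deployments
  ports:
  - port: 80
    targetPort: 80

With equal replica counts the split is roughly 50/50; changing the ratio (for example, 9 stable replicas to 1 canary replica) changes the share of traffic the canary receives.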

ReplicaSet vs Deployment
ReplicaSet: ensures the specified number of Pods are running; handles Pods directly; does not support rolling updates or rollbacks; suitable for simple, static workloads.

Deployment: manages ReplicaSets, which in turn manage Pods, and automates the Pod lifecycle; supports both rolling updates and rollbacks; suitable for dynamic, frequently updated applications.
Summary
A Kubernetes Deployment simplifies managing applications by handling updates, scaling, rollbacks, and self-healing automatically. It abstracts away the complexity of direct Pod or ReplicaSet management, enabling you to define your application’s desired state and letting Kubernetes maintain it efficiently.
