SHARON SHAJI

Stop Memorizing Kubernetes: Pods, Deployments, and Services Explained

Kubernetes feels complicated mostly because its core concepts are poorly explained.
This post explains Pods, Deployments, and Services (with a brief look at Ingress), focusing on production-relevant details and real YAML examples.


1️⃣ Kubernetes Pod

What is a Pod?

A Pod is the smallest deployable unit in Kubernetes.

  • A pod can run one or more containers
  • Containers inside a pod:
    • Share the same IP address
    • Communicate using localhost
    • Share storage volumes
  • Kubernetes schedules pods, not containers

Pods are ephemeral by design.

Pod Example

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80

What this does

  • Runs a single NGINX container
  • Exposes port 80 inside the pod
  • Assigns a temporary IP address
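
Because all containers in a Pod share one network namespace, a second container can reach NGINX over localhost. Here is a minimal two-container sketch; the busybox sidecar and its command are illustrative additions, not part of the example above:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-sidecar
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox:1.36
      # Reaches NGINX on localhost because both containers share the Pod's network namespace
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 10; done"]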

⚠️ If this Pod is deleted or its node fails, Kubernetes will not recreate it, because no controller is watching it.
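
You can verify this yourself. A quick sketch, assuming the manifest above is saved as nginx-pod.yaml (the filename is just an example):

kubectl apply -f nginx-pod.yaml   # create the standalone pod
kubectl get pods                  # nginx-pod is Running
kubectl delete pod nginx-pod      # remove it
kubectl get pods                  # it stays gone; nothing recreates it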

That is why pods are not used directly in production.


2️⃣ Kubernetes Deployment

What is a Deployment?

A Deployment is a controller that creates, manages, and maintains Pods.

It exists to solve the problems Pods have.

A Deployment provides:

  • Automatic pod recreation (self-healing)
  • Horizontal scaling using replicas
  • Rolling updates with zero downtime
  • Rollback to a previous version if something breaks

In real systems, you deploy Deployments, not Pods.

Deployment Example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80

What this does

  • Runs 3 identical Pods
  • Automatically replaces failed Pods
  • Ensures the desired state is always maintained

If one Pod crashes, Kubernetes creates a new Pod automatically.
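
A quick way to see the self-healing in action, assuming the manifest above is saved as nginx-deployment.yaml (the filename is just an example):

kubectl apply -f nginx-deployment.yaml      # create the Deployment
kubectl get pods -l app=nginx               # three nginx Pods are Running
kubectl delete pod <one-of-the-pod-names>   # simulate a failure
kubectl get pods -l app=nginx               # a replacement Pod is already being created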

Scaling a Deployment

kubectl scale deployment nginx-deployment --replicas=5

Kubernetes adds or removes Pods without downtime.
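
The rolling update and rollback behaviors listed earlier are driven the same way. A short sketch; the 1.26 tag is just an example:

kubectl set image deployment/nginx-deployment nginx=nginx:1.26   # start a rolling update
kubectl rollout status deployment/nginx-deployment               # watch it complete
kubectl rollout undo deployment/nginx-deployment                 # roll back if something breaks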


3️⃣ Kubernetes Service

Why Services Exist

Pods have fundamental limitations:

  • Pod IP addresses are not stable
  • Pods can be destroyed and recreated at any time

Because of this, you should never access Pods directly.

A Service provides:

  • A stable virtual IP address
  • Built-in load balancing
  • DNS-based service discovery

How Services Work

  • A Service selects Pods using labels
  • Traffic sent to the Service is distributed across matching Pods
  • Clients never need to know Pod IP addresses

Common Service Types

Type          Purpose
ClusterIP     Internal cluster communication (default)
NodePort      External access via node IP and port
LoadBalancer  Cloud-managed external access

ClusterIP Service Example

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP

What this does

  • Creates a stable internal endpoint (nginx-service)
  • Load balances traffic across all matching Pods
  • Decouples clients from Pod lifecycles
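
To see the discovery in action, any Pod in the same namespace can call the Service by name (in the default namespace the full DNS name would be nginx-service.default.svc.cluster.local). A quick sketch using a throwaway pod:

kubectl run tmp --rm -it --restart=Never --image=busybox:1.36 -- wget -qO- http://nginx-service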

NodePort Service Example

apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080

How to access

http://<node-ip>:30080

NodePort is useful for testing and demos, but not ideal for production.
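
On a local cluster such as minikube, the node IP is the minikube VM's address; assuming minikube, this prints a ready-to-use URL:

minikube service nginx-nodeport --url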


LoadBalancer Service Type

A LoadBalancer Service exposes an application externally using a cloud provider’s load balancer.

  • Automatically assigns a public IP
  • Distributes traffic across Pods
  • Works only on cloud Kubernetes clusters (AWS, Azure, GCP)

LoadBalancer Example

apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80

What this does

  • Creates a cloud-managed external load balancer
  • Assigns a public IP address
  • Load balances traffic across Pods

Get the external IP:

kubectl get svc nginx-loadbalancer
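Right after creation the EXTERNAL-IP column shows <pending> until the cloud provider finishes provisioning the load balancer; you can watch it change:

kubectl get svc nginx-loadbalancer --watch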

LoadBalancer is simple for direct exposure, but Ingress is preferred for large-scale HTTP applications.
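
For completeness, here is a minimal Ingress sketch that routes HTTP traffic to the nginx-service defined earlier. It assumes an ingress controller (such as ingress-nginx) is installed in the cluster, and example.com is a placeholder host:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx        # matches the installed controller's IngressClass
  rules:
    - host: example.com          # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80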


Final Summary

  • Pod → Smallest runtime unit, temporary by nature
  • Deployment → Manages Pods, ensures availability and scaling
  • Service → Stable networking and load balancing
  • Ingress → HTTP/HTTPS routing with a single entry point
  • LoadBalancer → Cloud-based external exposure

Each component solves a specific problem.

Using the right one at the right place is what makes Kubernetes manageable.


Final Thought

Kubernetes is not complicated — unclear explanations make it look that way.

Once Pods, Deployments, Services, and Ingress are understood as building blocks, the rest of Kubernetes becomes predictable.
