Dechen Tshering
🚀 Understanding Kubernetes Services: ClusterIP, NodePort, LoadBalancer + Manual Scheduling

Hey everyone! 👋
As part of my DevOps learning in public, I’ve been diving deep into Kubernetes and exploring how Services work—particularly how to expose applications inside and outside the cluster using different service types.

In this blog, I’ll walk you through my experiments using ClusterIP, NodePort, and LoadBalancer, along with a quick intro to manual pod scheduling on specific nodes.

Let’s go! 🚀

⚙️ Step 1: Create a ReplicaSet

First, I created a simple ReplicaSet using the following YAML:

# myapp.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: mycontainer
        image: nginx

Apply it:

kubectl apply -f myapp.yml
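Before moving on, it's worth confirming that the ReplicaSet actually spun up all 5 pods. A quick check with standard kubectl commands (your pod names and ages will differ):

kubectl get rs webapp
kubectl get pods -l app=web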

Demo 1: ClusterIP

ClusterIP is the default service type and is only accessible within the cluster.

# cip.yml
apiVersion: v1
kind: Service
metadata:
  name: mysvc
spec:
  type: ClusterIP
  ports:
  - targetPort: 80     # container port
    port: 5000         # service port
  selector:
    app: web

Or expose it using the CLI:

kubectl expose rs webapp --target-port=80 --port=5000 --name=mysvc
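Either way, a quick sanity check I found useful is to confirm the Service's selector actually matched the pods by listing its endpoints (the addresses shown will be your pod IPs):

kubectl get endpoints mysvc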

Access It:

  1. Get the service IP:
kubectl get svc
  2. Get any node name:
kubectl get nodes -o wide
  3. Start a debug pod on that node:
kubectl debug node/<NODE_NAME> -it --image=nginx
  4. Inside the pod:
apt-get update -y && apt-get install curl -y
curl <SERVICE_IP>:5000
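One more detail worth knowing: inside the cluster the Service also gets a DNS name, so from an ordinary pod you can reach it by name instead of looking up the IP (the node debug pod above uses the host's DNS config, so it may not resolve there). Assuming the Service is in the default namespace:

curl mysvc.default.svc.cluster.local:5000
# or the short form from a pod in the same namespace:
curl mysvc:5000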

Demo 2: NodePort

NodePort allows you to access your service externally on a static port on any worker node.

# nodeport.yml
apiVersion: v1
kind: Service
metadata:
  name: web-node-port
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30002
  selector:
    app: web

Or use:

kubectl expose rs webapp --target-port=80 --port=80 --type=NodePort
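One caveat with the CLI version: since no nodePort is specified, Kubernetes assigns a random port from the 30000-32767 range, and the Service name defaults to the ReplicaSet name. Check which port you got:

kubectl get svc webapp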

Access It:

  1. Get the node IP:
kubectl get nodes -o wide
  2. Curl from a debug pod:
curl <NODE_IP>:30002

Demo 3: LoadBalancer

LoadBalancer is commonly used in cloud environments to expose services publicly using an external IP.

# lb.yml
apiVersion: v1
kind: Service
metadata:
  name: mysvc
spec:
  type: LoadBalancer
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: web

Or via CLI:

kubectl expose rs webapp --target-port=80 --port=80 --type=LoadBalancer

Access It:

kubectl get svc
# Use the external IP shown to access your app in the browser


Note: This works on cloud providers like GCP, AWS, or Azure. On Minikube or bare metal, you may need MetalLB or another load balancer.
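If you're following along on Minikube, these two standard Minikube commands are the easiest workaround:

# open the service via a local URL
minikube service mysvc --url
# or keep this running in a separate terminal so the LoadBalancer gets an external IP
minikube tunnel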

Bonus: Manual Scheduling

I also tried manually scheduling a pod on a specific node by setting nodeName, which bypasses the scheduler entirely and places the pod directly on that node. This can be helpful when testing specific node configurations or behavior.

# pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  nodeName: <NODE_NAME>

Just replace <NODE_NAME> with a real node name from:

kubectl get nodes

Apply it:

kubectl apply -f pod.yml
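To verify the pod actually landed on the node you picked, check the NODE column:

kubectl get pod nginx-pod -o wide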

Key Learnings


  • ClusterIP: Great for internal communication between services.
  • NodePort: Exposes apps on every node's IP at a static port.
  • LoadBalancer: Best for public access when running on cloud providers.
  • Manual scheduling gives control, but use it with caution in production.

If you're also learning Kubernetes, let’s connect and grow together! 🌱
Drop a comment if this was helpful or if you’ve got tips of your own 🙌
