nk sk
Understanding NodePort, Port, and TargetPort in Kubernetes Services

When you deploy applications in Kubernetes, you often need to expose them so they can be accessed by other pods within the cluster or by users outside the cluster. Kubernetes Services provide this abstraction layer, and three key fields define how traffic flows to your application:

  • port
  • targetPort
  • nodePort

Let’s break these down with examples.


1. targetPort – The Container’s Listening Port

The targetPort is the port inside the container (pod) where your application is actually running.

👉 Example: If your application (say, a Java Spring Boot app) listens on port 8080, then:

targetPort: 8080

This ensures that incoming traffic from the Service is forwarded to the right port on the pod.
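On the pod side, this is the port your container actually listens on. A minimal Deployment sketch to pair with it (the name my-app and the image are hypothetical placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest     # hypothetical image
          ports:
            - containerPort: 8080  # the port the app listens on; matches the Service's targetPort
```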


2. port – The Service’s ClusterIP Port

The port is the port exposed by the Service itself (ClusterIP) inside the Kubernetes cluster.
Other pods in the cluster use this port to reach your Service.

👉 Example:

port: 80
targetPort: 8080

Here:

  • Other pods in the cluster call http://<service-name>:80
  • Kubernetes forwards this traffic to port 8080 inside the container.

So, port is like the entry door of the Service within the cluster.
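Putting the two fields together, a minimal ClusterIP Service (the default type, assuming pods labeled app: my-app) could be sketched as:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app        # routes to pods carrying this label
  ports:
    - port: 80         # Service port inside the cluster
      targetPort: 8080 # port the app listens on in the pod
```

Because type is omitted, it defaults to ClusterIP, so this Service is reachable only from inside the cluster at http://my-app-service:80.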


3. nodePort – The External Access Port on Worker Nodes

The nodePort is used when you want to expose your Service outside the Kubernetes cluster.
It opens a specific port on each worker node’s IP address, allowing external clients to connect.

👉 Example:

nodePort: 30080

If your node’s external IP is 192.168.1.100, you can now access your application at:

http://192.168.1.100:30080

⚠️ NodePort values must fall within the cluster's configured range, 30000–32767 by default. If you omit nodePort, Kubernetes automatically assigns a free port from that range.


Example: NodePort Service Manifest

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
  labels:
    app: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - name: http
      port: 80        # Service port inside cluster
      targetPort: 8080 # Container port (where app listens)
      nodePort: 30080 # External access port on each node

Flow:

  1. A client sends a request to http://<node-ip>:30080
  2. The node forwards the traffic to the Service on port 80
  3. The Service routes the traffic to port 8080 on a matching pod, where the app is listening

Visualizing the Flow

[ Client Browser ] 
        |
        v
 http://<node-ip>:30080   <-- NodePort
        |
        v
   Service:80             <-- ClusterIP Port
        |
        v
   Pod:8080               <-- targetPort (container app)

Choosing Between ClusterIP, NodePort, LoadBalancer, and Ingress

  • ClusterIP (default):
    Internal-only access. Use when services only talk inside the cluster.

  • NodePort:
    Simple external access (via node IP + port).
    Good for development/testing, but not ideal for production.

  • LoadBalancer:
    Automatically provisions a cloud load balancer (AWS ELB, GCP LB, Azure LB).
    Recommended for production when on cloud providers.

  • Ingress:
    Best option for managing multiple services with a single entry point (via domain name and path-based routing).
    Usually fronted by Nginx Ingress Controller, Kong, or Traefik.
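As a sketch, an Ingress that routes a hypothetical host app.example.com to the my-app-service Service from the earlier manifest (assuming an Nginx Ingress Controller is installed in the cluster) could look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  ingressClassName: nginx     # assumes the Nginx Ingress Controller
  rules:
    - host: app.example.com   # hypothetical domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80  # the Service's port, not its targetPort
```

Note that the Ingress targets the Service's port (80); the Service then forwards to the pod's targetPort as usual.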


Best Practices

✅ Use ClusterIP for pod-to-pod communication
✅ Use Ingress or LoadBalancer for production-grade external access
✅ Reserve NodePort for development, debugging, or when no external load balancer is available
✅ Keep port and targetPort clear and consistent to avoid confusion
✅ Always secure external endpoints (TLS/HTTPS, authentication)


Final Thoughts

Understanding how port, targetPort, and nodePort work together is key to properly exposing your applications in Kubernetes.

  • targetPort = app inside pod
  • port = service entry point inside cluster
  • nodePort = external entry point on nodes

For production environments, consider Ingress + TLS for scalable, secure, and manageable traffic routing.

