Introduction

Today was a deep dive into Kubernetes networking, specifically focusing on the Service concept. As I continue exploring K8s, I realized that relying on individual Pods is impractical because their IPs change constantly. Today, I solved that problem by learning how Services provide stable networking for applications.
Here is a breakdown of what I learned and the practical demos I performed.
- Why do we need Services?

In Kubernetes, Pods are ephemeral. If a Pod dies and a new one is created, it gets a completely new IP address. This makes it hard for other parts of the application to communicate with it.
I learned that a Service creates a stable IP address and DNS name for a set of Pods. It acts as an abstraction layer, ensuring that even if the backend Pods change, the frontend can still communicate without issues.
- Service Discovery: Labels and Selectors

One of the most interesting concepts I grasped today was how a Service knows which Pods to send traffic to. It doesn't use IP addresses; it uses Labels and Selectors.
Labels: Key-value pairs attached to Pods (e.g., app: my-web-app).
Selectors: The Service uses a selector to find all Pods with that specific label.
I experimented with this by creating Pods with specific labels and observing how the Service automatically "discovered" them and started routing traffic.
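To make this concrete, here's a minimal sketch of the kind of manifests I used (the nginx image and the web-pod / my-web-service names are just illustrative placeholders, not the exact ones from my demo):

```yaml
# A Pod carrying the label the Service will look for
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: my-web-app        # the label the Service's selector matches on
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
---
# A basic (ClusterIP) Service that selects the Pod above purely by label
apiVersion: v1
kind: Service
metadata:
  name: my-web-service
spec:
  selector:
    app: my-web-app        # any Pod with this label becomes a backend
  ports:
    - port: 80             # port the Service listens on
      targetPort: 80       # port on the Pod that traffic is forwarded to
```

Once both are applied, other Pods in the cluster can reach the app through the stable DNS name my-web-service (or the fully qualified my-web-service.default.svc.cluster.local), no matter how often the backend Pods are replaced.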
- Hands-on: NodePort Service

I performed a practical demo using the NodePort service type.
By default, a Service is only accessible inside the cluster (ClusterIP). To access my application from outside the cluster (my local machine), I used type: NodePort. This opens a specific port on every Worker Node, allowing external traffic to reach the Service.
The YAML configuration looked something like this:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-web-app
  ports:
      # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
```
After applying this, I was able to access my application using the Node's IP and the defined port (e.g., http://&lt;node-ip&gt;:30007).
- Public Access with LoadBalancer

Finally, I looked at how to expose an application to the wider internet (public access) using the LoadBalancer type.
While NodePort is good for testing, it's not ideal for production. If you are running on a cloud provider, the LoadBalancer type automatically provisions the provider's native load balancer (like an AWS ELB or an Azure Load Balancer). This gives you a single, stable external IP address that distributes traffic across your nodes.
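As a rough sketch, and assuming a supported cloud provider, the manifest barely changes from the NodePort version; only the type field is different (the names are the same placeholders as above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer       # asks the cloud provider to provision an external load balancer
  selector:
    app: my-web-app
  ports:
    - port: 80             # port exposed by the load balancer
      targetPort: 80       # port on the backend Pods
```

Once the cloud controller finishes provisioning, the Service gets a public address that shows up in the EXTERNAL-IP column of kubectl get svc, and traffic to it is spread across the matching Pods.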
Key Takeaways
- Services provide stable networking for unstable Pods.
- Labels and Selectors are the glue that binds Services to Pods.
- NodePort is great for development/testing access.
- LoadBalancer is the standard way to expose services to the public internet in cloud environments.
It feels great to move beyond just running Pods and actually start connecting them!
LinkedIn: https://www.linkedin.com/in/dasari-jayanth-b32ab9367/