In the last two parts, we deployed our first application using a declarative YAML manifest. We have a Deployment
managing a Pod
that runs our Nginx container. It's resilient and its state is defined in code.
But there's a big problem: it's a web server that no one can connect to.
By default, Kubernetes Pods are isolated from the outside world. They live on an internal cluster network, and we have no way to route traffic to them. To fix this, we need to introduce another fundamental Kubernetes object: the Service.
## What is a Service?
Recall our "Kubernetes Kingdom" analogy. If a Pod is a house, a Service is the permanent street sign and address for that house (or group of houses).
A Service does two critical things:
- It provides a stable endpoint. Pods can be created and destroyed, getting new IP addresses each time. A Service gets a single, stable IP address that does not change.
- It acts as a load balancer. A Service can watch a group of Pods (e.g., all the Pods managed by our Deployment). When traffic hits the Service, it automatically distributes it to one of the healthy Pods in that group.
## The Three Flavors of Service
Not all services are for public access. Kubernetes gives us a few different types of Services for different use cases. The three most common are:
- `ClusterIP` (default): This is the most basic type. It gives the Service an internal IP address that is only reachable from within the Kubernetes cluster. This is perfect for internal communication, like a backend API service that only needs to be accessed by a frontend service, not the public internet.
- `NodePort`: This is the simplest way to get external traffic to your service. It exposes the Service on a static port on each Node's IP address. You can then contact the service from the outside by requesting `<NodeIP>:<NodePort>`. This is excellent for development and learning purposes.
- `LoadBalancer`: This is the standard way to expose a service to the internet on cloud providers. When you create a Service of type `LoadBalancer`, Kubernetes works with your cloud provider (like AWS, GCP, or Azure) to provision an external load balancer. This cloud load balancer gets a public IP address and forwards traffic to your Service's `NodePort`. This is the most production-friendly option.

For our local cluster, `NodePort` is the perfect choice. It's supported by all local cluster tools (Minikube, kind, Docker Desktop) without extra configuration.
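For comparison, a `ClusterIP` version of the same Service would look nearly identical. Since `ClusterIP` is the default type, the `type` field can even be omitted entirely. Here's a minimal sketch (the `internal-nginx-service` name is just illustrative, not from our project):

```yaml
# Illustrative sketch: a ClusterIP Service, reachable only from
# inside the cluster. The name here is made up for this example.
apiVersion: v1
kind: Service
metadata:
  name: internal-nginx-service
spec:
  type: ClusterIP   # the default; this line could be omitted
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```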
## Creating a Service Manifest
Let's create a manifest for our `NodePort` Service. Create a new file named `service.yaml`:
```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```
Let's break down the `spec` for this Service:

- `type: NodePort`: We are explicitly telling Kubernetes we want to expose this Service on a port on our cluster's nodes.
- `selector`: This is the most critical field. It tells the Service which Pods to send traffic to. It will look for any Pod with the label `app: nginx`. If you look back at our `deployment.yaml` from Part 5, you'll see that our Deployment creates Pods with exactly that label. This is how the Service and the Deployment are linked.
- `ports`: This section defines the port mapping.
  - `port: 80`: This is the port that the Service itself will be available on inside the cluster.
  - `targetPort: 80`: This is the port that our application container is listening on inside the Pod. The Nginx image serves traffic on port 80 by default.

So, this Service says: "Accept incoming TCP traffic on my own internal port 80, and forward it to port 80 on any Pod that has the label `app: nginx`."
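Because `port` and `targetPort` are both 80 in our manifest, the distinction between them can blur. They only need to match the reality of your container. As a hypothetical sketch, if our container listened on 8080 instead, only `targetPort` would change:

```yaml
# Hypothetical sketch: a container listening on 8080, still exposed
# to the rest of the cluster on the Service's port 80.
ports:
  - protocol: TCP
    port: 80          # where the Service listens inside the cluster
    targetPort: 8080  # where the container actually listens
```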
## Applying the Manifest and Seeing the Result
Let's apply our new manifest to the cluster:
```shell
kubectl apply -f service.yaml
```
You'll see `service/hello-nginx-service created`. Now, let's see what was created by running the `get services` command:
```shell
kubectl get services
```
You should see something like this:
```
NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello-nginx-service   NodePort    10.108.140.94   <none>        80:31557/TCP   10s
kubernetes            ClusterIP   10.96.0.1       <none>        443/TCP        2d
```
Look at the `hello-nginx-service` line.

- `TYPE` is `NodePort`.
- `CLUSTER-IP` is the internal IP address.
- `PORT(S)` shows the mapping: `80:31557/TCP`. This means our Service's internal port 80 is mapped to port `31557` on the node. Kubernetes picked a random high-numbered port for us. Your port number will be different.
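If a randomly assigned port is inconvenient, a Service can request a specific one via the `nodePort` field, which must fall within the cluster's NodePort range (30000-32767 by default). A sketch, assuming port 30080 is free on your cluster:

```yaml
# Sketch: pinning the node port instead of letting Kubernetes choose.
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080  # must be in the NodePort range (default 30000-32767)
```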
## Accessing Our Application
Now for the moment of truth! How you access this `NodePort` depends on which local cluster tool you are using.
**For Minikube:**
Minikube has a handy shortcut. Just run this command:
```shell
minikube service hello-nginx-service
```
This will automatically open a browser window pointed to the correct IP and port.
**For Docker Desktop and kind:**

These tools map the cluster's node ports to your `localhost`. You just need to find the `NodePort` number from the `kubectl get services` output (in our example, it's `31557`).

Open your browser and navigate to `http://localhost:<YOUR_NODE_PORT>`. For example: `http://localhost:31557`.
**The Result:**
In all cases, you should now see the "Welcome to nginx!" page in your browser.
## What's Next
Success! We have now completed the full basic workflow: we defined an application in a `Deployment` manifest, and we exposed it to the outside world using a `Service` manifest.
Our application works, but it's very simple. Real-world applications need configuration. They need database URLs, API keys, and other parameters. Hardcoding these into a container image is a terrible practice.
In the next part, we will address this. We'll learn how to separate configuration from our application code using two new Kubernetes objects: ConfigMaps for non-sensitive data and Secrets for sensitive information.