Pods are the smallest deployable units in Kubernetes. A pod wraps one or more containers that run together on the cluster, and pods can be created and destroyed at any time. When you deploy an application on Kubernetes, you usually create a group of pods to run it.
One important thing to understand about pods is that they are ephemeral: they are not guaranteed to stick around. If a pod is restarted or rescheduled, it is replaced by a new pod with a different IP address. This matters if you are using a pod's IP address to communicate with it.
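You can observe this for yourself. The commands below are a small sketch; my-app-pod is a hypothetical pod name, and it is assumed to be managed by a Deployment so that a replacement is created automatically:
# List pods with their IP addresses
kubectl get pods -o wide
# Delete the pod; its Deployment creates a replacement with a new IP
kubectl delete pod my-app-pod
kubectl get pods -o wide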
For example, say you have a pod running a database, and your application connects to it using the pod's IP address. If the pod is restarted, it is replaced by a new pod with a different IP address, so you would have to update the address in your application every time this happens, which is tedious and error-prone. Clearly we need something more stable, and that is where Services come into play.
What is a Service?
A Service is an abstract way to expose an application running on a set of pods as a network service. When you create a Service, you specify which pods you want to expose (using a label selector) and how you want to expose them.
One important role of a service is to provide a fixed IP address for the application. This means that you can access the application using a fixed address, even if the underlying pods are changing or moving. For example, if you have a group of pods running a web server, you can create a service that exposes the web server on a fixed IP address. This way, you can access the web server using the fixed IP address, regardless of which pods are running the web server at any given time.
This is useful because it allows you to access the application consistently, regardless of the underlying infrastructure. For example, if you are using the IP address of a pod to connect to a database running in the pod, you will need to update the IP address in your application when the pod is restarted. By using a service, you can avoid this problem because the service provides a stable network endpoint for the application, regardless of the underlying pods.
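For instance, instead of hard-coding a pod IP in your application's configuration, you point it at the Service. As a rough sketch, db-service is a hypothetical Service name, and Kubernetes' built-in DNS resolves it to the Service's fixed cluster IP:
# Brittle: tied to a single pod, breaks when the pod is replaced
DATABASE_HOST=10.244.1.17
# Stable: the Service name resolves to a fixed cluster IP
DATABASE_HOST=db-service.default.svc.cluster.local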
Another role of a service is to act as a load balancer. This means that it distributes incoming traffic among the underlying pods in a way that is appropriate for the application. For example, if you have a group of pods running a web server, you can create a service that exposes the web server on a fixed IP address. The service will then act as a load balancer, distributing incoming requests to the web server among the underlying pods.
This is useful because it allows you to scale your application horizontally by adding more pods. For example, if you have a group of pods running a web server, and the traffic to the web server increases, you can simply add more pods to the group. The service will automatically distribute the traffic among the new pods, and the application will continue to be available without any disruption.
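For example, assuming the pods are managed by a Deployment named my-app (a hypothetical name), scaling is a single command and the Service picks up the new pods automatically:
# Add two more replicas; the Service starts routing traffic to them
# as soon as they are ready, with no change to the Service itself.
kubectl scale deployment my-app --replicas=5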
Types of services
ClusterIP: This is the default type of service in Kubernetes. It exposes the application on a cluster-internal IP address, which is only reachable from within the cluster. This type of service is useful when you want to access the application from within the cluster, but you don't want to expose it to the external network.
Here is an example configuration file for creating a ClusterIP service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
This configuration file creates a ClusterIP service named my-service that exposes the pods with the app: my-app label on port 80. The pods must be running a container that listens on port 80. The service exposes the application on a cluster-internal IP address, which is only reachable from within the cluster.
To create the service in the cluster, you can use the kubectl create -f command, passing the configuration file as an argument:
kubectl create -f service.yaml
This will create the service in the cluster, and you can verify it by running the kubectl get services command. The cluster IP address of the service can be obtained from the CLUSTER-IP column.
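For illustration, the output looks roughly like this (the IP address shown is made up), and you can test the service from a temporary pod inside the cluster:
kubectl get services
# NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
# my-service   ClusterIP   10.96.45.12   <none>        80/TCP    1m
# Run a throwaway pod and call the service by name
kubectl run test --rm -it --image=busybox --restart=Never -- wget -qO- http://my-service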
LoadBalancer: This type of service exposes the application on a public IP address, using an external load balancer provided by the cloud provider. The load balancer distributes the traffic among the underlying pods. This type of service is useful when you want to expose the application to the external network.
Here is an example configuration file for creating a LoadBalancer service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
This configuration file creates a LoadBalancer service named my-service that exposes the pods with the app: my-app label on port 80. The pods must be running a container that listens on port 80. The service exposes the application on a public IP address, using an external load balancer provided by the cloud provider. The load balancer distributes the traffic among the underlying pods.
To create the service in the cluster, you can use the kubectl create -f command, passing the configuration file as an argument:
kubectl create -f service.yaml
This will create the service in the cluster, and you can verify it by running the kubectl get services command. The public IP address of the service can be obtained from the EXTERNAL-IP column.
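As an illustration (the addresses are made up), the output might look like this. Note that the EXTERNAL-IP column can show <pending> for a short while until the cloud provider finishes provisioning the load balancer:
kubectl get services
# NAME         TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
# my-service   LoadBalancer   10.96.45.12   203.0.113.10   80:31234/TCP   2m
# Once the external IP is assigned, the application is reachable from outside the cluster
curl http://203.0.113.10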
Keep in mind that the exact behavior of a LoadBalancer service depends on your cloud provider. Managed platforms such as AWS, GCP, and Azure provision the load balancer automatically when you create the service, and usually let you customize it through annotations on the service; on a cluster without a cloud provider integration, the external IP will stay pending unless you supply a load-balancer implementation yourself. Consult the documentation of your cloud provider for more information.
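As a sketch of what such customization can look like, the snippet below uses an annotation key that AWS's cloud controller has used to request a Network Load Balancer; the exact keys are provider-specific, so check your provider's documentation rather than copying this verbatim:
metadata:
  name: my-service
  annotations:
    # Provider-specific; on AWS this requests a Network Load Balancer
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"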
NodePort: This type of service exposes the application on a fixed port (from the node port range, 30000-32767 by default) of each node in the cluster. This allows you to access the application from the external network using the IP address of any node in the cluster; traffic arriving at the node port is forwarded by kube-proxy to the underlying pods. This type of service is useful when you want to reach the application from outside the cluster but you don't have a load balancer available.
Here is an example configuration file for creating a NodePort service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080
This configuration file creates a NodePort service named my-service that exposes the pods with the app: my-app label on port 80. The pods must be running a container that listens on port 80. The service exposes the application on a fixed port (30080 in this example) of each node in the cluster, so you can access it from the external network using the IP address of any node in the cluster together with that port number.
To create the service in the cluster, you can use the kubectl create -f command, passing the configuration file as an argument:
kubectl create -f service.yaml
This will create the service in the cluster, and you can verify it by running the kubectl get services command. The node port appears in the PORT(S) column of the output (for example, 80:30080/TCP).
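For illustration, once the service is created you can reach the application through any node, assuming the node port (30080 here) is open in your firewall or security group:
# Look up a node address in the INTERNAL-IP / EXTERNAL-IP columns
kubectl get nodes -o wide
# Replace <node-ip> with any node's address
curl http://<node-ip>:30080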
Overall, services are an important part of the Kubernetes network model, and they provide a stable network endpoint for applications running on the cluster. They allow you to expose your applications to the external network in a flexible and scalable way, and they provide load balancing and traffic management capabilities.
🌟 🔥 If you want to switch your career into tech and you are considering DevOps, you can join our online community here for live classes and FREE tutorial videos.