Service.yml:
apiVersion: v1
kind: Service
metadata:
  name: <name_of_service>
spec:
  selector:
    <labelkey_of_pod_that_should_be_part_of_service>: <labelvalue_of_pod_that_should_be_part_of_service>
  ports: # the value entered here is a list
    - protocol: 'TCP'
      port:
      targetPort:
  type:
protocol: TCP by default.
port: <port_number_at_which_service_will_be_exposed>, i.e. the outside-world port.
targetPort: <port_at_which_container_will_be_exposed>, i.e. the port inside the container.
type: <ClusterIP / NodePort / LoadBalancer>
// Refer to the "Exposing a deployment in Kubernetes" blog for a detailed idea of type.
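Putting the template together, a complete service.yml with concrete values (the service name, label, and port numbers here are illustrative, not prescribed) might look like:

```yaml
# Illustrative example: exposes pods labelled app: backend
# on port 80, forwarding to port 8080 inside the container.
apiVersion: v1
kind: Service
metadata:
  name: backend-service   # hypothetical service name
spec:
  selector:
    app: backend          # label carried by the target pods
  ports:
    - protocol: TCP
      port: 80            # port at which the service is exposed
      targetPort: 8080    # port inside the container
  type: NodePort
```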
.kind: Defines the type of k8s object being created.
.metadata.name: Assigns the name to the service object.
.spec: Defines the specification of the service object.
.spec.selector: Here, the selector is implicitly treated as "matchLabels", so unlike in a Deployment we don't define that sub-field ourselves.
What it does is basically look for the pods that have the label key/value pair defined under selector, and then bring those pods under this service.
So, for example, if the key/value pair is:
spec:
  selector:
    app: backend
it will look for all the pods having this key/value pair as a label. Now, backtracking to the origin of the pods, we find that pods are created via a Deployment. So, joining the dots, we can conclude that the value in selector should be the same as the Deployment's:
.spec.selector.matchLabels
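To illustrate the dots being joined (all names and the image here are hypothetical), the Deployment's matchLabels, its pod template labels, and the Service's selector must all carry the same key/value pair:

```yaml
# Hypothetical Deployment: .spec.selector.matchLabels and
# .spec.template.metadata.labels both use app: backend.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: nginx   # placeholder image
---
# The Service selects the same label, so it picks up these pods.
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```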
.spec.ports: The value accepted here is in list format, as we can enter more than one entry; hence the field is named portS.
.spec.ports.protocol: 'TCP' by default.
.spec.ports.port: <port_number_at_which_service_will_be_exposed>
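Since ports is a list, one service can expose more than one port. A sketch of the ports section only (the port names and numbers are illustrative; when there are multiple entries, each one needs a unique name):

```yaml
# Two ports on one service; each list entry gets a unique name.
ports:
  - name: http
    protocol: TCP
    port: 80          # outside-world port
    targetPort: 8080  # port inside the container
  - name: metrics
    protocol: TCP
    port: 9090
    targetPort: 9090
```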
To apply this service, run:
kubectl apply -f service.yml