Mattias Fjellström

Originally published at mattias.engineer

Kubernetes-101: Security concepts

Security is in general a large and complex topic, and security in Kubernetes is no different. This is especially true if you host your own Kubernetes cluster. In this article I will go through some of the security-related concepts that Kubernetes has to offer. Specifically, I will discuss:

  • How we can restrict network traffic between Pods in our Kubernetes cluster using NetworkPolicies.
  • How we can provide identities to our Pods using ServiceAccounts, as well as how to provide permissions for these identities using Roles, RoleBindings, ClusterRoles, and ClusterRoleBindings.
  • How we can define privileges and access control settings for Pods and containers in our Pods using security contexts.

It is important to note that the security concepts I am discussing in this article concern security inside of our cluster and our containers. We must also take care of securing access to our cluster using authentication, authorization, and network-related restrictions. That is a whole different topic and is outside the scope of this article.

Restrict network communication using NetworkPolicies

A NetworkPolicy is a Kubernetes resource that is used to define allowed incoming and outgoing traffic to and from a Pod.

Like an Ingress, a NetworkPolicy resource has no effect unless there is something in your cluster that enforces it. For Ingresses that is an Ingress controller; for NetworkPolicies it is a network plugin that supports them, and there are a number of them available, for instance Calico. In this article I will only go through NetworkPolicies from a theoretical point of view; I will not install anything in my cluster.

Let's look at an example of a NetworkPolicy resource that displays many of the features that it supports and then go through it in more detail:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-network-policy
  namespace: my-application-namespace
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/24
        - namespaceSelector:
            matchLabels:
              project: source-namespace
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.1.0/24
      ports:
        - protocol: TCP
          port: 8080

In this manifest for a NetworkPolicy we see the following properties:

  • The .apiVersion is set to networking.k8s.io/v1, i.e. this resource belongs to the networking.k8s.io API group and it is using the v1 version.
  • The .kind is set to NetworkPolicy.
  • In .metadata I have defined both a name and a namespace, i.e. a NetworkPolicy resource is a namespaced resource.
  • In .spec we find the details of what network traffic we allow:
    • We select the Pods that this NetworkPolicy applies to using .spec.podSelector with a matchLabels statement. In this example I select Pods that have a label named app with a value of nginx.
    • In .spec.policyTypes I specify that this NetworkPolicy includes rules for both Ingress (incoming traffic) and Egress (outgoing traffic).
    • In .spec.ingress I configure what incoming traffic is allowed, and in .spec.egress I configure what outgoing traffic is allowed. See additional details for these two properties in their own sections below.

Once we have our manifest describing our NetworkPolicy we can create it using kubectl apply, list it using kubectl get networkpolicies (or kubectl get netpol using the shorthand version), and view additional details of it using kubectl describe networkpolicy. However, for brevity I will not run these commands in this article.
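For reference, assuming the manifest above is saved in a file named network-policy.yaml (a filename made up for this example), the commands would look like this:

$ kubectl apply -f network-policy.yaml
$ kubectl get netpol --namespace my-application-namespace
$ kubectl describe networkpolicy my-network-policy --namespace my-application-namespace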

Ingress rules

In the sample above I defined the following rule for ingress traffic:

ingress:
  - from:
      - ipBlock:
          cidr: 10.0.0.0/24
      - namespaceSelector:
          matchLabels:
            project: source-namespace
      - podSelector:
          matchLabels:
            role: frontend
    ports:
      - protocol: TCP
        port: 6379

This rule says that any traffic destined for port 6379 originating from one of the following sources:

  • an IP in the CIDR block 10.0.0.0/24
  • a Pod in a Namespace that has the label project: source-namespace
  • a Pod (in the policy's own Namespace) with the label role: frontend

will be allowed. Note that it is enough for one of the three sources to match for the traffic to be allowed, as long as it is destined for port 6379.

I could add additional rules if I wish.
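If I did, each additional rule would simply be another item in the ingress list. As a sketch, with a hypothetical second rule whose labels and port are made up for this example, it could look like this:

ingress:
  - from:
      - ipBlock:
          cidr: 10.0.0.0/24
      - namespaceSelector:
          matchLabels:
            project: source-namespace
      - podSelector:
          matchLabels:
            role: frontend
    ports:
      - protocol: TCP
        port: 6379
  # hypothetical second rule: allow Pods labeled role: monitoring to reach port 9090
  - from:
      - podSelector:
          matchLabels:
            role: monitoring
    ports:
      - protocol: TCP
        port: 9090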

Egress rules

In the sample above I defined the following rule for egress traffic:

egress:
  - to:
      - ipBlock:
          cidr: 10.0.1.0/24
    ports:
      - protocol: TCP
        port: 8080

This rule says that my Pods (identified by the label app: nginx) are allowed to send traffic on port 8080 to IP addresses in the CIDR block 10.0.1.0/24. Any outgoing traffic outside of this rule will be blocked.

I could add additional rules if I wish.
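One practical example of an additional egress rule is allowing DNS lookups: with an egress policy in place, the selected Pods are otherwise blocked from resolving names. A sketch of such a rule (DNS uses port 53 over both UDP and TCP) could look like this:

egress:
  - to:
      - ipBlock:
          cidr: 10.0.1.0/24
    ports:
      - protocol: TCP
        port: 8080
  # allow DNS lookups to any destination
  - ports:
      - protocol: UDP
        port: 53
      - protocol: TCP
        port: 53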

Restrict what Pods can do using ServiceAccounts

Each Pod we run in our Kubernetes cluster has an associated ServiceAccount. A ServiceAccount is a non-human principal that we can assign roles and permissions to.

In this Kubernetes-101 series of articles we have created many Pods, but we have not assigned any ServiceAccounts to our Pods. If we do not explicitly assign a ServiceAccount to a Pod it will be assigned the default ServiceAccount of its Namespace instead.

Each Namespace in your cluster has a default ServiceAccount. You can create additional ServiceAccounts scoped to a given Namespace.

Why would you want to use a ServiceAccount? If your application must perform any action inside the cluster, for instance list Pods through the Kubernetes API, then you need a ServiceAccount with the correct permissions to do so. Another situation is when you have a CI/CD pipeline, e.g. GitHub Actions, where you need an identity that can create resources inside of your cluster. In that situation you can create a ServiceAccount and obtain a token for this account that you store as a secret in your CI/CD pipeline. You can then use the token to perform actions as the ServiceAccount. In this article I will only go through how to create a ServiceAccount and assign it to a Pod in a cluster.
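As a side note on the token part: on Kubernetes 1.24 and later you can request a short-lived token for a ServiceAccount with kubectl create token. Assuming a ServiceAccount named my-service-account exists in the Namespace my-namespace (we create exactly that below), the command is:

$ kubectl create token my-service-account --namespace my-namespace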

A manifest for a basic ServiceAccount resource looks like this:

# service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: my-namespace

The ServiceAccount manifest has an .apiVersion, a .kind, and .metadata. This is all we need to create a ServiceAccount, although there are additional options you might want to specify. I try to keep things simple here, but a couple of those options are shown in the sketch below.
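As an illustration of such options (this sketch is not the manifest I will use below), two common ones are automountServiceAccountToken, which controls whether Pods using this ServiceAccount automatically get a token mounted, and imagePullSecrets, which lists Secrets to use when pulling container images. The Secret name here is made up:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: my-namespace
automountServiceAccountToken: false # do not automatically mount a token into Pods
imagePullSecrets:
  - name: my-registry-secret # a hypothetical Secret holding container registry credentials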

I can create my ServiceAccount from my manifest using kubectl apply. Note that the Namespace named my-namespace must already exist:

$ kubectl apply -f service-account.yaml

serviceaccount/my-service-account created

I can list all my ServiceAccounts for a given Namespace using kubectl get serviceaccounts:

$ kubectl get serviceaccounts --namespace my-namespace

NAME                 SECRETS   AGE
default              0         37s
my-service-account   0         20s

We see that my-service-account is listed, but we can also see the default ServiceAccount that was created when my Namespace was created. To shorten the previous command a bit we can use the short form of serviceaccounts, which is sa, so the previous command becomes kubectl get sa --namespace my-namespace.

As usual I can see additional details of a given ServiceAccount using kubectl describe:

$ kubectl describe serviceaccount my-service-account --namespace my-namespace

Name:                my-service-account
Namespace:           my-namespace
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              <none>
Events:              <none>

My ServiceAccount is not very interesting since I kept the configuration to a minimum.

Assign a ServiceAccount to a Pod

To assign a ServiceAccount to a Pod we simply specify the name of the ServiceAccount in the .spec.serviceAccountName property of the Pod manifest:

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: my-namespace
spec:
  serviceAccountName: my-service-account
  containers:
    - name: nginx
      image: nginx:latest

I create my Pod using kubectl apply:

$ kubectl apply -f pod.yaml

pod/my-pod created

Then to verify the ServiceAccount is set I can use kubectl describe:

$ kubectl describe pod my-pod --namespace my-namespace

Name:             my-pod
Namespace:        my-namespace
Priority:         0
Service Account:  my-service-account
Node:             minikube/192.168.49.2
Start Time:       Wed, 14 Feb 2023 20:33:17 +0100
Labels:           <none>
Annotations:      <none>
Status:           Running
IP:               172.17.0.2
... (output truncated) ...

The output indicates that my-service-account is correctly assigned to the Pod.
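If I only care about the ServiceAccount field I can also query it directly with a jsonpath expression:

$ kubectl get pod my-pod --namespace my-namespace -o jsonpath='{.spec.serviceAccountName}'

my-service-account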

Role-Based Access Control (RBAC)

A ServiceAccount is of little use unless we can assign it permissions to perform operations in our Kubernetes cluster. To do this we have the option of using RBAC. RBAC in Kubernetes involves four new resource types:

  • Role
  • ClusterRole
  • RoleBinding
  • ClusterRoleBinding

A Role is a set of permissions, scoped to a Namespace. A ClusterRole is likewise a set of permissions, but it is not scoped to a single Namespace. A RoleBinding assigns a given Role, or ClusterRole, to a subject (e.g. a ServiceAccount). A ClusterRoleBinding assigns a ClusterRole to a subject.

To keep things light in this article I will concentrate on Roles and RoleBindings. Keep in mind that if you need to assign permissions across Namespaces or for cluster-scoped resources (e.g. Nodes) then you would need to use ClusterRoles and ClusterRoleBindings instead.
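For reference, here is a sketch of what those cluster-scoped counterparts could look like before we move on to the namespaced variants. The names are made up for this example, and the rule grants read access to Nodes, which are cluster-scoped resources:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-cluster-role # no namespace, a ClusterRole is cluster-scoped
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-cluster-role-binding
subjects:
  - kind: ServiceAccount
    name: my-service-account
    namespace: my-namespace # the ServiceAccount itself still lives in a Namespace
roleRef:
  kind: ClusterRole
  name: my-cluster-role
  apiGroup: rbac.authorization.k8s.io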

A basic Role that specifies permissions to read Pods might look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-role
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]

The Role manifest contains .apiVersion, .kind, and .metadata. However, unlike most other manifests we have seen there is no .spec; instead there is .rules. A rule identifies resources (like Pods, Deployments, Jobs, Services, etc.) from a specified apiGroup, where an empty value "" refers to the core API group in which most objects we are familiar with live, e.g. Pods. Finally, a rule contains verbs, which specify what actions can be performed on the resources. So our single rule says that this Role gives permission to get, watch, and list resources of type pods in the core API group.

Assuming we have a ServiceAccount named my-service-account we can now assign this Role to our ServiceAccount using a RoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-role-binding
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: my-service-account # my service-account from above
    namespace: my-namespace # ServiceAccounts belong to the core API group, so no apiGroup is needed
roleRef:
  kind: Role
  name: my-role # the name of my role from above
  apiGroup: rbac.authorization.k8s.io

We see that the RoleBinding manifest ties together my subject, which is the ServiceAccount (the identity), with my Role. From the manifest it is also clear that we could assign the same Role to many subjects in the same RoleBinding if we wish.
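Once the Role and RoleBinding are applied, one way to verify that the permissions behave as expected is to let kubectl impersonate the ServiceAccount using kubectl auth can-i:

$ kubectl auth can-i list pods \
    --namespace my-namespace \
    --as system:serviceaccount:my-namespace:my-service-account

If everything is wired up correctly the answer should be yes, while for example kubectl auth can-i delete pods with the same flags should answer no, since the Role does not include the delete verb.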

Define privileges and access control settings for Pods and containers

A security context allows us to specify what user ID a container should run as, what group it should belong to, whether we want to run as a privileged user or not, whether to use Security-Enhanced Linux (SELinux) features, and much more. This is an advanced topic and in this article I will just show you how to specify a security context.

Set the security context for a Pod

If you specify a security context for a Pod it will be set for each container in the Pod. A basic example of a Pod with a security context looks like this:

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  securityContext:
    runAsUser: 1010
  containers:
    - name: my-container
      image: busybox:1.28
      command: ["sh", "-c", "sleep 1h"]

The security context is specified in .spec.securityContext. In this example I specify that each container in my Pod should run all processes as the user with ID 1010. The container named my-container starts a sleep process at startup. We can verify that this process is indeed started as user 1010. I begin by creating my Pod using kubectl apply:

$ kubectl apply -f pod.yaml

pod/my-pod created

Then I exec into my Pod and look at the current processes:

$ kubectl exec -it my-pod -- sh
$ ps

PID   USER     TIME  COMMAND
    1 1010      0:00 sleep 1h
    7 1010      0:00 sh
   13 1010      0:00 ps

From the output we can see that the sleep process is indeed started as user 1010.

Set the security context for a container

We can also specify a security context for a given container; this will override any clashing settings in the security context that we specify for the whole Pod. Let us modify the previous example and add a security context for the container:

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  securityContext:
    runAsUser: 1010
  containers:
    - name: my-container
      image: busybox:1.28
      command: ["sh", "-c", "sleep 1h"]
      securityContext:
        runAsUser: 2020

In this manifest I have added .spec.containers[0].securityContext. In this security context I specify that I want the container to run any process it starts as the user with ID 2020, which overrides the security context specified for the whole Pod. I repeat the process of creating my Pod and then exec-ing into it:

$ kubectl apply -f pod.yaml

pod/my-pod created

$ kubectl exec -it my-pod -- sh
$ ps

PID   USER     TIME  COMMAND
    1 2020      0:00 sleep 1h
    7 2020      0:00 sh
   13 2020      0:00 ps

The output shows that the sleep process has been started as user 2020.
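There are many more fields than runAsUser. As a sketch, not an exhaustive or required list, a more locked-down Pod could combine a few of the commonly used fields like this:

apiVersion: v1
kind: Pod
metadata:
  name: my-hardened-pod # a made-up name for this sketch
spec:
  securityContext:
    runAsNonRoot: true # refuse to start containers that would run as root
    runAsUser: 1010
    fsGroup: 2000 # group ownership applied to mounted volumes
  containers:
    - name: my-container
      image: busybox:1.28
      command: ["sh", "-c", "sleep 1h"]
      securityContext:
        allowPrivilegeEscalation: false # block processes from gaining more privileges than their parent
        readOnlyRootFilesystem: true # make the container filesystem read-only
        capabilities:
          drop: ["ALL"] # drop all Linux capabilities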

Summary

This was mostly a theoretical lesson in three important concepts related to security in your Kubernetes cluster. In summary, we looked at:

  • NetworkPolicies
  • ServiceAccounts (together with Roles, RoleBindings, ClusterRoles, and ClusterRoleBindings)
  • Security contexts

The next article will be the last in this series of Kubernetes-101 articles. There I will do a high-level overview of the Kubernetes landscape to see what else there is to learn. Kubernetes is a platform, and although there are many concepts to learn you can still master plain Kubernetes in a relatively short time. The Kubernetes landscape includes a lot more than plain Kubernetes, and this is where things really get out of hand. There are at least 100 tools for any task you might want to do in your cluster (that is an exaggeration, but probably close to the truth). Do you want to deploy applications using GitOps? Should you use Flux or Argo CD? Or something else? I will scratch the surface of this Kubernetes landscape in the next article. See you there!
