Lukas Gentele for Loft Labs, Inc.

Originally published at loft.sh

Kubernetes Network Policies for Isolating Namespaces

By Ashish Choudhary

Kubernetes is hailed as a modern-day operating system for cloud-native applications. It simplifies deployment and management for applications running in the cloud. However, when you run applications in production, security cannot be overlooked.

Kubernetes does provide some defaults, but those shouldn't be relied on when you have mission-critical applications running inside a multi-tenant cluster. The following are some of the security questions you should answer before moving to production:

  1. Who can make changes to my cluster?
  2. What if we use the default namespace for all the different applications?
  3. What about container image vulnerabilities?
  4. How about securing sensitive data like credentials, keys, etc.?
  5. Should we restrict or allow all the pods to communicate with each other?

You should be aware that Kubernetes is not secure by default but provides ways and means to handle these security risks. For example, using RBAC, you can enable only an authorized person to make changes to your cluster.

Using namespaces, you can achieve resource and environment segregation among different teams or applications. Using namespaces and RBAC, you can limit the impact of a disaster.

For example, if someone fires a command by mistake, the impact is limited to that namespace. You can reduce container image vulnerabilities by using a secure base image and scanning regularly. Using Secrets, you can protect sensitive data. By default, pods can communicate with each other irrespective of their namespace, but with Kubernetes network policies you can control that: the policies behave like firewall rules between your pods.
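To make the RBAC point concrete, here is a minimal sketch of a namespaced Role and RoleBinding; the role name, namespace, and user are made up for the example:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: deployment-editor       # hypothetical role name
  namespace: team-a             # hypothetical namespace
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: deployment-editor-binding
  namespace: team-a
subjects:
- kind: User
  name: jane                    # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-editor
  apiGroup: rbac.authorization.k8s.io

With this binding, the user jane can manage Deployments in the team-a namespace but has no access anywhere else.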

Networking and Internet Connections

We've discussed network policies briefly, but let's discuss them in detail. As you know, the Kubernetes network model is flat, which allows pods to communicate with each other by default. You can schedule your workloads without depending on the physical topology of network devices (routers, firewalls, etc.).

Network policies use Kubernetes constructs such as label selectors, rather than IP addresses, to define which pods can talk to each other. You can think of a network policy as a virtual firewall that is applied to your pods in real time.

It's worth mentioning that Kubernetes only defines the NetworkPolicy API and stores the objects; your network plugin handles the actual enforcement. Network policies are configured as declarative manifests and can, of course, be kept with your source code, which lets you deploy them along with your applications on Kubernetes.
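Structurally, a NetworkPolicy manifest has three main parts: which pods it applies to (podSelector), which directions it governs (policyTypes), and the allow rules (ingress/egress). A bare skeleton, with placeholder names, looks like this:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: example-policy          # placeholder name
  namespace: example-ns         # placeholder namespace
spec:
  podSelector:                  # which pods this policy applies to
    matchLabels:
      app: my-app
  policyTypes:
  - Ingress
  - Egress
  ingress: []                   # inbound allow rules; an empty list allows nothing
  egress: []                    # outbound allow rules; an empty list allows nothing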

Deny All Traffic to a Pod

Let's begin by creating pods and testing whether they can communicate. Note that Kubernetes' default behavior is to allow all pods to talk to each other, so let's replicate this scenario by first creating a namespace named test:

kubectl create namespace test                                                   
namespace/test created

Create an NGINX pod:

kubectl run nginx --image=nginx --labels app=nginx --namespace test --expose --port 80
service/nginx created
pod/nginx created


Verify that the pod and service are created in the test namespace:

kubectl get all -n test
NAME        READY   STATUS    RESTARTS   AGE
pod/nginx   1/1     Running   0          96s

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/nginx   ClusterIP   10.110.110.174   <none>        80/TCP    96s

Now try to access this Nginx pod from another pod. Use the following command to start a test pod:

kubectl run busybox --rm -ti --image=alpine -- /bin/sh

If you don't see a command prompt, try pressing enter.

As soon as you get access to the shell, first install curl using the apk add curl command. Then simply run curl 10.244.120.65:80 (the pod IP will be different in your cluster). If you are unsure about the pod's IP address, you can use the command kubectl describe pod nginx -n test to confirm it.
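If you just want the IP without reading through the full describe output, you can also print it directly; the IP column in the output is the address to use with curl:

kubectl get pod nginx -n test -o wide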

curl 10.244.120.65:80

You should see the following output.

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

You should be able to access the Nginx pod without any issue. It is the default and expected behavior, and there is nothing new about this setup.

Another thing worth mentioning is that you created the alpine pod in the default namespace (if you don't specify a namespace, the default one is used), yet it could still reach the Nginx pod in the test namespace. So it's safe to say that namespaces alone don't control pod-to-pod communication.

Verify the Network Policy

Let's try to change this default behavior by introducing a network policy that should deny all incoming traffic by default for pods in the test namespace.

Since you're using minikube for this testing, you'll need a network plugin (Calico, Weave Net, etc.), because minikube's default setup does not enforce network policies. You can run the following command to start minikube with Calico support. It's better to start with a clean slate, so run minikube stop and then minikube delete first to stop and delete the old cluster.

minikube start --network-plugin=cni --cni=calico

Follow the instructions in the minikube documentation to verify the Calico setup.
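One quick check is to look for the Calico pods in the kube-system namespace; the k8s-app=calico-node label used below is the one Calico's standard manifests apply to the node agent pods:

kubectl get pods -l k8s-app=calico-node -n kube-system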

Now, run kubectl create namespace test to recreate the test namespace, and kubectl run nginx --image=nginx --labels app=nginx --namespace test --expose --port 80 to recreate the Nginx pod.

You will use the following network policy object. The important thing to note here is that podSelector attaches this policy to pods matching the label app=nginx, and since ingress defines no rules, no inbound traffic is allowed.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-policy
  namespace: test
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress: []

Create this network policy by pasting the content into a file named deny-all-policy.yaml and running the following kubectl command.

kubectl apply -f deny-all-policy.yaml                                           
networkpolicy.networking.k8s.io/deny-all-policy created
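Before testing, you can confirm that the policy exists and see which pods it selects:

kubectl get networkpolicy -n test
kubectl describe networkpolicy deny-all-policy -n test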

Another thing to highlight here is that network policies are applied dynamically at runtime. So to test it further, run another pod that will try to access the Nginx pod you just created. For this purpose, attach to a terminal session of the alpine container image using the following command.

kubectl run busybox --rm -ti --image=alpine --namespace test -- /bin/sh

Install curl again with apk add curl (this is a new pod), then retry the request. As you can see in the following output, the traffic is dropped and the connection times out instead.

curl 10.244.120.65:80
curl: (28) Failed to connect to 10.244.120.65 port 80: Operation timed out

Clean Up Everything

kubectl delete networkpolicy deny-all-policy -n test
kubectl delete service --all -n test
kubectl delete pod --all -n test

Allow/Limit Traffic to a Pod

In this section, you will allow traffic only from pods with the matching label app=nginx. To test this, use the following network policy.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-policy
  namespace: test
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
      - podSelector:
          matchLabels:
            app: nginx

Apply this policy by saving the above content to a file named allow-policy.yaml and running the following command.

kubectl apply -f allow-policy.yaml
networkpolicy.networking.k8s.io/allow-policy created

You have the network policy created, and you can test it by running a pod with the app=nginx label.

Run the following command.

kubectl run busybox --rm -ti --image=alpine --labels app=nginx --namespace test -- /bin/sh

Incoming traffic is allowed:

curl 10.244.120.73:80

Output:

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
......

Now test the traffic from a pod without the label app=nginx, using the following command.

kubectl run busybox --rm -ti --image=alpine --namespace test -- /bin/sh

Incoming traffic is blocked:

curl 10.244.120.73:80
curl: (28) Failed to connect to 10.244.120.73 port 80: Operation timed out

This policy's ingress rule uses only a podSelector, which is scoped to the policy's own namespace (test), so pods from other namespaces are not allowed. Let's verify that as well.

kubectl run busybox --rm -ti --image=alpine --labels app=nginx --namespace default -- /bin/sh

Traffic from the default namespace is blocked:

curl 10.244.120.73:80
curl: (28) Failed to connect to 10.244.120.73 port 80: Operation timed out

We can use the following network policy to allow traffic from pods labeled app=nginx running in another namespace labeled env=staging.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-different-namespace-policy
  namespace: test
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: nginx
      namespaceSelector:
        matchLabels:
          env: staging
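To test it, apply the policy and label the namespace you want to allow. The sketch below assumes you save the manifest as allow-different-namespace-policy.yaml and, for this test, label the default namespace with env=staging:

kubectl apply -f allow-different-namespace-policy.yaml
kubectl label namespace default env=staging
kubectl run busybox --rm -ti --image=alpine --labels app=nginx --namespace default -- /bin/sh

With the label in place, the curl from this pod should now succeed. Remove the label afterwards with kubectl label namespace default env- so it doesn't affect later tests.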

Clean Up Everything

kubectl delete networkpolicy allow-policy -n test
kubectl delete service --all -n test
kubectl delete pod --all -n test

Default Deny All Inbound and Outbound Traffic

This is the default deny-all inbound and outbound network policy. It blocks all ingress and egress traffic for all pods in the test-policy namespace. This policy also comes in handy when you add a new application but forget to create a policy for it: the default policy still applies to those pods and restricts both ingress and egress traffic.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-ingress-egress-deny
  namespace: test-policy
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
  - Egress
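A quick way to see the egress side in action is to start a pod in that namespace and try to reach anything outside it. This is a sketch: the file name is just an example, the IP is the Nginx pod IP used earlier, and the test-policy namespace has to exist first.

kubectl create namespace test-policy
kubectl apply -f default-ingress-egress-deny.yaml
kubectl run busybox --rm -ti --image=alpine --namespace test-policy -- /bin/sh
# inside the pod, any outbound connection now times out
# (even apk add curl fails because egress is blocked, so use the wget applet that ships with alpine)
wget -T 5 http://10.244.120.65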

Securing Control Plane Components

Now let's talk about securing etcd, where all your cluster configuration and desired state persist. If attackers get access to the etcd datastore, they can control your cluster and run containers with elevated privileges on any cluster node. You can secure it by using Public Key Infrastructure, which uses a combination of keys and certificates. In addition, this ensures that data in transit is secure using TLS and access is restricted using credentials.

The Kubernetes API server is another essential component that allows external clients to communicate, know, and update the state of your cluster. Therefore, it is vital to make sure it's secure from external attacks.

The API server typically listens on a secure TLS port such as 443, 6443, or 8443. You can get details about the API server endpoint using the kubectl cluster-info command. If you try to access the API server endpoint anonymously, you will most likely get a Forbidden response: the request reaches the API server, but the anonymous user is not authorized to access that path.
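To try this yourself, look up the endpoint and probe it with curl. The address below is a placeholder; use whatever kubectl cluster-info prints for your cluster, and note that -k only skips certificate verification for this quick test:

kubectl cluster-info
curl -k https://<api-server-host>:<port>/

The response looks like this: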

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {

  },
  "code": 403
}

Similar to etcd, the Kubernetes API server can also be secured using PKI and TLS.

Conclusion

By now, you've learned a bit more about securing pod-to-pod communication with Kubernetes network policies to protect your cluster and applications. However, remember that you can't rely on network policies alone; you also need to apply other security techniques, such as using TLS with mutual authentication to encrypt traffic.

