
Tiexin Guo for GitGuardian

Originally published at blog.gitguardian.com

Kubernetes Hardening Tutorial Part 2: Network

In the first part of this tutorial, we discussed how to enhance your Pod security in your K8s cluster. If you haven't read it yet, here's the link.

Today, we will walk you through networking-related security issues in a Kubernetes cluster and how to address them. After reading this tutorial, you will be able to:

  • harden control plane networking
  • achieve resource separation by namespace and network policy
  • manage your secrets properly

1 Control Plane Network Hardening

1.1 K8s as a Service with Cloud Provider Managed Control Plane

In many "Kubernetes as a service" types of clusters (for example AWS's Elastic Kubernetes Service), you don't "own" the control plane. You pay the cloud provider to manage it for you, so that you have less operational overhead to worry about and can focus only on the worker nodes.

This, however, doesn't mean you can't do anything about your control plane. When creating the cluster, you still need to specify a security group for the cluster's control plane.

In general, there are two strong recommendations regarding this:

  • Use different security groups for the cluster CP and for worker nodes
  • Use a dedicated security group for each cluster CP if you have multiple clusters

The first recommendation isn't hard to understand: sometimes you need to open extra ports on your worker nodes, but not on your control plane nodes. For example, if you run an Ingress that accepts traffic on a specific port, you only need to open that port on the worker nodes. Opening the same port on the control plane would needlessly increase the attack surface.

In practice, we want separate security groups for the control plane and the worker nodes.
In our example, we create two aws_security_group resources, one named "control_plane" and the other "worker", and we never attach the same security group to both the control plane and the worker nodes. Check out the source code here.

#terraform/modules/eks/control_plane_sg.tf

resource "aws_security_group" "control_plane" {
  name        = "eks_cluster_${var.cluster_name}_control_plane_sg"
  description = "EKS cluster ${var.cluster_name} control plane security group."

  vpc_id = var.vpc_id

  tags = {
    "Name" = "eks_cluster_${var.cluster_name}_control_plane_sg"
  }
}
#terraform/modules/eks/worker_node_sg.tf

resource "aws_security_group" "worker" {
  name        = "eks_cluster_${var.cluster_name}_worker_sg"
  description = "Security group for all worker nodes in the cluster."

  vpc_id = var.vpc_id

  lifecycle {
    ignore_changes = [ingress]
  }

  tags = {
    "Name"                   = "eks_cluster_${var.cluster_name}_worker_sg"
    "kubernetes.io/cluster/" = var.cluster_name
  }
}
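These two resources only define the groups themselves; the actual traffic rules live in separate aws_security_group_rule resources. Below is a minimal sketch of how the two groups could reference each other (the rule names and exact port choices are illustrative, not taken from the demo repository): the control plane only accepts API traffic from the worker group, and the workers only accept kubelet traffic from the control plane group.

#illustrative sketch, not part of the demo repository

resource "aws_security_group_rule" "control_plane_ingress_from_worker" {
  type                     = "ingress"
  protocol                 = "tcp"
  from_port                = 443
  to_port                  = 443
  description              = "Allow worker nodes to reach the cluster API server"
  security_group_id        = aws_security_group.control_plane.id
  source_security_group_id = aws_security_group.worker.id
}

resource "aws_security_group_rule" "worker_ingress_from_control_plane" {
  type                     = "ingress"
  protocol                 = "tcp"
  from_port                = 10250
  to_port                  = 10250
  description              = "Allow the control plane to reach kubelets on worker nodes"
  security_group_id        = aws_security_group.worker.id
  source_security_group_id = aws_security_group.control_plane.id
}

Because the rules reference security group IDs rather than broad CIDR ranges, opening an extra port on the workers never implicitly opens it on the control plane.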

If you would like to give it a try yourself:

git clone git@github.com:IronCore864/k8s-security-demo.git
cd k8s-security-demo
git checkout eks-security-groups
cd terraform
terraform init
terraform apply

As for the second recommendation: each cluster's control plane needs to accept traffic from its own worker nodes, but if you have multiple clusters, you don't want cluster A's control plane to accept traffic from cluster B's worker nodes.

If we used the same security group for the control planes of all clusters, that group would have to allow ingress from every cluster's worker nodes. In that case, if one cluster's worker node is compromised, the attack surface grows dramatically, because the control planes of all your clusters are now at risk.


As shown in the pull request above, we put aws_security_group.control_plane inside the eks Terraform module, so when we reuse the module to create another cluster, each cluster gets its own dedicated control plane security group.
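To make this concrete, here is a rough sketch of what reusing the module could look like; the module path and the surrounding variables (the VPC resource name in particular) are assumptions for illustration, not copied from the repository:

#illustrative sketch: two clusters, two dedicated control plane security groups

module "cluster_a" {
  source       = "./modules/eks"
  cluster_name = "cluster-a"
  vpc_id       = aws_vpc.main.id
}

module "cluster_b" {
  source       = "./modules/eks"
  cluster_name = "cluster-b"
  vpc_id       = aws_vpc.main.id
}

Each module instance creates its own aws_security_group.control_plane, so a compromised worker node in cluster B has no path to cluster A's control plane.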

1.2 Self-Managed K8s Clusters

Maybe for some reason, you don't want the cloud providers to manage the control plane for you, or perhaps you are running a K8s cluster on-premise. In that case, you need to secure the networking part for the control plane yourself.

There are multiple tools that can help you create a K8s cluster. Some of them can create the underlying infrastructure for you; others can't. If you deploy it in the cloud, the "underlying infrastructure" here means VPC, network, subnets, virtual machines, security groups, etc. Of course, if you deploy it on-prem, you will need to sort out the networking part yourself.

For example:

  • kubeadm: you need to manage your own infrastructure.
  • kops: if you deploy in a cloud provider, it can create the underlying infrastructure for you; it can also generate Terraform scripts, which you can then run to provision the infrastructure.
  • Kubespray: provides tools like Terraform scripts that can create the underlying infrastructure for you.

No matter which tool you use, and no matter who creates the underlying infrastructure and the security groups (you or the tool), you still need to make sure the rules mentioned in the previous section are satisfied.

Plus, since you now own the control plane (instead of having it managed by a cloud provider), with great power comes great responsibility: you also need to make sure only the necessary control plane ports are exposed.

Here is a list of ports that are necessary from the control plane's perspective:

Protocol | Direction | Port Range                     | Purpose
TCP      | Inbound   | 6443 (or 8080 if not disabled) | Kubernetes API server
TCP      | Inbound   | 2379-2380                      | etcd server client API
TCP      | Inbound   | 10251                          | kube-scheduler
TCP      | Inbound   | 10252                          | kube-controller-manager
TCP      | Inbound   | 10258                          | cloud-controller-manager

Note: the official etcd ports are 2379 for client requests and 2380 for peer communication. The etcd ports can be set to accept TLS traffic, non-TLS traffic, or both TLS and non-TLS traffic.

And for the worker nodes:

Protocol | Direction | Port Range  | Purpose
TCP      | Inbound   | 10250       | kubelet API
TCP      | Inbound   | 30000-32767 | NodePort Services

For example, we can start from these security groups and continue from there.

Note that this is only the bare minimum. You will need to add or change ports if Pods in your cluster must accept traffic on ports outside the ranges specified in the Terraform files (e.g., 22, 80, or 443).
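Translating the tables above into security group rules works the same way as in section 1.1: one rule per port range, always referencing the other security group rather than a wide CIDR block. A minimal sketch (resource names are illustrative, and the NodePort CIDR is a hypothetical VPC range you would replace with your own):

#illustrative sketch, not part of the demo repository

resource "aws_security_group_rule" "api_server_from_workers" {
  type                     = "ingress"
  protocol                 = "tcp"
  from_port                = 6443
  to_port                  = 6443
  description              = "Kubernetes API server"
  security_group_id        = aws_security_group.control_plane.id
  source_security_group_id = aws_security_group.worker.id
}

resource "aws_security_group_rule" "nodeport_services" {
  type              = "ingress"
  protocol          = "tcp"
  from_port         = 30000
  to_port           = 32767
  description       = "NodePort Services"
  security_group_id = aws_security_group.worker.id
  cidr_blocks       = ["10.0.0.0/16"] # hypothetical VPC CIDR, adjust to your network
}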

If you would like to try it out yourself:

git clone git@github.com:IronCore864/k8s-security-demo.git
cd k8s-security-demo
git checkout self-managed-k8s-security-groups
cd terraform
terraform init
terraform apply

2 Namespace Separation and Network Policy

K8s namespaces are one way to partition cluster resources among multiple individuals, teams, or applications within the same cluster to achieve multi-tenancy.

By default, though, namespaces are not automatically isolated. Pods and services in different namespaces can still communicate with each other.

2.1 An Experiment on Namespaces

First, we create two new namespaces, "namespace-a" and "namespace-b", and deploy our little demo app in "namespace-a":

git clone git@github.com:IronCore864/k8s-security-demo.git
cd k8s-security-demo
git checkout namespace-separation
kubectl apply -f deploy-namespace-separation.yaml

Then let's create another Pod in "namespace-b" and try to access our demo app's service in "namespace-a" from it:

# create a test pod in namespace-b
$ kubectl apply -f testpod-namespace-separation.yaml

# trying to access k8s-security-demo service in namespace-a
$ kubectl exec -n namespace-b -it testpod -- sh
/ $ curl k8s-security-demo.namespace-a
Hello, world!/ $

By default, we can resolve "servicename.namespace" by Kubernetes DNS from any other namespace and access it. Putting two apps into different namespaces doesn't separate them by default.

2.2 Network Policies

We can achieve true separation by using network policies.

Network policies control traffic flow between Pods, namespaces, and external IP addresses.

By default, no network policies are applied to Pods or namespaces, resulting in unrestricted ingress and egress traffic within the Pod network. But we can create one so that services within one namespace can't access services in another namespace to achieve resource separation.

Note/Prerequisites: network policies are implemented by the network plugin. To use network policies, you must be using a networking solution that supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect.

Now, let's have a look at an example:

First, let's look at the network policies that we are going to create (PR here):

#networkpolicy.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: namespace-a
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-namespace-a
  namespace: namespace-a
spec:
  podSelector:
    matchLabels:
      app: k8s-security-demo
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: a

As shown, we have created two network policies: the first denies all ingress traffic to Pods in "namespace-a" by default, and the second allows ingress to the demo app from namespaces labeled team: a, i.e., from within our own namespace.
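One detail worth calling out: the namespaceSelector above matches the label team: a, so the policy only has the intended effect if "namespace-a" actually carries that label (the manifests on the demo branch presumably take care of this). Conceptually, the namespaces would look something like the following sketch, which is not the exact file from the repository:

#namespaces-sketch.yaml (illustrative)

apiVersion: v1
kind: Namespace
metadata:
  name: namespace-a
  labels:
    team: a # matched by the namespaceSelector in allow-namespace-a
---
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-b
  labels:
    team: b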

Try it out yourself:

git clone git@github.com:IronCore864/k8s-security-demo.git
cd k8s-security-demo
git checkout network-policy
kubectl apply -f deploy-namespace-separation.yaml
kubectl apply -f networkpolicy.yaml
kubectl apply -f testpod-namespace-separation.yaml

Now let's try again to access the service in "namespace-a" from "namespace-b":

$ kubectl exec -n namespace-b -it testpod -- sh
/ $ curl k8s-security-demo.namespace-a
curl: (28) Failed to connect to k8s-security-demo.namespace-a port 80 after 129977 ms: Operation timed out

If we try the same from "namespace-a":

$ kubectl exec -n namespace-a -it testpod -- sh
/ $ curl k8s-security-demo.namespace-a
Hello, world!/ $

We got a successful result.

With different selectors, network policies can achieve a whole lot more.
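For example, a single policy can combine pod selectors, IP blocks, and egress rules. The following is a sketch only (the labels, policy name, and CIDR are made up for illustration): it lets a backend Pod receive traffic solely from frontend Pods on port 8080, and lets it talk out only to a database subnet on port 5432.

#advanced-networkpolicy-sketch.yaml (illustrative)

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy      # hypothetical name
  namespace: namespace-a
spec:
  podSelector:
    matchLabels:
      app: backend          # hypothetical label
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend     # hypothetical label
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - ipBlock:
        cidr: 10.20.0.0/16  # hypothetical database subnet
    ports:
    - protocol: TCP
      port: 5432

Keep in mind that once Egress is listed in policyTypes, anything not explicitly allowed (DNS, for instance) is blocked as well.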


3 Secrets Management

Since K8s secrets contain sensitive information like passwords, we need to make sure the secrets are safely stored and encrypted.

If you are using K8s as a service like AWS EKS, chances are all the etcd volumes used by the cluster are already encrypted at the disk level (data-at-rest encryption). If you are deploying your own K8s cluster, you can also encrypt secret data at rest in etcd by passing the --encryption-provider-config argument to the API server.
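For a self-managed cluster, the file passed via --encryption-provider-config is an EncryptionConfiguration object. A minimal sketch, assuming an AES-CBC key you generate yourself (for example with head -c 32 /dev/urandom | base64):

#encryption-config-sketch.yaml (illustrative)

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key> # placeholder, never commit a real key
      - identity: {} # fallback so previously unencrypted secrets can still be read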

We can go one step further by encrypting K8s secrets with AWS KMS, so that they are envelope-encrypted before they are even stored on the volumes at the disk level.
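With Terraform on EKS, this envelope encryption is a single encryption_config block on the cluster resource. A rough sketch, assuming a customer-managed KMS key and variables that are not part of the demo repository:

#illustrative sketch, not part of the demo repository

resource "aws_eks_cluster" "this" {
  name     = var.cluster_name
  role_arn = var.cluster_role_arn # hypothetical variable

  vpc_config {
    subnet_ids = var.subnet_ids   # hypothetical variable
  }

  # Envelope-encrypt Kubernetes secrets with a customer-managed KMS key
  encryption_config {
    resources = ["secrets"]
    provider {
      key_arn = aws_kms_key.eks_secrets.arn # hypothetical KMS key resource
    }
  }
}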

When secrets are created in the first place, it's pretty likely that we use K8s YAML files. But storing secrets in YAML files isn't really secure: the values are merely base64-encoded, not encrypted, so anyone who gets access to the file can read them.

It's a best practice to never store the content of secrets in files at all.

There are multiple solutions to achieve this. For example, you can inject secrets into Kubernetes Pods via Vault agent containers, or you can use secrets manager secrets in AWS EKS.

Here, we will have a look at a third (and probably simpler) solution: the external-secrets operator (https://external-secrets.io/). Basically, it is a Kubernetes operator that reads information from a third-party service like AWS Secrets Manager and automatically injects the values as Kubernetes Secrets. So, it doesn't change the way you use your secrets (no need for sidecars or annotations).

To deploy the external secrets operator:

helm repo add external-secrets https://charts.external-secrets.io

helm install external-secrets \
    external-secrets/external-secrets \
    -n external-secrets \
    --create-namespace

Then, we create a secret containing AWS credentials for the operator to use:

echo -n 'KEYID' > ./access-key
echo -n 'SECRETKEY' > ./secret-access-key
kubectl create secret generic awssm-secret --from-file=./access-key  --from-file=./secret-access-key

Next, let's create a secret store pointing to AWS Secrets Manager:

git clone git@github.com:IronCore864/k8s-security-demo.git
cd k8s-security-demo
git checkout external-secrets
kubectl apply -f secretstore.yaml
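For reference, a SecretStore targeting AWS Secrets Manager looks roughly like this; the sketch below follows the external-secrets documentation rather than the exact secretstore.yaml from the repository (the region is a placeholder, and the apiVersion depends on the operator version you installed). It authenticates with the awssm-secret created above:

#secretstore sketch (illustrative)

apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: secretstore-sample
spec:
  provider:
    aws:
      service: SecretsManager
      region: eu-west-1 # placeholder, use your own region
      auth:
        secretRef:
          accessKeyIDSecretRef:
            name: awssm-secret
            key: access-key
          secretAccessKeySecretRef:
            name: awssm-secret
            key: secret-access-key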

Finally, let's create a secret in AWS Secrets Manager:

aws secretsmanager create-secret --name secret-test --description "test" --secret-string '{"password": "root"}'

and sync it using external secrets:

kubectl apply -f externalsecret.yaml
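The ExternalSecret resource ties the store to the AWS secret and names the Kubernetes Secret to create. Roughly, and consistent with the output below (again a sketch; the repository's externalsecret.yaml may differ in details):

#externalsecret sketch (illustrative)

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: example
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: secretstore-sample
    kind: SecretStore
  target:
    name: secret-to-be-created # the Kubernetes Secret that will be created
  data:
  - secretKey: password  # key inside the created Kubernetes Secret
    remoteRef:
      key: secret-test   # the AWS Secrets Manager secret created above
      property: password # the JSON field inside the secret string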

Now, let's check:

$ kubectl get es
NAME      STORE                REFRESH INTERVAL   STATUS
example   secretstore-sample   1h                 SecretSynced

$ kubectl get secrets secret-to-be-created
NAME                   TYPE     DATA   AGE
secret-to-be-created   Opaque   1      4m7s

As we can see, the external secret is created, and it has already successfully synchronized the data from AWS Secrets Manager.

In this way, we are using a secret manager as the single source of truth, we don't risk writing the secrets down anywhere in any file, and we don't change the way we use native Kubernetes secrets.


Summary

This tutorial demonstrated how to improve Kubernetes control plane security, achieve true resource separation by using namespaces and network policies, and use Kubernetes Secrets more securely.

The following tutorial will cover the authentication, authorization, logging, and auditing part of K8s security. See you then!
