
Marcelo Andrade for AWS Community Builders


EKS and NetworkPolicies: the story so far

On my monthly "let's keep up with AWS news" live stream, one piece of news caught my eye as a game changer, and it was this one:

Let's have a peek at what this is about and why it is a game changer.

Kubernetes NetworkPolicies

NetworkPolicies are often overlooked, but they are arguably the major security feature of Kubernetes: they grant you the ability to control the connectivity between your applications (and to the rest of the world!).

Many people get the feeling that namespaces provide isolation for the applications running in them, but namespaces do not provide network isolation by default.

You can create a namespace app1 and grant your Team A access to deploy their applications there. But if you allow Team B to run jobs in a namespace app2, even though they will not be able to modify app1's deployments, their jobs will still be able to connect to them:

# Checking IPs for the applications:
$ kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}: {.status.podIP}{"\n"}{end}' |
fgrep app
app1/backend-84bf889f7f-nrxbr: 192.168.130.72
app1/frontend-54d8796d8c-2fzsk: 192.168.140.197
app2/job-f8dd4484b-kmxkm: 192.168.138.201

$ for i in 192.168.130.72 192.168.140.197; do
    kubectl -n app2 exec job-f8dd4484b-kmxkm -- curl $i -so /dev/null -w '%{http_code}\n'
  done
200
200

This may not sound like a problem to you, but it is certainly not the most desirable situation, especially if most of your security investment sits in the LoadBalancers/Web Application Firewalls outside the security perimeter of your cluster - it turns your cluster into a lateral movement extravaganza if someone unintended manages to get access to it.

Securing the application

In order to block traffic from other namespaces to your application, you have to apply a NetworkPolicy that denies all incoming traffic:

$ echo '---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app1
spec:
  podSelector: {}
  policyTypes:
  - Ingress
' | kubectl apply -f -

The syntax might look weird at first glance, but this policy applies to all pods (podSelector: {}) and allows nothing, because there is no ingress block defined at the top level of the spec.

After this policy is applied, no one will be able to access pods in namespace app1:

$ for i in 192.168.130.72 192.168.140.197; do
    kubectl -n app2 exec job-f8dd4484b-kmxkm -- curl --connect-timeout 2 $i -so /dev/null -w '%{http_code}\n'
  done
000
command terminated with exit code 28
000
command terminated with exit code 28

The problem is that not even applications in app1 will be able to access each other. Once a pod is selected by any NetworkPolicy of a given type (Ingress or Egress), all traffic of that type that is not specifically allowed will be denied.

So if you have a frontend/backend combo like below:

$ kubectl -n app1 get pods -o name
pod/backend-84bf889f7f-nrxbr
pod/frontend-54d8796d8c-2fzsk

You should allow incoming connections to backend only from the frontend deployment, while frontend gets its connections from a Load Balancer outside the cluster.

To allow frontend to access backend, one would leverage Kubernetes labels to make the rule dynamic:

$ kubectl -n app1 get pods --show-labels | tr -s ' ' | cut -f1,6 -d' '
NAME LABELS
backend-84bf889f7f-nrxbr app=backend,pod-template-hash=84bf889f7f
frontend-54d8796d8c-2fzsk app=frontend,pod-template-hash=54d8796d8c

Backend pods will always have the label app=backend, and frontend pods will have the label app=frontend; this lets the NetworkPolicy controller update the rules every time a pod is created (or destroyed).

# Connecting to the backend *service* (not a pod IP) from the frontend pod
$ kubectl -n app1 exec deploy/frontend -- curl --connect-timeout 2 -so /dev/null http://backend  -w '%{http_code}'
000
command terminated with exit code 28

$ echo '---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-to-backend
  namespace: app1 
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 80
' | kubectl apply -f -

# Retesting:
$  kubectl -n app1 exec deploy/frontend -- curl --connect-timeout 2 -so /dev/null http://backend  -w '%{http_code}'
200

Unfortunately, there is no easy way to restrict access to the frontend pods to just the load balancers, because the load balancers are not hosted inside the cluster.

(You can use Security Groups for Pods for that!)
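
For completeness, a Security Groups for Pods policy looks roughly like this - a minimal sketch, assuming your cluster already has the feature enabled, and with a placeholder security group ID:

$ echo '---
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: frontend-sgp
  namespace: app1
spec:
  podSelector:
    matchLabels:
      app: frontend
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0   # placeholder - the SG your load balancer is allowed to reach
' | kubectl apply -f -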

If you stick to NetworkPolicies, the best you can do is to limit connections to those coming from the subnets where the load balancers will create their ENIs (or from their IPs, if you feel bold!):

$ echo '---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: lb-to-frontend
  namespace: app1
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 192.168.0.0/19
        - ipBlock:
            cidr: 192.168.32.0/19
        - ipBlock:
            cidr: 192.168.64.0/19
      ports:
        - protocol: TCP
          port: 80
' | kubectl apply -f - 

Of course, anything else that is created in those subnets will be able to access the frontend pods in namespace app1.

The backend pod, on the other hand, won't be able to start any connection to the frontend:

$ kubectl -n app1 exec -it deploy/backend -- curl frontend:80 -so /dev/null -w '%{http_code}' --connect-timeout 2
000

Kubernetes NetworkPolicies allow an incredible degree of microsegmentation with little to no effort, even allowing the devs themselves to take responsibility for translating their integrations into a declarative form.

But of course, there is always a catch.

NetworkPolicy Controller

Unfortunately, netpols are one of the few native Kubernetes resources that do not have a default controller assigned to them - the other major one being Ingresses.

Even though it's clearly stated in the docs, as quoted below:

Network policies are implemented by the network plugin. To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect.
source

It's still easy for people to miss it, especially if they are less Kubernetes-savvy than they should be when using this type of technology.

And yes, I've been asked "why are my netpols not working?" before. People just assume they should work.
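
A quick sanity check when in doubt is to look at what is actually running in kube-system - on a cluster with only the default VPC CNI (before the change described below), you would typically see just aws-node, kube-proxy and CoreDNS, and nothing that enforces policies:

# If no policy engine shows up here, NetworkPolicies are silently ignored
$ kubectl -n kube-system get daemonsets,deployments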

So, if you installed a stock EKS cluster and reproduced the same configuration I have listed here, nothing would work.

At least, that was true until recently, because things have changed!

EKS and NetworkPolicies

Until August 2023, the only way to use NetworkPolicies on EKS was to deploy third-party software called Project Calico. It's a full-fledged CNI, but one would enable only the policy part, as described in the official EKS docs.
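
For reference, that Calico route usually boiled down to installing the Tigera operator in policy-only mode, roughly along these lines (chart repo, version and values are from memory, so double-check them against the Calico and EKS docs before using):

$ helm repo add projectcalico https://docs.tigera.io/calico/charts
$ helm install calico projectcalico/tigera-operator \
    --namespace tigera-operator --create-namespace \
    --set installation.kubernetesProvider=EKS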

But now, things have changed! If you install a new EKS cluster with VPC CNI version 1.14.0 or above, it supports NetworkPolicies natively!

All you have to do is create your EKS cluster with the following in your eksctl YAML:

...
addons:
- name: vpc-cni 
  version: 1.14.0
  configurationValues: |-
    enableNetworkPolicy: "true"    
  attachPolicyARNs:
  - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
...

And that's it! Awesome news for EKS users!
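
If your cluster already exists, you don't have to recreate it: updating the vpc-cni addon with the same configuration should do. Here is a sketch with the AWS CLI, where the cluster name and the exact addon version string (eksbuild suffix included) are just examples - pick any version that is 1.14.0 or newer:

$ aws eks update-addon \
    --cluster-name my-cluster \
    --addon-name vpc-cni \
    --addon-version v1.14.0-eksbuild.3 \
    --configuration-values '{"enableNetworkPolicy": "true"}' \
    --resolve-conflicts PRESERVE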

Final notes

I believe it's worth noting that the AWS VPC CNI's native NetworkPolicy implementation makes use of eBPF and not iptables, unlike many of the other solutions available.
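
If you're curious, you can peek at some of that machinery yourself - assuming VPC CNI 1.14+ with enableNetworkPolicy turned on, the aws-node DaemonSet gains an extra node agent container, and each NetworkPolicy gets translated into PolicyEndpoint custom resources that the eBPF agent consumes:

# List the containers of the aws-node DaemonSet (expect a node agent alongside aws-node)
$ kubectl -n kube-system get daemonset aws-node \
    -o jsonpath='{.spec.template.spec.containers[*].name}{"\n"}'

# The intermediate objects generated from the NetworkPolicies in app1
$ kubectl -n app1 get policyendpoints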

It does not support port translation in Services:

For any of your Kubernetes services, the service port must be the same as the container port. If you're using named ports, use the same name in the service spec too.
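
In practice, that means keeping the Service port equal to the container port, as in this minimal sketch (names are just illustrative):

$ echo '---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: app1
spec:
  selector:
    app: backend
  ports:
    - name: http      # if you use named ports, keep the same name in the pod spec
      protocol: TCP
      port: 80        # the Service port...
      targetPort: 80  # ...must match the container port - no translation
' | kubectl apply -f -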

For a brief period during creation, the pod will not have any restrictions applied to it:

The Amazon VPC CNI plugin for Kubernetes configures network policies for pods in parallel with the pod provisioning. Until all of the policies are configured for the new pod, containers in the new pod will start with a default allow policy. All ingress and egress traffic is allowed to and from the new pods unless they are resolved against the existing policies.

It's also not supported on Fargate or Windows nodes.

