Whatever your use case (better performance, a slightly smaller AWS bill, etc.), switching to a Network Load Balancer for your Kubernetes cluster is often a good call. You get a good performance gain at this level because traffic is received on a more basic layer (layer 4 of the OSI model). Since the routing logic is usually already handled at the application level by the Ingress Controller, or by a service mesh like Istio, little is lost by moving down a layer.
In my case, for example, it was specifically to get the ability to use static IPs for my Network Load Balancer (through the Amazon Elastic IP feature).
After hours of testing and digging, here is a snippet that can be a good starting point for your switch. The idea is to first create a new Service exposing the same Deployment as the currently existing Service. You then have two load balancers reachable and forwarding traffic to the same app. This makes it easy to gracefully switch traffic through DNS, test things, and roll back quickly if needed (a TTL of 300 seconds is acceptable for that).
kind: Service
apiVersion: v1
metadata:
  name: public-ingress-nginx-nlb
  namespace: prod
  labels:
    app: public-ingress-nginx-nlb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
    service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-AAA,eipalloc-BBB,eipalloc-CCC"
    service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-AAA,subnet-BBB,subnet-CCC"
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true
    # unless you use the AWS Load Balancer Controller, this last annotation has no effect
    # and the option needs to be activated manually in the Target groups / Attributes tab (see the notes below)
spec:
  type: LoadBalancer
  selector:
    app: public-ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
Notes:
- The annotations service.beta.kubernetes.io/aws-load-balancer-eip-allocations and service.beta.kubernetes.io/aws-load-balancer-subnets are optional: drop them if you do not need to attach static IPs to your Network Load Balancer. If you do need static IPs, you first have to allocate the Elastic IPs in EC2 (they must be in your account and not currently in use). You do not need three of them, one will work, but I recommend three if this is for production traffic. For redundancy, AWS will force you to define one public subnet per Elastic IP, and each subnet must be in a different Availability Zone of the Region you are using.
- To be able to use the PROXY protocol correctly, note that the service.beta.kubernetes.io/aws-load-balancer-target-group-attributes annotation will not work if you have not set up the AWS Load Balancer Controller on your cluster. In the meantime, do not forget to activate this option through EC2: edit each Target Group of your Network Load Balancer and enable the proxy protocol v2 attribute. Your Ingress Controller also has to be configured to parse the PROXY protocol header; a sketch of that is shown right after these notes.
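For the nginx side, here is a minimal sketch, assuming you run the community Nginx Ingress Controller and that its ConfigMap is named public-ingress-nginx-controller in the prod namespace (both names are assumptions, adjust them to whatever your installation already reads). The use-proxy-protocol option makes nginx parse the PROXY protocol header added by the NLB, so it still sees the real client IPs:
kind: ConfigMap
apiVersion: v1
metadata:
  # Assumed name/namespace: use whatever your Nginx Ingress Controller is configured to read.
  name: public-ingress-nginx-controller
  namespace: prod
data:
  # Tell nginx to expect the PROXY protocol header now that
  # proxy_protocol_v2.enabled=true is active on the NLB target groups.
  use-proxy-protocol: "true"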
After a few days or weeks, if everything is working as expected, do not forget to delete your original Service: that will automatically tear down the old Classic or Application Load Balancer you were using, with no downtime or impact on the new Service linked to your Network Load Balancer. Also do not forget to update the --publish-service argument of your Nginx Ingress Controller containers, managed by your DaemonSet or Deployment spec, so that it points to the new Service.
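To illustrate that last point, here is a hedged excerpt of the pod template, assuming an nginx-ingress-controller container name (an assumption, like the prod namespace and Service name taken from the snippet above); the flag value is simply the <namespace>/<service-name> of the new Service:
# Pod template excerpt from the Nginx Ingress Controller Deployment or DaemonSet.
# Container name is illustrative; keep all your other existing flags as they are.
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-controller
          args:
            - /nginx-ingress-controller
            - --publish-service=prod/public-ingress-nginx-nlb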
Let me know if this page helped you in some way or if you have some suggestions for improvements.
Have a great day!