Connecting AWS ALB to Nginx Ingress with Terraform #EKS

By default, for reasons I don't care about anymore, you can't really use an ALB out of the box with the Kubernetes Nginx Ingress controller.

In this post I will show you how to create an ALB using Terraform and wire it up to an EKS cluster running the Kubernetes Nginx Ingress controller.

Prerequisites:

  • EKS Cluster with NodeGroup
  • AWS Account access (yeah)
  • K8S knowledge, I'm not gonna debug your YAML indentation errors

How does it work?

We create an Application Load Balancer, we use LB Target Groups with target_type = "instance", then we set up listeners that forward traffic to the given Target Group, aaaand the last step is attaching the autoscaling group (the EKS worker nodes) to the target groups. The Ingress controller then listens on high ports and properly receives and distributes all the traffic.

Let's break that down!

Loadbalancer parkour

Let's start with a simple Security Group. As it's good practice, we'll use data lookups in Terraform to find the suitable subnets and VPC.

For this article I have created a dev-eks VPC with a dev-eks cluster in it, with 3 private and 3 public subnets. You can use the official VPC module.
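If you're starting from scratch, a rough sketch with the official module could look like this (CIDRs, AZs and the version pin are assumptions; the important bit is the subnet tags that the lookups below filter on):

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "dev-eks"
  cidr = "10.0.0.0/16" # assumed CIDR

  azs             = ["eu-west-1a", "eu-west-1b", "eu-west-1c"] # assumed region/AZs
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  # These tags are what the data lookups (and EKS itself) use to find the subnets
  public_subnet_tags = {
    "kubernetes.io/cluster/dev-eks" = "shared"
    "kubernetes.io/role/elb"        = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/dev-eks"   = "shared"
    "kubernetes.io/role/internal-elb" = "1"
  }
}

And the lookups themselves: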

data "aws_vpc" "dev_vpc" {
  tags = {
    Name = "dev-eks"
  }
}

data "aws_subnets" "dev_private_subnets" {
  filter {
    name   = "tag:kubernetes.io/cluster/dev-eks"
    values = ["shared"]
  }

  filter {
    name   = "tag:kubernetes.io/role/internal-elb"
    values = ["1"]
  }
}

data "aws_subnets" "dev_public_subnets" {
  filter {
    name   = "tag:kubernetes.io/cluster/dev-eks"
    values = ["shared"]
  }

  filter {
    name   = "tag:kubernetes.io/role/elb"
    values = ["1"]
  }
}

This is a really comfy way of getting info about the objects your code interacts with, without too much unnecessary hardcoding.

Now, the Security Group:

resource "aws_security_group" "allow_https" {
  name        = "dev_allow_tls"
  description = "Allow HTTP/TLS inbound traffic"
  vpc_id      = data.aws_vpc.dev_vpc.id

  ingress {
    description = "HTTPS from internet"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP from internet"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "dev - allow everything from inside the VPC"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = [data.aws_vpc.dev_vpc.cidr_block]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "alb-sg-dev"
  }
}

The traffic will flow like this:

HTTP:
👽 -> Loadbalancer (port :80) -> EKS Worker nodes in NodeGroup (port :30080) -> ingress controller (we'll get to that part soon)

HTTPS:
👽 -> Loadbalancer (port :443) -> EKS Worker nodes in NodeGroup (port :30443) -> ingress controller

I'm too lazy to do a proper diagram. Imagine one is here. One thing that's easy to miss: the worker nodes' security group must also allow the ALB to reach those NodePorts, or the health checks will never go green; see the sketch below.
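Assuming you manage the node security group yourself (the SG id below is a placeholder), that rule could look like:

resource "aws_security_group_rule" "nodes_from_alb" {
  type                     = "ingress"
  from_port                = 30080
  to_port                  = 30443 # covers both NodePorts (and everything in between, fine for dev)
  protocol                 = "tcp"
  security_group_id        = "sg-0123456789abcdef0" # assumed: your worker nodes' SG
  source_security_group_id = aws_security_group.allow_https.id
  description              = "ALB to Nginx NodePorts"
}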

Loadbalancer

Now we can create the LoadBalancer! The setup is really nothing extraordinary.

resource "aws_lb" "alb_for_dev" {
  name               = "alb-ingress-dev-eks"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.allow_https.id]
  subnets            = data.aws_subnets.dev_public_subnets.ids

  enable_deletion_protection = true

  tags = {
    Environment = "dev"
    Terraform   = "true"

  }
}

While you're here, consider adding an access_logs configuration with S3. You might want it at some point.
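A minimal sketch, dropped inside the aws_lb block (the bucket name is an assumption, and the bucket needs the standard ELB log delivery policy):

  access_logs {
    bucket  = "my-alb-logs-bucket" # assumed: existing S3 bucket with ALB log delivery permissions
    prefix  = "alb-ingress-dev-eks"
    enabled = true
  }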

Target Groups

TGs route requests to all registered targets within the Target Group. A target can be an instance, an IP address, or a Lambda function; the AWS docs on target types cover the differences.

We are gonna use the instance target type here. Of course we need two TGs, one for HTTP and one for HTTPS.

resource "aws_lb_target_group" "http" {
  name        = "eks-dev-http"
  target_type = "instance"
  port        = "30080"
  protocol    = "HTTP"
  vpc_id      = data.aws_vpc.dev_vpc.id

  health_check {
    path     = "/"
    port     = "30080"
    protocol = "HTTP"
    matcher  = "200,404"
  }

  tags = {
    Environment = "dev"
  }
}

resource "aws_lb_target_group" "https" {
  name        = "eks-dev-https"
  target_type = "instance"
  port        = "30443"
  protocol    = "HTTPS"
  vpc_id      = data.aws_vpc.dev_vpc.id

  health_check {
    path     = "/"
    port     = "30443"
    protocol = "HTTPS"
    matcher  = "200,404"
  }

  tags = {
    Environment = "dev"
  }
}

This example does not have any fancy healthchecks at this level. Adjust the code to your needs, or don't, I don't care 🤷
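If you want something stricter than accepting 404s, the Nginx controller answers on a plain health endpoint you can point the check at instead (hedged: verify the path against your controller version):

  # Assumed: ingress-nginx's default server returns 200 on /healthz
  health_check {
    path     = "/healthz"
    port     = "30080"
    protocol = "HTTP"
    matcher  = "200"
  }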

Listeners

This is also important: we need to tell our ALB to listen on the standard HTTP and HTTPS ports. You'll also need an ACM certificate for the HTTPS listener.
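By the way, instead of hardcoding the certificate ARN like I do below, you can look it up (a sketch; the domain is an assumption):

data "aws_acm_certificate" "dev" {
  domain   = "dev-lb.somedomain.com" # assumed: a cert issued for your domain
  statuses = ["ISSUED"]
}

Then use data.aws_acm_certificate.dev.arn in the HTTPS listener.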

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.alb_dev.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.http.arn
  }
}

resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.alb_dev.arn
  port              = "443"
  protocol          = "HTTPS"
  certificate_arn   = "arn:aws:acm:us-east-4:4206969777:certificate/whoa-some-id-was-here"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.https.arn
  }
}

And now we attach the ASG to the LoadBalancer's Target Groups. I have a module for adding NodeGroups, so I take the ASG name from a module output, but for simplicity we'll just imagine that 'some-asg-name-123' is a valid and proper ASG.

resource "aws_autoscaling_attachment" "asg_alb_attachment_dev_http" {
  autoscaling_group_name = "some-asg-name-123"
  lb_target_group_arn    = aws_lb_target_group.http.arn
}

resource "aws_autoscaling_attachment" "asg_alb_attachment_dev_https" {
  autoscaling_group_name = "some-asg-name-123"
  lb_target_group_arn    = aws_lb_target_group.https.arn
}

Optionally, at this level I recommend adding a Route53 record: just an A (alias) record that adds a custom entry to your hosted zone. If you need to replace LBs or do any shady operations, it's easier to update the record and quickly revert the change if something catches fire.

resource "aws_route53_record" "lb" {
  zone_id = "Z035554117EW97P3ONW5G"
  name    = "dev-lb.somedomain.com"
  type    = "A"

  alias {
    name                   = aws_lb.alb_dev.dns_name
    zone_id                = aws_lb.alb_dev.zone_id
    evaluate_target_health = false
  }
}

output "alb_dev_url" {
  value = aws_lb.alb_dev.dns_name
}

And hopefully, if you've been careful enough during this relentless copy-paste, the Terraform code should be valid. Remember to run terraform fmt (and terraform validate)!

Now we need something listening on 30443 and 30080 on EKS, and that something is the Ingress controller.

Controller setup

This is fairly easy: install Nginx like you normally would, but apply the following overrides in values.yaml, or however your values file is called.

controller:
  service:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: <url-you-added-to-route53-in-previous-step>.com
    type: NodePort
    nodePorts:
      http: 30080 
      https: 30443
    externalTrafficPolicy: Cluster

  ingressClassResource:
    default: true
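If you drive Helm from Terraform anyway, a minimal sketch with the Helm provider could look like this (namespace and values path are assumptions; a plain helm upgrade --install with -f values.yaml does the same job):

resource "helm_release" "ingress_nginx" {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx" # assumed namespace
  create_namespace = true

  values = [file("${path.module}/values.yaml")]
}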

How this works: all Nodes in the given NodeGroup reserve ports 30080 and 30443, listen on them, and forward requests to the Ingress Controller. Nginx then routes each request to the proper pods based on its Host header.
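So an Ingress like this (hostname and service name are made up) gets routed purely by Host header:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: nginx
  rules:
    - host: app.somedomain.com   # assumed hostname, point it at the R53 record
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-svc   # assumed service
                port:
                  number: 80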

My comments

I highly recommend creating that R53 entry and making it work with ExternalDNS, if you don't do that already.

One limitation of the current code is that it attaches a single ASG from one NodeGroup to the LB's Target Groups. If your cluster has more NodeGroups, you either schedule Nginx on selected Nodes only, or do some for_each shenanigans with Terraform, as sketched below. I have a single NodeGroup, so I have no issues.
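A sketch of those shenanigans, with made-up ASG names:

locals {
  node_group_asgs = ["some-asg-name-123", "some-asg-name-456"] # assumed: one ASG per NodeGroup
}

resource "aws_autoscaling_attachment" "http" {
  for_each               = toset(local.node_group_asgs)
  autoscaling_group_name = each.value
  lb_target_group_arn    = aws_lb_target_group.http.arn
}

resource "aws_autoscaling_attachment" "https" {
  for_each               = toset(local.node_group_asgs)
  autoscaling_group_name = each.value
  lb_target_group_arn    = aws_lb_target_group.https.arn
}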

The same thing should work with any other Ingress controller; you just need to make it use NodePort.

Cheers!
