Paweł Swiridow for u11d

Posted on • Originally published at u11d.com

Decoupling Ingress with TargetGroupBinding in EKS

As we scale our EKS clusters, relying solely on Kubernetes Ingress objects to provision AWS Application Load Balancers (ALBs) can become restrictive. Sometimes we need to attach an EKS Service to a pre-existing ALB managed by Terraform, or we need complex routing rules that are easier to manage in HCL (Terraform) than in K8s annotations.

The AWS Load Balancer Controller supports a custom resource called TargetGroupBinding (TGB). This allows us to provision the Load Balancer and Target Group in Terraform (Infrastructure layer) and simply "bind" our Kubernetes Service to it at runtime (Application layer).

This guide walks through how to set up an AWS Target Group in Terraform and register a Prometheus instance with it using Helm values.


Prerequisites

Before proceeding, ensure your environment meets the following requirements. This architecture relies on specific AWS components to route traffic directly to Pods:

  • Amazon EKS Cluster: A running EKS cluster is required.
  • AWS VPC CNI Plugin: We will be using target_type = "ip". This mode requires the AWS VPC CNI (the default networking plugin for EKS), which assigns native AWS VPC IP addresses to Pods. This allows the ALB to route traffic directly to the Pod IP, bypassing the worker node's kube-proxy.
  • AWS Load Balancer Controller: You must have the AWS Load Balancer Controller (v2.0+) installed and running in your cluster. This controller is responsible for installing the TargetGroupBinding CRD and actively managing the registration of targets.
    • Note: Ensure the controller has the necessary IAM permissions, including elasticloadbalancing:RegisterTargets and elasticloadbalancing:DeregisterTargets.
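Before moving on, it is worth confirming that both the controller and the CRD are actually present. A quick sanity check, assuming the controller was installed into the default kube-system namespace:

```shell
# Verify the controller Deployment is running (default install location)
kubectl get deployment -n kube-system aws-load-balancer-controller

# Verify the TargetGroupBinding CRD has been installed by the controller
kubectl get crd targetgroupbindings.elbv2.k8s.aws
```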

Part 1: Infrastructure (Terraform)

First, we need to create the Target Group. The critical setting here is target_type = "ip".

When using the AWS Load Balancer Controller with the AWS VPC CNI, we want the ALB to send traffic directly to the Pod IP addresses, bypassing NodePorts.

main.tf

resource "aws_lb_target_group" "prometheus_tg" {
  name        = "eks-prometheus-tg"
  port        = 9090
  protocol    = "HTTP"
  vpc_id      = module.vpc.vpc_id

  # CRITICAL: Must be 'ip' for direct Pod routing via AWS LB Controller
  target_type = "ip"

  health_check {
    path                = "/-/healthy"
    protocol            = "HTTP"
    matcher             = "200"
    interval            = 30
    timeout             = 5
    healthy_threshold   = 3
    unhealthy_threshold = 3
  }

  tags = {
    Environment = "production"
    ManagedBy   = "terraform"
  }
}

# We need to export this ARN to pass it to our Helm chart later
output "prometheus_tg_arn" {
  value = aws_lb_target_group.prometheus_tg.arn
}

Note: Do not use aws_lb_target_group_attachment in Terraform. The AWS Load Balancer Controller running inside the cluster will manage the targets dynamically as Pods come and go.
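A Target Group on its own receives no traffic; it must be wired to a listener on the ALB. A minimal sketch of that wiring, assuming an existing `aws_lb` resource (the `aws_lb.main` name here is hypothetical):

```hcl
# Hypothetical wiring: forward traffic from the existing ALB to the new Target Group.
# Replace aws_lb.main with the reference to your own ALB resource.
resource "aws_lb_listener" "prometheus" {
  load_balancer_arn = aws_lb.main.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.prometheus_tg.arn
  }
}
```

In a real setup you would more likely add an aws_lb_listener_rule to an existing HTTPS listener rather than create a dedicated listener.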

Part 2: Security (IAM Least Privilege)

By default, the generic AWS Load Balancer Controller policy is permissive (often using Resource: *). In a production environment, especially one where multiple teams share an AWS account, we should adhere to the principle of least privilege.

We must restrict the controller so it can only register and deregister targets for this specific Target Group, preventing it from accidentally modifying other Load Balancers.

Add this policy to the IAM Role used by your Load Balancer Controller ServiceAccount:

iam.tf

data "aws_iam_policy_document" "lb_controller_tgb_policy" {
  statement {
    sid       = "AllowRegisterTargets"
    effect    = "Allow"
    actions   = [
      "elasticloadbalancing:RegisterTargets",
      "elasticloadbalancing:DeregisterTargets"
    ]
    # Scope permissions strictly to the specific Target Group ARN created above
    resources = [aws_lb_target_group.prometheus_tg.arn]
  }

  statement {
    sid       = "AllowDescribeHealth"
    effect    = "Allow"
    actions   = [
      "elasticloadbalancing:DescribeTargetHealth"
    ]
    # Elastic Load Balancing Describe* actions do not support
    # resource-level permissions, so this statement needs a wildcard.
    resources = ["*"]
  }
}

resource "aws_iam_policy" "tgb_strict_policy" {
  name        = "eks-alb-controller-tgb-restricted"
  description = "Restricted access for TargetGroupBinding to specific TGs only"
  policy      = data.aws_iam_policy_document.lb_controller_tgb_policy.json
}
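The policy still has to be attached to the IAM role assumed by the controller's ServiceAccount (via IRSA). A minimal sketch, where the `aws_iam_role.lb_controller` reference is an assumption you should adapt to your own setup:

```hcl
# Attach the restricted policy to the controller's IRSA role.
# aws_iam_role.lb_controller is hypothetical; point it at your actual role.
resource "aws_iam_role_policy_attachment" "tgb_strict" {
  role       = aws_iam_role.lb_controller.name
  policy_arn = aws_iam_policy.tgb_strict_policy.arn
}
```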

Part 3: The Glue (TargetGroupBinding CRD)

The TargetGroupBinding CRD tells the controller: "Watch this Kubernetes Service, and whenever its endpoints change, update this specific AWS Target Group."

A raw TGB manifest looks like this:

apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: prometheus-tgb
  namespace: monitoring
spec:
  serviceRef:
    name: prometheus-k8s # The name of your K8s Service
    port: 9090           # The port defined in the Service
  targetGroupARN: <YOUR_TF_OUTPUT_ARN>
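After applying the manifest, you can check that the controller has picked up the binding; its events will show whether target registration succeeded:

```shell
# Inspect the binding and the controller's reconciliation events
kubectl -n monitoring get targetgroupbinding prometheus-tgb
kubectl -n monitoring describe targetgroupbinding prometheus-tgb
```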

Part 4: Application Deployment (Prometheus Helm Values)

We don't want to apply that YAML manually. We want it version-controlled with our Prometheus deployment.

Most Prometheus Helm charts (like kube-prometheus-stack) support an extraManifests or additionalManifests property in their values.yaml. This lets us inject arbitrary K8s objects, such as our TGB, directly during the Helm install.

Here is how you configure your values.yaml to register Prometheus to the Terraform-managed Target Group.

values.yaml

# Configuration for kube-prometheus-stack
prometheus:
  service:
    port: 9090
    # Ensure the service selector matches what the TGB expects
    # usually standard, but good to verify.

# Injecting the Custom Resource Definition
extraManifests:
  - apiVersion: elbv2.k8s.aws/v1beta1
    kind: TargetGroupBinding
    metadata:
      name: prometheus-binding
      namespace: monitoring # Must match the release namespace
    spec:
      serviceRef:
        name: prometheus-kube-prometheus-prometheus # Default name in the stack
        port: 9090
      targetGroupARN: "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/eks-prometheus-tg/6d0ecf831eec9f09"
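With the values file in place, the deployment is a standard Helm release. Note that the serviceRef name above (prometheus-kube-prometheus-prometheus) assumes a release named prometheus:

```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm upgrade --install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace \
  -f values.yaml
```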

Summary of Flow

  1. Terraform creates the empty Target Group (Mode: IP).
  2. Helm deploys Prometheus + the TargetGroupBinding CR.
  3. AWS Load Balancer Controller sees the new Binding CR.
  4. Controller looks up the Pod IPs backing the Prometheus Service.
  5. Controller registers those Pod IPs into the AWS Target Group automatically.

This approach gives us the stability of Terraform-managed Infrastructure (the ALB and Listeners) with the flexibility of Kubernetes-managed endpoints.
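To confirm the loop is closed, you can ask AWS directly which targets are registered; the Pod IPs should appear with a healthy status once the health checks pass:

```shell
# Lists the registered Pod IPs and their current health state
aws elbv2 describe-target-health \
  --target-group-arn "$(terraform output -raw prometheus_tg_arn)"
```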
