
Amazon EKS enhanced network policies: Admin and DNS-based controls explained

A guide to new Amazon EKS Admin and DNS (FQDN) network policies released in December: what they are, how they work, and how to use them with practical YAML.

In December, Amazon EKS introduced enhanced network policy capabilities:

  • Admin network policies for cluster-wide control with tiers (Admin and Baseline).
  • Application (DNS/FQDN) network policies for domain-based egress in EKS Auto Mode.

This post explains what changed, why it matters, how policies are evaluated, and how to start using them today with simple examples.

TL;DR

  • Admin policies give platform/security teams cluster-wide, non-overridable controls (Admin tier) and default, overridable controls (Baseline tier).
  • Application Network Policies let you allow egress to domains (FQDNs) like api.example.com or *.s3.amazonaws.com — no more IP lists.
  • Evaluation order: Admin tier (Deny/Allow/Pass) → namespace NetworkPolicy/ApplicationNetworkPolicy → Baseline tier → default deny.
  • Requirements: Kubernetes 1.29+, VPC CNI v1.21.1+ for Admin policies on EC2-based clusters; DNS/FQDN policies are available in EKS Auto Mode.
  • Start with deny-by-default at the cluster level, then allow only what you need.

What’s new in EKS:

1) Admin Network Policies (ClusterNetworkPolicy CRD)

  • Cluster-scoped policies applied across namespaces.
  • Two tiers:
    • Admin: mandatory controls, cannot be overridden.
    • Baseline: defaults that can be overridden by namespace policies.
  • Use for org-wide guardrails (e.g., block IMDS, force isolation, allow monitoring).

2) Application Network Policies (ApplicationNetworkPolicy CRD)

  • Namespace-scoped, DNS/FQDN-aware egress rules.
  • Filter traffic by domain names at L7 (e.g., *.salesforce.com, api.internal.company.com).
  • Great for SaaS access and hybrid/on-prem connectivity without tracking changing IPs.
  • Available in EKS Auto Mode.

Why this matters

  • Stable rules: domains don’t change as often as IPs.
  • Central control: security teams set guardrails once, developers work safely within them.
  • Less toil: fewer IP updates, simpler operations.

How policy evaluation works (in order):

1) Admin tier ClusterNetworkPolicy

  • Deny: highest precedence; blocks immediately.
  • Allow: accepts and stops evaluation.
  • Pass: skips remaining Admin tier, defers to namespace policies.

2) Namespace policies

  • ApplicationNetworkPolicy (DNS/FQDN) and traditional NetworkPolicy are evaluated.
  • Can further restrict, but cannot override an Admin Deny.

3) Baseline tier ClusterNetworkPolicy

  • Defaults that can be overridden by namespace policies.

4) Default deny if nothing matches.

Think of it as organization guardrails first, then team-level details, then safe defaults.
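The Baseline tier never shows up in the examples later in this post, so here is a minimal sketch of one: a cluster-wide default-deny for egress that any namespace-level NetworkPolicy or ApplicationNetworkPolicy may override. The field shapes mirror the Admin-tier examples below; treat the exact schema as an assumption and verify it against the CRD installed in your cluster.

```yaml
# Hedged sketch of a Baseline-tier policy: overridable defaults,
# evaluated only after Admin-tier and namespace-level policies.
apiVersion: networking.k8s.aws/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: baseline-default-deny-egress
spec:
  tier: Baseline        # unlike Admin, namespace policies can override this
  priority: 100
  subject:
    namespaces: {}      # all namespaces
  egress:
    - name: default-deny-egress
      action: Deny
      to:
        - networks:
            - cidr: "0.0.0.0/0"
```

With this in place, a namespace that defines its own allow rules gets them; a namespace that defines nothing falls back to deny-all egress.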


Requirements and availability

  • Kubernetes: 1.29+.
  • EKS Auto Mode: supports Admin + DNS/FQDN ApplicationNetworkPolicy.
  • EKS on EC2: supports Admin policies via Amazon VPC CNI.
    • Amazon VPC CNI: v1.21.1+.
  • DNS/FQDN-based policies are available only in EKS Auto Mode.

Quick start

Enable network policy support in the VPC CNI (EC2-based clusters):

  • Update the VPC CNI add-on (v1.21.1+ recommended).
  • Set configuration values to enable network policies.

Example (Console → EKS → Add-ons → VPC CNI → Edit → Configuration values):

{ "enableNetworkPolicy": "true" }

Verify CNI pods:

kubectl get pods -n kube-system | grep 'aws-node\|amazon'

Note:

  • ApplicationNetworkPolicy (DNS/FQDN) requires EKS Auto Mode.
  • ClusterNetworkPolicy works on both EKS Auto Mode and EC2-based EKS with the VPC CNI requirements above.

Practical examples

1) Admin: block EC2 Instance Metadata Service (IMDS) for all pods

Mandatory, cluster-wide protection that can't be overridden.

apiVersion: networking.k8s.aws/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: block-instance-metadata-service
spec:
  tier: Admin
  priority: 10
  subject:
    namespaces: {}  # all namespaces
  egress:
    - name: deny-metadata-service
      action: Deny
      to:
        - networks:
            - cidr: "169.254.169.254/32"

2) Admin: isolate a sensitive namespace from the rest

Blocks all ingress from other namespaces into a protected namespace.

apiVersion: networking.k8s.aws/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: protect-sensitive-workload
spec:
  tier: Admin
  priority: 20
  subject:
    namespaces:
      matchLabels:
        kubernetes.io/metadata.name: earth
  ingress:
    - action: Deny
      name: select-all-deny-all
      from:
        - namespaces:
            matchLabels: {}  # match all namespaces

3) Admin: allow monitoring and DNS egress everywhere

Enforce visibility and reliable DNS resolution cluster-wide.

apiVersion: networking.k8s.aws/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: cluster-wide-allow-monitoring-and-dns
spec:
  tier: Admin
  priority: 30
  subject:
    namespaces: {}
  ingress:
    - action: Accept
      name: allow-monitoring-ns-ingress
      from:
        - namespaces:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
  egress:
    - action: Accept
      name: allow-kube-dns-egress
      to:
        - pods:
            namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: kube-system
            podSelector:
              matchLabels:
                k8s-app: kube-dns
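The Accept rules above stop evaluation; the Pass action instead skips the remaining Admin tier and hands the decision to namespace-level policies. A hedged sketch (the `sandbox` namespace is hypothetical, and the field shape is assumed from the examples above): delegate egress decisions for a low-risk namespace to the teams that own it.

```yaml
apiVersion: networking.k8s.aws/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: delegate-sandbox-egress
spec:
  tier: Admin
  priority: 40
  subject:
    namespaces:
      matchLabels:
        kubernetes.io/metadata.name: sandbox   # hypothetical namespace
  egress:
    - name: defer-to-namespace-policies
      action: Pass   # skip remaining Admin-tier rules; namespace policies decide
      to:
        - networks:
            - cidr: "0.0.0.0/0"
```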

4) Application (DNS/FQDN): allow a backend to call an on-prem domain

Namespace-scoped, for EKS Auto Mode.

apiVersion: networking.k8s.aws/v1alpha1
kind: ApplicationNetworkPolicy
metadata:
  name: moon-backend-egress
  namespace: moon
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes:
    - Egress
  egress:
    - name: allow-onprem-api
      to:
        - domainNames:
            - "myapp.mydomain.com"
      ports:
        - protocol: TCP
          port: 8080

5) Application (DNS/FQDN): allow access to AWS services by domain

Great for multi-tenant setups without managing IPs.

Allow pods labeled security-tier=low to use S3 from the default namespace:

apiVersion: networking.k8s.aws/v1alpha1
kind: ApplicationNetworkPolicy
metadata:
  name: access-to-s3
  namespace: default
spec:
  podSelector:
    matchLabels:
      security-tier: low
  policyTypes:
    - Egress
  egress:
    - name: allow-access-to-s3
      action: Accept
      to:
        - domainNames:
            - "*.s3.us-east-1.amazonaws.com"

Allow one app to reach DynamoDB in another namespace:

apiVersion: networking.k8s.aws/v1alpha1
kind: ApplicationNetworkPolicy
metadata:
  name: access-to-dynamodb
  namespace: cart-services
spec:
  podSelector:
    matchLabels:
      app: checkout-service
  policyTypes:
    - Egress
  egress:
    - name: allow-dynamodb-access
      action: Accept
      to:
        - domainNames:
            - "*.dynamodb.us-east-1.amazonaws.com"

Best practices:

  • Start deny-by-default: Use Admin tier Deny for high-risk destinations; then allow only what’s required.
  • Use labels for segmentation: e.g., security-tier=high vs security-tier=low across namespaces.
  • Prefer specific domains over wildcards: *.amazonaws.com is convenient but broad; use exact endpoints when possible.
  • Layer policies: Combine Admin policies with namespace-level ApplicationNetworkPolicy and traditional NetworkPolicy for defense-in-depth.
  • Keep policies small and readable: Short, focused rules are easier to review and audit.
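To illustrate the layering point: a traditional Kubernetes NetworkPolicy (the stable networking.k8s.io/v1 API) can lock down ingress for the same backend pods whose egress is governed by the FQDN policy in example 4. The `moon` namespace and `role: backend` label reuse that example; the `role: frontend` label for the calling tier is an assumption for illustration.

```yaml
# Standard NetworkPolicy layered under the cluster-wide guardrails:
# only frontend pods in the same namespace may reach the backend on 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: moon-backend-ingress
  namespace: moon
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend   # assumed label for the calling tier
      ports:
        - protocol: TCP
          port: 8080
```

Together, the Admin tier sets non-negotiable limits, the ApplicationNetworkPolicy says which domains the backend may call, and this NetworkPolicy says who may call the backend.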

Monitoring, audit, and Route 53 DNS Firewall:

  • Enable policy decision logging and forward to CloudWatch or your SIEM.
  • Audit “denied” flows regularly to catch misconfigurations or suspicious behavior.
  • Remember: If EKS policy allows a domain but Route 53 DNS Firewall blocks it, DNS resolution fails and the connection won’t establish. These layers complement each other.

Common gotchas:

  • DNS/FQDN policies require EKS Auto Mode; they won’t work on EC2-based clusters without Auto Mode.
  • Make sure your VPC and routing allow egress where needed (Transit Gateway, NAT, firewalls).
  • Order matters: Admin Deny beats everything; Pass hands control to namespace level; Baseline applies last.
  • Validate in staging first: replicate production network/DNS behavior to avoid surprises.

Quick checklist:

  • Kubernetes 1.29+.
  • For EC2-based EKS clusters: VPC CNI v1.21.1+ and enable network policy support.
  • For DNS/FQDN egress: use EKS Auto Mode.
  • Implement Admin Deny guardrails (e.g., IMDS).
  • Add specific ApplicationNetworkPolicy rules for allowed external domains.
  • Monitor, audit, and iterate.

Wrap-up:

Amazon EKS now makes it much easier to enforce strong, centralized network security while keeping developer workflows simple:

  • Admin policies create clear, cluster-wide guardrails.
  • Application (DNS/FQDN) policies let teams express intent using domain names instead of IPs.

Adopt a deny-by-default stance, allow only what you need, and keep auditing. Your clusters will be safer — with less operational overhead.

Happy shipping!

References

AWS blog: https://aws.amazon.com/blogs/containers/enhance-amazon-eks-network-security-posture-with-dns-and-admin-network-policies/

Thank you for reading! 🖤 Until next time, keep innovating and securing your cloud journey!

🚀 Thank you for sticking with me till the end. If you have any questions or feedback about this blog, feel free to connect with me:

♻️ LinkedIn: https://www.linkedin.com/in/rajhi-saif/

♻️ X/Twitter: https://x.com/rajhisaifeddine

The end ✌🏻

🔰 Keep Learning !! Keep Sharing !! 🔰

