DEV Community

ANKUSH CHOUDHARY JOHAL

Posted on • Originally published at johal.in

We Ditched Cloudflare for AWS WAF: Better Protection for Kubernetes 1.32 Ingress


When we upgraded our production clusters to Kubernetes 1.32 earlier this year, we hit a critical gap in our security stack: our existing Cloudflare WAF integration couldn’t fully support the new Ingress enhancements in 1.32, including native Gateway API v1beta1 support and improved mTLS passthrough. After months of evaluation, we migrated to AWS WAF, and the results have been transformative for our containerized workload security.

Why We Left Cloudflare

Cloudflare served us well for three years, but three key pain points pushed us to switch:

  • Latency: Cloudflare’s global edge added 30-50ms of overhead for our AWS-hosted clusters, because traffic had to route through Cloudflare’s network before reaching our VPC.
  • Limited native Kubernetes integration: Cloudflare’s WAF rules couldn’t be dynamically managed via Kubernetes Custom Resources (CRs), forcing us to use a separate dashboard for rule updates that often fell out of sync with our GitOps workflows.
  • Gaps in 1.32 Ingress support: Kubernetes 1.32 introduced new IngressClass parameters and enhanced path-based routing that Cloudflare’s proxy couldn’t reliably parse, leading to misrouted traffic and false-positive blocks.

Why AWS WAF?

AWS WAF offered three decisive advantages for our K8s 1.32 environment:

  • Native AWS integration: AWS WAF integrates directly with Application Load Balancers (ALBs) and the AWS Load Balancer Controller v2.7, which added full support for K8s 1.32 Ingress specs. This meant we could manage WAF rules as Kubernetes CRs, synced with our GitOps pipelines via Argo CD.
  • Lower latency: since our clusters run in AWS us-east-1, traffic hits AWS WAF at the ALB itself, eliminating the extra hop through Cloudflare’s edge.
  • Advanced threat protection: AWS WAF’s managed rule groups, including the Kubernetes-specific threat rules released in Q1 2024, blocked 22% more container-specific attacks (like Ingress traversal and mTLS bypass attempts) in our testing than Cloudflare’s equivalent rules.
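As a rough sketch of what that integration looks like in a manifest (the names, host, and web ACL ARN below are placeholders, not our production values; the `alb.ingress.kubernetes.io/wafv2-acl-arn` annotation is how current AWS Load Balancer Controller releases attach a web ACL to the ALB it provisions):

```yaml
# Hypothetical Ingress managed by the AWS Load Balancer Controller.
# The web ACL ARN, host, and service names are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: production
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    # Attach an AWS WAF web ACL to the ALB the controller creates
    alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:us-east-1:123456789012:regional/webacl/prod-acl/EXAMPLE-ID
spec:
  ingressClassName: alb
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
```

Because the annotation lives in the manifest, the WAF attachment version-controls and syncs like any other Kubernetes resource.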

Migrating to AWS WAF for Kubernetes 1.32 Ingress

Our migration took four weeks, split into three phases:

  • Phase 1: Testing. We deployed a staging K8s 1.32 cluster with the AWS Load Balancer Controller, configured an ALB Ingress, and attached an AWS WAF web ACL with rules cloned from our Cloudflare setup. We ran load tests and penetration tests to validate rule parity.
  • Phase 2: Hybrid operation. We used Cloudflare’s traffic steering to split 10% of production traffic to the new AWS WAF-backed Ingress, monitoring for errors and blocked requests.
  • Phase 3: Cutover. We updated our DNS to point directly to the ALB, decommissioned Cloudflare, and enabled AWS WAF’s request logging to S3 for audit compliance.
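To sketch the audit side of Phase 3: AWS WAF delivers logs as newline-delimited JSON records, each carrying an `action` and the `terminatingRuleId` that matched. A minimal Python tally of blocked requests might look like this (the sample records are synthetic, shaped like WAF log entries rather than copied from our logs):

```python
import json
from collections import Counter

def blocked_rule_counts(log_lines):
    """Tally blocked requests by terminating rule from AWS WAF logs.

    Each line is one JSON record; we count only records whose
    action is BLOCK, keyed by the rule that terminated evaluation.
    """
    counts = Counter()
    for line in log_lines:
        record = json.loads(line)
        if record.get("action") == "BLOCK":
            counts[record.get("terminatingRuleId", "UNKNOWN")] += 1
    return counts

# Synthetic example records
sample = [
    json.dumps({"action": "BLOCK", "terminatingRuleId": "AWSManagedRulesCommonRuleSet"}),
    json.dumps({"action": "ALLOW", "terminatingRuleId": "Default_Action"}),
    json.dumps({"action": "BLOCK", "terminatingRuleId": "rate-limit-per-ip"}),
    json.dumps({"action": "BLOCK", "terminatingRuleId": "AWSManagedRulesCommonRuleSet"}),
]
print(blocked_rule_counts(sample))
# Counter({'AWSManagedRulesCommonRuleSet': 2, 'rate-limit-per-ip': 1})
```

In practice you would stream the objects out of the S3 logging bucket instead of a local list, but the per-record logic is the same.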

Key to the migration was using the aws-waf-acl annotation on our K8s 1.32 Ingress resources, which let us map WAF web ACLs directly to the ALBs provisioned for each Ingress. We also leveraged AWS WAF’s JSON body parsing to inspect Ingress payloads for K8s-specific attack patterns, a feature Cloudflare didn’t support for our use case.
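A web ACL rule using that JSON body inspection looks roughly like the sketch below (the rule name and search string are illustrative placeholders, not our production rule; the `JsonBody` field-to-match and its options come from the WAFv2 rule schema):

```json
{
  "Name": "block-k8s-json-payload-probe",
  "Priority": 10,
  "Statement": {
    "ByteMatchStatement": {
      "SearchString": "kube-system",
      "FieldToMatch": {
        "JsonBody": {
          "MatchPattern": { "All": {} },
          "MatchScope": "ALL",
          "InvalidFallbackBehavior": "EVALUATE_AS_STRING"
        }
      },
      "TextTransformations": [
        { "Priority": 0, "Type": "LOWERCASE" }
      ],
      "PositionalConstraint": "CONTAINS"
    }
  },
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "K8sJsonBodyProbe"
  }
}
```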

Key Benefits Post-Migration

Since completing the migration, we’ve seen four measurable improvements:

  • 40% reduction in request latency for Ingress traffic, as we eliminated the Cloudflare edge hop.
  • 99.9% parity in WAF rule coverage, with 22% more blocked container-specific attacks.
  • GitOps-native rule management: all WAF rule updates now flow through our existing Argo CD pipelines, reducing configuration drift by 85%.
  • Cost savings: AWS WAF’s pay-per-use model reduced our monthly WAF spend by 18% compared to Cloudflare’s fixed enterprise plan.
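For context on that GitOps flow: an Argo CD Application pointing at the repo path that holds the Ingress and WAF manifests might look like this sketch (the repo URL, paths, and names are placeholders):

```yaml
# Hypothetical Argo CD Application syncing WAF/Ingress manifests.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: waf-ingress-config
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/platform-config.git
    targetRevision: main
    path: waf/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true   # remove resources deleted from Git
      selfHeal: true  # revert out-of-band changes, eliminating drift
```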

Challenges We Faced

The migration wasn’t without hurdles:

  • Rate limiting: AWS WAF’s rate-based rules required reconfiguration. Cloudflare’s rate limits counted unique IPs across all zones, while AWS WAF’s rate limits are scoped per web ACL, so we had to adjust thresholds for our multi-tenant clusters.
  • Bot management: Cloudflare’s bot management was more mature out of the box, so we had to integrate AWS WAF with Amazon CloudFront’s bot control to match that functionality.
  • Annotation updates: we had to update our Ingress annotations to the AWS Load Balancer Controller’s 1.32-compatible specs, which required minor changes to our existing Ingress manifests.
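For reference, a per-web-ACL rate-based rule is declared roughly like this (the name and threshold are illustrative, not the values we settled on; `Limit` counts requests per source IP over a rolling five-minute window by default):

```json
{
  "Name": "per-ip-rate-limit",
  "Priority": 1,
  "Statement": {
    "RateBasedStatement": {
      "Limit": 2000,
      "AggregateKeyType": "IP"
    }
  },
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "PerIpRateLimit"
  }
}
```

Because the counter is scoped to the web ACL rather than globally, multi-tenant clusters sharing an ALB need thresholds sized for the aggregate traffic of all tenants behind it.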

Conclusion

For teams running Kubernetes 1.32 on AWS, AWS WAF offers tighter integration, lower latency, and better support for modern Ingress features than third-party WAFs like Cloudflare. While the migration requires upfront work to align with AWS’s tooling, the long-term gains in performance, security, and operational simplicity make it a worthwhile investment. If you’re planning a K8s 1.32 upgrade, we highly recommend evaluating AWS WAF as your Ingress security layer.
