DEV Community

How Cloudflare Proxy Silently Broke My Lambda ALB Communication

TL;DR

Both api.hoge.com and backend.hoge.com had Cloudflare Proxy enabled.

Requests from the browser were already passing through Cloudflare Edge before reaching Lambda. When Lambda then called backend.hoge.com, DNS resolved to Cloudflare's Anycast IP — sending the request right back into Cloudflare Edge.

This created a Cloudflare loop: Cloudflare → Lambda → Cloudflare, resulting in:

```
Cloudflare Error 1000
DNS points to prohibited IP
```
```mermaid
flowchart LR
    Browser -->|①| CF1[Cloudflare Edge]
    CF1 -->|②| Lambda
    Lambda -->|③ backend.hoge.com = Cloudflare IP| CF2[Cloudflare Edge]
    CF2 -->|❌ Error 1000| ALB
```

As a quick fix, I turned Proxy OFF for backend.hoge.com. The long-term plan is to move Lambda → ALB communication inside the VPC.

The Error

Calling the API from the frontend returned a 403 Forbidden. The response body contained:

```
Cloudflare Error 1000
DNS points to prohibited IP
```

API Gateway and Lambda appeared to be working fine, and there were no logs on the ECS side at all. My first instinct was to blame the ALB or WAF.

Architecture

```mermaid
flowchart LR
    Browser -->|HTTPS| APIGW[API Gateway]
    APIGW --> Lambda
    Lambda -->|HTTPS backend.hoge.com| ALB
    ALB --> ECS
    ECS --> RDS
```

Lambda acts as a BFF (Backend for Frontend). The backend runs on ALB + ECS — a legacy constraint we hadn't moved away from yet. Lambda was calling that ALB over HTTPS using the backend.hoge.com domain.
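The BFF layer is essentially a thin HTTPS pass-through to the backend. Our actual handler isn't shown in this post, but a minimal sketch of the pattern looks like this (`BACKEND_BASE`, the event shape, and the injectable `opener` parameter are illustrative assumptions, not our real code):

```python
import urllib.request

# Assumed backend hostname -- resolved via public DNS, which is exactly
# where the Cloudflare proxy problem enters the picture.
BACKEND_BASE = "https://backend.hoge.com"

def handler(event, context, opener=urllib.request.urlopen):
    """Minimal BFF: forward the incoming request path to the backend over HTTPS."""
    path = event.get("rawPath", "/")
    with opener(BACKEND_BASE + path) as resp:
        return {"statusCode": resp.status, "body": resp.read().decode()}
```

The key point is that `BACKEND_BASE` is resolved through public DNS on every cold connection, so whatever Cloudflare's DNS returns for that name dictates where the request actually goes.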

Troubleshooting

Given the 403 status code, my initial suspects were:

  • WAF rules on the ALB
  • Security Group restrictions
  • Authorization logic in the ECS application
  • API Gateway authorizer configuration

No logs in ECS

As I dug deeper, something felt off. If the request was reaching the ALB, there should be something in the ECS logs. But there was nothing — not a single entry.

That pointed to the request never reaching the ALB in the first place.

Checking with curl

```bash
curl -v https://backend.hoge.com
```

Looking at the response headers:

```
server: cloudflare
```

And the response body contained DNS points to prohibited IP. That's when it clicked — Cloudflare itself was returning the 403, not our application.

Reading the docs

Cloudflare's official documentation covers the causes of Error 1000:

https://developers.cloudflare.com/support/troubleshooting/http-status-codes/cloudflare-1xxx-errors/error-1000/

According to the docs, this error occurs when an A record points to a Cloudflare-owned IP, or when a request is routed through another reverse proxy and ends up back at Cloudflare a second time.

Since our ALB was configured via a CNAME (not an A record), I initially dismissed this as "not applicable." I knew how Proxy ON worked in theory, but the pressure of an ongoing incident made me overlook it. If I had checked the docs more carefully, I would have found the root cause much sooner.

Root Cause

Our Cloudflare DNS settings looked like this:

| Record | Proxy |
| --- | --- |
| api.hoge.com | ON |
| backend.hoge.com | ON |

It was a Cloudflare loop

Because api.hoge.com had Proxy ON, browser requests were already passing through Cloudflare Edge before arriving at Lambda.

When Lambda called backend.hoge.com — also Proxy ON — DNS returned Cloudflare's Anycast IP. The request looped back into Cloudflare Edge, creating a Cloudflare → Lambda → Cloudflare loop.

```mermaid
flowchart LR
    Browser -->|①| CF1[Cloudflare Edge api.hoge.com]
    CF1 -->|②| Lambda
    Lambda -->|③ backend.hoge.com = Cloudflare IP| CF2[Cloudflare Edge backend.hoge.com]
    CF2 -->|❌ Error 1000| ALB
    ALB --> ECS
    ECS --> RDS
```

Why does this trigger Error 1000?

Cloudflare returns Error 1000 when it detects a loop, or when the resolved origin IP falls into one of these categories:

  • A Cloudflare-owned IP (loop prevention)
  • RFC 1918 private addresses (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16)
  • Loopback address (127.0.0.1)

In this case, backend.hoge.com resolved to a Cloudflare IP, so Cloudflare treated it as a request targeting itself and returned DNS points to prohibited IP.
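These three categories can be checked mechanically with Python's `ipaddress` module. The Cloudflare ranges below are a small illustrative subset of the list published at cloudflare.com/ips (which can change over time) — this sketch shows the classification logic, not what Cloudflare actually runs:

```python
import ipaddress

# Illustrative subset of Cloudflare's published IPv4 ranges.
# See cloudflare.com/ips for the authoritative, current list.
CLOUDFLARE_RANGES = [
    ipaddress.ip_network(net)
    for net in ("104.16.0.0/13", "172.64.0.0/13", "173.245.48.0/20")
]

# The three RFC 1918 private ranges.
RFC1918 = [
    ipaddress.ip_network(net)
    for net in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")
]

def classify_origin_ip(ip_str: str) -> str:
    """Return why a resolved origin IP would trigger Error 1000, or 'ok'."""
    ip = ipaddress.ip_address(ip_str)
    if any(ip in net for net in CLOUDFLARE_RANGES):
        return "cloudflare-owned (loop prevention)"
    if any(ip in net for net in RFC1918):
        return "RFC 1918 private address"
    if ip.is_loopback:
        return "loopback address"
    return "ok"
```

In our incident, the address `backend.hoge.com` resolved to would have fallen into the first bucket.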

Fix #1: Quick Fix

I turned Proxy OFF for backend.hoge.com.

| Record | Proxy |
| --- | --- |
| backend.hoge.com | OFF |

With Proxy OFF, DNS returns the actual origin CNAME (the ALB domain) instead of a Cloudflare IP. The loop was broken and requests started flowing normally again.

```mermaid
flowchart LR
    Lambda -->|backend.hoge.com = ALB domain| ALB
    ALB --> ECS
    ECS --> RDS
```

What I Learned

Cloudflare is more than just DNS

Cloudflare combines authoritative DNS, reverse proxy, CDN, and WAF into one. When Proxy is ON, all traffic is routed through Cloudflare Edge before reaching your origin.

This is great for browser-facing traffic — you get DDoS protection, caching, and WAF for free. But for server-to-server communication, it can cause unexpected behavior like this.

Proxy ON vs OFF changes the entire traffic path

```mermaid
flowchart LR
    subgraph "Proxy OFF"
        C1[Client] -->|ALB domain| ALB1[ALB]
    end
    subgraph "Proxy ON"
        C2[Client] --> CF[Cloudflare Edge] --> ALB2[ALB]
    end
```

Match Proxy settings to your use case

| Use case | Proxy |
| --- | --- |
| Browser → API | ON (CDN + WAF benefits) |
| Server → Server | OFF (no benefit, potential loops) |

If a server calls a Proxy ON domain while the calling service is itself behind Cloudflare, you risk creating exactly this kind of loop.
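That rule can be expressed as a small lint over your DNS configuration. This is purely illustrative — the record data here is a plain dict snapshot, not pulled from Cloudflare's API:

```python
# Illustrative snapshot: record name -> Cloudflare Proxy status.
PROXIED = {
    "api.hoge.com": True,      # browser-facing entry point: Proxy ON is fine
    "backend.hoge.com": True,  # server-to-server target: loop risk with Proxy ON
}

def loop_risks(proxied: dict, server_calls: list) -> list:
    """Flag server-to-server calls where the target has Proxy ON while the
    calling service is itself reached through a Proxy ON domain."""
    return [
        (caller, target)
        for caller, target in server_calls
        if proxied.get(caller) and proxied.get(target)
    ]
```

Running `loop_risks(PROXIED, [("api.hoge.com", "backend.hoge.com")])` flags exactly the path that bit us; flipping `backend.hoge.com` to `False` clears it, which mirrors the quick fix above.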

Fix #2: Long-term Fix

The quick fix works, but routing Lambda → ALB traffic through public DNS was never ideal. Moving this communication inside the VPC is the proper solution.

```mermaid
flowchart LR
    Browser -->|HTTPS| APIGW[API Gateway]
    APIGW --> Lambda
    subgraph VPC
        Lambda -->|VPC-internal| ALB
        ALB --> ECS
        ECS --> RDS
    end
```

A few things to keep in mind for this migration:

  • Lambda needs to be deployed inside the VPC (if it isn't already)
  • Cold start impact from VPC placement is minimal these days — this used to be a real concern, but AWS has significantly improved it
  • You'll need to explicitly allow Lambda → ALB traffic in your security groups

This approach eliminates the Cloudflare dependency entirely for internal traffic, reduces latency, and simplifies the network topology.

Why Did This Break Suddenly?

The Proxy ON configuration had been in place for a long time. So why did this only start failing now?

I checked Cloudflare's changelog but couldn't find any official announcement around this time about changes to Error 1000 detection or proxy behavior.

Searching the Cloudflare community, there are scattered reports of "Error 1000 suddenly appearing" — but in each case, the cause turned out to be a specific configuration change on the user's end (a DNS record update, an IP change on the hosting side, etc.).

I'm still looking into Cloudflare's Audit Log for anything that might explain this. I'll update this post if I find something.

Summary

If a request passes through a Proxy ON domain, and that request then calls another Proxy ON domain, you get a Cloudflare loop — and Error 1000.

For server-to-server communication, either turn Proxy OFF for internal domains, or better yet, keep that traffic inside the VPC entirely.
