By [Author Name] | Infrastructure & proxy automation, [X] years | Last tested on Ubuntu 24.04 LTS, Docker 27.5, Puppeteer 23.x, Playwright 1.50
Last Updated: February 2026 | Next Review: August 2026
You configured your Docker container to route traffic through a residential proxy. You ran curl ifconfig.me from inside the container and saw the proxy IP. Everything looked clean — until Cloudflare served you a challenge page, or worse, your real IP showed up in the target's access logs.
The gap between "proxy configured" and "zero-leak" is where most setups fail. A Docker container on a default bridge network has multiple egress paths, and not all of them respect your HTTP_PROXY environment variable. DNS queries can slip out through the host resolver. If you are running a headless browser for tasks like ad verification or price monitoring, WebRTC STUN requests can expose your origin IP even when every other channel is locked down.
This guide closes every one of those gaps. By the end, you will have a docker-compose.yml that forces all container traffic through a residential proxy, iptables rules that kill connectivity if the proxy drops, DNS locked to the proxy tunnel, WebRTC disabled at the browser level, and a verification script that proves all four layers are working.
Prerequisites
Before you start, confirm you have:
- Docker Engine 24.0+ and Docker Compose V2 (run docker --version and docker compose version to check)
- A Linux host with iptables support (Ubuntu 22.04+, Debian 12+, or equivalent — also works under WSL2 on Windows with iptables enabled)
- A residential proxy account with endpoint credentials (host, port, username, password) — you will need both HTTP and SOCKS5 endpoints if your workflow includes headless browsers
- Root or sudo access on the host (required for iptables rules)
If you do not have a residential proxy account yet, any provider that offers rotating and sticky sessions over both HTTP and SOCKS5 will work. The configuration examples later in this guide use a generic placeholder format (proxy.yourprovider.com:port) — replace with your actual credentials.
How Docker Leaks Your Real IP: Three Vectors
Understanding where leaks happen is the prerequisite for plugging them. A Docker container on a user-defined bridge network routes outbound traffic through the host's network stack via masquerading (SNAT). The host kernel assigns an ephemeral source port and forwards the packet out through the default gateway. This is the normal path, and it has three distinct leak vectors.
Vector 1: Direct TCP/HTTP egress. Setting HTTP_PROXY and HTTPS_PROXY as environment variables only works if the application inside the container honors those variables. Most HTTP client libraries (Python requests, Node axios) do. But lower-level tools, raw socket connections, and some automation frameworks ignore them entirely. Any request that skips the proxy exits through the host's real IP.
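You can see this env-var dependence directly from Python. The standard library's proxy discovery (which requests builds on) reads HTTP_PROXY/HTTPS_PROXY from the environment, while a raw socket has no concept of a proxy at all. A minimal sketch, using a placeholder endpoint:

```python
import os
import socket
import urllib.request

# Clear any lowercase variants so the lookup below is deterministic
for var in ("http_proxy", "https_proxy"):
    os.environ.pop(var, None)

# Simulate the container environment (placeholder endpoint, not a real proxy)
os.environ["HTTP_PROXY"] = "http://USER:PASS@proxy.yourprovider.com:8080"
os.environ["HTTPS_PROXY"] = "http://USER:PASS@proxy.yourprovider.com:8080"

# High-level clients (requests, urllib) discover the proxy this way...
proxies = urllib.request.getproxies()
print(proxies["http"])  # the proxy URL — this traffic is covered

# ...but a raw socket ignores HTTP_PROXY entirely: connect() would go
# straight out through the host's default route, past the proxy.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# s.connect(("example.com", 80))  # would egress via the host's real IP
s.close()
```

This is exactly why the iptables kill switch in Step 3 exists: env vars cover only the cooperative clients.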
Vector 2: DNS resolution. On a user-defined bridge network, Docker runs an embedded DNS server at 127.0.0.11 that handles container-to-container name resolution. But external domain lookups get forwarded to whatever DNS servers the host is configured with — typically your ISP's resolvers or a public resolver like 8.8.8.8. Those DNS queries leave from the host's real IP, revealing which domains your container is resolving. Even if your HTTP traffic goes through the proxy, the DNS trail exposes your activity.
Vector 3: WebRTC STUN requests. If your container runs a headless Chromium instance (Puppeteer, Playwright, Selenium), the browser's WebRTC stack sends STUN requests to discover the machine's external IP address. These STUN requests use UDP and bypass HTTP proxy settings entirely. A target site that checks WebRTC will see your real IP alongside the proxy IP — an instant red flag.
A zero-leak setup must close all three vectors simultaneously. If you only fix one, the other two still expose you.
We learned this the hard way. Our initial price-monitoring stack had HTTP_PROXY set and passed a basic curl check from inside the container — the proxy IP came back as expected. Three days in, we noticed our host IP appearing in Cloudflare's firewall event logs on a target site. It turned out the Puppeteer container's WebRTC STUN requests had been leaking our origin the entire time, and we had no idea until the target started blocking us.
Architecture Overview
The zero-leak stack has four layers, each blocking a specific leak vector:
┌─────────────────────────────────────────────┐
│ SCRAPER CONTAINER │
│ ┌───────────────────────────────────────┐ │
│ │ App (Puppeteer / Python / Node) │ │
│ │ • HTTP_PROXY + HTTPS_PROXY set │ │ ← Layer 1: Proxy routing
│ │ • WebRTC disabled via init script │ │ ← Layer 4: Fingerprint
│ └───────────────────────────────────────┘ │
│ DNS: locked to proxy-safe resolver (DoH) │ ← Layer 2: DNS lock
├─────────────────────────────────────────────┤
│ HOST iptables (DOCKER-USER chain) │
│ ALLOW → proxy IP + DoH DNS only │ ← Layer 3: Kill switch
│ DROP → everything else │
└─────────────────────────────────────────────┘
Step 1: docker-compose.yml with Residential Proxy
Create a project directory and add this docker-compose.yml. This example uses a Python-based scraper container, but the proxy configuration applies to any image.
services:
  scraper:
    image: python:3.12-slim
    container_name: scraper-zero-leak
    environment:
      # Replace with your residential proxy credentials
      HTTP_PROXY: "http://USER:PASS@proxy.yourprovider.com:PORT"
      HTTPS_PROXY: "http://USER:PASS@proxy.yourprovider.com:PORT"
      ALL_PROXY: "socks5h://USER:PASS@proxy.yourprovider.com:SOCKS_PORT"
      NO_PROXY: "localhost,127.0.0.1"
    dns:
      - 1.1.1.1
      - 1.0.0.1
    networks:
      - isolated
    volumes:
      - ./app:/app
    working_dir: /app
    command: ["python", "main.py"]

networks:
  isolated:
    driver: bridge
    internal: false
Key points in this configuration:
- ALL_PROXY with socks5h://: The h suffix tells the client to send DNS queries through the SOCKS5 proxy as well, not resolve them locally. This is critical — socks5:// (without the h) resolves DNS on the host side, which leaks.
- IP whitelist auth alternative: If your provider uses IP whitelist authentication instead of username/password, first add your server's public IP in your provider's dashboard, then use an endpoint without the USER:PASS@ prefix: "http://proxy.yourprovider.com:PORT". The proxy authenticates by recognizing your server's IP.
- NO_PROXY: Only excludes localhost. Do not add your target domains here — that would bypass the proxy for exactly the traffic you need to protect.
- dns directive: Overrides the default DNS inherited from the host. We set Cloudflare's 1.1.1.1 here as a fallback for non-socks5h traffic.
- User-defined isolated network: Avoids the default bridge network, giving you more control over routing and iptables filtering.
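A cheap guard against the socks5:// vs socks5h:// mistake is to validate the environment at startup, before any request is made. A sketch of what main.py might assert (env var names match the compose file above; the check itself is an illustrative addition, not part of any library):

```python
import os
from urllib.parse import urlparse

def check_proxy_env() -> None:
    """Fail fast if proxy env vars are missing or would leak DNS."""
    for var in ("HTTP_PROXY", "HTTPS_PROXY"):
        if not os.environ.get(var):
            raise RuntimeError(f"{var} is not set — refusing to start")
    scheme = urlparse(os.environ.get("ALL_PROXY", "")).scheme
    # socks5:// resolves DNS locally (leak); socks5h:// resolves at the proxy
    if scheme == "socks5":
        raise RuntimeError("ALL_PROXY uses socks5:// — switch to socks5h:// "
                           "so DNS is resolved at the proxy, not locally")

# Example values mirroring the compose file (placeholders, not real creds)
os.environ["HTTP_PROXY"] = "http://USER:PASS@proxy.yourprovider.com:8080"
os.environ["HTTPS_PROXY"] = "http://USER:PASS@proxy.yourprovider.com:8080"
os.environ["ALL_PROXY"] = "socks5h://USER:PASS@proxy.yourprovider.com:1080"
check_proxy_env()  # passes silently when the config is safe
print("proxy env OK")
```

Crashing at startup is much cheaper than discovering a DNS leak in a target's logs three days later.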
Choosing Between Rotating and Sticky Sessions
Most residential proxy providers let you control session behavior through the username field or a session parameter in the endpoint URL. The right choice depends on your task:
- Rotating proxy (new IP per request): Use for high-volume data collection where each request is independent — price monitoring across thousands of product pages, search result sampling, ad verification across geos.
- Sticky session (same IP for a set duration): Use when you need session continuity — navigating multi-page flows, maintaining login state, or any task where consecutive requests from different IPs would trigger security flags.
Configure this in your proxy credentials. For example, many providers use a format like user-session-abc123:pass@endpoint for sticky or user-rotate:pass@endpoint for rotating. Proxy001, for instance, supports sticky sessions up to 60 minutes with automatic rotation as the default — see their developer documentation for the exact endpoint format. Check your provider's docs for the specific parameter syntax.
Expect higher latency than datacenter proxies — benchmark your endpoint and set timeouts accordingly.
Step 2: Lock Down DNS
The dns directive in docker-compose.yml sets the container's /etc/resolv.conf to point at the specified servers. But this alone is not sufficient — the DNS queries to 1.1.1.1 still leave from the host's real IP unless they travel through the proxy.
The socks5h:// protocol in ALL_PROXY handles this for SOCKS5-aware applications: DNS resolution happens at the proxy server, not locally. But not every tool in your container will use ALL_PROXY. For defense in depth, add a second layer.
Option A: Force DNS through the SOCKS5 proxy at the OS level.
Add an entrypoint script that configures the container's DNS to route through the proxy tunnel:
#!/bin/bash
# entrypoint.sh — run as root before dropping to app user
# Point DNS at localhost; we'll tunnel it through the proxy
echo "nameserver 127.0.0.1" > /etc/resolv.conf
# Start a lightweight DNS-over-HTTPS forwarder
# dnsproxy is a ~5MB static binary from AdguardTeam (https://github.com/AdguardTeam/dnsproxy)
/usr/local/bin/dnsproxy \
--upstream "https://1.1.1.1/dns-query" \
--bootstrap "1.1.1.1" \
--listen "127.0.0.1" \
--port 53 \
--all-servers \
&
# Hand off to the main application
exec "$@"
Update your docker-compose.yml to use this entrypoint:
services:
scraper:
# ... previous config ...
entrypoint: ["/app/entrypoint.sh"]
command: ["python", "main.py"]
You will need to include dnsproxy in your container image. Add this to your Dockerfile:
FROM python:3.12-slim
# Pin a known version — check https://github.com/AdguardTeam/dnsproxy/releases for latest
ARG DNSPROXY_VERSION=v0.79.0
RUN apt-get update && apt-get install -y wget \
&& wget -qO /tmp/dnsproxy.tar.gz \
"https://github.com/AdguardTeam/dnsproxy/releases/download/${DNSPROXY_VERSION}/dnsproxy-linux-amd64-${DNSPROXY_VERSION}.tar.gz" \
&& tar -xzf /tmp/dnsproxy.tar.gz -C /tmp/ \
&& mv /tmp/linux-amd64/dnsproxy /usr/local/bin/dnsproxy \
&& chmod +x /usr/local/bin/dnsproxy \
&& rm -rf /tmp/dnsproxy.tar.gz /tmp/linux-amd64 \
&& apt-get purge -y wget && apt-get autoremove -y
COPY app/ /app/
WORKDIR /app
Option B: Rely on socks5h:// and iptables.
If your application exclusively uses socks5h:// for all connections, DNS is already tunneled. Combine this with the iptables kill switch (next step), which blocks any DNS packet that tries to leave directly. This approach requires no additional software but assumes all tools in the container respect ALL_PROXY.
Option A is more robust. Option B is simpler. For production workloads where a DNS leak could compromise the operation, use Option A.
Compatibility note: If you use Option A with the iptables kill switch in Step 3, you must add whitelist rules for 1.1.1.1 and 1.0.0.1 on port 443 — otherwise the kill switch blocks dnsproxy's outbound DoH requests and DNS fails. Step 3 includes these rules with an explanation of the trade-off.
Step 3: iptables Kill Switch
This is the most important safety net. If the proxy endpoint goes down, if credentials expire, or if any application in the container ignores proxy settings and tries to connect directly, the kill switch ensures the traffic is dropped, not routed through the host's real IP.
The rules go in the DOCKER-USER chain, which Docker evaluates before its own forwarding rules. You need the container's subnet and the proxy server's IP address.
First, find your container's network subnet:
docker network inspect isolated --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
# Example output: 172.20.0.0/16
Then apply the rules (replace PROXY_IP with your residential proxy endpoint's resolved IP, and SUBNET with the output above):
# Allow container traffic ONLY to the proxy endpoint
sudo iptables -I DOCKER-USER -s 172.20.0.0/16 -d PROXY_IP -j RETURN
# Allow DNS-over-HTTPS to Cloudflare (required for Option A's dnsproxy)
# This opens a narrow egress path, but only for encrypted DNS — no plaintext leaks
sudo iptables -I DOCKER-USER -s 172.20.0.0/16 -d 1.1.1.1 -p tcp --dport 443 -j RETURN
sudo iptables -I DOCKER-USER -s 172.20.0.0/16 -d 1.0.0.1 -p tcp --dport 443 -j RETURN
# Allow DNS to the container's internal resolver (for Option A above)
sudo iptables -I DOCKER-USER -s 172.20.0.0/16 -d 127.0.0.0/8 -j RETURN
# Allow container-to-container communication within the subnet
sudo iptables -I DOCKER-USER -s 172.20.0.0/16 -d 172.20.0.0/16 -j RETURN
# Drop everything else from the container subnet
sudo iptables -A DOCKER-USER -s 172.20.0.0/16 -j DROP
Why the DoH whitelist rules are needed: If you use DNS Option A from Step 2, dnsproxy inside the container sends DNS-over-HTTPS requests to 1.1.1.1 on port 443. Without these two rules, the kill switch would block those requests and DNS resolution would fail entirely. The trade-off is a narrow, encrypted-only egress path to Cloudflare's DNS — it does not expose plaintext DNS queries or any other traffic. If you use DNS Option B instead (relying on socks5h://), you can omit these two rules for a stricter kill switch.
How it works: the DOCKER-USER chain is evaluated before Docker's own forwarding rules (see Docker's packet filtering docs). The -I (insert) flag puts RETURN rules at the top of the chain; the -A (append) flag puts the DROP at the bottom. If you append DROP before inserting the RETURN rules, the container loses all connectivity — rule order matters.
Persist the rules across reboots:
sudo apt-get install -y iptables-persistent
sudo iptables-save | sudo tee /etc/iptables/rules.v4 > /dev/null
Important caveat: If your residential proxy endpoint resolves to multiple IPs (common with large providers that use DNS-based load balancing), you need to whitelist all of them or whitelist the entire IP range. Run dig +short proxy.yourprovider.com to check, and add a rule for each IP or use a CIDR range.
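One way to handle multi-IP endpoints is to generate the whitelist programmatically. This sketch keeps rule generation separate from DNS resolution so you can review the output before applying it; the function names and the example addresses (203.0.113.x is a documentation range) are illustrative, not from any library:

```python
import socket

def resolve_endpoint_ips(host: str) -> list[str]:
    """All IPv4 addresses the proxy hostname currently resolves to."""
    infos = socket.getaddrinfo(host, None, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

def build_killswitch_rules(subnet: str, proxy_ips: list[str]) -> list[str]:
    """One RETURN rule per proxy IP, then a trailing DROP for the subnet."""
    rules = [f"iptables -I DOCKER-USER -s {subnet} -d {ip} -j RETURN"
             for ip in proxy_ips]
    # -A appends, so the DROP lands below the inserted RETURN rules
    rules.append(f"iptables -A DOCKER-USER -s {subnet} -j DROP")
    return rules

# Placeholder IPs — in practice: resolve_endpoint_ips("proxy.yourprovider.com")
for rule in build_killswitch_rules("172.20.0.0/16",
                                   ["203.0.113.10", "203.0.113.11"]):
    print(f"sudo {rule}")
```

Re-running the resolver periodically (and diffing against the installed rules) catches providers that rotate their endpoint IPs under you.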
Step 4: Browser-Level Protection
If your container runs a headless browser, you need to neutralize WebRTC at the application level. Network-level iptables rules help (they block outbound UDP to STUN servers), but the definitive fix is preventing the browser from attempting WebRTC at all.
Puppeteer (Node.js)
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    headless: true, // Puppeteer 22+ uses the new headless mode by default
    args: [
      '--proxy-server=socks5://proxy.yourprovider.com:SOCKS_PORT',
      '--host-resolver-rules=MAP * ~NOTFOUND, EXCLUDE proxy.yourprovider.com',
    ],
  });
  const page = await browser.newPage();

  // Match fingerprint to proxy geo (adjust for your exit country)
  await page.emulateTimezone('America/Chicago');
  await page.setExtraHTTPHeaders({
    'Accept-Language': 'en-US,en;q=0.9',
  });

  // Disable WebRTC before any page loads
  await page.evaluateOnNewDocument(() => {
    // Remove RTCPeerConnection to prevent STUN/TURN requests
    Object.defineProperty(navigator, 'mediaDevices', {
      value: { getUserMedia: undefined },
    });
    window.RTCPeerConnection = undefined;
    window.webkitRTCPeerConnection = undefined;
    window.MediaStreamTrack = undefined;
  });

  // NOTE: page.authenticate() answers HTTP 407 challenges, which only exist
  // for HTTP proxies — Chromium ignores credentials on SOCKS5 endpoints.
  // With the socks5:// endpoint above, use IP whitelist auth instead.
  await page.authenticate({
    username: 'PROXY_USER',
    password: 'PROXY_PASS',
  });

  await page.goto('https://browserleaks.com/webrtc');
})();
The --host-resolver-rules argument forces Chrome to resolve DNS through the SOCKS5 proxy rather than the system resolver. Combined with the evaluateOnNewDocument script that nullifies RTCPeerConnection, this eliminates both DNS and WebRTC leak vectors at the browser level. One caveat: Chromium does not support username/password authentication for SOCKS proxies, and page.authenticate() only answers HTTP 407 challenges — so pair the SOCKS5 endpoint with IP whitelist authentication, or point --proxy-server at your provider's HTTP endpoint when credential auth is required.
Playwright (Node.js / Python)
const { chromium } = require('playwright');

(async () => {
  const browser = await chromium.launch({
    proxy: {
      // Chromium cannot send credentials to a SOCKS5 proxy — the username/
      // password fields only take effect with an HTTP proxy endpoint. With
      // socks5://, use your provider's IP whitelist authentication instead.
      server: 'socks5://proxy.yourprovider.com:SOCKS_PORT',
      username: 'PROXY_USER',
      password: 'PROXY_PASS',
    },
    args: [
      '--host-resolver-rules=MAP * ~NOTFOUND, EXCLUDE proxy.yourprovider.com',
    ],
  });
  const context = await browser.newContext({
    locale: 'en-US', // Match proxy geo
    timezoneId: 'America/Chicago', // Match proxy geo
  });

  // Disable WebRTC via init script
  await context.addInitScript(() => {
    window.RTCPeerConnection = undefined;
    window.webkitRTCPeerConnection = undefined;
    window.MediaStreamTrack = undefined;
    // navigator.mediaDevices is getter-only, so a plain assignment is
    // silently ignored — override it with defineProperty instead
    Object.defineProperty(navigator, 'mediaDevices', { value: undefined });
  });

  const page = await context.newPage();
  await page.goto('https://browserleaks.com/webrtc');
})();
Playwright's addInitScript runs before any page scripts, making it the reliable injection point.
Selenium (Python)
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
opts = Options()
opts.add_argument('--headless=new')
opts.add_argument('--proxy-server=socks5://proxy.yourprovider.com:SOCKS_PORT')
opts.add_argument('--host-resolver-rules=MAP * ~NOTFOUND, EXCLUDE proxy.yourprovider.com')
# Disable WebRTC via Chrome preference
opts.add_experimental_option('prefs', {
'webrtc.ip_handling_policy': 'disable_non_proxied_udp',
'webrtc.multiple_routes_enabled': False,
'webrtc.nonproxied_udp_enabled': False,
})
driver = webdriver.Chrome(options=opts)
driver.get('https://browserleaks.com/webrtc')
Selenium supports setting Chrome preferences directly through add_experimental_option, which is a cleaner approach than JavaScript injection for this specific case. The disable_non_proxied_udp policy tells Chrome to only use UDP through the configured proxy, effectively preventing WebRTC from bypassing it.
Python requests / curl_cffi (No Browser)
If your scraper uses Python's requests library or curl_cffi without a browser, WebRTC is not a concern — these HTTP clients have no WebRTC stack. But you still need to ensure DNS goes through the proxy:
import curl_cffi.requests
session = curl_cffi.requests.Session(impersonate="chrome136")
response = session.get(
"https://api.ipify.org?format=json",
proxies={
"http": "socks5h://USER:PASS@proxy.yourprovider.com:SOCKS_PORT",
"https": "socks5h://USER:PASS@proxy.yourprovider.com:SOCKS_PORT",
},
)
print(response.json())
curl_cffi replicates browser TLS/JA3 and HTTP/2 fingerprints — it presents a genuine Chrome TLS handshake to the target server rather than the default Python client fingerprint, which many anti-bot systems flag immediately. Use the impersonate parameter to match a specific browser version. The library supports versions up to Chrome 142 as of early 2026.
Fingerprint Consistency
Disabling WebRTC stops your real IP from leaking. But if the rest of your request fingerprint contradicts the proxy IP's geography or device profile, anti-bot systems will still flag the mismatch — not as a leak, but as an anomaly. These signals need to be internally consistent:
| Signal | What to Match | How |
|---|---|---|
| Timezone | Must match proxy IP geolocation | Playwright: timezoneId in context. Puppeteer: page.emulateTimezone('America/Chicago') |
| Locale / Accept-Language | Must match proxy IP country | Set Accept-Language: en-US,en;q=0.9 for US proxies |
| User-Agent | Must match a real, current browser version | Use the same version string as your curl_cffi impersonate target or your Chromium binary |
| Screen resolution | Must be a real device resolution | Playwright: viewport in context. Avoid exotic sizes like 1x1 |
| TLS fingerprint (JA3) | Must match the claimed User-Agent browser | curl_cffi handles this automatically. For headless browsers, use a current Chromium build |
| WebGL renderer | Should not reveal a headless-only GPU string | Headless Chromium defaults to SwiftShader, which anti-bot systems associate with automated browsers. Launch with --use-gl=angle --use-angle=swiftshader-webgl so the renderer string matches what a normal desktop Chrome would report. If the target still misclassifies your traffic, consider running headed Chromium inside an Xvfb virtual display instead. |
The most common mistake: using a US residential proxy IP while the browser's timezone is set to UTC and the Accept-Language header says zh-CN. Each mismatch is a detection signal. Keep them coherent.
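These coherence checks are easy to automate before a session starts. The sketch below keeps a small country-to-profile table — the table contents and function name are illustrative; extend them for the geos you actually use:

```python
GEO_PROFILES = {
    # country code -> (timezone, Accept-Language) — illustrative sample
    "US": ("America/Chicago", "en-US,en;q=0.9"),
    "DE": ("Europe/Berlin", "de-DE,de;q=0.9"),
    "JP": ("Asia/Tokyo", "ja-JP,ja;q=0.9"),
}

def fingerprint_mismatches(proxy_country: str, timezone: str,
                           accept_language: str) -> list[str]:
    """Return human-readable inconsistencies (empty list = coherent)."""
    expected_tz, expected_lang = GEO_PROFILES[proxy_country]
    problems = []
    if timezone != expected_tz:
        problems.append(f"timezone {timezone!r} does not match {proxy_country}")
    if accept_language != expected_lang:
        problems.append(
            f"Accept-Language {accept_language!r} does not match {proxy_country}")
    return problems

# A US exit IP with a UTC timezone and Chinese Accept-Language: two red flags
print(fingerprint_mismatches("US", "UTC", "zh-CN,zh;q=0.9"))
print(fingerprint_mismatches("US", "America/Chicago", "en-US,en;q=0.9"))  # []
```

Run this against the values you pass to emulateTimezone / newContext and refuse to launch the browser when the list is non-empty.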
Step 5: Verify Zero-Leak Status
Configuration without verification is guesswork. Run these four checks from inside your container after setup. Note that python:3.12-slim does not ship curl — add apt-get install -y curl to your Dockerfile, or adapt the checks to python3 -c with urllib.
Check 1: IP Address
docker exec scraper-zero-leak curl -s "https://api.ipify.org?format=json"
Expected: Returns the proxy IP, not your host's IP. If you see your real IP, the HTTP_PROXY/HTTPS_PROXY variables are not being respected by curl. Check that the proxy endpoint is reachable from within the container.
[Screenshot: terminal output showing {"ip":"<proxy-ip>"} — replace with your actual test result]
Check 2: DNS Leak
docker exec scraper-zero-leak python3 -c "
import socket
result = socket.getaddrinfo('whoami.ds.akahelp.net', 80)
print(result[0][4][0])
"
A whoami-style record resolves to the IP of the resolver that contacted the authoritative server. If DNS is properly tunneled, you should see your DoH provider's or the proxy-side resolver's address — never your ISP's resolver or your own public IP. For a more thorough test, use a DNS leak test service:
docker exec scraper-zero-leak curl -s https://www.dnsleaktest.com/test/standard
Check 3: WebRTC Leak (Browser-Based)
If your container runs a headless browser, navigate to browserleaks.com/webrtc and parse the result:
// Inside your Puppeteer/Playwright script
await page.goto('https://browserleaks.com/webrtc');
const leakResult = await page.$eval(
'#webrtc-leak-test',
(el) => el.textContent
);
console.log('WebRTC result:', leakResult);
// Should show "No Leak" or only the proxy IP
[Screenshot: browserleaks.com/webrtc showing "No Leak" with only the proxy IP visible — replace with your actual test result]
Check 4: Kill Switch Verification
Temporarily block the proxy endpoint to simulate a proxy failure:
# On the host, temporarily block the proxy IP
sudo iptables -I DOCKER-USER -s 172.20.0.0/16 -d PROXY_IP -j DROP
# Try to reach the internet from the container
docker exec scraper-zero-leak curl -s --max-time 5 https://api.ipify.org
# Expected: curl times out with exit code 28, no response
# Remove the test rule
sudo iptables -D DOCKER-USER -s 172.20.0.0/16 -d PROXY_IP -j DROP
If curl returns any IP address during this test, your kill switch is not working.
[Screenshot: terminal output showing curl: (28) Connection timed out with exit code 28 — replace with your actual test result]
Troubleshooting
| Symptom | Likely Cause | Fix |
|---|---|---|
| Container has no internet at all | iptables rules are blocking the proxy IP | Run sudo iptables -L DOCKER-USER -n -v and verify the RETURN rule for your proxy IP is listed before the DROP rule. Rule order matters. |
| curl works but browser requests fail | Browser is not using proxy env vars (Chromium ignores HTTP_PROXY) | Pass the proxy via the --proxy-server launch argument instead of relying on environment variables. |
| 403 or Cloudflare challenge pages | Fingerprint mismatch (timezone, locale, TLS) or low-reputation IP | Verify fingerprint consistency from Step 4. Try a different proxy exit node. |
| DNS resolution fails inside container | resolv.conf overwritten by Docker or dnsproxy not running | Check docker exec scraper-zero-leak cat /etc/resolv.conf. If using Option A, verify the dnsproxy process is alive with docker exec scraper-zero-leak ps aux. |
| WebRTC still shows real IP | evaluateOnNewDocument / addInitScript not executing before page load | Ensure the WebRTC disable script runs before page.goto(). In Playwright, use context.addInitScript() (not page.addInitScript()) to guarantee it runs on every new page. |
| Requests timeout intermittently | Proxy provider rate limiting or residential IP pool congestion | Increase request intervals. Switch from rotating to sticky sessions for sequential page loads. Check your provider's dashboard for usage limits. |
Compliance Note
The techniques in this guide are designed for legitimate automation tasks: price monitoring of publicly listed products, ad verification across geos, SEO auditing, academic research, and quality assurance testing. Always respect the target website's robots.txt and terms of service. Keep request rates under 1–3 RPS per IP.
Putting It Together
The zero-leak stack works as four interlocking layers: proxy routing catches application traffic, DNS locking prevents resolver leaks, the iptables kill switch eliminates fallback-to-direct as a failure mode, and WebRTC/fingerprint controls close browser-level gaps. Removing any single layer re-opens a leak vector.
Disclosure: This guide contains a referral link to Proxy001. We recommend them based on hands-on use in our own automation workflows, but the zero-leak configuration in this guide works with any residential proxy provider that supports HTTP and SOCKS5.
If you need a residential proxy provider for this setup, Proxy001 offers both rotating and sticky residential sessions over HTTP and SOCKS5 across 200+ countries, with a pool of over 100 million IPs. Their Unlimited Residential plan — priced at $100/day or $3,000/month — is built specifically for high-volume automation and AI data collection workloads, with no per-GB metering that could make costs unpredictable at scale. Integration supports Python, Node.js, Puppeteer, and Selenium. You can test the service with their free trial before committing to a plan — sign up at proxy001.com.