If you’re designing proxy routing for real workflows, the most reliable mental model is two lanes: one optimized for identity stability, one optimized for coverage and scale. The proxy “type” choices that usually map to these lanes are broken down well in this deep dive: Static residential ISP vs residential proxies.
What you’ll build
You’ll build a two-lane routing layer that makes routing decisions before a request/browser context is created:
Identity lane: minimize churn
Goal: keep the same egress IP long enough to preserve trust signals for logins, 2FA flows, seller/admin sessions, and ad accounts.
Typical mechanics: sticky sessions, long-lived proxy endpoints, per-identity pinning, conservative concurrency.
Coverage lane: maximize reach
Goal: fetch more “surface area” across regions/keywords/pages without overloading a single identity footprint.
Typical mechanics: rotation per request or short sticky windows, larger pools, higher concurrency, stronger throttling/backoff.
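Before any code touches a proxy, it helps to pin those mechanics down as an explicit per-lane policy object. The sketch below is illustrative only: the field names and defaults are assumptions, not part of any library, so tune them to your own targets.
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LanePolicy:
    # Illustrative knobs; none of these names come from a library.
    sticky_window_s: int       # how long one session keeps the same egress IP
    max_concurrency: int       # parallel requests allowed per identity/pool
    rotate_per_request: bool   # True for coverage-style rotation
    backoff_base_s: float      # starting delay for retry/backoff

IDENTITY_LANE = LanePolicy(
    sticky_window_s=30 * 60, max_concurrency=1,
    rotate_per_request=False, backoff_base_s=5.0,
)
COVERAGE_LANE = LanePolicy(
    sticky_window_s=0, max_concurrency=20,
    rotate_per_request=True, backoff_base_s=1.0,
)
```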
Decision mapping
Use this mapping as a default, then override based on observed failure modes. If you want the “proxy selection” rationale behind this mapping, reference the tradeoffs here: proxy type tradeoffs for identity vs coverage routing.
Task → Lane mapping
Login and 2FA flows → Identity lane
Pin by identity group, keep IP stable, lower concurrency.
Seller center ops (product edits, payouts, support tickets) → Identity lane
Long sessions, higher sensitivity to “suspicious login” heuristics.
Ad accounts (BM, campaign edits, billing) → Identity lane
Highest tolerance for slowness, lowest tolerance for churn.
SERP monitoring (rank checks, lightweight fetches) → Coverage lane
Spread requests, prioritize pool diversity over session continuity.
Scraping (category pages, search results, listings) → Coverage lane
Rotate or short stickies; build robust retry/backoff.
Rule of thumb
If the task cares about who you look like → identity lane.
If the task cares about how much you can see → coverage lane.
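If you keep this mapping in code, it can stay a dumb lookup with a conservative default. A minimal sketch, assuming the task labels below match whatever names your job scheduler already uses:
```python
# Hypothetical task labels; rename them to match your own job definitions.
TASK_LANES = {
    "login_2fa": "identity",
    "seller_center": "identity",
    "ad_account": "identity",
    "serp_monitoring": "coverage",
    "scraping": "coverage",
}

def lane_for(task: str) -> str:
    # Unknown tasks default to the coverage lane, where churn is cheapest.
    return TASK_LANES.get(task, "coverage")
```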
Minimal implementation examples you can copy
Example 1: Python requests with sticky session vs rotation
This is a tiny “router” you can drop into a service. It supports:
Sticky: same session id keeps the same proxy for a time window
Rotation: each request gets a fresh proxy
```python
import time
import random
import requests
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple


@dataclass(frozen=True)
class ProxyEndpoint:
    # Example: http://user:pass@host:port
    url: str


class TwoLaneProxyRouter:
    def __init__(
        self,
        identity_pool: List[ProxyEndpoint],
        coverage_pool: List[ProxyEndpoint],
        sticky_window_s: int = 30 * 60,
    ):
        self.identity_pool = identity_pool
        self.coverage_pool = coverage_pool
        self.sticky_window_s = sticky_window_s
        # session_id -> (proxy, expires_at)
        self._sticky_map: Dict[str, Tuple[ProxyEndpoint, float]] = {}

    def _pick(self, pool: List[ProxyEndpoint]) -> ProxyEndpoint:
        return random.choice(pool)

    def get_proxy_sticky(self, session_id: str, lane: str) -> ProxyEndpoint:
        now = time.time()
        hit = self._sticky_map.get(session_id)
        if hit and hit[1] > now:
            return hit[0]
        pool = self.identity_pool if lane == "identity" else self.coverage_pool
        p = self._pick(pool)
        self._sticky_map[session_id] = (p, now + self.sticky_window_s)
        return p

    def get_proxy_rotate(self, lane: str) -> ProxyEndpoint:
        pool = self.identity_pool if lane == "identity" else self.coverage_pool
        return self._pick(pool)


def fetch(
    url: str,
    lane: str,
    router: TwoLaneProxyRouter,
    session_id: Optional[str] = None,
    rotate: bool = False,
    timeout_s: int = 25,
):
    if rotate:
        proxy = router.get_proxy_rotate(lane)
    else:
        if not session_id:
            raise ValueError("session_id required for sticky routing")
        proxy = router.get_proxy_sticky(session_id=session_id, lane=lane)

    proxies = {"http": proxy.url, "https": proxy.url}
    headers = {"User-Agent": "lane-router/1.0"}
    r = requests.get(url, proxies=proxies, headers=headers, timeout=timeout_s)
    r.raise_for_status()
    return r.text


if __name__ == "__main__":
    identity = [
        ProxyEndpoint("http://USER:PASS@IDENTITY_HOST_1:PORT"),
        ProxyEndpoint("http://USER:PASS@IDENTITY_HOST_2:PORT"),
    ]
    coverage = [
        ProxyEndpoint("http://USER:PASS@COVER_HOST_1:PORT"),
        ProxyEndpoint("http://USER:PASS@COVER_HOST_2:PORT"),
        ProxyEndpoint("http://USER:PASS@COVER_HOST_3:PORT"),
    ]
    router = TwoLaneProxyRouter(identity, coverage, sticky_window_s=20 * 60)

    # Identity lane: sticky session
    html1 = fetch(
        "https://example.com/account",
        lane="identity",
        router=router,
        session_id="acct:shop_17",
    )
    html2 = fetch(
        "https://example.com/account/settings",
        lane="identity",
        router=router,
        session_id="acct:shop_17",
    )

    # Coverage lane: rotate per request
    for _ in range(3):
        _ = fetch(
            "https://example.com/search?q=keyword",
            lane="coverage",
            router=router,
            rotate=True,
        )
```
Practical defaults
Identity lane: sticky by default, rotation only when you must.
Coverage lane: rotation by default, or short sticky windows for burst stability.
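For the coverage lane, the rotation default pairs naturally with retry/backoff. Here is a minimal wrapper around fetch() from Example 1 (assumed to be in scope); the attempt count and backoff base are illustrative defaults, not tuned values.
```python
import time
import requests

def fetch_with_backoff(url, router, attempts=4, backoff_base_s=1.0):
    # Each attempt rotates to a fresh coverage-lane proxy; failures back off
    # exponentially (1s, 2s, 4s, ...) before the next try.
    last_err = None
    for attempt in range(attempts):
        try:
            return fetch(url, lane="coverage", router=router, rotate=True)
        except requests.RequestException as err:
            last_err = err
            time.sleep(backoff_base_s * (2 ** attempt))
    raise last_err
```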
Example 2: Playwright with one identity group per proxy endpoint
Key idea: one identity group = one proxy endpoint. You can map “identity group” to:
a store/account
an ad account
a team member profile
a browser profile in your anti-detect stack
Playwright TypeScript example using separate contexts, each pinned to its proxy:
```typescript
import { chromium, Browser, BrowserContext } from "playwright";

type IdentityGroup = {
  id: string; // e.g. "shop_17" or "adacct_3"
  proxyServer: string; // e.g. "http://user:pass@host:port"
  storageStatePath: string; // saved cookies/localStorage per group
};

const identityGroups: IdentityGroup[] = [
  {
    id: "shop_17",
    proxyServer: "http://USER:PASS@IDENTITY_HOST_1:PORT",
    storageStatePath: "./states/shop_17.json",
  },
  {
    id: "shop_18",
    proxyServer: "http://USER:PASS@IDENTITY_HOST_2:PORT",
    storageStatePath: "./states/shop_18.json",
  },
];

async function openIdentityContext(
  g: IdentityGroup
): Promise<{ browser: Browser; ctx: BrowserContext }> {
  const browser = await chromium.launch({ headless: true });
  // Proxy is pinned at context creation time.
  const ctx = await browser.newContext({
    proxy: { server: g.proxyServer },
    storageState: g.storageStatePath,
  });
  return { browser, ctx };
}

async function runSellerOps(g: IdentityGroup) {
  const { browser, ctx } = await openIdentityContext(g);
  const page = await ctx.newPage();
  await page.goto("https://example.com/seller-center");
  // ... perform actions under stable identity lane ...
  // Tip: keep concurrency low per identity group.
  await ctx.close();
  await browser.close(); // release the browser so the process can exit
}

(async () => {
  // Run each identity group separately.
  // Avoid running multiple identity groups through the same proxy endpoint.
  await Promise.all(identityGroups.map(runSellerOps));
})();
```
Notes
If you need multiple contexts per group, keep them behind the same proxy endpoint.
If you need concurrency, scale by adding identity groups and endpoints, not by reusing one endpoint harder.
Example 3: Lane config in YAML
Keep this config in your repo and load it into your router. It forces the “lane mindset” into ops knobs.
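A minimal sketch of what that config might look like. The key names are assumptions, not a schema any particular router enforces; adapt them to whatever your loader actually reads.
```yaml
lanes:
  identity:
    rotation: sticky
    sticky_window_s: 1800          # extend this if 2FA prompts start spiking
    max_concurrency_per_group: 1
    pool:
      - http://USER:PASS@IDENTITY_HOST_1:PORT
      - http://USER:PASS@IDENTITY_HOST_2:PORT
  coverage:
    rotation: per_request          # or short_sticky with sticky_window_s: 120
    max_concurrency: 20
    retry:
      attempts: 4
      backoff_base_s: 1.0
    pool:
      - http://USER:PASS@COVER_HOST_1:PORT
      - http://USER:PASS@COVER_HOST_2:PORT
      - http://USER:PASS@COVER_HOST_3:PORT
```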
Wrong-lane signals and how to debug them
When things degrade, assume lane mismatch first. The proxy choice guidance behind these failure patterns is explained here: choosing static ISP lanes for identity work.
Signal: 2FA prompts spike on accounts that used to be stable
Likely cause: identity traffic accidentally moved to coverage lane or rotation became too aggressive.
Actions:
Switch that workflow to identity lane sticky and extend sticky window.
Split pools: dedicate endpoints per identity group; reduce per-group concurrency to 1.
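A minimal way to enforce that per-group concurrency cap in the synchronous setup from Example 1 is one lock per identity group; this sketch assumes your requests run in threads and reuses the session ids from Example 1.
```python
import threading
from typing import Dict

_group_locks: Dict[str, threading.Lock] = {}
_registry_lock = threading.Lock()

def group_lock(group_id: str) -> threading.Lock:
    # One lock per identity group, created lazily and reused across threads.
    with _registry_lock:
        return _group_locks.setdefault(group_id, threading.Lock())

# Usage: at most one in-flight request per identity group.
# with group_lock("acct:shop_17"):
#     fetch(url, lane="identity", router=router, session_id="acct:shop_17")
```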
Signal: CAPTCHA frequency increases or changes “shape”
Likely cause: rotation is too fast for flows that expect continuity, or you’re retrying too aggressively.
Actions:
Reduce rotation rate: short sticky (60–180s) instead of per-request rotation for borderline flows.
Prefer IP diversity over concurrency: add endpoints/pool size, lower parallelism.
Signal: Content comes back partial or empty
Likely cause: throttling mismatch, block pages, geo mismatch, or overloaded endpoints.
Actions:
Add “response sanity checks” (length, required markers) and route failed fetches to a different lane or pool; a minimal check is sketched below.
Increase backoff, lower RPS, and isolate “fragile targets” into their own coverage sub-pool.
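The response sanity check can be as small as a length floor plus a couple of required markers. A sketch, with a threshold and marker list that are purely illustrative and target-specific:
```python
def looks_complete(body: str, min_len: int = 2000,
                   markers: tuple = ("</html>",)) -> bool:
    # Reject obviously truncated or block-page responses before parsing.
    if len(body) < min_len:
        return False
    return all(marker in body for marker in markers)

# Fetches that fail the check can be retried through a different pool
# or quarantined for manual inspection.
```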
Signal: Success rate drifts down over hours or days
Likely cause: pool quality drift, endpoint reuse patterns, or insufficient diversity.
Actions:
Track metrics per lane and per pool: success, captcha, empty-body, retry count (see the counter sketch after this list).
Rotate pools, not identities: keep identity endpoints stable, refresh coverage pool diversity periodically.
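Per-lane, per-pool counters do not need a metrics stack to get started; an in-process sketch like the one below is enough to spot drift, assuming you log or export the counters periodically.
```python
from collections import Counter, defaultdict

# (lane, pool) -> Counter of outcomes such as "success", "captcha",
# "empty_body", "retry".
metrics = defaultdict(Counter)

def record(lane: str, pool: str, outcome: str) -> None:
    metrics[(lane, pool)][outcome] += 1

# Example: record("coverage", "cover_pool_a", "empty_body")
```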
Security and sourcing note
Use proxies with transparent sourcing and contracts you can explain to compliance.
Respect target ToS and legal boundaries; avoid credential stuffing and unauthorized access patterns.
Log minimally, protect secrets, and treat proxy credentials like production credentials.
Closing: build lanes, then tune with data
Start with strict defaults: keep identity flows stable, and keep coverage flows scalable. Once your observability is in place, you can tighten routing rules by task and by target. If you want the proxy-type decision logic that usually makes this work in practice, read the full breakdown here: static residential ISP and residential pools compared.