If a plan says “unlimited,” you are really buying a policy surface: how throughput maps to billing, where concurrency stops scaling, how sessions behave over time, what caps exist on ports or auth, and how fair-use enforcement shows up as 429, queueing, and silent shaping. This lab makes those constraints visible with measurable gates and a repeatable test plan.
Run this lab after you skim the hub once: Unlimited Residential Proxies That Actually Scale Without Surprises
For a concrete baseline of what an “unlimited residential proxies” product surface usually exposes (pool, regions, auth modes, session options), anchor your notes to Unlimited Residential Proxies and then let the measurements decide.
Test setup you should not skip
Run one target class per experiment. Mixing ecommerce + news + socials in a single run ruins attribution.
Client rules
• Keep connections persistent. If you need a quick refresher on keep-alive semantics, review Connection header.
• Fix timeouts (example): connect 5s, read 20s.
• Disable retries initially. Add controlled retries only after you measure 429 behavior.
• Keep your proxy mode consistent across runs. If your provider supports multiple protocols, document which one you chose using a single reference like Proxy Protocols.
Ramp schedule
• Warmup: 2–3 minutes low concurrency
• Ramp: increase concurrency every 60 seconds
• Soak: hold intended concurrency for 10–20 minutes
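For repeatability, the schedule above can be encoded as data so every run walks identical steps. A minimal sketch; the helper name and default step values are illustrative assumptions, not provider guidance:

```python
def ramp_schedule(start=10, step_factor=2, steps=4, warmup_s=120, ramp_s=60, soak_s=900):
    """Build (concurrency, duration_seconds) phases: warmup, ramp, soak."""
    phases = [(start, warmup_s)]            # warmup at low concurrency
    conc = start
    for _ in range(steps):
        conc *= step_factor
        phases.append((conc, ramp_s))       # ramp: raise concurrency each interval
    phases.append((conc, soak_s))           # soak: hold intended concurrency
    return phases

schedule = ramp_schedule()
# [(10, 120), (20, 60), (40, 60), (80, 60), (160, 60), (160, 900)]
```

Feeding each `(concurrency, duration)` pair to your load loop keeps runs comparable across providers.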
Target discipline
• One host, one URL pattern, similar payload size each request.
• Prefer a stable endpoint (not login, not search, not CAPTCHA-heavy).
MaskProxy fits this lab well because it makes it easy to translate "unlimited" claims into throughput, ceilings, and enforcement signals you can actually plot.

Minimal harness
You want two knobs: concurrency and pacing. Start with concurrency scaling, then add pacing to test fairness controls.
Bash probe for baseline signals
# Replace PROXY_HOST:PORT and the target URL
for i in {1..20}; do
  curl -sS -o /dev/null -w "%{http_code} %{time_total}\n" \
    -x http://PROXY_HOST:PORT \
    --keepalive-time 60 \
    "https://target.example/path"
done
Tiny Python loop for repeatable ramps
This uses aiohttp proxy support; the most practical reference is the client advanced guide: aiohttp client advanced.
import time, statistics, asyncio, aiohttp

PROXY = "http://PROXY_HOST:PORT"
URL = "https://target.example/path"

async def worker(session, n):
    out = []
    for _ in range(n):
        t0 = time.time()
        try:
            async with session.get(URL, proxy=PROXY) as r:
                code = r.status
                ra = r.headers.get("Retry-After")
                await r.read()
        except Exception:
            code, ra = 0, None
        out.append((code, time.time() - t0, ra))
    return out

async def run(conc=50, per_worker=50):
    timeout = aiohttp.ClientTimeout(total=20, connect=5)
    conn = aiohttp.TCPConnector(limit=conc, ttl_dns_cache=300, force_close=False)
    async with aiohttp.ClientSession(timeout=timeout, connector=conn) as s:
        rows = [x for t in await asyncio.gather(*[worker(s, per_worker) for _ in range(conc)]) for x in t]
    codes = [c for c, _, _ in rows]
    lat = [t for _, t, _ in rows if t > 0]
    ra = [r for _, _, r in rows if r]
    return {
        "n": len(rows),
        "ok": sum(200 <= c < 300 for c in codes),
        "429": sum(c == 429 for c in codes),
        "403": sum(c == 403 for c in codes),
        "err": sum(c == 0 for c in codes),
        "p50": statistics.median(lat) if lat else None,
        "p95": sorted(lat)[int(0.95 * len(lat)) - 1] if len(lat) > 5 else None,
        "retry_after_seen": len(ra),
    }

if __name__ == "__main__":
    print(asyncio.run(run(conc=20, per_worker=10)))
Evidence bundle checklist
Capture these artifacts every run:
• Per ramp step: concurrency, duration, RPS, success %, 429 %, error %, p50 and p95
• Full HTTP status histogram
• Presence and distribution of Retry-After
• Client connection stats (pool saturation, open sockets)
• Provider counters if available (bandwidth, sessions, port limits, fair-use flags)
• Billing view screenshots or API counters for anything labeled “unlimited”
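Most of the per-step artifacts can be derived from the `(status, latency, Retry-After)` rows the harness above collects. A hypothetical `summarize` helper, sketched under that row shape (assumes at least one row per step):

```python
from collections import Counter
import statistics

def summarize(rows):
    """Aggregate (status, latency, retry_after) rows into per-step artifacts."""
    codes = Counter(c for c, _, _ in rows)        # full HTTP status histogram
    lat = sorted(t for _, t, _ in rows if t > 0)
    ra = [r for _, _, r in rows if r]             # Retry-After presence
    return {
        "histogram": dict(codes),
        "success_pct": 100.0 * sum(v for k, v in codes.items() if 200 <= k < 300) / len(rows),
        "p50": statistics.median(lat) if lat else None,
        "p95": lat[int(0.95 * len(lat)) - 1] if len(lat) > 5 else None,
        "retry_after_seen": len(ra),
    }
```

Dump one summary per ramp step to JSON or CSV so runs stay comparable side by side.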
Gate 1 proves throughput scaling matches the billing surface
Goal
Validate that throughput scales with concurrency until a predictable ceiling, without hidden metering surprises.
Method
• Ramp concurrency 20→50→100→200 (60s each).
• Track RPS and bytes/sec. Note any counters that increment during “unlimited.”
Pass signals
• RPS rises predictably and p95 stays bounded.
• Billing and policy knobs align with what you observe.
Fail signals
• Throughput flatlines while latency inflates, especially before target limits should bite.
• Billing behaves like metered bandwidth or per-request charging.
Artifacts
• Concurrency vs RPS and p95 chart
• Billing snapshots and plan constraints from Unlimited Residential Proxies Pricing
Gate 2 proves 429 and fairness controls behave predictably
Goal
Identify whether 429 is destination rate limiting, provider fairness, or both, and whether pacing can restore stability.
Method
• At each ramp step, record 429% and Retry-After presence.
• Rerun with pacing at the same concurrency: cap RPS plus jittered sleeps.
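A simple way to implement "cap RPS plus jittered sleeps" is to draw each inter-request gap around 1/RPS. A sketch; the helper name and the 20% jitter default are assumptions to tune:

```python
import random

def jittered_gaps(target_rps, n, jitter=0.2, seed=None):
    """Inter-request sleeps averaging 1/target_rps, with +/- jitter to avoid lockstep."""
    rng = random.Random(seed)
    base = 1.0 / target_rps
    return [base * (1 + rng.uniform(-jitter, jitter)) for _ in range(n)]
```

Sleep for each gap before the corresponding request; the jitter keeps workers from synchronizing into bursts that look like abuse.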
Pass signals
• 429 drops when you slow down, and p95 stops climbing.
• Retry-After appears in a destination-like pattern.
Fail signals
• 429 persists even at low RPS, or appears with weak destination signals.
• Latency inflates broadly across runs in a way that looks like queueing.
Artifacts
• 429 and p95 time series
• With and without pacing comparison
For a practical baseline of 429 semantics and Retry-After behavior, use Cloudflare Error 429.
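Per RFC 9110, Retry-After carries either delta-seconds or an HTTP-date, so your analysis should normalize both forms. A hypothetical helper, sketched with the standard library:

```python
from email.utils import parsedate_to_datetime
from datetime import datetime, timezone

def retry_after_seconds(value, now=None):
    """Normalize a Retry-After header: delta-seconds or HTTP-date (RFC 9110)."""
    if value is None:
        return None
    value = value.strip()
    if value.isdigit():
        return int(value)                      # delta-seconds form
    try:
        dt = parsedate_to_datetime(value)      # HTTP-date form
        now = now or datetime.now(timezone.utc)
        return max(0, int((dt - now).total_seconds()))
    except (TypeError, ValueError):
        return None                            # unparseable header
```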
Gate 3 proves session TTL holds under keep-alive and load
Goal
Measure session stickiness and TTL behavior under realistic connection reuse.
Method
• Reuse connections and record the observed egress identity every N requests for 15–30 minutes.
• Add idle gaps (30s, 2m, 5m) to detect idle timeout boundaries.
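Turning the raw samples into a session timeline is mostly a matter of spotting identity flips. A hypothetical `rotation_events` helper for `(timestamp, identity)` samples:

```python
def rotation_events(samples):
    """Given (timestamp, identity) samples in time order, return (t, old, new) flips."""
    events = []
    for (_, a), (t1, b) in zip(samples, samples[1:]):
        if a != b:
            events.append((t1, a, b))          # identity changed by time t1
    return events
```

Plot the event timestamps against your TTL and idle-gap boundaries; flips that line up with neither are the mid-session rotations the fail signals describe.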
Pass signals
• Identity holds for the expected TTL window.
• Rotation events align with TTL boundaries or idle timeouts.
Fail signals
• Mid-session identity flips under steady keep-alive.
• TTL collapses as concurrency increases even if RPS stays steady.
Artifacts
• Session timeline (identity vs time)
• Connection reuse stats
If your workload depends on stable exit identity, calibrate expectations against what “residential pool” actually implies in your environment using Residential Proxies.
Gate 4 proves scaling does not depend on hidden port or auth caps
Goal
Detect whether scaling requires multiple ports or credentials, and whether per-port ceilings exist.
Method
• Repeat Gate 1 with one port versus multiple ports at the same total concurrency.
• If available, compare one credential versus multiple credentials.
Pass signals
• Multi-port improves throughput without spiking 429.
• No auth failures correlated with concurrency.
Fail signals
• One port hits a hard ceiling far below expectation.
• Auth errors or disconnects appear as you scale.
Artifacts
• Single-port vs multi-port A/B summary
• Auth and connect error logs
If your results suggest you need churn-friendly throughput instead of sticky TTL, record that as a decision boundary and compare against a rotation-shaped product like Rotating Residential Proxies.
Gate 5 proves soak stability and avoids slow-burn failures
Goal
Prove the pool does not decay over time: rising blocks, depletion, or queueing that appears only after sustained load.
Method
• Soak at intended concurrency for 10–20 minutes.
• Track drift: success %, 403 and 429 rates, p95 trend.
Pass signals
• Metrics stay within a tight band.
• Errors are bursty and recover with backoff.
Fail signals
• Success rate trends down while 403 or 429 trends up.
• p95 climbs steadily run-over-run.
Artifacts
• Soak time series (success, 429, 403, p95)
• Any exit grouping evidence you can observe
How to interpret ambiguous results without guessing
When you hit a 429 wall, separate “destination says slow down” from “provider enforces fairness.”
More likely destination rate limiting
• Clear Retry-After patterns and stable latency until a threshold
• Improvement when you slow down per target
More likely provider shaping or fairness
• Throughput flatlines regardless of pacing changes
• Latency inflates broadly, then improves when you add ports or endpoints
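The decision rules above can be collapsed into a rough classifier. The function and its thresholds are assumptions to calibrate per target, not a definitive test:

```python
def classify_429(retry_after_rate, paced_429_drop, pacing_helps_p95):
    """Rough 429 attribution heuristic; thresholds are guesses to calibrate."""
    # Strong destination signal: Retry-After is common and pacing restores stability
    if retry_after_rate > 0.5 and paced_429_drop and pacing_helps_p95:
        return "destination rate limiting"
    # Pacing changes nothing: points at provider shaping or fairness
    if not paced_429_drop and not pacing_helps_p95:
        return "provider shaping or fairness"
    return "mixed or inconclusive"
```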
If you need a ground truth reference for HTTP semantics, use RFC 9110.
Go or no-go checklist you can apply in one minute
• Throughput scales with concurrency until a predictable ceiling
• 429 behavior is explainable and responds to pacing and backoff
• Session TTL matches your session needs under keep-alive
• No hidden port or auth caps block scale
• Soak run is stable with no progressive decay
• Evidence bundle collected and comparable across runs
FAQ
1. Do I need to test multiple targets?
Yes, but run them separately. One target class per run keeps your signals interpretable.
2. Should I retry 429 immediately?
No. Use exponential backoff with jitter and respect Retry-After when present.
3. What is a clean go signal?
Stable soak metrics, predictable scaling, and 429 behavior that responds to pacing.
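The FAQ's backoff guidance (exponential backoff with jitter, honoring Retry-After when present) reduces to a small delay calculator. A hypothetical sketch; the base and cap defaults are assumptions:

```python
import random

def backoff_delay(attempt, base=1.0, cap=60.0, retry_after=None, rng=random.random):
    """Exponential backoff with full jitter; a Retry-After hint takes precedence."""
    if retry_after is not None:
        return float(retry_after)              # respect the server's hint
    return rng() * min(cap, base * (2 ** attempt))
```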