You fixed the TLS fingerprint. You sorted out the Canvas spoofing. Your navigator object looks spotless. You're past the first three layers of detection.
And then Akamai blocks you anyway. Thirty seconds into the session.
Welcome to behavioral fingerprinting.
What behavioral analysis actually looks at
The first three detection layers — TLS, HTTP headers, JavaScript fingerprinting — are stateless. They analyze a single snapshot: what does this connection look like right now?
Behavioral analysis is different. It builds a model over time. It watches what you do, how you do it, and whether it looks like something a human would do.
The signals it collects:
- Mouse trajectory, speed and acceleration between clicks
- Time between keystrokes, and the distribution of that timing
- Scroll patterns — speed, direction changes, momentum
- Time between page load and first interaction
- Navigation flow — which elements get hovered before being clicked
- Idle periods — how long between actions, and what happens during them
None of these signals are individually conclusive. Combined over a session, they build a behavioral profile that's surprisingly hard to fake.
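To make the idea concrete, here is a sketch of the detection side: aggregating raw event streams into session-level features that separate mechanical from human behavior. The feature names and the toy data are illustrative assumptions, not any vendor's actual model.

```python
import statistics

def session_features(mouse_speeds, keystroke_gaps_ms):
    """Summarize raw event streams into the kind of aggregate
    features a behavioral model might score (illustrative only)."""
    gaps = sorted(keystroke_gaps_ms)
    return {
        # Bots often move at near-constant velocity: tiny speed variance.
        "speed_stdev": statistics.stdev(mouse_speeds),
        # Humans have a long tail of slow keystrokes; bots cluster tightly.
        "gap_p95_over_median": gaps[int(0.95 * (len(gaps) - 1))]
                               / statistics.median(gaps),
    }

# A constant-speed, constant-cadence session vs. a noisy human one
bot = session_features([5.0] * 20 + [5.01], [80.0] * 30 + [80.5])
human = session_features(
    [1, 4, 9, 7, 3, 2, 8, 6, 5, 4],
    [60, 75, 90, 420, 65, 80, 1100, 70, 85, 95],
)
```

Neither feature is conclusive on its own, which is exactly the point of the list above: the model scores the joint distribution, not any single number.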
The mouse problem
A bot moving a mouse from point A to point B takes the shortest path. Constant speed. Perfectly straight line. No hesitation, no correction, no overshoot.
No human does that.
Human mouse movement has a characteristic shape: slow acceleration at the start, peak speed in the middle, deceleration as you approach the target. The path curves. Sometimes you overshoot and correct. In the final approach, there's micro-tremor — tiny random movements as your hand steadies.
The mathematical model that approximates this is a cubic Bezier curve with randomized control points, combined with a sine-based speed profile:
```python
import math
import random

def bezier_path(p0, p3, steps=40):
    dx = p3[0] - p0[0]
    dy = p3[1] - p0[1]
    dist = math.hypot(dx, dy)
    deviation = dist * random.uniform(0.15, 0.40)
    angle = math.atan2(dy, dx)
    perp = angle + math.pi / 2
    side = random.choice([-1, 1])
    p1 = (
        p0[0] + dx * 0.25 + math.cos(perp) * deviation * side,
        p0[1] + dy * 0.25 + math.sin(perp) * deviation * side,
    )
    p2 = (
        p0[0] + dx * 0.75 + math.cos(perp) * deviation * side * 0.3,
        p0[1] + dy * 0.75 + math.sin(perp) * deviation * side * 0.3,
    )
    points = []
    for i in range(steps + 1):
        t = i / steps
        mt = 1 - t
        x = mt**3 * p0[0] + 3 * mt**2 * t * p1[0] + 3 * mt * t**2 * p2[0] + t**3 * p3[0]
        y = mt**3 * p0[1] + 3 * mt**2 * t * p1[1] + 3 * mt * t**2 * p2[1] + t**3 * p3[1]
        # Sine speed profile: slow-fast-slow (speed is zero at both endpoints)
        speed = math.sin(t * math.pi)
        delay = 10.0 / (speed + 0.12)
        # Micro-tremor in the final approach
        if t > 0.85:
            x += random.gauss(0, 0.7)
            y += random.gauss(0, 0.7)
            delay *= random.uniform(1.3, 2.2)
        points.append((x, y, delay))
    return points
```
The sine speed profile is the key. It produces the natural deceleration near the target that's characteristic of human motor control. Without it, the movement looks mechanical even if the path curves.
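To see this in isolation, the per-step delays implied by the slow-fast-slow profile (speed = sin(t·π), zero at both endpoints) can be computed standalone. The 10 ms base and 0.12 floor match the constants used above:

```python
import math

def sine_delays(steps=40, base_ms=10.0):
    """Per-step delay from a slow-fast-slow sine speed profile."""
    delays = []
    for i in range(steps + 1):
        t = i / steps
        speed = math.sin(t * math.pi)  # 0 at endpoints, 1 at midpoint
        delays.append(base_ms / (speed + 0.12))
    return delays

delays = sine_delays()
# Endpoint steps are roughly 9x slower than the midpoint step:
# delays[0] and delays[40] are ~83 ms, delays[20] is ~8.9 ms
```

The 0.12 floor matters too: without it, speed hits zero at the endpoints and the delay diverges, so the cursor would never actually arrive.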
Overshooting
About 30% of the time, humans overshoot their target and correct. Your bot should do the same:
```python
async def move_to(self, x, y):
    # ~30% of moves overshoot the target, pause briefly, then correct
    if random.random() < 0.30:
        await self._execute_move(
            x + random.uniform(-20, 20),
            y + random.uniform(-15, 15),
        )
        await asyncio.sleep(random.uniform(0.08, 0.20))
    await self._execute_move(x, y)
```
30% is calibrated from observational data on human mouse behavior. Too high and it looks like a nervous tic. Too low and you lose the behavioral diversity that makes the pattern look human.
Typing speed: log-normal, not uniform
A bot typing at constant speed is one of the oldest and most reliable detection signals. The fix is obvious — add random delays between keystrokes.
But random uniform delays still don't look human. Human typing speed follows a log-normal distribution — most keystrokes happen in a typical range, but there's a long tail of slower keystrokes when you pause to think, notice a mistake, or just lose focus for a moment.
```python
import numpy as np

def typing_delay() -> float:
    # Log-normal: median ~66 ms, long right tail, clamped to 28-400 ms
    return float(np.clip(np.random.lognormal(mean=4.2, sigma=0.5), 28, 400))
```
On top of the base delay, about 4% of keystrokes should trigger a longer pause — the distraction event where you look away from the screen for a second:
```python
delay_ms = typing_delay()
if random.random() < 0.04:
    # Distraction event: a long pause on top of the base delay
    delay_ms += random.uniform(400, 1400)
await asyncio.sleep(delay_ms / 1000)
```
Typos
Humans make typing errors. Bots don't. A session with zero typos across hundreds of keystrokes is statistically suspicious.
The realistic typo model uses a QWERTY neighbor map:
```python
QWERTY = {
    "a": ["q", "w", "s", "z"], "s": ["a", "w", "e", "d", "x", "z"],
    "d": ["s", "e", "r", "f", "c", "x"],
    # ...
}

def get_typo(char: str):
    neighbors = QWERTY.get(char.lower())
    if not neighbors:
        return None
    wrong = random.choice(neighbors)
    return wrong.upper() if char.isupper() else wrong
```
Typo rate around 4% feels natural. Below 1% starts to look suspicious. Above 8% looks like a bad typist — which is fine, humans vary.
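Putting the pieces together, a typing routine can emit the wrong neighbor, then a Backspace, then the intended character. A sketch that builds the key-event sequence (the neighbor map is abbreviated as above; `key_events` is a hypothetical helper, and a real driver would add the log-normal delays between events):

```python
import random

QWERTY = {
    "a": ["q", "w", "s", "z"], "s": ["a", "w", "e", "d", "x", "z"],
    "d": ["s", "e", "r", "f", "c", "x"],
}

def key_events(text: str, typo_rate: float = 0.04) -> list:
    """Expand a string into key events, occasionally typing a
    QWERTY neighbor by mistake and correcting it with Backspace."""
    events = []
    for ch in text:
        neighbors = QWERTY.get(ch.lower())
        if neighbors and random.random() < typo_rate:
            events.append(random.choice(neighbors))  # the slip
            events.append("Backspace")               # notice and fix
        events.append(ch)                            # intended key
    return events
```

Because every slip is followed by its correction, replaying the events always reproduces the intended text, while the keystroke log shows the human-looking error-and-fix pattern.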
Idle behavior
A session where every second is productive action is not a human session. Humans pause, scroll aimlessly, move the mouse for no reason, hover over things without clicking them.
```python
async def idle(self, duration_s: float):
    end = time.time() + duration_s
    while time.time() < end:
        if random.random() < 0.35:
            action = random.choices(
                ["move", "scroll_tiny", "hover", "tremor", "pause"],
                weights=[3, 2, 2, 2, 3],
            )[0]
            await self._execute_idle_action(action)
        else:
            await asyncio.sleep(random.uniform(0.3, 1.2))
```
The 35% activity rate during idle periods is empirically reasonable. Higher and the bot never stops moving. Lower and the session looks catatonic between actions.
The timing between actions
The distribution of time between actions matters as much as the actions themselves.
Human session timing follows a roughly 80/20 pattern: most transitions happen quickly (1-3 seconds), but about 20% involve a longer pause — reading, thinking, getting distracted.
```python
async def wait_between_actions(self, long: bool = False):
    if long or random.random() > 0.80:
        # Long pause: reading, thinking, distraction (median ~6 s)
        delay = float(np.random.lognormal(mean=1.8, sigma=0.6))
    else:
        # Quick transition between actions (median ~1.3 s)
        delay = float(np.random.lognormal(mean=0.3, sigma=0.4))
    await asyncio.sleep(max(0.5, min(delay, 45.0)))
```
What you can't fully fake
At its most sophisticated, behavioral analysis builds a model of you specifically, not just of humans in general.
The most advanced systems don't just ask "does this look human?" They ask "does this look like the same human who was here yesterday?" A new session with suspiciously perfect behavioral patterns — no warmup, immediately optimal mouse paths — can look wrong even if every individual signal is within human range.
The mitigation is session warmup: spend time doing idle activity before starting meaningful work, and build browsing history before hitting the target site.
```python
await browser.warm_history(
    sites=["https://www.google.com", "https://www.wikipedia.org"],
    dwell_s=10.0,
)
await browser.warmup(duration_s=5.0)
await browser.goto("https://target.com")
```
The honest ceiling
Behavioral fingerprinting is the hardest layer to beat because it's the hardest to specify. TLS has a defined format. Canvas fingerprinting has a defined algorithm. Human behavior doesn't — it's noisy, variable, and the detection systems are trained on real data you don't have access to.
The goal isn't to perfectly simulate a human. It's to be indistinguishable within the confidence bounds of the detection system. Those bounds are finite and different for every target.
Getting close enough is an engineering problem, not an unsolvable one. But it requires understanding what you're actually trying to match — not just adding random delays and hoping for the best.
This series covered the three main detection layers: TLS fingerprinting, Canvas/WebGL spoofing, and behavioral analysis. The common thread: coherence across all layers matters more than perfecting any single one.