Vhub Systems
How to Bypass Akamai Bot Detection in 2026

Akamai is not Cloudflare.

If you have spent any time scraping at scale, you know Cloudflare. You have probably beaten its JavaScript challenges with undetected-chromedriver or cloudscraper. Maybe you have even automated past its CAPTCHA walls.

Akamai is a different beast. And if you are hitting Akamai-protected sites without understanding what you are doing, you are going to have a very bad time.

Why Akamai Is Harder Than Cloudflare

Cloudflare primarily checks:

  • Browser characteristics (User-Agent, headers)
  • JavaScript challenge execution
  • Cookie validity and fingerprint
  • IP reputation

Akamai adds layers that most scrapers never see coming:

TLS Fingerprinting (the big one). Akamai fingerprints your TLS Client Hello, the first packet of the TLS handshake, before any HTTP traffic happens. (This check runs at the network edge; sensor.js is Akamai's separate client-side behavioral script, which comes later.) The fingerprint reveals your HTTP library and version through its cipher suite ordering, TLS extensions, and elliptic curve preferences. Standard requests, httpx, and even urllib3 have recognizable TLS fingerprints that Akamai catalogs.

HTTP/2 Fingerprinting. Akamai reads the HTTP/2 settings, window updates, and frame ordering of your connection. If your client does not behave like a real browser, you are flagged before the first request completes.
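To make the HTTP/2 signal concrete: Akamai's passive HTTP/2 fingerprinting research describes a fingerprint of the form SETTINGS values, initial WINDOW_UPDATE increment, PRIORITY frames, and pseudo-header order, joined with pipes. The sketch below builds that string shape from its components; the specific values are illustrative, not guaranteed to match any real browser build.

```python
# Sketch of the HTTP/2 fingerprint shape from Akamai's passive fingerprinting
# research: SETTINGS | WINDOW_UPDATE | PRIORITY | pseudo-header order.
# Values below are illustrative examples, not a verified browser capture.

def http2_fingerprint(settings: dict[int, int],
                      window_update: int,
                      priority: str,
                      pseudo_header_order: list[str]) -> str:
    # SETTINGS frame entries as "id:value", in the order the client sent them
    settings_part = ";".join(f"{k}:{v}" for k, v in settings.items())
    return f"{settings_part}|{window_update}|{priority}|{','.join(pseudo_header_order)}"

# Chrome-like example values (illustrative)
fp = http2_fingerprint(
    settings={1: 65536, 3: 1000, 4: 6291456, 6: 262144},
    window_update=15663105,
    priority="0",  # "0" = no PRIORITY frames sent
    pseudo_header_order=["m", "a", "s", "p"],  # :method, :authority, :scheme, :path
)
print(fp)  # 1:65536;3:1000;4:6291456;6:262144|15663105|0|m,a,s,p
```

Two clients can send identical headers and still produce different strings here, which is exactly why header rotation alone does not help.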

Behavioral Analysis. Beyond the headers, Akamai tracks mouse movement, scroll patterns, click timing, and session navigation flows. Headless browsers without human-like behavior are easy to identify.

Bot Signals Heuristics. Akamai assigns a bot score across dozens of signals — JA3 hash mismatch, missing HTTP/2 pseudo-headers, unusual header ordering, missing browser extensions, WebGL vendor mismatches.
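The JA3 hash mentioned above is simply an MD5 over a comma-separated summary of the Client Hello: TLS version, cipher suites, extensions, elliptic curves, and point formats, each as dash-joined decimal values. A minimal sketch (the numeric inputs below are illustrative, not a real browser's Client Hello):

```python
import hashlib

# JA3 string format: SSLVersion,Ciphers,Extensions,EllipticCurves,PointFormats
# where each list field is "-"-joined decimal values from the Client Hello.
def ja3_hash(version: int, ciphers: list[int], extensions: list[int],
             curves: list[int], point_formats: list[int]) -> str:
    fields = [str(version)] + [
        "-".join(map(str, values))
        for values in (ciphers, extensions, curves, point_formats)
    ]
    ja3_string = ",".join(fields)  # e.g. "771,4865-4866-...,0-23-...,29-23-24,0"
    return hashlib.md5(ja3_string.encode()).hexdigest()

# Reordering ciphers changes the hash, so two clients offering the same
# ciphers in different order are distinguishable.
h1 = ja3_hash(771, [4865, 4866, 4867], [0, 23, 65281], [29, 23, 24], [0])
h2 = ja3_hash(771, [4866, 4865, 4867], [0, 23, 65281], [29, 23, 24], [0])
print(h1 != h2)  # True
```

This is why cipher suite ordering, not just the cipher list, is part of the fingerprint.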

The result: a standard curl or Python requests call will hit 100% detection. A Selenium headless browser will hit ~80% detection. A real browser with proper TLS impersonation + residential proxies will typically get through.

What Actually Works in 2026

After testing against multiple Akamai-protected endpoints, here is what holds up:

1. curl-cffi with Chrome Impersonation

The single biggest improvement you can make. curl-cffi is a Python library that uses libcurl under the hood and can impersonate real browser TLS fingerprints.

pip install curl-cffi
from curl_cffi import requests

# Impersonate Chrome 124 on Windows — one of the most common browser fingerprints
# This changes your TLS client hello, JA3 hash, and HTTP/2 fingerprint
session = requests.Session()

response = session.get(
    "https://target-akamai-site.com/api/data",
    impersonate="chrome124",  # or "chrome120", "edge101", "safari16_5"
    proxies={"https": "http://username:password@proxy-provider:8080"},  # dict, not a bare string
    timeout=30,
    headers={
        "Accept-Language": "en-US,en;q=0.9",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
    }
)

print(response.status_code)
print(response.text[:500])

The impersonate parameter does the heavy lifting. When you specify chrome124, curl-cffi:

  • Sends a TLS Client Hello that matches Chrome 124's exact cipher suite ordering
  • Uses the same TLS extensions (ALPN, session tickets, supported curves)
  • Sets the correct JA3 hash
  • Impersonates HTTP/2 connection behavior

This alone drops detection from 100% to sometimes 0% on sites that only check TLS fingerprints.

2. Residential Proxies — Not Datacenter

Even with perfect TLS impersonation, your IP matters. Akamai cross-references IP ranges against known datacenter blocks, VPN exit nodes, and proxy provider ranges.

Residential proxies route traffic through real exit IP addresses assigned to ISPs. They are more expensive (~$12-16/GB vs $2-3/GB for datacenter) but they work.

from curl_cffi import requests

# A pool of residential proxy URLs (credentials and hosts are placeholders)
proxy_pool = [
    "http://residential-user:proxy-pass@residential-proxy.example.com:8080",
    "http://residential-user:proxy-pass@residential-proxy.example.com:8081",
]

session = requests.Session()

# Rotate through the pool until one proxy gets through
for proxy in proxy_pool:
    try:
        response = session.get(
            "https://akamai-protected-site.com",
            impersonate="chrome124",
            proxies={"https": proxy},
            timeout=15,
        )
        if response.status_code == 200:
            print(f"Success with proxy: {proxy}")
            break
    except Exception as e:
        print(f"Failed with {proxy}: {e}")
        continue

3. Never Use Standard Selenium Without Modifications

Selenium with a stock ChromeDriver is dead on arrival against Akamai. Here is why:

  • ChromeDriver sets navigator.webdriver to true, detectable in one line of JavaScript
  • The browser binary carries detectable automation markers (CDP artifacts, automation switches)
  • The TLS fingerprint is genuine Chrome, but headless mode leaks other signals (missing plugins, rendering quirks, "HeadlessChrome" in older headless builds) that Akamai's sensor script picks up

If you must use a browser automation approach:

# Example: selenium-stealth hardening (undetected-chromedriver or playwright-stealth are alternatives)
from selenium import webdriver
from selenium_stealth import stealth

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")
options.add_argument("--disable-blink-features=AutomationControlled")
options.add_argument("--no-sandbox")
options.add_argument("--disable-dev-shm-usage")
options.add_argument("--disable-gpu")
# Remove webdriver flag
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option("useAutomationExtension", False)

driver = webdriver.Chrome(options=options)

stealth(driver,
    languages=["en-US", "en"],
    vendor="Google Inc.",
    webgl_vendor="Intel Inc.",
    renderer="Intel Iris OpenGL Engine",
    fix_hairline=True,
)

driver.get("https://akamai-protected-site.com")

But even this is beatable. The curl-cffi approach is more reliable for API endpoints and static content.

What Does NOT Work

Standard requests / httpx

The TLS fingerprint is completely wrong. Akamai identifies these in milliseconds.

# This will get you blocked immediately
import requests
resp = requests.get("https://akamai-site.com")  # 100% detection

Datacenter Proxies on Known Ranges

Akamai maintains one of the largest IP reputation databases in the world. Datacenter IPs from AWS, DigitalOcean, Hetzner, etc. are pre-blocked or heavily scored down.
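A cheap pre-flight check is to screen your own proxy pool against published datacenter CIDR blocks before burning requests. The CIDRs below are illustrative placeholders; real lists come from provider feeds such as AWS's published ip-ranges.json.

```python
import ipaddress

# Illustrative datacenter CIDR blocks -- in practice, load these from
# provider feeds (e.g. AWS's ip-ranges.json) and commercial IP databases.
DATACENTER_RANGES = [
    ipaddress.ip_network("3.0.0.0/8"),      # AWS-assigned space (partial, illustrative)
    ipaddress.ip_network("159.69.0.0/16"),  # Hetzner (illustrative)
]

def looks_like_datacenter(ip: str) -> bool:
    """Return True if the IP falls inside any known datacenter range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in DATACENTER_RANGES)

print(looks_like_datacenter("3.14.15.9"))    # True
print(looks_like_datacenter("203.0.113.7"))  # False (TEST-NET documentation range)
```

If a "residential" proxy exit resolves into one of these ranges, the provider is reselling datacenter IPs and Akamai will score it accordingly.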

undetected-chromedriver Alone

It used to work. The cat-and-mouse game has moved on. You need TLS impersonation AND good proxies.

Rotating User-Agents Without TLS Fix

User-Agent rotation only addresses one signal. Akamai reads dozens. Changing the UA while keeping the same TLS fingerprint is like changing your disguise but leaving your fingerprints everywhere.
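curl-cffi's impersonation generally sends a matching User-Agent for you, but if you set headers yourself, keep them paired with the TLS profile so the two signals never disagree. A minimal sketch, with illustrative UA strings (Chrome freezes the minor version, hence the .0.0.0):

```python
# Keep the User-Agent consistent with the impersonated TLS profile.
# Mapping is illustrative; verify against the profiles your curl-cffi
# version actually supports.
PROFILE_USER_AGENTS = {
    "chrome124": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/124.0.0.0 Safari/537.36"),
    "chrome120": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/120.0.0.0 Safari/537.36"),
}

def headers_for(profile: str) -> dict:
    """Build headers whose UA agrees with the given impersonation profile."""
    return {
        "User-Agent": PROFILE_USER_AGENTS[profile],
        "Accept-Language": "en-US,en;q=0.9",
    }

print(headers_for("chrome124")["User-Agent"])
```

Rotating UAs from this table without also switching the impersonate profile recreates the exact mismatch this section warns about.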

Complete Working Example

Here is a functional scraper pattern for an Akamai-protected API:

import time
import random
from curl_cffi import requests
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProxyConfig:
    host: str
    port: int
    username: str
    password: str

    def as_url(self) -> str:
        return f"http://{self.username}:{self.password}@{self.host}:{self.port}"

@dataclass
class ScraperConfig:
    target_url: str
    proxies: list[ProxyConfig]
    browser_profile: str = "chrome124"
    max_retries: int = 3
    retry_delay: float = 2.0

    def create_session(self, proxy: ProxyConfig) -> requests.Session:
        session = requests.Session()
        # Set browser-like headers
        session.headers.update({
            "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8",
            "Accept-Language": "en-US,en;q=0.9",
            "Accept-Encoding": "gzip, deflate, br",
            "DNT": "1",
            "Upgrade-Insecure-Requests": "1",
        })
        return session

    def scrape(self) -> Optional[requests.Response]:
        for attempt in range(self.max_retries):
            proxy = random.choice(self.proxies)
            try:
                session = self.create_session(proxy)
                response = session.get(
                    self.target_url,
                    impersonate=self.browser_profile,
                    proxies={"https": proxy.as_url()},
                    timeout=20,
                    allow_redirects=True,
                )
                # Check for Akamai block page indicators
                # (note: "cf-browser-verification" is a Cloudflare marker, not Akamai)
                body = response.text.lower()
                if "access denied" in body and "reference #" in body:
                    print(f"Attempt {attempt+1}: Akamai challenge detected")
                    time.sleep(self.retry_delay * (attempt + 1))
                    continue
                if response.status_code == 403:
                    print(f"Attempt {attempt+1}: 403 Forbidden")
                    time.sleep(self.retry_delay * (attempt + 1))
                    continue
                print(f"Success! Status: {response.status_code}")
                return response
            except Exception as e:
                print(f"Attempt {attempt+1} failed: {e}")
                time.sleep(self.retry_delay)
        return None

# Usage
proxies = [
    ProxyConfig("us-ca.proxyscrape.com", 8080, "user1", "pass1"),
    ProxyConfig("us-ny.proxyscrape.com", 8080, "user2", "pass2"),
]

config = ScraperConfig(
    target_url="https://example-akamai-protected-site.com/api/pricing",
    proxies=proxies,
)

result = config.scrape()
if result:
    print(result.text[:500])

Quick Diagnostic Checklist

Before going live with a scraper against an Akamai-protected site:

  • [ ] Using curl-cffi with impersonate="chrome124" or similar? (Not standard requests)
  • [ ] Routing through residential proxies, not datacenter?
  • [ ] IP not on known VPN/datacenter block lists?
  • [ ] Headers look like a real browser visit (Accept-Language, DNT, Upgrade-Insecure-Requests)?
  • [ ] No Selenium WebDriver properties in your requests?
  • [ ] JA3 hash matches the browser you are impersonating?

If you answered no to any of the above, you will get blocked.
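One more item worth automating in that checklist: detecting when you did get blocked. Akamai's standard denial page contains "Access Denied" and a "Reference #" error code, often served with a 200 as well as a 403. A small heuristic detector, assuming those markers:

```python
# Heuristic detector for Akamai's standard block page.
# Assumption: the denial page contains "Access Denied" plus a "Reference #".
def is_akamai_block(status_code: int, body: str) -> bool:
    if status_code == 403:
        return True  # treat hard 403s as blocks, matching the retry logic above
    text = body.lower()
    return "access denied" in text and "reference #" in text

blocked = is_akamai_block(
    200,
    "<html><title>Access Denied</title>... Reference #18.abc123 ...</html>",
)
print(blocked)  # True
```

Wiring this into your retry loop catches soft blocks that a status-code check alone would report as success.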

Why Apify Actors Make This Easier

If you do not want to manage proxies, TLS fingerprints, and retry logic yourself, Apify has actors pre-configured for sites with Akamai protection. The smart-proxy-chrome-crawler actor handles rotation, browser impersonation, and proxy management out of the box. For a general-purpose contact info scraper that handles common anti-bot systems including Akamai, try this actor: https://apify.com/apify/contact-info-scraper

Summary

Akamai is not unbeatable, but it requires understanding the full stack of detection signals:

  1. Fix TLS first — use curl-cffi with browser impersonation
  2. Fix your IP — residential proxies, not datacenter
  3. Fix your headers — browser-like Accept, Accept-Language, DNT
  4. Avoid Selenium unless you know what you are doing — it is detectable without additional hardening

The combination of curl-cffi + residential proxies handles 90% of Akamai-protected targets. The remaining 10% require browser automation with stealth plugins — but that is a different article.


This article was written as part of the V-System content engine. If you are building data collection pipelines and hitting bot detection walls, the pattern here applies broadly: understand what the defender is actually checking, and fix the root cause, not the symptoms.


Related Tools

n8n AI Automation Pack ($39) — 5 production-ready workflows

Skip the setup

Pre-built scrapers with Akamai bypass built in:

Apify Scrapers Bundle — $29 one-time

35+ actors, instant download. Handle anti-bot automatically.
