Alex Spinov
requests vs httpx vs aiohttp — I Benchmarked All 3 (Results Surprised Me)

I test HTTP libraries for a living

I maintain 77 web scrapers. Each one makes thousands of HTTP requests per run. The choice of HTTP library matters.

So I benchmarked the three most popular Python HTTP libraries head-to-head.


The contenders

| Library  | Async | HTTP/2 | Connection Pooling | Stars |
|----------|-------|--------|--------------------|-------|
| requests | No    | No     | Yes                | 52k   |
| httpx    | Yes   | Yes    | Yes                | 13k   |
| aiohttp  | Yes   | No     | Yes                | 15k   |

Test setup

import time
import requests
import httpx
import aiohttp
import asyncio

URL = 'https://httpbin.org/get'
N = 100  # requests per test

def bench_requests():
    start = time.perf_counter()  # perf_counter is the right clock for timing
    with requests.Session() as session:  # a Session reuses TCP connections
        for _ in range(N):
            session.get(URL)
    return time.perf_counter() - start

def bench_httpx_sync():
    start = time.perf_counter()
    with httpx.Client() as client:
        for _ in range(N):
            client.get(URL)
    return time.perf_counter() - start

async def bench_httpx_async():
    start = time.perf_counter()
    async with httpx.AsyncClient() as client:
        # fire all N requests concurrently, then wait for every response
        tasks = [client.get(URL) for _ in range(N)]
        await asyncio.gather(*tasks)
    return time.perf_counter() - start

async def bench_aiohttp():
    start = time.perf_counter()
    async with aiohttp.ClientSession() as session:
        tasks = [session.get(URL) for _ in range(N)]
        responses = await asyncio.gather(*tasks)
        for r in responses:
            await r.read()  # aiohttp streams bodies; read to release the connection
    return time.perf_counter() - start

Results (100 requests to httpbin.org)

| Library  | Mode  | Time  | Req/sec |
|----------|-------|-------|---------|
| requests | sync  | 45.2s | 2.2     |
| httpx    | sync  | 47.8s | 2.1     |
| httpx    | async | 3.1s  | 32.3    |
| aiohttp  | async | 2.8s  | 35.7    |

The surprise

Sync httpx is actually SLOWER than requests. And it's not HTTP/2's fault: HTTP/2 is opt-in in httpx (`httpx.Client(http2=True)`, with the `httpx[http2]` extra installed), so the gap comes from httpx's more complex internals adding a little per-request overhead in synchronous mode.

But async changes everything. Both httpx and aiohttp are roughly 15x faster than sync requests because they send all 100 requests concurrently instead of waiting for each response before starting the next.
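Firing everything at once is fine for 100 URLs, but in real scrapers you usually want to cap concurrency so you don't hammer the target server or exhaust file descriptors. Here's a minimal sketch using `asyncio.Semaphore`; `bounded_gather` is a helper name I'm introducing, not part of httpx or aiohttp:

```python
import asyncio

async def bounded_gather(coros, limit=10):
    # cap how many coroutines run at once; the rest wait their turn
    sem = asyncio.Semaphore(limit)

    async def run(coro):
        async with sem:
            return await coro

    # gather preserves input order in its results
    return await asyncio.gather(*(run(c) for c in coros))
```

With httpx you'd call it as `await bounded_gather([client.get(u) for u in urls], limit=20)`.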


My recommendation

Simple scripts, <50 requests → requests (everyone knows it)
Async apps or >100 requests → httpx (modern API, HTTP/2)
Maximum performance         → aiohttp (fastest, more complex API)
Web scraping                → httpx (best balance of speed + simplicity)

For my 77 scrapers, I use:

  • httpx for most new projects (clean API, async support, HTTP/2)
  • requests for quick scripts and prototypes
  • aiohttp only when I need absolute maximum throughput

Code: httpx async in 5 lines

import httpx
import asyncio

async def main():
    async with httpx.AsyncClient() as client:
        urls = [f'https://api.example.com/items/{i}' for i in range(100)]
        tasks = [client.get(url) for url in urls]
        responses = await asyncio.gather(*tasks)
        print(f'Got {len(responses)} responses')

asyncio.run(main())

That's it. 100 concurrent requests in 5 lines.
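One caveat with plain `asyncio.gather`: a single failed request raises and cancels the whole batch. For scraping I'd pass `return_exceptions=True` and split the results afterwards. A sketch; `gather_safely` is my own helper name, not an httpx API:

```python
import asyncio

async def gather_safely(coros):
    # return_exceptions=True turns failures into returned values
    # instead of cancelling the remaining in-flight requests
    results = await asyncio.gather(*coros, return_exceptions=True)
    ok = [r for r in results if not isinstance(r, BaseException)]
    failed = [r for r in results if isinstance(r, BaseException)]
    return ok, failed
```

In the example above: `responses, errors = await gather_safely([client.get(url) for url in urls])`.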


Which HTTP library do you use and why? I'm curious if anyone has real-world benchmarks that differ from mine.


I build web scrapers and write about Python performance. Follow for more benchmarks.
