I stopped using the requests library 2 years ago. httpx does everything requests does, plus async, HTTP/2, and better defaults.
Here is why and how to switch.
Install
```bash
pip install httpx
```
A near drop-in replacement: most requests code works with zero changes.
The Basics
```python
import httpx

# GET request (identical to requests)
resp = httpx.get("https://httpbin.org/get")
print(resp.status_code)  # 200
print(resp.json())

# POST with JSON
resp = httpx.post("https://httpbin.org/post", json={"key": "value"})

# Custom headers
resp = httpx.get(
    "https://api.github.com/user",
    headers={"Authorization": "Bearer token"},
)
```
Why httpx > requests
1. Async Support (Built-in)
```python
import asyncio

import httpx

async def fetch_all(urls):
    async with httpx.AsyncClient() as client:
        tasks = [client.get(url) for url in urls]
        responses = await asyncio.gather(*tasks)
        return [r.json() for r in responses]

# Fetch 10 URLs concurrently instead of sequentially
urls = [f"https://hacker-news.firebaseio.com/v0/item/{i}.json" for i in range(1, 11)]
results = asyncio.run(fetch_all(urls))
```
To get async with requests you had to switch to aiohttp (a different API) or fall back on threads. With httpx it is the same API, just with async/await.
2. HTTP/2 Support
```python
import httpx

# HTTP/2 needs the optional extra: pip install 'httpx[http2]'
client = httpx.Client(http2=True)
resp = client.get("https://www.google.com")
print(resp.http_version)  # "HTTP/2"
```
HTTP/2 multiplexes requests over a single connection. Faster for APIs that support it.
3. Better Timeouts
```python
import httpx

# requests: one number (or a (connect, read) tuple) covers everything
# httpx: an explicit timeout per phase of the request
client = httpx.Client(timeout=httpx.Timeout(
    connect=5.0,  # time to establish the connection
    read=10.0,    # time to receive the response
    write=5.0,    # time to send the request
    pool=5.0,     # time waiting for a connection from the pool
))
```
4. Built-in Retry with Transport
```python
import httpx
from httpx import HTTPTransport

# Retries failed *connections* automatically (not error responses)
transport = HTTPTransport(retries=3)
client = httpx.Client(transport=transport)
resp = client.get("https://flaky-api.example.com/data")
```
Real-World Patterns
API Client
```python
import httpx

class APIClient:
    def __init__(self, base_url: str, token: str):
        self.client = httpx.Client(
            base_url=base_url,
            headers={"Authorization": f"Bearer {token}"},
            timeout=30.0,
        )

    def get_user(self, user_id: int):
        return self.client.get(f"/users/{user_id}").json()

    def create_item(self, data: dict):
        return self.client.post("/items", json=data).json()
```
Web Scraper
```python
import asyncio

import httpx

async def scrape_pages(urls: list[str]) -> list[str]:
    async with httpx.AsyncClient(
        follow_redirects=True,
        timeout=15.0,
        headers={"User-Agent": "Mozilla/5.0"},
    ) as client:
        tasks = [client.get(url) for url in urls]
        responses = await asyncio.gather(*tasks, return_exceptions=True)
        return [r.text for r in responses if not isinstance(r, Exception)]
```
Rate-Limited Client
```python
import asyncio

import httpx

class RateLimitedClient:
    def __init__(self, requests_per_second: float = 2.0):
        self.client = httpx.AsyncClient()
        self.delay = 1.0 / requests_per_second
        self._lock = asyncio.Lock()  # enforce the limit across concurrent tasks

    async def get(self, url: str):
        async with self._lock:
            await asyncio.sleep(self.delay)
        return await self.client.get(url)
```
Migration from requests
| requests | httpx |
|---|---|
| requests.get(url) | httpx.get(url) |
| requests.Session() | httpx.Client() |
| session.get(url) | client.get(url) |
| response.content | response.content |
| N/A | httpx.AsyncClient() |
| N/A | http2=True |
Most code is identical. Two differences to watch: httpx does not follow redirects by default (pass follow_redirects=True where you relied on that), and a Client should be closed explicitly (client.close()) or used as a context manager (with httpx.Client() as client:).
📧 spinov001@gmail.com — I build production web scrapers and API integrations.
Related: 10 Dev Tools I Use Daily | 77 Scrapers on a Schedule | 150+ Free APIs