I see this mistake every week on Stack Overflow:
> "I rewrote my script with async and it got SLOWER. Why?"
Because async Python is not about speed. It's about waiting efficiently.
Let me explain with actual numbers.
## The Experiment
Task: Make 100 HTTP requests to an API.
### Synchronous

```python
import httpx
import time

def sync_fetch(urls):
    results = []
    for url in urls:
        r = httpx.get(url, timeout=10)
        results.append(r.status_code)
    return results

urls = ['https://httpbin.org/delay/1'] * 100
start = time.time()
sync_fetch(urls)
print(f'Sync: {time.time() - start:.1f}s')  # ~105 seconds
```
### Async

```python
import httpx
import asyncio
import time

async def async_fetch(urls):
    async with httpx.AsyncClient() as client:
        tasks = [client.get(url, timeout=10) for url in urls]
        responses = await asyncio.gather(*tasks)
        return [r.status_code for r in responses]  # same output as the sync version

urls = ['https://httpbin.org/delay/1'] * 100
start = time.time()
asyncio.run(async_fetch(urls))
print(f'Async: {time.time() - start:.1f}s')  # ~2 seconds
```
105 seconds vs 2 seconds. That's a 50x improvement.
But here's the thing — the CPU work is identical. The async version is faster because it waits for all 100 responses simultaneously instead of one at a time.
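You can see the same effect without touching the network. In this sketch of mine, `asyncio.sleep` stands in for a 100 ms network wait; 100 fake "requests" run concurrently and finish in roughly the time of one wait, not the sum of all of them:

```python
import asyncio
import time

async def fake_request(i):
    await asyncio.sleep(0.1)  # stand-in for waiting on the network
    return i

async def main():
    start = time.perf_counter()
    # All 100 waits overlap on one event loop
    results = await asyncio.gather(*(fake_request(i) for i in range(100)))
    elapsed = time.perf_counter() - start
    print(f'{len(results)} fake requests in {elapsed:.2f}s')  # ~0.1s, not ~10s
    return results, elapsed

results, elapsed = asyncio.run(main())
```

No CPU was saved here; the event loop simply kept all 100 waits in flight at once.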
## When Async HELPS

- ✅ I/O-bound tasks: HTTP requests, database queries, file reads, WebSocket connections
- ✅ High concurrency: handling 1000+ simultaneous connections
- ✅ Web servers: FastAPI/Starlette vs Flask/Django (for I/O workloads)
- ✅ Scrapers: fetching many pages concurrently
## When Async HURTS

- ❌ CPU-bound tasks: data processing, image manipulation, ML inference
- ❌ Simple scripts: if you make 3 API calls, async adds complexity for negligible gain
- ❌ When libraries don't support it: mixing sync and async creates headaches
- ❌ When you forget that time.sleep() blocks: use await asyncio.sleep() instead (a common gotcha)
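On the "libraries don't support it" point: if you're stuck with a sync-only library inside async code, one common workaround is `asyncio.to_thread` (Python 3.9+), which runs the blocking call in a worker thread so the event loop stays responsive. A minimal sketch, with `blocking_lookup` standing in for any sync-only library call:

```python
import asyncio
import time

def blocking_lookup(key):
    # Pretend this is a sync-only library call (e.g. a legacy DB driver)
    time.sleep(0.1)
    return key.upper()

async def main():
    # Each blocking call runs in a thread; the event loop keeps running
    results = await asyncio.gather(
        asyncio.to_thread(blocking_lookup, "a"),
        asyncio.to_thread(blocking_lookup, "b"),
    )
    return results

results = asyncio.run(main())
print(results)  # ['A', 'B']
```

Threads work here because the call spends its time waiting; for CPU-bound work you'd want a process pool instead (see Mistake 1 below).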
## The Rule of Thumb

```
if task == "waiting for something":
    use async              # Network, disk, database
elif task == "computing something":
    use multiprocessing    # CPU-bound work
else:
    use sync               # Simple scripts, sequential logic
```
## Common Mistakes
### Mistake 1: Async with CPU work

```python
# BAD — async doesn't help here
async def process_image(image):
    return heavy_cpu_processing(image)  # Still blocks the event loop!

# GOOD — push the work to a process pool
import asyncio
import concurrent.futures

async def process_image(image):
    loop = asyncio.get_running_loop()  # not get_event_loop(), which is deprecated inside coroutines
    with concurrent.futures.ProcessPoolExecutor() as pool:  # in production, create once and reuse
        return await loop.run_in_executor(pool, heavy_cpu_processing, image)
```
### Mistake 2: Awaiting one thing at a time

```python
# BAD — sequential async (defeats the purpose)
async def fetch_all():
    a = await fetch_user(1)
    b = await fetch_user(2)
    c = await fetch_user(3)
    return [a, b, c]

# GOOD — concurrent async
async def fetch_all():
    return await asyncio.gather(
        fetch_user(1),
        fetch_user(2),
        fetch_user(3),
    )
```
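One detail worth knowing about `asyncio.gather`: by default, the first exception propagates out of the `await` and you lose the successful results. Passing `return_exceptions=True` gives you every outcome, errors included. A small sketch with made-up coroutines:

```python
import asyncio

async def ok(n):
    await asyncio.sleep(0.01)
    return n

async def boom():
    await asyncio.sleep(0.01)
    raise ValueError("upstream failed")

async def main():
    # return_exceptions=True: failures come back as values instead of raising
    return await asyncio.gather(ok(1), boom(), ok(3), return_exceptions=True)

results = asyncio.run(main())
print(results)  # [1, ValueError('upstream failed'), 3]
```

That's usually what you want in a scraper, where one dead URL shouldn't throw away 99 good responses.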
### Mistake 3: Blocking the event loop

```python
import time
import asyncio

# BAD — time.sleep blocks everything
async def handler():
    time.sleep(5)  # ENTIRE server freezes for 5 seconds

# GOOD
async def handler():
    await asyncio.sleep(5)  # Only this coroutine waits
```
## Real-World Performance
Here's what I measured in a production scraping pipeline:
| Approach | 100 URLs | 1000 URLs | 10000 URLs |
|---|---|---|---|
| Sync | 104s | 1040s | 10400s |
| Async (10 concurrent) | 12s | 108s | 1050s |
| Async (50 concurrent) | 3s | 25s | 220s |
| Async (100 concurrent) | 2s | 14s | 110s |
Diminishing returns after ~50 concurrent connections (server-side rate limiting kicks in).
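Those concurrency caps come from bounding the fetches rather than launching everything at once. A common pattern is an `asyncio.Semaphore` around each request; in my sketch below, `asyncio.sleep` stands in for the actual HTTP call, and the counters just verify that the cap holds:

```python
import asyncio

async def run(limit=10, n_urls=50):
    sem = asyncio.Semaphore(limit)
    state = {"in_flight": 0, "peak": 0}

    async def bounded_fetch(url):
        async with sem:  # at most `limit` coroutines pass this point at once
            state["in_flight"] += 1
            state["peak"] = max(state["peak"], state["in_flight"])
            await asyncio.sleep(0.01)  # stand-in for client.get(url)
            state["in_flight"] -= 1
        return url

    urls = [f"https://example.com/{i}" for i in range(n_urls)]  # placeholder URLs
    await asyncio.gather(*(bounded_fetch(u) for u in urls))
    return state["peak"]

peak = asyncio.run(run())
print(f"peak concurrency: {peak}")  # never exceeds the limit of 10
```

Tune the limit to whatever the target server tolerates; past ~50 in my table, the server's rate limiting erased the gains anyway.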
## My Stack for I/O-Heavy Python
- httpx over aiohttp (cleaner API, sync+async in one library)
- FastAPI over Flask (async by default)
- asyncpg over psycopg2 (3-5x faster for Postgres)
- aiofiles for async file operations
More Python tools: Python HTTP Benchmark | Python Security Tools
What's your experience with async Python? Love it or hate it? I've seen strong opinions on both sides. 👇
Python & API articles at dev.to/0012303