DEV Community

BAOFUFAN

The asyncio Mistake That Cost Me 3 Hours

Last week I was building an internal monitoring tool that needed to fetch pages from over 200 websites concurrently. Our old synchronous script took more than 40 minutes per run, and the boss asked, “Can you make it faster?” My immediate thought: this is a classic IO-bound task – I’ll just throw asyncio at it. Easy.

I wrote the code, ran it, and… the total time didn’t drop. In fact, it was over ten seconds slower than the synchronous version. I then spent three solid hours staring at the screen debugging. The root cause? I had crammed my synchronous mindset straight into an asynchronous framework. This pitfall is worth writing down.

Why Was It Slower? The Code That Drove Me Crazy

My first “async” attempt looked roughly like this:

import asyncio
import requests
import time

async def fetch(url: str):
    # Naively calling requests.get inside a coroutine
    resp = requests.get(url, timeout=10)
    return resp.text[:100]

async def main():
    urls = ["https://httpbin.org/delay/1"] * 20
    start = time.time()
    # Run "concurrently" with asyncio.gather
    results = await asyncio.gather(*[fetch(url) for url in urls])
    print(f"Elapsed: {time.time() - start:.2f}s, results: {len(results)}")

asyncio.run(main())

On the surface, it spawns 20 coroutines. But the actual runtime is nearly identical to serial execution. The reason: requests is synchronous and blocking. When it waits for network IO, it freezes the entire thread – and asyncio’s event loop runs in that same thread. The moment one coroutine calls requests.get(), the event loop is blocked solid. The other 19 coroutines are stuck waiting in line. What I thought was “concurrency” was just a sequential queue with coroutine labels.
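Side note: if you are genuinely stuck with a synchronous library, there is an escape hatch short of rewriting everything – push the blocking call onto a thread pool with asyncio.to_thread (Python 3.9+). Here is a minimal sketch; the blocking_fetch function simulates a blocking call with time.sleep so it runs without any third-party dependency:

```python
import asyncio
import time

def blocking_fetch(url: str) -> str:
    # Stand-in for a blocking call like requests.get
    time.sleep(1)
    return f"body of {url}"

async def main() -> float:
    urls = [f"https://example.com/{i}" for i in range(5)]
    start = time.time()
    # asyncio.to_thread runs each blocking call in the default thread
    # pool, so the event loop itself is never frozen.
    results = await asyncio.gather(
        *(asyncio.to_thread(blocking_fetch, u) for u in urls)
    )
    elapsed = time.time() - start
    print(f"{len(results)} results in {elapsed:.2f}s")  # ~1s, not ~5s
    return elapsed

elapsed = asyncio.run(main())
```

Threads have more overhead than native coroutines, so treat this as a bridge, not a destination.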

The Right Way: Give Blocking Back to Blocking, Async to Async

Asyncio’s core is an event loop + cooperative scheduling. A coroutine yields control when it hits await, allowing the event loop to suspend IO-waiting tasks and switch to other ready coroutines. But this only works if every IO operation you use is natively asynchronous – meaning it returns an awaitable. A single synchronous blocking call pollutes the entire loop.
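A toy example makes the cooperative handoff visible. Using asyncio.sleep (the awaitable cousin of time.sleep), each coroutine yields at its await and the loop switches to the other:

```python
import asyncio

order = []

async def worker(name: str, delay: float):
    order.append(f"{name} start")
    await asyncio.sleep(delay)  # Yields control to the event loop here
    order.append(f"{name} done")

async def main():
    # Both coroutines start before either finishes: each await hands
    # control back to the loop, which resumes the other task.
    await asyncio.gather(worker("A", 0.2), worker("B", 0.1))

asyncio.run(main())
print(order)  # ['A start', 'B start', 'B done', 'A done']
```

Swap asyncio.sleep for time.sleep and the order becomes strictly serial – exactly the failure mode from my first attempt.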

Here is the correct version, using aiohttp for async HTTP:

import asyncio
import aiohttp
import time

async def fetch(session: aiohttp.ClientSession, url: str):
    try:
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
            return await resp.text()
    except Exception as e:
        return f"ERROR: {e}"

async def main():
    urls = ["https://httpbin.org/delay/1"] * 20
    start = time.time()
    # Create one shared session so the connection pool is reused, cutting overhead
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, url) for url in urls]
        results = await asyncio.gather(*tasks)
    print(f"Elapsed: {time.time() - start:.2f}s, results: {len(results)}")

asyncio.run(main())

The changes are minimal, but each one hits a critical point:

  1. Replace requests with aiohttp. session.get() returns an awaitable; while the response is in flight, await hands control back to the event loop instead of freezing it.
  2. Share a single ClientSession. In production, never create a new session per request – TCP connections won’t be reused, and both latency and resource consumption will spike. async with manages the lifecycle automatically.
  3. Hand all tasks to asyncio.gather. The total elapsed time is roughly the slowest request, not the sum of all requests.

A quick test: 20 targets, each hitting httpbin’s /delay/1 endpoint (a forced 1-second delay). The correct version finishes in just over 1 second, while the broken blocking version takes over 20 seconds. With these adjustments, performance jumped by an order of magnitude.
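You can verify the "slowest request, not the sum" property without touching the network at all – a sketch where asyncio.sleep stands in for awaiting a response:

```python
import asyncio
import time

async def simulated_request(delay: float) -> float:
    await asyncio.sleep(delay)  # Stands in for awaiting network IO
    return delay

async def main() -> float:
    start = time.time()
    # 20 "requests" of 1 second each, running concurrently.
    results = await asyncio.gather(
        *(simulated_request(1.0) for _ in range(20))
    )
    elapsed = time.time() - start
    print(f"{len(results)} results in {elapsed:.2f}s")  # ~1s, not ~20s
    return elapsed

elapsed = asyncio.run(main())
```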

Add a Semaphore – Don’t Let Good Intentions Backfire

When you scale to 200 or 2000 URLs, unbounded concurrency causes two problems: you might overwhelm the target servers, and you can exhaust the local file descriptors. The best practice is to introduce an asyncio.Semaphore to cap concurrency – keeping both speed and stability:

import asyncio
import aiohttp

async def fetch(session, url, sem, max_retries=2):
    async with sem:  # Allow at most N coroutines past this point at a time
        for attempt in range(max_retries + 1):
            try:
                async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
                    resp.raise_for_status()
                    return await resp.text()
            except Exception as e:
                if attempt == max_retries:
                    return f"FAILED({url}): {e}"
                await asyncio.sleep(2 ** attempt)  # Exponential backoff
    return None

async def main():
    urls = ["https://httpbin.org/delay/1"] * 200
    sem = asyncio.Semaphore(30)   # At most 30 requests in flight at once
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, url, sem) for url in urls]
        results = await asyncio.gather(*tasks)
    failed = sum(1 for r in results if isinstance(r, str) and r.startswith("FAILED"))
    print(f"Done: {len(results)} results, {failed} failures")

asyncio.run(main())
