You have written async def and await. You know the surface. At some point something deadlocked, or performed worse than expected, or behaved in a way that did not fit your mental model. This article is about building the mental model that makes those situations predictable rather than surprising.
Coroutines Are Not Threads and Not Callbacks
The two common async models before asyncio were threads and callbacks. Threads give you apparent concurrency with shared state and all the locking that entails. Callbacks give you non-blocking IO with inverted control flow that becomes unreadable at scale.
Coroutines are a third model. A coroutine is a function that can suspend its execution at specific points and resume from exactly that point later. The suspension is cooperative: the coroutine decides when to yield control, not a scheduler. The interpreter does not preempt it.
This is the first thing to internalize. Async Python is not parallel. It is concurrent in the sense that multiple coroutines can be in progress at the same time, but only one runs at any given moment. If a coroutine never yields, nothing else runs.
What async def Actually Produces
An async def function does not execute its body when called. It returns a coroutine object.
async def fetch(url):
    print(f"fetching {url}")
    return "data"

coro = fetch("https://example.com")
print(type(coro))  # <class 'coroutine'>
print(coro)        # <coroutine object fetch at 0x...>
# nothing printed yet, the body has not run
The coroutine object implements the coroutine protocol: it has send(), throw(), and close() methods. The event loop drives it by calling send(None) repeatedly until it raises StopIteration, at which point the return value is carried in the exception's value attribute.
You can drive a simple coroutine manually to see this:
async def simple():
    return 42

coro = simple()
try:
    coro.send(None)
except StopIteration as e:
    print(e.value)  # 42
This is exactly what the event loop does, wrapped in scheduling logic. The await keyword and the event loop are two sides of the same mechanism.
What await Actually Does
await can only appear inside async def. It takes an awaitable object (a coroutine, a Task, a Future, or any object implementing __await__) and suspends the current coroutine until the awaitable completes.
At the bytecode level, await expr compiles to GET_AWAITABLE followed by YIELD_FROM on CPython 3.10 and earlier; CPython 3.11+ replaces YIELD_FROM with an explicit SEND loop, but the semantics are the same. The loop repeatedly sends values into the inner awaitable and yields values out to the outer caller until the inner awaitable raises StopIteration. The value of the StopIteration exception becomes the result of the await expression.
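You can inspect this yourself with the dis module (the exact opcode names depend on your CPython version, as noted above):

```python
import dis

async def demo(x):
    return await x

# Disassemble the coroutine function. On CPython 3.10 and earlier you
# will see GET_AWAITABLE followed by YIELD_FROM; on 3.11+ GET_AWAITABLE
# is followed by a SEND loop instead.
dis.dis(demo)
```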
The chain goes all the way down. When you await a coroutine that awaits a Future, the YIELD_FROM instructions chain together. The innermost Future is what actually suspends execution by yielding a non-StopIteration value up through the chain to the event loop. The event loop receives that value (typically the Future itself), registers a callback to resume the coroutine when the Future completes, and moves on to other work.
import asyncio

async def inner():
    await asyncio.sleep(1)  # eventually yields a Future to the event loop
    return "done"

async def outer():
    result = await inner()  # chains the yield through
    print(result)
The event loop never sees inner or outer directly during the sleep. It sees a Future. When the timer fires, it resolves the Future, which triggers the callback that resumes inner, which completes and triggers the callback that resumes outer.
The Event Loop Is a Scheduling Loop
The event loop is simpler than most people expect. At its core it is a loop that does three things: run callbacks that are ready, poll for completed IO, and schedule callbacks for that completed IO.
A simplified version of CPython's event loop logic looks like this:
while True:
    # Run all callbacks that are ready right now
    while self._ready:
        handle = self._ready.popleft()
        handle._run()

    # Poll IO with a timeout
    # (uses select/epoll/kqueue depending on platform)
    timeout = self._compute_timeout()
    event_list = self._selector.select(timeout)

    # Schedule callbacks for completed IO events
    self._process_events(event_list)

    # Run scheduled callbacks whose time has come
    self._run_once_scheduled()
asyncio.sleep(1) works by creating a Future, scheduling a callback to resolve it after one second via call_later, and returning the Future for the coroutine to await. When the event loop's timer fires, it resolves the Future, which schedules the coroutine's resumption as a ready callback. On the next iteration of the loop, the coroutine resumes.
No thread is involved. The OS timer is polled during selector.select(). The event loop handles the rest.
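Under those mechanics, a minimal reimplementation (call it my_sleep, a name invented here for illustration) is only a few lines:

```python
import asyncio

async def my_sleep(delay):
    # a sketch of what asyncio.sleep does internally: create a Future,
    # ask the loop to resolve it after `delay` seconds, then await it
    loop = asyncio.get_running_loop()
    future = loop.create_future()
    handle = loop.call_later(delay, future.set_result, None)
    try:
        await future
    finally:
        handle.cancel()  # harmless if the timer already fired

async def main():
    await my_sleep(0.1)
    print("resumed after the timer fired")

asyncio.run(main())
```

The try/finally is the one refinement over the bare description: if the awaiting task is cancelled, the timer callback is cleaned up rather than left to fire into a dead Future.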
Tasks vs Futures vs Coroutines
These three are related but distinct, and conflating them causes confusion.
A coroutine is a generator-like object produced by an async def call. It does nothing on its own. It must be driven by something calling send() on it.
A Future is a low-level object representing a value that will exist at some point. It has states: pending, cancelled, done. When done, it holds either a result or an exception. You can attach callbacks that fire when it completes. Most application code never creates Future objects directly.
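A small sketch of the Future lifecycle, using the loop's own factory method (application code rarely needs this directly, as noted):

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    future = loop.create_future()
    print(future.done())  # False: still pending

    # callbacks fire once the future resolves
    future.add_done_callback(lambda f: print("callback saw", f.result()))

    # resolve it on the next loop iteration
    loop.call_soon(future.set_result, 42)

    # suspends here until set_result runs, then resumes with the value
    print(await future)

asyncio.run(main())
```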
A Task is a subclass of Future that wraps a coroutine and drives it. When you call asyncio.create_task(coro), you get a Task that is immediately scheduled on the event loop. The Task calls coro.send(None) to start the coroutine, catches StopIteration when it completes, and resolves itself with the result.
import asyncio

async def work():
    await asyncio.sleep(0.1)
    return 42

async def main():
    # coroutine: not scheduled yet
    coro = work()

    # Task: immediately scheduled, starts running on the next event loop iteration
    task = asyncio.create_task(work())

    # awaiting the coroutine directly: runs inline, no separate task
    result = await coro

    # awaiting the task: waits for the already-running task
    result = await task
The difference between await coro and await asyncio.create_task(coro) is whether the coroutine runs as a separate scheduled unit or inline in the current coroutine's execution. For concurrency, you need tasks. Two await statements in sequence run sequentially. Two tasks run concurrently.
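A quick timing check makes the difference concrete (sleeps stand in for IO; the timings are approximate):

```python
import asyncio
import time

async def work():
    await asyncio.sleep(0.1)

async def main():
    # two sequential awaits: the second does not start until the first finishes
    start = time.monotonic()
    await work()
    await work()
    print(f"sequential: {time.monotonic() - start:.2f}s")  # ~0.20s

    # two tasks: both sleeps overlap on the event loop
    start = time.monotonic()
    t1 = asyncio.create_task(work())
    t2 = asyncio.create_task(work())
    await t1
    await t2
    print(f"concurrent: {time.monotonic() - start:.2f}s")  # ~0.10s

asyncio.run(main())
```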
Blocking the Event Loop Is the Canonical Mistake
Because async Python is single-threaded cooperative concurrency, any code that does not yield blocks everything else. There is no preemption.
import asyncio
import time

async def bad():
    time.sleep(2)  # blocks the OS thread for 2 seconds
    return "done"

async def also_running():
    await asyncio.sleep(0)
    print("I run eventually")

async def main():
    await asyncio.gather(bad(), also_running())
    # also_running does not run until bad() returns
    # because time.sleep never yields to the event loop
time.sleep is a syscall that parks the OS thread. The event loop cannot run while the thread is parked. asyncio.sleep schedules a timer and yields a Future to the event loop, allowing other tasks to run during the wait.
The same problem applies to CPU-heavy code. A tight loop does not yield. A large file read through blocking IO does not yield. Any call to a synchronous library that does IO internally does not yield.
The fix is asyncio.to_thread (Python 3.9+) or loop.run_in_executor, which run blocking code in a thread pool. The coroutine awaits the thread pool result, yielding to the event loop while the thread runs:
import asyncio

async def process_file(path):
    def read_sync():
        # both the open and the read are blocking calls,
        # so both belong in the worker thread
        with open(path) as f:
            return f.read()

    # runs in a thread pool, event loop remains free
    return await asyncio.to_thread(read_sync)
await and Custom Awaitables
Any object can be awaitable by implementing __await__, which must return an iterator. This is the extension point for writing your own low-level async primitives.
import asyncio
import time

class SleepUntilNextSecond:
    def __await__(self):
        loop = asyncio.get_running_loop()
        future = loop.create_future()
        delay = 1.0 - (time.time() % 1.0)
        loop.call_later(delay, future.set_result, None)
        yield from future.__await__()
        return "woke up"

async def main():
    result = await SleepUntilNextSecond()
    print(result)  # "woke up"
yield from future.__await__() chains into the Future's awaitable protocol, which ultimately yields the Future object up to the event loop. This is the same mechanism asyncio.sleep uses internally. Writing custom awaitables is how libraries like aiohttp and asyncpg integrate with the event loop at a level below async def.
Cancellation Is Cooperative
task.cancel() does not kill a task. It schedules a CancelledError to be thrown into the coroutine at its next await point. The coroutine receives it, and if it does not catch and suppress it, the task is cancelled.
import asyncio

async def careful_work():
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError:
        # do cleanup here
        raise  # re-raise to actually cancel

async def main():
    task = asyncio.create_task(careful_work())
    await asyncio.sleep(0.1)
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        print("task was cancelled")
The raise after cleanup is important. A coroutine that catches CancelledError and does not re-raise it has suppressed the cancellation. The task will appear to complete normally rather than being cancelled. This breaks asyncio.wait_for timeouts and structured concurrency patterns that depend on cancellation propagating correctly.
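wait_for relies on exactly this contract: on timeout it cancels the inner work and expects the CancelledError to propagate back out. A minimal sketch of the interaction:

```python
import asyncio

async def slow():
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError:
        print("cleaning up")
        raise  # suppressing this would break wait_for's timeout

async def main():
    try:
        # cancels slow() after 0.1s and raises TimeoutError
        await asyncio.wait_for(slow(), timeout=0.1)
    except asyncio.TimeoutError:
        print("timed out")

asyncio.run(main())
```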
The Mental Model
Async Python is a single-threaded scheduler running coroutines that explicitly yield at await points. The event loop runs callbacks in order. IO completion and timers register callbacks. Coroutines are suspended at await and resumed when their awaited object resolves.
Everything that works correctly in async Python works because coroutines yield frequently. Everything that breaks does so because something held the thread without yielding. The event loop cannot fix that. It can only schedule what yields to it.
Further Reading
- asyncio documentation - the official docs are thorough; the "Developing with asyncio" section is particularly useful
- David Beazley: Python Concurrency from the Ground Up (PyCon 2015) - builds an event loop from scratch in 45 minutes, the best way to internalize the model
- Brett Cannon: How the asyncio event loop works - covers the bytecode and protocol in detail
- CPython source: Lib/asyncio/base_events.py - the actual event loop implementation, more readable than you expect