If you are not following the article series, here is a quick recap of the last article about asyncio. Then we will discuss waiting utilities and behaviour in asyncio.
A summary of Asyncio Terms: Event Loop, Coroutine, Task, Future
Term | What Is It? | What Does It Do? | How It Relates to Others |
---|---|---|---|
Event Loop | The main scheduler for async operations | Decides when each async job runs | Runs, manages, and switches between tasks |
Coroutine | A special async function (defined with `async def`) | Specifies the steps of an async task | Needs the event loop (as a Task/Future) to run |
Task | A wrapper over a coroutine | Schedules and tracks a coroutine's execution | Created from a coroutine, managed by the event loop |
Future | A placeholder for a result (not just coroutines) | Represents something that will finish in the future | Awaited by Tasks/Coroutines, signals completion |
Typical flow:
- A coroutine is defined (using `async def`).
- It is scheduled on the event loop (often by turning it into a Task).
- The Task is a kind of Future; it can be awaited for the result.
- The event loop manages when the Task (and thus the coroutine) runs.
Some other important terms of asyncio:

- Awaitable: Anything that can be awaited using `await` (coroutines, Tasks, Futures).
- Async/await: Python keywords that mark or drive async code (not new objects, but critical for usage).
- Callback: A function scheduled to run when a Future/Task completes.
Waiting Utilities and Behaviour in asyncio
Asyncio provides several powerful functions and synchronisation primitives to control when and how coroutines produce results, handle exceptions, or wait for coordination between tasks.
Core Gathering and Waiting Functions
- `asyncio.gather(*aws, return_exceptions=False)`: It runs multiple awaitables concurrently, returning their results in the same order as the input. If `return_exceptions` is set to `True`, exceptions are captured as results rather than being propagated.
- `asyncio.wait(aws, return_when)`: It allows waiting for a group of awaitables to complete using different policies (`FIRST_COMPLETED`, `FIRST_EXCEPTION`, or `ALL_COMPLETED`), returning two sets: `done` and `pending`. One thing to note here is that the order is not preserved.
- `asyncio.as_completed(aws)`: It provides an iterator that yields each awaitable as it finishes, useful for streaming results or handling the fastest-completing tasks first.
- `asyncio.shield(aw)`: It protects an awaitable from outer cancellation, but the awaitable can still be cancelled directly (see the sketch after this list).
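To make `asyncio.wait` and `asyncio.shield` concrete, here is a minimal, illustrative sketch (the `job` coroutine, its names, and the delays are invented for the example):

```python
import asyncio

async def job(name, delay):
    await asyncio.sleep(delay)
    return name

async def main():
    tasks = [asyncio.create_task(job(f"job-{i}", i * 0.1)) for i in range(1, 4)]

    # Wait only until the first task finishes; the others stay pending.
    # Note that `done` and `pending` are sets, so order is not preserved.
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    print("done:", [t.result() for t in done], "| pending:", len(pending))

    # Shield one pending task: cancelling the shield wrapper
    # does not cancel the underlying task.
    inner = next(iter(pending))
    outer = asyncio.shield(inner)
    outer.cancel()
    print("wrapper cancelled:", outer.cancelled(), "| task cancelled:", inner.cancelled())

    # The shielded task still runs to completion with the rest.
    print("remaining:", await asyncio.gather(*pending))

asyncio.run(main())
```

In practice, the shield is usually combined with a timeout around it, so a slow caller can give up waiting without killing the shielded work.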
Other Useful Coordination and Integration Tools
- `asyncio.wait_for(aw, timeout)`: It waits for an awaitable to complete within the provided time limit, raising `TimeoutError` if it runs too long. It is useful for enforcing timeouts.
- `asyncio.timeout()` (context manager in Python 3.11+): Similar to `wait_for`, it applies a timeout to any block of code containing awaits.
- `asyncio.Event`: A simple signalling primitive; it blocks coroutines until the event is set (useful for pausing until a condition occurs). See the combined sketch after this list.
- `asyncio.Condition`: A more advanced signalling primitive; it allows coroutines to wait until the condition is notified. It is useful for stateful coordination (like producers/consumers).
- `asyncio.Semaphore`/`BoundedSemaphore`: They limit concurrency (for example, capping concurrent downloads or database connections).
- `asyncio.Lock`: It ensures only one coroutine enters a critical section at a time.
- `asyncio.Queue`: It passes results, tasks, or data between coroutines safely and efficiently.
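Here is a minimal sketch combining `wait_for`, `Event`, and `Semaphore` (the `limited_fetch`/`waiter` coroutines and the limit of 2 are arbitrary choices for illustration):

```python
import asyncio

async def limited_fetch(sem, i):
    # At most 2 of these run concurrently because of the semaphore
    async with sem:
        await asyncio.sleep(0.1)  # stand-in for real I/O
        return i

async def waiter(event):
    await event.wait()            # blocks until the event is set
    return "event received"

async def main():
    sem = asyncio.Semaphore(2)
    event = asyncio.Event()

    # Start a coroutine that waits on the event, then set the event shortly after
    w = asyncio.create_task(waiter(event))
    asyncio.get_running_loop().call_later(0.05, event.set)

    # Enforce an overall timeout on the whole group with wait_for
    results = await asyncio.wait_for(
        asyncio.gather(w, *(limited_fetch(sem, i) for i in range(5))),
        timeout=2,
    )
    print(results)  # ['event received', 0, 1, 2, 3, 4]

asyncio.run(main())
```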
Bridging to Synchronous Code
- `asyncio.to_thread(function, *args)`: It runs a synchronous blocking function in a thread so async code can remain responsive.
- `asyncio.run_coroutine_threadsafe(coroutine, loop)` (for multi-threaded programs): Use this to schedule a coroutine on the event loop from outside the loop's thread (see the sketch below).
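A minimal sketch of `run_coroutine_threadsafe`, assuming a plain worker thread that wants a result from the event loop (the `add_async` coroutine and the delays are made up for the example):

```python
import asyncio
import threading

async def add_async(a, b):
    await asyncio.sleep(0.1)
    return a + b

def worker(loop):
    # Runs in a plain thread: schedule a coroutine onto the event loop's thread
    future = asyncio.run_coroutine_threadsafe(add_async(2, 3), loop)
    # This is a concurrent.futures.Future, so .result() blocks only this thread
    print("thread got:", future.result())

async def main():
    loop = asyncio.get_running_loop()
    t = threading.Thread(target=worker, args=(loop,))
    t.start()
    # Keep the loop running while the thread's coroutine executes
    await asyncio.sleep(0.5)
    t.join()

asyncio.run(main())
```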
Note: `aws` above refers to awaitables (and `aw` to a single awaitable).
Example:
```python
import asyncio
import random

async def work(i):
    await asyncio.sleep(random.random())
    return i

async def main():
    tasks = [asyncio.create_task(work(i)) for i in range(5)]
    print("gather:", await asyncio.gather(*tasks))  # results in input order

    tasks = [asyncio.create_task(work(i)) for i in range(5)]
    for coroutine in asyncio.as_completed(tasks):
        print("one done:", await coroutine)  # yields results as they finish

asyncio.run(main())
```
Output:
```
gather: [0, 1, 2, 3, 4]
one done: 1
one done: 4
one done: 2
one done: 3
one done: 0
```
Summary table
Function | Behavior |
---|---|
`asyncio.gather` | Run a group, ordered results, combined exceptions |
`asyncio.wait` | Wait for completion, get `done` & `pending`, flexible policies |
`asyncio.as_completed` | Yield tasks/results as they finish |
`asyncio.shield` | Prevent outer cancellation |
`asyncio.wait_for` | Wait with a timeout |
`asyncio.Event` & `Condition` | Manual signalling between coroutines |
`asyncio.Semaphore`, `Lock`, `Queue` | Control concurrency, critical sections, safe communication |
`asyncio.to_thread` | Run blocking code in a thread |
`asyncio.run_coroutine_threadsafe` | Submit a coroutine to a loop across threads |
Complete Example Demonstrating Loop, Tasks, and Futures
```python
import asyncio
from threading import Thread
import time

async def coroutine(name, delay):
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    loop = asyncio.get_running_loop()

    # Create and await a Task
    t = asyncio.create_task(coroutine("T1", 1))

    # Create a Future and set its result using call_later
    future = loop.create_future()
    loop.call_later(0.5, future.set_result, "future-result")

    # Start a Task that awaits the future
    async def waiter():
        result = await future
        return "waiter got " + result

    w = asyncio.create_task(waiter())

    # Demonstrate add_done_callback
    def done_cb(task):
        try:
            print("done_cb:", task.result())
        except Exception as exc:
            print("done_cb exception:", exc)

    t.add_done_callback(done_cb)
    w.add_done_callback(done_cb)

    # Start a thread that will set a future via call_soon_threadsafe
    future2 = loop.create_future()

    def thread_work():
        time.sleep(0.2)
        loop.call_soon_threadsafe(future2.set_result, "from-thread")

    Thread(target=thread_work).start()

    # Await everything
    results = await asyncio.gather(t, w, future2)
    print("gathered:", results)

asyncio.run(main())
```
Output:
```
done_cb: waiter got future-result
done_cb: T1 done
gathered: ['T1 done', 'waiter got future-result', 'from-thread']
```
Cancellation and Cleanup in asyncio
Task cancellation is a key mechanism in asyncio, allowing coroutines to be stopped and cleaned up gracefully.
How does cancellation work?
- Calling `task.cancel()` schedules a cancellation request.
- The next time the coroutine hits an await point, it receives an `asyncio.CancelledError` exception.
- The coroutine can catch this exception if it needs to perform cleanup (like releasing resources or saving state).
- It is important not to swallow the `CancelledError` silently. It should usually be allowed to propagate after cleanup, so higher-level code knows the task was cancelled (see the sketch after this list).
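A minimal sketch of the catch-clean-up-re-raise pattern (the `worker` coroutine and its sleep are placeholders for real work):

```python
import asyncio

async def worker():
    try:
        await asyncio.sleep(10)        # stand-in for real work
    except asyncio.CancelledError:
        print("worker: cleaning up")   # release resources, save state, etc.
        raise                          # re-raise so callers see the cancellation

async def main():
    t = asyncio.create_task(worker())
    await asyncio.sleep(0.1)
    t.cancel()
    try:
        await t
    except asyncio.CancelledError:
        print("main: worker was cancelled")

asyncio.run(main())
```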
Best Practices
- Use try-finally, not try-except, to manage cleanup. This ensures cancellation propagates normally while still allowing cleanup:

  ```python
  async def coroutine():
      try:
          ...  # main body
      finally:
          ...  # cleanup code runs whether cancelled or not
  ```

- Always cancel background tasks on shutdown: Before closing the event loop, cancel all running tasks to avoid them operating on a closed loop.
- Await the cancelled task to ensure cleanup completes: Calling `cancel()` just requests cancellation; awaiting the task lets you wait until it actually finishes cleanup.
- Avoid swallowing CancelledError except for good reasons: For robust structured concurrency (like `asyncio.TaskGroup`), letting `CancelledError` propagate is important.
- Use done callbacks for post-cancellation actions: Callbacks can be attached to tasks via `add_done_callback()` to respond once a task completes.
Example:
```python
import asyncio

async def background():
    try:
        while True:
            await asyncio.sleep(1)
    finally:
        # Always runs, even on cancellation
        print("background cleanup")

async def main():
    t = asyncio.create_task(background())
    await asyncio.sleep(0.1)
    t.cancel()  # request cancellation
    try:
        await t  # wait for task to clean up
    except asyncio.CancelledError:
        print("main observed cancellation")

asyncio.run(main())
```
Output:
```
background cleanup
main observed cancellation
```
Execution Flow:
Step | Coroutine | Description |
---|---|---|
1 | main | Started by `asyncio.run()` |
2 | main | Schedules `background` via `create_task()` |
3 | main | Hits `await asyncio.sleep(0.1)` and pauses |
4 | background | Runs until `await asyncio.sleep(1)` and suspends |
5 | main | Sleep of 0.1 seconds finishes |
6 | main | Calls `t.cancel()` to request cancellation |
7 | background | Resumes, `CancelledError` raised at the sleep await |
8 | background | Runs `finally` cleanup and exits |
9 | main | Awaits the cancelled task, catches the cancellation error |
Running Blocking Code in asyncio with run_in_executor
The event loop runs on a single thread and must keep cycling through tasks that are ready to run. If a blocking function is run directly (for example, `time.sleep()` or a synchronous HTTP request), it stalls the entire loop, delaying or freezing other async operations.
Solution: run_in_executor
The safest way to run blocking code is to hand it off to a separate thread or process using an executor, so the event loop thread remains free.
- `loop.run_in_executor(executor, function, *args)`: It submits a blocking call `function(*args)` to run in an executor (a thread or process pool).
- The `executor` argument can be:
  - `None` (default): uses the default thread pool executor managed by asyncio.
  - a custom `ThreadPoolExecutor` or `ProcessPoolExecutor` (which you need to create and manage).
- This call returns an awaitable. Awaiting it suspends the coroutine until the blocking function completes, allowing the program to run other async tasks smoothly in the meantime.
Example:
```python
import asyncio
import time

def blocking(n):
    time.sleep(n)  # Blocking sleep
    return n

async def main():
    loop = asyncio.get_running_loop()
    # Run blocking(n=1) without blocking the event loop
    result = await loop.run_in_executor(None, blocking, 1)
    print("result", result)

asyncio.run(main())
# Output: result 1
```
When to use `run_in_executor`?
- Calling legacy synchronous functions inside an async app.
- Running CPU-bound tasks in a separate process pool (to bypass Python's GIL).
- Executing blocking I/O such as file system operations or database drivers that provide no async API.
Note: For CPU-intensive tasks, a `ProcessPoolExecutor` should be used so other threads are not held back by Python's GIL (a sketch follows the example below). Starting with Python 3.9, `asyncio.to_thread(function, *args)` simplifies the common thread-pool case by internally calling `run_in_executor` with the default thread pool.
Example:
```python
import asyncio
import time

# Can be a CPU-intensive task
def blocking(n):
    time.sleep(n)  # Blocking sleep
    return n

async def main():
    result = await asyncio.to_thread(blocking, 1)
    print("result", result)

asyncio.run(main())
# Output: result 1
```
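For the CPU-bound case, a minimal sketch using a `ProcessPoolExecutor` with `run_in_executor` might look like the following (the recursive `fib` function is just an invented stand-in for CPU-heavy work):

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

# Hypothetical CPU-heavy function (stand-in for real computation)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        # Each call runs in a separate process, so the GIL is not a bottleneck
        results = await asyncio.gather(
            loop.run_in_executor(pool, fib, 30),
            loop.run_in_executor(pool, fib, 31),
        )
    print(results)

if __name__ == "__main__":  # required for process pools on spawn-based platforms
    asyncio.run(main())
```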
Cooperative Multitasking vs Threads/Processes
Cooperative Multitasking
In cooperative multitasking, each task decides when to give up control to allow another task to run. The operating system does not force a switch—tasks must "cooperate" and yield the CPU voluntarily.
For example, Python's `asyncio` is cooperative. Coroutines run until they use `await` (for example, `await asyncio.sleep()`), at which point they voluntarily pause.
Advantages
- Simpler to implement as there is no OS intervention.
- Lower system overhead.
Drawbacks
- One task can hog the CPU if it never yields (for example, an infinite loop), causing the entire program to freeze or hang (see the sketch after this list).
- Poor reliability in real-time or multi-user systems where fairness is critical.
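A small sketch of that first drawback, with invented `hog` and `ticker` coroutines: while `hog` busy-waits without ever awaiting, the event loop cannot run anything else.

```python
import asyncio
import time

async def hog():
    # Never awaits: the event loop cannot switch to any other task
    # until this coroutine returns (about 1 second of busy work here)
    end = time.monotonic() + 1
    while time.monotonic() < end:
        pass
    print("hog finished")

async def ticker():
    for i in range(3):
        print("tick", i)
        await asyncio.sleep(0.1)

async def main():
    # ticker() prints nothing until hog() finishes, even though its
    # sleeps are only 0.1 s: cooperative multitasking relies on awaits
    await asyncio.gather(hog(), ticker())

asyncio.run(main())
```

Inserting `await asyncio.sleep(0)` inside the busy loop would hand control back to the event loop on each iteration and let the ticker interleave.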
Thread-Based and Process-Based Multitasking (Preemptive)
Thread-Based Multitasking
- Multiple threads share the same process (memory space), but each thread can be scheduled independently by the OS.
- The operating system forcibly switches between threads — this is preemptive multitasking.
- Good for parallelising tasks inside one application (for example, handling user input while waiting for a network response).
Process-Based Multitasking
- Each process is fully separate, with its own memory and resources.
- Context switches are heavier than threads but provide strong fault isolation.
- The OS also manages these context switches preemptively — no process can hog the CPU.
Advantages
- Preemptive (threads/processes): OS ensures fair CPU distribution. Even if tasks are not "nice", the CPU will be reclaimed for others.
- Threads: Lightweight, fast communication (shared memory).
- Processes: Isolation; faults in one do not crash others.
Drawbacks
- Threads: No isolation. Bugs can crash the whole app.
- Processes: Higher overhead, slower IPC.
- Can be complex to coordinate safely (race conditions, deadlocks).
Thanks for reading!
We have covered a lot in this article. In the upcoming articles, we will go even deeper into asyncio, exploring:
- Running tasks concurrently with `asyncio.gather`/`create_task`.
- Async context managers (`async with`) and async iterators (`async for`).
- The difference between blocking and non-blocking calls.
- Changes introduced in Python 3.11 and above.
- Awaiting a coroutine vs `asyncio.run()`.
- Integration with external libraries (`aiohttp` for async HTTP, `aiomysql`, etc.).
Stay tuned to unlock the full power of Python’s asynchronous programming!