In the last article, we finally figured out(?) what a coroutine object does in the context of asynchronous programming. However, we have only looked at the individual trees rather than the forest. Now we would like to draw a bigger picture of how a single asynchronous function call is handled, by looking into the asyncio library.
Remark: why not curio or trio?
Both are quite famous libraries, but there are a few reasons why I chose asyncio instead for this article:
- It is part of the standard library: i.e. it is well maintained and expected to be well structured.
- It uses concepts such as event loop, future, and task together with coroutines. Since many other languages and frameworks nowadays use similar (if not identical) concepts, I thought it would be more beneficial to understand these concepts first, so that we can also feel at home with their counterparts in other languages.
So, let's dig into the asynchronous APIs provided by asyncio in Python. The main questions we want to answer are:
- What are these mysterious pieces of jargon: event loop, future, and task?
- How do these concepts work in harmony with the coroutine concept we have seen in previous articles?
But please note that I will only look into the asyncio library, and won't cover abstract or theoretical concepts. This is not only because they are beyond our scope, but also because I strongly believe that any knowledge should start from concrete, actual things.
Remark: This article is based on asyncio in Python 3.11 and a UNIX-like OS (macOS).
1. Cracking the async jargon in detail
Ever since I began learning about web technology in general, the jargon around asynchronous concepts has never been clear enough: event loop, coroutine, task, future, etc. Everyone uses these terms, but no one really tells me what exactly they are. Wikipedia explains the concepts in quite a bit of detail, but in most cases the articles are so full of other computer science jargon that you end up having to understand multiple other terms just to grasp the original one. Oh, not for beginners at all!
So here we would like to investigate how these concepts are actually implemented within the asyncio library. Of course, other libraries (not only those in Python) have their own implementations of these concepts, but I believe the core behavior must be similar.
Event Loop
Let's start with the event loop. So what is an event loop? Yes, it is just a programming pattern. But here, we want to know how it works, at least inside asyncio.
Since Python 3.7, we run a coroutine object with asyncio.run. It is the entrance to the other asyncio APIs, reached through a series of synchronous executions of Python code. If you dig all the way down, you'll see a class called _UnixSelectorEventLoop. This is our "default" event loop object, at least on a UNIX-like OS. Then what is the selector that this class keeps as an instance member? You will find DefaultSelector, which is determined as shown below with help from another standard library module, selectors (built on top of select). Before figuring out what selectors are, look at how many kinds of selectors we have here: kqueue, epoll, devpoll, etc.
```python
# https://github.com/python/cpython/blob/2d037fb406fd8662862c5da40a23033690235f1d/Lib/asyncio/unix_events.py#L57
class _UnixSelectorEventLoop(selector_events.BaseSelectorEventLoop):
    """Unix event loop.

    Adds signal handling and UNIX Domain Socket support to SelectorEventLoop.
    """

# https://github.com/python/cpython/blob/2d037fb406fd8662862c5da40a23033690235f1d/Lib/selectors.py#L609
if _can_use('kqueue'):
    DefaultSelector = KqueueSelector
elif _can_use('epoll'):
    DefaultSelector = EpollSelector
elif _can_use('devpoll'):
    DefaultSelector = DevpollSelector
elif _can_use('poll'):
    DefaultSelector = PollSelector
else:
    DefaultSelector = SelectSelector
```
Selectors wrap I/O-multiplexing system calls. So by default, our asyncio library uses these OS-level kernel APIs to handle async I/O internally (would it be too much of an exaggeration to say that event loops in asyncio are just wrappers around these I/O system calls?). Selectors are what we expect our event loop to use: in simple words, they check whether there are any incoming events on the sockets we opened (for details, see the man page of kqueue/epoll if you're a macOS/Linux user; other good references are Beej's guide and the Linux programming bible, chapter 63). Through these polling APIs, the kernel tells us which sockets are ready to be read.
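To get a feel for what the event loop delegates to the OS, here is a minimal sketch of mine using the standard selectors module directly, with no asyncio involved (the address and port are arbitrary choices for illustration): it registers a listening socket and polls the kernel for readiness.

```python
import selectors
import socket

# DefaultSelector resolves to KqueueSelector on macOS, EpollSelector on Linux, etc.
sel = selectors.DefaultSelector()

# A non-blocking listening socket; the port is an arbitrary choice for this sketch.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 8888))
server.listen()
server.setblocking(False)

# Tell the kernel we are interested in "readable" events on this socket.
sel.register(server, selectors.EVENT_READ)

while True:
    # Block (here: at most 1 second) until the kernel reports ready file objects.
    for key, mask in sel.select(timeout=1.0):
        conn, addr = key.fileobj.accept()   # the socket is readable: accept the client
        print("connection from", addr)
        conn.close()
```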
Now that we know that an event loop in asyncio makes system calls (via selectors) under the hood, let's see where the selector is actually called. If you look at _run_once (by the way, this is the core function on which the other event-loop instance methods are built), it calls the selector's select() method and then processes the ready events. Here, processing events means cleaning up cancelled events and scheduling the callbacks registered for the ready events, which are finally executed at the end of _run_once().
```python
# https://github.com/python/cpython/blob/2d037fb406fd8662862c5da40a23033690235f1d/Lib/asyncio/base_events.py#L1845
def _run_once(self):
    # ---- code omitted ----
    event_list = self._selector.select(timeout)
    self._process_events(event_list)
    # ---- code omitted ----
    ntodo = len(self._ready)
    for i in range(ntodo):
        handle = self._ready.popleft()
        # ---- code omitted ----
        handle._run()
        # ---- code omitted ----
```
So, in a nutshell, any default event loop in asyncio uses the poll/select family of kernel APIs to check for incoming events, and then executes the registered callbacks. Our next question is: how are an event and its related callbacks registered with the currently running event loop?
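As a caricature of that "nutshell", here is a toy loop of my own (not asyncio code) that does exactly those two things per iteration, much like _run_once() does: poll the selector, then drain the ready callbacks.

```python
import collections
import selectors

class ToyEventLoop:
    """A caricature of _run_once(): poll the selector, then drain the ready callbacks."""

    def __init__(self):
        self._selector = selectors.DefaultSelector()
        self._ready = collections.deque()          # callbacks ready to run this iteration

    def call_soon(self, callback, *args):
        """Schedule a callback for the next iteration."""
        self._ready.append((callback, args))

    def add_reader(self, fileobj, callback, *args):
        """Run `callback` whenever `fileobj` becomes readable."""
        self._selector.register(fileobj, selectors.EVENT_READ, (callback, args))

    def run_once(self, timeout=0.1):
        # 1. Ask the kernel which registered file objects are ready ...
        for key, mask in self._selector.select(timeout):
            callback, args = key.data
            self._ready.append((callback, args))   # ... and queue their callbacks.
        # 2. Run everything that is ready, exactly once per iteration.
        for _ in range(len(self._ready)):
            callback, args = self._ready.popleft()
            callback(*args)
```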
Task and Future
Task: when it is created
Let's get back to our entrance, runners. You'll see that although asyncio.run accepts a coroutine as its parameter, it eventually creates a "task" object and tosses it to the event loop using the event loop's create_task function, which by default produces an instance of the Task class. This task object then registers the callback _run_until_complete_cb(), which stops the event loop, with the event loop it is attached to (note that the Task class is a subclass of the Future class). Finally, the event loop begins to run.
```python
# https://github.com/python/cpython/blob/2d037fb406fd8662862c5da40a23033690235f1d/Lib/asyncio/runners.py#L86
def run(self, coro, *, context=None):
    # ---- code omitted ----
    task = self._loop.create_task(coro, context=context)
    # ---- code omitted ----
    try:
        return self._loop.run_until_complete(task)
    # ---- code omitted ----

def run_until_complete(self, future):
    # ---- code omitted ----
    future.add_done_callback(_run_until_complete_cb)
    try:
        self.run_forever()
    # ---- code omitted ----
```
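So, roughly speaking, and ignoring the cancellation, signal handling, and cleanup that the Runner actually performs, asyncio.run(main()) boils down to something like this sketch using the public low-level loop API:

```python
import asyncio

async def main():
    await asyncio.sleep(0.1)
    return "done"

# Roughly what asyncio.run(main()) does, minus error handling and cleanup.
loop = asyncio.new_event_loop()
try:
    task = loop.create_task(main())          # wrap the coroutine in a Task
    result = loop.run_until_complete(task)   # run the loop until the task is done
    print(result)                            # -> done
finally:
    loop.close()
```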
But then what happens? The event loop runs ready-to-be-run callbacks in _run_once(), so was our task created only to end the loop it started, doing nothing in between? Here is what we've been missing: at the moment it is created, the task registers a callback called __step() with the current event loop.
Task: __step() and __wakeup()
The interface of the Task class is what we actually use to run the logic of the registered coroutine, and there are two functions that control the coroutine: __step() and __wakeup() (the leading double underscores trigger Python's name mangling, presumably to prevent subclasses from accidentally overriding or misusing these methods).
__step() is where all the magic begins, and __wakeup() is what keeps __step() going. __step() starts (or resumes) the task's coroutine using send(None). When it receives a future object from the bottom of the coroutine chain, the task waits for that low-level future to finish its job and asks to be woken up by registering the __wakeup() callback on it. __wakeup() in turn triggers __step() again, and we get a kind of zigzag between these two functions until the task is completed with a StopIteration exception.
```python
# https://github.com/python/cpython/blob/2d037fb406fd8662862c5da40a23033690235f1d/Lib/asyncio/tasks.py#L250
def __step(self, exc=None):
    # ---- code omitted ----
    try:
        # ---- code omitted ----
        result = coro.send(None)
    except StopIteration as exc:
        # ---- code omitted ----
        super().set_result(exc.value)
    else:
        # ---- code omitted ----
        result.add_done_callback(self.__wakeup, context=self._context)
        # ---- code omitted ----

def __wakeup(self, future):
    # ---- code omitted ----
    try:
        future.result()
    # ---- code omitted ----
    else:
        self.__step()
    # ---- code omitted ----
```
So what is a task, anyway? In summary, it is an object that drives its coroutine forward, piece by piece, by communicating with the current event loop. As an analogy, the coroutine is the body of a function, whereas its task is the person who keeps clicking the "next" button of a debugger so that the function gets executed all the way to the end.
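To make this zigzag concrete, here is a toy imitation of mine (only the happy path, nothing like the real Task class) in which a "task" drives a coroutine with send(None) and wakes itself up when a "future" completes:

```python
class ToyFuture:
    """A bare-bones awaitable: it is done once someone calls set_result()."""

    def __init__(self):
        self.done = False
        self.result = None
        self.callbacks = []

    def set_result(self, value):
        self.result = value
        self.done = True
        for cb in self.callbacks:   # in asyncio, these would be scheduled on the loop
            cb(self)

    def __await__(self):
        if not self.done:
            yield self              # suspend and hand ourselves to whoever drives the coroutine
        return self.result


class ToyTask:
    """A caricature of Task.__step()/__wakeup(): run up to a future, pause, resume when it's done."""

    def __init__(self, coro):
        self.coro = coro
        self.step()                 # the real Task schedules __step() via loop.call_soon()

    def step(self, _completed_future=None):
        try:
            future = self.coro.send(None)          # run up to the next `await <future>`
        except StopIteration as exc:
            print("task finished with:", exc.value)
        else:
            future.callbacks.append(self.step)     # the __wakeup() part


pending = ToyFuture()

async def work():
    value = await pending           # suspends here until `pending` gets a result
    return value * 2

ToyTask(work())                     # starts the coroutine; it pauses at `await pending`
pending.set_result(21)              # "the I/O completed" -> wakeup -> step -> prints 42
```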
Future: what is a future object?
You might have noticed that I haven't given any definition of the future object, focusing only on how a task object works with its coroutine, even though the Task class inherits from the Future class.
So what is a future object exactly? If you take a glance at the code, the most notable features are:
- The _state instance variable: the current state of the future object
- The _result and _exception instance variables: the final "result" of this future object, set via set_result() and set_exception()
- add_done_callback(), remove_done_callback(): registering callbacks that will be scheduled on the event loop once the future is done
- __await__: the magic method that must be called inside a coroutine
So a future object is simply an interface designed to interact with a particular running event loop, and a programmer can choose what to do with it, just like with a coroutine. It can be described as follows (a small concrete example comes right after the list):
- An object that represents a specific state at a given time.
- Once its state is _FINISHED, we can expect that it holds either a result or an error.
- We can wait (inside a coroutine) for it to provide that finished result or error.
- We can directly register callbacks with its event loop through it. (You might notice that these features are fairly similar to a Promise in JS; but while async functions in JS return Promises, Python async functions return coroutines, which are not futures. So in Python there are more layers between the actual execution logic, the function, and the event loop.)
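Here is a small, self-contained example of a future playing that bridging role; loop.call_later stands in for "some I/O became ready" (in real code the selector would trigger the completion).

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()          # a bare Future attached to the running loop

    # Pretend this callback fires because some I/O became ready.
    loop.call_later(0.1, fut.set_result, "hello from the future")

    fut.add_done_callback(lambda f: print("done callback sees:", f.result()))

    result = await fut                  # main() is suspended until set_result() runs
    print("awaited result:", result)

asyncio.run(main())
```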
The official documentation describes a future as an encapsulation of low-level asynchronous operations, and it is up to the programmer to decide how to use it. But why does it say "low level"?
I guess that is because a future is the object from which we obtain the actual result value we want. At the "lowest level" of a coroutine chain, the coroutine has to "await" something that is not a coroutine (of course, we could simply implement the last coroutine of the chain to return a value that has nothing to do with external networks, but what would be the point of such a coroutine?). The magic happens in the future's __await__, where our old friend yield says hello, demystifying all this async ... await ... syntactic sugar. Thus, in the end, all the async APIs are essentially there to await this future object until it provides the expected result.
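For reference, Future.__await__ in CPython is surprisingly short; slightly simplified, it looks like the snippet below (see Lib/asyncio/futures.py for the real thing).

```python
# A slightly simplified rendition of asyncio.Future.__await__ (Lib/asyncio/futures.py)
def __await__(self):
    if not self.done():
        self._asyncio_future_blocking = True
        yield self          # this is what Task.__step() receives from coro.send(None)
    if not self.done():
        raise RuntimeError("await wasn't used with future")
    return self.result()    # the value (or exception) the coroutine chain was waiting for

__iter__ = __await__        # compatibility with 'yield from'-based code
```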
A good example of a future object in use is the function sock_recv. It doesn't await another coroutine; it directly creates a future and ties it to a socket file descriptor (which is the "real" low level). The callback it registers with the event loop is directly connected to the selector APIs.
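As a usage sketch (placeholder host and port, no error handling), here is a coroutine of mine that talks to a socket purely through these low-level loop methods; each await ultimately waits on a future tied to the socket's file descriptor.

```python
import asyncio
import socket

async def fetch_head(host: str, port: int = 80) -> bytes:
    loop = asyncio.get_running_loop()
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setblocking(False)                      # required by the loop.sock_* APIs
    try:
        await loop.sock_connect(sock, (host, port))
        request = b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n"
        await loop.sock_sendall(sock, request)
        return await loop.sock_recv(sock, 4096)  # internally: a future + the selector watching sock
    finally:
        sock.close()

# Requires network access:
# print(asyncio.run(fetch_head("example.com")).decode(errors="replace"))
```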
2. A big picture
We have discussed plenty of asyncio objects that work closely together: coroutines, tasks, futures, and event loops. And yet, the structure of a possible workflow is quite complex, so here we would like to draw a big picture of how a workflow occurs in an asynchronous context.
- Everything revolves around a single event loop: running on a single thread, the event loop constantly checks for external I/O events using the select/poll APIs, and executes callbacks registered by tasks and other future objects.
- Using the event loop as its engine, a task takes care of running the logic of the asynchronous work wrapped within a single coroutine. The task registers callbacks with its event loop, and those callbacks drive the coroutine until it runs to completion.
- Thanks to the structure of coroutines, which is similar to (and ultimately built on) yield, a series of asynchronous executions can be assembled into a single unit of work, avoiding what is called "callback hell". At the end of a coroutine chain, we meet future objects that actually give us the expected result (or an exception, if we are unlucky). A tiny end-to-end example of this picture follows right after this list.
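As a closing illustration, here is a tiny program in which a single event loop interleaves two tasks; each await asyncio.sleep() suspends its task on a future that the loop completes later through its timer machinery.

```python
import asyncio
import time

async def worker(name: str, delay: float) -> str:
    print(f"{name}: started")
    await asyncio.sleep(delay)       # suspend on a future the loop completes after `delay`
    print(f"{name}: resumed after {delay}s")
    return name

async def main():
    start = time.perf_counter()
    t1 = asyncio.create_task(worker("A", 0.2))   # both tasks register their steps with the loop
    t2 = asyncio.create_task(worker("B", 0.1))
    results = await asyncio.gather(t1, t2)
    print(results, f"in {time.perf_counter() - start:.2f}s")  # ~0.2s total, not 0.3s

asyncio.run(main())
```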
Please note that we haven't discussed everything about the asyncio library. We haven't mentioned concepts like "context", "schedule", or "signal", which are certainly also important components of the library. The reason we omitted them is that we wanted to know which objects in asyncio are directly involved in executing coroutine functions, and how they work together, rather than investigating every detail of a possible workflow. However, I hope to revisit the remaining concepts and describe the whole picture in the near future.
Conclusion
Thank you for the time you have spent with me on this long, long journey (pun intended). From delving into the CPython source code of generators to exploring the asyncio library, we have covered a decent number of concepts and actual pieces of code. Let me briefly list them here:
- Generators: iterators, yield, yield from, simple coroutines
- Coroutines: native coroutines, await
- asyncio: event loop, task, future
Equipped with this hard-learned knowledge, we can now use the async ... await ... syntax with confidence and little doubt. The structures and concepts we have covered extend beyond just Python. They will provide a solid understanding of asynchronous features implemented in other programming languages and frameworks as well.