jD91mZM2

Are async frameworks really worth it?

This "article" is going to be specifically about Rust and Tokio, but it should apply to more frameworks

This is more of a question than an article. I've been working on Synac for a while, and I've needed to handle multiple connections on one thread.
For the server, I went with tokio. It allowed me to do something like this:

let server = listener.incoming().for_each(|(conn, _addr)| {
    // `handle` is the tokio-core reactor's Handle; spawn each TLS handshake
    // as its own task so the accept loop keeps running.
    handle.spawn(ssl.accept_async(conn).map_err(|_| ()).and_then(move |_conn| {
        println!("Client connected!");
        Ok(())
    }));
    Ok(())
});

core.run(server).unwrap();

The problem with that is that you often can't just make everything async unless you're programming in a language where blocking calls are barely possible (looking at you, JavaScript). Sure, Tokio is easy once you get the hang of it. But usually you want to combine synchronous code, asynchronous code, and all kinds of other things, and this is where Tokio becomes a problem.
For the Synac client, I had already worked with a library that only allowed blocking reads, so multithreading wasn't possible. I didn't want to force built-in mutexes on users, but I also didn't want to rule out multithreading, so I came up with my own solution. It gave me code like this:

if let Ok(Some(packet)) = listener.try_read() {
    println!("Packet received: {:?}", packet);
}

(If you're wondering how I did this magic sorcery: I simply did something similar to read_exact, but in a struct. Code)
This ended up being a good decision, because I could then do non-blocking reads inside gtk::timeout_add! And it wasn't too difficult to write, either!
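
Roughly, the idea looks like this - a simplified sketch, not the actual Synac code (the struct, the fixed packet_len field and the buffer handling are all assumptions):

use std::io::{self, Read};
use std::net::TcpStream;

// Sketch only: a non-blocking socket plus a buffer, so partial reads can be
// resumed on the next call instead of blocking like read_exact would.
struct PacketReader {
    stream: TcpStream, // stream.set_nonblocking(true) has been called
    buf: Vec<u8>,
    packet_len: usize, // how many bytes make up one packet (assumed known)
}

impl PacketReader {
    fn try_read(&mut self) -> io::Result<Option<Vec<u8>>> {
        let mut chunk = [0; 1024];
        loop {
            match self.stream.read(&mut chunk) {
                Ok(0) => return Err(io::ErrorKind::UnexpectedEof.into()),
                Ok(n) => self.buf.extend_from_slice(&chunk[..n]),
                // Nothing more available right now - that's not an error.
                Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => break,
                Err(e) => return Err(e),
            }
        }
        if self.buf.len() >= self.packet_len {
            // A whole packet has accumulated; hand it out, keep the rest.
            Ok(Some(self.buf.drain(..self.packet_len).collect()))
        } else {
            Ok(None)
        }
    }
}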

Now I want to ask: When should you use an async framework with cores and stuff, and when should you just write a non-blocking listener? I feel like non-blocking listeners are way more flexible, but there has to be a reason tokio exists... Right?

Top comments (10)

Idan Arye

Isn't that what async reactors do behind the scenes though? They use non-blocking calls to check which events are ready, and pick and trigger the appropriate future.

jD91mZM2

Yup, which is why I don't really see the point of them. They make handling errors more difficult, force you to use reference counting to move stuff into closures, etc.

Idan Arye

If you only need to wait for one IO at a time and don't need to run anything else while waiting for it - sure, use the way you described. But if that's the case - you might as well just use a blocking API. In fact, blocking is probably better since it's easier on the CPU.

If you need to wait for multiple IOs at a time, but they are all pretty much the same modulo data (their result is handled by the same code, but each IO has its own context), your approach will also work (you will have to check all the listeners each time though) - though if you can use something like select you probably should.

But - if you need to wait on multiple IOs, handled by different code? How will your approach work there?

Simple example:

1. Run in parallel:
    1.1. Read some file X (different in each parallel invocation)
    1.2. Extract other file name Y from the contents of X
    1.3. Read Y
    1.4. Do something based on the content of Y

When you wait on all the IOs, and one of them returns - is it the one from 1.1 or the one from 1.3? Do you need to use its result in 1.2 or 1.4? This one can probably be solved with a simple if - but does it scale?
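
Chained futures make that question disappear, because each step only runs once the previous one has resolved - a rough sketch in futures-0.1 style (read_file, extract_other_name and do_something are hypothetical helpers):

// Sketch only: each helper is hypothetical; read_file returns a future.
let work = futures::future::join_all(file_names.into_iter().map(|x| {
    read_file(x)                                        // 1.1 read X
        .map(|contents| extract_other_name(&contents))  // 1.2 get Y's name
        .and_then(|y| read_file(y))                     // 1.3 read Y
        .map(|contents| do_something(&contents))        // 1.4 use Y's content
}));
core.run(work).unwrap();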

jD91mZM2

sure, use the way you described. But if that's the case - you might as well just use a blocking API. In fact, blocking is probably better since it's easier on the CPU.

My use case was running it in gtk::idle_add/timeout_add, which only supports non-blocking calls (it runs on the same thread as the GUI). I was also reading from it through a mutex, which meant I couldn't keep the mutex locked for an entire blocking read.
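
Roughly like this (a sketch only - it assumes the listener sits behind an Arc<Mutex<...>>, the 2017-era gtk-rs API, and an arbitrary 50 ms interval):

// Sketch: poll the shared listener from GTK's main loop without blocking it.
let listener = listener.clone(); // Arc<Mutex<...>> shared with other threads
gtk::timeout_add(50, move || {
    // try_lock so a busy mutex can't stall the GUI thread either
    if let Ok(mut listener) = listener.try_lock() {
        if let Ok(Some(packet)) = listener.try_read() {
            println!("Packet received: {:?}", packet);
        }
    }
    glib::Continue(true) // keep the timeout alive
});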

But - if you need to wait on multiple IOs, handled by different code? How will your approach work there?

for thing in things {
    if let Some(data) = thing.try_read() {
        return Some(data);
    }
}

?

Idan Arye

My use case was running it in gtk::idle_add/timeout_add, which only supports non-blocking calls (it runs on the same thread as the GUI). I was also reading from it through a mutex, which meant I couldn't keep the mutex locked for an entire blocking read.

A GUI's event loop is very similar to an async framework's reactor, so you were already using one behind the scenes.

for thing in things...

I said "handled by different code" - in your case they all go to the same code, the one that called that function.

Consider this:

This small program takes the numbers from 1 to 9, uses the math.js webservice to square them, and then uses that webservice again to multiply the results by 100. This, of course, I could do inside Rust - but I wanted to demonstrate an async flow, so I made them web requests - and I also added a random delay to each request to simulate real-world scenarios where requests can take different amounts of time.

For each number, this program sends two requests - one for squaring it and one for multiplying it by 100. If I only needed one request, your approach could work (pseudo-Rust):

fn next_available() -> Option<(usize, Response)> {
    for (i, (orig_index, request)) in remaining_requests.iter().enumerate() {
        if let Some(response) = request.try_read() {
            remaining_requests.remove(i);
            return Some((orig_index, response));
        }
    }
    None
}

fn timeout_callback() {
    if let Some((i, response)) = next_available() {
        result[i] = response.data;
    }
    timeout_add(timeout_callback);
}

But... I do two different requests for each number, and handle their responses differently - for the first I send another request based on its result, and for the second I store the result in a vector (actually join_all does that for me - I just log it and pass it on). How would you do that? Use two callbacks?

jD91mZM2

Alright, you're making some good points. I suppose I should see if I can use GTK+'s reactor to read a connection. Doubt it, but perhaps.

In your example though, I don't see why you couldn't just make the connection vector hold a type parameter and add the second request with another type.

Idan Arye

In your example though, I don't see why you couldn't just make the connection vector hold a type parameter and add the second request with another type.

You mean something like this?

fn next_available() -> Option<(usize, Response)> {
    for (i, (orig_index, request)) in remaining_requests.iter().enumerate() {
        if let Some(response) = request.try_read() {
            remaining_requests.remove(i);
            return Some((orig_index, match request {
                FirstRequest(..) => FirstResponse(response),
                SecondRequest(..) => SecondResponse(response),
            }));
        }
    }
    None
}

if let Some((i, response)) = next_available() {
    match response {
        FirstResponse(response) => {
            send_second_request(response.data);
        },
        SecondResponse(response) => {
            result[i] = response.data;
        },
    }
}

It doesn't scale very well:

  • You need, for each IO in the algorithm, to add a variant to the enum.
  • Code sequences that normally go together need to be broken up into different places.
  • If you want to add other algorithms on the same loop, it's hard to keep them from tangling with each other.
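
For contrast, a sketch of the same flow written as one chained future per number (square_request and multiply_request are hypothetical helpers returning futures) - the whole sequence for a number stays in one place:

// Sketch, futures-0.1 style; both request helpers are hypothetical.
let work = futures::future::join_all((1..10).map(|n| {
    square_request(n)                                       // first request
        .and_then(|squared| multiply_request(squared, 100)) // second request
        .map(|product| {
            println!("result: {}", product);                // final response
            product
        })
}));
core.run(work).unwrap();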
jD91mZM2

Alright. Thanks for a nice discussion!

tbodt

Async thingies don't just have a loop that constantly checks whether something has happened. They use a special system call to wait for any number of things at the same time (poll, select, epoll).
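
For example, with mio (the readiness library Tokio is built on), a single call can wait on many sockets at once - a minimal sketch, assuming mio 0.6's API:

use mio::net::TcpStream;
use mio::{Events, Poll, PollOpt, Ready, Token};

// Register several sockets, then block in a single poll() call (epoll/kqueue
// under the hood) until at least one of them becomes readable.
fn wait_for_readable(sockets: &[TcpStream]) -> std::io::Result<Vec<usize>> {
    let poll = Poll::new()?;
    for (i, socket) in sockets.iter().enumerate() {
        poll.register(socket, Token(i), Ready::readable(), PollOpt::edge())?;
    }

    let mut events = Events::with_capacity(64);
    poll.poll(&mut events, None)?; // one syscall waits on all of them

    // Report which sockets are ready, identified by their Token index.
    Ok(events.iter().map(|event| event.token().0).collect())
}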

Idan Arye

That's hardly the point here. You can use that "special system call" to wait for events even without an async framework, and if you have no choice but to constantly check a condition - you can still incorporate that into an async framework (I think that's what tokio-inotify does).