re: Are async frameworks really worth it?

re: Isn't that what async reactors do behind the scenes though? They use non-blocking calls to check which events are ready, and pick and trigger the a...
 

Yup, which is why I don't really see the point of them. They make error handling more difficult, force you to use reference counters to move state into closures, and so on.

 

If you only need to wait for one IO at a time and don't need to run anything else while waiting for it - sure, use the approach you described. But in that case you might as well just use a blocking API. In fact, blocking is probably better, since it's easier on the CPU.

If you need to wait for multiple IOs at a time, but they are all pretty much the same modulo data (their results are handled by the same code, but each IO has its own context), your approach will also work (though you will have to check every listener on each pass) - but if you can use something like select, you probably should.

But what if you need to wait on multiple IOs that are handled by different code? How will your approach work then?

Simple example:

1. Run in parallel:
    1.1. Read some file X (different in each parallel invocation)
    1.2. Extract other file name Y from the contents of X
    1.3. Read Y
    1.4. Do something based on the content of Y

When you wait on all the IOs and one of them returns - is it the one from 1.1 or the one from 1.3? Do you need to use its result in 1.2 or 1.4? This one can probably be solved with a simple if - but does it scale?

sure, use the approach you described. But in that case you might as well just use a blocking API. In fact, blocking is probably better, since it's easier on the CPU.

My use case was running it in gtk::idle_add/timeout_add, which only supports non-blocking calls (it was run on the same thread). It also had to read through a mutex, which means I could not keep the mutex locked for a whole blocking read.

But what if you need to wait on multiple IOs that are handled by different code? How will your approach work then?

for thing in things {
    if let Some(read_thing) = try_read(thing) {
        return Some(read_thing);
    }
}

?

My use case was running it in gtk::idle_add/timeout_add, which only supports non-blocking calls (it was run on the same thread). It also had to read through a mutex, which means I could not keep the mutex locked for a whole blocking read.

A GUI's event loop is very similar to an async framework's reactor, so you were already using one behind the scenes.

for thing in things...

I said "handled by different code" - in your case they all go to the same code, the one that called that function.

Consider this:

This small program takes the numbers from 1 to 9, uses the math.js web service to square them, and then uses that web service again to multiply the results by 100. I could, of course, do this inside Rust - but I want to demonstrate an async flow, so I made them web requests - and I also added a random delay to each request to simulate real-world scenarios where requests can take different amounts of time.

For each number, this program sends two requests - one to square it and one to multiply it by 100. If I only wanted one request, your approach could work (pseudo-Rust):

fn next_available() -> Option<(usize, i64)> {
    for (i, (orig_index, request)) in remaining_requests.iter().enumerate() {
        if let Some(response) = request.try_read() {
            remaining_requests.remove(i);
            return Some((orig_index, response));
        }
    }
    None
}

fn timeout_callback() {
    if let Some((i, response)) = next_available() {
        result[i] = response.data;
    }
    timeout_add(timeout_callback);
}

But... I do two different requests for each number, and handle their responses differently - for the first I send another request based on its result, and for the second I store the result in a vector (actually join_all does that for me - I just log it and pass it on). How would you do that? Use two callbacks?

Alright, you're making some good points. I suppose I should see if I can use GTK+'s reactor to read a connection. Doubt it, but perhaps.

In your example, though, I don't see why you couldn't just make the connection vector hold a type parameter and add the second request with another type.

In your example, though, I don't see why you couldn't just make the connection vector hold a type parameter and add the second request with another type.

You mean something like this?

fn next_available() -> Option<(usize, Response)> {
    for (i, (orig_index, request)) in remaining_requests.iter().enumerate() {
        if let Some(response) = request.try_read() {
            remaining_requests.remove(i);
            return Some((orig_index, match request {
                FirstRequest(..) => FirstResponse(response),
                SecondRequest(..) => SecondResponse(response),
            }));
        }
    }
    None
}

if let Some((i, response)) = next_available() {
    match response {
        FirstResponse(response) => {
            send_second_request(response.data);
        },
        SecondResponse(response) => {
            result[i] = response.data;
        },
    }
}

It doesn't scale very well:

  • For each IO in the algorithm, you need to add another branch to the enum.
  • Code sequences that normally go together have to be split across different places.
  • If you want to run other algorithms on the same loop, it's hard to keep them from tangling with each other.