DEV Community

Discussion on: Is Cooperative Concurrency Here to Stay?

Rodrigo Nonose

I'll chime in to point to Erlang as a curiosity.

Erlang's VM, the BEAM, implements preemptive scheduling. It lives in a single OS process and spawns one scheduler thread per CPU core (by default; this is configurable). Its processes (user-land, not OS processes) are fully preemptive, and IO is non-blocking, handled by dedicated schedulers.

Erlang achieves full preemption by giving each process complete memory isolation and handling all side effects through messages. It's a pretty unique runtime compared to others like Go and Node.js, and it yields a bunch of benefits: high availability, fault tolerance, high concurrency, and non-blocking garbage collection.
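The share-nothing, message-passing style described above can be sketched loosely in Python (all names here are hypothetical, and threads with queues only approximate Erlang processes with mailboxes):

```python
import queue
import threading

def counter_process(mailbox: queue.Queue, replies: queue.Queue):
    count = 0  # private state: no other thread can reach it directly
    while True:
        msg = mailbox.get()
        if msg == "increment":
            count += 1
        elif msg == "get":
            replies.put(count)  # state leaves only as a message
        elif msg == "stop":
            return

mailbox, replies = queue.Queue(), queue.Queue()
worker = threading.Thread(target=counter_process, args=(mailbox, replies))
worker.start()

mailbox.put("increment")
mailbox.put("increment")
mailbox.put("get")
value = replies.get()
mailbox.put("stop")
worker.join()
print(value)  # 2
```

Because the counter's state is reachable only through its mailbox, there's nothing to lock and nothing another "process" can corrupt, which is the property Erlang builds its fault tolerance on.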

Nested Software • Edited

Thanks @rhnonose ! I have a question:

the processes (not OS, but user-land) are fully preemptive

How does this work? By 'processes' I think you are referring to the functions that do work inside of Erlang, kind of like coroutines in Python's asyncio, is that right? If so, how can they be preempted?

Update: I was curious enough to look into this, and how it works is kind of interesting:

happi.github.io/theBeamBook/#CH-Sc...

The preemptive multitasking on the Erlang level is achieved by cooperative multitasking on the C level. The Erlang language, the compiler and the virtual machine works together to ensure that the execution of an Erlang process will yield within a limited time and let the next process run. The technique used to measure and limit the allowed execution time is called reduction counting, we will look at all the details of reduction counting soon.

The Erlang VM runs an event loop that implements cooperative concurrency. In that respect it's conceptually very similar to how Node.js and NGINX work. However, every time we call a function in Erlang, the VM has the option to run a different task instead. That's the sense in which it is preemptive. I guess I'd call this "pseudo-preemptive."
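The reduction-counting idea can be sketched with a toy scheduler in Python (names and the tiny budget are illustrative, not BEAM's actual implementation): each generator stands in for a process, each `next()` for a reduction, and a process is preempted and requeued once its budget runs out.

```python
from collections import deque

REDUCTION_BUDGET = 3  # illustrative; BEAM's real budget is in the thousands

def proc(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"  # each yield ~ one function call / reduction

def run(processes):
    ready = deque(processes)
    trace = []
    while ready:
        p = ready.popleft()
        for _ in range(REDUCTION_BUDGET):  # charge reductions as it runs
            try:
                trace.append(next(p))
            except StopIteration:
                break  # process finished within its budget
        else:
            ready.append(p)  # budget exhausted: preempt and requeue
    return trace

trace = run([proc("A", 5), proc("B", 5)])
print(trace)
```

Running this interleaves A and B in bursts of three steps: neither process cooperates explicitly, yet neither can monopolize the scheduler, which is the effect the BEAM book calls preemption via reduction counting.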

I'm not certain, but I suspect that performance-wise it is probably pretty close to cooperative concurrency: context switching still happens within an OS thread, only at well-defined switching points like function calls, and only when the VM decides it's time to switch (via reduction counting). The overhead ought to be similar to what Node.js incurs when we call an async function with await in JavaScript.
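The "well-defined switching points" comparison can be seen directly in Python's asyncio, which the question above mentioned: the scheduler can only switch between tasks at an `await`, never in the middle of ordinary code.

```python
import asyncio

order = []

async def task(name):
    for i in range(2):
        order.append(f"{name}{i}")
        await asyncio.sleep(0)  # the only point where the loop may switch

async def main():
    await asyncio.gather(task("A"), task("B"))

asyncio.run(main())
print(order)  # the tasks interleave strictly at the awaits: A0, B0, A1, B1
```

In Erlang the analogous switching points are function calls, which the compiler guarantees occur frequently; in asyncio (and Node.js) it's up to the programmer to await often enough.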

What about tight loops? Well, since the only way to loop in Erlang is via recursion, and recursion is made of function calls, that problem just doesn't apply. The one area where this can be a problem is native code. If we call native code from Erlang, we have to be very careful with it, since it isn't bound by the same restrictions as the Erlang language and VM. A poorly written native module could in fact hang the VM.
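That hazard is easy to demonstrate in any cooperative scheduler; here is a sketch using asyncio as a stand-in for the VM (the 0.2-second figure is arbitrary). A tight CPU loop that never awaits cannot be preempted, just as a badly written native module can stall a BEAM scheduler.

```python
import asyncio
import time

async def blocker():
    t0 = time.monotonic()
    while time.monotonic() - t0 < 0.2:
        pass  # never yields, so nothing else can run meanwhile

async def ticker(ticks):
    for _ in range(3):
        ticks.append(time.monotonic())
        await asyncio.sleep(0.01)  # yields cooperatively

async def main(ticks):
    await asyncio.gather(blocker(), ticker(ticks))

ticks = []
start = time.monotonic()
asyncio.run(main(ticks))
delay = ticks[0] - start
print(f"first tick delayed by {delay:.2f}s")  # roughly the full 0.2s block
```

The ticker's first tick is delayed by the entire blocking loop, even though the ticker itself is perfectly well-behaved. Erlang's answer for native code includes keeping native calls short or running them on separate ("dirty") schedulers.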