It's no secret that I'm a big fan of Elixir, so when I started doing Rust development I tried to bring some ideas from Elixir to the world of Rust.
While this code looks cool (I am a fan of Elixir + Erlang), Rust's reasons for not natively supporting green threads are valid (native interop, runtime complexity, they aren't real threads, etc.).
Using green threads for parallel compute doesn't make sense, as the OS can only handle a certain amount of work at once. "Massive" concurrency is only valuable if you are I/O bound. The mental demarcation between async/await and threads is very valuable once you consider this limitation.
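A minimal sketch of that distinction with plain OS threads (the task count and sleep duration here are arbitrary, purely for illustration): tasks that mostly *wait* interleave fine on a few cores, because a blocked task costs scheduler bookkeeping, not CPU time.

```rust
use std::thread;
use std::time::Duration;

fn main() {
    // 500 tasks that are almost always waiting (a stand-in for I/O):
    // a blocked thread consumes no CPU, so a few cores can carry all of them.
    let handles: Vec<_> = (0..500)
        .map(|_| thread::spawn(|| thread::sleep(Duration::from_millis(200))))
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // The same 500 tasks spinning on computation would just fight over
    // the handful of cores the machine actually has.
}
```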
Building a green thread implementation might encourage naive implementers to use them for compute.
Is there anything in the implementation that discourages massively parallel compute in these green threads?
Again, I think this project is cool and is some interesting code :)
I know this article uses Rust, but if you look closer at the Lunatic project, it's about WASM, not just Rust, so in the future it could support any language that compiles down to WASM. The first thing written in the Lunatic GitHub repo is: "It is heavily inspired by Erlang and can be targeted from any language that can compile to WebAssembly. Currently there are only bindings for Rust available."
I also find this project very interesting; maybe in the future it can be an alternative to the Erlang VM.
You are right, I missed the "forest for the trees" :)
Personally, I'm glad to see Erlang and Elixir ideas happening in Rust too, and even better, crashes that stay contained within a single lightweight process.
That sounds weird to me. If you look carefully at the article above: 2,000 real threads crash macOS, but 10x as many lightweight threads work fine. That can be useful in real life when designing a web framework, for example.
2,000 real threads don't make sense for parallel compute. Intel i9 processors don't have 2,000 hardware threads.
Now, 2,000+ "threads", or better yet async code, DOES make sense for I/O, because no real "work" is being done while you wait.
But if you are doing compute-bound work, 2k+ CPU threads make no sense on a personal computer; the sensible ceiling is the core count (sketched below).
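To make the compute-bound case concrete, a sketch with std's scoped threads (the workload is invented for illustration): size the worker count to the hardware, since threads beyond that add context switches, not throughput.

```rust
use std::thread;

fn main() {
    let data: Vec<u64> = (0..1_000_000).collect();

    // For compute-bound work, one worker per available core is the ceiling.
    let cores = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    let chunk_len = (data.len() + cores - 1) / cores;

    let total: u64 = thread::scope(|s| {
        data.chunks(chunk_len)
            .map(|chunk| s.spawn(move || chunk.iter().sum::<u64>()))
            .collect::<Vec<_>>() // spawn all workers before joining any
            .into_iter()
            .map(|h| h.join().unwrap())
            .sum()
    });
    println!("sum = {total}");
}
```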
100%, if your work is I/O bound, it is exceedingly useful. That is the point of my comment.
The example of the TCP echo server is a great example of I/O bound work.
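For reference, the shape of that workload, sketched with std's blocking sockets rather than Lunatic's API (the port and buffer size are arbitrary): each handler spends nearly all of its time blocked in `read`, which is exactly what makes it I/O bound, and why the per-task cost rather than the CPU is the limit.

```rust
use std::io::{Read, Write};
use std::net::TcpListener;
use std::thread;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:7878")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        // One handler per connection; it is almost always parked in
        // `read`, waiting on the network, not burning CPU.
        thread::spawn(move || {
            let mut buf = [0u8; 1024];
            loop {
                match stream.read(&mut buf) {
                    Ok(0) | Err(_) => break, // connection closed
                    Ok(n) => {
                        if stream.write_all(&buf[..n]).is_err() {
                            break;
                        }
                    }
                }
            }
        });
    }
    Ok(())
}
```

With lightweight processes, the same thread-per-connection shape keeps working at counts where OS threads give out.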
Ok, thanks for replying. Agreed that 2k CPU threads sounds a bit weird.
To me, this looks amazingly promising, B. Kolobara, and I'm looking forward to the day when these things are available in an Elixir/Erlang-style web framework. Could such a framework become the fastest and most scalable in the world (first place shared with Erlang)? And the most robust too, because crashes happen only within the lightweight processes?
It looks like the Erlang baseline per-process memory usage (stack + heap) is pretty low. Do you know what the Lunatic process size is, and what is the future goal?
Lunatic's process size is a bit higher than Erlang's when a process is spawned: around 4 KiB for the stack, if you don't use any heap data. On modern 64-bit CPUs Lunatic will rely mostly on cheap virtual memory. The actual memory consumption during runtime should be lower than Erlang's in most cases, simply because Rust's data structures are more compact and memory efficient. This is something that can be optimised further if it ever becomes a bottleneck; right now development is focused on stability and correctness before performance.
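For comparison, here is what requesting a similarly small stack looks like with plain OS threads (the 4 KiB figure just mirrors the number above): the OS typically rounds the request up to a platform minimum, which is precisely the per-task overhead a lightweight process avoids.

```rust
use std::thread;

fn main() {
    // Request a ~4 KiB stack, like a freshly spawned Lunatic process.
    // Most OSes round this up to a platform minimum (often 16 KiB or
    // more), one reason OS threads cost more memory per task.
    let handle = thread::Builder::new()
        .stack_size(4 * 1024)
        .spawn(|| println!("hello from a small-stack thread"))
        .expect("failed to spawn");
    handle.join().unwrap();
}
```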
If I had money to invest in things, then you would've just made an angel investor out of me. As I'm quick to say, I have never told someone, "I don't want it done right, I want it done fast!"
Also, this post has made me realize I've been too liberal in rating things as unicorns, because you, sir, are building a flerfing unicorn. My hats — all of my hats — off to you. 🎩🧢👒🎓⛑🪖👑
I love this! I have dabbled with Rust for a few years and have high hopes, but was disappointed when I saw Rust looking more like C# with the async/await stuff: so much complexity. And I'm a big Elixir fan and have used it for years.
On the claim that the "OS" can handle only a specific amount of work: I think this really means the underlying hardware can only handle a specific amount of work. The problem is that hardware is a moving target, so how do you know what you are running on? With the explosion of multi-core computing, we seem to be running on machines with more cores every year, and I don't want to change my code every year.
Using "processes" or "threads" is not just a performance optimization; it's also about ease of programming. It's much easier to write sequential code (do a, do b, do c) than to break it up into async/await, which brings in lots of complexity (see the sketch below). I think Go understands this too.
I don't understand the idea that massive concurrency only makes sense if you are I/O bound. Concurrency makes sense because 1) programming is easier, and 2) performance improves when you have more parallelism (which is becoming quite common).
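A toy comparison of that "ease of programming" point (the step functions are hypothetical): the thread version stays plain sequential code, while the async version changes every signature and only runs inside an executor.

```rust
use std::thread;

// Sequential style: do a, do b, do c. On a thread (or a lightweight
// process) the blocking is the scheduler's problem, not the code's.
fn handle_request() {
    let a = step_a();
    let b = step_b(a);
    step_c(b);
}

fn step_a() -> u32 { 1 }
fn step_b(a: u32) -> u32 { a + 1 }
fn step_c(_b: u32) {}

// The same flow with async/await: every function "changes color",
// and the code can no longer run without an executor driving it.
async fn handle_request_async() {
    let a = step_a_async().await;
    let b = step_b_async(a).await;
    step_c_async(b).await;
}

async fn step_a_async() -> u32 { 1 }
async fn step_b_async(a: u32) -> u32 { a + 1 }
async fn step_c_async(_b: u32) {}

fn main() {
    thread::spawn(handle_request).join().unwrap();
    // handle_request_async() would additionally need a runtime
    // (tokio, async-std, ...) just to be driven to completion.
}
```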
Just to add to this, I think Rust/Lunatic can do a better job at this than Akka on the JVM. Akka, Play, etc. show the limitations and difficulties of achieving isolation there.
I'd consider using Pony if I need something of Rust+BEAM caliber.
Cool stuff though!