Saying that it is "great" is really overselling it at this point, unless the languages you're comparing to are mainly C or C++. I would also toss Ruby in that category for now, but they seem to be working on making async better with Ruby 3+.
When compared to Python, JavaScript, or C#... Rust's language support for async isn't really that impressive to most developers, although the implementation has some technical characteristics that are nice for people who care about the low level details. The ecosystem is pretty lacking, even though it is much better than it was a few years ago.
Compared to the aforementioned languages (including Rust), Go and BEAM-based languages are just on another level.
I'm sure both Rust's language ergonomics and its ecosystem will improve over time, but Rust's goals also make it unlikely to ever match my definition of a great async system: the model Go and Erlang use, where the developer can't tell that every sequential process isn't running in its own OS thread -- and, by extension, every function is effectively async with no distinction I have to worry about.
In Go, I can spin up as many goroutines as I need, and I don't have to worry about any single goroutine blocking an executor like I have to worry about in Rust.
Yes, some async runtimes in Rust have worked on clever mitigations, like monitoring the executor threads and spawning a new executor thread to steal a blocked one's work queue... maybe some day that will be a battle-tested solution. Like much of the Rust async story, there are rough edges that could benefit from some more time.
(I have worked professionally with both Rust and Go for several years, and I think they're both great languages with different trade-offs... I'm not trying to bash Rust here.)
You are right, but this project (or a similar one), which uses WASM-based Rust nano-processes, has the potential to leave Go and Erlang in the dust: https://github.com/lunatic-solutions/lunatic
I can't tell if that is pulling V8 in as the WASM interpreter or not. Obviously, I would prefer that they leave others in the dust without having to pull in a huge C++ codebase, but it still sounds promising either way!
Awesome! I had heard of wasmer before, but I didn't realize its performance was that good yet. I'll definitely be paying closer attention to this space now.
Although, on closer inspection it seems like wasmer is using LLVM as a backend to get that good performance, which tempers my enthusiasm a little bit. I see that wasmer also supports the Cranelift backend, but the promised performance blog post[1] doesn't seem to have been published yet, so I don't know how much of a difference that makes in the real world.
A ~13x difference in JIT compile time (LLVM vs Cranelift) is huge, and waiting on LLVM itself to compile can also be... exciting. LLVM does good work for AOT compilation, but I don't know how I feel about the trade-offs of using it as a JIT. If a massive C++ codebase is going to be pulled in anyway, I would almost rather they pull in V8 than LLVM... but that's just my opinion, since V8 seems to be very well optimized for the JIT use case.
But, I'm glad there is the choice to use Cranelift (and presumably exclude LLVM from the final binary) if that fits a particular use case, and I'm excited to see that follow-up blog post whenever the wasmer team has time to publish it.
There are some nice opportunities to bring LLVM compilation times in multithreaded environments much closer to optimal.
We are currently working towards that and hope to get something tangible in next releases... stay tuned!
When it's ready, I'll be happy to try it. That said, the BEAM codebase is around 30 years old and indestructibly stable while being pretty damn fast. It's going to be a while before anything with the same goals catches up.
Also one of the reasons I'm looking forward to Java's Project Loom. It seems Go, Elixir, and Java have gone down the `virtual threading` path, which greatly simplifies the dev experience.
Rust has a great async story compared to other languages, and it benefits IO-bound workloads even more than CPU-bound ones.