Hacker News | lasiotus's comments

Motor OS is now a Tier-3 target in Rust.


I'm not the OP, but I have a similar experience with Motor OS: wasmi compiles and works "out of the box", while wasmtime has a bunch of dependencies (e.g. target-lexicon) that won't compile on custom targets even if all features are turned off in wasmtime.
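For context, "out of the box" here can be as simple as a one-line dependency. This is a sketch (my own, not from the Motor OS tree), assuming wasmi's usual no_std support via `default-features = false`; exact feature flags may vary by version:

```toml
# Sketch: wasmi as the sole dependency for a custom target.
# Assumes no_std support is reached by disabling default features;
# check the wasmi crate's feature list for your version.
[dependencies]
wasmi = { version = "0.47", default-features = false }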


Not sure how much help I can offer with this much information, but I've built and run wasmtime on some pretty squalid architectures (xtensa and riscv32 microcontrollers, among others); the right collection of features might not be obvious. We can help you find the right configuration on the Bytecode Alliance Zulip or the wasmtime issue tracker if you need it.


> Not sure how to help with this [...]

I guess not much can be done at the moment: dependencies are often the primary obstacle in porting crates to new targets, and just comparing the list of dependencies of wasmtime vs wasmi gives a pretty good indication of which crate is a bit more careful in this regard:

https://crates.io/crates/wasmtime/33.0.0/dependencies

https://crates.io/crates/wasmi/0.47.0/dependencies


Wasmtime has many capabilities that wasmi does not, and therefore has more optional dependencies, but the required set of dependencies has been portable to every platform I've targeted so far. If anything does present a concrete issue we are eager to address it. For example, you could file an issue on target-lexicon describing how to reproduce your issue.


> If anything does present a concrete issue we are eager to address it.

That's great to hear! I think it is a bit too early to spend extra effort on porting Wasmtime to Motor OS at the moment, as there are a couple of more pressing issues to sort out (e.g. FS performance is not yet where it should be), but in a couple of months I may reach out!


Is that wasmtime in interpreter mode? I didn't see a rv32 backend to wasmtime (in cranelift) or did I not look in the right place.

What are the min memory requirements for wasmtime/cranelift?


There’s now an interpreter in wasmtime called Pulley. It’s an optimizing interpreter based on Cranelift that generates interpreter opcodes which are more efficient to traverse than interpreting the Wasm binary directly.

I have run wasmtime on the esp32 microcontrollers with plenty of ram to spare, but I don’t have a measurement handy.
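For anyone wanting to try this, a minimal-feature build is roughly what makes wasmtime fit on such targets. A sketch of the idea; the feature names below are from memory and may differ between wasmtime versions, so check the crate's feature list for your release:

```toml
# Sketch: Wasmtime with default features off, keeping only the runtime
# and the Pulley interpreter backend (no Cranelift codegen at runtime).
[dependencies]
wasmtime = { version = "33", default-features = false, features = ["runtime", "pulley"] }
```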



But if this benchmark is right, then wasmtime is 5x faster than wasmi for it:

https://github.com/khvzak/script-bench-rs


Wasmtime, being an optimizing JIT, is usually ~10 times faster than Wasmi during execution.

However, execution is just one metric that might be of importance.

For example, Wasmi's lazy startup time is much better (~100-1000x) since it does not have to produce machine code. This can result in cases where Wasmi is done executing while Wasmtime is still generating machine code.

Old post with some measurements: https://wasmi-labs.github.io/blog/posts/wasmi-v0.32/

Always benchmark and choose the best tool for your usage pattern.
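The startup-vs-execution tradeoff above can be put into a toy break-even model (my own illustration, not from the linked post): a JIT only pays off when its compile-time overhead is recouped by faster execution over the program's lifetime.

```rust
// Toy model: total wall time is compile time plus execution time.
// The JIT "wins" only when that sum beats pure interpretation.
fn jit_wins(compile_ms: f64, jit_exec_ms: f64, interp_exec_ms: f64) -> bool {
    compile_ms + jit_exec_ms < interp_exec_ms
}

fn main() {
    // Short-lived script: the interpreter finishes before the JIT
    // is even done compiling.
    assert!(!jit_wins(50.0, 1.0, 10.0));
    // Long-running workload: the compile cost amortizes away.
    assert!(jit_wins(50.0, 1_000.0, 10_000.0));
    println!("ok");
}
```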


That's a good point I didn't think about.

I guess it's like v8 compared to quickjs.

Anyway all this talk about wasm makes me want to write a scriptable Rust app!


Motor OS


Motor OS (https://motor-os.org) attempts to do exactly that, by focusing on a niche that is rather narrow from a "mainstream OS" point of view. Kind of a "do this one thing better" approach.


Motūrus OS (https://github.com/moturus/motor-os) has a newer microkernel.


Some extra context for comparison: Talc is faster than Frusa when there is no contention, but slower when there are concurrent allocations. Both are much slower than Rust's system allocator. Benchmark here: https://crates.io/crates/frusa.


Your results caught me off guard. In particular, the (Linux) system allocator looks suspiciously fast. I think the simplicity of the benchmark (allocating and immediately deallocating) might be causing issues... perhaps unwanted optimizations? I'm not sure.
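One way to rule out the "unwanted optimizations" theory is `std::hint::black_box`. A sketch (my code, not the benchmark in question), showing how to keep the compiler from proving an alloc-then-free pair unused and eliding it entirely:

```rust
use std::hint::black_box;
use std::time::{Duration, Instant};

// Time `iters` rounds of allocating and immediately freeing a buffer.
// black_box makes both the size and the resulting Vec opaque to the
// optimizer, so a real allocation must happen each iteration.
fn bench(iters: usize) -> Duration {
    let start = Instant::now();
    for _ in 0..iters {
        let v: Vec<u8> = black_box(vec![0u8; black_box(64)]);
        drop(v);
    }
    start.elapsed()
}

fn main() {
    println!("1M alloc+free of 64 B: {:?}", bench(1_000_000));
}
```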

On my random actions benchmarks (this resembles real allocation patterns somewhat better?):

- 1 thread: Talc is faster than Frusa and System, Frusa is comparable to System

- 4 threads: System is fastest, Frusa does about ~half as well, Talc does ~half as well as Frusa

Our benchmarks agree on the Frusa vs Talc comparison.

Benchmarks aside, Frusa seems neat. In particular, I had some misconceptions about how to tackle concurrency in Talc which Frusa's code demonstrates not to be true. I may give writing a concurrent version of Talc another shot soon.


Apologies, the benchmark is fine. The reason the system allocator is faster than I expected is because Linux's slab allocator takes over for especially small allocation sizes, and it's terrifically fast.

I'm changing up my random-actions benchmark to display results over various allocation sizes, as some allocators do much better than others at different sizes. As a heads up, Frusa takes a large hit at higher allocation sizes. Perhaps tuning bucket sizes or something could help? I'll try to have the benchmarks on GitHub this weekend so you can play around with them, if you'd like to investigate.


SEEKING VOLUNTEERS: Motūrus OS

Motūrus OS is a new operating system for VMs, written in Rust. There are a lot of interesting things to do for someone interested in systems-level Rust projects, from a simple ELF loader to a crash-resistant filesystem.

Or just porting C stuff like vim.

https://github.com/moturus/motor-os


The landing page at https://github.com/moturus/motor-os explicitly says that both networking and file I/O are slow and have to be improved. The only performance claim is about fast bootup; the number is there and can easily be verified.

Where do you see any unsupported claims re: performance?


I tried with async in the kernel first, but the cruft that was needed two years ago was not worth it in the kernel itself. The net I/O is actually async-first, see here: https://github.com/moturus/moto-runtime/blob/main/src/net.rs

File I/O will move to this model later (the current file I/O code is quite old and mostly a placeholder).


I think I missed this on my quick pass through the code; I see a lot of async usage here as well: https://github.com/moturus/moto-runtime/blob/main/src/net_as...

Very cool. Thanks for pointing this out.


Yes! And because of sandbagging :)

