It's the other way around: Rust tried Go's approach and rejected it. The original incarnation of Rust was heavily inspired by Limbo, the language Pike developed prior to Go. As of 2011, you could describe Rust as a slightly lower-level Go; it even had (a poor stub implementation of) a GC. Rust's Go-style green-thread runtime was removed in 2014 because it complicated the C interop story too much (a tradeoff Go is happy to make).

It's not a matter of minimalism or simplicity, it's a matter of different priorities. Rust's stackless async support is suitable for systems like microcontrollers, and the design reflects that.



> Rust's stackless async support is suitable for systems like microcontrollers, and the design reflects that.

This is the argument you hear, but how does it play out in practice? I don't know of any async runtimes that don't use Arcs; you can't use the `async-trait` crate (boxed futures mean allocations); and you don't have the escape hatches (again, Arcs) that you need when you get too deep into type-hell. To this day, I haven't heard any embedded folks praise async Rust. I'm by no means an embedded expert, but if I were doing embedded work I'd likely go with a plain old event loop to be safe.

Getting zero-cost (static-only) stackless async was a huge flex, and I'm very impressed with the proof of concept. I'm just wondering whether it actually serves the intended demographic.
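For concreteness on the boxing point: `async-trait` works by rewriting an `async fn` in a trait into a method that returns a type-erased boxed future, so every call allocates. A simplified sketch of that shape (the trait and type names here are made up):

```rust
use std::future::Future;
use std::pin::Pin;

// Roughly the shape `async-trait` expands `async fn fetch(&self) -> u32` into:
// a boxed, type-erased future per call.
trait Fetch {
    fn fetch<'a>(&'a self) -> Pin<Box<dyn Future<Output = u32> + Send + 'a>>;
}

struct Sensor;

impl Fetch for Sensor {
    fn fetch<'a>(&'a self) -> Pin<Box<dyn Future<Output = u32> + Send + 'a>> {
        Box::pin(async move { 42 }) // heap allocation on every call
    }
}
```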


I work professionally in embedded rust, and we don’t use async. That’s because we’re aggressively static; there’s no dynamic allocation in the system at all. This isn’t inherently an issue with async on bare metal, but it doesn’t really buy us anything, so we’re not doing it.

That said, it is very cool, and it does work well if you want to do it. I think for many projects it would make sense. For our requirements, synchronous code works, so we're sticking with it; folks on my team do use async Rust for other things, just not in this specific context.

The key is that nothing in async Rust requires dynamic allocation. This is in opposition to C++'s async story, where coroutine frames are heap-allocated by default; the compiler can in theory optimize those allocations away, but you're relying on the optimizer to do that. Our particular environment allows us to be 100% static, all the time, and so we're pursuing that route as far as it can go.
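To make that concrete, here's a minimal sketch (illustrative, not anyone's production code) of polling a future to completion with its state machine pinned on the stack and a hand-rolled no-op waker; nothing here touches the heap:

```rust
use std::future::Future;
use std::pin::pin;
use std::ptr;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing: enough to busy-poll a future to completion.
fn noop_waker() -> Waker {
    fn no_op(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(ptr::null(), &VTABLE)
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
    unsafe { Waker::from_raw(RawWaker::new(ptr::null(), &VTABLE)) }
}

async fn add(a: u32, b: u32) -> u32 {
    a + b
}

fn main() {
    // The future's state machine is pinned on the stack, not boxed.
    let mut fut = pin!(add(2, 3));
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => {
                assert_eq!(v, 5);
                break;
            }
            // A real executor would park here until woken.
            Poll::Pending => continue,
        }
    }
}
```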


You really can get quite far by having everything be statically allocated, placing reasonable constraints on input sizes/rates, and making use of ring buffers.

I've only worked on embedded development using C++, but I didn't really encounter any issues writing (admittedly, somewhat simple) device drivers with no heap whatsoever.
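For illustration, a minimal Rust sketch of that pattern: a ring buffer whose capacity is a compile-time constant, so the whole structure can live in a `static` with no heap at all (the `heapless` crate offers production-grade equivalents):

```rust
/// Fixed-capacity ring buffer; capacity is a compile-time constant.
pub struct Ring<T: Copy, const N: usize> {
    buf: [Option<T>; N],
    read: usize,
    write: usize,
}

impl<T: Copy, const N: usize> Ring<T, N> {
    pub const fn new() -> Self {
        Self { buf: [None; N], read: 0, write: 0 }
    }

    /// Rejects the item when full: this is where the "reasonable
    /// constraints on input rates" get enforced.
    pub fn push(&mut self, item: T) -> bool {
        if self.buf[self.write].is_some() {
            return false;
        }
        self.buf[self.write] = Some(item);
        self.write = (self.write + 1) % N;
        true
    }

    pub fn pop(&mut self) -> Option<T> {
        let item = self.buf[self.read].take();
        if item.is_some() {
            self.read = (self.read + 1) % N;
        }
        item
    }
}

fn main() {
    let mut rx: Ring<u8, 4> = Ring::new();
    assert!(rx.push(1) && rx.push(2));
    assert_eq!(rx.pop(), Some(1));
}
```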


I recently did an embedded rust project. Gave up on rust `async` and rolled my own asynchronous runtime on top of "synchronous" rust. It was much nicer than worrying about the type system. Since it's an embedded project with no OS, it's not hard to get peak performance from a very simple polling async runtime.

The thing that makes async runtimes complicated is handling multithreading in an intelligent way, and that's not something that worries most embedded engineers.
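A sketch of what such a runtime can boil down to (all names hypothetical): a superloop that round-robins over tasks, each a small state machine doing a bounded amount of work per poll:

```rust
// Each task is a hand-written state machine; poll() must not block.
trait Task {
    fn poll(&mut self);
}

struct Blinker {
    counter: u32,
    led_on: bool,
}

impl Task for Blinker {
    fn poll(&mut self) {
        self.counter = self.counter.wrapping_add(1);
        if self.counter % 100_000 == 0 {
            self.led_on = !self.led_on;
            // toggle_led(self.led_on); // real hardware access would go here
        }
    }
}

fn main() {
    let mut blinker = Blinker { counter: 0, led_on: false };
    let tasks: &mut [&mut dyn Task] = &mut [&mut blinker];
    // On a device this loop runs forever: no threads, no allocator, no wakers.
    loop {
        for task in tasks.iter_mut() {
            task.poll();
        }
    }
}
```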


> stackless async support is suitable for systems like microcontrollers, and the design reflects that.

Tangent: How feasible would it be to have a language where the required stack space for all function invocations is statically known at compile time, just as the sizes of Rust futures are statically known? I guess this would be even harder, because with async functions, the compiler only has to store the state of the function in a statically sized structure when it reaches an await point. But I'm guessing that running out of stack is a common problem when developing for microcontrollers.


You can predetermine per-function stack frame sizes as long as you don't do anything resembling C's `alloca` (Rust doesn't support `alloca` directly, but it does support dynamically sized types in some contexts, which would stymie this). From there, you need to use that information to figure out the maximum stack usage of the whole program, which means exhaustively determining all possible execution paths through the call graph, and that can be a hard problem. You also need to forbid recursion, even mutual recursion, and even recursion smuggled in through function pointers (shades of the Y combinator), which languages generally don't try to do.
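Once those restrictions hold, the analysis itself is a longest-path walk over the call graph. A toy sketch (function names and frame sizes invented), where any cycle means the answer is unbounded:

```rust
use std::collections::HashMap;

/// Worst-case stack usage of `f`: its own frame plus the deepest callee.
/// Panics on recursion (a cycle in the call graph), where no bound exists.
fn max_stack(
    frames: &HashMap<&str, u64>,
    calls: &HashMap<&str, Vec<&str>>,
    f: &str,
    visiting: &mut Vec<String>,
) -> u64 {
    assert!(
        !visiting.iter().any(|g| g == f),
        "recursion detected: stack usage is unbounded"
    );
    visiting.push(f.to_string());
    let deepest_callee = calls
        .get(f)
        .into_iter()
        .flatten()
        .map(|&c| max_stack(frames, calls, c, visiting))
        .max()
        .unwrap_or(0);
    visiting.pop();
    frames.get(f).copied().unwrap_or(0) + deepest_callee
}

fn main() {
    let frames = HashMap::from([("main", 64u64), ("isr", 32), ("fmt", 256)]);
    let calls = HashMap::from([("main", vec!["fmt"]), ("isr", vec!["fmt"])]);
    // Deepest path from `main` is main -> fmt: 64 + 256 = 320 bytes.
    assert_eq!(max_stack(&frames, &calls, "main", &mut Vec::new()), 320);
}
```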


I may be very wrong here, but each method in JVM bytecode explicitly carries `max_stack` and `max_locals` properties in its `Code` attribute, both calculated at compile time. Are they using some heuristic here?


I'd bet that the stack size you refer to doesn't include callees (i.e., the stack used by procedures called by the method in question).


Oh, you are right. Java effectively has two stacks: the per-frame operand stack and the call stack of method frames. Only the former is sized ahead of time; the latter is configurable at runtime (e.g. via `-Xss`).



