I'm curious about your perspective on Rust as a HW engineer. Hardware does a ton of things - DMA, interrupts, etc. - that are not really compatible with Rust's memory model; after all, Rust's immutable borrows should guarantee that the values you are reading are not aliased by writers and remain constant as long as the borrow exists.
This is obviously not true when the CPU can either yank execution away to a different part of the program, or some foreign entity can overwrite your memory.
Additionally, in low-level embedded systems, the existence of malloc is not a given, yet Rust seems to assume you can dynamically allocate memory with a stateless allocator.
Rust has no_std to handle not having an allocator.
Tons of things end up being marked “unsafe” in systems/embedded Rust. The idea is that you sandbox the unsafety. Libraries like zerocopy are a good example of “containing” unsafe memory accesses in a way that still gets you as much memory safety as possible given the realities of embedded.
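To make that containment concrete, here's a hand-rolled sketch of the pattern (not zerocopy's actual API - the names and layout are invented for illustration): one unsafe reinterpretation, guarded by explicit checks, exposed through a safe function.

```rust
// A zerocopy-style "contained unsafe" sketch: parse a wire header
// without copying, with the unsafe cast justified by checks above it.

#[repr(C)]
#[derive(Debug)]
struct PacketHeader {
    msg_type: u8,
    flags: u8,
    len: u16, // native endianness, for illustration only
}

// Force alignment of the test buffer so the alignment check below passes.
#[repr(align(4))]
struct Aligned([u8; 6]);

/// Reinterpret a byte slice as a PacketHeader, zero-copy.
fn parse_header(bytes: &[u8]) -> Option<&PacketHeader> {
    if bytes.len() < core::mem::size_of::<PacketHeader>() {
        return None;
    }
    if bytes.as_ptr() as usize % core::mem::align_of::<PacketHeader>() != 0 {
        return None;
    }
    // SAFETY: length and alignment were checked above; every bit pattern
    // is a valid PacketHeader (u8/u16 fields only, repr(C), no padding),
    // and the returned reference inherits the input slice's lifetime.
    Some(unsafe { &*(bytes.as_ptr() as *const PacketHeader) })
}

fn main() {
    let raw = Aligned([0x01, 0x00, 0x08, 0x00, 0xaa, 0xbb]);
    let hdr = parse_header(&raw.0).expect("valid header");
    println!("type={} flags={} len={}", hdr.msg_type, hdr.flags, hdr.len);
}
```

Everything outside that one `unsafe` expression is ordinary safe Rust, so a reviewer auditing memory safety only has to stare at the cast and the two checks above it.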
Tl;dr you don’t get as much safety as higher level code but you still get more than C. Or maybe put a different way you are forced to think about the points that are inherently unsafe and call them out explicitly (which is great when you think about how to test the thing)
Dunno. I used to do embedded. A ton of (highly capable) systems have tiny amounts of RAM (like 32 KB-128 KB). Usually these controllers run a program in an infinite event loop (with perhaps a sleep at the end of each iteration), and communication with peripherals is triggered via interrupts. There's no malloc; memory is pre-mapped.
This means:
- All lifetimes are either 'stack' or 'forever'
- Thread safety in the main loop is ensured by disabling interrupts in the critical section - Rust doesn't understand this paradigm (it can be taught to, I'm sure)
- Thread safety in ISRs should also be taught to Rust
- Peripherals read and write memory via DMA, unbeknownst to the CPU
etc.
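For what it's worth, the interrupt-disable pattern in the second bullet is something Rust has indeed "been taught": crates like cortex-m and critical-section hand you a token that proves interrupts are off, and shared state is only reachable through that token. Here's a hedged, host-runnable sketch of the idea - the flag is a stand-in for `cpsid i`/`cpsie i`, and all the names are made up, not any crate's real API:

```rust
// Sketch of the critical-section-token pattern. On real hardware the
// enable/disable would touch PRIMASK; here a thread-local flag lets
// the example run anywhere.

use std::cell::Cell;

/// Zero-sized token: holding a borrow of one proves we're inside a
/// critical section, and the lifetime stops it escaping the closure.
pub struct CsToken(());

thread_local! {
    static IRQS_ENABLED: Cell<bool> = Cell::new(true);
    // Shared counter that an ISR and the main loop would both touch.
    static TICK_COUNT: Cell<u32> = Cell::new(0);
}

/// Run `f` with interrupts "disabled".
pub fn interrupt_free<R>(f: impl FnOnce(&CsToken) -> R) -> R {
    IRQS_ENABLED.with(|e| e.set(false)); // cpsid i on a real Cortex-M
    let r = f(&CsToken(()));
    IRQS_ENABLED.with(|e| e.set(true)); // cpsie i
    r
}

/// Access requires the token, so the compiler enforces the discipline
/// that C leaves to convention and code review.
pub fn bump_ticks(_cs: &CsToken) -> u32 {
    debug_assert!(IRQS_ENABLED.with(|e| !e.get()));
    TICK_COUNT.with(|t| {
        t.set(t.get() + 1);
        t.get()
    })
}

fn main() {
    let now = interrupt_free(|cs| bump_ticks(cs));
    println!("ticks = {}", now);
}
```

Forgetting to disable interrupts before touching `TICK_COUNT` becomes a compile error rather than a heisenbug, which is exactly the kind of thing that's hard to test for in C.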
So all in all, I don't see Rust being that much more beneficial (except for stuff like array bounds checking).
I'm sure you can describe all these behaviors to Rust so that it can check your program, but threading issues are limited, and there's no allocation, and no standard lib (or much else from cargo).
Rust may be a marginally better language than C due to some convenience features, but I feel like there's too much extra work for very little benefit.
You handle DMA the same way as on any other system - e.g. you mark the memory as system/uncached (and pin it if you have virtual memory on), you use memory barriers when you are told data has arrived to bring the memory controller up to speed with what happened behind its back, and so on. You’re still writing the same code that controls the hardware in the same way; nothing about that is especially different in C vs Rust.
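A sketch of what that completion path can look like in Rust. Hedged: `fence` from `core::sync::atomic` stands in for the arch-specific barrier (DMB/DSB on ARM) real driver code would issue, and the "DMA buffer" here is ordinary memory so the example runs anywhere:

```rust
// "Data has arrived" path: barrier first, then volatile reads so the
// compiler doesn't assume it knows what's in memory a peripheral wrote.

use core::ptr::read_volatile;
use core::sync::atomic::{fence, Ordering};

/// Pretend `buf` is the RAM region a peripheral just DMA'd into.
fn read_dma_buffer(buf: &[u8]) -> Vec<u8> {
    // Acquire barrier: everything the DMA engine wrote before signaling
    // completion must be visible before we start reading the buffer.
    fence(Ordering::Acquire);
    // Volatile reads stop the compiler from caching or eliding loads
    // from memory it believes nothing else can write to.
    (0..buf.len())
        .map(|i| unsafe { read_volatile(buf.as_ptr().add(i)) })
        .collect()
}

fn main() {
    let fake_dma_buf = [0xde, 0xad, 0xbe, 0xef];
    println!("{:02x?}", read_dma_buffer(&fake_dma_buf));
}
```

Note this is the same recipe as in C - barrier plus volatile access - just with the `unsafe` part called out explicitly instead of being implicit in every pointer dereference.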
I think there is very much a learning curve, and that friction is real - hiring people and teaching them Rust is harder than just hiring C devs. People on HN tend to take this almost like a religious question - “how could you dare to write memory unsafe code in 2025? You’re evil!” But pragmatically this is a real concern.
Hubris is an embedded microkernel targeting the kilobytes-to-megabyte range of RAM. In Rust. They're saying very positive things about the bug prevention aspects.
> I'm curious about your perspective on Rust as a HW engineer.
C, C++ and Rust all require you to know at least the basics of the C memory model: register variables, heap, stack, stack frames, frame invalidation, allocators and allocation, heap pointer invalidation, etc. There is obviously more complicated stuff (which I think you already know, seeing that you're an embedded developer), but this much is necessary to avoid common memory errors like memory leaks (not a safety error), use-after-free, double-free, data races, etc. This is needed even for non-systems programs and applications, due to the lack of runtime memory management (GC or RC).

You can get by following certain rules of thumb in C and C++. But to be able to write flawless code, you have to know those hardware concepts. This is where knowledge of process and memory architecture comes in handy: you start with the fundamental rules before programming, instead of the other way around, which is the path people normally take. Even in Rust, the complicated borrow checker rules start to make sense once you realize how they help you overcome the mistakes you can make with the hardware.
> Hardware does a ton of things - DMA, interrupts, etc. that are not really compatible with Rust's memory model - after all Rust's immutable borrows should guarantee the values you are reading are not aliased by writers and should be staying constant as long as the borrow exists.
> This is obviously not true when the CPU can either yank away the execution to a different part of the program, or some foreign entity can overwrite your memory.
I do have an answer, but I don't think it can be explained in a better way than how @QuiEgo did it: You can 'sandbox' those unsafe parts within Rust unsafe blocks. As I have explained elsewhere, these sandboxed parts are surprisingly small even in kernel or embedded code (please see the Rust standard library for examples). As long as you enforce the basic correctness conditions (the invariants) inside the unsafe blocks, the rest of the code is guaranteed to be safe. And even if you do make a mistake there (i.e. violate memory safety), it is easier to find because there's very little code to check. Rust does bring something new to the table for the hardware.
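To make the "small unsafe surface with stated invariants" point concrete - and to tie it back to the no-malloc constraint - here is a hedged sketch of a fixed-capacity, allocation-free stack in the spirit of the heapless crate (the names are invented, not that crate's API). The unsafe expressions are confined to two spots, each justified by one invariant:

```rust
// Allocation-free stack: the invariant `buf[0..len] are initialized`
// is stated once and checked at both unsafe sites.

use core::mem::MaybeUninit;

pub struct FixedStack<T, const N: usize> {
    buf: [MaybeUninit<T>; N],
    len: usize, // INVARIANT: buf[0..len] are initialized
}

impl<T, const N: usize> FixedStack<T, N> {
    pub fn new() -> Self {
        Self {
            // SAFETY: an array of MaybeUninit is valid uninitialized.
            buf: unsafe { MaybeUninit::uninit().assume_init() },
            len: 0,
        }
    }

    /// No allocator, so a full stack hands the value back instead of growing.
    pub fn push(&mut self, v: T) -> Result<(), T> {
        if self.len == N {
            return Err(v);
        }
        self.buf[self.len].write(v);
        self.len += 1; // invariant restored: one more initialized slot
        Ok(())
    }

    pub fn pop(&mut self) -> Option<T> {
        if self.len == 0 {
            return None;
        }
        self.len -= 1;
        // SAFETY: by the invariant, the slot at the (pre-decrement) top
        // holds an initialized T; decrementing first means it is read
        // exactly once. (Drop of leftover elements omitted for brevity.)
        Some(unsafe { self.buf[self.len].assume_init_read() })
    }
}

fn main() {
    let mut s: FixedStack<u32, 4> = FixedStack::new();
    s.push(7).unwrap();
    s.push(8).unwrap();
    println!("{:?} {:?}", s.pop(), s.pop());
}
```

A bug here would be a one-line invariant violation in a 40-line file - that is the "easier to find" property, versus auditing every pointer arithmetic expression in an equivalent C ring buffer.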
NOTE: I believe that those parts in the kernel are still in C. Rust is just a thin wrapper over it for writing drivers. That's a reasonable way forward.
> Additionally, in low-level embedded systems, the existence of malloc is not a given, yet Rust seems to assume you can dynamically allocate memory with a stateless allocator.