
Apple handled this problem by adding memory safety to C (Firebloom). It seems unlikely they would throw away that investment and move to Rust. I’m sure lots of other companies don’t want to throw away their existing code, and when they write new code there will always be a desire to draw on prior art.




That's a rather pessimistic take compared to what's actually happening. What you say should apply most to the big players like Amazon, Google, Microsoft, etc., because they arguably have the largest C codebases. Yet they're also some of the most enthusiastic adopters and promoters of Rust. A lot of other adopters also have legacy C codebases.

I'm not trying to hype up Rust or disparage C. I learned C first and then Rust, even before Rust 1.0 was released. And I have an idea why Rust finds acceptance, which is also what some of these companies have officially mentioned.

C is a nice little language that's easy to learn and understand. But the price you pay for that shows up in large applications where you have to manage resources like heap allocations by hand. C doesn't offer any help when you make mistakes there, though some linters and static analyzers might catch a subset of them. The reason for this, I think, is that C was developed in an era when there wasn't enough computing power to do such complicated analysis in the compiler.

People have been writing C for ages, but let me tell you - writing correct C is a whole different skill that's hard and takes ages to learn. If you think I'm saying this because I'm a bad programmer, you would be wrong. I'm not a programmer at all (by qualification), but rather a hardware engineer who is more comfortable with assembly, registers, buses, DRAM, DMA, etc. I still used to get widespread memory errors, because all it takes is a lapse in attention while coding. That strain is what Rust alleviates.
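
To make that concrete, a minimal sketch of the kind of slip that C compiles happily and Rust rejects at compile time:

    // A lapse-of-attention bug: keep a reference to data, free the data,
    // then use the reference. The C equivalent (free followed by a read)
    // compiles and corrupts memory at runtime; rustc refuses to build it.
    fn main() {
        let s = String::from("sensor reading");
        let view = &s;       // borrow of the allocation
        drop(s);             // error[E0505]: cannot move out of `s` because it is borrowed
        println!("{view}");  // the borrow is still used here, so rustc rejects the drop
    }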


Not trying to make a value judgement on Rust either, just brainstorming why the Rust changeover might go slowly, per the question.

FWIW I work in firmware with the heap turned off. I’ve worked on projects in both C and Rust, and agree Rust still adds useful checks (at the cost of compile times and binary sizes). It seems worth the trade off for most projects.


Okay. That sounds like a reasonable explanation. Thanks!

I'm curious about your perspective on Rust as a HW engineer. Hardware does a ton of things - DMA, interrupts, etc. - that are not really compatible with Rust's memory model. After all, Rust's immutable borrows are supposed to guarantee that the values you are reading are not aliased by writers and stay constant as long as the borrow exists.

This is obviously not true when the CPU can yank execution away to a different part of the program, or when some foreign entity can overwrite your memory.

Additionally, in low-level embedded systems, the existence of malloc is not a given, yet Rust seems to assume you can dynamically allocate memory with a stateless allocator.


I'd like to take a stab at this.

Rust has no_std to handle not having an allocator.
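
A minimal sketch of what a no_std, allocator-free crate looks like (entry-point naming and linking details vary by target and runtime crate, so treat the names here as illustrative):

    // Bare-metal skeleton: no standard library, no heap, only `core`.
    #![no_std]
    #![no_main]

    use core::panic::PanicInfo;

    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }

    // Entry symbol; the actual name/signature depends on the target and
    // linker script (often provided by a runtime crate like cortex-m-rt).
    #[no_mangle]
    pub extern "C" fn _start() -> ! {
        loop {
            // poll peripherals, service events; nothing here allocates
        }
    }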

Tons of things end up being marked "unsafe" in systems/embedded Rust. The idea is that you sandbox the unsafeness. Libraries like zerocopy are a good example of "containing" unsafe memory accesses in a way that still gets you as much memory safety as possible given the realities of embedded.

Tl;dr: you don't get as much safety as in higher-level code, but you still get more than C. Or, put a different way, you are forced to think about the points that are inherently unsafe and call them out explicitly (which is great when you think about how to test the thing).
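
As an illustration of that containment idea, a minimal sketch of a safe wrapper around one unsafe access (the register address and names are made up, not from any real part):

    // One small unsafe block with a documented invariant; everything that
    // calls read_status() stays in safe Rust.
    fn read_status() -> u32 {
        const STATUS_REG: usize = 0x4000_0000; // hypothetical MMIO address
        // SAFETY (assumed): STATUS_REG is a valid, always-mapped device
        // register on this part, and a volatile read has no side effects
        // beyond reading the hardware value.
        unsafe { core::ptr::read_volatile(STATUS_REG as *const u32) }
    }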


Dunno. I used to do embedded. A ton of (highly capable) systems have tiny amounts of RAM (like 32 KB-128 KB). Usually these controllers run a program in an infinite event loop (with perhaps a sleep at the end of each iteration), and communication with peripherals is triggered via interrupts. There's no malloc; memory is pre-mapped.

This means:

- All lifetimes are either 'stack' or 'forever' (see the sketch after this list)

- Thread safety in the main loop is ensured by disabling interrupts in the critical section - Rust doesn't understand this paradigm (it can be taught to, I'm sure)

- Thread safety in ISRs should also be taught to Rust

- Peripherals read and write memory via DMA, unbeknownst to the CPU

etc.
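
For the 'forever' case in the first bullet, a minimal sketch of how that model maps onto Rust (names are illustrative): state lives in statics, borrows of it get the 'static lifetime, and array accesses are bounds-checked.

    // No heap: state is statically allocated and pre-mapped, matching the
    // event-loop firmware described above.
    static LOOKUP: [u16; 4] = [10, 20, 30, 40];

    fn scale(index: usize) -> Option<u16> {
        // Bounds-checked access: an out-of-range index yields None instead
        // of silently reading whatever sits next to the table in memory.
        LOOKUP.get(index).map(|&v| v * 2)
    }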

So all in all, I don't see Rust being that much more beneficial (except for stuff like array bounds checking).

I'm sure you can describe all these behaviors to Rust so that it can check your program, but the threading issues are limited in scope, there's no allocation, and there's no standard library (or much else from cargo).

Rust may be a marginally better language than C due to some convenience features, but I feel like there's too much extra work for very little benefit.


FWIW, teaching it critical sections is dead simple, and likely already done for you. E.g. Cortex-M:

https://github.com/rust-embedded/cortex-m/blob/master/cortex...
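
For instance, a minimal sketch of the usual critical-section pattern with the cortex-m crate (exact signatures differ slightly between crate versions):

    // Shared state between the main loop and an ISR. `interrupt::free`
    // disables interrupts for the duration of the closure and hands back a
    // CriticalSection token, which is the "disable interrupts in the
    // critical section" paradigm expressed in the type system.
    use core::cell::RefCell;
    use cortex_m::interrupt::{self, Mutex};

    static COUNTER: Mutex<RefCell<u32>> = Mutex::new(RefCell::new(0));

    fn increment() {
        interrupt::free(|cs| {
            *COUNTER.borrow(cs).borrow_mut() += 1;
        });
    }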

You can look at async and things like https://github.com/rtic-rs/rtic and https://github.com/embassy-rs/embassy for common embedded patterns.

You handle DMA the same way as on any other system - e.g. you mark the memory as system/uncached (and pin it if you have virtual memory on), you use memory barriers when you are told data has arrived to bring the memory controller up to speed with what happened behind its back, and so on. You're still writing the same code that controls the hardware in the same way; nothing about that is especially different in C vs Rust.
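
A rough, Cortex-M-flavored sketch of that sequence (the barrier choice and the cache/pinning setup are architecture-specific and assumed to be handled elsewhere):

    use cortex_m::asm;

    /// Called once the DMA controller has signalled "transfer complete".
    /// `buf` points at the uncached, pinned DMA buffer of `len` bytes.
    fn first_byte_after_dma(buf: *const u8, len: usize) -> u8 {
        assert!(len > 0);
        asm::dmb(); // make the device's writes visible before we read the buffer
        // SAFETY (assumed): the controller is quiescent after the completion
        // signal, so nothing else aliases the buffer, and `buf` is valid for
        // `len` bytes.
        unsafe { core::ptr::read_volatile(buf) }
    }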

I think there is very much a learning curve, and that friction is real - hiring people and teaching them Rust is harder than just hiring C devs. People on HN tend to take this almost like a religious question - “how could you dare to write memory unsafe code in 2025? You’re evil!” But pragmatically this is a real concern.


Hubris is an embedded microkernel targeting the kilobytes-to-megabyte range of RAM. In Rust. They're saying very positive things about the bug prevention aspects.

https://hubris.oxide.computer/reference/

https://hubris.oxide.computer/bugs/


Here is my take on your question.

> I'm curious about your perspective on Rust as a HW engineer.

C, C++ and Rust require you to know at least the basics of the C memory model: register variables, heap, stack, stack frames, frame invalidation, allocators and allocation, heap pointer invalidation, etc. There is obviously more complicated stuff (which I think you already know, seeing that you're an embedded developer), but this much is necessary to avoid common memory errors like memory leaks (not a safety error), use-after-free, double-free, data races, etc. This is needed even for non-system programs and applications, due to the lack of runtime memory management (GC or RC). You can get by in C and C++ by following certain rules of thumb. But to be able to write flawless code, you have to know those hardware concepts. This is where knowledge of process and memory architecture comes in handy. You start from the fundamentals before programming, instead of the other way around, which is the route people normally take. Even in Rust, the complicated borrow checker rules start to make sense once you realize how they help you avoid the mistakes you can make with the hardware.

> Hardware does a ton of things - DMA, interrupts, etc. - that are not really compatible with Rust's memory model. After all, Rust's immutable borrows are supposed to guarantee that the values you are reading are not aliased by writers and stay constant as long as the borrow exists.

> This is obviously not true when the CPU can yank execution away to a different part of the program, or when some foreign entity can overwrite your memory.

I do have an answer, but I don't think it can be explained in a better way than how @QuiEgo did it: you can 'sandbox' those unsafe parts within Rust unsafe blocks. As I have explained elsewhere, these sandboxed parts are surprisingly small even in kernel or embedded code (please see the Rust standard library for examples). As long as you enforce the basic correctness conditions (the invariants) inside the unsafe blocks, the rest of the code is guaranteed to be safe. And even if you do make a mistake there (i.e. a memory-safety mistake), it is easier to find, because there's very little code to check. Rust does bring something new to the table for the hardware.

NOTE: I believe that those parts in the kernel are still in C. Rust is just a thin wrapper over it for writing drivers. That's a reasonable way forward.

> Additionally, in low-level embedded systems, the existence of malloc is not a given, yet Rust seems to assume you can dynamically allocate memory with a stateless allocator.

That isn't true. @QuiEgo already mentioned no_std. It's meant for this purpose. Here is the reference: https://docs.rust-embedded.org/book/intro/no-std.html#bare-m...


So you are trying to say that C is only for good programmers and Rust also lets the idiots program? I think that's the wrong way to argue for Rust. Rust catches one kind of common problem, but it does not magically make logic errors go away.


No, they are not saying that at all??

> It seems unlikely [Apple] would throw away that investment and move to Rust.

Apple has invested in Swift, another high-level language with safety guarantees, which happens to have been created under Chris Lattner, otherwise known for creating LLVM. Swift's huge advantage over Rust, for application and system programming, is that it supports a stable ABI [1], which Rust, famously, does not (other than falling back to a C ABI, which degrades its promises).

[1] for more on that topic, I recommend this excellent article: https://faultlore.com/blah/swift-abi/ Side note, the author of that article wrote Rust's std::collections API.
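
For context, the "falling back to a C ABI" escape hatch looks roughly like this on the Rust side (the function name is made up): only C-compatible types can cross the boundary, so slices, generics, trait objects, and ownership information are all left behind at the interface.

    // Exporting a stable entry point from Rust today means restricting
    // yourself to the C calling convention and C-compatible types; Rust's
    // own types carry no cross-version layout guarantee.
    #[no_mangle]
    pub extern "C" fn mylib_abi_version() -> u32 {
        1
    }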


Swift does not seem suitable for OS development, at least not as much as C or C++.[0] Swift, as I understand it, handles a lot of memory by default using reference counting, which is not always suitable for OS development.

[0]: Rust, while no longer officially experimental in the Linux kernel, does not yet have major OSs written purely in it.


What matters is what Apple thinks, and officially it is, to the point that it is explicitly written in the documentation.

The practical reality is arguably more important than beliefs. Apple has, as it turns out, invested in trying to make Swift more suitable for kernel and similar development, for example by trying to automate away reference counting when possible, and also by offering Embedded Swift[0], an experimental subset of Swift with significant restrictions on what is allowed in the language. Maybe Embedded Swift will be great in the future, and it is true that Apple investing in it is significant, but it doesn't seem to be there yet.

> Embedded Swift support is available in the Swift development snapshots.

And considering Apple made Embedded Swift, even Apple does not believe that regular Swift is suitable. Meaning that you're undeniably completely wrong.

[0]: https://github.com/swiftlang/swift-evolution/blob/main/visio...


You show a lack of awareness that ISO C and C++ are also not applicable, because in those domains the full ISO language standard isn't available, which is why freestanding implementations are a thing.

But freestanding is not experimental, unlike Embedded Swift according to Apple. And there are full, large OS kernels written in C and C++.

You continue being undeniably, completely wrong.


Is it really? It always depends on which specific C compiler and target platform we are talking about.

For Apple it suffices that it is fit for purpose for Apple itself; it is experimental for the rest of the world.

I love to be rightly wrong.


There's an allocation-free subset.

https://www.swift.org/get-started/embedded/

Rust's approach is overkill, I think. A lot of reference counting and stuff is just fine in a kernel.


But at least a lot of tasks in a kernel would require something other than reference counting, unless it can be guaranteed that the reference counting is optimized away or something, right?

There are some allocations where it doesn't make sense for them to have multiple owners (strong references), but I wouldn't say it makes sense to think about it as being optimized away or not.

Nothing wrong with using reference counting for OS development.

Even kernel development? Do you know of kernels where reference counting is the norm? Please do mention examples.


Is this even a fair question? A common response to pointing out that Oberon and the other Wirth languages were used to write several OSs (using full GC in some cases) is that they don't count, just like Minix doesn't count as proof of microkernels. The prime objection is that they are not large commercial OSs. So, if the only two examples allowed are Linux and Windows (and maybe macOS), then 'no', there are no widespread, production-sized OSs that use GC or reference counting.

The big sticking point for me is that for desktop- and server-style computing, hardware capabilities have increased so much that a good GC would be acceptable at the kernel level for most users. The other side of that coin is that OSs would then need to be built on different kernels for large embedded/tablet/low-power/smartphone use cases. I think tech development has benefited from Linux being used at so many levels.

A push to develop a new breed of OS, with a microkernel and using some sort of 'safe' language, should be on the table for developers. But outside of proprietary military/finance/industrial areas (and a lot of the work in those fields just uses Linux), there doesn't seem to be any movement toward a less monolithic OS situation.


Apple is extending Swift specifically for kernel development.

Also, Embedded Swift came out of the effort to eventually use Swift for such use cases at Apple.


