Every system under the Sun has a C compiler. This isn't remotely true for Rust. Rust is more modern than C, but has its own issues, among them very slow compilation times. My guess is that C will be around long after people have moved on from Rust to another newfangled alternative.
There is a set of languages which are essentially required to be available on any viable system. At present, these are probably C, C++, Perl, Python, Java, and Bash (with a degree of asterisks on the last two). Rust I don't think has made it through that door yet, but on current trends, it's at the threshold and will almost certainly step through. Leaving this set of mandatory languages is difficult (I think Fortran, and BASIC-with-an-asterisk, are the only languages to really have done so), and Perl is the only one I would risk money on departing in my lifetime.
I do firmly expect that we're less than a decade out from seeing some reference algorithm be implemented in Rust rather than C, probably a cryptographic algorithm or a media codec. Although you might argue that the egg library for e-graphs already qualifies.
We're already at the point where, in order to have a "decent" desktop software experience, you _need_ Rust too. Rust doesn't support some niche architectures because LLVM doesn't support them (those architectures are now exceedingly rare), and on those platforms that means no Firefox, for instance.
> There is a set of languages which are essentially required to be available on any viable system. At present, these are probably C, C++, Perl, Python, Java, and Bash
Java, really? I don’t think Java has been essential for a long time.
Even Perl... It's not in POSIX (I'm fairly sure) and I can't imagine there is some critical utility written in Perl that can't be rewritten in Python or something else (and probably already has been).
As much as I like Java, it is also not critical for OS utilities. Bash shouldn't be either, per se, but a lot of scripts are actually Bash scripts, not POSIX shell scripts, and there usually isn't much appetite for rewriting them.
I have yet to find one without it, unless you include Windows, consoles, tablet/phone OSes, or embedded RTOSes, which aren't proper UNIX derivatives anyway.
I personally feel that Zig is a much better fit for the kernel. Its C interoperability is far better than Rust's, it has a lower barrier to entry for existing C devs, and it doesn't have the constraints that Rust does. All whilst still bringing a lot of the advantages.
...to the extent I could see it pushing Rust out of the kernel in the long run. Rust feels like a sledgehammer to me where the kernel is concerned.
Its problem right now is that it's not stable enough. Language changes still happen, so it's the wrong time to try.
From a safety perspective there isn't a huge benefit to choosing Zig over C, with the caveat, as others have pointed out, that you need to enable more tooling in C to get to a comparable level. You should be using -Wall and -fsanitize=address, among others, in your debug builds.
You do get some creature comforts like slices (fat pointers) and defer (goto replacement). But you also get forced to write a lot of explicit conversions (I personally think this is a good thing).
The C interop is good but the compiler is doing a lot of work under the hood for you to make it happen. And if you export Zig code to C... well you're restricted by the ABI so you end up writing C-in-Zig which you may as well be writing C.
It might be an easier fit than Rust in terms of ergonomics for C developers, no doubt there.
But I think long-term things like the borrow checker could still prove useful for kernel code. Currently you have to specify invariants like that in a separate language from C, if at all, and it's difficult to verify. Bringing that into a language whose compiler can check it for you is very powerful. I wouldn't discount it.
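To make that concrete, here is a minimal user-space sketch (names invented, with std's `Mutex` standing in for the kernel's locking primitives) of what encoding an invariant in the type system looks like:

```rust
use std::sync::Mutex;

// Invented example: state that must only be touched while holding the lock.
struct DeviceState {
    queued: u32,
}

fn main() {
    let state = Mutex::new(DeviceState { queued: 0 });

    {
        // The guard ties the data to the critical section: `queued` is
        // only reachable through it, and the lock is released on drop.
        let mut guard = state.lock().unwrap();
        guard.queued += 1;
    } // lock released here

    // There is no way to reach `queued` without taking the lock first,
    // so "only call this with the lock held" stops being a comment and
    // becomes something the compiler checks.
    println!("queued: {}", state.lock().unwrap().queued);
}
```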
> Zig, for all its ergonomic benefits, doesn’t make memory management safe like Rust does.
Not like Rust does, no, but that's the point. It brings both non-nullable pointers and bounded pointers (slices). Those solve a lot of problems by themselves. Tracking allocations is still a manual process, but with `defer` semantics there are many fewer foot guns.
> I kind of doubt the Linux maintainers would want to introduce a third language to the codebase.
The jump from 2 to 3 is smaller than the jump from 1 to 2, but I generally agree.
> I kind of doubt the Linux maintainers would want to introduce a third language to the codebase.
That was where my argument was supposed to go. Especially a third language whose benefits over C are close enough to Rust's benefits over C.
I can picture an alternate universe where we'd have C and Zig in the kernel, then it would be really hard to argue for Rust inclusion.
(However, to be fair, the Linux kernel has more than C and Rust, depending on how you count, there are quite a few more languages used in various roles.)
IMHO Zig doesn't bring enough value of its own to be worth bearing the cost of another language in the kernel.
Rust is different because it both:
- significantly improves the security of the kernel by removing the nastiest class of security vulnerabilities, and
- reduces the cognitive burden for contributors by allowing the invariants that must be upheld to be encoded in the type system.
That doesn't mean Zig is a bad language for a particular project, just that it's not worth adding to an already massive project like the Linux kernel. (Especially a project that already has two languages, C and now Rust.)
Pardon my ignorance, but I find the claim "removing the nastiest class of security vulnerabilities" to be a bold one. Is there ZERO use of "unsafe" Rust in kernel code??
Aside from the minimal use of unsafe being heavily audited and being the only entry point for those vulnerabilities, it allows kernel rules to be expressed explicitly and structurally, whereas before there was at best a code comment somewhere on how to use the API correctly. This was borne out by discussions about how to implement Rust wrappers for certain APIs, precisely because it was ambiguous how those APIs were intended to work.
So aside from being like 1-5% unsafe code vs 100% unsafe for C, it’s also more difficult to misuse existing abstractions than it was in the kernel (not to mention that in addition to memory safety you also get all sorts of thread safety protections).
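As a sketch of what "expressing the rules structurally" buys you (everything below is invented for illustration, not actual kernel bindings):

```rust
use std::mem::ManuallyDrop;

// A hypothetical safe wrapper over an unsafe, C-style buffer API. The rule
// that used to live in a comment ("index must be < len") is now enforced
// by the only function that can reach the raw pointer.
struct RawBuffer {
    ptr: *mut u8,
    len: usize,
}

impl RawBuffer {
    fn from_vec(v: Vec<u8>) -> Self {
        // Leak the backing allocation so `ptr` stays valid (fine for a sketch).
        let mut v = ManuallyDrop::new(v);
        RawBuffer { ptr: v.as_mut_ptr(), len: v.len() }
    }

    // The single audited entry point: all the bounds reasoning lives here.
    fn get(&self, index: usize) -> Option<u8> {
        if index < self.len {
            // SAFETY: `index` is in bounds, and `ptr` is valid for `len`
            // bytes because `from_vec` leaked the backing allocation.
            Some(unsafe { *self.ptr.add(index) })
        } else {
            None
        }
    }
}

fn main() {
    let buf = RawBuffer::from_vec(vec![1, 2, 3]);
    assert_eq!(buf.get(2), Some(3));
    assert_eq!(buf.get(3), None); // misuse is an error value, not UB
}
```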
In essence it’s about an order of magnitude fewer defects of the kind that are particularly exploitable (based on research in other projects like Android)
Not zero, but Rust-based kernels (see redox, hubris, asterinas, or blog_os) have demonstrated that you only need a small fraction of unsafe code to make a kernel (3-10%), and those are also the places where you'd be least likely to make a memory-related error in a C-based kernel in the first place (you're more likely to make a memory-related error while implementing an otherwise challenging algorithm that has nothing to do with memory management than when you are explicitly focused on the memory-management part).
So while there could definitely be an exploitable memory bug in the unsafe part of the kernel, expect those to be at least two orders of magnitude less frequent than with C (as anecdotal evidence, the Android team found memory defects to be 3 to 4 orders of magnitude rarer in practice over the past few years).
It removes a class of security vulnerabilities, modulo any unsound unsafe (in the compiler, std/core, and added dependencies).
In practice you see several orders of magnitude fewer segfaults (as in Google's Android CVE numbers). You can compare the Deno and Bun issue trackers for segfaults to see it in action.
As mentioned a billion times, seatbelts don't prevent death, but they do reduce the likelihood of dying in a traffic accident. Unsafe isn't a magic bullet, but it's a decent caliber round.
> Our historical data for C and C++ shows a density of closer to 1,000 memory safety vulnerabilities per MLOC. Our Rust code is currently tracking at a density orders of magnitude lower: a more than 1000x reduction.
"Unsafe" rust still upholds more guarantees than C code. The rust compiler still enforces the borrow checker (including aliasing rules) and type system.
There are devices that do not have (DMA) access to system memory, and you can write a safe description of such a device's registers. The only thing that is inherently unsafe is building DMA descriptors.
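For instance, a hypothetical memory-mapped UART could be described like this; the register names and layout are made up, while `read_volatile`/`write_volatile` are the real primitives you would build on:

```rust
use core::ptr::{read_volatile, write_volatile};

// A sketch of a "safe description" of device registers: the register map
// is a type, not a pile of magic offsets. All names here are invented.
#[repr(C)]
struct UartRegs {
    data: u32,   // read/write FIFO
    status: u32, // read-only in the hardware
}

struct Uart {
    regs: *mut UartRegs,
}

impl Uart {
    fn status(&self) -> u32 {
        // SAFETY: `regs` points at a live, mapped register block; a real
        // constructor (not shown) would have to uphold that.
        unsafe { read_volatile(&(*self.regs).status) }
    }

    fn send(&mut self, byte: u8) {
        // SAFETY: as above; taking `&mut self` rules out racing writers.
        unsafe { write_volatile(&mut (*self.regs).data, byte as u32) }
    }
}

fn main() {
    // Stand a plain struct in for the MMIO region, just to exercise the API.
    let mut fake = UartRegs { data: 0, status: 1 };
    let mut uart = Uart { regs: &mut fake };
    assert_eq!(uart.status(), 1);
    uart.send(b'!');
}
```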
Zig as a language is not worth it, but as a build system it's amazing. I wouldn't be surprised if Zig gets in just because of the much better build system than C ever had (you can cross-compile not only across OSes, but also across architectures and C stdlib versions, including musl). And with that comes the testing system and seamless interop with C, which makes it really easy to start writing some auxiliary code in Zig... and eventually it may just be accepted for any development.
I agree with you that it's much more interesting than the language, but I don't think it matters for a project like the kernel that already has its build system sorted out. (Especially since, no matter how nice and convenient Zig makes cross-compilation when you start a project from scratch, even in Rust thanks to cargo-zigbuild, it would require a lot of effort to migrate the Linux build system to Zig, only to realize it doesn't support all the needs of the kernel at the start.)
I can’t think of many real world production systems which don’t have a rust target. Also I’m hopeful the GCC backend for rustc makes some progress and can become an option for the more esoteric ones
There aren't really any "systems programming" platforms anywhere near production that don't have a workable rust target.
It's "embedded programming" where you often start to run into weird platforms (or sub-platforms) that only have a c compiler, or the rust compiler that does exist is somewhat borderline. We are sometimes talking about devices which don't even have a gcc port (or the port is based on a very old version of gcc).
Which is a shame, because IMO, rust actually excels as an embedded programming language.
Linux is a bit marginal, as it crosses the boundary and is often used as a kernel for embedded devices (especially ones that need to do networking). The 68k people have been hit quite hard by this: linux on 68k is still a semi-common use case, and while there is a prototype rust backend, it's still not production ready.
There's also, I believe, an effort to target C as the intermediate output, although I don't know its state / how well it'll work in an embedded space anyway, where performance really matters and these compilers have super old optimizers that haven't been updated in 3 decades.
It's mostly embedded / microcontroller stuff. Things that you would use something like SDCC or a vendor toolchain for. Things like the 8051, stm8, PIC or oddball things like the 4 cent Padauk micros everyone was raving about a few years ago. 8051 especially still seems to come up from time to time in things like the ch554 usb controller, or some NRF 2.4ghz wireless chips.
Those don't really support C by any real stretch. Speaking from general experience with microcontrollers and closed vendor toolchains: it's a frozen dialect of C from decades ago, which isn't what people think of when they say C (usually people mean at least the 26-year-old C99 standard, but these often at best support C89, or even come with their own limitations).
It’s still C though, and rust is not an option. What else would you call it? Lots of c libraries for embedded target C89 syntax for exactly these reasons. Also for what it’s worth, SDCC seems to support very modern versions of C (up to C23), so I also don’t think that critique is very valid for the 8051 or stm8. I would argue that c was built with targets like this in mind and it is where many of its quirks that seem so anachronistic today come from (for example int being different sizes on different targets)
I'm sure if Rust becomes more popular in the game dev community, game console support will be a solved problem, since these consoles are generally just running stock PC architectures with a normal OS, and there's probably almost nothing wrong with the stock toolchain in terms of generating working binaries: PlayStation and Switch are FreeBSD (x86 and ARM respectively) and Xbox is x86 Windows, all of which are supported platforms.
Being supported on a games console means that you can produce binaries that are able to go through the whole approval process, and there is day 1 support for anything that is provided by the console vendor.
Otherwise it's yak shaving instead of working on the actual game code.
Some people like to do that; that is how new languages like Rust get adoption. However, they are usually not the majority, hence why mainstream adoption without backing from killer projects or company support is so hard, and very few make it.
Also, you will seldom see anyone throw away the comfort of Unreal, Unity, or even Godot tooling to use Rust instead, unless they are more focused on proving the point that the game can be made in Rust than on the actual game design experience.
Good luck shipping arbitrary binaries to this target. The most productive way for an indie to ship to Nintendo in 2025 is to create the game in Unity and build via the special Nintendo version of the toolchain.
How long do we think it would take to fully penetrate all of these pipeline stages with rust? Particularly Nintendo, who famously adopts the latest technology trends on day 1. Do we think it's even worthwhile to create, locate, awaken and then fight this dragon? C# with incremental GC seems to be more than sufficient for a vast majority of titles today.
GNU/Linux isn't the only system out there; POSIX's last update was in 2024 (when the C requirement got upgraded to C17), and yes, some parts of it are outdated on modern platforms, especially Apple, Google, and Microsoft OSes and Cloud infrastructure.
Still, whoever supports POSIX is expected to have a C compiler available if they apply for the OpenGroup stamp.
> GNU/Linux isn't the only system out there, [...]
Yes? I'm especially keen on the project to make just 'Linux' without any Gnu components. Jokes aside, yes, other operating systems exist.
> [...] and yes, some parts of it are outdated on modern platforms, especially Apple, Google, and Microsoft OSes and Cloud infrastructure.
Well, also on modern hardware in general. POSIX's ideas about storage abstractions don't gel well with modern hardware. Well, not just modern hardware: atime has been cast aside since time immemorial. But lots of the other ideas are also dropping by the wayside.
In practice, we get something like 'POSIX-y' or 'close-enough-to-POSIX', which doesn't get a stamp from OpenGroup. I think Linux largely falls in that boat.
But regardless of that distinction, a C compiler is very much part of the informal 'close-enough-to-POSIX' class.
It's a bit of a shame that Unix has held back operating system research for so long. Fortunately, the proliferation of virtual machines and hypervisors has re-invigorated the field. Just pretend that your hypervisor is your nano-kernel and your VMs are your processes. No need to run a full-on Linux kernel in your VM; you can just have enough code running for your webserver or database etc., no general kernel needed. See e.g. https://mirage.io/
That isn't always the case. Slow compilations are usually because of procedural macros and/or heavy use of generics. And even then compile times are often comparable to languages like typescript and scala.
Compared to C, rust compiles much slower. This might not matter on performant systems, but when resources are constrained you definitely notice it.
And if the whole world is rewritten in Rust, this will have a non-negligible impact on the total build time of a bunch of projects.
It's been mentioned obliquely, but what's considered "part of the compilation process" is different between Rust and C.
Most of the examples of "this is what makes Rust compilation slow" are code generation related; for example, a Rust proc_macro making compilation slow would be equivalent to C building a code generator (for schemas? IDLs?), running it, then compiling the output along with its user.
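For comparison, here is what that generate-then-compile step looks like when done the C way but in Rust's build model: a hypothetical `build.rs` (the schema file is invented; `OUT_DIR` and `cargo:rerun-if-changed` are real Cargo mechanisms):

```rust
// build.rs — a hypothetical code generator, the moral equivalent of a C
// project generating a header from a schema before compiling. A heavy
// proc_macro does the same work, just inside the compiler's timeline.
use std::{env, fs, path::Path};

fn main() {
    let out_dir = env::var("OUT_DIR").unwrap();
    let dest = Path::new(&out_dir).join("generated.rs");

    // Pretend this was derived from an IDL or schema file.
    fs::write(&dest, "pub const NUM_MESSAGE_TYPES: usize = 42;\n").unwrap();
    println!("cargo:rerun-if-changed=schema.idl");
}
```

The crate then pulls the result in with `include!(concat!(env!("OUT_DIR"), "/generated.rs"));`: the same generate-then-compile pipeline, just visible in the build instead of hidden inside the compiler invocation.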
Rust compilation is also almost instant, if you have a relatively small project.
It's only when you get very large projects that compilation time becomes a problem.
If typescript compilation speed wasn't a problem, then why would Microsoft put resources into rewriting the typescript compiler in Go to make it faster[1]?
rustc can use a few different backends. By my understanding, the LLVM backend is fully supported, the Cranelift backend is either fully supported or nearly so, and there's a GCC backend in the works. In addition, there's a separate project to create an independent Rust frontend as part of GCC.
Even then, there are still some systems that will support C but won't support Rust any time soon: systems with old compilers/compiler forks, or systems with unusual data types that violate Rust's assumptions (Rust assumes 8-bit bytes, IIRC).
Many organizations and environments will not switch themselves to LLVM to hamfist compiled Rust code in. Nor does the fact that LLVM supports something in principle mean that it's installed on the relevant OS distribution.
Using LLVM somewhere in the build doesn't require that you compile everything with LLVM. It generates object files, just like GCC, and you can link together object files compiled with each compiler, as long as they don't use compiler-specific runtime libraries (like the C++ standard library, or a polyfill compiler-rt library).
If you're developing, you generally have control over the development environment (+/-) and you can install things. Plus, that already reduces the audience: the set of people with oddball hardware (as someone here put it) intersected with the set of people with locked-down development environments.
Let alone the fact that, conceptually, people with locked-down environments are precisely those who would really want the extra safety offered by Rust.
I know that real life is messy but if we don't keep pressing, nothing improves.
> If you're developing, you generally have control over the development environment
If you're developing something individually, then sure, you have a lot of control. When you're developing as part of an organization or a company, you typically don't. And if there's non-COTS hardware involved, you are even more likely not to have control.
One problem of Rust vs C that I rarely see mentioned is that Rust code is hard to work with from other languages. If someone writes a popular C library, you can relatively easily use it in your Java, D, Nim, Rust, Go, Python, Julia, C#, Ruby, ... program.
If someone writes a popular Rust library, it's only going to be useful to Rust projects.
With Rust I don’t even know if it’s possible to make borrow checking work across a language boundary. And Rust doesn't have a stable ABI, so even if you make a DLL, it’ll only work if compiled with the exact same compiler version. *sigh*
The reason you rarely hear that mentioned is because basically every programming language is hard to work with from other languages. The usual solution is to "just" expose a C API/ABI.
Of course, C API/ABIs aren't able to make full use of Rust features, but again, that also applies to basically every other language.
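The usual pattern, sketched below; the function name and error convention are invented, while `#[no_mangle]` and `extern "C"` are the real mechanisms (and tools like cbindgen can generate the matching C header):

```rust
// A minimal sketch of the usual escape hatch: exporting a C ABI from Rust
// so other languages can call it. Build as a cdylib or staticlib.

#[no_mangle]
pub extern "C" fn add_checked(a: u32, b: u32, out: *mut u32) -> i32 {
    // No borrow checker across the boundary: we fall back to the same
    // defensive conventions a C API would use.
    if out.is_null() {
        return -1;
    }
    match a.checked_add(b) {
        Some(sum) => {
            // SAFETY: the caller promised `out` points at a writable u32.
            unsafe { *out = sum };
            0
        }
        None => -1, // overflow
    }
}
```

Anything richer (ownership, lifetimes, generics, traits) gets flattened away at that boundary, which is exactly the limitation above.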
I don't think they should be defined in terms of any programming language. They should be defined in terms of the architecture - bits, bytes, words and instructions, along with semantics of use. Like we do with networking protocols and file formats.
> Every system under the Sun has a C compiler... My guess is that C will be around long after people will have moved on from Rust to another newfangled alternative.
This is still the worst possible argument for C. If C persists in places no one uses, then who cares?
> it always has been and always will be available everywhere
"Always has been" is pushing it. Half of C's history is written with outdated, proprietary compilers that died alongside their architecture. It's easy to take modern tech like LLVM for granted.
This might actually be a solved problem soonish, LLMs are unreasonably effective at writing compilers and C is designed to be easy to write a compiler for, which also helps. I don’t know if anyone tried, but there’s been related work posted here on HN recently: a revived Java compiler and the N64 decompilation project. Mashed together you can almost expect to be able to generate C compilers for obscure architectures on demand given just some docs and binary firmware dumps.
Which is neat! But if you care about performance, you're probably using Clang to compile for your esoteric architecture. And if your esoarch already supports LLVM... might as well write Rust instead.
I'm curious as to which other metric you'd use to define success. If it's actual usage, C still wins. In number of new lines pushed into production each year, or new projects started, C is still high up the list, would be my guess.
Languages like Rust are probably more successful in terms of age vs. adoption speed. There's just a good number of platforms which aren't even supported, and where you have no other choice than C. Rust can't target most platforms, and it compiles on even fewer. Unless Linux wants to drop support for a good number of platforms, Rust adoption can only go so far.
> I'm curious as to which other metric you'd use to define successful? If it actual usage, C still wins.
Wait, wait, you can use C everywhere, but if absolute lines of code is the metric, people seem to move away from C as quickly as possible (read: don't use C) to higher level languages that clearly aren't as portable (Ruby, Python, Java, C#, C++, etc.)?
> Rust can't target most platforms, and it compiles on even less. Unless Linux want's to drop support for a good number of platforms, Rust adoption can only go so far.
Re: embedded, this is just terribly short-sighted. Right now, yes, C obviously has the broadest possible support. But non-experimental Linux support starts today, GCC is still working on Rust support, and rustc_codegen_gcc is making inroads. And one can still build new LLVM targets. I'd be interested to hear which non-legacy platforms actually aren't supported right now.
But the real issue is that C has no adoption curve. It only has ground to lose. Meanwhile Rust is, relatively, a delight to use, and offers important benefits. As I said elsewhere: "Unless I am wrong, there is still lots of COBOL we wish wasn't COBOL? And that reality doesn't sound like a celebration of COBOL?" The future looks bright for Rust, if you believe as I do, we are beginning to demand much more from our embedded parts.
Now, will Rust ever dominate embedded? Will it be used absolutely everywhere? Perhaps not. Does that matter? Not AFAIAC. Winning by my lights is -- using this nice thing rather than this not so nice thing. It's doing interesting things where one couldn't previously. It's a few more delighted developers.
I'd much rather people were delighted/excited about a tech/a language/a business/a vibrant community, than, whatever it is, simply persisted. Because "simply persisted", like your comment above, is only a measure of where we are right now.
Another way to think of this is -- Perl is also everywhere. Shipped with every Linux distro and MacOS, probably to be found somewhere deep inside Windows too. Now, is Perl, right now, a healthy community or does it simply persist? Is its persistence a source of happiness or dread?
Why do you think people are delighted/excited about Rust? Some people are, but I'm sure plenty of people are about C and Perl as well. Some people isn't most people, and certainly not all people.
> Some people isn’t most people and certainly not all people.
I never said Rust was the universal language, that is pleasing to all people. I was very careful to frame Rust's success in existential, not absolute or universal, terms:
>> Winning by my lights is -- using this nice thing rather than this not so nice thing. It's doing interesting things where one couldn't previously. It's a few more delighted developers.
I am not saying these technologies (C and Perl) should go away. I am saying -- I am very pleased there is no longer a monoculture. For me, winning is having alternatives to C in embedded and kernel development, and Perl for scripting, especially as these communities, right now, seem less vibrant, and less adaptable to change.
>> Yes, you can use it everywhere. Is that what you consider a success?
> ... yes?
Then perhaps you and I define success differently? As I've said in other comments above, C persisting, or really standing still, is not what I would think of as a winning and vibrant community. And moving embedded and kernel development from what was previously a monoculture to something more diverse could be a big win for developers. My hope is that competition from Rust makes using C better/easier/more productive, but I have my doubts as to whether it will move C to make changes.
Sometimes it's nice to know that something will run and compile reliably far into the future. That's a nice thing to have, and wide support for the language and its relatively unchanging nature make it reliable.
> Sometimes it's nice to know that something will run and compile reliably far into the future.
I'm not sure why you think this is a problem that Rust has? Perhaps you mean something different, but the Rust project compiles the available code on crates.io upon the release of a new version.[0] C compilers may imagine their new compiler code doesn't break old software, but Rust takes that extra step, so we know it won't.
Now, the Rust kernel code is currently using beta and nightly features which are on track for inclusion in the stable Rust compiler. So, yes, right now compilation is tied to a specific kernel version, and may need to be updated if a feature changes. However, any C compiler used to compile the Linux kernel uses non-standard GCC extensions only recently adopted by clang. Imagine if the C standards committee chose to change the syntax/semantics of a non-standard extension. Do you not imagine the non-standard extension would also be similarly deprecated?
The issue seems to be that Rust is telling you what is non-standard, and you're yelling "Look, it's non-standard!". But consider that the kernel in practice uses lots of non-standard features, and should the C standard adopt this non-standard behavior in some altered form, that would likely mean having to change lots of code.
You almost certainly have a bunch of devices containing a microcontroller that runs an architecture not targeted by LLVM. The embedded space is still incredibly fragmented.
That said, only a handful of those architectures are actually so weird that they would be hard to write a LLVM backend for. I understand why the project hasn’t established a stable backend plugin API, but it would help support these ancillary architectures that nobody wants to have to actively maintain as part of the LLVM project. Right now, you usually need to use a fork of the whole LLVM project when using experimental backends.
> You almost certainly have a bunch of devices containing a microcontroller that runs an architecture not targeted by LLVM.
This is exactly what I'm saying. Do you think HW drives SW or the other way around? When Rust is in the Linux kernel, my guess is it will be very hard to find new HW worth using, which doesn't have some Rust support.
In embedded, HW drives SW much more than the other way around. Most microcontrollers are not capable of running a Linux kernel as it is, even with NoMMU. The ones that are capable of this are overwhelmingly ARM or RISC-V these days. There’s not a long list of architectures supported by the modern Linux kernel but not LLVM/Rust.
Almost none of them, but that doesn’t make them “places no one uses”, considering everyone who can see these words is looking at a device with a bunch of C firmware running on microcontrollers all over the place.
You're putting words in my mouth there - that's not my point.
My point is that if they're not running linux, and the ones that are aren't running a modern kernel, then we shouldn't hold back development of modern platforms for ones that refuse to keep up.