> There is just so much unmodern and unsafe c++ out there. Mixing modern c++ into older codebases leaves uncertain assumptions everywhere and sometimes awkward interop with the old c++
Your complaint doesn’t look valid to me: the feature in the article is implemented with compile-time macros that work with old and new code without changes. See https://libcxx.llvm.org/Hardening.html#notes-for-users
“We build free software so we can tell you to go fuck yourself.”
Sounds like a great thing compared to the sanitized corpo bullshit of today. Microsoft bought its way into OSS with GitHub, and now every project has a bland CoC.
It’s pathetic. Even the GitHub monkeys know deep down that this is wrong.
It has potential to be the new thing, since several details synergize to make this incident more powerful:
1. Previous claims that Rust code often just works after compiling.
2. Previous claims that low-level error-handling idioms like matching, using Result, etc., improve code reliability.
3. Previous claims that using unwrap in example code is ok for brevity. Also, Rust developers would know not to use it in production code.
4. The fact that significant portions of the internet were taken down because of a production unwrap by a big, mature player and one of Rust's early adopters.
Sure, Rust is not the problem here, but rather Clownflare being too big and not having their SRE processes fully up to par for their size. Perhaps they are simply too big to operate at the needed level of reliability.
However, Rust anti-fans can easily ignore the above and simply press the issue and debate the minutiae of error handling, human reliability, etc. It’s surprisingly effective and might even catch the ear of management.
However, this article is overall not at the level expected of Rust anti-fans in 2025. I commend the author for trying, but they need to improve in several areas: providing iron-clad real-world examples, demonstrating the required level of experience, focusing more on pain points like dependencies and the potential for supply-chain attacks, addressing reskilling issues and internal corporate politics, and so on.
There was a blog post by a veteran Rust game developer a while back that single-handedly destroyed the enthusiasm for Rust in gaming. That is the gold standard of Rust criticism for me.
1. But the code did just work after compiling. The code said "This can never be an Err, and if I'm wrong, you are allowed to panic." And it did just that.
2. They do. If you use them, which they didn't.
3. It is. Let's not discard personal responsibility.
4. The error would've happened in any language, Rust debatably made it easier to find though.
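To make point 2 concrete, here is a minimal sketch of the matching idiom; `parse_port` and the 8080 fallback are made up for illustration:

```rust
// Illustrative only: `parse_port` and the 8080 default are invented here.
// Matching on the Result forces the error path to be handled explicitly,
// where an unwrap() would just panic on bad input.
fn parse_port(s: &str) -> u16 {
    match s.parse::<u16>() {
        Ok(port) => port,
        // The compiler won't let us forget this arm.
        Err(_) => 8080,
    }
}

fn main() {
    println!("{}", parse_port("443"));     // valid input
    println!("{}", parse_port("garbage")); // falls back to the default
}
```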
I don't write Rust code myself because I simply don't write any code that requires this kind of reliability, and thus I haven't expended the effort to learn it properly. But if I were to start such a project, I would still go for Rust and learn it properly. I also don't have a "favorite" language; I just pick whichever seems most appropriate for the project. Any decent programmer should be able to pick up any non-esoteric language to the point of adequacy in a few weeks anyway.
.unwrap() was a huge mistake. It should be banned from release builds outright. It's the equivalent of dereferencing a null pointer in C or the NullPointerException in Java. All .unwrap()s should be replaced by .expect("uniquely identifying message") immediately, and preferably the compiler should check that the expect messages are unique within a crate and issue a warning otherwise. Debug builds should by default give .unwrap() and .expect() a tiny chance, like 0.1%, to trigger anyway, even when the Option is Some (opt out via configuration).
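A minimal sketch of what such an injecting unwrap could look like. Everything here is hypothetical: the name `fallible_unwrap`, the crude clock-based randomness, and the ~0.1% rate are illustrative; a real version would use a proper RNG and a build-time opt-out.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Roughly 1 in 1000 clock readings trigger an injected failure.
fn should_inject(nanos: u32) -> bool {
    nanos % 1000 == 0
}

// Hypothetical replacement for unwrap(): in debug builds it occasionally
// panics even when the value is present, to exercise the panic path.
fn fallible_unwrap<T>(opt: Option<T>, msg: &str) -> T {
    if cfg!(debug_assertions) {
        let nanos = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("clock before UNIX epoch")
            .subsec_nanos();
        if should_inject(nanos) {
            panic!("injected failure: {msg}");
        }
    }
    opt.unwrap_or_else(|| panic!("{msg}"))
}

fn main() {
    let port = fallible_unwrap(Some(8080u16), "port must be configured");
    println!("{port}");
}
```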
To my knowledge, the only difference between `unwrap()` and `expect()` is that the latter takes a custom error message, whereas the former generates a generic error message. In both cases the resulting error message includes the filename and line number of the panic. Both can also generate stack traces if you set `RUST_BACKTRACE=1`, unless this was explicitly disabled at compile time.
So if you want to ban `unwrap()`, then you should probably also ban `expect()`, and make sure to handle all possible cases of Err/None instead.
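To illustrate: the two really do behave identically apart from the message (the "port not configured" text is just an example):

```rust
// Demonstrates that unwrap() and expect() panic the same way; only the
// message differs. Run with RUST_BACKTRACE=1 to also see backtraces.
fn main() {
    let missing: Option<u32> = None;

    // Generic message: "called `Option::unwrap()` on a `None` value"
    let r1 = std::panic::catch_unwind(|| missing.unwrap());
    assert!(r1.is_err());

    // Same panic, but with the custom message "port not configured"
    let r2 = std::panic::catch_unwind(|| missing.expect("port not configured"));
    assert!(r2.is_err());

    println!("both panicked; only the message differs");
}
```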
> Debug builds should by default give .unwrap() and .expect() a tiny chance, like 0.1%, to trigger anyway, even when the Option is Some (opt out via configuration).
I'm trying to understand what you're proposing. Are you saying that normal debug builds should have artificial failures in them, or that there should be a special mode that tests these artificial failures?
Because some of these failures could surface as user-visible errors, which could be really confusing when testing a debug build.
I peruse Android system code at work and their C++ code base is not designed for safety. It’s just typical C++ code as any large company would write it.
And for a large juicy target like Android, that won’t be good enough to stay ahead of the attackers long term.
Of course, tools like Fil-C or hardware-based security might make Rust vs. C or C++ moot.
Edit: your comment makes a good point. Shame that trigger-happy (c)rustaceans are downvoting everything in sight which is not praising this PR piece disguised as a technical blogpost.
While crashing is better than exploitable behavior, catching bugs at compile time is even better. Neither hardware-based strategies nor Fil-C actually finds bugs at compile time. Also, the conversation with respect to security isn't about migrating old code, but about what to use for new code that you write.
I will note that developers also feel more productive in Rust. That's why they migrate existing things to it even when it may not be beneficial for security.
If the C++ code I worked on looked like that[1] and was actually C with classes, then I’d be switching to Rust too.
For Google and Microsoft it probably makes sense to rewrite Windows and Android in Rust. They have huge amounts of legacy code and everybody’s attacking them.
It doesn’t follow that anyone else, let alone the majority, has to follow suit. But that’s predictably exactly what veteran rustafarians are arguing in many comments in this thread.
[1] Pointers getting passed all over the place, direct indexing into arrays or pointers, C-style casts, static casts.
That (PVOID)(UINT_PTR) cast with offsetting and then copying is ridiculous.
The pain will always remain when refactoring or changing code, with modifications cascading in the function and type definitions.
If a language is hard to write at first, it’s always hard to write. The saving grace of C++ is that one needn’t use the overcomplicated functional aspects, template metaprogramming, etc.
Through some amazing circumstances, all of the above (or their equivalents) + async is exactly what idiomatic Rust code has become.
Inside Rust there is a not so ugly language that is struggling to come to light and it is being blocked at every step.
> If a language is hard to write at first, it’s always hard to write.
That seems obviously false. Most fancy programming languages are difficult to write at first, C++ included. But they all get easier over time.
Rust got way easier to write over time for me. I'm soooo much more productive in it now compared to when I started. Does C++ not get easier to write over time too?
A significant amount of Rust’s “new” features over the last years have been “yeah if you tried to use x and y together, that didn’t work, but now it does.” From an end user perspective, a lot has been made more straightforward over time.
All of that stuff doesn’t matter, though. If you look closely enough, everything is different from everything else, but in real life we only take significant differences into consideration; otherwise we’d go nuts.
Memory bugs have a high risk of exploitability. That’s it; the threat model will tell the team what they need to focus on.
Nothing in software or engineering is absolute. Some projects have decided they need compile-time guarantees about memory safety, others are experimenting with it, many still use C or C++ and the Earth keeps spinning.
This whole memory-bugs-are-magical thinking just comes from the Rust community and is not an axiomatic truth.
It’s also trivial to discount, since the classical evaluation of bugs is based on actual impact, not some nebulous notions of scope or what-may-happen.
In practice, the program will crash most of the time. Maybe it will corrupt or erase some files. Maybe it will crash the Windows kernel and cause $10 billion in damages; just like a Rust panic would, by the way.
Sounds like a straw man. I know developers who are good enough to achieve it on their own, but they use the tooling anyway, because one can’t always write perfect code: feature requests might come in too fast, team members have different skill levels, dev turnover happens, etc.
Furthermore, memory bugs can still be treated by teams as just another bug, so they might not get prioritised.
The only significant difference is that there’s lots of criminal energy targeting them, otherwise nobody would care much.
Unfortunately, all too often it's not enough. Then again, one often has UX designers for that, but they are frequently off building flying castles in Figma.
Visual design is only one part of UX; interaction design and information architecture are equally important components.
At my last two jobs, when I did front-end work I had to coach designers through UX on a regular basis, because the designers did as much for the marketing department as they did for the development team.
Sadly, UX as a discipline doesn't get much love from most companies.
Most of UX design for the past ~15 years has been finding new and innovative ways to trick users, lie without really lying, and annoy your users just enough to extract whatever you need to extract, but not so much that they actually leave.
If you want to create good UX, I would look at whatever the big dogs are doing (Amazon, Meta, Google, et al.) and not do that.