I don't think this is silly FUD. The article describes a scenario where the low-level abstraction itself was buggy in a subtle way, so the comparison to "unsafe" Rust seems entirely fair to me. (edited for typos)
With Rust you could always unsafely do whatever went wrong in somebody's C or Zig or whatever, but the question is whether you would. Rust's technical design reinforces a culture where the answer is usually "No".
I don't find the claim that the weird low-level mmap tricks here are perf-critical at all persuasive. The page recycling makes sense - I can see why that helps performance - but the bare-metal mmap calls smell to me like somebody wanted to learn about mmap and this was their excuse. Which is fine - I need to be clear about that - but it's not actually crucial to end users being happy with this software.
I think we can agree that Mitchell knows what he’s doing and isn’t playing around with mmap just because. It’s probably quite important to ensure a low memory footprint. But mmap in Rust is not extra risky in some weird mystical way. It’s just a normal FFI function to get a pointer back, and you can trivially build safe abstractions around it to ensure the lifetime of a slice doesn’t exceed the lifetime of the underlying map. It’s Rust 101, and there’s nothing weird here that would make the unsafe bits extra dangerous (in general unsafe Rust can be difficult to get right with certain constructs, but that doesn’t apply here).
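To make that concrete, here is a minimal sketch of the kind of safe abstraction I mean, built on the libc crate (an assumption for illustration; in practice you'd probably reach for an existing crate like memmap2, but the shape is the same):

    use std::{io, ptr, slice};

    /// Owns an anonymous mapping; all the unsafety lives in here.
    struct Mmap {
        ptr: *mut libc::c_void,
        len: usize,
    }

    impl Mmap {
        fn anonymous(len: usize) -> io::Result<Mmap> {
            // SAFETY: null placement hint; return value checked below.
            let ptr = unsafe {
                libc::mmap(
                    ptr::null_mut(),
                    len,
                    libc::PROT_READ | libc::PROT_WRITE,
                    libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
                    -1,
                    0,
                )
            };
            if ptr == libc::MAP_FAILED {
                return Err(io::Error::last_os_error());
            }
            Ok(Mmap { ptr, len })
        }

        /// The returned slice borrows from &self, so the borrow checker
        /// guarantees it cannot outlive the mapping.
        fn as_slice(&self) -> &[u8] {
            unsafe { slice::from_raw_parts(self.ptr as *const u8, self.len) }
        }
    }

    impl Drop for Mmap {
        fn drop(&mut self) {
            // SAFETY: ptr/len came from a successful mmap call above.
            unsafe { libc::munmap(self.ptr, self.len); }
        }
    }

Callers only ever see safe functions; try to hold a slice past the point where the map is dropped and it simply won't compile.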
I actually don't think I agree about mmap. Reading around, it seems as though Mitchell had clever ideas for abusing mmap and those didn't work out. Now he's got mmap for the pages and it works, so why replace it - but that does not mean you need mmap to deliver this performance, and in fact I'd be extremely surprised if that were true, as somebody who spent about a decade of my life mostly writing close-to-metal database code in C using mmap...
If you want a whole lot of bytes and you ask your allocator, do you know what almost any popular general-purpose allocator will do on a vaguely decent modern Unix? Call mmap to get them for you. So at most you're cutting out a few CPU instructions' worth of middleman.
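You can watch this happen for yourself (Linux/glibc assumed) by running a trivial program under strace -e trace=mmap,brk:

    fn main() {
        // glibc's allocator hands anything above MMAP_THRESHOLD (128 KiB
        // by default) straight to mmap rather than growing the heap, so
        // this allocation shows up in strace as an mmap call.
        let big = vec![0u8; 8 * 1024 * 1024];
        println!("{}", big.len());
    }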
Also in C or Zig you do not need to create your own memory management using mmap. Whether this is necessary in this case or not is a different question.
In the end, if the Rust advantage is that "Rust's technical design reinforces a culture" where one tries to avoid this, then this is a rather weak argument. We will see how this turns out in the long run though.
The long run has already spoken. Go look at the reports out of Microsoft and Android. It’s screamingly clear that the Rust philosophy, that most code can be written in safe code with small bits in unsafe, is inherently safer. The defect rate plummets by one or two orders of magnitude, if I recall correctly. C is an absolute failure (since it’s the baseline) and Zig has no similar adoption studies. You could argue it would be similar if you always compile with ReleaseSafe, but then performance will be worse than C or Rust due to all the checks, and it’s unclear how big a hole the places that aren’t dynamically checked leave.
Oh, and of course Rust is inherently slightly faster, because mutable reference aliasing isn’t allowed and that guarantee is automatically annotated everywhere, which allows aggressive compiler optimizations that neither C nor Zig can do automatically and that are risky to do by hand.
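For a concrete (if toy) sketch of what that buys:

    // Because `a: &mut i32` and `b: &i32` are guaranteed not to alias,
    // the compiler may load *b once and fold the two additions into one.
    // The equivalent C (int *a, const int *b) must assume the pointers
    // might overlap and reload *b, unless you hand-annotate `restrict`,
    // which nothing checks - that's the risky-by-hand part.
    pub fn add_twice(a: &mut i32, b: &i32) {
        *a += *b;
        *a += *b;
    }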
More FUD and guilt by association. Microsoft and Google are also major contributors to the C and C++ standards bodies. Microsoft also has C# and Google has Kotlin. I think claiming they control Rust is weak given the community organization structure within the project, and claiming the studies are inherently biased because they provide some funding is exceedingly weak.
IMHO the onus is on you to present contrary studies showing Rust's safety profile isn't as good as the studies indicate when compared with C++, or to demonstrate how Zig's safety profile stacks up in real-world complex environments.
We can disagree on opinions, but you can't discard all experimental evidence in favor of no evidence, especially when the safety profile of Rust is backed by solid theoretical models as to why it would be safer.
To that point, AWS and Cloudflare have also adopted the Rust language for all new projects. I think that says something about the recognition that it really is much harder to write trivial memory vulnerabilities.
I don't put too much weight on the self-reporting by Microsoft or Google. I agree though that the strategy of writing safe bits and abstractions is good. What I know not to be true is the idea that similar strategies would not also work in C.
> What I know not to be true is the idea that similar strategies would not also work in C.
Is your argument that developers at MS and Google haven’t been trying to employ these strategies for existing C codebases? It’s a bold position to take and one I’d say is devoid of evidence; all the evidence suggests it’s really hard to reason about ownership in complex systems, and abstractions only help you do so error-free up to a very limited point.
I know for sure that Microsoft does not, because they are not interested in C (and their compiler does not even fully support recent standards), and I assume the same thing about Google. In general, I do not think they write much C in the first place. I also think their use cases and priorities are different from others.
> We will see how this turns out in the long run though.
Rust 1.0 was in 2015. This is the long run. And I disagree that safety culture is a "weak argument". It's foundational: this is where you must start; adding it afterwards is a Herculean task, so it's no surprise that people aren't really trying.
I am not saying that safety culture is irrelevant, not at all. I am saying that if the advantage of Rust is a culture that emphasizes safety (or rather memory safety; if the Rust community cared about safety in general, cargo would not exist in this form), then that is a weak argument.
I don't think there was a lot of Rust in use 10 years ago, so I am not sure how relevant it is that 1.0 was released at that time.
The culture of Rust is pretty uniform, both in terms of convention (lots of good examples to learn from) and automated tooling (e.g. cargo clippy can rewrite many constructs into cleaner versions).
But sure, ultimately any code you see is limited by the talent of the author. However, the safety of that code is not - it’s limited by how many unsafe blocks they wrote, which you can actually grep for.
This is a naive and dangerous view of "unsafe". The safety of surrounding code depends on the unsafe blocks not violating invariants of safe Rust, and the safety of "unsafe" blocks may rely on assumptions about the safe part. Also it relates only to memory safety, so if your code review is to grep for "unsafe" blocks you are doing it wrong anyway.
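A minimal example of that failure mode, using Vec::set_len:

    fn make_vec() -> Vec<u8> {
        let mut v: Vec<u8> = Vec::with_capacity(4);
        // This unsafe block compiles and looks locally plausible, but it
        // violates Vec's invariant that the first `len` elements are
        // initialized - nothing was ever written.
        unsafe { v.set_len(4) };
        v
    }

    fn main() {
        let v = make_vec();       // no `unsafe` in sight here...
        println!("{:?}", &v[..]); // ...yet this read is undefined behavior
    }

Grepping finds the unsafe block, but the code that actually misbehaves is the safe part.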
Of course, culture and technical design are both important for any language, but be specific. Despite the prevalence of tools that improve C's safety, writing C safely generally requires a culture of using those tools and other techniques. For better or worse, Rust's borrow checker is a clear demonstration of where Rust lies on the safety-freedom spectrum.
The low-level abstraction was buggy because they confused types and forgot to free memory, not because of mmap.
That’s completely orthogonal to the question, and less likely in Rust, because you would generally use an enum with Drop implemented for the interior of the variants to guarantee correct release.
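A sketch of that pattern (illustrative names, not Ghostty's actual types), assuming the libc crate again:

    struct MmapRegion {
        ptr: *mut libc::c_void,
        len: usize,
    }

    impl Drop for MmapRegion {
        fn drop(&mut self) {
            // SAFETY: assumes ptr/len came from a successful mmap call.
            // Runs exactly when a Page::Mapped is dropped, never for Heap.
            unsafe { libc::munmap(self.ptr, self.len); }
        }
    }

    enum Page {
        Heap(Vec<u8>),      // released by Vec's own Drop
        Mapped(MmapRegion), // released by MmapRegion's Drop above
    }

The compiler, not the programmer, picks the release path per variant, so confusing the types can't free memory the wrong way.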
And mmap is no more difficult to call in Rust, nor magically more unsafe - that’s the FUD. The vast majority of Ghostty wouldn’t even need unsafe, meaning the vast majority of the code gets optimized more because the no-aliasing guarantee is automatic everywhere. That’s why the argument that “Zig is safer than unsafe Rust” is disingenuous about the performance or safety of the overall program.