Half the comments here are about the vtuber herself. Who cares; it's been discussed before. Imagine if half the thread were debating what gender she is. What I'm interested in is the claims here: https://asahilinux.org/2022/11/tales-of-the-m1-gpu/#rust-is-.... (what do you call a claim when it comes with a proof?).
The resident C/C++ experts here would have you believe that the same is possible in C/C++. Is that true?
In C? No, not unless you write your own scaffolding to do it.
In C++? Maybe, but you’d need to stay on top of using thread-safe structures and smart pointers.
What Rust does is flip this. The default is the safe path. So instead of risking forgetting smart pointers and thread safe containers, the compiler keeps you honest.
So you’re not spending time chasing oddities because you missed a variable initialisation, hit a race condition, or triggered some kind of use-after-free.
A lot of people say that this slows you down and that a good programmer doesn’t need it, but in my experience even the best programmers forget. At least for me, I spend more time trying to reason about C++ code than Rust code, because I can trust my Rust code more.
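To make the safe-by-default point concrete, here is a minimal sketch (the function name is made up): the only way any thread can touch the shared counter is through the `Mutex`, so forgetting the lock is a compile error rather than a latent data race.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `threads` workers that each increment a shared counter.
// The counter is only reachable through the Mutex, so unsynchronized
// access simply does not compile.
fn parallel_count(threads: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            *counter.lock().unwrap() += 1;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}

fn main() {
    assert_eq!(parallel_count(8), 8);
}
```

In C++ the equivalent `std::mutex` discipline is a convention you have to remember; here it is enforced by the type system.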
Put another way, Rust reduces how much of the codebase I need to consider at any given time down to just the most local scope. I work in many heavy graphics C and C++ libraries, and have never had that level of comfort or mental locality.
For me it isn't even that it catches these problems when I forget. It is that I can stop worrying about these problems when writing the vast majority of code. I just take references and use variables to get the business logic implemented without the need to worry about lifetimes the entire time. Then once the business logic is done I switch to dealing with compiler errors and fixing these problems that I was ignoring the first time around.
When writing C and C++ I feel like I need to spend half of my brainpower tracking lifetimes for every line of code I touch. If I touch a single line in a function, I first need to read and understand the relevant lifetimes in that function. Even if I don't make any mistakes, doing this consumes a lot of time and mental energy. With Rust I can generally just change the relevant line and the compiler will tell me what other parts of the function need to be updated. It is a huge mental relief and time saver.
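A small illustration of what the compiler tracks for you, using the classic lifetime-annotated function: the signature itself says the returned reference lives only as long as both inputs, so it is the compiler, not the reader, doing the bookkeeping.

```rust
// The 'a annotation encodes the rule: the result borrows from the inputs.
// If a caller tried to use the returned reference after either input was
// dropped, the borrow checker would reject the program at compile time.
fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

fn main() {
    let s1 = String::from("kernel");
    let s2 = String::from("gpu");
    assert_eq!(longest(&s1, &s2), "kernel");
}
```

In C or C++ the same rule exists, but it lives in a comment or in your head rather than in the signature.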
I agree that Rust is the better language because it gives you the safe tools by default.
Smart pointers are no panacea for memory safety in C++ though: even if you use them fastidiously, avoiding raw pointer access, iterator invalidation or OOB access will come for you. The minute you allocate and have to resize, you're exposed.
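For contrast, here is roughly what the resize case looks like on the Rust side (a sketch; `grow` is just an illustrative name): holding a reference into the vector across a `push` does not compile, so the reallocation-dangle case is ruled out statically rather than by vigilance.

```rust
// In C++, `int* p = &v[0]; v.push_back(4);` can leave `p` dangling after a
// reallocation. In Rust, any borrow into `v` must end before the push.
fn grow(mut v: Vec<i32>) -> Vec<i32> {
    let first_before = v[0]; // copy the value out; no borrow outlives this line
    v.push(4); // may reallocate; no outstanding borrows, so this is fine
    assert_eq!(v[0], first_before);
    v
}

fn main() {
    assert_eq!(grow(vec![1, 2, 3]), vec![1, 2, 3, 4]);
}
```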
Yeah, that’s definitely true. Any C++ API that I did not write, or did not analyze at my best, is a shotgun waiting to go off. Hell, even then, that assumes I know enough to catch everything. I’ve recently been doing security analysis of some very popular repos, and C++ is terrifying at times.
Rust isn’t perfect but it gives me so much more trust in everything I do.
An additional advantage of Rust is the extensive macro system. The ability to generate a bunch of versioned structures out of a common description, all with their own boilerplate and validation code, is invaluable for this kind of work. Some of it can be done in C++ with templates as well, but the ergonomics are on a different level.
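A toy sketch of that pattern with a declarative macro: one description stamps out several versioned structs, each with its own (here, placeholder) validation method. All the names and fields below are made up for illustration.

```rust
// Generate a family of versioned parameter structs from one description.
macro_rules! versioned_params {
    ($($name:ident { $($field:ident : $ty:ty),* $(,)? })*) => {
        $(
            struct $name { $($field: $ty),* }
            impl $name {
                fn validate(&self) -> bool {
                    // Placeholder invariant; real code would check
                    // field-level constraints per version.
                    std::mem::size_of::<Self>() > 0
                }
            }
        )*
    };
}

versioned_params! {
    InitParamsV1 { flags: u32 }
    InitParamsV2 { flags: u32, vm_base: u64 }
}

fn main() {
    let v1 = InitParamsV1 { flags: 0 };
    let v2 = InitParamsV2 { flags: 1, vm_base: 0x1000 };
    assert!(v1.validate() && v2.validate());
}
```

The C++ template equivalent is possible, but a macro like this reads much closer to the description you actually want to write.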
> What Rust does is flip this. The default is the safe path. So instead of risking forgetting smart pointers and thread safe containers, the compiler keeps you honest.
For what it’s worth, the same is true of Swift. But since much of the original Rust team was also involved with Swift language development, I guess that’s not too much of a surprise. The “unsafe” API requires deliberate effort to use; no accidents are possible there. Anything unsafe is very verbose and goes through a very narrow window of opportunity.
I have such a love-hate relationship with Swift. It genuinely has some great ergonomics, and I see it as a sister language to Rust.
I just wish it were more cross-platform (I know it technically works on Linux and Windows…but it’s not a great experience) and that it didn’t have so much churn (though they’ve stopped futzing with the core components as much since Swift 4+).
I also wish it were faster. I did an Advent of Code against some friends; I picked Rust, they picked Swift. The Rust code was running circles around their Swift code even when we tried to keep the implementations the same.
Anyway, that’s a rant, but to your point, I feel like Swift could have been as big as Rust, or bigger given its easier use, with many of the same guarantees. I just wish the early years had been more measured.
>The Rust code was running circles around their Swift ones even when we tried to keep implementations the same.
I've done Advent of Code for a few years -- even Javascript implementations, if using a good (optimal) algorithm, are highly performant, so I'm suspicious of the claim. In most AoC problems, if your code is observably slower in one language than another, it's the fault of the algorithm, regardless of language. But perhaps you are referring to well-profiled differences, even if there are no observable differences.
That said, in projects other than AoC I've compared Swift to C++, and it's hard to deny that a low-level language like C++ is faster than Swift. But Swift itself is certainly fast compared to most dynamically typed or interpreted languages like Python, JS, and Ruby, which are vastly slower.
Swift is fast but it’s both easy to accidentally write slow code and there is a not-insignificant amount of overhead to the reference counting and constant dynamic table lookups.
When I say Rust ran circles around it, I mean the difference was not noticeable unless you timed it, and it was the difference of 200 ms vs 400 ms, or 3 seconds vs 5 seconds, so nothing to write home about necessarily.
This isn't the first time I've wanted to better understand the performance difference today between these approaches: Rust without a garbage collector, Swift's ARC with its reference counting, and garbage-collected languages such as JavaScript.
I know JavaScript has an unfair advantage here, since the competition between V8 and the other JavaScript engines has been huge over the years, and garbage collection in JS is not often a problem. I see more people struggling with the JVM GC and its spikes in resource usage.
I've also heard that the Erlang VM (whether running Elixir or Erlang itself) implements GC at a different level: not globally, but per lightweight process, in a more granular way.
Is there a good resource comparing the current state of performance between those languages or approaches?
Just like UNIX kernel code is a strange subset of C.
I always find it strange how "kernel C" counts as C, even though ISO C would bork in kernel space, while a similar C++ subset is pointed out as not being C++.
I'd also call kernel C strange. Mostly justified, but there's no particular reason the memory allocation call with the same behavior as malloc() in IOKit isn't named malloc().
The history of Ada kept it from getting widespread attention. Sometimes it’s just about the right time and place.
Plus, there’s something to be said for cargo being a killer feature for Rust: easy build configuration, easy package management, and access to a large ecosystem of libraries.
Between that and the book: as steep a learning curve as the language itself has, the actual onboarding process is way more accessible than for many other compiled languages.
Sure, but let's not pretend it is the very first time anything better than C and C++ came to be, and Ada is only one possible example.
Rust's success story is bringing Cyclone's type system into the mainstream, and in such a way that other languages (even those with automatically managed runtimes) started looking at affine and linear type systems for low-level performance optimizations.
However, for explicit type conversions, bounds checking in strings and arrays, modules, co-routines, assignment before use,... there were plenty of alternatives, which for one reason or another didn't take off, mostly for reasons that weren't really technical.
I am curious to know whether the trait system helps a lot with mapping the underlying kernel features/quirks. Which language is better at creating abstractions that map closely to how the kernel works?
Hmm, I’m no kernel writer, but I don’t think traits necessarily offer anything in Rust over C++, where I’d use polymorphism with abstract interfaces.
When I’ve written lower-level components, the things that have really been godsends outside the safety features are things like enums (much more powerful than C/C++ unions/enums and much more ergonomic than std::variant).
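For example (with a made-up event type), a Rust enum variant can carry its own payload, and `match` forces you to handle every case, which is exactly what a C enum plus a tagged union tries and often fails to do safely:

```rust
// Each variant carries the data that makes sense for it; there is no way
// to read a Fault's payload out of a Timeout, unlike with a raw C union.
enum GpuEvent {
    Completed { seq: u64 },
    Fault { addr: u64 },
    Timeout,
}

fn describe(ev: &GpuEvent) -> String {
    // `match` must be exhaustive: adding a variant later is a compile
    // error here until it is handled.
    match ev {
        GpuEvent::Completed { seq } => format!("completed #{}", seq),
        GpuEvent::Fault { addr } => format!("fault at {:#x}", addr),
        GpuEvent::Timeout => String::from("timeout"),
    }
}

fn main() {
    assert_eq!(describe(&GpuEvent::Fault { addr: 0x1000 }), "fault at 0x1000");
    assert_eq!(describe(&GpuEvent::Completed { seq: 3 }), "completed #3");
    assert_eq!(describe(&GpuEvent::Timeout), "timeout");
}
```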
And even though I said traits don’t offer too much more than C++, one thing they really do offer is when combined with generics: Rust generics let you state trait requirements more directly than C++ concepts/constraints (though not as flexibly).
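A small example of what that looks like (`label_all` is just an illustrative name): the `T: Display` bound in the signature plays roughly the role a C++ concept would, stating up front what the type must provide.

```rust
use std::fmt::Display;

// The bound is part of the signature: callers see the requirement, and a
// type that doesn't implement Display fails at the call site with an
// error pointing at the bound, not deep inside the body.
fn label_all<T: Display>(items: &[T]) -> Vec<String> {
    items.iter().map(|x| format!("item: {}", x)).collect()
}

fn main() {
    assert_eq!(label_all(&[1, 2]), vec!["item: 1", "item: 2"]);
    assert_eq!(label_all(&["a"]), vec!["item: a"]);
}
```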
I have a lot of experience in C, a lot of experience in C++, and some experience with Rust (I have some projects which use it). My opinion is that it's true, and the other comments are good explanations of why. But I want to point out, in addition to those: There's a reason why Rust was adopted into Linux, while C++ wasn't. Getting C++ to work in the kernel would almost certainly have been way less work than getting Rust to work. But only Rust can give you the strong guarantees which makes you avoid lifetime-, memory- and concurrency-related mistakes.
You can't overstate the amount of personal hatred that Linus and several other Linux maintainers have for C++. I can't really say I blame them - C++ before C++11 was a bit of a nightmare in terms of performance and safety.
I'm not exactly a C or Rust expert, so better to check @dagmx's comment for that, but I know some C++ and have worked with networking enough to know some pitfalls.
Talking of C++, it can be really solid when you work with your own data structures and control the code on both ends. Using templates with something like boost::serialization or protobuf for the first time is like magic. E.g. you can serialize the whole state of your super complex app and restore it on another node easily.
Unfortunately, that's just not the case when you're actually trying to work with someone else's API/ABI that you have no control over. It's even worse when it's a moving target and you need to maintain several different adapters for different client/server versions.
Possible? Definitely. Easier? Probably not, for the most part at least. There are a couple of things C(++) can sometimes be more ergonomic for, and those can be isolated out and used independently.