At what point does a C++ programmer like yourself just switch to something that advertises safety along with C-like performance? At what point does it get so difficult to write robust, “safe” C++ that one switches languages? How necessary is all this safety, even? (Doh! Of course it’s necessary for hardening your software against bugs and making the overall ecosystem safer.)
That "safe (by default)" aspect of a given programming language isn't free. It's similar to the security of a system coming at the expense of its usability. The direction C++ took was to provide performance and usability. A programming language that puts safety first has to be more than a useful tool: it has to be a mentor at best and a bureaucrat at worst. When touting the safe languages, people have to be aware of survivorship bias: countless projects never got the chance to be developed because the burden of doing it properly in those languages (i.e. in a safe manner, in the way "safe" is defined by the language designers) was too high. But then again, there may be cases where this development cost is not that bothersome compared to other aspects (e.g. a corporation willing to hire more developers to compensate for the lost effort, if in exchange it gets to hire fast and cheap: pretty much whoever manages to please the compiler, no in-depth skills screening necessary, the shipped code guaranteed to be safe anyway).
Rust fans rely on the compiler to enforce safety, to the exclusion of other qualities. In C++, we rely on libraries to provide safety and also those other qualities, so library authors get powerful features to enable that.
In practice, people writing modern C++ code do not struggle with memory safety, so it has been a good trade.
People avoiding modern coding style, such as Lakos and his minions (and, apparently, the Fuchsia and Chrome authors), do not get that benefit.
This is one of those "...except when it does" takes. It's always been a bald-faced "no true Scotsman" fallacy.
A: "Modern C++ has no problems with memory safety."
B: "But what about X? X was written in C++ last year. X has problems with memory safety."
A: "X is not modern C++."
That "memory unsafe languages produce vulnerabilities" is an empirical claim. I can give you gobs of data. If you actually think, "In practice, people writing modern C++ code do not struggle with memory safety", then you should produce some data that shows this to be the case.
So, my take is -- okay, prove it. Because, my guess is, your claim is mostly a feeling you have about your code (yes, simply good code vibes) rather than something you can demonstrate to others.
> In practice, people writing modern C++ code do not struggle with memory safety, so it has been a good trade.
Then why do we have so many memory safety bugs in, for example, modern web browsers? I'm relatively sure that the Chrome team is pretty competent, and yet...
"No true Scotsman...". Take string_view for example: perfectly cromulent modern C++. I use it daily to reduce the number of unnecessary copies. It's also very easy to code a use-after-free bug with it.
Only by storing it somewhere. So long as it is only passed down a call chain, no problem.
string_view really is no different from a naked pointer. Modern C++ treats naked pointers with well-deserved suspicion. I never have any trouble because pointers are always strictly evanescent values.
> So you believe then, that the majority of C++ code written at e.g. Microsoft is not 'modern'?
Maybe some new code is modern - but there is a tremendous legacy code base that can’t possibly be (and Microsoft has released enough open source that you can verify this yourself). Also retrofitting isn’t magic - the boundary between old and new will always cause problems.
Note that while MSRC asserts that, Azure Sphere OS and Azure RTOS only support C (not even C++), and the WinUI team likes to boast about how much of the underlying COM components are written in C++ (in UWP, some of them were .NET Native).
Also, the Visual C++ team is quite keen on bringing Rust-like safety (as much as possible) to their static analysis tooling.
Obviously not. But an enduring large fraction are, and remain. And code there is practically never improved or modernized, only patched onto. It is a cultural thing: there is no perceived value in anything but new bullet-item features.
Rust requires you to write correct code. And no, lots of people using modern C++ do struggle with safety, and specifically memory safety. That's why these new languages exist, and exactly why they are gaining users every single day. No matter what someone on hacker news says...
This doesn't mean c or c++ are bad or something. But, yea...
Rust does not, in fact, require you to write correct code. No language can do that. The best a language can do is make it harder to write certain kinds of incorrect code. And, libraries coded in a powerful language can do that, too.
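A concrete way to see this: the following Rust (the function `average` is made up for illustration) passes the borrow checker and is perfectly memory-safe, yet it is wrong. The compiler guarantees freedom from certain bug classes, not correctness.

```rust
// Memory safety is not correctness: this compiles cleanly and runs
// without UB, but it computes the wrong answer.
fn average(xs: &[i64]) -> i64 {
    let sum: i64 = xs.iter().sum();
    sum / (xs.len() as i64 + 1) // logic bug: off-by-one denominator
}

fn main() {
    // The true average of 2 and 4 is 3; the bug silently yields 2,
    // and no language feature flags it.
    assert_eq!(average(&[2, 4]), 2);
}
```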
You are right, it does not guarantee correctness. Libraries can also provide unsafe or incorrect code. C++ lacks a package manager and a community mindset, so I trust C++ libraries less than those of many modern programming languages...
Rust makes it so you can do this yourself for most things. It's not always convenient but it's the best I've seen so far.
I'm interested to see how automatic formal verification of Rust code is going. Super interesting area; I think a team at AWS is working on it, along with a few other groups.
Yes, there are several third-party package managers, which is a pretty good smell that the situation is sticky at best. Out of the three, my favorite is NuGet. Unfortunately, the ecosystem, last I checked anyway, was weaker than, say, Rust's or Go's (at the risk of comparing apples to oranges). Python also has multiple package managers, and the worst part of Python when used appropriately is its package managers and logical layout...
Most modern programming languages have one package manager dedicated to the language, and for good reason imo.
Part of the problem is that the C++ culture is kind of like the C culture: most people would rather write their own packages from scratch than leverage a community's. I don't blame the languages; they were around before git was common.
I feel like I am bashing C++... I don't mean to; it's a great language and it does have good packages (some I prefer over what is available in, say, Rust), but most projects I've seen not using Visual C++ do the 1990s thing and don't use these tools. Maybe it's their age; I never looked hard at "why", being honest.
> survivorship bias, that countless projects never got the chance to be developed due to the too high of a burden of doing it properly
I would contend the cost of doing things “safely” is much higher in C++, since a human being has to mentally do all the work the compiler would do in a safe language.
Right. OP is saying an unsafe program that’s paying the bills and that we can fix later beats being a Rust evangelist on HN because your startup never got off the ground.
Somewhere out there, a startup is writing a browser in pure safe rust, and there won’t be any memory errors in it because they’re never gonna take on any tech debt and it’s never gonna ship.
If you race a skilled Rust team against an equally skilled C++ team to build some big, complicated, fast software, the Rust team would likely ship first, and with fewer bugs.
The C++ team will eventually ship too, but it will take much longer. The software will also be of high quality, with very slightly superior performance, but there will be a couple of memory leaks, maybe a couple of exploits, and possibly a tricky segfault somewhere down the line. Maintaining the C++ team’s software without introducing further issues will require superhuman intelligence, so it won’t happen; there will be increasing issues as the team turns over and the detailed understanding of the code is lost over time.
The Rust team will mostly suffer from frustration about how bad async is, go down the rabbit hole of using it, and then rip it out and replace it with hand-rolled state machines and epoll. Down the line at some point, future programmers will decide this is legacy garbage and replace it all with async again.
There will be no segfault or memory exploits, and a similar number of logic bugs to the c++ team.
I say this having worked on large C++ projects and large Rust projects, and with no particular religious love for Rust other than a grateful appreciation for the compiler.
You really can become productive in safe languages. It does take some practice, but it makes you a better programmer. Someone might turn the tables and claim they are more productive in memory-safe languages because when they hit production they can actually pivot to new projects rather than put out concurrency/parallelism fires.
Much of what can usefully be written in C++ cannot be expressed at all in other languages, Rust included. Thus, for serious users, switching would mean a big step down.
Most of the powerful, unique features are there to help encapsulate semantics in libraries, enabling libraries to adapt automatically to circumstances of use without loss of performance. As a result, libraries in C++ can be more powerful and useful. The more a library is used, the more resources are available to optimize and test it, so libraries can become extremely robust.
The complexity of C++ stems primarily from the fact that it is almost entirely backwards compatible all the way back to the late 1980s, and mostly compatible all the way back to K&R C.
Modern C++, when you stick to the new idioms (RAII, range-based loops, auto lambdas, etc.), is almost as compact and succinct as Python with type annotations. One of the biggest differences in readability is that the standard library is not "batteries included", so people roll a lot of stuff themselves.
> At what point does a C++ programmer like yourself just switch to something that advertises safety but also C like performance?
As a former diehard c++ developer, modern c++ was the reason I started learning Rust.
All the good parts of modern c++, without any of the legacy baggage holding it back, and avoiding all of the things that made modern c++ necessary in the first place.
I still program in c++ professionally but only for legacy programs. For anything new I would avoid it in favour of a modern language.
Ultimately, it doesn't really matter. People will continue to write C++ so we should continue to provide resources that encourage doing so as safely as possible.
Not that commenter, but I'm also a C++ programmer by trade who uses Rust wherever I can. Often, I can't use Rust because of platform support (AIX in my case), or because I need to use libraries or existing C++ code that uses heavy C++ template stuff, making writing a wrapper infeasible.
C++ is not my best friend by far, but I'm happy to gain any convenience and safety where I can get it in the language that I use regularly.
That's right. And they still have the wonky "shared object in an ar archive" setup for shared libraries.
For passers-by: Linux has static libs as .a files, indexed ar archives with object files in them, and ELF shared objects as .so shared libraries. In AIX, .a ar archives may hold object files to be static libs, they may hold shared objects to be treated as a shared library (a lot of shared libraries on AIX are .a files rather than .so), and they may hold both 64 and 32 bit files of each to support multiple architectures. A single .a file can be a 32 bit static library, a 64 bit static library, a 32 bit shared library, and a 64 bit shared library all in one.
It has some convenience, in that all the different types of libraries for a package can be found in one place, but I've always found it annoying, especially when you have very specific goals to accomplish and need some libraries linked statically and others dynamically.
I wanted to add that I have been a fan and in awe of C++ programmers for a long time but would never consider myself strong enough at it to critique it without looking like a fool. I just know what I've read and am curious. So if anything I said offended any C++ devs I never meant it to. The discussion that has been had has been awesome.