Given that I work with C++ professionally (meaning I earn money by writing code in this language), I find it remarkable that I appreciate this book and actually think it's useful for my daily work. 1300+ pages of detailed language features just to somewhat get by and feel sufficiently competent that my code does what I intended it to do.
Looking over my shoulder, there is a shelf of another 30+ books full of C++ (Meyers, Sutter, Alexandrescu, Langr, Feathers, Stroustrup, Lakos, etc.) that I have read cover to cover. However, I am certain that any sufficiently experienced C++ developer can embarrass me by showing me a corner of this language I have never encountered before. It just never stops. The number of things I have to keep in mind while coding a solution to the actual problem at hand can be astounding.
But then, I really like coding in it as there are these rare cases when I’m coming up with an elegant and straight to the point solution. I think I have some kind of Stockholm love-hate relationship with this tool of choice.
C++ is not the language for everyone. But most of Alexandrescu's book is more for people showing off than for people doing useful work.
And much of it is obsolete, supplanted by straightforward core language features whose usefulness was demonstrated by real-world use of Alexandrescu's workarounds.
> most of Alexandrescu's book is more for people showing off than for people doing useful work.
I hate Alexandrescu's most famous book for its mismatch of title ("Modern C++ Design") and content ("Unwieldy C++ template tricks you should probably avoid").
Not a professional here - by my own standard at least - however, I also program in C++ for a living (among other languages).
Definitely Stockholm syndrome in my case. It was the first language in which I was forced to work on a larger codebase. Since then I can't take the complexity of any other object-oriented language seriously. All of them are basically a subset of C++.
At what point does a C++ programmer like yourself just switch to something that advertises safety but also C like performance? At what point does it get so difficult to write robust, “safe” C++ that one switches languages? How necessary is all this safety, even? (Doh! Of course it’s necessary for hardening your software against bugs and making the overall ecosystem safer).
That "safe (by default)" aspect of a given programming language isn't free. It's similar to the security of a system that comes at the expense of its usability. The direction C++ developed itself was to provide performance and usability. A programming language that puts safety first has to be more than a useful tool, it has to be a mentor at best and a bureaucrat at worst. When touting the safe languages people have to be aware of survivorship bias, that countless projects never got the chance to be developed due to the too high of a burden of doing it properly in them (i.e. in a safe manner, in the way "safe" is defined by language designers). But then again, there may be cases when this development cost may not be that bothersome compared to other aspects (e.g. a corporation willing to hire more developers to compensate for the lost effort if thus it gets to hire fast and cheap - pretty much whomever manages to please the compiler, no in-depth skills screening necessary, the shipped code guaranteed to be safe anyway).
Rust fans rely on the compiler to enforce safety, to the exclusion of other qualities. In C++, we rely on libraries to provide safety and also those other qualities, so library authors get powerful features to enable that.
In practice, people writing modern C++ code do not struggle with memory safety, so it has been a good trade.
People avoiding modern coding style, such as Lakos and his minions (and, apparently, the Fuchsia and Chrome authors), do not get that benefit.
This is one of those "...except when it does" takes. It's always been a bald-faced No true Scotsman fallacy.
A: "Modern C++ has no problems with memory safety."
B: "But what about X? X was written in C++ last year. X has problems with memory safety."
A: "X is not modern C++."
That "memory unsafe languages produce vulnerabilities" is an empirical claim. I can give you gobs of data. If you actually think, "In practice, people writing modern C++ code do not struggle with memory safety", then you should produce some data that shows this to be the case.
So, my take is -- okay, prove it. Because, my guess is, your claim is mostly a feeling you have about your code (yes, simply good code vibes) rather than something you can demonstrate to others.
> In practice, people writing modern C++ code do not struggle with memory safety, so it has been a good trade.
Then why do we have so many memory safety bugs in, for example, modern web browsers? I'm relatively sure that the Chrome team is pretty competent, and yet...
"No true Scotsman...". Take string_view for example: perfectly cromulent modern C++. I use it daily to reduce the number of unnecessary copies. It's also very easy to code a use-after-free bug with it.
Only by storing it somewhere. So long as it is only passed down a call chain, no problem.
string_view really is no different from a naked pointer. Modern C++ treats naked pointers with well-deserved suspicion. I never have any trouble because pointers are always strictly evanescent values.
> So you believe then, that the majority of C++ code written at e.g. Microsoft is not 'modern'?
Maybe some new code is modern - but there is a tremendous legacy code base that can’t possibly be (and Microsoft has released enough open source that you can verify this yourself). Also retrofitting isn’t magic - the boundary between old and new will always cause problems.
Note that while MSRC asserts that, Azure Sphere OS and Azure RTOS only support C (not even C++), and the WinUI team likes to boast about how much of the underlying COM components are written in C++ (in UWP, some of them were .NET Native).
Also, the Visual C++ team is quite keen on bringing Rust-like safety (as much as possible) to their static analysis tooling.
Obviously not. But an enduring large fraction are, and remain. And code there is practically never improved or modernized, only patched onto. It is a cultural thing: there is no perceived value in anything but new bullet-item features.
Rust requires you to write correct code. And no, lots of people using modern C++ do struggle with safety, and specifically memory safety. That's why these new languages exist, and exactly why they are gaining users every single day. No matter what someone on hacker news says...
This doesn't mean C or C++ are bad or something. But, yea...
Rust does not, in fact, require you to write correct code. No language can do that. The best a language can do is make it harder to write certain kinds of incorrect code. And, libraries coded in a powerful language can do that, too.
You are right that it does not guarantee correctness. Libraries can also contain unsafe or incorrect code. C++ lacks a package manager and a community mindset, so I trust C++ libraries less than those of many modern programming languages...
Rust makes it so you can do this yourself for most things. It's not always convenient but it's the best I've seen so far.
I'm interested to see how automatic formal verification of rust code is going. Super interesting area, think a team at AWS is working on it, and a few other groups.
Yes, there are several third-party package managers, which is a pretty good smell that the situation is sticky at best. Out of the three, my favorite is NuGet. Unfortunately, the ecosystem, last I checked anyway, was weaker than, say, Rust's or Go's (at the risk of comparing apples to oranges). Python also has multiple package managers, and the worst part of Python, when used appropriately, is its package managers and logical layout...
Most modern programming languages have one package manager dedicated to the language, and for good reason imo.
Part of the problem is that the C++ culture is kind of like the C culture: most people would rather write their own packages from scratch than leverage a community's. I don't blame the languages; they were around before git was common.
I feel like I am bashing C++... I don't mean to; it's a great language and it does have good packages (some I prefer over what is available in, say, Rust), but most projects I've seen not using Visual C++ do the 1990s thing and don't use these tools. Maybe it's their age; I never looked hard at "why", being honest.
> survivorship bias, that countless projects never got the chance to be developed due to the too high of a burden of doing it properly
I would contend the cost of doing things "safely" is much higher in C++, since a human being has to mentally do all the work the compiler would do in a safe language.
Right. OP is saying that an unsafe program that's paying the bills and can be fixed later is better than being a Rust evangelist on HN because your startup never got off the ground.
Somewhere out there, a startup is writing a browser in pure safe rust, and there won’t be any memory errors in it because they’re never gonna take on any tech debt and it’s never gonna ship.
If you race a skilled Rust team against an equally skilled C++ team to build some big, complicated, fast software, the Rust team would likely ship first, and with fewer bugs.
The C++ team will eventually ship too, but it will take much longer. The software will also be of high quality, with very slightly superior performance, but there will be a couple of memory leaks, maybe a couple of exploits, and possibly a tricky segfault somewhere down the line. Maintaining the C++ team's software without introducing further issues will require superhuman intelligence, so it won't happen - there will be increasing issues as the team turns over and the detailed understanding of the code is lost over time.
The Rust team will mostly suffer from frustration about how bad async is, go down the rabbit hole of using it, and then rip it out and replace it with hand-rolled state machines and epoll. Down the line at some point, future programmers will decide this is legacy garbage and replace it all with async again.
There will be no segfault or memory exploits, and a similar number of logic bugs to the c++ team.
I say this having worked on large C++ projects and large Rust projects, and with no particular religious love for Rust other than a grateful appreciation for the compiler.
You really can become productive in safe languages. It does take some practice, but it makes you a better programmer. Someone might turn the tables and claim they are more productive in memory-safe languages because when they hit production they can actually pivot to new projects rather than putting out concurrency/parallelism fires.
Much of what can usefully be written in C++ cannot be expressed at all in other languages, Rust included. Thus, for serious users, switching would mean a big step down.
Most of the powerful, unique features are there to help encapsulate semantics in libraries, enabling libraries to adapt automatically to circumstances of use without loss of performance. As a result, libraries in C++ can be more powerful and useful. The more a library is used, the more resources are available to optimize and test it, so libraries can become extremely robust.
The complexity of C++ lives primarily in the fact that it is almost entirely backwards compatible all the way back to the late 1980s, and mostly compatible all the way back to K&R C.
Modern C++, when you stick to the new idioms (RAII, range-based loops, auto lambdas, etc.), is almost as compact and succinct as Python with type annotations. One of the biggest differences in readability is that the standard library is not "batteries included", so people roll a lot of stuff themselves.
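For example, a rough sketch of that style (purely illustrative):

    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    int main() {
        // RAII: both objects clean themselves up at end of scope.
        auto names  = std::vector<std::string>{"ada", "grace", "barbara"};
        auto answer = std::make_unique<int>(42);

        // Range-based for loop plus a generic (auto) lambda.
        auto shout = [](const auto& s) { return s + "!"; };
        for (const auto& name : names)
            std::cout << shout(name) << '\n';
    }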
> At what point does a C++ programmer like yourself just switch to something that advertises safety but also C like performance?
As a former diehard c++ developer, modern c++ was the reason I started learning Rust.
All the good parts of modern c++, without any of the legacy baggage holding it back, and avoiding all of the things that made modern c++ necessary in the first place.
I still program in c++ professionally but only for legacy programs. For anything new I would avoid it in favour of a modern language.
Ultimately, it doesn't really matter. People will continue to write C++ so we should continue to provide resources that encourage doing so as safely as possible.
Not that commenter, but I'm also a C++ programmer by trade who uses Rust wherever I can. Often, I can't use Rust because of platform support (AIX in my case), or because I need to use libraries or existing C++ code that uses heavy C++ template stuff, making writing a wrapper infeasible.
C++ is not my best friend by far, but I'm happy to gain any convenience and safety where I can get it in the language that I use regularly.
That's right. And they still have the wonky "shared object in an ar archive" setup for shared libraries.
For passers-by: Linux has static libs as .a files, indexed ar archives with object files in them, and ELF shared objects as .so shared libraries. In AIX, .a ar archives may hold object files to be static libs, they may hold shared objects to be treated as a shared library (a lot of shared libraries on AIX are .a files rather than .so), and they may hold both 64 and 32 bit files of each to support multiple architectures. A single .a file can be a 32 bit static library, a 64 bit static library, a 32 bit shared library, and a 64 bit shared library all in one.
It has some convenience, in that all the different types of libraries for a package can be found in one place, but I've always found it annoying, especially when you have very specific goals to accomplish and need some libraries linked statically and others dynamically.
I wanted to add that I have been a fan and in awe of C++ programmers for a long time but would never consider myself strong enough at it to critique it without looking like a fool. I just know what I've read and am curious. So if anything I said offended any C++ devs I never meant it to. The discussion that has been had has been awesome.
An opinionated, thorough demonstration of how to write C++ at scale. I didn't agree on all aspects, though I have to admit it was the first book on C++ that clearly stated _how_ to actually design stuff on both a physical and a logical level, something most other books on C++ won't get into. All that being said, I think it is pretty relevant for medium to large codebases.
The trouble with "Modern C++" is that nothing gets deleted from "Classic C++". Too much legacy code. They're trying to compete with Rust while remaining backwards compatible. That generates so many special cases that you need a 1300 page book. Learn to embrace the pain.
And yet converting code to Rust is hard. It is very tough to convert object-oriented C++ code to Rust. The models are too different.
Yes. Rust needs radically more use not to end up fizzling like myriad languages before it. But its stalwarts are hostile to measures to make it easier for people to pick up, even easy ones that do not alter the language at all. (E.g., allowing a debug build to complete despite borrow complaints.) It takes a special dedication to overcome barriers that most of the people it must attract lack.
It was meant to supplant C, but C users, as a rule, are defined by seeing a thousand other languages go by and not jumping. So Rust mostly picks up discontents from other languages, including Java, Go, and, yes, C++.
The danger in appealing to discontents is that they are likely to jump ship for the next shiny thing.
> But its stalwarts are hostile to measures to make it easier for people to pick up, even easy ones that do not alter the language at all. (E.g., allowing a debug build to complete despite borrow complaints.)
That seems like a rather unusual suggestion to me. Are there commonly used languages/situations where optimization level changes semantics/allowed constructs? The only thing I can think of off the top of my head is stuff that relies on tail calls being optimized in languages that don't guarantee tail call optimization, but I think that's a relatively niche case compared to debug vs. release builds.
I also feel that that suggestion could arguably make the language harder to pick up, since something that worked in debug mode might require extensive refactoring for release mode. Given how much Rust relies on the optimizer to see through its abstractions, that seems problematic.
In addition, I suspect (but don't know for sure) that writing code that works correctly in both modes can be tricky - think of something that uses multiple threads, where "release-mode code" can rely on the lack of mutable aliasing but "debug-mode code" cannot. Either you maintain two versions of the code or you write code that doesn't rely on the borrow checker guarantees, which seems like it would defeat the purpose of having the borrow checker in the first place. This is a gut feeling, though, so I wouldn't be surprised if there were a way around this.
Finally, I'm not sure I agree with the notion that this change "[does] not alter the language at all." It's like the concept of "allowing a debug build to complete despite [type] complaints" in C++. You're going to have to change the language somehow to define the semantics of ignoring type errors, and I'm not sure how this will help newcomers or how easy such a change would be to implement.
Out of curiosity, what other measures did you have in mind? Or can you point me somewhere which discusses similar measures?
It's a little tricky, I think, because lifetimes are a part of the type of things and can determine behavior. But you could possibly do like GHC does with "defer type errors" and panic if you encounter a situation where that might matter (including before an illegal borrow or whatnot). It doesn't really help you solve the bit that was unchecked, but if you're trying to iterate around something earlier in the code it might be useful sometimes to be able to focus on the bit you're working on and put off cleaning up stuff you're not wanting to execute yet anyway.
GHC has a pretty significant head start given that it has lazy semantics and an interpreter.
I don't really see the point of disabling the borrow checker anyway. I assume the intent is that it would make incremental development easier, but as someone who has written probably 100kLOC of Rust, I feel like it would take a ton of work to implement (i.e., making `miri` radically more powerful and generalizing it) and solve... no real problems for me. I get that a beginner is going to see bigger wins there, since they will hit borrow checker errors earlier, but even then: I learned Rust in 2015 when the borrow checker was way stricter, and I don't think I fought very much with it - I learned immediately that I could clone my way out of almost any issue.
IMO a far better investment would be into compiler errors giving better hints, or IDE improvements to show lifetimes, etc, and those would all be global wins instead of something rather niche.
> But you could possibly do like GHC does with "defer type errors" and panic if you encounter a situation where that might matter (including before an illegal borrow or whatnot).
I honestly didn't know that capability existed. Just to make sure I'm reading the right thing, is it the technique described in Equality proofs and deferred type errors: A compiler pearl [0]?
I have no training in programming language theory, so I can't give a good evaluation on how applicable this technique is to Rust. The first thing that jumps to mind is that I think it relies on Haskell's laziness to avoid evaluating the thunk (?) representing the type error if it isn't needed, but that's not to say that a different approach can't be found that doesn't require laziness (assuming I read the paper correctly).
That being said, given Rust provides types which defer borrowing checks to runtime (Cell/RefCell [1]), I think it may be possible to implement something similar to -fdefer-type-errors by e.g., wrapping all references in a Cell/RefCell and inserting appropriate function calls. I would guess the main concern there is fragmenting the Rust ecosystem into code that requires a hypothetical -fdefer-borrowck-errors flag and code that works with "standard" Rust, and tying that to optimization level has its own issues.
It's admittedly a bit different than what I had in mind when I first responded to ncmncm's comment (where I interpreted it as "pretend the borrowck errors didn't exist," as opposed to "use runtime checks as appropriate"), but it does make more sense.
Right, but as far as I know those preserve language semantics and are stricter in debug mode than in release mode. The way I interpreted ncmncm's comment was that the suggested build-even-with-borrowck-errors was the opposite - different language semantics, and something that compiles in debug mode won't compile in release mode.
All compilers allow builds that trigger warnings to complete.
Obviously code that only runs when built "debug" differs from released code, which ... doesn't run. That is the whole point. Before you ship, you will resolve all the borrow nags, and then there will be a release build, and it will run the same.
In the meantime, you probably have more urgent worries to act on first. The only difference is who decides when, you or the compiler. As is, the compiler decides, period.
(I confess it utterly mystifies me how this is such a difficult concept for so many. Inability to comprehend something so simple bodes ill for judgment on more weighty matters.)
> All compilers allow builds that trigger warnings to complete.
That is true, but I don't see how it's relevant. Violations of the borrowing rules in Rust are like type errors in C++ or C. Warnings are simply not part of the picture.
> Obviously code that only runs when built "debug" differs from released code, which ... doesn't run.
I honestly don't know what this wording is intended to convey. The text before the edit made more sense.
But in any case, I don't really see the relevance. I asked about situations where the presence/absence of optimizations changes language semantics. As far as I know, programmers generally expect that the same code will exhibit the same observable behavior regardless of whether it was built in debug or release mode (i.e., the as-if rule, barring UB/IB/etc., of course). What you're suggesting appears to violate that - something that builds in debug mode may not compile at all in release mode, let alone run the same. How is that better for newcomers?
> Before you ship, you will resolve all the borrow nags, and then there will be a release build, and it will run the same.
This feels like it's doing a lot of hand-waving. By way of analogy:
"Before you ship, you resolve all the type nags, and then there will be a release build, and it will run the same"
If you started with a program where everything is a void*, "resolve all the type nags" is potentially a huge amount of work. There's absolutely no guarantee that you will only need to make "minor" changes to get your program working in release mode. Maybe what you wrote is good enough, maybe you need to completely restructure your program. That certainly does not sound very newcomer-friendly to me.
> In the meantime, you probably have more urgent worries to act on first. The only difference is who decides when, you or the compiler. As is, the compiler decides, period.
Couldn't this argument be extended to other aspects of a program that can be statically checked? Why stop at the borrow checker?
You demonstrate that anybody can fail to comprehend anything, howsoever simple, if they dedicate enough effort. What such failure indicates is obscure.
(Is it really so hard to understand how a program that exists and can be executed differs from the entire lack of any such program?)
It does suggest that the prospect of Rust ever being made more accessible to new users is slim to none, as, thus, is displacing C enough to move any needle, more's the pity.
We will need to rely on people graduating to use of modern C++, instead.
> (Is it really so hard to understand how a program that exists and can be executed differs from the entire lack of any such program?)
No, but again, I don't see the relevance here. What I have been trying to say is simple: tying language semantics to the presence or absence of optimizations is going to pose challenges that could easily overshadow any potential advantages a more "relaxed" debug mode might provide for newcomers. Suggesting that the borrow checker be able to be "turned off" is not novel, but the idea of tying it to optimization level is, and that is what I have been trying to discuss, without much success.
If you really imagine that "debug" vs. "release" building is about optimization, I don't know what to suggest for you.
Nobody has proposed "turning off" the borrow checker. This is another red herring always thrown up. Maybe imagining somebody did is your trouble? But I doubt it.
At some risk of triggering another massive incomprehension storm, you might better compare the suggestion to turning const violations in non-release builds into warnings (while imagining that over-aggressive intolerance of const violations was driving away potential users). Yes, fixing your const violations before release might be a chore, even a big chore, but it should be very, very easy to imagine other actual program logic problems that might be tackled entirely independently of them.
The code with the const violations in it might end up deleted in the process, so that being obliged to fix them first would have been a pure waste of effort.
> If you really imagine that "debug" vs. "release" building is about optimization, I don't know what to suggest for you.
I mean, if you really want to be pedantic about it, "debug" vs. "release" means precisely nothing since they can be arbitrarily customized. However, in the context of this comment chain (newcomers to Rust), who are likely to be using cargo in its default configuration, "debug" vs. "release" primarily comes down to optimization settings and the generation of debug information, neither of which change program semantics for the most part.
The only exception is overflow checking, which is disabled in release mode. While I'll freely admit this does weaken my argument, I assert it doesn't suffer from the main flaw that yours does, in that debug mode is less lenient with respect to overflow checks, and turning them off for release mode doesn't cause compilation errors. This is significantly more friendly to newcomers, though I still think the semantic difference between debug and release is not ideal.
> Nobody has proposed "turning off" the borrow checker
Nobody in this thread? Sure. Nobody on the Rust dev team? Also sure. I assert (albeit without evidence, unfortunately, though I suspect you know better than I do) that that sentiment is not that uncommon among newcomers, though.
In any case, I think you're reading my comment too literally (though that's partially my fault for being unclear). The intended meaning was that adding some way to "bypass" borrow checker errors is not new, whether by literally "turning off" the borrow checker or through some other mechanism as in your proposal.
> you might better compare the suggestion to turning const violations in non-release builds into warnings (while imagining that over-aggressive intolerance of const violations was driving away potential users).
Yes, this is one possible comparison, but as I stated in previous comments, an analogous comparison applies to every other statically-checkable property, and I would argue that they all face the same issue: that what you write in "debug" mode may be unusable in "release" mode, which does not seem like a good experience for a newcomer. In addition, there's the question of why the argument should stop at the borrow checker - why not also have a mode which allows a program to compile despite the compiler complaining about type errors (which is a superset of const violations)?
So in the end, yes, I can imagine ignoring one set of problems allowing a programmer to focus on another set of problems. I can also imagine solving one set of problems by ignoring another resulting in an entirely useless solution because the ignored problems are actually/incidentally important, in which case the ability to compile code in an "intermediate" state is cold comfort. Maybe preventing said intermediate code could have clued in the programmer that the solution they were headed towards was unfeasible, as well. Hard to say how this tangle of hypotheticals turns out.
Type violations mean the compiler doesn't know what code to generate. That would not apply to const violations, and likewise would not apply to borrow violations. Where in fact the borrow violation would result in a buggy program, the compiler has already issued its warning.
But all this is moot. It is clear that the overwhelming sentiment is that not even the tiniest change to make Rust easier for new people to adopt should find sensible consideration. Thus, Rust will see too little adoption to forestall fizzling, with the full approval of its most ardent fans. I will be sad to see that happen.
> Type violations mean the compiler doesn't know what code to generate.
I'm not sure this has to be true. While I'm not sure how a compiler might deal with something like std::vector<int> v = some_unordered_map with the existing language semantics, something like working with technically-incompatible pointers might be malleable enough to work with. In such a case, the compiler knows how to generate otherwise-appropriate code, but according to the language rules it can't. Ignoring the type error in this case could mean just generating the code anyways and letting the cards fall as they may, much like what may be done for const or borrow violations.
> It is clear that the overwhelming sentiment is that not even the tiniest change to make Rust easier for new people to adopt should find sensible consideration.
Are there other languages which make similarly-situated concessions to newcomers?
This argument that Rust is complicated is really tiring, and laughable in the face of C++'s complexity.
Rust has the biggest concession to newcomers I have ever seen offered: it will not let you compile code that contains many commonly encountered, show-stopping, confusing-as-sin errors...
The person claiming this is, in the same breath, claiming that a book written by experts is not worth reading. Assuming ncmncm is an expert at C++: how confusing does a language have to become for that to even happen?
C++ is not the language in daily imminent danger of fizzling. Its proven value makes it worth picking up by thousands of people each week, year in and year out. More pick up C++ for professional use in any such week than the total employed coding Rust.
Rust is the language that, if not adopted fast enough, will pass its sell-by date and fizzle, like so many languages before it. Its true fans should be pulling out all the stops to try to make it easier to adopt. Instead, most do their utter best to prevent wider adoption, containing it as much as possible to the ragged few like themselves willing to tolerate any infelicity.
Rust's fate will be chosen by fans' actions, not their beliefs. Those actions are its doom. Judging by those, Rust will end up yet another potentially interesting language that never took off. I will know whose fault that was.
I mean I get what you are saying about how old stuff doesn't have adoption issues. But, rust is a pretty unique paradigm. The safety guarantees it offers aren't found in other languages without garbage collectors.
Rust is boring and technical. Boring in the "it just works and when it doesn't it's not hard to figure out why" way. You didn't say this, but for others passing by, it would be a mistake to view rust as a "shiny thing". People use it for a reason, barring syntax entirely.
In a lot of cases I would consider a staged rewrite. I.e., I would turn what I could into dylibs, going either from Rust to C++ or C++ to Rust. Assuming the code has structure and isn't infinite singletons and free-standing functions, this can make more or less sense. I'd translate the parts with the most likelihood of memory safety or thread safety issues first, so the transition could be part of the release cycle.
It would still be a lot of work, but you might see benefits at intermediate releases this way.
They are? That's weird. Rust and C++ have tons in common and Rust takes tons of inspiration from C++, lots of Rust developers (on the compiler/ early Mozilla adopters) were C++ developers before. C++ still comes up often, like in discussions on constexpr vs const fn.
I've also never seen anyone say that Rust is closer to C than C++, that also seems weird.
The argument that Rust is closer to C than C++ is usually not how it is raised in most discussion around complexity and similarity. Also, to be clear, the discussion is usually centered around ‘a replacement for C’ or ‘why Rust is not a C replacement when compared to <insert-other-language>’. From my massive forum lurking habit, at multiple places, the argument seems to be closer to the following:
Person 1 says, “Zig is a viable C replacement, Rust is a C++ replacement.”
Person 2 responds, "Rust is absolutely a C replacement. Rust is so much simpler than C++ that they should not be compared. That simplicity, which is much closer to C than C++, is why Rust should be viewed as the ultimate C replacement."
So, it’s not a clear cut statement that Rust is closer to C than C++. However, I certainly have taken that as the argument in most discussions I have seen.
Rust is not, in fact, much simpler than C++, and gets less so every release. In five years Rust will be obviously, fully as complex as C++ -- provided it is still used.
Maybe it is my Stockholm syndrome of reaching for C++ when I need something like it alongside my managed languages, but I definitely find template metaprogramming easier to follow, especially since constexpr and concepts, than the various kinds of macros available in Rust.
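A tiny sketch of what I mean (C++20, names made up): a concept states the constraint up front, so misuse produces a short diagnostic instead of an instantiation backtrace.

    #include <concepts>
    #include <iostream>

    // Constrained template: only integral types are accepted.
    template <std::integral T>
    constexpr T square(T x) { return x * x; }

    int main() {
        constexpr auto n = square(12);   // evaluated at compile time
        std::cout << n << '\n';
        // square(3.5);  // error: double does not satisfy std::integral
    }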
To keep from fizzling out, Rust has to be much better than what it is trying to compete with, so pointing to something similar in an established language falls short.
In any case, C++20 has Concepts, which, where used, clear up the template error problem.
I've talked with people in the rust community about modern c++ being similar to rust. They all agreed and honestly thought it was cool. Maybe 1 out of 50 rust programmers language bash and are ignorant, those people tend to be junior and are trying to learn. Everyone else is cool.
That said, Rust is pretty different from C++. Most people don't compare Rust to C because C is so heinously unsafe and Rust is the polar opposite. Ironically, Rust was inducted as the second supported language in the Linux kernel, alongside C. So they have that in common...
I wouldn't make too many assumptions about the rust community, they are pretty nice smart reasonable people who tend to write c/c++ for work...
Hmm. Since Rust 1.0 in 2015, Rust has provided natural for syntax which lets you write:
for clown in clowns
This is pretty different from how C++ iterators work:
for (auto clown = clowns.begin(); clown != clowns.end(); ++clown)
So, to "try to compete" we'd expect Rust to adopt the C++ features here right? But instead, in C++ 20 now you can write:
for (auto clown : clowns)
Notice that, as so often, to get here C++ has to add layers of hacks. In Rust clowns is simply any type that implements IntoIterator. That's all, this feature is just syntax sugar and you can implement it yourself calling std::iter::IntoIterator::into_iter(clowns) and then looping over the resulting iterator -- but C++ has not only explicit handling for the range feature here, it also needs two extra special cases to handle the broken C++ built-in array type and the C++ initialiser structures so that these both do what you expect.
So, five years later, and with extra hacks, C++ gives you the same feature as Rust, and we are to believe that C++ isn't the one trying to compete?
Range-based for loops were introduced in C++11 (technically C++11 was published in 2011, but almost all compilers supported this way back in 2008-2009), and it's also nothing more than syntactic sugar. In order to provide support for a type T you need the following functions, which can be free functions or member functions:
begin(T&)
end(T&)
++(decltype(begin(T&)))
The only hack I can think of related to your post is that if T is an array, it uses the free functions std::begin and std::end which are overloaded to work with C arrays.
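For illustration, a minimal made-up type that satisfies those requirements and works in a range-based for loop:

    #include <iostream>

    // Hypothetical range: begin()/end() return something supporting
    // ++, !=, and * - all the loop desugaring needs.
    struct Counter {
        int first, last;
        struct Iter {
            int value;
            int operator*() const { return value; }
            Iter& operator++() { ++value; return *this; }
            bool operator!=(const Iter& other) const { return value != other.value; }
        };
        Iter begin() const { return {first}; }
        Iter end() const { return {last}; }
    };

    int main() {
        for (int i : Counter{1, 4})
            std::cout << i << '\n';   // prints 1, 2, 3
    }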
It's not really competing. New rust features often get compared to existing C++ features. Also, I'm sure many people in C++-land are looking at rust and wondering which bits they like can be brought over.
That's actually a motivating reason why people and companies tend to like Rust. Many people transition to Rust because C++ has become very bloated and complex, and the rules for what happens in the language can be more complicated than x86 assembly...
I wasn't implying otherwise. As I said, there's plenty of Rust that is inspired by C++. A lot of the 'const fn' stuff is talked about in the context of constexpr, for example. A lot of the const generics as well.
This was wrong (and I was away long enough that the correction window expired), as I've acknowledged elsewhere, but I would suggest that factually the numbers don't match your claim at all.
> The trouble with "Modern C++" is that nothing gets deleted from "Classic C++".
This is not exactly true. For example, with the introduction of smart pointers, manual memory management was kicked out of C++'s happy path. This makes code far simpler to reason about, and safer. Sure, you can still use new and delete, but since C++11 you'd better have a very good reason to use those.
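A quick sketch of that happy path (illustrative only):

    #include <iostream>
    #include <memory>

    struct Widget {
        ~Widget() { std::cout << "Widget destroyed\n"; }
    };

    int main() {
        // No new or delete in sight; ownership is explicit and cleanup
        // is automatic, even on early returns and exceptions.
        auto owned  = std::make_unique<Widget>();
        auto shared = std::make_shared<Widget>();
    }   // both Widgets destroyed here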
Nevertheless, it doesn't make sense to complain about not deleting stuff from C++ when a) until recently C++ was repeatedly criticized for having an extremely spartan standard library, b) since C++98 basic features have been added to C++, such as support for parallel and concurrent programming, and c) there is no feature that gathers anything resembling a consensus on being worth taking out of the standard.
I'd add that pseudo-standard libraries for C++, such as Boost and POCO, have been popping up and growing in both scope and adoption. This reflects a clear need felt by the C++ community to increase the scope of its standard components.
Outside of the happy path doesn't mean inaccessible or uncommonly used. C++ is on track to be a composition of ten languages by 2030. I'm not against that in general - it's kind of liberating - but at some point I hope someone decides to do some housekeeping.
> There are too many APIs that still want raw pointers.
Irrelevant. C++'s smart pointers deal with ownership and life-cycle management. You can still pass raw pointers to objects managed by a smart pointer.
The only exception is when you're dealing with a framework which already handles life cycle management, such as Qt. Nevertheless, even Qt offers its own smart pointers suite.
If the API takes ownership, can't you just release the smart pointer, effectively forgetting about it and letting the new owner deal with it? Maybe I'm missing something.
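Something like this sketch, with a made-up ownership-taking API, is what I have in mind:

    #include <memory>

    struct Widget {};

    // Hypothetical legacy API that takes ownership of the raw pointer
    // and deletes it when done.
    void api_takes_ownership(Widget* w) { delete w; }

    int main() {
        auto w = std::make_unique<Widget>();
        // release() relinquishes ownership without deleting, so the
        // callee is now responsible for the Widget's lifetime.
        api_takes_ownership(w.release());
    }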
That means it is no longer managed by a smart pointer and you need to understand the nature of the API. I understood the above comment to mean that you can just solve problems by calling "get" and moving on with your life.
There's nothing wrong with using a raw pointer as a function argument. In this context it is usually equivalent to using a reference. What is wrong, in fact, is instead passing a unique pointer (necessarily by reference) or, even worse, a shared pointer (by value) when there is no need for sharing.
> In this context this is usually equivalent to using a reference.
Not really. The very, very, very important difference is that the pointer can be null, signalling "no object". For some purposes this is a critically important distinction; in other cases, you should be passing a reference.
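A sketch of the distinction (illustrative names):

    #include <iostream>
    #include <string>

    // Reference parameter: an object is required.
    void print_name(const std::string& name) { std::cout << name << '\n'; }

    // Raw pointer parameter: "no object" is a legitimate input.
    void print_name_or_default(const std::string* name) {
        std::cout << (name ? *name : std::string{"<anonymous>"}) << '\n';
    }

    int main() {
        std::string n = "Grace";
        print_name(n);
        print_name_or_default(&n);       // with an object
        print_name_or_default(nullptr);  // explicitly "no object"
    }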
Anybody who believes this ought to think it an excellent reason to stay away from C++. For actual software projects, I mean. Obviously, if you want a career writing books (like the one reviewed), or on the conference circuit, or training, or several other prospects, this is great news. C++ is a good language in which to charge somebody $1000 per day plus expenses to tell them what they already knew.
But if you actually write software - or worse, if you need software but that isn't really what you wanted to do, just a necessity, like farmers needing to know how to drive a tractor - then this ought to put you off. C++ doesn't want to be your best choice, according to ncmncm anyway; it wants to be a trap you fall into and can't get out of. Run away!
It's not silly at all, you're just not aware of the larger context the parent is speaking of (ironic considering accusations about myopia).
For example, the current c++ project I work on (for a large company) has over 1.5 million lines of c++, spanning over 5,000 files, with business knowledge and bug fixes spanning over 30 years.
It's not going to get rewritten in anything else, and the same goes for large amounts of other business software.
Depending on what that business software is doing, I wouldn't be surprised if a few hundred thousand lines were written in something else. Microsoft (https://medium.com/@tinocaer/how-microsoft-is-adopting-rust-...), etc., are all doing this right now. But yeah, if you are writing low-velocity, non-mission-critical software, no need to bother. On the other hand, there may come a point in time where that C++ project gets a boost by factoring the critical parts out into a safe language...
The one I’m working on is a behemoth and already has several hundred thousand lines of other languages (python, Java, scala) as part of it.
The c++ will be staying c++.
I don’t say this out of any affinity for c++(check my other comments for details), rather because for business reasons, the core of this system will always be c++.
In fact, we are currently embarking on a multi-million-dollar project to upgrade the application to run in the cloud, and one of our biggest challenges is finding qualified C++ candidates.
Despite that, due to the complexity and scope of the application and years of business knowledge captured in obscure parts of the code, there is no consideration for it to be rewritten in anything else.
P.S. the software is absolutely mission critical (responsible for a non-trivial % of total company profit), hence the reason it's not going to get rewritten without a very, very compelling business case.
Unfortunately, the real world often gives us constraints that we can't change and we have to work around. We should absolutely use Rust or other "safer" languages where we can. But if we can't, for whatever reason, it makes sense to try to make the best of the hand we're dealt. That means regardless of what new developments are made in other languages, it's necessary to figure out how to use existing tools/languages like C++ safely.
Reality check: the OS used by 80% of the world for desktop workloads has used COM as the main userspace API since 2007.
Classical Win32 C APIs are for all practical purposes frozen since Windows XP.
And the answer you probably won't like: when not using COM directly, the advised way from Microsoft is to make use of .NET bindings for the OS APIs, either via C++/CLI or, the new kid in town, C++/WinRT as a Windows Runtime Component.
Even if you don't use Windows as a desktop, the other desktop contenders also use COM-like APIs in their driver stacks; IO Kit, DriverKit, Treble, and XPC/Binder aren't much different from the DCE RPC development experience.
The book reviewed is essentially backward-looking. It treats C++11 and newer features as inherently suspect. Its suspicions of newer features are based on superstition. You should not buy or pay attention to recommendations in this book unless obliged by employment at Bloomberg, and often not then.
The book covers only up to C++14 because that is the latest Standard that Bloomberg has managed to field. They got to use C++11 only a couple of years back. Code written at Bloomberg is not, as a rule, good code. The BDE library much (but not all) Bloomberg code relies on is, while nominally open source, not used outside Bloomberg, for reasons. (When BDE group were directed to start reviewing outside pull requests, all discussion was of how to find time to reject whatever might come up. They need not have worried.)
This is not a good book unless what you need is a doorstop.
Our analysis of C++11/14 features aims to be as objective as possible. Could you please provide any example of what you consider "superstition"?
Keep in mind that the book was heavily reviewed by multiple members of the ISO committee and experienced developers such as Andrei Alexandrescu, Sean Parent, Nina Ranns, Alisdair Meredith, and more. There's zero chance that "superstition" would have made it through their review, and I'm eager to see what you come up with :)
Your post is also spreading falsehood. Bloomberg has been using C++17 for several years already, and many teams nowadays use C++20 and even C++23 features. We target C++14 because we find it intellectually dishonest to claim to know the best use cases and pitfalls for features unless we have had multiple years of experience with them in the real world.
Your opinion seems quite biased, I take it that possibly you had some bad experiences at Bloomberg, which can happen as in any other large company.
Still, I'd be happy to hear your feedback and improve the book if you provide some concrete examples of "superstition".
I remember when this book was a 12-page polemic vainly ginning up arguments for limiting use of any language features newer than C++03. I thought it had been rightfully abandoned. Instead it hypertrophied as all Lakos work does.
Anything starting from that premise was bound to lead nowhere good.
The 1300-page size will anyway help limit its influence, just as has the gross inflation of Lakos's other output.
It is now obvious you have no intention of discussing the technical merits or drawbacks of the material in the book.
It is also obvious you hold a grudge against Lakos and/or Bloomberg for whatever reason.
Guess your crusade of spamming "please don't buy this book" without providing any concrete arguments is the way you present your grudge to the world.
Well, if you ever change your mind and intend on providing constructive feedback, feel free to let me know. I'm also happy to have a call and discuss what could be improved in the book for future editions.
Experienced Python, C, and Java programmer here (am also learning Rust on the side). Recently picked up a C++ project at work that targets C++14. I know caveman C++ from my time at University 20+ years ago. My god is there a galaxy of complexity in modern C++. I get the need for some of it (e.g., smart pointers), but for new projects, I can’t see a good reason not to use Rust at this point.
No joke. I used C++98 for a job a while back and got comfortable with it. I asked, "hey, why don't we use C++14?" (this was years ago). The answer was basically that modern C++ is too complex to keep codebases sane without spending a week writing rules and having dozens of meetings, so they opted for the 20-year-old version.
More recently I've learned rust. I haven't missed c++ at all. Even the more modern stuff I've learned later on...
I was kind of hoping this was an updated Modern C++ Design by Andrei Alexandrescu. That was my favorite C++ book, and as I recall, it didn't use the STL in Loki at all. It was the better template design, though heavily at the mercy of the compilers of the time.
Somewhat tangential, but Andrei Alexandrescu was fundamental for this book -- he reviewed the entirety of it, providing a lot of suggestions and improvements, and also helped write a few sections :)
I really wish that not every submission about anything C++ related would devolve into a full-on discussion about Rust. It keeps happening everywhere. Why do people have such problems with someone else using C++ for whatever they are building?
If I had cash to burn I might keep buying computer programming books, but as it is it's just become easier to use Google to research compsci topics in every area. For example, if I get confused about constexpr functions in C++ I can just Google with upload dates restricted to the past four years and up pops, for example, a nice learncpp page with just about everything I need to know for basic uses.
That said, I downloaded their sample pdf and the nice thing is the code they present is very well commented and so it's easy to read. They seem to take a 'read the code to learn the subject' approach, which I appreciate. Note that it takes a slightly conservative approach:
> "Only the passage of time can distill the programming community’s practical experience with each feature and how well it fared, which is why this book discusses features added up to C++14, even though C++20 is already out."
Maybe I'm missing something but they don't seem to offer the book as a searchable PDF? There seems to be a Kindle edition and paperback only. Worried about piracy or something? Otherwise it might be a useful reference, I guess.
They're not really for the same purpose. The internet is great for finding documentation and reference material, but for larger, cohesive concepts like the ones covered by the book this post is about, especially since it's somewhat fringe, you will likely only be able to find bad blog spam on Medium, which will be far inferior in quality.
Same for me. Back in the day I learned C++ by reading each item in Effective C++ and studying Ellis and Stroustrup until I (almost) understood everything that Meyers mentioned in the item. It was slow going...
> Maybe I'm missing something but they don't seem to offer the book as a searchable PDF? There seems to be a Kindle edition and paperback only.
The InformIT page linked from the review has (I'm in the US):
* book for $64
* ebook as ePub, Mobi, and PDF for $51
* book + ebook bundle for $86
The sample PDF is searchable, so I presume the full PDF is as well.
If you're looking at the Amazon listing, then no, I don't expect Amazon to offer PDFs instead of, or even in addition to, a Kindle edition. Oddly, the Kindle edition is more expensive than Amazon's paper copy, or InformIT's ebook.
Based on the sample, this looks like a nice book. It seems to have Meyers-like advice in a more concise, reference-style presentation.
It's a slight concern that all four authors are from the same department of the same company. I don't know anything about Bloomberg Development Environment, but I would be skeptical of a similar offering from, say, Google.
Thanks for that info. Definitely this looks like a useful pdf to have on hand to look stuff up with a quick search (carrying around a 1300 pg book is out of the question). I really like the code samples on the sample.
Hey, you seem to have posted variations of this comment over and over again, without providing any substance to your claims.
Could you please explain why you believe that this book is not needed by anyone? Especially when it was reviewed and endorsed by ISO C++ committee members and experts such as Andrei Alexandrescu and Sean Parent. Concrete examples would be appreciated.
My mental image of C++ is a tool which is great when used by an expert, but has lots of sharp edges which can cut you if you're not careful - so not something I would apply the verb "embrace" to...
Collaboration on a C++ codebase can be interesting. One of the most common approaches that organizations use is to choose a subset of the language and discourage committing code that doesn't conform to that subset.
This limits the sharp edges and in particular makes it easier to identify where they might exist. Perhaps the most common rule is the prohibition of exceptional control flow. This can seem really limiting at first, but encoding the possibility of errors using the type system and forcing callers to handle errors explicitly is really powerful.
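For instance, a sketch of that approach using C++17's std::optional, as one way to encode it (names are illustrative):

    #include <charconv>
    #include <iostream>
    #include <optional>
    #include <string>

    // Failure is encoded in the return type instead of thrown, so every
    // caller is forced to consider the error case.
    std::optional<int> parse_int(const std::string& s) {
        int value{};
        auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), value);
        if (ec != std::errc{} || ptr != s.data() + s.size())
            return std::nullopt;
        return value;
    }

    int main() {
        if (auto n = parse_int("42"))
            std::cout << "parsed: " << *n << '\n';
        else
            std::cout << "not a number\n";
    }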
Sure, but the effect of such guidelines, including the "Core Guidelines", is to further shatter the language into effectively incompatible dialects while not really solving anything.
One of the book's authors actually proposed work to get C++ to a place where C++ 20 could begin actually abolishing the things everybody agrees you shouldn't do, Vittorio's "Epochs" proposal. https://vittorioromeo.info/index/blog/fixing_cpp_with_epochs...
But the preference for C++ 20 was to add yet further stuff. I believe that, unwittingly, the C++ 20 process sealed the fate of the language by choosing to land features like Concepts but not attempt Epochs. I'm not sure it was actually possible, but if it wasn't possible in 2020 it's not going to be any easier in 2026, and it's only going to be more necessary.
The active subset is always the set of features not supplanted by the latest Standard supported by tooling in use. Anything newer is safer and nicer than old crap, so there is no temptation to stick to old crap, except among people who have elected not to learn anything new. Those are typically approaching retirement anyway.
There is still a very large set of people using older-style C++, particularly those that came from C. I can assure you they are a long, long way from retirement.
This is particularly true of large code-bases in established firms.
Personally, I think it would be an interesting idea to soft-ban a lot of pointer use and only allow shared and unique pointers. As pointed out in an earlier comment, it is a 'Python'-like experience without the performance compromise.
That said, the exception is when you need to use well-tested existing libraries such as libmicrohttpd to implement a local REST server without buying into some mega-framework.
There's nothing about Epochs which cares about software licensing.
Yes, you will need the source code to C++ software to actually compile it but the relationship to licensing is at most coincidental.
Much of the time when you really can't negotiate source code, it's because it's a "pig in a poke" IP sale, what you're buying was actually either worthless or the seller never really owned it anyway. The video games industry is certainly rife with this.
"Because of ISO" has to be the lamest C++ excuse, and perhaps the fact it's seeing more use recently reflects a larger problem.
Also, remember, that ISO document is known for two decades to not actually describe the language as implemented, but the fix (pointer provenance) is controversial enough that the committee keeps "forgetting" to fix it. The ISO document describes an imaginary language which would be incredibly slow and useless if it really existed, the document exists because the real C++ language is sort-of similar to this imaginary language and for some people (though it seems gradually fewer over time) that's enough. Any pretence that ISO requires it to "work for everyone" is nonsense since as documented it isn't in fact working for anyone.
† One of the fun side effects is that you can't explain what ARM Morello ports do in terms of the ISO standard. CHERI reifies provenance, which would be fine if the standards document explained what provenance is, but the C and C++ documents deliberately do not do that. On paper Morello ports just waste space to make some valid programs crash, which is weird.
You can do CHERI with Rust exactly as it stands (and people have done that), it's just that usize and isize are bigger than you'd want because now we're stashing provenance, so the other options are worth exploring.
Rust actually talks about provenance (messing with pointers is of course unsafe so it doesn't need to worry about this in most of The Book) so this isn't a mystery in Rust. Indeed the API as it stood before Aria's work already makes it clear that you're stepping off the edge of the cliff if you try to do address arithmetic and similar provenance defeating magic. Aria's "Tower of weakenings" is about firming up some of those first steps, allowing us to do some Road Runner don't-look-down bullshit and know whether we can get away with it on real hardware with the real compiler, while still forbidding things that are definitely crazy (the ISO documents say such things work in C++, but they do not work in actual C++ compilers, they have, drum roll... Undefined Behaviour).
Aria did a bit more than write some blog posts, for example:
Again though, this isn't about licenses. Regardless of why you don't have source code, that's where the problem is. "Gee, it's difficult to compile this program without source code" is maybe something where C wants to draw a line in the sand because of its very simple ABI, but C++ is a long way past the point where that's useful, and people need to stop pretending otherwise; it's a cause of enormous frustration in the community.
The way Rust epochs work requires a compiler that is fully aware of all existing epochs and that goes through all the code applying the epoch semantics each crate expects per its build definition.
Also, epochs can't introduce backwards-incompatible semantics that break across epochs: imagine a crate exporting something whose runtime representation changes between epochs and that is used as a parameter in some callback implemented in that epoch.
Syntax. Rust (since 1.0 in 2015) has a single Abstract Syntax, and the written syntax of each Edition is translated to the Abstract Syntax.
Rust provides tools which will apply the equivalent transformation to your actual code, but of course this transliteration is ugly, so the preferred form for further development of a 2015-edition crate is the 2015-edition code. The transform is useful if you've decided to actually update to a newer edition, since it serves as a working (but perhaps slightly ugly) start.
This is already rather more compatibility than C++ has delivered. Rust 2015 Edition works just fine on a brand-new Rust compiler in 2022, whereas a C++ 20 compiler can't always compile valid C++ 98, C++ 03, C++ 11, C++ 14 or C++ 17 code except via a "version" switch of some sort. They are, ultimately, six distinct versions of the language.
It's true that this limits what is possible with a Rust Edition. However, because so much is possible, it inspires people to put in that extra little bit of effort to get things done compatibly.
For example, in Rust 1.0 you couldn't just iterate over an array by value. The built-in arrays didn't implement IntoIterator, because that needs const generics and Rust 1.0 didn't have const generics. Fast-forward to the Rust 2021 edition, and this works. But wait, that should be impossible: you can't have arrays implement IntoIterator only on newer editions, because editions only change syntax, and that's not syntax.
So there's a hack. On modern compilers, for the Rust 2015 and 2018 editions, when you write x.into_iter() and "x" is an array, the compiler silently ignores the fact that it knows arrays are IntoIterator and continues considering other options, crucially (&x).into_iter(), the iterator over a reference.
That's all the hack does, though. Arrays are IntoIterator; you can use them as you'd expect for a container even from Rust 2015 code now, you can for-loop over them, you can pass them to functions which want an IntoIterator. You just can't use this one method-call syntax, which would always have done something else.
This feature costs Rust a little hack in the 2015 and 2018 edition parsers forever and that's all.
I bet that if Rust achieves the market size C++ enjoys in 2022, with all the various kinds of compilers, OS support and deployment scenarios, then over the next 40 years those hacks will end up just as messy as a language-version switch in any modern language today.
Unfortunately, most likely I won't be around to collect.
I can't guess what would constitute "the same market size" especially because as we saw C++ is actually six different slightly incompatible languages.
For the purposes of defining their "success", people like Stroustrup seem to consider that the firmware developer who compiles their caveman 1990s C code with a GNU C++ compiler the board vendor provided is a "C++ programmer", although if you ask that developer about, say, operator overloading (a C++ 98 feature) they will look blank 'cos even C89 is a bit novel for them.
At the other extreme, HN resident C++ apologist ncmncm seems clear that nothing short of C++ 20 (a language which is documented but not yet fully working in compilers you can get) really counts as C++, and so if you're not writing Modules with Concepts (and presumably reporting Internal Compiler Errors left and right as you work) that's not really C++.
Those are some very different size "markets". I suspect Rust is already similar numbers to the latter, but is a very long way off the former.
But yes, a 47-year-old Rust will be crusty. However, with any luck PL research won't have stood still for forty years, and we'll be recommending people adopt something a bit less crusty for new work. One of the things that's different is the Rust community and its ecosystem, which means it's not necessarily about winning converts for "the cause". A better language than Rust is a good thing, not a bad thing.
Because different types of software make good use of different subsets of C++. A "footgun" for one codebase is a "high-leverage feature" for another code base. The subset of "no footguns" C++ that is common to virtually all applications probably asymptotically converges on the empty set. A feature of C++ is that you can precisely select an optimally expressive subset for most applications instead of being forced to write code in language that clearly wasn't designed with your use case in mind.
Having a lot of tools in your toolbox doesn't necessitate using all of them in inappropriate contexts. I don't try to hammer a screw just because I own a hammer.
This analogy sounds nice but is deceptive. The problem is that there's no place where a footgun is appropriate. I need a toolbox for screwing things, so I get a screwdriver toolbox, but in that toolbox is a footgun designed to blow my foot off. Why? In what context is that footgun appropriate? Should I put it in the hammer toolbox? C++ has footguns everywhere, and there's no context where many of them make sense.
The other thing with C++ is the complexity. The toolbox is so jam-packed with millions of tools that I basically can't comprehend the full ecosystem and how everything works. And when I pick one tool, that single tool itself has like 20 different ways of being used, with a bunch of edge-case gotchas.
Because that subset can still be better at solving your problem than those other languages - it could be faster, more mature, better at solving certain problems, you can leverage existing ecosystem etc.
This is exactly true. No one in the C++ world uses all of C++ in their project. It is a multi-paradigm language, and you are free to choose one (or a few) of the paradigms and follow it throughout the entire project. For example, you can choose to base your project on virtual functions and interfaces, or you can choose to employ static polymorphism instead. Remarkably, even the C++ standard library, for all its intricacy and complexity, does not use all of C++. Also, it is useful to see C++ as two languages, one for library developers and one for developers of applications.
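As a rough side-by-side illustration of those two paradigms (Shape and Square are hypothetical names, not from any particular codebase):

    #include <iostream>

    // Dynamic polymorphism: the classic virtual interface.
    struct Shape {
        virtual ~Shape() = default;
        virtual double area() const = 0;
    };

    struct Square : Shape {
        double side;
        explicit Square(double s) : side(s) {}
        double area() const override { return side * side; }
    };

    // Runtime dispatch through the base class...
    double area_dynamic(const Shape& s) { return s.area(); }

    // ...or static polymorphism: a template accepting any type with an
    // area() member, resolved entirely at compile time, no base class needed.
    template <typename T>
    double area_static(const T& s) { return s.area(); }

    int main() {
        Square sq{3.0};
        std::cout << area_dynamic(sq) << " " << area_static(sq) << "\n";
    }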
There are lots of other advantages and disadvantages to any given language. You’re right that this is a mark against C++, but wrong in the assumption that it (always) outweighs all other considerations (performance, familiarity, compatibility with existing C++ codebase, portability to odd architectures). (Personally, I don’t like C++ and prefer Rust. But it is also reasonable to choose C++.)
> If I have to use a subset of a language to avoid shooting my feet, why not just use a language without footguns?
Because it has absolutely nothing to do with footguns.
The main factors in picking subsets, beyond personal taste in coding style and office politics, boil down to a) avoiding the introduction of exceptions into legacy code, and thus the need to revalidate what is otherwise solid code, and b) not being forced to deal with the complexity of template metaprogramming when the value it adds back is negligible.
Computer science programs used to teach variations of C/C++. Last I checked, both where I went and from talking to junior engineers I have worked with, you can get through a whole CS program at a top-50 university without touching the language.
There have been debates here about C/C++ being "dangerous". And it can be, if you don't know how to use it. I recall inline assembly in C drawing the same kind of debate.
It doesn't take being an "expert". It does take "understanding what you are doing" but that same thing applies regardless of the language/tool/etc.
Wouldn’t you say that “understanding what you’re doing” applies a little more to C/C++ than to many other languages/tools/etc.? Both Microsoft[0] and Google[1] have established that ~70% of the defects they encounter in their flagship products and systems are memory-related, and generally attributable to the “dangers” of the underlying language: C++.
It seems significant that, given that the individuals working on these projects are professionals, experts by some definition, they too are encountering issues related to the sharp edges of these languages. Issues that have wide ranging implications.
In light of that, doesn’t saying that “understanding what you’re doing” is all it takes to productively and safely use a language like C/C++ seem simplistic? Even the experts are getting it wrong.
I took a couple C++ classes in college. I’ve worked on both desktop and backend C++ systems in both C++98 and ‘11. It can be a difficult language, it has more than its share of footguns, it definitely helped me appreciate other languages that don’t have similar issues.
I prefer Rust's approach to spatial memory safety over C++'s: it doesn't get in your way, and it's good at preventing out-of-bounds access at a low performance cost. I also like Rust's approach to threading better. In a world where data races are undefined behavior and the typical threaded C++ program (from both before and after C++11's memory model) is an entangled, UB-filled mess that's impossible to refactor, Rust's Send/Sync and &/&mut are needed discipline for organizing programs. I'm unconvinced by Rust's insistence on temporal memory safety (the compiler follows fixed rules; it takes unsafe code to teach it about things like scoped threads) and by "aliased xor mutable", which rules out even many correct C++/Java architectures and requires either dangerously tricky unsafe code or needless rewrites in safe code; it optimizes for local reasoning at the cost of flexibility and expressiveness, which is sometimes a bad tradeoff. And Stacked Borrows is a needlessly strict nightmare, with no equivalent of unique_ptr supplying RAII deallocation without noalias semantics. Though it is wise to be cautious in C++ around objects with self-pointers (unsafe to move) or aliasing member pointers (tricky lifetime constraints).
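On that last point, a small sketch of why self-pointers make a C++ type unsafe to move (Cursor is a hypothetical type, just for illustration):

    #include <string>
    #include <utility>

    // 'current' points into this object's own 'buffer', so the
    // compiler-generated move copies the pointer verbatim while the
    // buffer it referred to is moved away.
    struct Cursor {
        std::string buffer;
        const char* current;   // invariant: points into our own buffer

        explicit Cursor(std::string s)
            : buffer(std::move(s)), current(buffer.c_str()) {}
    };

    int main() {
        Cursor a("hello");
        Cursor b = std::move(a);
        // Depending on the string implementation, b.current now points
        // into a's moved-from storage or into memory b.buffer happens to
        // own. Either way the invariant is broken, and dereferencing it
        // risks undefined behaviour.
    }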
When the problem is "understanding what you are doing" it's not so bad. But often the problem is more like "understand what everyone around you has done". If they shared an object across threads, then you can't do mutation without a lock. If they retained a reference into your vector, then you can't grow it. It's this sort of invisible UB at a distance that gets so hard to deal with as projects get larger, even between experts.
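A minimal sketch of that "retained a reference into your vector" case, which is the classic reference-invalidation trap:

    #include <vector>

    int main() {
        std::vector<int> v = {1, 2, 3};
        int& first = v[0];   // imagine another team kept this reference
        v.push_back(4);      // may reallocate: 'first' now dangles
        // int x = first;    // would be undefined behaviour -- invisible
        //                   // UB at a distance, with no diagnostic
        (void)first;
    }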
What you need to understand in C++ sometimes goes pretty deep, though. I’m thinking about things like lvalues, rvalues, const vs constexpr, etc.
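A quick sketch of a few of those, using only the standard library (the names are made up):

    #include <string>
    #include <utility>

    constexpr int square(int x) { return x * x; }  // usable at compile time

    void demo(int n) {
        const int a = n;             // const: immutable, but set at runtime
        constexpr int b = square(4); // constexpr: computed at compile time
        static_assert(square(3) == 9, "evaluated by the compiler");

        std::string s = "hello";       // s names an object: an lvalue
        std::string t = s;             // lvalue source: t copies the buffer
        std::string u = std::move(s);  // rvalue source: u steals the buffer
        (void)a; (void)b; (void)t; (void)u;
    }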
That's not even the right analogy. I think of it more as a rotary phone dial: it's old, overly complex, and poorly designed. It's still in use for legacy reasons, and because you have tons of these "experts" who have invested so much time and pride in it that they can't really accept that legacy is the only reason C++ is still here.
The problem is the nature of programming makes it so that once a language embeds itself into the infrastructure it's hard to remove or upgrade. Very similar to how most of the front end world is stuck with javascript.
JavaScript, as ugly as it is, is nowhere near the monstrosity that C++ has become, though.
Hi! This book is not targeting Bloomberg employees only. It is written for the broad audience of C++ developers who have previously used modern standards, or C++03 developers who want to learn C++11/14 in an effective and safe way.
Would you mind explaining why you believe that the audience for this book is limited to a specific company? Concrete examples would be appreciated.
If you're working (i.e. getting paid) to code in C++, you're either working on an old legacy code base or you aren't, because modern new projects have other viable options now that offer both bare-metal speed and high-level functionality.
I get paid to write C++. Approximately 610,000 lines of C++. Typically hundreds of commits per month, sometimes thousands. If that's a legacy code base, I'm happy to keep it. You'd be insane to imagine rewriting this in any other language.
"Unsafe" has a very specific meaning in our book -- it doesn't refer to security or to the fact to a feature is inherently bad.
It means that the cost/reward ratio of teaching a certain feature and using it in a large codebase to which multiple engineers contribute to tends to be negative.
There's nothing inherently bad about 'final', but it's easy to overuse it for no real benefit, and it can cause reusability issues in larger companies where inter-team refactorings cannot be done automatically. That's pretty much it. There are also some excellent use cases for 'final' described in the book, but they are very niche.
Hence the "unsafe" categorization, i.e. I wouldn't teach this feature to a new hire :)
If you had spent a few minutes reading the meaning of "unsafe" in the book's first chapter (freely available), you would know exactly what we mean by "unsafe", which is far from the misinformation you're spreading.
I don't know if you have been fired from Bloomberg or failed to pass an interview there, but there seems to be vitriol and anger behind your posts.
I think that people would take your "criticism" more seriously if you backed it up with facts and examples instead of repeatedly begging readers to not buy the book :)
But virtual functions have strictly limited usefulness. If your go-to organizational feature is class hierarchies, you are probably not programming well.
'final' only partly relates to virtual functions; you are forgetting (or ignoring) the use of 'final' on classes.
Also, 'virtual' functions are extremely useful to define interfaces that separate teams can implement and consume independently.
Variants and algebraic data types are not the answer to anything, and it is not "outdated" to use 'virtual' where appropriate, despite what your other comments might imply.
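A minimal sketch of that interface pattern (Logger and ConsoleLogger are hypothetical names):

    #include <iostream>
    #include <string>

    // Team A publishes only this abstract interface.
    struct Logger {
        virtual ~Logger() = default;
        virtual void log(const std::string& message) = 0;
    };

    // Team B implements it in its own module, on its own schedule.
    class ConsoleLogger : public Logger {
    public:
        void log(const std::string& message) override {
            std::cout << message << '\n';
        }
    };

    // Team C consumes the interface without ever seeing the implementation.
    void run_job(Logger& logger) { logger.log("job started"); }

    int main() {
        ConsoleLogger logger;
        run_job(logger);
    }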
The book agrees with that statement. If only you had taken 5 minutes to read a few pages before begging strangers on the internet not to buy something that could improve their career, just because one of the authors hurt your feelings sometime in the past...
> Virtual functions are a mechanism that has legitimate uses but makes a poor organizational basis for a program.
This is what I would consider "filler", much like the rest of your posts. Opinions presented as facts, without any sort of source, example, nuance, or elaboration.
Some time ago it occurred to me that the price you pay in time and effort absorbing the material from a book like this one is usually incomparably higher than the price of the book itself.
I can assure you that most (if not all) C++11/14 developers who have some prior experience with those standards will find our book useful in various ways.
If you don't want to take my word for it, check out the acknowledgements in the book itself -- it has been thoroughly reviewed and endorsed by many top-notch ISO committee members and C++ experts.
There's absolutely nothing of substance in this book review -- they give the book 5/5 despite not having actually read it, and the only comments on the individual topics concern the length of the chapters. This is really a terrible review.
I'm not sure if you stopped too early or if you have a grudge against the author or what.
> I’m still reading this book, and I haven’t read all the pages (1300+ pages!). Yet, I’ve seen all of the chapters, read half of them, and picked those related to my recent work, tasks, or blog posts. I’m impressed with the level of detail and the quality the authors put into this material. The book has become my primary “reference” for those C++11/14 parts.
Is any new code being written in C++? Apart from adding to existing C++-based projects like Chromium, and maybe supporting legacy trading/financial-services code bases, what else is C++ being used for where all the modern new features would be helpful?
I haven't seen any job postings needing C++ lately, but I am sure some niche jobs exist.
Feels like instead of learning the complexities of modern C++, people might just be better off using Rust or another of the safer programming languages for new projects.
There are 80,000 jobs mentioning or requiring C++ and 4000 mentioning or requiring Rust.
I understand that more "modern" startups will tend to pick Rust, but saying you "haven't seen jobs needing C++" is a plain lie.
Source: indeed.com
> Is any new code being written in C++? Apart from adding to existing C++-based projects like Chromium, and maybe supporting legacy trading/financial-services code bases, what else is C++ being used for where all the modern new features would be helpful?
All Adobe products heavily use C++, most digital audio workstations heavily use C++, all 3D authoring software heavily uses C++, and most new core Blender code is now C++ instead of C. Of course, most game engines are C++-based as well. And this is just the consumer space.
There are domains where there is simply no alternative to C++ in terms of features and speed. Rust, maybe, but I don't know enough about Rust to formulate an opinion. Needless to say, C++ is absolutely here to stay.
I think what's hard for people to realize is that people who like Rust usually aren't asking "is Rust going to replace it?". Cheesy blogs by hype artists say crap like that, but really, having both is obviously beneficial to the programming community.