I wish they were more specific about when to use and not use exceptions. Truth be told, I really hate exceptions, and I strongly feel that they should only be used in the most disastrous situations. I'm messing with the 0MQ C++ wrapper now (which I will probably move off of) and they use exceptions for freaking everything. For example, the send(msg) method returns true if the message is sent, false if EWOULDBLOCK is returned by the C API, or throws for any other error. Like, guys, shit happens on networks all the time; just give me an error. Instead I have to try to track down all the damn exceptions and figure out what throws and what doesn't.
I actually really liked the way Apple did it for Objective-C. Don't use exceptions unless something is really going sideways, instead use NSError. I'm not saying this is the correct pattern for C++, but I don't personally think exceptions are the correct pattern either.
Exceptions are a mistake, plain and simple. They should have never been added to the language. They break the notion of programming as a sequence of events. With exceptions, now you have two sequences, the happy path and the 'wherever the hell the exception is caught' path. No matter how you slice it, exceptions are still a goto under the covers. If they are truly exceptional, you might as well call exit() too.
Go got this right. If there is an error, return it to the caller. The error can be a function, and the caller can call it when they want. This returns programming to a single sequence of events.
> They break the notion of programming as a sequence of events.
It's never been that way. What's a page fault?
> No matter how you slice it, exceptions are still a goto under the covers.
"break" is also a goto. What's your point?
> If they are truly exceptional, you might as well call exit() too.
That's completely unacceptable for robust software. Programs can recover from almost all errors and make forward progress, usually via staging some kind of rollback. Tearing down a whole process in response to error is a ridiculous extreme that works only in a few narrow domains.
> Exceptions are a mistake, plain and simple.
Is that why almost every language ends up adopting them in some way? Even languages that start out vehemently anti-exception, like Go and Rust, end up with the full try-and-catch exception toolkit. That's because, despite protestations, exceptions are extremely useful.
> Even languages that start out vehemently anti-exception, like Go and Rust, end up with the full try-and-catch exception toolkit.
Rust (and as far as I know, Go) have pretty much remained the exact same with regards to their implementations here. Go uses panic/recover a bit more liberally than Rust uses panic/catch_unwind; in Rust, using panic for control flow is not idiomatic, and so it is not done. Since you can turn panics into an abort, they're not something you can rely on when writing a library, and so I'm not aware of any significant libraries that do so. You couldn't even catch panics for a long time, and the possibility was mostly added to prevent UB with regard to extern functions.
I find it amusing and sad that the Rust ecosystem, via this "panics as abort" feature, replicated one of the worst parts of the C++ ecosystem: -fno-exceptions, which, in C++, turns throws into aborts.
In both language environments, the availability of a compiler option that breaks a language feature doesn't erase the existence of that language feature.
It doesn’t break the feature, that’s how the feature is designed. They’re not for error handling. It’s only broken if you try to treat them as something other than what they’re designed for.
I started out as a Rust True Believer and still am active in the Rust community. But I don't use it anymore for new projects because it really doesn't bring anything new to the table. At my current job, I'm knee-deep in C++.
> No matter how you slice it, exceptions are still a goto under the covers.
This is a poor argument; the same could be said for almost all control flow statements: if/else, while, for, break, continue, (early) return ... they're all implemented using "goto under the covers". Moreover, lots of C programs use goto to do their error handling!
Seeing "throw" statements as gotos misses the point: you can't really create spaghetti control flow using "throw" statements.
Instead, you can see them as "return on steroids".
It's true that "try-catch" makes it hard to know where your code is gonna "jump"; but that's the point: as the "thrower", you don't have to know, and you don't /want/ to know.
Think about the standard libraries for Java, C#, Python, D ... they don't have this luxury.
If when using "throw", you're wondering "where's the catch", then you /probably/ have a design issue.
>This is a poor argument; the same could be said for almost all control flow statements: if/else
It's technically possible to implement if/else (and loop conditions) with unconditional jumps and offsets, but for reasonable instruction sets (that support conditional jumps) they won't be "goto under the covers."
What's the reasonable alternative to exceptions? Checking return status or errno after every single function call and then manually propagating it upwards? (optional is a bit better, but also falls into this category.) Programmers are way too lazy for this, and it usually ends up as either a redundant assert() crashing the whole application or just logging an error and continuing with broken state.
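To make the contrast concrete, here is a sketch with made-up reader helpers (nothing here is from a real API): the error-code version repeats the check-and-propagate step at every call, while the throwing version leaves that to a handler further up.

    #include <cstdio>
    #include <stdexcept>
    #include <string>
    #include <system_error>

    // Error-code style: every caller repeats the check-and-propagate dance.
    std::error_code read_line(std::FILE* f, std::string& out) {
        char buf[256];
        if (!std::fgets(buf, sizeof buf, f))
            return std::make_error_code(std::errc::io_error);
        out = buf;
        return {};
    }

    std::error_code read_two_lines(std::FILE* f, std::string& a, std::string& b) {
        if (auto ec = read_line(f, a)) return ec;   // propagate upwards
        if (auto ec = read_line(f, b)) return ec;   // propagate upwards
        return {};
    }

    // Exception style: the happy path reads straight through, and a handler
    // several layers up decides what to do with the failure.
    std::string read_line_or_throw(std::FILE* f) {
        char buf[256];
        if (!std::fgets(buf, sizeof buf, f))
            throw std::runtime_error("read failed");
        return buf;
    }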
Exceptions follow the "let it crash" philosophy from Erlang, instead of broken recovery code scattered all over your code. (Crash doesn't necessarily mean a Linux process in this context; it's more of a high-level notion.) This is a very powerful concept that allows you to think about the bigger picture, such as: if _anything_ unexpected happens inside this transaction, return an error to the user and continue processing the next request from the next user. Error handling should be done by layers, not by grunt work.
There are situations where I agree you might want to know how a certain function can fail; here I think Java's checked exceptions strike a good balance between control and verbosity, as you just have to add a throws clause if you don't want to handle a specific error at that point in the code.
Maybe panic is just a better name, so people don't get the idea to use it for control flow in slightly-less-than-normal situations? (e.g. ValueError or StopIteration in Python).
Nothing about names like StopIteration and AttributeError indicate to me that I should use them for control flow. Python just instructs me to use them that way...
My point is that this part of Python is, IMO, a bit ugly. Not due to naming, but because these are exceptions. A type system with good use of optionals across the standard lib is certainly more elegant than ValueError.
Well, strictly speaking, Go has panic() which is very similar to exceptions. But it also has a culture that strongly discourages using it unless things go very wrong.
Strictly speaking, a) you can catch a segfault, too, by installing a signal handler, and b) recover() is not quite the same, because with "real" exceptions, you can state in a catch clause what types of exceptions you handle, and the compiler/runtime will do the right thing, where in Go, you call recover() and look at the return value to see if there is something you can do about it. And if not, I do not think you can even "re-throw" the panic further up the stack.
(I think the panic()/recover() mechanism is intentionally crude in order to discourage people from over- or abusing it.)
I mean, for hell's sake, C++ exceptions date back to 1990. Nowadays you can throw exceptions on damn 16-bit microcontrollers (https://en.wikipedia.org/wiki/TI_MSP430); I doubt there's a relevant, non-legacy Unix where this does not work.
Nobody in Go culture treats it as "evil voodoo" that I'm aware of. There are clear guidelines. User passes in a path to a file that doesn't exist and you attempt to open it? Return an error. That's not "exceptional" or "programmer error" - it's just normal "shit happens". You have an array-like structure with 10 elements and your function that modifies it gets passed index 13? That's probably a bug or programmer error, not "user" error - so panic.
Go documentation is very clear on the difference between the two, actually. I've never been confused.
I strongly disagree with you. I think too few people use exceptions. They're great for making code more expressive, and it's only through exceptions that you can avoid two-phase initialization of classes and make constructors actually useful. It's only through exceptions that you can have meaningful copy-constructable resource-holding value types. Avoiding these constructs severely restricts the language. And for what? The arguments against exceptions appear to mostly arise from misunderstanding or some kind of weird aesthetic sense that I don't have.
Error codes also generally discard context and encourage a "just abort" mentality, especially with respect to memory exhaustion. That, or they become so general, mechanized, and macro-ized that they might as well be exceptions, but worse in almost every way.
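To illustrate the two-phase-initialization point above, here is a sketch with an invented Config class (all names and details are illustrative only): the no-exceptions version needs a separate init() step and a validity flag, while the throwing constructor guarantees that any object you hold is usable.

    #include <stdexcept>
    #include <string>

    // Without exceptions: two-phase initialization. The caller must remember
    // to call init() and to check valid() before using the object.
    class ConfigNoExcept {
    public:
        ConfigNoExcept() = default;              // phase 1: empty shell
        bool init(const std::string& path) {     // phase 2: may fail
            loaded_ = !path.empty();             // stand-in for real parsing
            return loaded_;
        }
        bool valid() const { return loaded_; }
    private:
        bool loaded_ = false;
    };

    // With exceptions: the constructor either succeeds or throws, so a
    // Config object that exists is always usable and no valid() flag is needed.
    class Config {
    public:
        explicit Config(const std::string& path) {
            if (path.empty())
                throw std::invalid_argument("empty config path");
            // ... load and parse, throwing on failure ...
        }
    };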
> It's only through exceptions that you can have meaningful copy-constructable resource-holding value types.
Could you give an example of this specifically? Are you thinking of e.g. a File class whose constructor opens the file? Because every such example I can think of is one where it ultimately seems inappropriate to me to use exceptions to signal the error. Both from a conceptual standpoint (it might be something you have no control over, so it's not a programmer error) and from a performance standpoint. Example for the latter: say the user gives you a list of hundreds of thousands of files that are to be copied somewhere "if they exist". Imagine half of them don't exist. Now your program is going to have to run through hundreds of thousands of exceptions to signal this.
That file class might hold a file descriptor and dup the file descriptor in the copy constructor. If that dup operation fails because you've hit the file-descriptor limit, and that copy constructor has no way of indicating that it failed to do its job, you're screwed.
Sure, you can give up on copy constructors altogether and manually manage resource copies, but doing so impoverishes the language, wrecks one of its best features, and makes programs both more verbose and probably a bit slower.
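A minimal sketch of the class being described, assuming POSIX dup(2)/close(2); the class name is invented, and move operations and the rest of the interface are omitted for brevity:

    #include <cerrno>
    #include <system_error>
    #include <unistd.h>

    class FdHolder {
    public:
        explicit FdHolder(int fd) : fd_(fd) {}

        // Copy constructor duplicates the underlying descriptor. Without
        // exceptions it would have no way to report that dup() failed.
        FdHolder(const FdHolder& other) : fd_(::dup(other.fd_)) {
            if (fd_ == -1)
                throw std::system_error(errno, std::generic_category(), "dup");
        }

        ~FdHolder() { if (fd_ != -1) ::close(fd_); }

        FdHolder& operator=(const FdHolder&) = delete;  // omitted for brevity

    private:
        int fd_;
    };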
Sure, you can do this, but don't. Approximately the last thing you want is a copy constructor that can fail.
If you have to do an OS call to duplicate the underlying resource, assume the class isn't copyable in this way. It's almost certainly possible to make it movable, and this will get you the bulk of the benefit: you can return it from functions, and you can have a std::vector of it.
(Also worth noting that close (2), which presumably you'd call from the destructor, can fail with EIO or EINTR. I don't consider looping on EINTR ever safe, but that's up to you. EIO, on the other hand - what are you going to do about that? And if you don't call close from the destructor, what the hell use is this class anyway?? Overall, I don't think value objects make very good wrappers for a POSIX-style file descriptor.)
It was instructive. That said, there's nothing wrong in general with a class that owns a file descriptor, and making such a class copy-constructible is a legitimate design decision.
> Also worth noting that close (2), which presumably you'd call from the destructor, can fail with EIO or EINTR.
IO errors from close(2) should be ignored. If you care about durability, call fsync before you close. Even if close "fails" with an IO error, the underlying FD is still closed. EINTR does not actually happen.
A common pattern for dealing with failures on construction, such as the example above, without using exceptions is for the object to be constructed in a null state (often useful in any case) and to implement an 'explicit operator bool()' overload that evaluates to false if construction failed or the object is intentionally null. Of course, that means you have to actually check that construction was successful, similar to returning an error code. It doesn't require you to give up copy constructors etc. if they are sensible (and these are usually, but not always, best served by move constructors).
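A sketch of that pattern with an invented LogFile wrapper (the class and its details are illustrative only):

    #include <cstdio>
    #include <string>
    #include <utility>

    class LogFile {
    public:
        LogFile() = default;  // intentionally null
        explicit LogFile(const std::string& path)
            : f_(std::fopen(path.c_str(), "a")) {}  // f_ stays null on failure

        explicit operator bool() const { return f_ != nullptr; }

        ~LogFile() { if (f_) std::fclose(f_); }
        LogFile(const LogFile&) = delete;            // not copyable
        LogFile(LogFile&& o) noexcept : f_(std::exchange(o.f_, nullptr)) {}

    private:
        std::FILE* f_ = nullptr;
    };

    // The caller must remember to check, just as with an error code:
    //   LogFile log("app.log");
    //   if (!log) { /* handle the failure */ }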
But how is requiring the programmer to know that he has to check error codes and whatnot any different from requiring the call be placed in a try/catch block?
Most cases where exceptions are useful are when the alternative is knowingly causing such a state that will inevitably lead to a seg fault. It's much easier to raise at the point of failure instead of trying to limp until the application eventually crashes or loses you all your money.
Your code isn't really statically typed if every object might support its declared methods, or might instead be an unusable time-bomb. Much better that an invalid object never exists and code that needs that object isn't reachable.
Exceptions are not for handling programmer errors. Assertions are.
Exceptions are indeed used for handling exceptional situations such as actual errors. If you only want to copy a file if it exists, then check if it exists (race-prone but fast) or create a function that copies a file only if it exists.
Or you know, swallow the nonexistence exception. An optimizing compiler might be able to elide the throw statement if it is inlining the copy function.
The only issue seems to be that few people are able to install or create a good top level error handler in case there is an actual unhandled failure.
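For the concrete copy-a-file-if-it-exists case, std::filesystem (C++17) already offers both styles: copy_file throws by default, and the std::error_code overloads report failure without throwing. A sketch of the non-throwing helper (the function name is invented):

    #include <filesystem>
    #include <system_error>

    namespace fs = std::filesystem;

    // Copies src to dst if src exists; returns false on any failure
    // (with ec set for real errors), and never throws.
    bool copy_if_exists(const fs::path& src, const fs::path& dst,
                        std::error_code& ec) {
        if (!fs::exists(src, ec) || ec)
            return false;                       // missing file or stat error
        return fs::copy_file(src, dst, fs::copy_options::overwrite_existing, ec);
    }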
That is not a programmer error; it is a domain error, mostly for math functions or string parsing. (Other than the relatively evil std::length_error/out_of_range, which might go either way.)
In most cases where it is thrown a bit gratuitously, it can be avoided.
You might as well redefine any error as a programmer error.
> That is not a programmer error, this is a domain error mostly for math functions or string parsing.
....what in the world? no... for domain errors there's, erm, std::domain_error. C++ Reference/StackOverflow/MSDN/etc. already explain all these. But it seems I have to copy-paste them here?
"Domain and range errors are both used when dealing with mathematical functions." [1]
"std::logic_error reports errors that are a consequence of faulty logic within the program such as violating logical preconditions or class invariants and may be preventable." [2]
"std::domain_error may be used by the implementation to report domain errors, that is, situations where the inputs are outside of the domain on which an operation is defined." [3]
About the dependency between constructors and inheritance, I like how Rust handled the situation. In Rust you do not have any constructors, but instead you can make a factory method that returns an Option<T> or Result<S, E>, so there is no need for exceptions during construction. You can also do this in C++, but you still have to declare constructors anyway, along with it being a bit inconsistent with other idiomatic C++ code...
At least these days we have class-level initializers to lessen the pain a bit and reduce the number of situations in which you have to write a constructor.
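For what it's worth, the closest C++ analogue is a private constructor plus a static factory that returns std::optional; a sketch with an invented Port class:

    #include <optional>

    class Port {
    public:
        // Factory: returns std::nullopt instead of throwing on bad input.
        static std::optional<Port> create(int number) {
            if (number < 1 || number > 65535)
                return std::nullopt;
            return Port(number);
        }
        int number() const { return number_; }

    private:
        explicit Port(int number) : number_(number) {}  // still has to exist
        int number_;
    };

    // Usage:
    //   if (auto p = Port::create(8080)) { /* use p->number() */ }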
Bjarne's guidance has always been to use exceptions only in truly exceptional circumstances. A function that might occasionally fail on bad user input could instead return a std::optional and be marked noexcept (assuming it really doesn't throw). A function that fails to allocate memory, on the other hand, is truly exceptional, so throw std::bad_alloc.
The STL unfortunately doesn't always follow this approach, and tends to overthrow.
Signaling why something failed without exceptions is still unnecessarily tricky in C++. Not because it can't be done, but because there's no standard guidance on how to do it. There are 3,385 ways to do the job, so in many cases it's better simply not to.
So nice to see that attitude.
Exceptions, as implemented in C++, turn it into a dynamically typed language.
If working with code written by others (libraries, big projects), it is really hard to reason about control flow if every function call is a potential return statement.
It is not exactly easy to read top level error handling code either:
catch (Pig) { // now Grunk need find pig.
The amount of frame unwinding code generated can get really big,
etc.
I wish C++ provided means to preallocate memory for handling task (2 buffers for 4k, map for N objects of type T, ..., ok? then run this).
Instead, there is a Russian-roulette-powered default allocator that can run out of memory at any time...
Most containers have a "reserve" method which does this. For maps you can use Boost's flat_map (ordered) or ska::flat_hash_map (hashed), which both use linear storage that you can reserve. And if for some reason you can't, you can still pass your own allocator with preallocated memory, or use boost::pool_allocator.
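A minimal sketch of the two standard-library options mentioned here (sizes are arbitrary; the preallocated-buffer case assumes C++17's std::pmr):

    #include <cstddef>
    #include <map>
    #include <memory_resource>
    #include <vector>

    int main() {
        // 1. Contiguous containers: reserve capacity up front.
        std::vector<int> v;
        v.reserve(1000);                         // one allocation, no growth later

        // 2. Node-based containers: hand them a caller-supplied buffer via pmr.
        std::byte buffer[64 * 1024];
        std::pmr::monotonic_buffer_resource pool(
            buffer, sizeof buffer,
            std::pmr::null_memory_resource());   // refuse to fall back to the heap
        std::pmr::map<int, int> m(&pool);        // nodes come out of `buffer`
        for (int i = 0; i < 100; ++i) m[i] = i;  // throws bad_alloc only if buffer runs out
    }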
Right. My point was that there is no mechanism to do top level resource allocation, like exceptions do for top level error catching.
Allocators are fine, but they are not enough for this, really:
// big task will create map<int, int> with 1000 elements
MyAllocator a;
a.reserveMemory(????); // <-- what to put here?
bigtask(a);
> My point was that there is no mechanism to do top level resource allocation
How could it make sense if bigtask(a) is a separate function, maybe in another DLL / shared object? Maybe bigtask is not even written in C++ but in C, Rust, D, whatever. Maybe it's not even using malloc but OS primitives directly.
Of course. What I meant is that if I am writing bigtask myself, then preallocating resources for any non-table-like data structure (like map) means I cannot use STL as it is now, even with custom allocators. I have to write custom data structures, or deal with any allocation potentially erroring out.
I don't understand why you are making a distinction between table-like and non-table-like containers: both can take allocators.
But in general, I don't really understand why you would want to do this: how do you expect your code to differentiate between the containers that you want to be affected by the memory changes and the ones that you don't? If you don't want to make this difference and want blanket coverage of, say, every std::vector called by your function, it means that you cannot use any external library in your function, since other libraries may have other allocation requirements that your code would break. This looks like it would completely break encapsulation.
> I don't understand why you are making a distinction between table-like and non-table-like containers: both can take allocators.
Because for map<> in general I cannot know how many allocations, and of what size, it will need to store 1000 elements. I might glean it for a particular implementation of the STL, but not via any API.
Doing anything bearssl-like ("no dynamic allocation whatsoever") just does not work with the STL.
> you cannot use any external library in your function, since other libraries may have other allocation requirements that your code would break.
Exactly. Most C++ libraries are not made with the idea of allowing you to police resource allocation. It's malloc all the way.
> Exactly. Most C++ libraries are not made with the idea of allowing you to police resource allocation
I was saying this from the point of view of the library author, not the calling code. For instance, I spent some weeks optimizing allocations in a library that I'm working on, and am then making assumptions in the rest of the code according to the optimizations I made. If I let users of the library change the allocation policy, I would have to introduce costly runtime checks everywhere to ensure that the invariants I set still hold, that I really have enough memory to do what I must, and abort / throw in case such an invariant is broken. All of these situations are less desirable than enforcing my allocation policy in my code.
In a programming language that allows functions with multiple return values, error values of some sort become almost natural. In Lua, the idiom of returning "nil, errormessage" on failure is very common, too.
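In C++ the closest you get is returning a pair (or a small struct) and unpacking it with structured bindings; a sketch with an invented parse_port helper:

    #include <charconv>
    #include <string>
    #include <system_error>
    #include <utility>

    // Rough C++ analogue of Lua's "value, errormessage" idiom;
    // the function and error text are made up for illustration.
    std::pair<int, std::string> parse_port(const std::string& s) {
        int value = 0;
        auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), value);
        if (ec != std::errc{} || ptr != s.data() + s.size())
            return {0, "not a number: " + s};
        return {value, ""};
    }

    // Usage:
    //   auto [port, err] = parse_port("8080");
    //   if (!err.empty()) { /* handle the error */ }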
The way ObjC did it is to use exceptions to signal programmer errors, and error objects (NSError) for ordinary runtime errors. I think it's the best style, at least until the day programmer errors just won't compile, but that needs something like Idris to take over.
To be fair, it's not always true. Many APIs actually don't have NSError argument and throw Obj-C exceptions on actionable errors. For example, reading from an NSFileHandle may throw NSFileHandleOperationException. I suppose, in most cases it has been done for brevity. Though, you can also argue that only opening a file could fail (due to invalid path or insufficient permissions).
No because I almost never use exceptions myself (unless consuming them). The only time I would really use it is if for some reason the constructor could fail, but usually in those cases there are better ways of designing the class. Otherwise it's error codes/reasons.
Honestly I'm not sure where to draw the line specifically, but I obviously err more on the side of "don't use them". Exceptions in C++ are costly and imo control flow gets all wonky with them...I find it's easier to reason about a program if the error handling is there with the rest of the logic instead of being a list of things that can happen at some point in the above code block.
Any error handling will make control flow explode into checks. Exceptions at least allow you to put the checks in the right place and not necessarily at the point where function is executed.
In typical C code you get to propagate error-check statements all the way everywhere, or risk bugs. Or use a central mechanism like errno and risk thread-safety problems and overwrites, while also not knowing the source of the error.
In C++ you might have the same problem.
Even std::optional (which defers the exception to the point where the value is actually retrieved) is not ideal, as it loses the information on who made or set it...
Calling get on an empty optional is a programmer error though, not unlike dereferencing an invalid ptr, so throwing would be an improvement. :)
Most code would check or call value_or, buggy code would throw. There's still the complication of the unsafe std::optional interface, which can only be solved by wrapping/rewriting IMO.
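For reference, a tiny sketch of the three access paths being contrasted here (value_or, guarded access, and the throwing value()):

    #include <iostream>
    #include <optional>

    int main() {
        std::optional<int> port;                 // empty: no value was set

        int a = port.value_or(8080);             // safe: falls back to a default
        if (port.has_value())                    // safe: guarded access
            std::cout << *port << '\n';

        try {
            int b = port.value();                // buggy path: throws when empty
            std::cout << b << '\n';
        } catch (const std::bad_optional_access&) {
            std::cerr << "read of an empty optional\n";
        }
        std::cout << a << '\n';
    }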
This quote from Kernighan should be at the top of any C++ guideline: "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"
Writing supposedly smart code seems to be a weird culture thing in C++.
I've lost count of the number of C++ devs I've worked with that get the "Mmmm, donuts!" reaction to templates, and leave a stinking pile for everyone else to debug. I'm sure I'm included in that group of programmers, but I hope, not as much.
You've obviously never written any C++. The point of templates is precisely the fact that they make debugging easier, by replacing runtime assertions and bugs with cryptic compile-time error messages.
Figuring out a compiler error is significantly easier than having to debug a malfunctioning test.
You've obviously never tried to debug template heavy code. They are like ugly black boxes - you get an answer (maybe correct maybe not) or you get pages of incomprehensible often useless error messages.
If C++ wanted a metalanguage, they really should have created one long ago instead of continuing this template madness.
I'm going to dissent on this. While the Core Guidelines "raise the floor" for code safety, I think they may be lowering/hardening the ceiling [1] as well. One problem is that they impose an (inflexible) standard for function interfaces that is still intrinsically unsafe. For example, they direct you to standardize on std::shared_ptr for objects shared between threads. But this prevents you from using (even) safer smart pointers that, for example, do automatic mutex locking [2].
Presumably the alternatives to a standardized, intrinsically unsafe interface are either a standardized, safe interface, which would require introducing safe alternatives for unsafe elements like std::shared_ptr and raw pointers, or a more flexible interface standard, for example making your public functions function templates when necessary.
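A sketch of the "automatic mutex locking" idea alluded to above (this illustrates the general technique, not the specific library in [2]; all names are invented): operator-> returns a temporary proxy that holds the lock for the duration of the member call.

    #include <memory>
    #include <mutex>
    #include <utility>

    template <class T>
    class locking_ptr {
    public:
        explicit locking_ptr(std::shared_ptr<T> p)
            : data_(std::move(p)), mtx_(std::make_shared<std::mutex>()) {}

        // Proxy that owns the lock for as long as the temporary lives,
        // i.e. for the full expression `ptr->member(...)`.
        class locked {
        public:
            locked(T* p, std::mutex& m) : p_(p), lock_(m) {}
            T* operator->() const { return p_; }
        private:
            T* p_;
            std::lock_guard<std::mutex> lock_;
        };

        // Note: returning the proxy by value relies on C++17 guaranteed copy elision.
        locked operator->() const { return locked(data_.get(), *mtx_); }

    private:
        std::shared_ptr<T> data_;
        std::shared_ptr<std::mutex> mtx_;
    };

    // Usage (Widget is hypothetical):
    //   locking_ptr<Widget> w(std::make_shared<Widget>());
    //   w->update();   // mutex is held for the duration of update()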
Ugh, so much wrong with this. There is C++, as implemented by every compiler, and then there is fantasy C++, as seen by the committee... See, e.g. this trainwreck:
> P.6: What cannot be checked at compile time should be checkable at run time
Great! I want to do it! later...
> F.4: If a function may have to be evaluated at compile time, declare it constexpr
Ok. You can't put a static_assert in a constexpr function. You can't put a runtime assert in a constexpr function. You can't put a debug print in a constexpr function. How do you even debug that?
I agree, e.g. they recommend not using `#pragma once` because:
> It injects the hosting machine's filesystem semantics into your program, in addition to locking you down to a vendor. Our recommendation is to write in ISO C++
Erm. What. Compiling anything other than a one-file program "injects the filesystem semantics into your program". I mean... your program is stored in a filesystem. #include uses a filesystem. What utter tosh.
And as for locking you down to a vendor, even niche compilers that you've probably never heard of support it.
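For completeness, the ISO-portable alternative the guideline is implicitly pushing is just the classic include guard:

    // my_widget.h (name is illustrative)
    #ifndef MYPROJECT_MY_WIDGET_H
    #define MYPROJECT_MY_WIDGET_H

    // ... declarations ...

    #endif  // MYPROJECT_MY_WIDGET_H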
Hrmf, what I think they tried to say is that #pragma once causes your code to compile differently depending on which file it's stored inside. This can cause problems with some tooling (e.g. precompiled-header generators, some distributed compiler tools, etc.) which concatenate header and source files to generate an amalgam.
So it's not "tosh", it probably just doesn't correspond to your way of writing C++ code.
You disappoint me pjmlp, you're not a true graybeard unless you use Vim or Emacs :o)
The slightly more serious question: is there anything like this in Vim or Emacs-land? I somehow doubt it... I, for one, have never seen something like it outside of the big IDEs (Visual Studio, IntelliJ, etc.)
I think 'dang provided the link so people could read the previous discussion (which has over 100 comments), not to imply it was a dupe. If the latter, I suspect he'd've marked it as such.
Small annoyance (I haven't read the entire thing, but I have strong feelings about this): gsl::index is disgustingly verbose. If you use the STL, just use size_t. If you're using something else, follow the conventions it uses (be it size_t, int, or whatever). Their examples [1] conveniently omit size_t to push their alternative.
Anyone know what these guidelines are talking about when they refer to a 'final_action' object, or the function/keyword 'finally'? I'm referring to E.19. I could not get the example below to compile even using the newest gcc with -std=c++1y at godbolt.org. It seems potentially useful, if it actually exists.
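final_action and finally() come from the Guidelines Support Library (GSL), e.g. Microsoft's implementation at https://github.com/microsoft/GSL, not from the standard library or gcc itself, which is presumably why the example didn't compile on its own. A simplified sketch of the idea (the real GSL version differs in details):

    #include <cstdio>
    #include <utility>

    // Runs a callable when the object goes out of scope.
    template <class F>
    class final_action {
    public:
        explicit final_action(F f) : f_(std::move(f)) {}
        final_action(final_action&& other) noexcept
            : f_(std::move(other.f_)), active_(std::exchange(other.active_, false)) {}
        final_action(const final_action&) = delete;
        final_action& operator=(const final_action&) = delete;
        ~final_action() { if (active_) f_(); }
    private:
        F f_;
        bool active_ = true;
    };

    template <class F>
    final_action<F> finally(F f) { return final_action<F>(std::move(f)); }

    int main() {
        std::FILE* f = std::fopen("data.txt", "r");
        auto cleanup = finally([&] { if (f) std::fclose(f); });
        // ... use f; the lambda runs on every exit path, exceptions included ...
    }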
Not to threadjack, but: I never really learned C++, just C and then started using C++. And I haven't used C++ much in the last decade+, but would like to get back into it and learn how to do things The Right Way.
What are some good online resources for learning "modern C++", for someone who already knows several languages?