Real cost of C++ exception vs error-code checking (lazarenko.me)
71 points by zeugma on March 8, 2011 | 62 comments



I assume he was proving a point: the main() of his full error-code-checking program was not, in fact, checking the return value of foo().

Even in simple example code like this you can forget a check. In this case, result would be undefined if any call to divide failed.

I'd much rather have my program blow up with a readable stack trace pointing to where it happened than have it carry on with a basically random value and then maybe blow up somewhere totally unrelated or, worse, destroy user data.


You can accomplish this by using asserts in your code.

You don't need exceptions to get a callstack and you can assert that values are valid and force a crash / callstack dump when they're not.

On top of that, you can compile the asserts out for release builds if you're confident they won't be hit.
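A minimal sketch of that idea (MY_ASSERT is an illustrative name, and backtrace() assumes glibc's execinfo.h; none of this is from the article):

  #include <execinfo.h>
  #include <stdlib.h>

  #ifndef NDEBUG
  #define MY_ASSERT(cond)                                        \
    do {                                                         \
      if (!(cond)) {                                             \
        void* frames[32];                                        \
        int n = backtrace(frames, 32);                           \
        backtrace_symbols_fd(frames, n, 2); /* dump to stderr */ \
        abort(); /* force the crash / core dump */               \
      }                                                          \
    } while (0)
  #else
  #define MY_ASSERT(cond) ((void)0)  /* compiled out for release */
  #endif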


Normally you turn assert() off for production builds, so it's not a robust solution, as some error conditions may never get triggered in testing.

Memory protection catches a lot of out-of-bounds memory references pretty well, and if you enable core dumps, you can extract a neat backtrace from the core file (provided your routines fail forward in case of invalid arguments). Moreover, some compilers can be instructed to instrument your code and data, including the stack, with guard data meant to trip the process if it accesses the wrong memory region. GNU malloc does some of this guarding if you set $MALLOC_CHECK_.

If you aren't worried about vendor lock-in, you can use GCC's __attribute__((warn_unused_result)) [1]

--

[1] http://sourcefrog.net/weblog/software/languages/C/warn-unuse...


And if you are worried about vendor lock-in, you can hide it behind a #define
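For example (WARN_UNUSED is an illustrative name):

  #if defined(__GNUC__)
  #define WARN_UNUSED __attribute__((warn_unused_result))
  #else
  #define WARN_UNUSED
  #endif

  WARN_UNUSED int divide(int x, int y);

Calling divide() and ignoring its result now draws a warning under GCC and compiles silently on other compilers.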


Sure, I can use asserts. But I'm as likely to forget the assert() as I am to forget to check the return value.

And even if I did: consider the faulty main() in the linked article. How would you use assert() there to make sure that result, as used after the call to foo(), is actually usable? If foo() returns -1 (because any of the calls to divide returned -1), then result is undefined.


You would put an assert inside of the divide function like so:

  #include <assert.h>

  int divide (int x, int y)
  {
    assert (y != 0);  /* halts with a callstack if the caller passed bad data */
    return x / y;
  }
Now divide is guaranteed to produce a correct result if it's called with correct data. It's up to the caller to make sure the data is correct, or the assert will fire.

Just to finish out the example to show how much cleaner asserting is compared to error handling:

  int foo(void)
  {
    volatile int x = 4, y = 28;
    return divide(x, y) + divide(y, x);
  }

  int main ()
  {
    return foo();
  }
That's not to say that error handling doesn't have its place, but it should only be used for data that you can't anticipate.


That's a precondition check. While useful (essential), it doesn't cover all "forgot to check an error code" cases.


If you're really trying to write efficient code (which is what this article claims to care about) you don't 'forget' to do things like assert that the data is correct. You _guarantee_ that the data is correct and then you process it as fast as you can without having to check.

You only run into trouble with this method if the data verification step is the computationally expensive operation.


I prefer Raymond Chen's take on exceptions: http://blogs.msdn.com/b/oldnewthing/archive/2005/01/14/35294...


This reads to me like 'writing good code is hard, and writing even better code is even harder'. With modern C++0x smart pointers (like unique_ptr) and RAII, you can get very high performance and (more) easily exception-safe code. Maybe a lot of C++ exception criticism comes from people still trying to code like C in C++?


But your exception-safe C++ code soon becomes bloated with shared_ptr and unique_ptr templates and copy ctors. Almost any C++ function, overloaded operator, or some implicit temporary object's ctor can throw an exception without warning. You must use RAII everywhere for every "resource".

Exception-safe RAII is pretty much an all-or-nothing affair.


Not sure what you mean by copy ctor bloat, but for your_flavor_of_smart_ptr, that's what the typedef keyword is for.

A common idiom I use is to typedef a class's preferred smart-pointer as ptr_t within the class namespace. Passing them around now looks like:

void some_function(my_class::ptr_t);

Exception safe, clear semantics, & no bloat.
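Spelled out, assuming C++0x's std::shared_ptr (boost::shared_ptr works the same way):

  #include <memory>

  class my_class {
  public:
    typedef std::shared_ptr<my_class> ptr_t;  // preferred smart pointer, picked once
    // ...
  };

  void some_function(my_class::ptr_t p);  // callers never name the smart pointer

Swapping the flavor of smart pointer later means editing one typedef, not every signature.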


"But your exception-safe C++ code soon becomes bloated with shared_ptr and unique_ptr templates and copy ctors."

You need to prove that; I think it's a bogus claim. There is no reason why good template code should be any larger than its hand-written equivalent. Any decent optimizing compiler will take care of the rest.


"modern C++" has changed meanings so many times in the past 10 years that it's not funny anymore. there's always some "modern" solution in C++ that ends up causing more problems, leading to further "modern" solutions that end up... you get my point.

some folks have decided to get off the C++ feature treadmill and go back to, well... getting things done with solid languages (e.g. C) instead of learning about the latest C++ non-solutions to non-problems.


C++ isn't a language like Java where language features and coding styles are handed down from on high. You get to -- you have to -- make your own decisions about which language features are useful for which purposes. Announcements of new C++ features are not, and never were, declarations that the "right way to code" was about to change. Nobody forced you to change the way you coded except by offering better ways, and what's the harm in that? Your "feature treadmill" is not compulsory unless you compulsively keep up with the Joneses (the Alexandrescus? but apparently he has moved on to D now.) The C++ coding style where I work has been the same for at least five years. The C++ coding style at my old shop evolved a little while I was there, but only because I wrote most of the code and was still learning, not because we were adopting new language features.

Anyway, have fun with C; it is certainly a language where you won't run into solutions to any problems you don't have, or solutions for very many other problems, for that matter.

P.S. Andrei Alexandrescu's book Modern C++ Design was published in 2001. That's ten years ago. How much has changed between then and now?


Really? C++ is perhaps the slowest-moving language in my repertoire in terms of "popular" approaches to solving common problems. Look how long the C++0x specification has taken.

And also: www.boost.org. Don't code C++ without it.


Hmm, well, I'm relatively young so I guess I missed all those broken promises :P Still, I think the "trying to code like C in C++" point still stands.


A compressed version can be obtained simply by comparing the initial release of C++ to modern C++, and recalling that the initial version of C++ itself shipped with, well, pretty much the same set of promises that modern C++ ships with.


Do you mean when it was called C with Classes in 1979, or when they changed the name to C++ in 1983? That's about three decades of change either way. A book about the history and evolution of C++ came out in 1994, a year before the first public release of Java. More time has passed since that book was released than between the invention of C with Classes and the publication of that book. When a language has been around for thirty years, it's hard to perceive its rate of change correctly relative to other languages. It would be best compared to Perl, which is only eight years younger, and which is also still thriving. C++ is actually a pretty slow-moving language.


I think you could replace C++ with many, many things involving computers in that sentence.


Well, if you're ever bored, read this book: http://www.amazon.com/Modern-Design-Generic-Programming-Patt...


Exactly.


"since you have to check every single line of code (indeed, every sub-expression) and think about what exceptions it might raise and how your code will react to it"

Yes, that's my concern with exceptions as well. It seems like the Java model (if I remember it right -- it's been a decade), which requires a method to either handle an exception that a sub-method throws or explicitly allow it to be thrown, would be preferable and help people avoid accidentally ignoring an exception.

I'd like to see support for exceptions like this in C++0x, but I haven't bothered to check to see if it's there...


You're thinking of "checked exceptions", and C++ already has them (pre-0x) via the throw() exception specification on function declarations.

Checked exceptions are a hotly debated topic, and I think the world has finally come around to deciding that they are a bad idea overall. Google it and see for yourself.


I don't know that it's accurate to compare C++ exception specifications with Java-style checked exceptions. Exception specifications aren't checked at compile time and typically don't do what you want at runtime.


*shrug* You're probably right. I've never really used them in C++, especially because I tend to avoid C++ in favor of the smallest possible amount of C to bootstrap primary use of a much higher-level language.


As mentioned, C++ exception specifications are checked at run-time, not compile-time. And if your function's exception specification declares `throw(FooException)` and you call some third-party library function (which may or may not have its own exception specification!) that throws a `BarException`, C++'s runtime checks will `terminate()` your program!
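A minimal sketch of that failure mode (the type names are made up):

  struct FooException {};
  struct BarException {};

  void library_call() { throw BarException(); }  // third-party code

  void mine() throw(FooException)  // checked at run-time only
  {
    library_call();  // BarException violates the specification:
  }                  // std::unexpected() is called, which by
                     // default calls terminate()

  int main() { mine(); }  // compiles cleanly, aborts at run-time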

C++ exceptions and exception specifications are pretty much all the worst possible design decisions. :(


Requiring all exceptions to be checked is evil. It leads to horrendous abstraction leakage (I need to let every caller know I use FooBarWidget in the core of my application??), improper error handling (just catch whatever comes out and move on so I don't have to declare it), or ubiquitous error wrapping (every class has a try...catch that turns the called exceptions into new exception types to throw).

Ironic that C++ is moving towards that as the Java community is moving away (by relying more on RuntimeException, which isn't checked).


> It leads to horrendous abstraction leakage (I need to let every caller know I use FooBarWidget in the core of my application??),

If your code catches any exceptions that might be thrown by its use of FooBarWidget, then you wouldn't need to specify it as an exception that your code might throw, right? If it doesn't, then your code exposes to callers that it uses FooBarWidget every time FooBarWidget throws an exception.


I didn't see any timings in his code.


Yes. How can he possibly say that "exceptions are faster" without such timings? C++-style, unwind-the-stack-and-call-destructors exceptions must have a really high run-time cost when an exception actually occurs. Also, his instruction count misses the instructions that run while an exception is being handled. And even when an exception doesn't occur, it isn't obvious that no run-time code gets executed.


But you have to unwind the stack either way. With exceptions, the exception handling does it. With error checking, you do it every time you type "if (something() == -1) return -1;"
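Spelled out (step_a/step_b are hypothetical):

  int step_a(void);  /* each returns -1 on failure */
  int step_b(void);

  int do_both(void)
  {
    if (step_a() == -1) return -1;  /* propagate by hand... */
    if (step_b() == -1) return -1;  /* ...at every level of the stack */
    return 0;
  }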


Nice article, but as far as I'm concerned you don't need to _prove_ this, as it's a logical fallacy to start with.

If an exception is being thrown, then something is wrong; if nothing is wrong, then you implemented your exceptions incorrectly, as exceptions shouldn't occur in normal program flow.

So to recap: you're writing a crap-ton more code just so you can return your error code _slightly_ faster than an exception would. You're optimising your failure cases, which (in the _vast_ majority of cases) is UTTERLY ABSURD.


It's not slightly faster, it can be an order of magnitude faster.

Example: a listen loop which handles disconnections through exceptions. This isn't stupid but it's not very efficient.


Why is it not stupid? If disconnections are part of normal application flow then why would you use an exception?

You are correct; I was somewhat disingenuous with _slightly_ faster. It is lots faster, but lots faster in error cases, which from a philosophical angle is still absurd.

As long as you use your exceptions for "bad shit" (uncommon error conditions or completely unexpected failures or returns) then I still strongly believe that the performance comparison is silly.


Maybe I'm missing something. Are you saying that in your application, handling rarely occurring, unanticipated disconnections via exceptions has such high overhead that it results in unacceptable performance?


I think you partially missed the point: it's not that throwing exceptions is slow, it's that even having them in your code is [allegedly] slow.

According to the article there are two methods used to implement exceptions in C++ - one that has higher overhead when you throw an exception (zero-cost) and one that has higher overhead when you call a function that might throw an exception (setjmp/longjmp).

Unfortunately the author didn't go over the latter method, which would have been more interesting.
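For the curious, the setjmp/longjmp strategy written out by hand looks roughly like this (a sketch of the idea only, not the compiler's actual codegen; a real implementation must also run destructors while unwinding):

  #include <csetjmp>
  #include <cstdio>

  static std::jmp_buf handler;

  void divide_or_die(int x, int y, int* out)
  {
    if (y == 0)
      std::longjmp(handler, 1);  // "throw": jump straight to the handler
    *out = x / y;
  }

  int main()
  {
    int result = 0;
    if (setjmp(handler) == 0) {      // "try": pays for a context save
      divide_or_die(4, 0, &result);  //  even when nothing ever throws
      std::printf("%d\n", result);
    } else {                         // "catch"
      std::printf("division by zero\n");
    }
  }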


Perhaps the worst problem with checking vs. exceptions is that either solution dominates your code structure, obscuring the algorithm logic.

The holy grail would be some method of ensuring the code cannot fail, e.g. weirdly constrained argument semantics, thus separating the algorithm from its constraints instead of shuffling them together on the page like a deck of cards.


If only C++ had some sort of static type system which could be leveraged to provide compile-time checks...

But seriously, this is a large part of the power of C++'s type system. Taking the article's example, if the argument types were of (user class) 'non_zero_float', there's no possibility for error.

You still have to check that your input is non-zero at some point, but you've now focused it into one place (the 'non_zero_float' class ctor), and other chunks of your program depending on those type semantics no longer need to worry about it.
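A bare-bones sketch of such a type (the assert stands in for whatever single validation point you choose):

  #include <cassert>

  class non_zero_float {
  public:
    explicit non_zero_float(float v) : value_(v) {
      assert(v != 0.0f);  // the one centralized runtime check
    }
    float get() const { return value_; }
  private:
    float value_;
  };

  float divide(float x, non_zero_float y) {
    return x / y.get();  // no check needed: y cannot hold zero
  }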


You can really make that type do a compile-time check on runtime values?

It would be better to have some way of getting the compiler to optimize constraints, perhaps by proving at compile time that the error is impossible.


You can't prevent exceptions when you do IO or dynamically allocate memory.


Today, with terabyte hard drives, gigabytes of RAM, and broadband connections, when is binary size a more important factor than both execution speed and ease of development? Especially when the binary-size difference is probably not huge?

Shouldn't the advice of this article just be "use exceptions"?


"when is the binary size a more important factor than both execution speed and ease of development?"

Binary size (or maybe more accurately in this case, binary code layout) can be highly relevant for speed due to the instruction cache.

As for ease of development, there are issues with C++ exceptions regarding this as well: some C++ libraries aren't exception safe, and neither are practically all C libraries. This is something you need to worry about whenever you pass a function pointer into a library, as there might be an exception-unsafe function higher up in the stack. Propagating an exception up through it is potentially extremely dangerous.

That said, using exceptions can still be a good idea, especially if your code doesn't need to be portable or if you know the platforms in advance, and you are careful about passing around function pointers. All you need to do is ensure that any of your code that might be called from third-party code with questionable exception semantics won't throw or propagate any exceptions, e.g. by installing a catch-all exception handler in it.
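For example, a sketch of such a wrapper around a qsort-style callback (the comparison logic is elided):

  extern "C" int compare_cb(const void* a, const void* b)
  {
    try {
      /* ... comparison that may throw ... */
      return 0;
    } catch (...) {
      return 0;  /* never let an exception unwind through C frames */
    }
  }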


Binary working set sizes are often lower with exceptions than without, because exception handling code can be moved elsewhere by the compiler. Error-checking code, on the other hand, cannot be so easily detected, and hence moved.

I think the C++ implementation of exceptions has a lot to answer for though, in poisoning too many developers on the concept. It really is an awful implementation.


C++ seems full of missed opportunities. I fear a lot of them are due to the slavish backward compatibility to C.


Not all development targets desktops or laptops.


Space is time.

From the Gentoo wiki: "-Os is very useful for large applications, like Firefox, as it will reduce load time, memory usage, cache misses, disk usage, etc. Code compiled with -Os can be faster than -O2 or -O3 because of this. It's also recommended for older computers with a low amount of RAM, disk space, or cache on the CPU. But beware that -Os is not as well tested as -O2 and might trigger compiler bugs."

I believe Apple compiles a lot (or all?) of their stuff with -Os.

Anyway C++ exceptions are awful. ;)


How many hundreds of megabytes or gigabytes is our operating system installation?

Size matters a lot because hard drives are stinkin' snails compared to CPU and RAM. All that stuff needs to be loaded from somewhere, and while SSDs have changed the scheme a bit, there's still a major gap between storage and memory.


On my current platform (STM32L micro) we'll have 256K flash, 48K RAM, no hard drive. It's very reasonable to use C++ on such a processor but exception handling might not be something you want to pay for.


It is not. Except for trivial things, using C++ on a system with 48 KB of RAM is complete nonsense (48 KB is quite a good amount of memory for plain C, but not for C++).


Symbian uses its own kind of exceptions (called TRAP, I think), and I've heard that the decision not to use C++ exceptions was founded on binary-size constraints.


Symbian LEAVEs and TRAPs predate C++ exceptions. Symbian/EPOC is old! :)


Cache sizes have not seen gains proportional to RAM or HDs.


Sure they have.

My 486 built in 1993 had 8 KB cache, 4 MB RAM, and a 120 MB HD.

My desktop built in 2009 has 2 MB cache, 2 GB RAM, and a 250 GB HD.

Okay, the cache has lagged behind by one or three doublings compared to the other storage types. But that's still pretty close to proportional in a world of exponential gains.


Memory speeds have increased much more slowly than processors have, so the cost of page faults, bad locality, etc. have grown proportionally worse over time.

http://seven-degrees-of-freedom.blogspot.com/2009/10/latency...


You're comparing L1 cache with L2 cache. Even a modern CPU like the Core 2 Duo only has about 32 KB of L1 cache per core.


> My 486 built in 1993 had 8 KB cache, > My desktop built in 2009 has 2 MB cache

That "2 MB" is either L2 or L3, which your 486 didn't have.

The L1 on your desktop is not much larger than the only cache that your 486 had.

As frequency increases, the length of the path that a signal can travel in one clock decreases. Fortunately, cycle-time decreases have been accompanied by transistor-size decreases, so the net result is that L1 sizes have been roughly constant.


I bet that your 1993 486 had 256KB of L2 cache on the motherboard, so from 256KB to 2MB is less than 10 times, for 500 times the RAM and 2000 times the hard disk size.


The value of cache does not increase linearly with size. You also run into latency issues, so having an L4 cache on a modern motherboard would have little value.


The value of cache memory is its hit ratio. If increasing the cache size by 50% raises the hit rate from 95% to 99%, it is worth it, as the 5% of cache misses could cut CPU performance in half.
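Back-of-the-envelope (the latencies here are assumed for illustration): with average access time = hit_time + miss_rate * miss_penalty, say 2-cycle hits and 100-cycle misses:

  95% hits: 2 + 0.05 * 100 = 7 cycles per access
  99% hits: 2 + 0.01 * 100 = 3 cycles per access

More than a 2x difference from a four-point change in hit rate.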


Good point :)

Here's a different take, then, and probably harder to verify, but I am guessing is true:

Cache utilization has increased much more than RAM or HD, not just because programs are handling more data but also because of increases in program size and number of programs being run simultaneously.

Your hard drive is probably not full... RAM could be, depends on your workload... but I bet most caches are churning like mad, more than they used to be.


A few issues. Does he actually benchmark? Things don't always work in practice the way you would think.

If you are going for ultra-high performance, do you even have error-checking? Do you write it in assembler?



