Size cost of C++ exception handling on embedded platforms (andriidevel.blogspot.com)
172 points by Tatyanazaxarova on May 16, 2016 | 80 comments



In a compiler's intermediate representation, exceptions are typically modeled as multiple returns. E.g. in LLVM the `invoke` op specifies a label to go to if an exception is returned: http://llvm.org/docs/LangRef.html#invoke-instruction

By the time this reaches the backend, the exception handling is usually converted into "zero cost" exceptions, where raising an exception calls a handler which uses a lookup table to work out how to unwind the stack, what destructors to run, etc. Here's a really good explanation I found: https://mortoray.com/2013/09/12/the-true-cost-of-zero-cost-e...

This "zero cost" exception handling has no performance penalty on normal CPUs in normal execution as exceptions are exceptional so the conditional call to the handler easy on the branch predicter and the clutter of unwinding doesn't fill the caches.

(Zero cost exception handling first appeared, iirc, in Metrowerks compilers, but quickly became the standard way on most modern platforms and is now in the ABIs.)

The multiple-return approach may well be much better for total program size.

Whether you can use the multiple-return approach on bare ARM, when the ARM ABI specifies zero cost exception handling, is something I don't know.


GCC does employ DWARF exception handling (the zero-cost EH model) on ARM7TDMI. While the cost is zero in time, it is not zero in space. The EH tables, unwinder code, and support code for them have a non-zero space cost. While the space cost is normally considered negligible in a workstation or server environment, in a microcontroller environment it most certainly is not.


well, if you want exceptions that cost practically nothing in either time or space, use something else :)


To add clarity to this explanation where I think it's needed: The llvm 'invoke' op is effectively a pseudo-op and does not necessarily indicate what a backend will do. It looks something like this (the exact details, I forget):

  llvm.invoke function-to-call, block-if-success, block-if-failure
The backend translates this into:

  function-to-call
  block-if-success:
  ....
Notice that there is no conditional. However, if in function-to-call, you have:

  throw exception()
This will be translated into a table lookup that checks which exception handler is valid for the current block. This is the slow part and is generally no worse (theoretically) than frame-based exception handling.


How slow is slow? I don't have a good sense of scale for this. Are we talking hundreds of nanoseconds, or microseconds, or what?


In my experience (desktop and server, haven't tried it on a microcontroller) exceptions have for all practical purposes zero cost until you actually throw one, at which point the cost is ten thousand clock cycles. That figure has stayed surprisingly constant over two decades of hardware and compilers for three different languages. (And, obviously, it's a negligible cost for an exceptional situation, but you wouldn't want to use it for flow control in a crunch-heavy inner loop.)
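
If you want a figure for your own setup, a throw/catch round trip is easy to time. A rough sketch (not a rigorous benchmark: it unwinds only one frame, and the iteration count is arbitrary):

  #include <chrono>
  #include <cstdio>

  int main() {
      constexpr long N = 100000;
      auto t0 = std::chrono::steady_clock::now();
      for (long i = 0; i < N; ++i) {
          try { throw 1; } catch (int) { /* discard */ }
      }
      auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                    std::chrono::steady_clock::now() - t0).count();
      std::printf("~%ld ns per throw/catch\n", (long)(ns / N));
  }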


FWIW, the Symbian file server used an exception to get out of a crucial loop. It was a bit of laughable trivia, and a bunch of us set out to prove it was a bad idea... but on profiling it was provably noise. We moved from a setjmp implementation to EABI zero-cost and it was still a non-issue.

Just a data-point and a good case of profiling beating intuition :)


Theoretically, it's just as fast as frame-based exception handling, which follows pointers up the stack. However, performance can be affected by things like code locality. For example, if the exceptional code hasn't been loaded into memory because it hasn't been needed, your program will need to stall while that code is paged in.

So basically, assume it is fast as hell (zero-cost) until you hit an exceptional case (so don't use exceptions for flow control.) In fact, I no longer use exceptions at all. I just "make a note and move on". Having objects that don't cause catastrophic failures when they are "null" is a good first step.


I'm not sure what you mean by "whether you can use the multiple-return approach", since the ABI is usually irrelevant on bare metal. The only issue would be tooling support and the standard library.


This article is very interesting, but it addresses only the constant overhead. It would be interesting to know if there is also a factor on the size of the program (Sexp = k*Snoexp + C). I'm not sure whether this factor (here 'k') will be greater or smaller than 1 (without exceptions, you have to generate code for every if/then needed to manage the error codes).

If k<1, the choice becomes a trade-off: small programs are smaller without exceptions, and big programs are smaller with exceptions.

Has anyone tried to measure this factor?


Exceptions are a really valuable programming tool, but you can live without them. The cost of not having exceptions is usually giving every function in your code the option to return an error value.

It takes some time getting used to, but it also creates the habit of checking error codes on every function you call. It also makes handling errors something you do as soon as possible (and in the most recoverable way).
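
For readers who haven't worked this way, a minimal sketch of the style (all names made up):

  #include <cstdio>

  enum class Err { Ok, IoFailure };

  // Every fallible function returns an Err and writes results via out-params.
  Err read_sensor(int* out) {
      *out = 42;                       // pretend this talked to hardware
      return Err::Ok;
  }

  Err sample_and_log() {
      int v = 0;
      Err e = read_sensor(&v);
      if (e != Err::Ok) return e;      // check and propagate, as early as possible
      std::printf("sensor=%d\n", v);
      return Err::Ok;
  }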


True, and a good type system will enforce that you check for errors. However, depending on how often errors occur in your program, this can actually make your code slower. Good exception handling implementations are "zero-cost", meaning that there is no performance impact in the happy path (when no exceptions are raised). C-style error handling (an if statement) is actually slower, at least in non-embedded environments.


In C, there are ways you can get the optimizer to turn if-based error handling into something that only spends one or two instructions; using __builtin_expect[0] can decrease the cost of a condition by letting the branch predictor pick the "right" thing to expect. That way you end up only paying the price of the branch (and maybe the jump) instructions without any pipeline flushes or stalls.

[0]: http://blog.man7.org/2012/10/how-much-do-builtinexpect-likel...
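
A minimal sketch of that pattern (GCC/Clang builtin; the `unlikely` wrapper macro is a common idiom, not something from the linked post):

  #define unlikely(x) __builtin_expect(!!(x), 0)

  int do_work(int input) {             // returns <0 on error
      return input < 0 ? -1 : input * 2;
  }

  int run(int input) {
      int rc = do_work(input);
      if (unlikely(rc < 0))            // compiler lays this branch out as cold
          return rc;
      return rc + 1;                   // happy path stays in the hot, fall-through code
  }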


I wonder how much slower. I don't have a good sense of scale for this. Are there any measurements out there of a CPU correctly predicting the branches of all "happy path" error-code checks, showing how much overhead that is?


I'm going to go as far as saying that having to manually add error checking and propagation code to every function in your program is so bad (in several ways) as to be flat-out unacceptable. In my experience, the main kind of situation where it's okay to do without exceptions is when you can handle an error by printing an error message and promptly exiting the program. (Of course, this can be regarded as throwing a coarse-grained exception for which the operating system provides the handler and cleanup.)


Exceptions have the same problem as return codes in that you need to be careful to leave things in a consistent state if a function throws an error. RAII can help to some extent in both cases, but doesn't magically enforce that your classes and data structures are internally consistent.

My experience has been that error paths have a way of getting things into inconsistent states, and then you're in trouble regardless.

Checking return codes is annoying, but if you want to write robust software, your code needs to be in some way aware of every possible error path.


You can indeed live without them. In C++, the trade-off is that you are now responsible for manually writing and maintaining any and all error propagation code, including remembering to check every relevant return value.


Plus you're pretty much screwed on RAII, since you cannot return a value from a constructor. So by not using exceptions in C++, you also often end up needing a separate init function that can return errors.
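
A sketch of the resulting two-phase-init pattern (class and names hypothetical):

  class SpiDevice {
  public:
      SpiDevice() = default;        // constructor can't report failure without exceptions
      int init(int bus) {           // so the fallible work moves here
          if (bus < 0) return -1;   // error code instead of a throw
          bus_ = bus;
          return 0;
      }
  private:
      int bus_ = -1;
  };

  // Caller must remember the second step:
  //   SpiDevice dev;
  //   if (dev.init(2) != 0) { /* handle the error */ }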


> ...including remembering to check every relevant return value.

To be fair, there are compiler flags to check this. And if not, there are static analysis tools that do the same.


I was slightly amazed I had never heard of such a thing, but sure enough, it does exist: It's -Wunused-result on gcc (https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html). You still need to manually add the warn_unused_result attribute to each function though.

There's also the [[nodiscard]] attribute in C++17.
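
Both mechanisms side by side (a sketch; compile with warnings enabled):

  __attribute__((warn_unused_result))            // GCC/Clang attribute, pre-C++17
  int legacy_op() { return -1; }

  [[nodiscard]] int modern_op() { return -1; }   // C++17

  void caller() {
      legacy_op();    // warning: ignoring return value of 'legacy_op'
      modern_op();    // warning: ignoring return value declared 'nodiscard'
  }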


Yup, that's how Rust does it, and I vastly prefer it to C++ (or Java) exceptions. The compiler forces you to handle the value, so you still have all the enforcement of checked exceptions.

Option/Result are really well designed and let you express all sorts of wonderful flow. Being able to .and_then(), .or_else() or the combination of them and many others is just sublime. The doc page goes into quite a bit of detail: https://doc.rust-lang.org/book/error-handling.html


I think this shows clearly that compilers are still quite bad at "optimising in the small", since they all tend to be based on the "link in everything and let later passes strip some of it out" principle, whereas what's really needed is for them to link in code only if it can be proved it will be needed (or can't be proved to be unneeded). Even after the author's efforts at removing unused code, there's still plenty remaining (more than an order of magnitude difference). Ideally the compiler should be able to optimise the exception code to become identical to the no-exceptions code, something that a human could easily do in this case but the compiler couldn't.


Link Time Optimisation is what you're referring to (-flto), and that won't solve the issue. Exception handling is inherently unsuited to embedded systems.

The size & performance overhead is one thing, but the model of attempting to "handle" errors is incorrect. In most cases, a PANIC situation should lead to a watchdog reset.

Exceptions should be for exceptional cases, not the usual program logic. In embedded systems, exceptional cases == reset.


> Exceptions should be for exceptional cases

Why? Simply because the words look alike?

I think you're painting a false dichotomy here. There's a whole world of program state between "usual program logic" and "exceptional cases".


defined recovery cases are not exceptions.

exceptions are breaks from what you had assumed to be constant.

There is a reason program logic is not placed into exception handlers... the readability would be completely destroyed.


You're thinking of asserts, not exceptions.

Exception usage can usually be split into three buckets: (1) programming errors where abort() or equivalent isn't suitable (e.g. libraries); (2) semantic errors at the application level where language provided unwinding is used as a convenience to jump back to the core event / request loop and provide an error to the user or client; and (3) to provide out-of-band error information for failures when interacting with non-deterministic systems (e.g. failure to open a file or communicate with a device, where the natural function to write returns a value, rather than a success or error code).

There are alternative solutions to all three, and all three may not apply to every environment. For embedded systems, case (1) may indeed not apply. But cases (2) and (3) may be useful as a programming convenience to automate the idiom of checking error conditions and aborting the current operation. If checking error conditions and aborting is fully automated (like monadic Result handlers in e.g. Rust) then you start approaching an isomorphic semantics to exceptions, with no necessary difference in implementation details.
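
For case (2), a sketch of the shape this usually takes (all names hypothetical):

  #include <cstdio>
  #include <stdexcept>

  struct RequestError : std::runtime_error {
      using std::runtime_error::runtime_error;
  };

  struct Request { int id; };

  Request next_request() { return Request{1}; }    // stand-in
  void handle_request(const Request& r) {          // may throw, arbitrarily deep inside
      if (r.id < 0) throw RequestError("bad request id");
  }
  void reply_error(const Request&, const char* msg) {
      std::fprintf(stderr, "error: %s\n", msg);
  }

  void event_loop() {
      for (;;) {
          Request r = next_request();
          try {
              handle_request(r);                   // unwinding jumps straight back here
          } catch (const RequestError& e) {
              reply_error(r, e.what());            // report to the client, keep serving
          }
      }
  }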


nope, definitely not thinking of asserts; they have nothing to do with this situation.

program logic encoded in an exception handler is undeniably less easy to read than explicitly coded error cases.

Also, the non-locality of the decision making means the further away from the error-site you are, the more context you have to keep in your head.

This is similar to what deep inheritance hierarchies suffer from: non-local logic. You end up jumping all around your source tree trying to figure out the full context in which an error occurred.


"program logic encoded in an exception handler is undeniably less easy to read than explicitly coded error cases"

For which category of exceptions? For interaction with non-deterministic systems where you can make a localized decision - and this is fairly rare - I'd agree with you. For all the other categories, I think you're wrong. If you never used exceptions in the other ways, this will of course colour your thinking.


That's a very strong statement. You really believe there can't possibly exist any instance where program logic in an exception handler ends up being easier to read? That assumes you know enough about every possible permutation of logic flow to know that exceptions could not benefit it.


Of course not, I'm talking about a general rule.

No rules are universal.


Sure. The real point I'm getting at is that overly strong statements lead to arguments, where people take your statement at face value. Hyperbole is rarely useful in a serious discussion. It just means people have to work to determine your real stance because it may not match exactly what you said.

You could have said "I've never seen a case where exception handling resulted in more clear and easier to read code, and I doubt I'll ever encounter such a situation" and I think that would convey your opinion clearer (assuming I understand it correctly).

> No rules are universal.

We're talking about CS and programming here, where there are plenty of cases where things have been formally proven. Some rules are universal. No reason to use absolutist statements where they don't apply.


(replying to both)

protocol stack logic is just that... logic, no need to short circuit returns with the use of exceptions.

Exceptional cases are things that you have a priori determined to be constant: the existence of a FLASH device, or an RTC working correctly, for example. If any of those devices fail, it could be considered an exceptional case, and therefore the only action left to you is to reset and hope the condition clears. Boot-loops obviously have to be handled also.

There is a world of difference between defined recovery cases and just throwing an exception because you don't know how to "handle" it.


"You" this particular function might not know how to handle it, but it's possible that some other part of the program does.

To run with your example, I could imagine handling a missing storage device in lots of different ways, including a) Retry the operation b) Alert the user and block until a device (re)appears c) Switch to an alternative storage location d) Replace the to-be-read-in values with some sensible defaults.

It seems to me that it would be easier to write this logic once and place it in an exception handler than it would be to wrap every I/O operation in a thicket of if/else clauses.
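
A sketch of writing that policy once (exception type and helpers hypothetical):

  #include <stdexcept>

  struct StorageUnavailable : std::runtime_error {
      using std::runtime_error::runtime_error;
  };

  void wait_for_device() {}        // stand-in: block until the device reappears

  // One recovery policy (option b above) wrapping any storage operation.
  template <typename Op>
  auto with_storage_retry(Op op) {
      for (;;) {
          try {
              return op();         // any I/O inside may throw StorageUnavailable
          } catch (const StorageUnavailable&) {
              wait_for_device();
          }
      }
  }

  // usage: auto cfg = with_storage_retry([] { return read_config(); });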


Depends on what the embedded system is doing. Right now I am working on a system with a real-time cycle handler and a non-real-time thread that handles RPCs. For the real-time part, what you say is more or less right.

But the RPC thread is a lot more like non-embedded code: if I detect an error deep in the stack, I can just throw an exception and catch it at the top level. The PC gets an error message and the embedded system treats it as a no-op.


defined recovery cases are not exceptions. that's program logic.

Placing program logic into exception handlers is obfuscation.


Can you unpack this for me? Are you talking about the RPC error handling I described?

As best I understand your argument, it is circular. You assert that if it is an exception, then the only thing to do is reset. When I give a counter-example, you define it as "not an exception" -- presumably because a reset is not desirable.

I agree that defined recovery cases are part of the program logic. But what reason -- beyond a slogan -- do you have for rejecting exceptions as a tool for implementing that part of the logic? How is it "obfuscation" to do the catch-and-report in the RPC front-end?

The danger I know of with exceptions is that throw sites are invisible, which makes it easy to neglect clean-up actions on error paths. Is that what you meant?


all I'm saying is that if you have a defined recovery process, put it in normal logic, not exception handlers.

If you don't have a defined recovery process, that technically would be an exceptional case... but because of the nature of embedded systems, a reset is the only reasonable recovery mechanism (a catch all).

Therefore, there is no place for exception handling in embedded systems (as a general rule. Of course there are exceptions... ;o)

The dislike of exception handlers is also that they harm readability: the further you get away from a call-site, the more context you have to hold in your head, making it more complex.

Keeping error recovery local to the logic improves everything, from readability to code size to complexity and performance.


How much of this space increase is because of dependencies on the STL library that is included to do the exception handling, versus the actual exception handling itself?

I would like to see a comparison that doesn't use STL for the exceptions in C++ and then measure that difference.

I find that the use of STL increases the size of the resulting binary significantly by itself.

(Now, since the STL is the official C++ library, maybe this is still a fair comparison; but the STL is known for bloating the size of binaries.)


I realize we're talking embedded here, but what is "bloat" in the context of 2016? I have a suite of 35 command line apps with header-only dependencies on Boost, and the whole shebang builds in under ten seconds on a 2012 MacBook. Many of the binaries (64-bit) are under 200k.

EDIT: I've hand-coded 68xx, 56xxx and 8051 when 4K RAM was a luxury. But this "C++ is bloat" and "exceptions are universally wrong" discussion feels a little 1995 to me. It's 2016: if you have software it's not that hard to find a $5 system to run it on. And people who actually write embedded code know how to pick their tools. And at least one of them isn't necessarily afraid of STL or exceptions... me.

EDIT 2: jotux comment below nails it.


>but what is "bloat" in the context of 2016?

I've recently (last 3 years) moved my bare-metal embedded software development from C to C++ so I've actually run into issues of STL adding too much to the binary size.

To give a real-world example, I'm working on a custom device that is a type of data cartridge. I have a "big" ARM-A5 (Atmel SAMA5D36) processor running embedded linux that does all the heavy lifting but goes down when main power is removed. I have a small ARM Cortex-M0 (LPC824) that is always running on a battery and manages power, watches for button-presses, and a few other janitorial items on the device. The small processor has 32kB of program space and 8kB of RAM, and my project is mostly C++. Here are some specific examples of how using any STL will bloat program size:

Using std::string, and touching any of the STL string handling, immediately adds 20kB to my binary. Using std::list costs 6-10kB of program space (actually not that bad, and std::vector is pretty efficient). Adding exception handling adds 5kB of code, and a single exception adds 6kB of code (not STL, but just an example).

I've worked on a lot of embedded projects and my general rule of thumb for, "How big of a part do I need to use fully-featured C++?" is 256kB of program space -- something most bare-metal embedded software engineers would call a large amount of program memory. On projects where I have less than that I don't use STL and basically use C++ as C with classes, function/operator overloading and templates.


Just to add to what jotux already pointed out. $5 is an order of magnitude too much for the systems I work on. Calling a string handler can mean $0.35 more per unit cost for the extra space needed. Across 10k units, this matters enough to care.


200KB is way too large for a lot of embedded micros.


I'm sorry you got downvoted. You sparked a conversation with very interesting information I didn't know myself.


I think we have different ideas about what an embedded environment is if a macbook is your reference platform.


"64-bit" and "using Boost" are the clues that I was making a contrasting, not supporting, argument.


> I would like to see a comparison that doesn't use STL for the exceptions in C++ and then measure that difference.

Even better, how about getting info directly from the binary about how much of the code was inlined from STL headers?

If you compile with debugging info, this info exists already. It's put there so that the debugger knows what source to show you when the code breaks on a particular instruction.

I'm working on a tool right now that would offer this information.


Welp time to use Option types for error handling ;)


Rust still requires stack unwinding info (DWARF CFI), so it's still going to be bloated on anything non-trivial.


You can turn off unwinding as of a few days ago in nightly. We'll see how long it takes to hit stable.


This is one of my issues with Rust: there are a couple dozen knobs that need to be turned, and that may or may not break things, to get somewhat sensible behavior (imo). And creating a shared library with Rust, with its default behavior of crashing on OOM (Linux can be configured to not do overcommit), etc., is just plain bad... oh yes, there is a knob for that too!


But still, it's a 49x increase between code with exceptions and without. It would be logical to assume that the additional exception handling code is the same regardless of the original code size, but I'm not sure how correct such an assumption is.


The exception support code is a constant overhead. The remaining difference in size will come from the unwind tables - but it's difficult to compare apples with apples there, because a program written to not use exceptions will need additional error handling logic, which has its own significant size overhead.


Exceptions are the wrong model for embedded systems, generally if something exceptional happens, you need a reset. Attempting to "handle" exceptional cases is incorrect, a reset is the correct option.

Exceptions != normal program flow.


You can divide embedded systems into industrial control systems and headless consumer devices, like set-top boxes or smart TVs. For the latter, you really don't want to invoke a device reset every time the input stream is malformed.


true, but I would not consider a set-top box a true embedded system, rather a specialised user-facing system. So yes, you wouldn't reset a user-facing system.

I would characterise embedded systems as those having to maintain functionality without user intervention.

(also, comms issues such as input stream errors are the normal program logic, i.e. defined error cases).


That depends entirely on what your embedded system is doing.


True; however, people who come from languages such as Python are used to handling exceptions rather than using `if else` checks. I think it's a kind of stubbornness on the dev's part not to change their coding patterns depending on the platform.


Ada (Ravenscar profile) allows exceptions: http://www.sigada.org/ada_letters/jun2004/ravenscar_article....


I'd also be curious about the run-time costs (performance) of an exception-ready system that doesn't actually ever handle exceptions vs one that doesn't consider exceptions a possibility.


I'm a beginner, but why use exceptions instead of returning an error code and displaying a message?


Here's a great StackOverflow question/answer on this: http://stackoverflow.com/questions/4670987/why-is-it-better-...

I program in Java and I like that methods have* to declare what exceptions they throw. This makes it very obvious from a consumer standpoint.

* except for runtime exceptions


Checked exceptions were/are one of Java's biggest mis-features (IMHO). You can't force a client to deal with exceptions. Checked exceptions end up encouraging empty catch blocks, which is usually worse than propagating an exception.


I'm a C++ programmer and I wish I had checked exceptions. Checked exceptions are really equivalent to an Either monad, except that they look better in imperative code, especially if you lack some sort of 'do' notation.

What Java lacks is a) enough compile time abstraction capabilities to inspect and parametrize exception specifications so that application specific exceptions can be forwarded through generic/framework code and b) a way to defer the checking to runtime (possibly via a special annotation) so that there would be no reason to ever write empty catch blocks.


If the method you're designing is going to throw an exception that wouldn't make sense to handle from a consumer standpoint, creating an exception that extends `RuntimeException` would be my advice. I agree that empty catch blocks are not a good idea.


It would be a good idea to catch it. Except Java has a FileNotFound exception when trying to open something, which to me should just be part of the normal control flow and is not something exceptional.

That and you tend to get code that just "throws Exception" propagated all the way down to main. I know I've done it for little projects at Uni (when I last programmed in Java, 3.0 I think?)


"Displaying an error message"... where?

This is about embedded systems. Quite often there is no user and no display (for status purposes).


An exception is not simply a return statement, which only returns program control to the calling function. An exception can transfer program control to anywhere up the call stack; it really behaves more like a scoped goto with context information than like a function return.

Put differently: without exceptions, each function in your call stack is responsible for error handling of its callees, and for returning the correct error information to its caller. With exceptions, you only need error handling at the catch site.
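
The difference in code shape, as a toy sketch:

  #include <stdexcept>

  // Error-code style: every level forwards the failure by hand.
  int load()   { return -1; }                                // fails
  int middle() { int rc = load(); if (rc) return rc; return 0; }
  int top()    { int rc = middle(); if (rc) return rc; return 0; }

  // Exception style: the middle layer doesn't mention errors at all.
  int load_x()   { throw std::runtime_error("load failed"); }
  int middle_x() { return load_x() + 1; }
  int top_x() {
      try { return middle_x(); }
      catch (const std::exception&) { return -1; }           // the one catch site
  }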


This is one of the opinions the Go language is built around. Exceptions are the one kind of non-local control flow we're still using long after GOTO was deprecated.


longjmp() might be the better analogy.
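
The analogy in code; setjmp/longjmp works from C++ too, though unlike a real exception it runs no destructors while jumping:

  #include <csetjmp>
  #include <cstdio>

  static std::jmp_buf env;

  void deep() { std::longjmp(env, 1); }       // the "throw"

  int main() {
      if (setjmp(env) == 0) {                 // the "try"
          deep();
      } else {
          std::puts("recovered");             // the "catch"
      }
  }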


Returning an error code is error-prone: you need to check for the error code for every function you use.


you also need to handle all exceptions... there is no escape.

to have correct behaviour, all error cases need to have defined recovery. in this respect, error-return === exceptions.


Not true. If you don't catch an exception, you're automatically saying that it's a problem you can't fix, and your program will be terminated. If you fail to check return codes, then your program tries to keep running in a broken state.


You're misunderstanding...

The whole point is to never run in a broken state.... that path leads to expensive misery.

The point of resetting is to clear out bad state... if the condition has no defined recovery process then you cannot continue safely.

I'm not saying to not handle an error... I'm saying to not attempt recovery from undefined errors.

If you don't know what went wrong, don't attempt to 'handle' it... reset.


The difference is that running in an unknown/broken state is the default when using error codes. Getting into such a state with exception handling typically requires explicit programmer action, such as the "catch everything and ignore it" pattern.


Sorry, that is just wrong-headed.

What is needed is real thought about error recovery, not syntax that encourages programmers to just put in empty exception handlers.

Your point is arguing for error returns, in my view.


I recently reimplemented machinery in our compiler which implements C++ exceptions. I figure it might be useful to share a few interesting issues.

The way C++ exceptions are used creates very interesting interactions in the design of their implementation.

There's a hidden cost many forget: call frame information restricts what the compiler can do. Why is this the case? Even if exceptions are super rare, the compiler still needs to make sure the program will obey the language rules if an exception is somehow thrown at an appropriate program point.

The compiler must emit instructions which can be represented by the call frame information, in order for the unwinder to be able to reason about the stack.

This is usually not a problem on Linux because the DWARF CFI is very expressive. That expressiveness comes at a cost: it is quite inefficient when it comes to size.

Other platforms, Windows NT (ARM, x64 and IPF) and iOS, recognized that this increase in size is a bad state of affairs and thus aimed to make the CFI more compact. By doing this, they greatly reduced the size of CFI but unfortunately created restrictions on what a compiler can do.

As for trickiness inherent in C++ exceptions, C++ supports a fairly esoteric feature: exceptions may be rethrown without being in a try or catch:

  void rethrow() {
    throw;
  }
An easy way to make this sort of thing work would be to thread a secret in/out parameter which represents the exception state.

But how is this typically implemented?

Well, remember, the ethos of exceptions in C++ is that they are rare. Rare enough that implementors are discouraged from optimizing the speed of C++ exceptions.

Instead, thread-local storage is typically used to go from any particular thread back to its exception context.
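
That per-thread state is what the standard library surfaces as std::current_exception / std::rethrow_exception; a small sketch (the TLS part is an implementation detail, not guaranteed):

  #include <cstdio>
  #include <exception>

  void stash_and_rethrow() {
      std::exception_ptr saved;
      try { throw 42; }
      catch (...) { saved = std::current_exception(); }   // capture the in-flight exception
      try { std::rethrow_exception(saved); }
      catch (int v) { std::printf("got %d\n", v); }
  }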

Things get pretty darn complicated pretty quickly with features like dynamic exception specifications:

  void callee() throw(double) {
    throw 0;
  }
  void caller() {
    try {
      callee();
    } catch (...) {
      puts("got here!");
    }
  }
On first examination, "got here!" should be unreachable because the call to "callee" results in a violation of the exception specification.

However, this is not necessarily the case! What _actually_ happens is that some code runs between the throw and the catch: std::unexpected is called.

Now, std::unexpected might throw an exception of its own! If this new exception matches the exception specification, the exception passes into the catch block in "caller". If it doesn't, the exception thrown within std::unexpected might result in another violation!

Wow, this is getting complicated... OK, so what happens if it results in another violation? Well, the exception gets cleaned up and replaced with, you guessed it, another exception! We'd be left with an exception of type std::bad_exception leaving std::unexpected and thus "callee". Because the catch clause in "caller" is compatible with std::bad_exception, control is transferred to the catch block.

This is the tip of the iceberg. A huge amount of machinery is sitting around, waiting to engage to make exceptions work.


C++ exceptions have a runtime overhead, because they have to set up a handler (insert a record into a linked list) upon entering the try block and remove it when leaving the try block. Now of course, checking return values introduces a lot of branches, so go figure out which one of them is worse...

Still, it is very advisable to have likely/unlikely hints around if conditions that check return values.

The big problem with exceptions is that it introduces a lot of implicit code paths, and lots of possibilities for resource leaks (if raw pointers are involved).
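
A tiny illustration of that last point (may_throw is a hypothetical stand-in):

  #include <memory>

  void may_throw() { throw 1; }            // stand-in for anything that can throw

  void leaky() {
      int* p = new int(42);
      may_throw();                         // implicit exit path: if this throws, p leaks
      delete p;
  }

  void safe() {
      auto p = std::make_unique<int>(42);
      may_throw();                         // unwinding runs the destructor; no leak
  }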


That's one way they can be implemented, but nowadays they tend to be zero-cost implementations, where the runtime only figures out where it was and what to do once an exception actually occurs.


the article speaks about 'embedded' systems - these are not x86-64 systems (intel cpus need too much power) so they don't have a 'zero cost' exception table (that is also not exactly zero cost)

Can you show me an embedded system with zero cost exceptions?


  $ CXX=g++ppc CXXFLAGS=-g make -f /dev/null foo.o
  g++ppc -g   -c -o foo.o foo.cc
  $ objdumpppc -drS foo.o                         
  
  foo.o:     file format elf32-powerpc-vxworks
  
  
  Disassembly of section .text:
  
  00000000 <_Z3fooi>:
  int
  foo(int i)
     0:   94 21 ff d0     stwu    r1,-48(r1)
     4:   7c 08 02 a6     mflr    r0
     8:   90 01 00 34     stw     r0,52(r1)
     c:   93 e1 00 2c     stw     r31,44(r1)
    10:   7c 3f 0b 78     mr      r31,r1
    14:   90 7f 00 18     stw     r3,24(r31)
  {
          try { if (i < 0) throw (i); }
    18:   80 1f 00 18     lwz     r0,24(r31)
    1c:   2f 80 00 00     cmpwi   cr7,r0,0
    20:   40 9c 00 74     bge-    cr7,94 <_Z3fooi+0x94>
    24:   38 60 00 04     li      r3,4
    28:   48 00 00 01     bl      28 <_Z3fooi+0x28>
                          28: R_PPC_REL24 __cxa_allocate_exception
    2c:   7c 60 1b 78     mr      r0,r3
    30:   7c 0b 03 78     mr      r11,r0
    34:   7d 69 5b 78     mr      r9,r11
    38:   80 1f 00 18     lwz     r0,24(r31)
    3c:   90 09 00 00     stw     r0,0(r9)
    40:   7d 63 5b 78     mr      r3,r11
    44:   3d 20 00 00     lis     r9,0
                          46: R_PPC_ADDR16_HA     _ZTIi
    48:   38 89 00 00     addi    r4,r9,0
                          4a: R_PPC_ADDR16_LO     _ZTIi
    4c:   38 a0 00 00     li      r5,0
    50:   48 00 00 01     bl      50 <_Z3fooi+0x50>
                          50: R_PPC_REL24 __cxa_throw
    54:   90 7f 00 1c     stw     r3,28(r31)
    58:   7c 80 23 78     mr      r0,r4
    5c:   2f 80 00 01     cmpwi   cr7,r0,1
    60:   41 9e 00 0c     beq-    cr7,6c <_Z3fooi+0x6c>
    64:   80 7f 00 1c     lwz     r3,28(r31)
    68:   48 00 00 01     bl      68 <_Z3fooi+0x68>
                          68: R_PPC_REL24 _Unwind_Resume
  
          catch (int e) { i = -e; }
    6c:   80 7f 00 1c     lwz     r3,28(r31)
    70:   48 00 00 01     bl      70 <_Z3fooi+0x70>
                          70: R_PPC_REL24 __cxa_begin_catch
    74:   7c 60 1b 78     mr      r0,r3
    78:   7c 09 03 78     mr      r9,r0
    7c:   80 09 00 00     lwz     r0,0(r9)
    80:   90 1f 00 08     stw     r0,8(r31)
    84:   80 1f 00 08     lwz     r0,8(r31)
    88:   7c 00 00 d0     neg     r0,r0
    8c:   90 1f 00 18     stw     r0,24(r31)
    90:   48 00 00 01     bl      90 <_Z3fooi+0x90>
                          90: R_PPC_REL24 __cxa_end_catch
  
          return (i);
    94:   80 1f 00 18     lwz     r0,24(r31)
  }
    98:   7c 03 03 78     mr      r3,r0
    9c:   81 61 00 00     lwz     r11,0(r1)
    a0:   80 0b 00 04     lwz     r0,4(r11)
    a4:   7c 08 03 a6     mtlr    r0
    a8:   83 eb ff fc     lwz     r31,-4(r11)
    ac:   7d 61 5b 78     mr      r1,r11
    b0:   4e 80 00 20     blr
  $ g++ppc -v 2>&1 | tail -1                      
  gcc version 4.3.3 (Wind River VxWorks G++ 4.3-315) 
  $





