Hacker News

That pattern is not zero-cost: it breaks inlining and other compile-time optimizations. It can also hurt maintainability and readability. If the type of processing_function() is sufficiently generic, figuring out what is actually being called in the loop can be hard (as in undecidable; in general it is equivalent to the halting problem).
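A minimal Rust sketch of the contrast (function and type names are illustrative, not from any real codebase): with a trait-object callback the callee is opaque at the call site, while a generic parameter lets the compiler see and inline the concrete closure.

```rust
// Dynamic dispatch: the callee is opaque at compile time, so the
// optimizer generally cannot inline it into the loop body.
fn apply_dyn(items: &[i32], processing_function: &dyn Fn(i32) -> i32) -> Vec<i32> {
    items.iter().map(|&x| processing_function(x)).collect()
}

// Static dispatch: each concrete closure gets its own monomorphized
// copy of the loop, which the optimizer can inline freely.
fn apply_generic<F: Fn(i32) -> i32>(items: &[i32], processing_function: F) -> Vec<i32> {
    items.iter().map(|&x| processing_function(x)).collect()
}

fn main() {
    let doubled = apply_dyn(&[1, 2, 3], &|x| x * 2);
    let squared = apply_generic(&[1, 2, 3], |x| x * x);
    println!("{:?} {:?}", doubled, squared); // [2, 4, 6] [1, 4, 9]
}
```

Both versions compute the same result; the difference is only in what the optimizer can see.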

In an extreme case that I've seen in real code, processing_function() is called send(t: Object) -> Object or recv(u: Object), and all non-trivial control flow in the system is laundered through those two function names.

So you have hundreds or thousands of send/recv pairs, and grep (or even a fancy IDE) can't match them up.
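A hypothetical Rust reconstruction of that anti-pattern (the Object type, Channel, and message are invented for illustration): once everything is typed as an opaque Object, the concrete type flowing through any given send/recv pair is invisible to both grep and the type checker.

```rust
use std::any::Any;
use std::collections::VecDeque;

// Everything in the system is "an Object", so every call site of
// send() and recv() looks identical to a text search.
type Object = Box<dyn Any>;

struct Channel {
    queue: VecDeque<Object>,
}

impl Channel {
    fn send(&mut self, t: Object) {
        self.queue.push_back(t);
    }
    fn recv(&mut self) -> Option<Object> {
        self.queue.pop_front()
    }
}

fn main() {
    let mut ch = Channel { queue: VecDeque::new() };
    ch.send(Box::new(42i32));
    // The receiver has to guess the concrete type at runtime.
    if let Some(msg) = ch.recv() {
        if let Some(n) = msg.downcast_ref::<i32>() {
            println!("got {}", n); // got 42
        }
    }
}
```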

For a non-trivial codebase, this style is incredibly valuable for readability. With it, decisions about execution are made in a single place rather than distributed throughout the code. If the conditionals are embedded deep in the call graph, sure, a reader may be able to understand a single function in isolation, but they won't know the implications of that function or be able to reason about the program as a whole. I've seen a huge class of bugs that are eliminated by properly using polymorphism rather than embedded conditionals.
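A small Rust sketch of what "decisions in a single place" can look like (the Notifier trait and its implementations are made up for illustration): the choice of behavior is made once, at construction, instead of re-checked with conditionals at every call site.

```rust
// Polymorphic behavior modeled as a trait instead of scattered if/else.
trait Notifier {
    fn notify(&self, msg: &str) -> String;
}

struct Email;
struct Sms;

impl Notifier for Email {
    fn notify(&self, msg: &str) -> String {
        format!("email: {}", msg)
    }
}
impl Notifier for Sms {
    fn notify(&self, msg: &str) -> String {
        format!("sms: {}", msg)
    }
}

// The single place where the execution decision lives.
fn pick(kind: &str) -> Box<dyn Notifier> {
    match kind {
        "sms" => Box::new(Sms),
        _ => Box::new(Email),
    }
}

fn main() {
    let n = pick("sms");
    // Every downstream call site is conditional-free.
    println!("{}", n.notify("hello")); // sms: hello
}
```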

That being said, yeah, it's possible to just layer unnecessary abstraction and way overcomplicate things, making the layers utterly incomprehensible. But that seems to be the result of a misunderstanding: the point here isn't to _abstract_, it's to properly model polymorphic behavior. Any abstraction that takes place beyond that is suspicious at best.

I think the performance aspect is negligible for 98% of programs. Whether the compiler can inline the underlying function depends on the compiler and on the optimization settings. And in almost all cases, if you're relying on compiler optimizations to fix your performance problems, you're looking in the wrong place. Not to say such cases don't exist, and there's a slew of people who work in those areas, but I trust they know their business well enough to evaluate the trade-offs here and make the right decisions.


If you're using a language with proper static typing and generics support (so C++, Rust, Swift, Go, etc.), disabling the optimizer is typically a 10x hit. Much of what the optimizer does relies on statically analyzing the call graph, which it can't do if you're slinging function pointers around all over the place.

Godbolt.org has a great example for Swift. Switch the editor to Swift mode, load the "Sum over Array" example, and compile it with the defaults. It produces reams and reams of assembly. Now pass -O as a compiler flag to enable optimization, and it produces extremely tight code. (Swift 5.0 even noticed that the two functions are semantically identical and deduped them; 5.9 doesn't, but it also produces more compact code.)

It's a similar story for Rust, since the example is written in a functional style with an iterator. The C and C++ versions only have an imperative for-loop variant, so the optimizer matters less for them. (Though you could easily write a C++ version that takes a sum() function and has similar bloat until optimization is enabled.)
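A rough Rust rendering of the two shapes being compared (not the exact Godbolt source, just the same idea): a functional iterator variant and an imperative loop variant. Without optimization the iterator version carries heavy abstraction overhead; with optimization enabled (--release) both typically compile down to the same tight loop.

```rust
// Functional style: built from iterator adapters.
fn sum_iter(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

// Imperative style: a plain for loop over the slice.
fn sum_loop(xs: &[i32]) -> i32 {
    let mut total = 0;
    for &x in xs {
        total += x;
    }
    total
}

fn main() {
    let xs: Vec<i32> = (1..=10).collect();
    println!("{} {}", sum_iter(&xs), sum_loop(&xs)); // 55 55
}
```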


It seems necessary to point out that moving the goalposts during a discussion isn’t terribly productive.

