Programmers should not be forced to sacrifice the flexibility of function calls, objects, or getters/setters in order to speed up a program. The performance hit from those constructs should either be minuscule or optimised away by the compiler as appropriate.
This is too black and white. Don't fret: you can use function calls all you want in Python... the situations in which Guido is talking about being conscious of your stack frames are pretty rare in typical practice.
Also, getters/setters are totally inane in Python (and probably in most other languages).
In fact, in general Guido's advice reads as a good warning to folks showing up from other languages whose first reaction might be to create a com/mycubiclefarm/exceptions/abstract/ directory and start writing SeriousBaseClassesForMyExceptions.
Sounds like you're looking for a Sufficiently Smart Compiler[0]. James Hague has a good piece on why this might not be so desirable[1].
One of the reasons I'm fond of Python is that, while there is a tradeoff between flexibility and performance, it gives you the means to sacrifice flexibility to aid you in improving performance -- once you've identified what (if any) actual performance bottlenecks you face.
In general, function calls cannot be inlined by the Python compiler because (almost) any function name may be re-bound to a different object at run time.
A very smart compiler could probably attempt to prove that no such modification can occur at run time throughout the whole program; but this is much harder than simply deciding whether inlining a given call is worth it or not.
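To make the rebinding point concrete, here's a minimal sketch (the function names are illustrative, not from the thread) of why the compiler can't safely inline a call through a module-level name:

```python
import math

def dist(x, y):
    return math.hypot(x, y)

def total(points):
    # The compiler cannot inline dist() here: "dist" is just a
    # module-level name binding, and nothing stops anyone from
    # rebinding it before (or between) calls to total().
    return sum(dist(x, y) for x, y in points)

print(total([(3, 4), (6, 8)]))   # 15.0

# Rebinding the name at run time is perfectly legal Python:
dist = lambda x, y: 0
print(total([(3, 4), (6, 8)]))   # 0
```

If `dist` had been inlined into `total`, the second call would still print 15.0, silently ignoring the rebinding; that's why CPython must look the name up on every call.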
And more to the point, PyPy can do Python inlining with its tracing JIT. The method, in both cases, is similar: find some type assumptions that are useful and usually true, generate code under those assumptions, and fall back on more generic code if they're ever false.
Actually, my experience with PyPy, while generally positive, has exhibited many of the characteristics that article talks about in terms of downsides to "sufficiently-smart compilers." It's almost always much faster than cpython, but how much faster is highly variable, and not especially predictably so; seemingly-insignificant changes can have large and unexpected performance implications, and it's difficult as a developer to have the proper mental model to figure out what's actually going on.
In CPython land, Python is slower, but performs predictably, and if you want to speed it up you write C, which is much faster, and also performs predictably, though it takes some developer effort. In PyPy, you get some of the speed of C without the effort, but without the predictability either.
I'm not sure that counts in the same way. You would have the branch prediction issue even if you program in assembly language and have precise control over the instructions the computer executes.
>Sounds like you're looking for a Sufficiently Smart Compiler[0]. James Hague has a good piece on why this might not be so desirable[1].
So, sounds like he's looking for something like v8.
If v8 can be 10-30 times faster than Python, for an equally or even more dynamic language, I don't see why Python should need the manual tuning Guido describes in order to get some speed.
If you are writing very performance critical code, you shouldn't write in Python anyway. You chose Python because ease of development is more important than raw performance.
The overhead of function calls and property access in Python is part of the price for the increased flexibility of the language.
Van Rossum's tips are for the corner case where you want to improve performance but don't want to bother rewriting in C, e.g. if you have a small bottleneck in a larger program.
There are few applications that require 100% of the code to be high performance. Most performance problems are of the "bottleneck" kind, so writing your entire app in a language like Python for that ease of development is smart. It gives you options to optimize that 1-2% of your code that is critical without incurring a development penalty on the other 98-99%.
GVR's tips can be seen as an escalation path for optimizing. Don't jump into C before you've actually optimized your Python, because you might not need to.
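In that escalation path, step zero is measuring where the bottleneck actually is. A minimal sketch with the stdlib profiler (the `slow_part`/`app` names are hypothetical, just stand-ins for your hot spot and your program):

```python
import cProfile
import pstats

def slow_part(n):
    # Stand-in for the 1-2% of code that is actually hot.
    return sum(i * i for i in range(n))

def app():
    # Stand-in for the rest of the program.
    return [slow_part(10_000) for _ in range(50)]

profiler = cProfile.Profile()
profiler.enable()
result = app()
profiler.disable()

# Show the five most expensive calls; only what shows up here
# is worth optimizing in Python, let alone rewriting in C.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```

Only after the profile confirms the hot spot would GVR's micro-optimizations (or a C rewrite) come into play.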
In 10 years of writing Python, I've yet to hit a point where I've needed to do that. I'm kind of looking forward to it, actually.
I don't know how much experience you have with Python... but you don't need getters/setters in order to gain flexibility. You can always redefine how assignment/access works for any attribute later.
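A minimal sketch of what "redefine it later" looks like (the `Point` class and the validation rule are illustrative): callers use plain attribute syntax throughout, and a `property` can be swapped in after the fact without changing them.

```python
class Point:
    def __init__(self, x):
        self.x = x          # plain attribute -- no getter/setter needed

p = Point(3)
p.x = 5                     # plain assignment

# Later, if validation becomes necessary, swap in a property.
# Callers keep writing p.x exactly as before.
class Point:
    def __init__(self, x):
        self.x = x

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        if value < 0:
            raise ValueError("x must be non-negative")
        self._x = value

p = Point(3)
p.x = 5                     # same syntax, now validated
```

This is why trivial getters/setters buy you nothing in Python: the upgrade path exists without them.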
Even in Java you don't need the trivial getters/setters anymore. Everyone uses an IDE with refactoring capabilities, so the change can be done instantly when necessary. Why uglify your code in advance?
Of course it depends on who's using your code. In Java, if you have an API that other people are using, then you must use getters and setters. In Python there is no such restriction.
Not much, so I will ask a question. What's more pythonic - avoiding getters/setters, or having assignment do more than simple assignment without the caller knowing about it?
Or does the issue simply not come up in practice, because you rarely need to redefine how assignment/accessing works?
For what it's worth, the only thing you sacrifice in Python by omitting a trivial getter/setter pair is the docstring: unlike Java, the syntax doesn't change, and, for Python classes, there is no such thing as .NET-style "binary compatibility".