Python 3 annotations and performance
Daniele Esposti's Blog, 02 June 2013
Python 3 introduced parameter and return type annotations with PEP 3107. Other interpreted languages support type annotations too, for example ActionScript 3, where they improve the quality of the generated byte-code and the execution speed because the interpreter can emit specialised code for the annotated type.
But is this also the case for Python 3? Does the interpreter improve the performance of annotated code?
To test whether annotated code is faster than plain Python code, I'll integrate the function f(x) = x^2 - x using the midpoint rule.
def f(x):
    return x**2 - x

def integrate(a, b, f, nbins):
    """
    Return the integral from a to b of function f
    using the midpoint rule
    """
    h = float(b - a) / nbins
    sum = 0.0
    x = a + h/2  # first midpoint
    while x < b:
        sum += h * f(x)
        x += h
    return sum
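As a sanity check, the midpoint rule result can be compared against the analytic antiderivative x^3/3 - x^2/2, which gives exactly 1/12 on the interval [0.5, 1.5]. A minimal sketch (the definitions from above are repeated so the snippet runs standalone):

```python
def f(x):
    return x**2 - x

def integrate(a, b, f, nbins):
    """Midpoint-rule integral of f from a to b, as in the post."""
    h = float(b - a) / nbins
    sum = 0.0
    x = a + h/2  # first midpoint
    while x < b:
        sum += h * f(x)
        x += h
    return sum

# Analytic value: [x^3/3 - x^2/2] evaluated from 0.5 to 1.5 = 1/12
print(integrate(0.5, 1.5, f, 1000))  # ~0.0833, within ~1e-7 of 1/12
```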
If the interpreter takes advantage of the annotations, I expect an improvement in performance over the non-annotated code. The annotated code is the same as the plain version except for the annotations in the function definitions:
def f_ann(x: float) -> float:
    ...implementation...

def integrate_ann(a: float, b: float, f, nbins: int) -> float:
    ...implementation...
Let’s start by analysing the byte-code generated for the plain and annotated code:
>>> dis.dis(f)
  2           0 LOAD_FAST                0 (x)
              3 LOAD_CONST               1 (2)
              6 BINARY_POWER
              7 LOAD_FAST                0 (x)
             10 BINARY_SUBTRACT
             11 RETURN_VALUE
>>> dis.dis(f_ann)
  2           0 LOAD_FAST                0 (x)
              3 LOAD_CONST               1 (2)
              6 BINARY_POWER
              7 LOAD_FAST                0 (x)
             10 BINARY_SUBTRACT
             11 RETURN_VALUE
There are no differences between the two generated byte-codes. Disassembling integrate and integrate_ann reports the same byte-code too.
At this point the result of the benchmark seems pretty obvious: the annotated code will not run faster because its op-codes are the same as the plain version's. But we cannot be sure until we actually run the code.
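The equality of the two byte-codes can also be checked programmatically instead of eyeballing the dis output, by comparing the raw byte-code strings of the compiled function objects. A minimal sketch:

```python
def f(x):
    return x**2 - x

def f_ann(x: float) -> float:
    return x**2 - x

# Annotations are evaluated at definition time and stored on the
# function object; they never reach the compiled body, so the raw
# byte-code strings are identical.
print(f.__code__.co_code == f_ann.__code__.co_code)  # True
```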
Time to benchmark the code using the timeit module; the interpreter used for the benchmark is Python 3.3.1:
$ python -m timeit \
    -s "from plain import integrate, f" \
    "integrate(0.5, 1.5, f, 1000)"
1000 loops, best of 3: 501 usec per loop
$ python -m timeit \
    -s "from annotation import integrate_ann, f_ann" \
    "integrate_ann(0.5, 1.5, f_ann, 1000)"
1000 loops, best of 3: 499 usec per loop
There is no big difference in execution time; the 2 usec gap is well within measurement noise and cannot be considered an improvement.
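The same measurement can be scripted with timeit.timeit instead of the command line, which avoids depending on the plain/annotation module files from the shell session. A sketch, with the function definitions inlined into the setup string and an arbitrary number=100 runs:

```python
import timeit

# The code under test goes into the setup string, mirroring the
# `-s` flag of the timeit CLI used above.
setup = """
def f(x):
    return x**2 - x

def integrate(a, b, f, nbins):
    h = float(b - a) / nbins
    sum = 0.0
    x = a + h/2
    while x < b:
        sum += h * f(x)
        x += h
    return sum
"""

# timeit.timeit returns the *total* seconds for `number` executions
t = timeit.timeit("integrate(0.5, 1.5, f, 1000)", setup=setup, number=100)
print(t / 100)  # average seconds per call
```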
I'm neither surprised nor disappointed by the results. Improving performance by annotating variables would be a nice-to-have feature, but looking at the PEP's Abstract section, the goal is not performance but code readability and introspection.
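That introspection goal is easy to see in practice: PEP 3107 stores the annotations in the function's __annotations__ attribute, where tools can inspect them at runtime. A minimal sketch, with the body elided:

```python
def integrate_ann(a: float, b: float, f, nbins: int) -> float:
    ...

# Annotations end up in a plain dict on the function object,
# keyed by parameter name plus 'return' for the return annotation.
print(integrate_ann.__annotations__)
# {'a': <class 'float'>, 'b': <class 'float'>,
#  'nbins': <class 'int'>, 'return': <class 'float'>}
```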
This doesn't mean the interpreter cannot use the annotations in the future to generate optimised byte-code, but I suspect it cannot be done without extending the current op-code list.