Python vs C performance, what could be more interesting? :)
Python is a high-level, interpreted language. Interpreted means that Python isn’t compiled into machine code, so there is a performance hit in comparison to compiled languages. However, Python is extremely easy to learn and use, and performs well enough for most things. It is also what I consider to be a general-purpose language, capable of doing a little of everything, from running the GPIO on a Raspberry Pi, to creating AI, and even some simple games! I’d highly recommend it as a first language, as it’s very similar to English (i.e. `and` instead of `&&`, `or` instead of `||`, etc.), and teaches good formatting (indents are used instead of brackets). The most well-known implementation of Python is “CPython”.
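To show what I mean by English-like operators, here is a trivial snippet (a made-up example, not code from either repository):

```python
x = 7

# Python spells boolean operators out: "and" instead of &&, "or" instead of ||
if x > 0 and x < 10:
    print("single digit")
```

The same condition in C would read `if (x > 0 && x < 10)`.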
One advantage of Python, and other interpreted languages, is that Python code can be run on any machine that has Python, without the need to compile the program for each CPU architecture.
C is one of the most basic languages out there. Pretty much everything since its existence has been written in C, including other languages such as Python, and even C itself, in a process called bootstrapping: a simple compiler is built to compile a more complex compiler, which is built to compile an even better compiler, and so on! Unlike Python, it is compiled, and is insanely fast. However, it is a bit more complicated to learn and use, and has very few features in comparison to other languages. What I consider to be the most “basic” feature it lacks is strings; you can create an array of characters instead, but there is no “string” type.
While C is fast, it does require being recompiled for each CPU architecture you intend to run on. C programs also rely on external libraries, which must be installed, unless the program contains everything it needs (i.e. is statically compiled).
I like making programs to search for prime numbers, so I made two: one in Python, and one in C. For this comparison, I will be using the `time` command, and looking at the `real` value. You can read more about real, user, and sys times here.
I will run each program with the following arguments: `1000 "/dev/null"`. The arguments tell the program to find all prime numbers up to 1000, and write them to `/dev/null`. I did this because the Python program, and hopefully the C one sometime in the future, automatically picks up where it left off if the output file exists. As I want to measure execution time, writing to `/dev/null` basically puts the output into a black hole.
I will run each program three times, and then take the average. This is not scientific, as I have the GUI, Google Chrome, and a bunch of other things running, but the point is to get a general idea. With that said, let’s get started!
Note: I originally wanted to go up to 10,000, but after quite a few minutes of waiting on Python (it was only at 3,000 after a few minutes on the first trial), I caved.
Hardware: For those of you wondering, these tests will be run on my laptop, which has 4GB of RAM and a dual-core i3 with hyperthreading. However, multiple cores don’t really matter, as my program is only coded to use one thread.
First, let’s start with Python, or CPython to be more precise. If you want to do some tests of your own, clone the repository, go into the `Math` folder, and run:

```
time python primeMine.py 1000 "/dev/null"
```
```
real    0m21.424s
user    0m21.370s
sys     0m0.036s
```

```
real    0m21.455s
user    0m21.429s
sys     0m0.020s
```

```
real    0m20.911s
user    0m20.881s
sys     0m0.024s
```
Next, I’ll do the same process with the version in C. Both programs use the same concept to determine if a number is prime, so this will give a good general comparison of performance.
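For reference, the shared concept is trial division; here is a minimal sketch of it in Python (my assumption of the approach for illustration — the actual repository code may be organized differently):

```python
def is_prime(n):
    """Trial division: test divisors from 2 up to the square root of n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Primes up to 30
print([n for n in range(2, 31) if is_prime(n)])  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Both programs loop a counter over every candidate number like this, which is exactly the kind of tight arithmetic work where interpreter overhead shows up.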
The commands, after cloning the C repository and going into the `Math` folder, will be:

```
gcc primeMine.c -o primeMine
time ./primeMine 1000 "/dev/null"
```
```
real    0m0.426s
user    0m0.421s
sys     0m0.004s
```

```
real    0m0.441s
user    0m0.430s
sys     0m0.008s
```

```
real    0m0.429s
user    0m0.424s
sys     0m0.004s
```
Giving Python another chance
Python was significantly slower in this example, there is no denying that. But is there an easy way to make it faster? The answer is: PyPy.
PyPy is an alternative Python runtime that uses a Just-In-Time compiler, or JIT, which generally gives it a performance boost over standard CPython (except for really short programs). It can be installed on Ubuntu with:

```
sudo apt install pypy
```

and can be run by replacing `python` with `pypy` in your commands. While `pypy` is compatible with many CPython programs, not all libraries are available. Many built-in and other popular libraries are available, however, and function just fine. Now, let’s run the test again, using `pypy` instead of CPython, with no changes made to the code.
```
time pypy primeMine.py 1000 "/dev/null"
```
```
real    0m0.381s
user    0m0.235s
sys     0m0.076s
```

```
real    0m0.326s
user    0m0.257s
sys     0m0.049s
```

```
real    0m0.300s
user    0m0.241s
sys     0m0.042s
```
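For a sense of where that speedup comes from: CPython interprets bytecode on every loop iteration, while PyPy’s JIT compiles loops to machine code once they become “hot”. A made-up example of the kind of tight arithmetic loop that benefits (not the benchmark code itself):

```python
def count_divisors(n):
    # A tight arithmetic loop: CPython re-interprets each iteration's bytecode,
    # while PyPy's JIT compiles the whole loop to machine code once it is hot.
    count = 0
    for i in range(1, n + 1):
        if n % i == 0:
            count += 1
    return count

print(count_divisors(10000))  # → 25
```

The prime search is essentially this loop repeated thousands of times, which is why the JIT pays off so dramatically here.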
As you can see by the averages, the performance goes as follows:

- In first place comes PyPy, with an average `real` time of 0.34s. This surprised me too, as I expected C to be faster. In fact, while doing some tests on my iMac, C did come in first.
- In second place comes C, with an average `real` time of 0.43s. I still can’t believe C wasn’t first.
- In third place comes CPython, with an average `real` time of 21.26s.
What language you choose depends on personal preference, as well as what the project is about. In many cases, CPython will perform well enough for whatever it is you want to do, and is simple to install and program in. If it’s not fast enough, see what PyPy can do for you.
I will still say that C is generally best for performance. Even though PyPy beat it in this test, there are likely other workloads in which C will be faster. Also, many people only have CPython installed, so your program will still run, but at lower speeds than under PyPy, while C is pretty much always C.
Anyways, run the tests yourselves, and let me know which one came out fastest for you!
By the way, I’m still shocked that PyPy beat C. So, I did a quick Google search, and came across this Quora question, and the answer by Gabriel Hidasy:
> For the same algorithm in pure interpreted python is very unlikely, but if you use an optimal algorithm in Python and a bad one in C (Bogosort in C, Quicksort in Python for sorting an array for example) then Python can easily be faster. But there are some quirks here and there to consider: One is that many Python libraries are in (very well written) C, it’s very possible that numeric operations with Numpy will outperform a naive implementation of them in C for example. And there is PyPy, a Python JIT compiler that can optimize your code in runtime, for some tasks it can be better than pure native code, its rare to find a case where it outperforms C (here is one [PyPy faster than C on a carefully crafted example](http://morepypy.blogspot.hu/2011/02/pypy-faster-than-c-on-carefully-crafted.html)). One advantage of using PyPy over C is that it can be a lot easier to make the optimal algorithm robust and safe in it, but in the end carefully optimized C will beat it.
So, I went to the blog post linked to, and found that there are other times in which PyPy ran faster than C. Also, the first sentence of the article:
> Recent round of optimizations, especially [loop invariant code motion](http://morepypy.blogspot.com/2011/01/loop-invariant-code-motion.html) has been very good for small to medium examples.
This got me interested in what loop invariant code motion is, so I went to Wikipedia, and found the answer:
> In computer programming, loop-invariant code consists of statements or expressions (in an imperative programming language) which can be moved outside the body of a loop without affecting the semantics of the program. Loop-invariant code motion (also called hoisting or scalar promotion) is a compiler optimization which performs this movement automatically.
Note: The links in the Wikipedia article have not been copied into the excerpt above(too many of them :) ).
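To make that definition concrete, here is a hand-hoisted example (illustrative only; a JIT like PyPy’s can perform this transformation automatically, while CPython does not):

```python
def scale_naive(xs, a, b):
    out = []
    for x in xs:
        # a * b never changes inside the loop, yet it is recomputed every pass
        out.append(x * (a * b))
    return out

def scale_hoisted(xs, a, b):
    ab = a * b  # the loop-invariant expression, moved ("hoisted") out of the loop
    out = []
    for x in xs:
        out.append(x * ab)
    return out
```

Both functions return the same result; the second just computes `a * b` once instead of once per element, which is exactly what the optimization does behind the scenes.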
Part 2 will give C another chance by using optimization, and running the benchmark again.