Speeding up list operations in Python with C

Let us assume that p and q are lists in Python of common length n. Each list contains the contents of range(n) in some order (which is important!). We can assume that n is small (i.e., it does not exceed 2^16). I now define an operation on these lists using the following code:
def mult(p,q):
    return [q[i] for i in p]
Clearly mult(p,q) is again a list containing the contents of range(n) in some order. This Python code is an example of the composition of permutations (see http://en.wikipedia.org/wiki/Permutation).
I would like to make this code run as fast as possible in Python. I tried replacing p and q by numpy arrays to see if this would speed things up but the difference was negligible under timeit tests (note that numpy is not designed with the above function in mind). I also wrote a C extension for Python to try and speed things up but this did not seem to help (I was however using functions such as PySequence_Fast_GET_ITEM which are likely the same functions that Python itself uses).
Would it be possible to write a new type for Python in C (as is described here http://docs.python.org/2/extending/newtypes.html) which would have the property that the above mult function would be fast(er)? Or, indeed, write any program in C which would give Python such a type.
I am asking this question to see whether or not I am barking up the wrong tree. In particular, is there essentially some inherent property of Python which means this can never be sped up? I am mostly interested in Python 2.7 but would be interested to know of any solutions for Python 3+.

As Abid Rahman's comment indicates, using NumPy properly is a better bet than implementing your own C data structure.
import numpy as np
p = np.array(range(1000))
q = np.array(range(1000))
%timeit [q[i] for i in p]
# 1000 loops, best of 3: 312 us per loop
%timeit q[p]
# 100000 loops, best of 3: 4.31 us per loop
NumPy basically does what you were hoping to do yourself (push the array access down to the C level). However, if you just do a list comprehension, all the looping will be handled in Python, so it won't be much faster than the original with regular Python lists.
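To make concrete what "pushing the loop down to the C level" means, here is a minimal sketch of the inner loop that a C extension (or NumPy's q[p] fancy indexing) effectively runs; the function name compose is made up for the example, and the arrays are assumed to hold a permutation of 0..n-1:
#include <stddef.h>

/* result[i] = q[p[i]]: composition of permutations in one pass, with no
   per-element Python object overhead. p, q and result have length n and
   contain the values 0..n-1 in some order. */
void compose(const int *p, const int *q, int *result, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        result[i] = q[p[i]];
}
NumPy's q[p] runs essentially this loop in compiled code, which is why it is so much faster than the Python list comprehension.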

Related

C Vectorization: Is it possible to do elementwise operation in array like python-vectorization?

I am moving from Python to C in the hope of a faster implementation, and I am trying to learn whether C has an equivalent of Python-style vectorization. For example, assume that we have a binary array Input_Binary_Array. If I want to multiply each element at index i by 2**i and then sum all the non-zero results, in Python vectorization we do the following:
case 1 : Value = (2. ** (np.nonzero(Input_Binary_Array)[0] + 1)).sum()
Or if we do slicing and do elementwise addition/subtraction/multiplication, we do the following:
case 2 : Array_opr = (Input_Binary_Array[size:] * 2**size - Input_Binary_Array[:-size])
C is a powerful low-level language, so a simple for/while loop is already quite fast, but I am not sure whether C has vectorized operations equivalent to Python's.
So, my question is, is there explicit vectorization code for:
1. multiplying all elements of an array with a constant number (scalar)
2. elementwise addition, subtraction, division for 2 given arrays of the same size
3. slicing, summing, cumulative summing
or is a simple for/while loop the only fast option for doing the above operations the way Python vectorization does (cases 1 and 2)?
The answer is to either use a library to achieve those things, or write one. The C language by itself is fairly minimalist; that's part of its appeal. Some libraries out there include the Intel MKL, and there's GSL, which provides these operations along with a huge number of other functions, and more.
Now, with that said, I would recommend that if moving from Python to C is your plan, moving from Python to C++ is the better one. The reason I say this is that C++ already has a lot of the tools you would be looking for to build what you want syntactically.
Specifically, you want to look at C++ std::vector, iterators, ranges and lambda expressions, all within C++20 and working rather well. I was able to make my own iterator on my own sort of weird collection and then have LINQ-style functions tacked onto it, with LINQ semantics...
So I could say
mycollection<int> myvector = { 1, 2, 4, 5 };
Something like that anyway - the initializer expression rules I forget sometimes.
auto v = myvector
    .where( []( auto& itm ) { return itm > 3; } )
    .sum( []( auto& itm ) { return itm; } );
and get more or less what I expect.
Since you control the iterator down to every single detail you could ever want (and the std framework already thought of many), you can make it go as fast as you need, use multiple cores and so on.
In fact, I think the STL for MS and maybe GCC both actually have drop-in parallel algorithms where you just use them.
So C is good, but consider C++, if you are going that "C like" route. Because that's the only way you'll get the performance you want with the syntax you need.
Iterators basically let you wrap the concept of a for loop as an object.
So, my question is, is there explicit vectorization code for:
1. multiplying all elements of an array with a constant number (scalar)
The C language itself does not have a syntax for expressing that with a single simple statement. One would ordinarily write a loop to perform the multiplication element by element, or possibly find or write a library that handles it.
Note also that, as far as I am aware, the Python language does not have this either. In particular, the product of a Python list and an integer n is not scalar multiplication of the list elements, but rather a list with n times as many elements. Some of your Python examples look like they may be using NumPy, which can provide for that sort of thing, but that's a third-party package, analogous to a library in C.
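As a concrete illustration of that ordinary loop, here is a minimal C sketch (the function name is made up for the example):
#include <stddef.h>

/* Multiply every element of a (length n) by the scalar factor, in place. */
void scale_array(double *a, size_t n, double factor)
{
    for (size_t i = 0; i < n; ++i)
        a[i] *= factor;
}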
elementwise addition, subtraction, division for 2 given arrays of the same size
Same as above. Including this not being in Python, either, at least not as the effect of any built-in Python operator on objects of a built-in type.
slicing, summing, cumulative summing
C has limited array slicing, in that you can access contiguous subarrays of an array more or less as arrays in their own right. The term "slice" is not typically used, however.
C does not have built-in sum() functions for arrays.
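For items 2 and 3 the loops look much the same; a minimal C sketch (again, the names are made up):
#include <stddef.h>

/* Item 2: elementwise addition of two arrays of the same length. */
void add_arrays(const double *a, const double *b, double *out, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

/* Item 3: sum of a contiguous "slice"; pass a pointer into the array plus a
   length, e.g. sum_slice(a + start, stop - start). */
double sum_slice(const double *a, size_t n)
{
    double total = 0.0;
    for (size_t i = 0; i < n; ++i)
        total += a[i];
    return total;
}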
or is a simple for/while loop the only fast option for doing the above operations the way Python vectorization does (cases 1 and 2)?
There are lots and lots of third-party C libraries, including some good ones for linear algebra. The C language and standard library do not have such features built in. One would typically choose among writing explicit loops, writing a custom library, and integrating a third-party library, based on how much your code relies on such operations, whether you need or can benefit from customization to your particular cases, and whether the operations need to be highly optimized.

broadcasting across tensors in `pytorch`

I am using pytorch as an array processing language (not for the traditional deep learning purposes), and I am wondering what the canonical way is to do "batching" parallelism.
For example, suppose I want to compute the SVDs of the two-dimensional layers of a 3-D tensor (using torch.svd(), say), and I want to return a tuple of stacked Us, stacked Ss, and stacked Vs.
Presumably, through the magic of SIMD parallelism, this should be doable in roughly the same time as a single-layer SVD (on GPU), but how do I program it?
PyTorch is a high-level software library with lots of Python wrappers for highly optimized compiled code. A function or operator either supports batch data or it does not.
There is no way around that other than writing your own C/C++/CUDA code and invoking it from Python.
Luckily, most functions support batch processing (including torch.svd(), as pointed out by jodag), and it can be assumed that the developers (or the compiler) paid attention to data parallelism in the implementation. I recommend stacking your tensors wherever you can; it usually leads to significant speedups.
Note that the batch dimension is always the first dimension of a tensor. PyTorch supports broadcasting for common operators like +, -, *, / as documented here. Because of possible ambiguities you are sometimes required to reshape your data accordingly to make clear what you want. For example if you want to add a batch of scalars to a batch of vectors you need to do something like:
a = torch.zeros(2, 2)
b = torch.arange(2)
a + b.view(2, 1) # or b.reshape(2, 1)
# tensor([[0., 0.],
#         [1., 1.]])

Implementing R code in C

I use R and I implemented a Monte Carlo simulation in R which takes a long time because of the for loops. Then I realized that I can do the for loops in C, using the R API. So I generate my vectors and matrices in R, then I call functions from C (which do the for loops), and finally I present my results in R. However, I only know the basics of C and I cannot figure out how to translate some functions to C. For instance, I start with a function in R like this:
t=sample(1:(P*Q), size=1)
How can I do this in C? Also I have an expression in R:
A.q=phi[,which(q==1)]
How can I use "which" expression in C?
Before you start writing C code, you would be better off rewriting your R code to make it run faster. sample is vectorised. Can you move the call to it outside the loop? That should speed things up. Even better, can you get rid of the loop entirely?
Also, you don't need to use which when you are indexing. R accepts logical vectors as indices. Compare:
A.q=phi[,which(q==1)]
A.q=phi[,q==1]
Finally, I recommend not calling your variables t or q since there are functions with those names. Try giving your variables descriptive names instead - it will make your code more readable.
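If, after vectorizing, you still want a feel for what those two R lines would look like in plain C, here is a rough sketch. It uses rand() from the C standard library purely for illustration (C code actually called from R should use R's own RNG via GetRNGstate(), unif_rand() and PutRNGstate()), and the helper names are made up:
#include <stddef.h>
#include <stdlib.h>

/* Rough equivalent of t = sample(1:(P*Q), size = 1): a random integer in
   1..P*Q.  rand() is used only for illustration; from R, use R's RNG API. */
int sample_one(int P, int Q)
{
    return rand() % (P * Q) + 1;
}

/* Rough equivalent of which(q == 1): collect the (0-based) indices i where
   q[i] == 1 into idx and return how many matches there were. */
size_t which_equal_one(const int *q, size_t n, size_t *idx)
{
    size_t count = 0;
    for (size_t i = 0; i < n; ++i)
        if (q[i] == 1)
            idx[count++] = i;
    return count;
}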

how to incorporate C or C++ code into my R code to speed up a MCMC program, using a Metropolis-Hastings algorithm

I am seeking advice on how to incorporate C or C++ code into my R code to speed up a MCMC program, using a Metropolis-Hastings algorithm. I am using an MCMC approach to model the likelihood, given various covariates, that an individual will be assigned a particular rank in a social status hierarchy by a 3rd party (the judge): each judge (approx 80, across 4 villages) was asked to rank a group of individuals (approx 80, across 4 villages) based on their assessment of each individual's social status. Therefore, for each judge I have a vector of ranks corresponding to their judgement of each individual's position in the hierarchy.
To model this I assume that, when assigning ranks, judges are basing their decisions on the relative value of some latent measure of an individual's utility, u. Given this, it can then be assumed that a vector of ranks, r, produced by a given judge is a function of an unobserved vector, u, describing the utility of the individuals being ranked, where the individual with the kth highest value of u will be assigned the kth rank. I model u, using the covariates of interest, as a multivariate normally distributed variable and then determine the likelihood of the observed ranks, given the distribution of u generated by the model.
In addition to estimating the effect of, at most, 5 covariates, I also estimate hyperparameters describing variance between judges and items. Therefore, for every iteration of the chain I estimate a multivariate normal density approximately 8-10 times. As a result, 5000 iterations can take up to 14 hours. Obviously, I need to run it for much more than 5000 runs and so I need a means for dramatically speeding up the process. Given this, my questions are as follows:
(i) Am I right to assume that the best speed gains will be had by running some, if not all of my chain in C or C++?
(ii) assuming the answer to question 1 is yes, how do I go about this? For example, is there a way for me to retain all my R functions, but simply do the looping in C or C++: i.e. can I call my R functions from C and then do looping?
(iii) I guess what I really want to know is how best to approach the incorporation of C or C++ code into my program.
First make sure your slow R version is correct. Debugging R code might be easier than debugging C code. Done that? Great. You now have correct code you can compare against.
Next, find out what is taking the time. Use Rprof to run your code and see what is taking the time. I did this for some code I inherited once, and discovered it was spending 90% of the time in the t() function. This was because the programmer had a matrix, A, and was doing t(A) in a zillion places. I did one tA=t(A) at the start, and replaced every t(A) with tA. Massive speedup for no effort. Profile your code first.
Now, you've found your bottleneck. Is it code you can speed up in R? Is it a loop that you can vectorise? Do that. Check your results against your gold-standard correct code. Always. Yes, I know it's hard to compare algorithms that rely on random numbers, so set the seeds the same and try again.
Still not fast enough? Okay, now maybe you need to rewrite parts (the lowest level parts, generally, and those that were taking the most time in the profiling) in C or C++ or Fortran, or if you are really going for it, in GPU code.
Again, really check the code is giving the same answers as the correct R code. Really check it. If at this stage you find any bugs anywhere in the general method, fix them in what you thought was the correct R code and in your latest version, and rerun all your tests. Build lots of automatic tests. Run them often.
Read up about code refactoring. It's called refactoring because if you tell your boss you are rewriting your code, he or she will say 'why didn't you write it correctly the first time?'. If you say you are refactoring your code, they'll say "hmmm... good". THIS ACTUALLY HAPPENS.
As others have said, Rcpp is made of win.
A complete example using R, C++ and Rcpp is provided by this blog post, which was inspired by a post on Darren Wilkinson's blog (and he has more follow-ups). The example is also included with recent releases of Rcpp in a directory RcppGibbs and should get you going.
I have a blog post which discusses exactly this topic which I suggest you take a look at:
http://darrenjw.wordpress.com/2011/07/31/faster-gibbs-sampling-mcmc-from-within-r/
(this post is more relevant than the post of mine that Dirk refers to).
I think the best method currently to integrate C or C++ is the Rcpp package by Dirk Eddelbuettel. You can find a lot of information at his website. There is also a talk at Google, available through YouTube, that might be interesting.
Check out this project:
https://github.com/armstrtw/rcppbugs
Also, here is a link to the R/Fin 2012 talk:
https://github.com/downloads/armstrtw/rcppbugs/rcppbugs.pdf
I would suggest benchmarking each step of the MCMC sampler and identifying the bottleneck. If you put each full conditional or M-H step into a function, you can use the R compiler package, which might give you a 5%-10% speed gain. The next step is to use Rcpp.
I think it would be really nice to have a general-purpose Rcpp function which generates just one single draw using the M-H algorithm given a likelihood function.
However, with Rcpp some things become difficult if you only know the R language: non-standard random distributions (especially truncated ones) and using arrays. You have to think more like a C programmer there.
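As a rough illustration of what such a single-draw M-H function might look like at the C level, here is a plain-C sketch (not actual Rcpp; the function pointers and names are made up, a symmetric random-walk proposal is assumed, and runif01 stands in for a uniform RNG such as R's unif_rand()):
#include <math.h>

/* One Metropolis-Hastings draw: propose current + step*U(-1,1), then accept
   with probability min(1, exp(log_lik(proposal) - log_lik(current))).
   log_lik is the target log-likelihood (plus log-prior) supplied by the caller. */
double mh_draw_one(double current, double step,
                   double (*log_lik)(double),
                   double (*runif01)(void))
{
    double proposal  = current + step * (2.0 * runif01() - 1.0);
    double log_ratio = log_lik(proposal) - log_lik(current);
    if (log(runif01()) < log_ratio)
        return proposal;   /* accept */
    return current;        /* reject: keep the current value */
}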
The multivariate normal is actually a big issue in R. dmvnorm is very inefficient and slow; dmnorm is faster, but in some models it gave me NaNs sooner than dmvnorm did.
Neither takes an array of covariance matrices, so it is impossible to vectorize the code in many instances. As long as you have a common covariance and common means, however, you can vectorize, which is the R-ish strategy for speeding up (and which is the opposite of what you would do in C).

Nested for loops extremely slow in MATLAB (preallocated)

I am trying to learn MATLAB and one of the first problems I encountered was to guess the background from an image sequence with a static camera and moving objects. For a start I just want to do a mean or median on pixels over time, so it's just a single function I would like to apply to one of the rows of the 4 dimensional array.
I have loaded my RGB images in a 4 dimensional array with the following dimensions:
uint8 [ num_images, width, height, RGB ]
Here is the function I wrote which includes 4 nested loops. I use preallocation but still, it is extremely slow. In C++ I believe this function could run at least 10x-20x faster, and I think on CUDA it could actually run in real time. In MATLAB it takes about 20 seconds with the 4 nested loops. My stack is 100 images with 640x480x3 dimensions.
function background = calc_background(stack)
    tic;
    si = size(stack,1);
    sy = size(stack,2);
    sx = size(stack,3);
    sc = size(stack,4);
    background = zeros(sy,sx,sc);
    A = zeros(si,1);
    for x = 1:sx
        for y = 1:sy
            for c = 1:sc
                for i = 1:si
                    A(i) = stack(i,y,x,c);
                end
                background(y,x,c) = median(A);
            end
        end
    end
    background = uint8(background);
    disp(toc);
end
Could you tell me how to make this code much faster? I have tried experimenting with somehow getting the data directly from the array using only the indexes and it seems MUCH faster. It completes in 3 seconds vs. 20 seconds, so that’s a 7x performance difference, just by writing a smaller function.
function background = calc_background2(stack)
    tic;
    % bad code, confusing
    % background = uint8(squeeze(median(stack(:, 1:size(stack,2), 1:size(stack,3), 1:3 ))));
    % good code (credits: Laurent)
    background = uint8(squeeze(median(stack,1)));
    disp(toc);
end
So now I don't understand: if MATLAB can be this fast, then why is the nested-loop version so slow? I am not doing any dynamic resizing, and MATLAB must be running the same 4 nested loops internally.
Why is this happening?
Is there any way to make nested loops run fast, like it would happen naturally in C++?
Or should I get used to the idea of programming MATLAB in this crazy one-line-statement way to get optimal performance?
Update
Thank you for all the great answers, now I understand a lot more. My original code with stack(:, 1:size(stack,2), 1:size(stack,3), 1:3 ) didn't make any sense; it is exactly the same as stack. I was just lucky with median's default option of using the first dimension as its working range.
I think it's better to ask how to write efficient, vectorized code in another question, so I asked it here:
How to write vectorized functions in MATLAB
If I understand your question, you're asking why Matlab is faster for matrix operations than for procedural programming calls. The answer is simply that that's how it's designed. If you really want to know what makes it that way, you can read this newsletter from Matlab's website which discusses some of the underlying technology, but you probably won't get a great answer, as the software is proprietary. I also found some relevant pages by simply googling, and this old SO question
also seems to address your question.
Matlab is an interpreted language, meaning that it must evaluate each line of code of your script.
Evaluating is a lengthy process since it must parse, 'compile' and interpret each line*.
Using for loops with simple operations means that Matlab takes far more time parsing/compiling than actually executing your code.
Built-in functions, on the other hand, are coded in a compiled language and heavily optimized. They're very fast, hence the speed difference.
Bottom line: we're very used to procedural language and for loops, but there's almost always a nice and fast way to do the same things in a vectorized way.
* To be complete and to pay honour to whom honour is due: recent versions of Matlab actually try to accelerate loops by analyzing repeated operations and compiling chunks of repetitive operations into native executable code. This is called Just-In-Time (JIT) compilation and was pointed out by Jonas in the comments.
Original answer:
If I understood well (and you want the median of the first dimension) you might try:
background = uint8(squeeze(median(stack,1)));
Well, the difference between the two is their method of executing code. To sketch it very roughly: in C you feed your code to a compiler, which will try to optimize your code or at any rate convert it to machine code. This takes some time, but when you actually execute your program it is in machine code already and therefore executes very fast. Your compiler can take a lot of time trying to optimize the code for you; in general you don't care whether it takes 1 minute or 10 minutes to compile a distribution-ready program.
MATLAB (and other interpreted languages) don't generally work that way. When you execute your program, an interpreter will interpret each line of code and transform it into a sequence of machine code on the fly. This is a bit slower if you write for-loops, as it has to interpret the code over and over again (at least in principle; there are other overheads which may matter more for the newest versions of MATLAB). Here the hurdle is the fact that everything has to be done at runtime: the interpreter can perform some optimizations, but it is not useful to perform time-consuming optimizations that might increase performance a lot in some cases, as they will cause performance to suffer in most other cases.
You might ask what you gain by using MATLAB? You gain flexibility and clear semantics. When you want to do a matrix multiplication, you just write it as such; in C this would yield a double for loop. You have to worry very little about data types, memory management, ...
Behind the scenes, MATLAB uses compiled code (Fortran/C/C++ if I'm not mistaken) to perform large operations: so a matrix multiplication is really performed by a piece of machine code which was compiled from another language. For smaller operations this is the case as well, but you won't notice the speed of these calculations as most of your time is spent in management code (passing variables, allocating memory, ...).
To sum it all up: yes you should get used to such compact statements. If you see a line of code like Laurent's example, you immediately see that it computes a median of stack. Your code requires 11 lines of code to express the same, so when you are looking at code like yours (which might be embedded in hundreds of lines of other code), you will have a harder time understanding what is happening and pinpointing where a certain operation is performed.
To argue even further: you shouldn't program in MATLAB in the same way as you'd program in C/C++; nor should you do the other way round. Each language has its stronger and weaker points, learn to know them and use each language for what it's made for. E.g. you could write a whole compiler or webserver in MATLAB but in general that will be really slow as MATLAB was not intended to handle or concatenate strings (it can, but it might be very slow).
