Growing an R matrix inside a C loop

I have a routine that generates a series of data vectors, one iteration at a time. I would like to find a way to "grow" either a list or a matrix that holds these vectors. I tried to create a list,
PROTECT( myList = allocVector( VECSXP, 1 ) )
But is there a way to grow the list by pushing a vector element onto the end?
Also, I wouldn't mind using a matrix, since the vectors I generate are of the same length.

Rf_lengthgets, declared in Rinternals.h and implemented in builtin.c:lengthgets, will do this. The returned SEXP needs to be PROTECTed, so one pattern is
SEXP myList;
PROTECT_INDEX ipx;
PROTECT_WITH_INDEX(myList = allocVector( VECSXP, 1 ), &ipx);
REPROTECT(myList = Rf_lengthgets(myList, 100), ipx);
If one were growing a list based on some unknown stopping condition, the approach might mirror what one would do in R: pre-allocate and fill, then extend as needed; the following is pseudo-code:
const int BUF_SIZE = 100;
PROTECT_INDEX ipx;
SEXP myList, result;
int i, someCondition = 1;
PROTECT_WITH_INDEX(myList=allocVector(VECSXP, BUF_SIZE), &ipx);
for (i = 0; someCondition; ++i) {
    if (Rf_length(myList) == i) {
        const int len = Rf_length(myList) + BUF_SIZE;
        REPROTECT(myList = Rf_lengthgets(myList, len), ipx);
    }
    PROTECT(result = some_calculation());
    SET_VECTOR_ELT(myList, i, result);
    UNPROTECT(1);
    // set someCondition
}
myList = Rf_lengthgets(myList, i); // trim to the used length; no need to re-PROTECT, we're leaving C
UNPROTECT(1);
return myList;
This performs a deep copy of myList each time the list grows, so it can become expensive. If the main objective is to evaluate some_calculation, then it seems easier, and not too much less efficient, to do the pre-allocate and extend operations in an R loop, calling some_calculation and doing the assignment inside the loop.

This is IMHO a good example of where C++ beats C hands-down.
In C++, you can use an STL container (such as vector) and easily insert elements one at a time using push_back(). You never use malloc or free (or new and delete), and you never touch pointers. There is just no way to do that in C.
As well, you can make use of the Rcpp interface between R and C++ which makes getting the data you have grown in C++ over to R a lot easier.

Related

Is static const array preprocessed?

I have a lot of constants and they can be separated by groups, so I used some static const arrays of doubles.
But I need to do some calculations with this array. Therefore, I created another array that stores the calculated values - because I use them a lot.
However, I index these arrays a lot, so it gets very slow too; my code, by now, is O(6^n²) - with n between 1 and 12.
My question is: what is faster, redoing the same calculations many times, or indexing an array that stores the calculated values?
I thought about making a lot of defines (because I know they are preprocessed), but I can't index defines, and that would make the code extremely big and unclear.
short code (example)
const static double array1[12] = {2,4,6,8,...};
const static double array2[12] = {1,2,3,4,...};
...
// in some function
{
    ...
    double stored1[12];
    for(int i = start1; i < end1; i++)
        stored1[i] = array1[i] + i*array1[i-1];
    for(int i = start2; i < end2; i++)
        stored1[i] = array2[i] + i*array2[i-1];
    // then, I'll have to index stored1 a lot of times - or create 12 auxiliary variables
    // when I need to use those arrays of stored values
    // I use these values in loops of loops of loops (they are some summations)
    // I don't use loops, but recursive functions to make these loops; in both
    // cases, I have to index this array a lot of times (or redo the same calculations)
    ...
}
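A rough sketch of the precompute-once approach described above (the array contents, the stored1 name reuse, and the init guard are illustrative, not taken from the real code): compute the derived values a single time, then only index them afterwards.
// illustrative values; the real tables come from the original constants
static const double array1[12] = {2, 4, 6, 8 /* , ... */};

static double stored1[12];   // computed once, then only indexed
static int stored1_ready = 0;

static void init_stored1(void)
{
    if (stored1_ready)
        return;
    stored1[0] = array1[0];
    for (int i = 1; i < 12; i++)
        stored1[i] = array1[i] + i * array1[i - 1];
    stored1_ready = 1;
}

// call init_stored1() once before the recursion starts; every later access is a plain array index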

Optimising C for performance vs memory optimisation using multidimensional arrays

I am struggling to decide between two optimisations for building a numerical solver for the Poisson equation.
Essentially, I have a two dimensional array, of which I require n doubles in the first row, n/2 in the second, n/4 in the third, and so on...
Now my difficulty is deciding whether or not to use a contiguous 2d array grid[m][n], which for a large n would have many unused zeroes but would probably reduce the chance of a cache miss. The other, and more memory efficient method, would be to dynamically allocate an array of pointers to arrays of decreasing size. This is considerably more efficient in terms of memory storage but would it potentially hinder performance?
I don't think I clearly understand the trade-offs in this situation. Could anybody help?
For reference, I made a nice plot of the memory requirements in each case:
There is no hard and fast answer to this one. If your algorithm needs more memory than you expect to be given then you need to find one which is possibly slower but fits within your constraints.
Beyond that, the only option is to implement both and then compare their performance. If saving memory results in a 10% slowdown, is that acceptable for your use? If the version using more memory is 50% faster but only runs on the biggest computers, will it be used? These are the questions that we have to grapple with in Computer Science. But you can only look at them once you have numbers. Otherwise you are just guessing, and a fair amount of the time our intuition when it comes to optimization is not correct.
Build a custom array that will follow the rules you have set.
The implementation will use a simple 1d contiguous array. You will need a function that returns the start of a given row within that array. Something like this:
int* Get( int* array , int n , int row ) //might contain logical errors
{
    int pos = 0 ;
    while( row-- )
    {
        pos += n ;
        n /= 2 ;
    }
    return array + pos ;
}
Where n is the same n you described and is rounded down on every iteration.
You will have to call this function only once per entire row.
This function will never take more than O(log n) time, but if you want you can replace it with a single expression: http://en.wikipedia.org/wiki/Geometric_series#Formula
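For illustration, here is one way that closed form can look (this assumes n is a power of two so the repeated halving is exact; with integer rounding the loop above and this formula can differ slightly): the start of row r is n + n/2 + ... + n/2^(r-1) = 2n - n/2^(r-1).
// closed-form row offset, assuming n is a power of two so the halving is exact
int GetRowStart( int n , int row )
{
    if( row == 0 )
        return 0 ;
    // geometric series: n + n/2 + ... + n/2^(row-1) = 2n - n/2^(row-1)
    return 2 * n - ( n >> ( row - 1 ) ) ;
}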
You could use a single array and just calculate your offset yourself
size_t get_offset(int n, int row, int column) {
    size_t offset = column;
    while (row--) {
        offset += n;
        n /= 2;
    }
    return offset;
}
double *array = calloc(get_offset(n, 64, 0), sizeof(double));
access via
array[get_offset(n, row, column)]

C initializing a (very) large integer array with values corresponding to index

Edit3: Optimized by limiting the initialization of the array to only odd numbers. Thank you @Ronnie!
Edit2: Thank you all, seems as if there's nothing more I can do for this.
Edit: I know Python and Haskell are implemented in other languages and more or less perform the same operation I have below, and that the compiled C code will beat them out any day. I'm just wondering if standard C (or any libraries) have built-in functions for doing this faster.
I'm implementing a prime sieve in C using Eratosthenes' algorithm and need to initialize an integer array of arbitrary size n from 0 to n. I know that in Python you could do:
integer_array = range(n)
and that's it. Or in Haskell:
integer_array = [1..n]
However, I can't seem to find an analogous method implemented in C. The solution I've come up with initializes the array and then iterates over it, assigning each value to the index at that point, but it feels incredibly inefficient.
int init_array()
{
    /*
     * assigning upper_limit manually in function for now, will expand to take value for
     * upper_limit from the command line later.
     */
    int upper_limit = 100000000;
    int size = floor(upper_limit / 2) + 1;
    int *int_array = malloc(sizeof(int) * size);
    // debug macro, basically replaces assert(), disregard.
    check(int_array != NULL, "Memory allocation error");
    int_array[0] = 0;
    int_array[1] = 2;
    int i;
    for(i = 2; i < size; i++) {
        int_array[i] = (i * 2) - 1;
    }
    // checking some arbitrary point in the array to make sure it assigned properly.
    // the value at any index 'i' should equal (i * 2) - 1 for i >= 2
    printf("%d\n", int_array[1000]); // should equal 1999
    printf("%d\n", int_array[size-1]); // should equal 99999999
    free(int_array);
    return 0;
error:
    return -1;
}
Is there a better way to do this? (no, apparently there's not!)
The solution I've come up with initializes the array and then iterates over it, assigning each value to the index at that point, but it feels incredibly inefficient.
You may be able to cut down on the number of lines of code, but I do not think this has anything to do with "efficiency".
While there is only one line of code in Haskell and Python, what happens under the hood is the same thing as your C code does (in the best case; it could perform much worse depending on how it is implemented).
There are standard library functions to fill an array with constant values (and they could conceivably perform better, although I would not bet on that), but this does not apply here.
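For example (a minimal illustration, not from the original post): memset can fill a buffer with a repeated byte, which is fine for zeroing, but there is no standard C routine that writes 0, 1, 2, ... into an array, so the explicit loop stays.
#include <string.h>

void fill_examples(void)
{
    int buf[1000];

    // constant fill: every byte set to 0, so every int becomes 0
    memset(buf, 0, sizeof buf);

    // index-dependent values: no standard library call does this, so a loop is needed
    for (int i = 0; i < 1000; i++)
        buf[i] = i;
}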
Here a better algorithm is probably a better bet in terms of optimising the allocation:
Halve the size of int_array by taking advantage of the fact that you'll only need to test odd numbers in the sieve (see the sketch below)
Run this through some wheel factorisation for the numbers 3, 5, 7 to reduce the subsequent comparisons by 70%+
That should speed things up.
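A rough sketch of the odds-only idea (the index mapping i <-> 2*i + 1 and the names are one possible choice, not the original poster's; the wheel-factorisation step is not shown):
#include <stdio.h>
#include <stdlib.h>

// sieve storing flags only for odd numbers: index i represents the value 2*i + 1
void sieve_odds(int upper_limit)
{
    int size = upper_limit / 2;                 // flags for 1, 3, 5, ..., < upper_limit
    char *is_composite = calloc(size, 1);
    if (is_composite == NULL)
        return;

    for (int i = 1; (2 * i + 1) * (2 * i + 1) < upper_limit; i++) {
        if (is_composite[i])
            continue;
        int p = 2 * i + 1;
        for (long long m = (long long)p * p; m < upper_limit; m += 2 * p)
            is_composite[(m - 1) / 2] = 1;      // m is odd, so (m - 1) / 2 is its index
    }

    printf("2\n");                              // 2 is the only even prime
    for (int i = 1; i < size; i++)
        if (!is_composite[i])
            printf("%d\n", 2 * i + 1);

    free(is_composite);
}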

Maintain a sorted array that a separate, iterative function can keep accessing

I'm writing code for a decision tree in C. Right now it gives me the correct result (0% training error, low test error), but it takes a long time to run.
The problem lies in how often I run qsort. My basic algorithm is this:
for every feature
    sort that feature column using qsort
    remove duplicate feature values in that column
    for every unique feature value
        split
        determine entropy given that split
save the best feature to split + split value
for every training_example
    if training_example's value for best feature < best split value, store in Left[]
    else store in Right[]
recursively call this function, using only the Left[] training examples
recursively call this function, using only the Right[] training examples
Because the last two lines are recursive calls, and because the tree can extend for dozens and dozens of branches, the number of calls to qsort is huge (especially for my dataset that has > 1000 features).
My idea to reduce the runtime is to create a 2d array (in a separate function) where each column is a sorted feature column. Then, as long as I maintain a vector of row numbers of the training examples in Left[] and Right[] for each recursive call, I can just call this separate function, grab the rows I want in the pre-sorted feature vector, and save the cost of having to qsort each time.
I'm fairly new to C and so I'm not sure how to code this. In MatLab I can just have a global array that any function can change or access; I'm looking for something like that in C.
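A rough sketch of the pre-sorted-columns idea described above (all names are illustrative, not from the actual code): sort an array of row indices per feature once, comparing through a pointer to the feature column, and reuse those index arrays in every recursive call instead of re-running qsort.
#include <stdlib.h>

// column currently being sorted; standard qsort's comparator takes no context
// argument, so a file-scope pointer is one common workaround
static const double *g_column;

static int cmp_by_column(const void *a, const void *b)
{
    double va = g_column[*(const int *)a];
    double vb = g_column[*(const int *)b];
    return (va > vb) - (va < vb);
}

// sorted_idx[f][k] = row index of the k-th smallest value of feature f;
// computed once up front, then every recursive call just walks these arrays
void presort_features(double **columns, int n_features, int n_rows, int **sorted_idx)
{
    for (int f = 0; f < n_features; f++) {
        for (int r = 0; r < n_rows; r++)
            sorted_idx[f][r] = r;
        g_column = columns[f];
        qsort(sorted_idx[f], n_rows, sizeof(int), cmp_by_column);
    }
}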
Global arrays in C are totally possible. There are actually two ways of doing that. In the first case the dimensions of the array are fixed for the application:
#define NROWS 100
#define NCOLS 100
int array[NROWS][NCOLS];
int main(void)
{
    int i, j;
    for (i = 0; i < NROWS; i++)
        for (j = 0; j < NCOLS; j++)
        {
            array[i][j] = i+j;
        }
    return 0;
}
In the second example the dimensions may depend on values from the input.
#include <stdlib.h>
int **array;
int main(void)
{
    int nrows = 100;
    int ncols = 100;
    int i, j;
    array = malloc(nrows*sizeof(*array));
    for (i = 0; i < nrows; i++)
    {
        array[i] = malloc(ncols*sizeof(*(array[i])));
        for (j = 0; j < ncols; j++)
        {
            array[i][j] = i+j;
        }
    }
}
Although the access to the arrays in both examples looks deceivingly similar, the implementation of the arrays is quite different. In the first example the array is located in one contiguous piece of memory, and the stride from one row to the next is a whole row. In the second example each row access goes through a pointer to a row, which is its own piece of memory; the various rows can be located in different areas of memory. In the second example rows may also have different lengths, in which case you would need to store the length of each row somewhere too.
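A minimal sketch of that second, jagged layout with the row lengths stored alongside the rows (the names and the missing error checks are for illustration only):
#include <stdlib.h>

// jagged 2-D array: each row has its own length, stored next to the row pointer
struct jagged {
    int nrows;
    int *row_len;    // row_len[i] = number of columns in row i
    double **rows;   // rows[i] points to row_len[i] doubles
};

struct jagged *jagged_alloc(int nrows, const int *lengths)
{
    struct jagged *j = malloc(sizeof *j);
    j->nrows = nrows;
    j->row_len = malloc(nrows * sizeof *j->row_len);
    j->rows = malloc(nrows * sizeof *j->rows);
    for (int i = 0; i < nrows; i++) {
        j->row_len[i] = lengths[i];
        j->rows[i] = calloc(lengths[i], sizeof *j->rows[i]);
    }
    return j;
}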
I don't fully understand what you are trying to achieve, because I'm not familiar with the terminology of decision tree, feature and the standard approaches to training sets. But you may also want to have a look at other data structures to maintain sorted data:
http://en.wikipedia.org/wiki/Red-black_tree maintains a more or less balanced and sorted tree.
AVL tree: a bit slower, but a more balanced and sorted tree.
Trie: a sorted tree on lists of elements.
Hash function: easily maps a complex element to an integral value that can be used to sort the elements. Good for finding exact elements, but there is no real order in the elements themselves.
P.S.: Coming from Matlab, you may want to consider a language other than C to move to. C++ has standard libraries to support the above data structures. Java or Python come to mind, or even Haskell if you are daring. Pointer handling in C can be quite tedious and error-prone.

Optimizing C loops

I'm new to C from many years of Matlab for numerical programming. I've developed a program to solve a large system of differential equations, but I'm pretty sure I've done something stupid as, after profiling the code, I was surprised to see three loops that were taking ~90% of the computation time, despite the fact they are performing the most trivial steps of the program.
My question is in three parts based on these expensive loops:
Initialization of an array to zero. When J is declared to be a double array, are the values of the array initialized to zero? If not, is there a fast way to set all the elements to zero?
void spam(){
    double J[151][151];
    /* Other relevant variables declared */
    calcJac(data,J,y);
    /* Use J */
}

static void calcJac(UserData data, double J[151][151],N_Vector y)
{
    /* The first expensive loop */
    int iter, jter;
    for (iter=0; iter<151; iter++) {
        for (jter = 0; jter<151; jter++) {
            J[iter][jter] = 0;
        }
    }
    /* More code to populate J from data and y that runs very quickly */
}
During the course of solving I need to solve matrix equations defined by P = I - gamma*J. The construction of P is taking longer than solving the system of equations it defines, so something I'm doing is likely in error. In the relatively slow loop below, is accessing a matrix that is contained in a structure 'data' the slow component, or is it something else about the loop?
for (iter = 1; iter<151; iter++) {
    for(jter = 1; jter<151; jter++){
        P[iter-1][jter-1] = - gamma*(data->J[iter][jter]);
    }
}
Is there a best practice for matrix multiplication? In the loop below, Ith(v,iter) is a macro for getting the iter-th component of a vector held in the N_Vector structure 'v' (a data type used by the Sundials solvers). Particularly, is there a best way to get the dot product between v and the rows of J?
Jv_scratch = 0;
int iter, jter;
for (iter=1; iter<151; iter++) {
    for (jter=1; jter<151; jter++) {
        Jv_scratch += J[iter][jter]*Ith(v,jter);
    }
    Ith(Jv,iter) = Jv_scratch;
    Jv_scratch = 0;
}
1) No, they're not. You can memset the array as follows:
memset( J, 0, sizeof( double ) * 151 * 151 );
or you can use an array initialiser:
double J[151][151] = { 0.0 };
2) Well, you are using a fairly complex calculation to compute the position of P and the position of J.
You may well get better performance by stepping through them as pointers:
for (iter = 1; iter<151; iter++)
{
    double* pP = &P[iter-1][0];
    double* pJ = &data->J[iter][1];
    for(jter = 1; jter<151; jter++, pP++, pJ++ )
    {
        *pP = - gamma * *pJ;
    }
}
This way you move most of the array index calculation outside of the inner loop.
3) The best practice is to try and move as many calculations out of the loop as possible. Much like I did on the loop above.
First, I'd advise you to split up your question into three separate questions. It's hard to answer all three; I, for example, have not worked much with numerical analysis, so I'll only answer the first one.
First, variables on the stack are not initialized for you. But there are faster ways to initialize them. In your case I'd advise using memset:
static void calcJac(UserData data, double J[151][151],N_Vector y)
{
    memset((void*)J, 0, sizeof(double) * 151 * 151);
    /* More code to populate J from data and y that runs very quickly */
}
memset is a fast library routine to fill a region of memory with a specific pattern of bytes. It just so happens that setting all bytes of a double to zero sets the double to zero, so take advantage of your library's fast routines (which will likely be written in assembler to take advantage of things like SSE).
Others have already answered some of your questions. On the subject of matrix multiplication: it is difficult to write a fast algorithm for this unless you know a lot about cache architecture and so on (the slowness is caused by the order in which you access array elements, which can produce thousands of cache misses).
You can try Googling for terms like "matrix-multiplication", "cache", "blocking" if you want to learn about the techniques used in fast libraries. But my advice is to just use a pre-existing maths library if performance is key.
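As a rough illustration of the "blocking" idea those search terms point at (the block size and names are made up for this example; a tuned maths library will still do better): work on small tiles so the parts of the operands being reused stay in cache.
#define N 151
#define BLOCK 32   // illustrative block size; tune for the target cache

// blocked (tiled) matrix multiply C = A * B
void matmul_blocked(const double A[N][N], const double B[N][N], double C[N][N])
{
    int i, j, k, ii, jj, kk;
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            C[i][j] = 0.0;

    // process BLOCK x BLOCK tiles so the tiles of A and B stay in cache while reused
    for (ii = 0; ii < N; ii += BLOCK)
        for (kk = 0; kk < N; kk += BLOCK)
            for (jj = 0; jj < N; jj += BLOCK)
                for (i = ii; i < N && i < ii + BLOCK; i++)
                    for (k = kk; k < N && k < kk + BLOCK; k++)
                        for (j = jj; j < N && j < jj + BLOCK; j++)
                            C[i][j] += A[i][k] * B[k][j];
}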
Initialization of an array to zero. When J is declared to be a double array, are the values of the array initialized to zero? If not, is there a fast way to set all the elements to zero?
It depends on where the array is allocated. If it is declared at file scope, or as static, then the C standard guarantees that all elements are set to zero. The same is guaranteed if you set the first element to a value upon initialization, ie:
double J[151][151] = {0}; /* set first element to zero */
By setting the first element to something, the C standard guarantees that all other elements in the array are set to zero, as if the array were statically allocated.
Practically for this specific case, I very much doubt it will be wise to allocate 151*151*sizeof(double) bytes on the stack no matter which system you are using. You will likely have to allocate it dynamically, and then none of the above matters. You must then use memset() to set all bytes to zero.
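A minimal sketch of that dynamic route (calloc already returns all-bytes-zero memory, which is 0.0 for doubles on common IEEE-754 platforms; malloc plus memset achieves the same):
#include <stdlib.h>
#include <string.h>

void example(void)
{
    // allocate the 151 x 151 matrix on the heap instead of the stack
    double (*J)[151] = malloc(sizeof(double[151][151]));
    if (J != NULL)
        memset(J, 0, sizeof(double[151][151]));

    // alternative: calloc hands back memory that is already zeroed
    double (*J2)[151] = calloc(151, sizeof(double[151]));

    /* ... use J and J2 ... */

    free(J);
    free(J2);
}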
In the relatively slow loop below, is accessing a matrix that is contained in a structure 'data' the slow component, or is it something else about the loop?
You should ensure that the function called from it is inlined. Otherwise there isn't much else you can do to optimize the loop: what is optimal is highly system-dependent (ie how the physical cache memories are built). It is best to leave such optimization to the compiler.
You could of course obfuscate the code with manual optimization things such as counting down towards zero rather than up, or to use ++i rather than i++ etc etc. But the compiler really should be able to handle such things for you.
As for matrix addition, I don't know of the mathematically most efficient way, but I suspect it is of minor relevance to the efficiency of the code. The big time thief here is the double type. Unless you really have need for high accuracy, I'd consider using float or int to speed up the algorithm.
