Using a MATLAB Coder generated algorithm for production - C

I have a fine-tuned algorithm in MATLAB that operates on matrices (of course). I've used MATLAB Coder to generate C code for this algorithm, and it works as expected.
Here's a function call that I used in MATLAB:
x = B/A
wherein
B is of size 1*500 (rows * columns)
A is of size 10*500
x, the result is of size 1*10
When this is converted into C source using MATLAB Coder, I noticed that the function definition accepts parameters with the same sizes as above:
void myfunction(const double B[500], const double A[5000], double x[10])
For prototyping and testing purposes this seems okay. However, in production I'd prefer this function to be usable for different sizes too; for example, 100 instead of 500 in the above-mentioned variables should also work. How can I remove the dependence on matrix dimensions in my algorithm?
Additionally, there are a few lines of code that use hard-coded constants. For example, there is code like
if (rankR <= 1.4903363393874656E-8)
// Some internal function calls
else
// Usage of standard sqrt
or
500.0 * fabs(A[0]) * 2.2204460492503131E-16
Could anyone explain what these hard-coded constants are? Are they generated from the test data that I used in MATLAB?

If the function call you refer to is the entry-point function, you can define the sizes when setting up Coder. The simplest way to run Coder is via the GUI from the 'Apps' menu inside MATLAB (or type 'coder' at the console). After specifying the entry-point function, step 2 is to define the type and size of each input variable.
For each dimension of an input variable (there can be more than 2 if necessary), you can specify one of the following:
n - dimension is exactly n long
:n - dimension is up to n long
inf - dimension is unbounded
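For example, with B declared as 1 x :500 (bounded variable-size), the generated prototype typically turns each such input into a data/size pair, along these lines (a sketch; exact names depend on your project settings):
void myfunction(const double B_data[], const int B_size[2],
                const double A_data[], const int A_size[2], double x[10]);
With unbounded (inf) dimensions, Coder instead emits dynamically allocated emxArray_* types, which you create and destroy through the generated emxAPI functions.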
If the function call is not the entry-point function, and is buried inside your code (or if you are running the codegen function from the console), you can explicitly define variables as being of varying size:
coder.varsize('myVariableName');
Bear in mind that some functions can only be used (with Coder) with fixed-sized inputs.
Fuller description here:
http://uk.mathworks.com/help/fixedpoint/ug/defining-variable-size-data-for-code-generation.html#br9t627
As for the hard-coded constants: they don't come from your test data. 2.2204460492503131E-16 is the double-precision machine epsilon (eps in MATLAB, DBL_EPSILON in C), and the rank threshold around 1.49E-8 is on the order of sqrt(eps); both are standard numerical tolerances for deciding when a value is effectively zero. The 500.0 is your matrix dimension baked in at code-generation time; declaring the inputs variable-size replaces it with the runtime dimension.
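A quick C check of where these numbers come from (a sketch; the sqrt(eps) provenance of the rank threshold is an assumption on my part, not something the generated code documents):
#include <assert.h>
#include <float.h>
#include <math.h>

int main(void) {
    /* 2.2204460492503131E-16 is machine epsilon for IEEE doubles, printed
       with 17 significant digits so it round-trips exactly. */
    assert(2.2204460492503131E-16 == DBL_EPSILON);

    /* The rank threshold is on the order of sqrt(eps), a common cutoff
       for treating values as numerically zero (assumed provenance). */
    assert(fabs(1.4903363393874656E-8 - sqrt(DBL_EPSILON)) < 1e-11);
    return 0;
}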

Related

IGRAPH_VECTOR_INIT_FINALLY and igraph_vector_cumsum

So I'm looking at the source code of the igraph library for C, because I need to create a new type of graph that is not included in the library but is related to the fitness model for scale-free networks. While reading the code responsible for building such a graph, I found that these functions are called on many occasions:
(void) IGRAPH_VECTOR_INIT_FINALLY(igraph_vector*,long_int);
(void) igraph_vector_cumsum(igraph_vector*,igraph_vector*);
I can't seem to locate them in the source folder, and I've searched online but can't find what they do. For example, in one portion of the code I have:
/* Calculate the cumulative fitness scores */
IGRAPH_VECTOR_INIT_FINALLY(&cum_fitness_out, no_of_nodes);
IGRAPH_CHECK(igraph_vector_cumsum(&cum_fitness_out, fitness_out));
max_out = igraph_vector_tail(&cum_fitness_out);
p_cum_fitness_out = &cum_fitness_out;
where cum_fitness_out is an empty vector, no_of_nodes is the number of nodes, IGRAPH_CHECK is a macro that checks the return value of igraph_vector_cumsum, and igraph_vector_tail returns the last element of a vector...
IGRAPH_VECTOR_INIT_FINALLY(vector, size) is a shorthand for:
IGRAPH_CHECK(igraph_vector_init(vector, size));
IGRAPH_FINALLY(igraph_vector_destroy, vector);
Basically, it initializes a vector with the given number of undefined elements, checks whether the memory allocation was successful, and then puts the vector on top of the so-called finally stack that contains the list of pointers that should be destroyed in case of an error in the code that follows. More information about IGRAPH_FINALLY can be found here.
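In practice the pattern looks like this (a minimal sketch of an igraph-style function of your own, not code from the library; IGRAPH_FINALLY_CLEAN pops entries off the finally stack once you have freed them yourself):
#include <igraph.h>

int compute_something(long int no_of_nodes) {
    igraph_vector_t cum_fitness_out;

    /* allocate and register for cleanup-on-error in one step */
    IGRAPH_VECTOR_INIT_FINALLY(&cum_fitness_out, no_of_nodes);

    /* ... calls wrapped in IGRAPH_CHECK may fail here; if one does, the
       finally stack destroys cum_fitness_out automatically ... */

    igraph_vector_destroy(&cum_fitness_out);
    IGRAPH_FINALLY_CLEAN(1);  /* success: remove it from the finally stack */
    return IGRAPH_SUCCESS;
}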
igraph_vector_cumsum() calculates the cumulative sum of a vector; its source can be found in src/vector.pmt. .pmt stands for "poor man's templates"; it is essentially a C source file with a bunch of macros that allow the library to quickly "generate" the same data type (e.g., vectors) for different base types (integers, doubles, Booleans etc) with some macro trickery.
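The idea can be sketched like this (hypothetical names, not igraph's actual macros):
/* contents of a "template" compiled once per base type; the includer
   defines TYPE and NAME before including the .pmt file */
#define TYPE double
#define NAME(suffix) vec_double##suffix

typedef struct { TYPE *data; long size; } NAME(_t);

static void NAME(_cumsum)(NAME(_t) *res, const NAME(_t) *v) {
    TYPE running = 0;
    for (long i = 0; i < v->size; i++) {
        running += v->data[i];    /* identical code for any numeric TYPE */
        res->data[i] = running;
    }
}

#undef TYPE
#undef NAME
/* redefining TYPE as int and NAME(suffix) as vec_int##suffix and repeating
   the block yields an integer vector with the same operations */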

Integrate C function with multiple outputs built with MATLAB Coder

I have been coding in MATLAB and managed to convert my work to a single function; however, it has several inputs and outputs. For the sake of simplicity, let's say it receives three inputs: X (a vector, read-only), Y (a vector, read and write) and Z (a scalar, write-only). Thanks to the reply here, I was able to understand that I must create variables with special MATLAB types in order to pre-allocate space and then pass them as parameters to my function in the C code.
An initial version with a single scalar output (Z) worked as expected, but taking the next step towards having multiple outputs has raised some questions. I'll try to be as concise as possible. Here are the headers of my function in MATLAB and in C once I change Z to a vector:
[Y,Z]=foo(X,Y)
void foo(const unsigned int *X, float Y[n_Y], float Z[n_Z])
These are my doubts so far.
1 - I would expect that if Z is only created inside the function, it should not appear as a parameter of the C function. What should I do with it in order to obtain it outside the function? My idea would be to provide a dummy variable with the same name that would later be overwritten.
2 - If Y is being changed, then the function should receive a pointer to Y. Is it being updated this way, as it should be?
3 - Right now the dimensions of X are set as (1x:inf), which causes the pointer to show up. If I change to a smaller, realistic bound, that single input turns into two, although nothing else changed (the variable creation in C is independent). Now there is const unsigned int X_data[], const int X_size[2] instead of just const unsigned int *X. How should I deal with this within the C code?
The call to the function in C is being made as follows:
emxArray_uint32_T *X=emxCreate_uint32_T(1,n_X);
static emxArray_uint32_T *Y=emxCreate_real32_T(1,n_Y), *Z=emxCreate_real32_T(1,n_Z);
foo(X,&Y,&Z);
emxDestroyArray_uint32_T(X);
I should say that I have not tried to compile the latest steps, since I need a specific environment (the laboratory) to do so. However, when I have access to it, the code needs to be almost ready to go. Also, without resolving these doubts I don't think I should try anyway; if it somehow works and I don't understand why, that's the same as not working.
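To make doubt 3 concrete, here is the kind of call I believe the data/size pair implies (a sketch with hypothetical sizes and names; I have not compiled it):
#define N_X_MAX 100   /* upper bound declared in Coder, e.g. 1 x :100 */
#define N_Y 50
#define N_Z 50

void foo(const unsigned int X_data[], const int X_size[2],
         float Y[N_Y], float Z[N_Z]);   /* assumed generated prototype */

void call_foo(void) {
    unsigned int X_data[N_X_MAX];  /* backing storage up to the bound */
    int X_size[2] = { 1, 42 };     /* actual dimensions used this call */
    float Y[N_Y] = { 0 };          /* caller-allocated output storage */
    float Z[N_Z];

    /* fill X_data[0 .. X_size[1]-1] here ... */
    foo(X_data, X_size, Y, Z);     /* arrays decay to pointers; no & and
                                      no emxArray needed for bounded sizes */
}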

R external interface

I would like to implement an R package written in C.
The C code must:
take an array (of any type) as input.
produce an array (of unpredictable size) as output.
What is the best practice of implementing array passing?
At the moment the C code is called with .C(). It accesses the array directly from R through a pointer. Unfortunately, the same can't be done for the output, as the output dimensions need to be known in advance, which is not the case for me.
Would it make sense to pass the array from C to R through a file? For example, in ramfs, when using Linux?
Update 1:
The exact same problem was discussed here.
There, a possible option for returning an array with unknown dimensions was mentioned:
Split the external function into two at the point before the array is computed but after its dimensions are known. The first part would return the dimensions, then an empty array would be allocated, and the second part would run and populate the array in R.
In my case the full dimensions are known only once the whole computation has run, so this method would mean running the C code twice. Guessing a maximal array size isn't an option either.
Update 2: It seems the only way to do this is to use .Call() instead, as power suggested. Here are a few good examples: http://www.sfu.ca/~sblay/R-C-interface.ppt.
Thanks.
What is the best practice of implementing array passing?
Is the package already written in ANSI C? .C() would then be quick and easy.
If you are writing from scratch, I suggest .Call() and Rcpp. In this way, you can pass R objects to your C/C++ code.
Would it make sense to pass array through a file?
No
Read "Writing R Extensions".

Passing C arrays into Fortran as a variable-sized matrix

So, I've been commissioned to translate some Fortran subroutines into C. These subroutines are called as part of the control flow of a large program written primarily in C.
I am translating the functions one at a time, starting with the functions that are found at the top of call stacks.
The problem I am facing is the hand-off of array data from C to Fortran.
Suppose we have declared an array in C as
int* someCArray = (int*)malloc( 50 * 4 * sizeof(int) );
Now, this array needs to be passed down into a Fortran subroutine to be filled with data:
someFortranFunc( someCArray, someOtherParams );
When the array arrives in Fortran land, it is declared as a variable-sized matrix like so:
subroutine somefortranfunc(somecarray,someotherparams)
integer somefarray(50,*)
The problem is that Fortran doesn't seem to size the array correctly, because the program seg-faults. When I debug the program, I find that indexing
somefarray(1,2)
reports that this is an invalid index. Any reference to an item in the first column works fine, but there is only one available column in the array when it arrives in Fortran.
I can't really change the fact that this is a variable-sized array in Fortran. Can anyone explain what is happening here, and is there a way I can mitigate the problem from the C side of things?
[edit]
By the way, the Fortran subroutine is being called from the replaced Fortran code as
integer somedatastorage(plentybignumber)
integer someindex
...
call somefortranfunc(somedatastorage(someindex))
where the data storage is a large 1D array. There isn't a problem with overrunning the size of the data storage. Somehow, though, the difference between passing the C array and the Fortran (sub)array causes a difference in behavior inside the Fortran subroutine.
Thanks!
Have you considered the Fortran ISO C Binding? I've had very good results with it to interface Fortran and C in both directions. My preference is to avoid rewriting existing, tested code. There are a few types that can't be transferred with the current version of the ISO C Binding, so a translation might be necessary.
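For a case like this the binding is small. A minimal sketch, assuming you can add a bind(c) interface on the Fortran side (the Fortran is shown only inside the comment so the example stays in C):
/* Fortran side (shown here as a comment), added to the existing subroutine:
 *   subroutine somefortranfunc(somefarray) bind(c)
 *     use iso_c_binding
 *     integer(c_int) :: somefarray(50,*)
 *     ...
 *   end subroutine
 */
#include <stdlib.h>

void somefortranfunc(int *somefarray);  /* bind(c): no name mangling, no
                                           hidden arguments */

int main(void) {
    /* 50 x 4 in Fortran terms: Fortran is column-major, so element (i,j)
       lives at somefarray[(i-1) + (j-1)*50] from C's point of view */
    int *someCArray = malloc(50 * 4 * sizeof(int));
    somefortranfunc(someCArray);
    free(someCArray);
    return 0;
}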
What it shouldn't be, despite what others suggested:
1. The size of a C int vs. a Fortran INTEGER, since the first column has the right values.
2. Row vs. column ordering; you would just get the values in the wrong order, not a segmentation fault.
3. Passing by reference vs. by value, since the first column has the right values. Unless the compiler is doing something evil behind your back.
Are you sure you don't do something like this somewhere along the way:
someCArray++
Print out the value of the someCArray pointer right after you create it and right before you pass it. You should also print it out using the debugger in the Fortran code, just to verify that the compiler is not generating temporary copies to "help" you.

Passing Numpy arrays to C code wrapped with Cython

I have a small bit of existing C code that I want to wrap using Cython. I want to be able to set up a number of numpy arrays and then pass those arrays as arguments to the C code, whose functions take standard C arrays (1D and 2D). I'm a little stuck figuring out how to write the proper .pyx code to handle things.
There are a handful of functions, but a typical function in the file funcs.h looks something like:
double InnerProduct(double *A, double **coords1, double **coords2, const int len)
I then have a .pyx file that has a corresponding line:
cdef extern from "funcs.h":
double InnerProduct(double *A, double **coords1, double **coords2, int len)
where I got rid of the const because Cython doesn't support it. Where I'm stuck is what the wrapper code should look like to pass an MxN numpy array to the **coords1 and **coords2 arguments.
I've struggled to find the correct documentation or tutorials for this type of problem. Any suggestions would be most appreciated.
You probably want Cython's "typed memoryviews" feature, which you can read about in full gory detail here. This is basically the newer, more unified way to work with numpy or other arrays. These can be exposed in Python-land as numpy arrays, or you can export them to Python (for example, here). You have to pay attention to how the striding works and make sure you're consistent about e.g. C-contiguous vs. FORTRAN-like arrays, but the docs are pretty clear on how to do that.
Without knowing a bit more about your function it's hard to be more concrete on exactly the best way to do this - i.e., is the C function read-only with respect to the arrays? (I think yes, based on the signature you gave, but I'm not 100% sure.) If so, you don't need to worry if a copy is made to obtain a C-contiguous layout, because the C function doesn't need to write back to the Python-level numpy array. Typed memoryviews will let you do any of this with a minimum of fuss.
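Whichever Cython mechanism you use, the C-level obstacle is the same: InnerProduct takes double**, while a numpy array is a single contiguous block, so a small array of row pointers has to be built before the call. A sketch of that adapter in plain C (whether len means the number of rows is my assumption):
#include <stdlib.h>

double InnerProduct(double *A, double **coords1, double **coords2, const int len);

double call_inner_product(double *A, double *c1, double *c2, int M, int N) {
    /* build row-pointer tables into the contiguous M x N blocks */
    double **coords1 = malloc(M * sizeof *coords1);
    double **coords2 = malloc(M * sizeof *coords2);
    for (int i = 0; i < M; i++) {
        coords1[i] = c1 + (size_t)i * N;   /* pointer to row i */
        coords2[i] = c2 + (size_t)i * N;
    }
    double result = InnerProduct(A, coords1, coords2, M);
    free(coords1);
    free(coords2);
    return result;
}
In Cython the same pattern applies: allocate a temporary double** with malloc and point each entry at row i of the C-contiguous buffer.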
The Cython interface code should be created according to the tutorial given here.
To get a C pointer to the data in a numpy array, you should use the ctypes attribute of the numpy array, which is described here.
