I am using the legacy_code tool in MATLAB to generate some S-Functions, and I then want those S-Functions to be analyzed by the Simulink Coverage toolbox.
I am also asking here because this may be a C issue rather than something MATLAB-specific.
When generating the S-Functions with the legacy_code tool, I set the flag that enables the coverage toolbox to true:
def.Options.supportCoverage = true;
But I get the following error at compilation. I am using the MinGW 64-bit compiler for MATLAB on Windows.
“lib_control.c", line 254: error: bad expr node kind (b:\matlab\polyspace\src\shared\cxx_front_end_kernel\edg\src\cp_gen_be.c, line 14084)
Warning: File "lib_control.c" not instrumented for coverage because of previous error
In codeinstrum.internal.LCInstrumenter/instrumentAllFiles
In codeinstrum.internal.SFcnInstrumenter/instrument
In slcovmexImpl
In slcovmex (line 48)
In legacycode.LCT/compile
In legacycode.LCT.legacyCodeImpl
In legacy_code (line 101)
In generate_sfun (line 70)
In the C code I have functions of the following kind:
void controller( int n_var,
                 double my_input,
                 double my_output )
{
    double my_var[n_var];
    for ( int i = 0; i < n_var; i++ )
    {
        my_output = my_input + my_var[i];
    }
}
The compiler is complaining about this line:
double my_var[n_var];
Do I have to declare variables like this in some other way so that they can be included in the coverage analysis?
Is this error coming from MATLAB, or is it a C error from the instrumentation of the files?
If I compile without the coverage flag there are no problems and the S-Function is generated without warnings.
It seems your code has a couple of issues.
First, try declaring my_var like this:
double *my_var = malloc(n_var * sizeof(double));
memset(my_var, 0, n_var * sizeof(double));
This allocates the array on the heap based on the function parameter instead of using a variable-length array (remember to free(my_var) before returning).
There is also a second issue, with this line:
my_output = my_input + my_var[i];
The correct form is:
*my_output = *my_input + my_var[i];
You are assigning to a parameter, and in C parameters are passed by value: the function only gets a local copy on the stack, which is discarded when the function returns, so the caller never sees the change.
To make the result visible to the caller, you need to pass a pointer to the variable as the parameter:
void controller( int n_var,
                 double *my_input,
                 double *my_output )
{
    *my_output = ....; // like this
}
On the caller side you can then do this:
double a, b;
controller(10, &a, &b);
Hope this helps you
I am writing C using Visual Studio 2019 Community with VisualGDB for an embedded ARM-based project (STM32).
VisualGDB shows that its error reporting uses the default gnu11 standard.
EDIT: I have made this code a little more complete:
typedef int (*CMD_Type)();

int CMD_0() { return 0; }
int CMD_1(float val) { return 1; }
int CMD_2(float val1, float val2) { return 2; }

int DoSomething()
{
    CMD_Type c = CMD_1;
    if (c == CMD_2)
    {
        return c(1, 2);
    }
}
I get red squiggles under the "==" saying that it Cannot apply binary '==' to <anonymous> (*)()> and <anonymous>(*)(int)
I also get red squiggles under two argument that I call c with when I call it with two parameters: function has zero parameters but is called> with two.
This compiles with no errors and works.
My understanding is that even though CMD_Type is typedef'ed as a pointer to a function that returns an integer and takes in no arguments, it is simply a pointer to a function and any arguments just get pushed onto the stack, so this works. So I get why the compiler / IntelliSense is complaining.
Is this ok?
Can I turn off this warning if it compiles anyway?
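For illustration, one way the check can be written so that the diagnostics go away is to cast through the common typedef for the comparison and back to the full prototype for the call. This is only a sketch using the CMD_* names above, not necessarily the right fix for the inherited code:

int DoSomethingChecked(void)
{
    CMD_Type c = (CMD_Type)CMD_2;   /* store through the common type */
    if (c == (CMD_Type)CMD_2)       /* compare through the common type */
    {
        /* cast back to the real prototype before the call */
        return ((int (*)(float, float))c)(1.0f, 2.0f);
    }
    return 0;
}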
FYI: I inherited this code :).
Any help would be appreciated.
Thanks
-Ed
I recently wrote a parser generator tool that takes a BNF grammar (as a string) and a set of actions (as a function pointer array) and outputs a parser (= a state automaton, allocated on the heap). I then use another function to run that parser on my input data and generate an abstract syntax tree.
The initial parser generation involves quite a lot of steps, and I was wondering whether gcc or clang are able to optimize this, given constant inputs to the parser generation function (and never using the pointer values, only dereferencing them). Is it possible to run the function at compile time and embed the result (i.e. the allocated memory) in the executable?
(Obviously, that would require link-time optimization, since the compiler would need to be able to check that the whole function does indeed produce the same result for the same parameters.)
What you could do in this case is have code that generates code.
Have your initial parser generator as a separate piece of code that runs independently. The output of this code would be a header file containing a set of variable definitions initialized to the proper values. You then use this file in your main code.
As an example, suppose you have a program that needs to know the number of bits that are set in a given byte. You could compute this manually whenever you need it:
#include <stdint.h>

int count_bits(uint8_t b)
{
    int count = 0;
    while (b) {
        count += b & 1;
        b >>= 1;
    }
    return count;
}
Or you can generate the table in a separate program:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main()
{
    FILE *header = fopen("bitcount.h", "w");
    if (!header) {
        perror("fopen failed");
        exit(1);
    }
    fprintf(header, "int bit_counts[256] = {\n");
    unsigned v;
    for (v = 0; v < 256; v++) {
        uint8_t b = v;
        int count = 0;            /* reset the count for each value */
        while (b) {
            count += b & 1;
            b >>= 1;
        }
        fprintf(header, "    %d,\n", count);
    }
    fprintf(header, "};\n");
    fclose(header);
    return 0;
}
This creates a file called bitcount.h that looks like this:
int bit_counts[256] = {
0,
1,
1,
2,
...
7,
};
That you can include in your "real" code.
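For example, the consuming code might then just include the generated header and index the table (count_bits_fast is a hypothetical name, only to show the lookup):

#include <stdint.h>
#include "bitcount.h"   /* generated by the program above */

static inline int count_bits_fast(uint8_t b)
{
    return bit_counts[b];
}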
Recently, I tried to write MEX functions using structure variables.
I watched the tutorial but got confused by how the variable values are passed.
The following example (mexfunction_using_ex_wrong.m & mexfunction_using_ex_wrong.cpp) demonstrates how to fetch the variables passed from MATLAB in the MEX function.
However, in this case, the result is:
address i_c1=2067094464 i_c2=2067094464
i_c1=10 i_c2=10
address i_c1=1327990656 i_c2=2067100736
i_c1=2 i_c2=20
address i_c1=2067101056 i_c2=2067063424
i_c1=3 i_c2=30
As can be seen, the address of the 1st element of the c1 array and that of the c2 array of the structure variable are unexpectedly the same.
But in another example (mexfunction_using_ex_correct.m & mexfunction_using_ex_correct.cpp), the elements of array 1 (b1) and array 2 (b2) of the structure variable are unrelated, as I expect.
The result is:
address i_b1=1978456576 i_b2=1326968576
i_b1=1 i_b2=10
address i_b1=1978456584 i_b2=1326968584
i_b1=2 i_b2=20
address i_b1=1978456592 i_b2=1326968592
i_b1=3 i_b2=30
However, the 1st style is more common in programming, so could anybody explain why in the 1st example the addresses of i_c1 & i_c2 are the same?
The following code is mexfunction_using_ex_wrong.m
clc
clear all
close all
mex mexfunction_using_ex_c_wrong.cpp;
a.b(1).c1=double(1);
a.b(2).c1=double(2);
a.b(3).c1=double(3);
a.b(1).c2=double(1);
a.b(2).c2=double(2);
a.b(3).c2=double(3);
mexfunction_using_ex_c_wrong(a);
The following code is mexfunction_using_ex_c_wrong.cpp
#include "mex.h"
void mexFunction(int nlhs,mxArray *plhs[],int nrhs,const mxArray *prhs[])
{
int i, j, k;
double *i_c1;
double *i_c2;
// for struct variables(pointers) inside fcwcontext
mxArray *mx_b, *mx_c1, *mx_c2;
mx_b=mxGetField(prhs[0], 0, "b");
for(i = 0;i < 3;i=i+1)
{
mx_c1=mxGetField(mx_b, i, "c1");
mx_c2=mxGetField(mx_b, i, "c2");
i_c1=mxGetPr(mx_c1);
i_c2=mxGetPr(mx_c2);
*i_c2=(*i_c2)*10;
printf("address i_c1=%d i_c2=%d\n", i_c1, i_c2);
printf(" i_c1=%g i_c2=%g\n", *i_c1, *i_c2);
}
}
The following code is mexfunction_using_ex_c_correct.m
clc
clear all
close all
mex mexfunction_using_ex_correct.cpp;
a.b1(1)=double(1);
a.b1(2)=double(2);
a.b1(3)=double(3);
a.b2(1)=double(1);
a.b2(2)=double(2);
a.b2(3)=double(3);
mexfunction_using_ex_correct(a);
The following code is mexfunction_using_ex_c_correct.cpp
#include "mex.h"
void mexFunction(int nlhs,mxArray *plhs[],int nrhs,const mxArray *prhs[])
{
int i, j, k;
double *i_b1;
double *i_b2;
mxArray *mx_b1, *mx_b2;
mx_b1=mxGetField(prhs[0], 0, "b1");
mx_b2=mxGetField(prhs[0], 0, "b2");
for(i = 0;i < 3;i=i+1)
{
i_b1=mxGetPr(mx_b1);
i_b2=mxGetPr(mx_b2);
i_b2[i]=i_b2[i]*10;
printf("address i_b1=%d i_b2=%d\n", &i_b1[i], &i_b2[i]);
printf(" i_b1=%g i_b2=%g\n", i_b1[i], i_b2[i]);
}
}
The addresses are not "accidentally the same" - they're intentionally the same, due to MATLAB's internal copy-on-write optimisations. If you look at the MEX documentation, you'll see warnings scattered around...
Do not modify any prhs values in your MEX-file. Changing the data in these read-only mxArrays can produce undesired side effects.
...in various forms...
Note Inputs to a MEX-file are constant read-only mxArrays. Do not modify the inputs. Using mxSetCell* or mxSetField* functions to modify the cells or fields of a MATLAB® argument causes unpredictable results.
...trying to make it very clear that you should absolutely not modify anything you receive as an input. By calling mxGetPr() on input data and writing back through that pointer, as you do with i_b2 and i_c2, you're getting right into that "unpredictable results" territory - if you look at a.b(1).c1 in the MATLAB workspace after the call, it'll really be 10 even though you "only" changed c2.
From MEX, you're looking at the raw data storage without any knowledge of, or access to, MATLAB's internal housekeeping, so the only safe way to modify anything is to use the mxCreate* or mxDuplicate* functions to get your own safe arrays you can then do whatever you want with, and pass back to MATLAB via plhs.
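A minimal sketch of that safe pattern (duplicating the input, modifying the copy, and returning it through plhs; field names as in the question):

#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    // work on a deep copy of the input, never on prhs itself
    plhs[0] = mxDuplicateArray(prhs[0]);

    mxArray *mx_b = mxGetField(plhs[0], 0, "b");
    for (int i = 0; i < 3; i++)
    {
        mxArray *mx_c2 = mxGetField(mx_b, i, "c2");
        double *i_c2 = mxGetPr(mx_c2);
        *i_c2 = (*i_c2) * 10;   // safe: this only touches the copy
    }
}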
That said, I will admit to having abused in-place modification for a significant performance gain in one instance where I could guarantee my data was unique and unshared, but it's at best unsupported and at worst downright perilous.
I would like to be able to use my own memory allocation function for certain data structures (real-valued vectors and arrays) in R. The reason for this is that I need my data to be 64-bit aligned and I would like to use the numa library to control which memory node is used (I'm working on compute nodes with four 12-core AMD Opteron 6174 CPUs).
Now I have two functions for allocating and freeing memory: numa_alloc_onnode and numa_free (courtesy of this thread). I'm using R version 3.1.1, so I have access to the function allocVector3 (src/main/memory.c), which seems to me to be the intended way of adding a custom memory allocator. I also found the struct R_allocator in src/include/R_ext.
However, it is not clear to me how to put these pieces together. Let's say that, in R, I want the result res of an evaluation such as
res <- Y - mean(Y)
to be saved in a memory area allocated with my own function; how would I do this? Can I integrate allocVector3 directly at the R level? I assume I have to go through the R-C interface. As far as I know, I cannot just return a pointer to the allocated area, but have to pass the result in as an argument. So in R I call something like
n <- length(Y)
res <- numeric(length=1)
.Call("R_allocate_using_myalloc", n, res)
res <- Y - mean(Y)
and in C
#include <R.h>
#include <Rinternals.h>
#include <numa.h>
SEXP R_allocate_using_myalloc(SEXP R_n, SEXP R_res){
PROTECT(R_n = coerceVector(R_n, INTSXP));
PROTECT(R_res = coerceVector(R_res, REALSXP));
int *restrict n = INTEGER(R_n);
R_allocator_t myAllocator;
myAllocator.mem_alloc = numa_alloc_onnode;
myAllocator.mem_free = numa_free;
myAllocator.res = NULL;
myAllocator.data = ???;
R_res = allocVector3(REALSXP, n, myAllocator);
UNPROTECT(2);
}
Unfortunately I cannot get beyond a variable has incomplete type 'R_allocator_t' compilation error (I had to remove the .data line since I have no clue as to what I should put there). Does any of the above code make sense? Is there an easier way of achieving what I want? It seems a bit odd to have to allocate a small vector in R and then change its location in C just to be able to both control the memory allocation and have the vector available in R...
I'm trying to avoid using Rcpp, as I'm modifying a fairly large package and do not want to convert all the C calls, and I thought that mixing different C interfaces could perform sub-optimally.
Any help is greatly appreciated.
I made some progress in solving my problem and I would like to share it in case anyone else encounters a similar situation. Thanks to Kevin for his comment; I was missing the include statement he mentions. Unfortunately, that was only one among many problems. Here is the R test script, followed by the C code (myAlloc.c) it calls:
dyn.load("myAlloc.so")
size <- 3e9
myBigmat <- .Call("myAllocC", size)
print(object.size(myBigmat), units = "auto")
rm(myBigmat)
#include <R.h>
#include <Rinternals.h>
#include <R_ext/Rallocators.h>
#include <numa.h>
typedef struct allocator_data {
size_t size;
} allocator_data;
void* my_alloc(R_allocator_t *allocator, size_t size) {
((allocator_data*)allocator->data)->size = size;
return (void*) numa_alloc_local(size);
}
void my_free(R_allocator_t *allocator, void * addr) {
size_t size = ((allocator_data*)allocator->data)->size;
numa_free(addr, size);
}
SEXP myAllocC(SEXP a) {
allocator_data* my_allocator_data = malloc(sizeof(allocator_data));
my_allocator_data->size = 0;
R_allocator_t* my_allocator = malloc(sizeof(R_allocator_t));
my_allocator->mem_alloc = &my_alloc;
my_allocator->mem_free = &my_free;
my_allocator->res = NULL;
my_allocator->data = my_allocator_data;
R_xlen_t n = asReal(a);
SEXP result = PROTECT(allocVector3(REALSXP, n, my_allocator));
UNPROTECT(1);
return result;
}
For compiling the C code, I use R CMD SHLIB -std=c99 -L/usr/lib64 -lnuma myAlloc.c. As far as I can tell, this works fine. If anyone has improvements or corrections to offer, I'd be happy to include them.
One requirement from the original question that remains unresolved is the alignment issue. The block of memory returned by numa_alloc_local is correctly aligned, but other fields of the new VECTOR_SEXPREC (eg. the sxpinfo_struct header) push back the start of the data array. Is it somehow possible to align this starting point (the address returned by REAL())?
R has, in memory.c:
main/memory.c
84:#include <R_ext/Rallocators.h> /* for R_allocator_t structure */
so I think you need to include that header as well to get the custom allocator (Rinternals.h merely declares it, without defining the struct or including that header).
I have implemented a facade pattern that uses C functions underneath, and I would like to test it properly.
I do not really have control over these C functions; they are implemented in a header. Right now I use an #ifdef to select the real headers in production and my mock headers in tests. Is there a way in C to exchange the C functions at runtime, by overwriting the C function address or something? I would like to get rid of the #ifdef in my code.
To expand on Bart's answer, consider the following trivial example.
#include <stdio.h>
#include <stdlib.h>
int (*functionPtr)(const char *format, ...);
int myPrintf(const char *fmt, ...)
{
char *tmpFmt = strdup(fmt);
int i;
for (i=0; i<strlen(tmpFmt); i++)
tmpFmt[i] = toupper(tmpFmt[i]);
// notice - we only print an upper case version of the format
// we totally disregard all but the first parameter to the function
printf(tmpFmt);
free(tmpFmt);
}
int main()
{
    functionPtr = printf;
    functionPtr("Hello world! - %d\n", 2013);

    functionPtr = myPrintf;
    functionPtr("Hello world! - %d\n", 2013);

    return 0;
}
Output
Hello World! - 2013
HELLO WORLD! - %D
It is strange that you even need an #ifdef-selected header. The code under test and your mocks should have exactly the same function signatures in order for the mocks to be a correct stand-in for the module under test. The only thing that then changes between a production compilation and a test compilation is which .o files you give to the linker.
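As a sketch of that layout (file and function names are just hypothetical examples): one shared header, two interchangeable implementations, and each build links one object file or the other.

/* sensor.h - the one header both builds share */
#ifndef SENSOR_H
#define SENSOR_H
int read_sensor(void);
#endif

/* sensor.c - production implementation, linked into the release build */
#include "sensor.h"
int read_sensor(void) { return 42; /* talk to the real hardware here */ }

/* mock_sensor.c - test implementation, linked into the test build instead */
#include "sensor.h"
int read_sensor(void) { return 7; /* canned value for the test */ }

The calling code is compiled once against sensor.h and never changes; the test binary links mock_sensor.o while the production binary links sensor.o.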
It is possible with Typemock Isolator++ without creating unnecessary new levels of indirection. It can be done inside the test without altering your production code. Consider the following example:
You have the Sum function in your code:
int Sum(int a, int b)
{
    return a + b;
}
And you want to replace it with Sigma for your test:
int Sigma(int a, int b)
{
    int sum = 0;
    for ( ; 0 < a; a--)
    {
        sum += b;
    }
    return sum;
}
In your test, mock Sum before using it:
WHEN_CALLED: call the method you want to fake.
ANY_VAL: specify the argument values for which the mock will apply; in this case, any two integers.
*DoStaticOrGlobalInstead: The alternative behavior you want for Sum.
In this example we call Sigma instead.
TEST_CLASS(C_Function_Tests)
{
public:
    TEST_METHOD(Exchange_a_C_function_implementation_at_run_time_is_Possible)
    {
        void* context = NULL; // since Sum is global it has no context
        WHEN_CALLED(Sum(ANY_VAL(int), ANY_VAL(int))).DoStaticOrGlobalInstead(Sigma, context);

        Assert::AreEqual(2, Sum(1, 2));
    }
};
*DoStaticOrGlobalInstead
It is possible to set other types of behaviors instead of calling an alternative method. You can throw an exception, return a value, ignore the method etc...
For instance:
TEST_METHOD(Alter_C_Function_Return_Value)
{
    WHEN_CALLED(Sum(ANY_VAL(int), ANY_VAL(int))).Return(10);

    Assert::AreEqual(10, Sum(1, 2));
}
I don't think it's a good idea to overwrite functions at runtime. For one thing, the executable segment may be set read-only, and even if it isn't, you could end up stepping on another function's code if your replacement code is too large.
I think you should create something like a function pointer collection for each set of implementations you want to use. Every time you want to call a function, you call it through the currently selected collection. Having done that, you can also add proxy functions (which simply call through the selected set) to hide the function pointer syntax.
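A minimal sketch of that idea (the interface and all names here are hypothetical): a struct of function pointers holds the selected set, and a thin proxy hides the indirection from callers.

#include <stdio.h>

/* one "collection" of functions, selectable at runtime */
struct io_ops {
    int (*read_sensor)(void);
};

static int real_read_sensor(void) { return 42; }   /* production version */
static int mock_read_sensor(void) { return 7; }    /* test version */

static const struct io_ops real_ops = { real_read_sensor };
static const struct io_ops mock_ops = { mock_read_sensor };

/* the currently selected set; a test swaps this pointer */
static const struct io_ops *ops = &real_ops;

/* proxy that hides the function pointer syntax from callers */
static int read_sensor(void) { return ops->read_sensor(); }

int main(void)
{
    printf("production: %d\n", read_sensor());
    ops = &mock_ops;                       /* switch to the mock set */
    printf("test:       %d\n", read_sensor());
    return 0;
}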