I am trying to make a C program that will read a string, but I only want it to read a very small part of it.
The NMEA telegram that I am trying to read is $WIXDR, and I do receive the necessary strings.
Here are two examples of strings that I get into the CPU:
$WIXDR,C,1.9,C,0,H,83.2,P,0,P,1023.9,H,0*46
$WIXDR,V,0.01,M,0,Z,10,s,0,R,0.8,M,0,V,0.0,M,1,Z,0,s,1,R,0.0,M,1,R,89.9,M,2,R,0.0,M,3*60
If it were only one string (not both C and V), this would not be a problem for me.
The problem here is that they are two separate strings: one with the temperature, and one with rain info.
The only thing that I'm interested in is the value "1.9" from
$WIXDR,C,1.9,C,0......
Here's what I have so far:
void ProcessXDR(char* buffPtr)
{
    char valueBuff[10];
    int result, x;
    float OutSideTemp;
    USHORT uOutSideTemp;
    // char charTemperature, charRain
    IODBerr eCode;

    //Outside Temperature
    result = ReadAsciiVariable(buffPtr, &valueBuff[0], &buffPtr, sizeof(valueBuff));
    sscanf(&valueBuff[0], "%f", &OutSideTemp);
    OutSideTemp *= 10;
    uOutSideTemp = (USHORT)OutSideTemp;
    eCode = IODBWrite(ANALOG_IN, REG_COM_XDR, 1, &uOutSideTemp, NULL);
}
// XDR ...
if (!strcmp(&nmeaHeader[0], "$WIXDR"))
{
    if (PrintoutEnable) printf("XDR\n");
    ProcessXDR(buffPtr);
    Timer[TIMER_XDR] = 1200; // Update every minute
    ComStateXDR = 1;
    eCode = IODBWrite(DISCRETE_IN, REG_COM_STATE_XDR, 1, &ComStateXDR, NULL);
}
There's more, but this is the main part that I have.
I have found the answer to my own question. The code that does what I intended is as follows.
What my little code does is look for the letter C, and if the C is found, it takes the value after it and puts it into "OutSideTemp". The reason I had to look for C is that a similar string is also received with the letter V (rain).
If anyone has any input on a way it could be better, I don't mind, but this little piece does what I need it to do.
Here are two example telegrams I receive (I wanted the value 3.0 to be put into "OutSideTemp"):
$WIXDR,C,3.0,C,0,H,59.2,P,0,P,1026.9,H,0*4F
$WIXDR,V,0.00,M,0,Z,0,s,0,R,0.0,M,0,V,0.0,M,1,Z,0,s,1,R,0.0,M,1,R,89.9,M,2,R,0.0,M,3*58
void ProcessXDR(char* buffPtr)
{
    char valueBuff[10];
    int result, x;
    float OutSideTemp;
    USHORT uOutSideTemp;
    // char charTemperature, charRain
    IODBerr eCode;

    // Look for "C"
    result = ReadAsciiVariable(buffPtr, &valueBuff[0], &buffPtr, sizeof(valueBuff));
    // sscanf(&valueBuff[0],"%f",&charTemperature);
    if (valueBuff[0] == 'C')
    {
        // Outside temperature: read the value that follows the "C" field
        result = ReadAsciiVariable(buffPtr, &valueBuff[0], &buffPtr, sizeof(valueBuff));
        sscanf(&valueBuff[0], "%f", &OutSideTemp);
        OutSideTemp *= 10;
        uOutSideTemp = (USHORT)OutSideTemp;
        eCode = IODBWrite(ANALOG_IN, REG_COM_XDR, 1, &uOutSideTemp, NULL);
    }
}
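Since input on better approaches was welcomed above, here is one possible alternative that uses only the standard library instead of the ReadAsciiVariable helper. This is only a sketch: GetXdrTemperature is a made-up name and the 128-byte buffer is an assumption about the maximum sentence length.

#include <stdio.h>
#include <string.h>

/* Sketch: scan a $WIXDR sentence for the first "C" field and return the
   value that follows it (the temperature). Returns 0 on success.
   The sentence is copied first because strtok modifies its argument. */
static int GetXdrTemperature(const char *sentence, float *outTemp)
{
    char copy[128];
    char *field;

    strncpy(copy, sentence, sizeof(copy) - 1);
    copy[sizeof(copy) - 1] = '\0';

    field = strtok(copy, ",");
    while (field != NULL)
    {
        if (strcmp(field, "C") == 0)
        {
            field = strtok(NULL, ",");   /* the value right after "C" */
            return (field != NULL && sscanf(field, "%f", outTemp) == 1) ? 0 : -1;
        }
        field = strtok(NULL, ",");
    }
    return -1;                           /* no temperature field in this sentence */
}

Called with the first example telegram it would put 1.9 into outTemp; called with the rain telegram it would return -1 so nothing gets written.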
I am using the legacy_code tool in MATLAB to generate some S-Functions, and then I want the S-Functions to be analyzed by the Simulink Coverage toolbox.
I am also asking here because this may be a C issue rather than something MATLAB-related.
I am setting the flag to use the coverage toolbox to true when generating the S-Functions with the legacy_code tool:
def.Options.supportCoverage = true;
But I get the following error at compilation. I am using the MinGW 64-bit compiler for MATLAB on Windows:
"lib_control.c", line 254: error: bad expr node kind (b:\matlab\polyspace\src\shared\cxx_front_end_kernel\edg\src\cp_gen_be.c, line 14084)
Warning: File "lib_control.c" not instrumented for coverage because of previous error
In codeinstrum.internal.LCInstrumenter/instrumentAllFiles
In codeinstrum.internal.SFcnInstrumenter/instrument
In slcovmexImpl
In slcovmex (line 48)
In legacycode.LCT/compile
In legacycode.LCT.legacyCodeImpl
In legacy_code (line 101)
In generate_sfun (line 70)
In the C code I have the following kind of function:
void controller( int n_var,
                 double my_input,
                 double my_output )
{
    double my_var[n_var];

    for ( int i=0; i<n_var; i++ )
    {
        my_output = my_input + my_var[i];
    }
}
The compiler is complaining about this line:
double my_var[n_var];
Do I have to do something to declare variables like this so that they can be included in the coverage analysis?
Is this error from MATLAB, or is it a C error from the instrumentation of the files?
If I compile without the coverage flag there are no problems, and the S-Function is generated without warnings.
It seems your code won't work as written, for two reasons.
First, try declaring my_var like this (this needs #include <stdlib.h> and <string.h>, and a matching free() once you are done with the buffer):
double *my_var = malloc(n_var * sizeof(double));
memset(my_var, 0, n_var * sizeof(double));
This is the usual way to allocate memory whose size depends on a function parameter.
There is also an issue with this line:
my_output = my_input + my_var[i];
Once the parameters are pointers (see below), the correct version is:
*my_output = *my_input + my_var[i];
As written, you are only changing the value of a parameter, and a parameter is just a local copy.
In C, arguments are passed by value, so the copy goes away when the function returns,
and the caller never sees any change made to it.
To return a result through a parameter, you need to pass a pointer to the variable instead:
void controller( int n_var,
                 double *my_input,
                 double *my_output )
{
    *my_output = ....; // like this
}
and on the caller side, you can do it like this:
double a, b;
controller(10, &a, &b);
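Putting the two suggestions together, the whole function might look like the sketch below. It is only a sketch: the loop body is kept exactly as in the question, and the malloc/free pair replaces the variable-length array.

#include <stdlib.h>
#include <string.h>

/* Sketch: heap allocation instead of the VLA, and the result returned
   through a pointer parameter so the caller can see it. */
void controller( int n_var,
                 double *my_input,
                 double *my_output )
{
    double *my_var = malloc(n_var * sizeof(double));
    if (my_var == NULL)
        return;                                   /* allocation failed */
    memset(my_var, 0, n_var * sizeof(double));    /* zero-initialize the buffer */

    for ( int i = 0; i < n_var; i++ )
    {
        *my_output = *my_input + my_var[i];
    }

    free(my_var);                                 /* avoid leaking on every call */
}

Note that in the caller snippet above, a should be assigned a value before the call, since it is read through my_input.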
Hope this helps you
I am new to C++ and am writing a C++/CLI wrapper to consume managed code (a DLL):
public: array<double, 3>^ CallingBrownianManagedCode(int numPaths, int dimension, double time, int seed, array<double, 2>^ covMatrix, double stepSize, int scrambling)
{
    std::string str = "Sobol";
    String^ newSystemString = gcnew String(str.c_str());
    Object^ result;
    result = IDevModel::RandomNumberGenerator::iBrownianBridge(numPaths, dimension, time, seed, covMatrix, stepSize, newSystemString, scrambling);
    array<double, 3>^ devArray = (array<double, 3>^) result;
    return devArray;
}
I've tested the above code using a C++/CLI console app, and it works as desired. Now I need to expose this method to unmanaged C++:
__declspec(dllexport) void CallBrownianManagedAdapter()
{
    BrownianWrapper::BrownianBridgeAdapter objectBrownian;
    array<double, 3>^ brownianArray = objectBrownian.CallingBrownianManagedCodeTemp();
    return brownianArray;
}
Instead of void it has to return a 3D array. I tried returning a managed array^, but it throws an error.
Can you please help me? I appreciate it.
Original code comment specifying the core question:
The error I am getting is while iterating through the while loop:
memory out of range or something... resizing to 300 ... "Access violation writing location" ... that's the exact phrase...
I'm trying to implement a faster .NET List<T> equivalent in C.
I'm using blittable data types in C#.
In the code below I've moved a function body into main just for testing, after I failed to understand where I was going wrong.
The problem seems to be that inside the while loop UntArr does not increment.
What am I doing wrong?
typedef struct {
    int Id;
    char *StrVal;
} Unit; // a data unit as element of an array

unsigned int startTimer(unsigned int start);
unsigned int stopTimer(unsigned int start);

int main(){
    Unit *UntArr = {NULL};
    //Unit test[30000];
    //decelerations comes first..
    char *dummyStringDataObject;
    int adummyNum, requestedMasterArrLength, requestedStringSize, MasterDataArrObjectMemorySize, elmsz;
    int TestsTotalRounds, TestRoundsCounter, ccountr;
    unsigned int start, stop, mar;

    //Data Settings (manually for now)
    requestedMasterArrLength = 300;
    requestedStringSize = 15;
    //timings
    start = 0; stop = 0;
    //data sizes varies (x86/x64) compilation according to fastest results
    MasterDataArrObjectMemorySize = sizeof(UntArr);
    elmsz = sizeof(UntArr[0]);
    TestRoundsCounter = -1;

    start = startTimer(start);
    while (++TestRoundsCounter < requestedMasterArrLength){
        int count;
        count = -1;
        //allocate memory for the "Master Arr"
        UntArr = (Unit *)malloc(sizeof(Unit) * requestedMasterArrLength);
        dummyStringDataObject = (char*)malloc(sizeof(char) * requestedStringSize);
        dummyStringDataObject = "abcdefgHijkLmNo";
        while (++count < requestedMasterArrLength)
        {
            dummyStringDataObject[requestedStringSize-1] = count + '0';
            puts(dummyStringDataObject);
            ccountr = -1;
            // tried
            UntArr[count].Id = count;
            UntArr[count].StrVal = (char*)malloc(sizeof(char) * requestedStringSize);
            UntArr[count].StrVal = dummyStringDataObject; // as a whole
            //while(++ccountr<15) // one by one cause a whole won't work ?
            //    UntArr[count].StrVal[ccountr] = dummyStringDataObject[ccountr];
        }
        free(UntArr); free(dummyStringDataObject);
    }
    stop = startTimer(start);
    mar = stop - start;

    MasterDataArrObjectMemorySize = sizeof(UntArr)/1024;
    printf("Time taken in millisecond: %d ( %d sec)\r\n size: %d kb\r\n", mar, (mar/1000), MasterDataArrObjectMemorySize);
    printf("UntArr.StrVal: %s", UntArr[7].StrVal);
    getchar();
    return 0;
}

unsigned int startTimer(unsigned int start){
    start = clock();
    return start;
}

unsigned int stopTimer(unsigned int start){
    start = clock() - start;
    return start;
}
Testing the code step by step, instead of inside the while loop, works as expected:
//allocate memory for the "Master Arr"
UntArr = (Unit *)malloc(sizeof(Unit)*requestedMasterArrLength);
UntArr[0].Id = 0;
dummyStringDataObject = (char*)malloc(sizeof(char)*requestedStringSize);
dummyStringDataObject = "abcdefgHijkLmNo";
////allocate memory for the string object
UntArr[0].StrVal = (char*)malloc(sizeof(char)*requestedStringSize);
////test string manipulation
adummyNum=5;
UntArr[0].StrVal= dummyStringDataObject;
//
UntArr[0].StrVal[14] = adummyNum+'0';
////test is fine
As it happens, and as I am new to pointers, I had not realized that when debugging
I would not see the elements of a pointer to an array the way I am used to
with a normal Array[]. Inside the while loop I was hovering over the pointer, expecting to see the elements as in a normal array
(Data[] DataArr = new Data[1000] in C#), and I did not realize that it is not an actual array but a pointer to one, so the debugger does not show the elements/body.
The solution is via a function now, as originally planned:
void dodata(int requestedMasterArrLength, int requestedStringSize){
    int ccountr, count;
    count = 0;
    UntArr = NULL;
    UntArr = (Unit *)malloc(sizeof(Unit) * requestedMasterArrLength);
    while (count != requestedMasterArrLength)
    {
        // note: dummyStringDataObject must hold at least requestedStringSize characters
        char dummyStringDataObject[] = "abcdefgHi";
        UntArr[count].StrVal = NULL;
        dummyStringDataObject[requestedStringSize-1] = count + '0';
        UntArr[count].Id = count;
        ccountr = 0;
        UntArr[count].StrVal = (char*)malloc(sizeof(char) * requestedStringSize);
        while (ccountr != requestedStringSize){
            UntArr[count].StrVal[ccountr] = dummyStringDataObject[ccountr];
            ++ccountr;
        }
        ++count;
    }
}
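One possible further improvement: the inner character-copy loop can be replaced by a small helper that allocates and copies in one place. This is only a sketch; CopyString is a made-up name, and it assumes the source string is NUL-terminated.

#include <stdlib.h>
#include <string.h>

/* Sketch: return a heap-allocated copy of src, truncated to maxLen characters.
   Copying the characters (rather than assigning the pointer) means every
   element owns its own buffer. */
static char *CopyString(const char *src, int maxLen)
{
    char *dst = (char *)malloc(maxLen + 1);   /* +1 for the terminating NUL */
    if (dst != NULL)
    {
        strncpy(dst, src, maxLen);
        dst[maxLen] = '\0';
    }
    return dst;
}

/* usage inside the loop:
   UntArr[count].StrVal = CopyString(dummyStringDataObject, requestedStringSize); */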
Generally speaking, x86 compilation gives better performance for this particular task, populating an array of structs.
So I have also compiled it in C++ and C#.
Executing similar code in C#, C++ and C:
minimum time measured in C# ~ 3,100 ms
minimum time measured in this C code ~ 1,700 ms
minimum time measured in C++ ~ 900 ms
I was surprised to see this last result: C++ is the winner, but why?
I thought C is closer to the system level (CPU, memory...).
I am writing a small oscillatory motion program that should run on Ubuntu and Windows.
After completing a major part of the program I tried to test it on Windows, and it works fine (working with Pelles C).
Then I copied my data to the Ubuntu computer, running in a virtual machine (VMware Workstation).
It compiled fine using GCC, but it crashes with a "Segmentation fault (core dumped)" error.
Output before crashing:
Simulation Starting...
Creating Containers.....
Done!
Initializing Containers.....
Initializing Container
Done!...
Initializing Container
Done!...
Initializing Container
Segmentation fault (core dumped)
--------------------------------
Part of zDA.c, the function required to initialize the vectors before they are used in the simulation:
int InitializeArray (DATA *item) {
    printf("\nInitializing Container");
    item->num_allocated = 0;
    item->num_elements = 0;
    item->the_array = NULL;
    printf("\nDone!...");
    if (!item->the_array) {
        return -1;
    }
    return 0;
}
And here is where the function is called, a snippet from zSim.c:
int Simulate(SIMOPT so, PRINTOPT po) {
    printf("\nSimulation Starting...\nCreating Containers...");
    //Create Data Objects
    //vectors
    DATA Theta, Omega, T;
    DATA *pTheta = &Theta;
    DATA *pOmega = &Omega;
    DATA *pT = &T;
    //Initial Values
    int method = so.method;
    float g = so.g;
    float l = so.l;
    float itheta = so.theta;
    float iomega = so.omega;
    float dt = so.dt;
    float df = so.df;
    float dw = so.dw;
    float q = so.q;
    float maxtime = so.maxtime;
    //backend variables
    float i = 0; //Simulation Counter
    int k = 0; //Counter to count array size
    int kmax = 0;
    float th, thi, om, omi, t, ti; //Simulation variables
    int gt, go, pl, mat;
    printf("..");
    printf("\nDone!\nInitializing Containers...");
    printf("..");
    //Initialize Containers
    InitializeArray(pTheta);
    InitializeArray(pOmega);
    InitializeArray(pT);   //**FOR SOME REASON, it stops working here -_-
    printf("DONE! NIT");   //It worked fine on windows, there are no dependencies.
    th = pTheta->the_array[0];
    om = pOmega->the_array[0];
    t = pT->the_array[0];
I don't get why it worked on Windows but not on Ubuntu; either the Pelles C compiler fixed something for me, or my virtual machine is going crazy.
I mean, it already initialized 2 out of 3 arrays, so what's wrong with the third? :)
The initialization looks like it would not crash. But the code immediately following it looks suspect. The array has not been initialized (well ... it is initialized, but it is initialized to NULL), yet the following code accesses it:
th = pTheta->the_array[0];
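If the simulation expects the_array[0] to be readable right after initialization, the initializer has to actually allocate some storage first. Below is a minimal sketch that mirrors the existing function; the extra capacity parameter and the float element type are assumptions, since only part of the DATA structure is shown above.

#include <stdio.h>
#include <stdlib.h>

/* Sketch: allocate an initial, zeroed block so that the_array[0] is valid. */
int InitializeArray (DATA *item, int initial_capacity) {
    printf("\nInitializing Container");
    item->the_array = calloc(initial_capacity, sizeof *item->the_array);
    if (item->the_array == NULL) {
        item->num_allocated = 0;
        item->num_elements = 0;
        return -1;                        /* allocation failed */
    }
    item->num_allocated = initial_capacity;
    item->num_elements = 0;               /* nothing stored yet */
    printf("\nDone!...");
    return 0;
}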