How to fill an array with a function inline - arrays

Does .NET 5 have something like Array.Fill(myArray, 1), but taking a function?
I mean Array.Fill(myArray, myRandom.Next(10)) or another function call. This code actually compiles, but it evaluates the function once and fills the whole array with that single result.
My goal can be achieved with myArray = Array.ConvertAll(Of Integer, Integer)(myArray, Function(i) myRandom.Next(10)), but it looks a little strange and also wastes CPU time on a redundant array allocation.
Of course, it can simply be done with a loop, but an inline call fits the style of the current project better.
The samples are in VB.NET, but of course the question is about .NET, not any particular programming language.

Related

Code quality question about handling multiple functions with same signature in C

My program answers incoming messages and runs some logic based on the IDs and data included in the messages.
I have a different function for each ID.
The project is pure C.
To make the code easy to work with I have adjusted all functions to the same style (same return type and parameters).
I also want to avoid long switch-case constructions and make the code easier to edit later, so I have created the following function:
AnswerStruct IDHandler(Request Message)
{
    AnswerStruct ANS;

    /* look up the handler registered for this message ID and dispatch to it */
    SIDHandler = IDfunctions[Message.ID];
    ANS = SIDHandler(Message);
    return ANS;
}
AnswerStruct is the struct for answer messages.
Request is the struct for incoming messages.
IDfunctions is an array of pointers to functions, which looks like this:
/* forward declarations of the per-ID handlers */
AnswerStruct func1(Request);
AnswerStruct func4(Request);
...
typedef AnswerStruct (*f)(Request);
AnswerStruct (*SIDHandler)(Request);
/* dispatch table: slot N holds the handler for message ID N (0 = no handler) */
static f IDfunctions[IDMax] = {0, func1, 0, 0, func4, ...};
Function pointers are placed in the array cells matching their IDs, for example:
func1 related to message with ID=1.
func4 related to message with ID=4.
I think that using this array makes my life much easier.
I can call the function I need in one step (just go to IDfunctions[ID]).
Also, adding a new function becomes a two-step operation (add the function to IDfunctions and write its logic).
I doubt the efficiency of the selected solution; it seems clunky to me.
The question is - Is this a good architecture?
If not, how can I change my solution to make it better?
Thanks.
I doubt the efficiency of the selected solution, it seems clunky to me.
It can be less efficient to call a function via a function pointer than to call it directly by name, because the former denies the compiler any opportunity to optimize the call. But you have to consider whether that actually matters. In a system that dispatches function calls based on messages received from an external source, the I/O involved in receiving the messages is likely to be much more expensive than the indirect function calls, so the difference in call performance is unlikely to be significant.
On the other hand, your approach affords simpler logic and many fewer lines of code, which is a different and potentially more valuable kind of efficiency.
The question is - Is this a good architecture?
The general approach is perfectly good, and I don't see much to complain about in the implementation sketch provided.
Personally, I would declare the array IDfunctions to be const (supposing, of course, that you don't intend to replace any of its members after their initialization), but that's a minor safety / performance detail, where again the performance dimension is probably irrelevant.
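For illustration, here is a rough sketch of how those suggestions might look on top of the question's declarations (the bounds check and defaultHandler are assumptions added here, not part of the original code):

#include <stddef.h>

/* hypothetical fallback for message IDs with no registered handler */
static AnswerStruct defaultHandler(Request Message)
{
    AnswerStruct empty = {0};
    (void)Message;              /* the request is ignored in this sketch */
    return empty;
}

/* const: the dispatch table is fixed after initialization */
static const f IDfunctions[IDMax] = {0, func1, 0, 0, func4 /* , ... */};

AnswerStruct IDHandler(Request Message)
{
    /* guard against out-of-range IDs and empty slots before dispatching */
    if (Message.ID >= IDMax || IDfunctions[Message.ID] == NULL)
        return defaultHandler(Message);

    return IDfunctions[Message.ID](Message);
}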

How can I parametrize a callback function that I submit to an external library

Say I have an external library that computes the optima, say minima, of a given function. Say its headers give me a function
double *minimizer(ObjFun f)
where the headers define
typedef double (*ObjFun)(double x[]);
and "minimizer" returns the minima of the function f of, say, a two dimensional vector x.
Now, I want to use this to minimize a parameterized function. I don't know how to express this in code exactly, but say if I am minimizing quadratic forms (just a silly example, I know these have closed form minima)
double quadraticForm(double x[]) {
    return x[0]*x[0]*q11 + 2*x[0]*x[1]*q12 + x[1]*x[1]*q22;
}
which is parameterized by the constants (q11, q12, q22). I want to write code where the user can input (q11, q12, q22) at runtime, I can generate a function to give to the library as a callback, and return the optima.
What is the recommended way to do this in C?
I am rusty with C, so I am asking about both feasibility and best practices. Really I am trying to solve this with C/Cython code. I have been using Python bindings to the library so far, and with "inner functions" it was obvious how to do this in Python:
def getFunction(q11, q12, q22):
    def f(x):
        return x[0]*x[0]*q11 + 2*x[0]*x[1]*q12 + x[1]*x[1]*q22
    return f

# now submit getFunction(...) with the user's params to the library
I am trying to figure out the C construct so that I can be better informed in creating a Cython equivalent.
The header defines the prototype of a function which can be used as a callback. I am assuming that you can't/won't change that header.
If your function takes more parameters, there is nothing in the callback invocation that can fill them.
Your function therefore cannot be called as the callback without risking undefined behaviour or bogus parameter values.
So it cannot be given as the callback; not with additional parameters.
This means you need to drop the idea of "parameterizing" your function.
Your actual goal is to somehow allow the constants/coefficients to be changed during runtime.
Find a different way of doing that. Think of "dynamic configuration" instead of "parameterizing".
I.e. the function does not always expect those values at each call. It just has access to them.
(This suggests the configuration values are less often changed than the function is called, but does not require it.)
How:
I can only think of one simple way, and it is pretty ugly and vulnerable (e.g. to race conditions, concurrent access, reentrancy; you name it, it will hurt you ...):
Introduce a set of global variables, or better one struct-variable, for readability. (See recommendation below for "file-global" instead of "global".)
Set them at runtime to the desired values, using a separate function.
Initialise them to meaningful defaults, in case they never get written.
Read them at the start of the minimizing callback function.
Recommendation: Have everything (the minimizing function, the configuration variable and the function which sets the configuration at runtime) in one code file and make the configuration variable(s) static (i.e. restrict access to that one code file).
Note:
The answer is really only the analysis of why you should not try to add parameters.
The proposed method is not considered part of the answer; it is simpler than it is good.
I invite more holistic answers, which propose safer implementation.
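A minimal sketch of the file-static configuration approach described above (the struct, defaults and function names are illustrative, not taken from the library's header):

/* objective.c -- "dynamic configuration" instead of parameterizing the callback */

typedef struct {
    double q11, q12, q22;
} QuadConfig;

/* file-static configuration, initialised to meaningful defaults */
static QuadConfig config = {1.0, 0.0, 1.0};

/* separate setter, called at runtime before the minimizer runs */
void setQuadConfig(double q11, double q12, double q22)
{
    config.q11 = q11;
    config.q12 = q12;
    config.q22 = q22;
}

/* still matches the library's ObjFun signature, so it can be passed as the callback */
double quadraticForm(double x[])
{
    return x[0]*x[0]*config.q11
         + 2*x[0]*x[1]*config.q12
         + x[1]*x[1]*config.q22;
}

Because the coefficients live in the file-static variable rather than in the parameter list, quadraticForm keeps exactly the signature the library expects.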

Minimizing an array-returning function using "fminunc"

I am using MATLAB to build a code that does automatic tuning of the three PID controller gains. The way I am thinking of it, is to minimize the error (the difference between the desired state and the obtained one) of my system, for that, I coded a function that accepts the PID gains as input parameters and returns the calculated error, namely:
errors_vector = closedLoopSimulation(pidGains)
Since I have three set points (input commands), the dimension of the output errors_vector is 3*N, where N is the number of time samples I have (1000 in my case). So that is the function I want to minimize, and to do so I tried using the fminunc command, namely:
pidGains_ini = [2.4 0.1 0.4];
func = @closedLoopSimulation;
[pid, fval] = fminunc(func, pidGains_ini)
However, when I run the last piece of code, I get this error:
User supplied objective function must return a scalar value.
which is clearly due to the fact that errors_vector is a 3*1000 array and not a scalar.
My questions would be, from the programming point of view, is there a way that I can make fminunc minimize functions that return arrays?
On the other hand, and from the Control Theory point of view, is there another way which I can optimize the PID gains automatically?
I hope I made myself clear enough.
Thanks
Minimizing a vector is not very well defined (there is something called multi-objective or multi-criteria optimization but that is somewhat specialized). "Normal" optimization methods can only minimize (or maximize) scalar objectives. I suspect in your case you could form such an objective by taking the sum of the squared errors and minimize that. To be complete: this is standard operating procedure and is often called "least squares".
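To make that concrete (the notation J and g is introduced here, not in the question): the wrapper handed to fminunc would return the scalar

J(g) = \sum_{i=1}^{3} \sum_{t=1}^{N} e_{i,t}(g)^2

where e_{i,t}(g) is the entry of errors_vector for setpoint i at time sample t given the PID gains g, and fminunc then minimizes J(g) instead of the 3*N array.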

SIMULINK Holding Previous Value of a Signal

I am trying to implement a pulse generator in SIMULINK that needs to know the previous 2 input values, i.e. the previous 2 state values of the input signal. I also need to know the previous output value.
My pseudo code is:
IF !input AND input_prevValue AND !input_prevValue2
    output = !output_pv
ELSE
    output = output_pv
I know that I can use the legacy function importer and C code to do this job in SIMULINK. However, a problem arises when a configuration reference set is applied to the model. The key issue is flexibility: when you use this model somewhere else (say, share it with a colleague), you can rebuild the code (i.e. from the S-Function block) and run your model as long as no configuration reference set is used, but you cannot rebuild the code if a configuration reference set is applied.
My solution would be to implement the logic in a way that achieves the same thing without C functions. I tried to use the Memory block in SIMULINK but apparently it doesn't do it. Does anyone know how to hold previous values of the input and output in SIMULINK (for as long as the model is open)?
Have you tried with a MATLAB Function block? Alternatively, if you have a Stateflow license, this would lend itself nicely to a state chart.
EDIT
Based on your pseudo-code, I would expect the code in the MATLAB Function block to look like this
function op = logic_fcn(ip,ip_prev,ip_prev2,op_prev)
% #codegen
if ~ip && ip_prev && ~ip_prev2
    op = ~op_prev;
else
    op = op_prev;
end
where ip, ip_prev, ip_prev2 and op_prev are defined as boolean inputs and op as a boolean output. If you are using a fixed-step discrete solver, the memory block should work so that you would for example feed the output of the MATLAB Function block to a memory block (with the correct sample time), and the output of the memory block to the op_prev input of the MATLAB Function block.
You could (and should) test your function in MATLAB first (and/or a test Simulink model) to make sure it works and produces the output you expect for a given input.
This is reasonably straightforward to do with fundamental blocks.
Note that for the Switch block the "Criteria for passing first input:" has been changed to "u2~=0".

efficient sort with custom comparison, but no callback function

I have a need for an efficient sort that doesn't have a callback, but is as customizable as using qsort(). What I want is for it to work like an iterator, where it continuously calls into the sort API in a loop until it is done, doing the comparison in the loop rather than off in a callback function. This way the custom comparison is local to the calling function (and therefore has access to local variables, is potentially more efficient, etc.). I have implemented this for an inefficient selection sort, but need it to be efficient, so I would prefer a quicksort derivative.
Has anyone done anything like this? I tried to do it for quick sort, but trying to turn the algorithm inside out hurt my brain too much.
Below is how it might look in use.
// the array of data we are sorting
MyData array[5000], *firstP, *secondP;
// (assume data is filled in)
Sorter sorter;
// initialize sorter
int result = sortInit(&sorter, array, 5000,
                      (void **)&firstP, (void **)&secondP, sizeof(MyData));
// loop until complete
while (sortIteration(&sorter, result) == 0) {
    // here's where we do the custom comparison...here we
    // just sort by member "value" but we could do anything
    result = firstP->value - secondP->value;
}
Turning the sort function inside out as you propose isn't likely to make it faster. You're trading indirection on the comparison function for indirection on the item pointers.
It appears you want your comparison function to have access to state information. The quick-n-dirty way is to create global variables or a global structure, assuming you don't have more than one thread going at once. The qsort function won't return until all the data is sorted, so in a single-threaded environment this should be safe.
The only other thing I would suggest is to locate a source to qsort and modify it to take an extra parameter, a pointer to your state structure. You can then pass this pointer into your comparison function.
Take an existing implementation of qsort and update it to reference the Sorter object for its local variables. Instead of calling a compare function passed in, it would update its state and return to the caller.
Because of recursion in qsort, you'll need to keep some sort of a state stack in your Sorter object. You could accomplish that with an array or a linked-list using dynamic allocation (less efficient). Since most qsort implementations use tail recursion for the larger half and make a recursive call to qsort for the smaller half of the pivot point, you can sort at least 2n elements if your array can hold n states.
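As a rough sketch of that last suggestion (SortState, cmpMyData and qsort_ctx are names invented here; qsort_ctx stands for a locally modified copy of qsort that threads the extra pointer through, not a standard function):

/* state available to the comparison without resorting to globals */
typedef struct {
    int descending;    /* flip the sort order? */
} SortState;

/* comparison that receives the extra context pointer */
static int cmpMyData(const void *a, const void *b, void *ctx)
{
    const SortState *st = ctx;
    const MyData *x = a, *y = b;
    int r = x->value - y->value;   /* compare on "value", as in the question */
    return st->descending ? -r : r;
}

/* the modified qsort would then be called roughly as:
 *     SortState state = {0};
 *     qsort_ctx(array, 5000, sizeof(MyData), cmpMyData, &state);
 */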
A simple solution is to use an inlinable sort function and an inlinable compare callback. When compiled with optimisation, both calls get flattened into each other exactly like you want. The only downside is that your choice of sort algorithm is limited, because if you recurse or allocate more memory you potentially lose any benefit from doing this. Methods with small overhead, like this one, work best with small data sets.
You can also use a generic sort function that takes a compare method, size, offset and stride. This way the custom comparison is specified by parameters rather than a callback, and you can use any algorithm. Just use some macros to fill in the most common cases, because you will end up with a lot of function arguments.
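A compact sketch of that parameter-driven idea (sort_by_int_key is a made-up name; it orders elements by an int member found at a given byte offset, so no callback is involved):

#include <stddef.h>
#include <string.h>

/* insertion sort over `count` elements of `elem_size` bytes, ordered by an
   int key located `key_offset` bytes into each element */
static void sort_by_int_key(void *base, size_t count, size_t elem_size,
                            size_t key_offset)
{
    char *p = base;
    for (size_t i = 1; i < count; i++) {
        for (size_t j = i; j > 0; j--) {
            int a, b;
            memcpy(&a, p + j * elem_size + key_offset, sizeof a);
            memcpy(&b, p + (j - 1) * elem_size + key_offset, sizeof b);
            if (a >= b)
                break;
            /* keys out of order: swap the whole elements */
            for (size_t k = 0; k < elem_size; k++) {
                char t = p[j * elem_size + k];
                p[j * elem_size + k] = p[(j - 1) * elem_size + k];
                p[(j - 1) * elem_size + k] = t;
            }
        }
    }
}

/* usage on the question's array, keyed on the "value" member:
       sort_by_int_key(array, 5000, sizeof(MyData), offsetof(MyData, value)); */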
Also, check out the STB library (https://github.com/nothings/stb).
It has a sorting function similar to this, among many other useful C tools.
What you're asking for has already been done -- it's called std::sort, and it's already in the C++ standard library. Better support for this (among many other things) is part of why well-written C++ is generally faster than C.
You could write a preprocessor macro to output a sort routine, and have the macro take a comparison expression as an argument.
#define GENERATE_SORT(name, type, comparison_expression) \
void name(type* begin, type* end) \
{ /* ... when needed, fill a and b and use comparison_expression */ }
GENERATE_SORT(sort_ints, int, (*a < *b))
void foo()
{
    int array[10];
    sort_ints(array, array + 10);
}
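One way the elided macro body might be filled in (an insertion sort, chosen only to keep the sketch short; this is not the original answer's implementation):

#define GENERATE_SORT(name, type, comparison_expression)                   \
    void name(type* begin, type* end)                                      \
    {                                                                      \
        /* insertion sort; a and b are the elements the expression sees */ \
        for (type* i = begin + 1; i < end; ++i) {                          \
            for (type* j = i; j > begin; --j) {                            \
                type *a = j, *b = j - 1;                                   \
                if (!(comparison_expression))                              \
                    break;                                                 \
                type tmp = *a;                                             \
                *a = *b;                                                   \
                *b = tmp;                                                  \
            }                                                              \
        }                                                                  \
    }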
Two points.
I). _asm
II). Basic design limits of compilers.
Compilers have, as a basic design goal, the aim of keeping you away from assembler and machine code. They achieve this by imposing certain limits. In this case, we give up a flexibility that we could easily exploit in assembly code: split the generated code of the sort into two pieces at the call to the compare function, copy the first half somewhere, copy the generated code of the compare function just after it, and then copy the last half of the sort code. Finally, we would have to deal with a whole series of minor details. See also the concept of "hot patching" running programs.
