Are indexes easier to vectorize than pointers? - c

Is there any example (e.g. on https://godbolt.org/ ) where Clang generates worse code when an algorithm is expressed via pointer iteration instead of array indexing? E.g. it can vectorize/unroll the loop in one case but can't in the other?
In simple examples apparently it doesn't matter. Here is a pointer iteration style:
while (len-- > 0) {
*dst++ = *src++;
}
Here is the logically same code in index style:
while (idx != len) {
dst[idx] = src[idx];
idx++;
}
Disregard any UB and/or off-by-one errors here.
Edit: the argument about indices being sugar is irrelevant, as desugaring doesn't change the style of the algorithm. So the following pointer-based code is still in the index style:
while (idx != len) {
*(dst + idx) = *(src + idx);
idx++;
}
Note that the index-based loop has only 1 changing variable, while the pointer-based loop has 2, and the compiler must infer that they always change together.
You should look at this in the context of https://en.wikipedia.org/wiki/Induction_variable and https://en.wikipedia.org/wiki/Strength_reduction. Pointer style is essentially strength-reduced index-style, as addition is replaced by increments. And this reduction was beneficial for performance for some time, but no longer.
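To make that concrete, here is a sketch (my own, using float arrays purely for illustration) of an index loop and the strength-reduced pointer loop a compiler may derive from it:
#include <stddef.h>

/* Index form: a single induction variable idx; each access computes
   base + idx * sizeof(float). */
void copy_idx(float *dst, const float *src, size_t len) {
    for (size_t idx = 0; idx != len; idx++)
        dst[idx] = src[idx];
}

/* Strength-reduced form: the scaled addition is replaced by pointer
   increments, at the cost of two induction variables. */
void copy_ptr(float *dst, const float *src, size_t len) {
    for (const float *end = src + len; src != end; src++, dst++)
        *dst = *src;
}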
So my question boils down to whether there are situations when this strength reduction cannot be performed, or reversed, by a compiler.
Another possible case is when the indexes are not induction variables, so the corresponding pointer code includes "arbitrary jumps" and the loop is somehow harder to transform because of the "history" of past iterations.

As long as no overloaded operator [] is involved, a subscript expression is literally defined to be identical to pointer arithmetic followed by dereferencing the result [expr.sub]/1. Thus, as long as both versions are indeed equivalent, compilers should generally be able to optimize both versions equally well (I'd probably go as far as considering a compiler's failure to optimize one but not the other a performance bug). That being said, note that there are lots of subtleties such as the wrap-around behavior of unsigned arithmetic that can make iterating over an index not exactly equivalent to iterating over a pointer…
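One such subtlety, sketched below under my own assumptions (a 64-bit target and a 32-bit unsigned index; whether this actually changes codegen depends on the compiler version and flags): unsigned arithmetic wraps, so a strided index expression like 2*i is defined to wrap modulo 2^32, and the compiler cannot always assume the addresses advance monotonically the way it can with a pointer or a size_t index.
#include <stddef.h>

/* With a 32-bit unsigned index, 2*i wraps modulo 2^32 by definition,
   which the compiler must account for when widening to a 64-bit address. */
void gather_u32(float *dst, const float *src, unsigned n) {
    for (unsigned i = 0; i < n; i++)
        dst[i] = src[2*i];
}

/* With size_t, the index width matches the pointer width on typical
   64-bit targets, so the address computation is simpler to analyze. */
void gather_sz(float *dst, const float *src, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = src[2*i];
}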

Theory of arrays in Z3: (1) model is difficult to understand, (2) do not know how to implement functions and (3) difference with sequences

Following up on the question published in How expressive can we be with arrays in Z3(Py)? An example, I expressed the following formula in Z3Py:
Exists i::Integer s.t. (0<=i<|arr|) & (avg(arr)+t<arr[i])
This means: is there a position i (0<=i<|arr|) in the array whose value arr[i] is greater than the average of the array, avg(arr), plus a given threshold t?
The solution in Z3Py:
from z3 import *

i = Int('i')
t = Int('t')
avg_arr = Int('avg_arr')
len_arr = Int('len_arr')
arr = Array('arr', IntSort(), IntSort())
phi_1 = And(0 <= i, i < len_arr)
phi_2 = (t + avg_arr < arr[i])
phi = Exists(i, And(phi_1, phi_2))
s = Solver()
s.add(phi)
print(s.check())
print(s.model())
Note that (1) the formula is satisfiable and (2) each time I execute it, I get a different model. For instance, I just got: [avg_arr = 0, t = 7718, len_arr = 1, arr = K(Int, 7719)].
I have three questions now:
What does arr = K(Int, 7719) mean? Does this mean the array contains one Int element with value 7719? In that case, what does the K mean?
Of course, this implementation is wrong in the sense that the average and length values are independent of the array itself. How can I implement simple avg and len functions?
Where is the i index in the model given by the solver?
Also, in which sense would this implementation be different using sequences instead of arrays?
(1) arr = K(Int, 7719) means that it's a constant array. That is, at every location it has the value 7719. Note that this is truly "at every location," i.e., at every integer value. There's no "size" of the array in SMTLib parlance. For that, use sequences.
(2) Indeed, your average/length etc are not related at all to the array. There are ways of modeling this using quantifiers, but I'd recommend staying away from that. They are brittle, hard to code and maintain, and furthermore any interesting theorem you want to prove will get an unknown as answer.
(3) The i you declared and the i you used in the existential are completely independent of each other. (The latter is just a trick so z3 can recognize it as a value.) But I guess you removed that now.
The proper way to model such problems is using sequences. (Although you shouldn't expect much proof performance there either.) Start here: https://microsoft.github.io/z3guide/docs/theories/Sequences/ and see how much you can push it through. Functions like avg will most likely need a recursive definition; for that you can use RecAddDefinition, for an example see: https://stackoverflow.com/a/68457868/936310
Stack Overflow works best when you try to code these yourself and ask very specific questions about how to proceed, as opposed to overarching questions. (But you already knew that!) Best of luck.

Make all pointers in an array of pointers point to the same thing in C?

I have these two definitions:
uint8_t *idx[0x100];
uint8_t raw[0x1000];
Is there any other way than to loop over every element of idx to point them all to raw[0]?
for (i=0; i<sizeof(raw); i++)
idx[i] = &raw[0];
There must be a faster way than ↑ that. Is there an equivalent to memset for pointers?
The simple, straightforward loop is probably the best way (note that there's an error in your current loop as others pointed out).
The advantage is that this kind of loop is very easy to optimize; it's such a common case that compilers have gotten very good at it, and your compiler will use vector instructions and other optimizations as needed to keep it fast, without you needing to hand-optimize anything.
And of course at the same time it is more readable and more maintainable than a hand-optimized version.
Of course if there's a special case, for example if you want to fill it with null pointers, or if you know what the content will be at compile time, then there are some slightly more efficient ways to do that, but in the general case making it easy for your compiler to optimize your code is the simplest way to get good performance.
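For reference, a corrected version of the questioner's loop (the original iterates sizeof(raw) = 0x1000 times over an array that has only 0x100 slots):
size_t i;
for (i = 0; i < sizeof(idx) / sizeof(idx[0]); i++)
    idx[i] = &raw[0];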
From a performance engineering perspective, there is indeed a way to make it faster than
for (i=0; i<sizeof(raw); i++)
idx[i] = &raw[0];
if you compare after turning off the compiler's optimizer, though the difference could be very minor.
Let's do it:
uint8_t *idx[0x100];
uint8_t raw[0x1000];
#define lengthof(arr) (sizeof(arr) / sizeof(*arr))
uint8_t **start = idx;
int length = lengthof(idx);
uint8_t **end = idx + (length & ~1);
for (; start < end;)
{
*start++ = raw;
*start++ = raw;
}
if (length & 1)
*start++ = raw;
This is faster mainly for two reasons:
It operates directly on pointers: with idx[i], the address idx + i * sizeof *idx must be computed on each iteration (at least without optimization), whereas *start already has the address at hand.
The work is duplicated in each iteration (manual unrolling), so the code branches less while maintaining locality. Note that gcc -O2 will most likely do this trick for you anyway.
We only see a fragment of code; if you are initializing a global array of pointers to point to a global array of uint8_t, there is a faster way: write an explicit initializer. The initialization is done at compile time and takes virtually no time at execution time.
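A sketch of that approach for these two globals; the repeat macros are my own, purely to keep the initializer readable:
uint8_t raw[0x1000];

/* helper macros that repeat the initializer "raw" 4, 16, and 64 times */
#define R4   raw, raw, raw, raw
#define R16  R4, R4, R4, R4
#define R64  R16, R16, R16, R16

uint8_t *idx[0x100] = { R64, R64, R64, R64 };  /* 4 * 64 = 0x100 entries */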
If the array is automatic, I'm afraid there is no faster way to do this. If your compiler is clever and instructed to use optimizations (-O2, -O3, etc.) it will probably unroll the loop and generate pretty efficient code. Look at the assembly output to verify this. If it does not, you can unroll the loop yourself:
Assuming the array size is a multiple of 4:
for (i = 0; i < sizeof(idx) / sizeof(*idx); i += 4)
idx[i] = idx[i+1] = idx[i+2] = idx[i+3] = &raw[0];
Note that you should be careful with the sizeof operator: in addition to using the wrong array for the size computation, your code makes two implicit assumptions:
The array element is a char.
idx is an array, not a pointer.
It is advisable to use sizeof(idx) / sizeof(*idx) to compute the number of elements of the array: this expression works for all array element types, but idx still needs to be an array type. Defining a macro:
#define countof(a) (sizeof(a) / sizeof(*(a)))
makes it more convenient, but hides the problem if a is a pointer.

Why does C support negative array indices?

From this post in SO, it is clear that C supports negative indices.
Why support such a potential memory violation in a program?
Shouldn't the compiler throw a Negative Index warning at least? (I am using GCC)
Or is this calculation done in runtime?
EDIT 1: Can anybody hint at its uses?
EDIT 2: Regarding (3): using loop counters inside the [] of arrays/pointers implies run-time calculation of the indices.
The calculation is done at runtime.
Negative indices don't necessarily have to cause a violation, and have their uses.
For example, let's say you have a pointer that is currently pointing to the 10th element in an array. Now, if you need to access the 8th element without changing the pointer, you can do that easily by using a negative index of -2.
char data[] = "01234567890123456789";
char* ptr = &data[9];
char c = ptr[-2]; // 7
Here is an example of use.
An Infinite Impulse Response filter is calculated partially from recent previous output values. Typically, there will be some array of input values and an array where output values are to be placed. If the current output element is y[i], then y[i] may be calculated as y[i] = a0*x[i] + a1*x[i-1] + a2*y[i-1] + a3*y[i-2].
A natural way to write code for this is something like:
void IIR(float *x, float *y, size_t n)
{
    /* a0..a3 are the filter coefficients, assumed defined elsewhere */
    for (size_t i = 0; i < n; ++i)
        y[i] = a0*x[i] + a1*x[i-1] + a2*y[i-1] + a3*y[i-2];
}
Observe that when i is zero, y[i-1] and y[i-2] have negative indices. In this case, the caller is responsible for creating an array, setting the initial two elements to “starter values” for the output (often either zero or values held over from a previous buffer), and passing a pointer to where the first new value is to be written. Thus, this routine, IIR, normally receives a pointer into the middle of an array and uses negative indices to address some elements.
Why support such a potential memory violation in a program?
Because it follows the pointer arithmetic, and may be useful in certain case.
Shouldn't the compiler throw a Negative Index warning at least? (am using GCC)
For the same reason the compiler won't warn you when you access array[10] on an array that has only 10 elements: it leaves that work to the programmer.
Or is this calculation done in runtime?
Yes, the calculation is done at runtime.
Elaborating on Taymon's answer:
float arr[10];
float *p = &arr[2];
p[-2]
is now perfectly OK. I haven't seen a good use of negative indices, but why should the standard exclude them if it is in general undecidable whether you are pointing outside of a valid range?
OP: Why support ... a potential memory violation?
It has potential uses, for as the OP says it is a potential violation and not a certain memory violation. C is about allowing users to do many things, including all the rope they need to hang themselves.
OP: ... throw a Negative Index warning ...
If concerned, use an unsigned index or, better yet, size_t.
OP ... calculation done in runtime?
Yes, quite often as in a[i], where i is not a constant.
OP: hint at its uses?
Example: one is processing a point in an array of points (Pt) and wants to determine whether the mid-point is a candidate for removal because it is coincident. Assume the calling function has already determined that Mid is neither the first nor the last point.
static int IsCoincident(Pt *Mid) {
Pt *Left = &Mid[-1]; // fixed negative index
Pt *Right = &Mid[+1];
return foo(Left, Mid, Right);
}
Array subscripts are just syntactic sugar for dereferencing of pointers to arbitrary places in memory. The compiler can't warn you about negative indexes because it doesn't know at compile time where a pointer will be pointing to. Any given pointer arithmetic expression might or might not result in a valid address for memory access.
a[b] does the same thing as *(a+b). Since the latter allows negative b, so does the former.
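A minimal self-contained illustration of that equivalence:
#include <stdio.h>

int main(void) {
    int a[5] = {0, 1, 2, 3, 4};
    int *p = a + 3;
    /* p[-1] is *(p + (-1)), i.e. a[2]; both expressions print 2 */
    printf("%d %d\n", p[-1], *(p - 1));
    return 0;
}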
Example of using negative array indices.
I use negative indices to check message protocols. For example, one protocol format looks like:
<nnn/message/f>
or, equally valid:
<nnn/message>
The parameter f is optional and must be a single character if supplied.
If I want to get to the value of character f, I first get a pointer to the > character:
char * end_ptr = strchr(msg, '>');
char f_char = '1'; /* default value */
Now I check if f is supplied and extract it (here is where the negative array index is used):
if (end_ptr[-2] == '/')
{
f_char = end_ptr[-1];
}
Note that I've left out error checking and other code that is not relevant to this example.

C initializing a (very) large integer array with values corresponding to index

Edit3: Optimized by limiting the initialization of the array to only odd numbers. Thank you @Ronnie!
Edit2: Thank you all, seems as if there's nothing more I can do for this.
Edit: I know Python and Haskell are implemented in other languages and more or less perform the same operation I have below, and that compiled C code will beat them out any day. I'm just wondering if standard C (or any libraries) have built-in functions for doing this faster.
I'm implementing a prime sieve in C using Eratosthenes' algorithm and need to initialize an integer array of arbitrary size n from 0 to n. I know that in Python you could do:
integer_array = range(n)
and that's it. Or in Haskell:
integer_array = [1..n]
However, I can't seem to find an analogous method implemented in C. The solution I've come up with initializes the array and then iterates over it, assigning each value to the index at that point, but it feels incredibly inefficient.
int init_array()
{
/*
* assigning upper_limit manually in function for now, will expand to take value for
* upper_limit from the command line later.
*/
int upper_limit = 100000000;
int size = floor(upper_limit / 2) + 1;
int *int_array = malloc(sizeof(int) * size);
// debug macro, basically replaces assert(), disregard.
check(int_array != NULL, "Memory allocation error");
int_array[0] = 0;
int_array[1] = 2;
int i;
for(i = 2; i < size; i++) {
int_array[i] = (i * 2) - 1;
}
// checking some arbitrary point in the array to make sure it assigned properly.
// the value at any index 'i' should equal (i * 2) - 1 for i >= 2
printf("%d\n", int_array[1000]); // should equal 1999
printf("%d\n", int_array[size-1]); // should equal 99999999
free(int_array);
return 0;
error:
return -1;
}
Is there a better way to do this? (no, apparently there's not!)
The solution I've come up with initializes the array and then iterates over it, assigning each value to the index at that point, but it feels incredibly inefficient.
You may be able to cut down on the number of lines of code, but I do not think this has anything to do with "efficiency".
While there is only one line of code in Haskell and Python, what happens under the hood is the same thing as your C code does (in the best case; it could perform much worse depending on how it is implemented).
There are standard library functions to fill an array with constant values (and they could conceivably perform better, although I would not bet on that), but this does not apply here.
Here a better algorithm is probably a better bet in terms of optimising the allocation: halve the size of int_array by taking advantage of the fact that you only need to test odd numbers in the sieve, then run it through some wheel factorisation for the numbers 3, 5 and 7 to reduce the subsequent comparisons by 70%+.
That should speed things up; a rough sketch of the odd-only idea follows.
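Here is such a sketch (my own illustration, not taken from the asker's code): entry i of the flag array represents the odd number 2*i + 1, so even candidates are never stored at all.
#include <stdbool.h>
#include <stdlib.h>

/* Odd-only Sieve of Eratosthenes; returns a malloc'd flag array (caller
   frees), where is_prime[i] refers to the odd number 2*i + 1. The prime 2
   must be handled separately by the caller. */
bool *odd_sieve(size_t upper_limit)
{
    size_t size = upper_limit / 2 + 1;   /* candidates 1, 3, 5, ... */
    bool *is_prime = malloc(size * sizeof *is_prime);
    if (is_prime == NULL)
        return NULL;
    for (size_t i = 0; i < size; i++)
        is_prime[i] = true;
    is_prime[0] = false;                 /* 1 is not prime */
    for (size_t i = 1; (2*i + 1) * (2*i + 1) <= upper_limit; i++) {
        if (!is_prime[i])
            continue;
        size_t p = 2*i + 1;
        /* cross off odd multiples of p, starting at p*p */
        for (size_t j = (p*p - 1) / 2; j < size; j += p)
            is_prime[j] = false;
    }
    return is_prime;
}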

Optimizing C loops

I'm new to C from many years of Matlab for numerical programming. I've developed a program to solve a large system of differential equations, but I'm pretty sure I've done something stupid as, after profiling the code, I was surprised to see three loops that were taking ~90% of the computation time, despite the fact they are performing the most trivial steps of the program.
My question is in three parts based on these expensive loops:
Initialization of an array to zero. When J is declared to be a double array are the values of the array initialized to zero? If not, is there a fast way to set all the elements to zero?
void spam(){
double J[151][151];
/* Other relevant variables declared */
calcJac(data,J,y);
/* Use J */
}
static void calcJac(UserData data, double J[151][151],N_Vector y)
{
/* The first expensive loop */
int iter, jter;
for (iter=0; iter<151; iter++) {
for (jter = 0; jter<151; jter++) {
J[iter][jter] = 0;
}
}
/* More code to populate J from data and y that runs very quickly */
}
During the course of solving I need to solve matrix equations defined by P = I - gamma*J. The construction of P is taking longer than solving the system of equations it defines, so something I'm doing is likely in error. In the relatively slow loop below, is accessing a matrix that is contained in a structure 'data' the slow component, or is it something else about the loop?
for (iter = 1; iter<151; iter++) {
for(jter = 1; jter<151; jter++){
P[iter-1][jter-1] = - gamma*(data->J[iter][jter]);
}
}
Is there a best practice for matrix multiplication? In the loop below, Ith(v,iter) is a macro for getting the iter-th component of a vector held in the N_Vector structure 'v' (a data type used by the Sundials solvers). Particularly, is there a best way to get the dot product between v and the rows of J?
int iter, jter;
double Jv_scratch = 0;
for (iter=1; iter<151; iter++) {
for (jter=1; jter<151; jter++) {
Jv_scratch += J[iter][jter]*Ith(v,jter);
}
Ith(Jv,iter) = Jv_scratch;
Jv_scratch = 0;
}
1) No, they're not. You can memset the array as follows:
memset( J, 0, sizeof( double ) * 151 * 151 );
or you can use an array initialiser:
double J[151][151] = { 0.0 };
2) Well, you are using a fairly complex calculation to compute the position in P and the position in J.
You may well get better performance by stepping through with pointers:
for (iter = 1; iter < 151; iter++)
{
    /* row iter-1 of P pairs with row iter of data->J, columns 1..150 */
    double *pP = &P[iter-1][0];
    double *pJ = &data->J[iter][1];
    for (jter = 1; jter < 151; jter++, pP++, pJ++)
    {
        *pP = -gamma * *pJ;
    }
}
This way you move the array index calculations outside of the inner loop.
3) The best practice is to try and move as many calculations out of the loop as possible. Much like I did on the loop above.
First, I'd advise you to split up your question into three separate questions. It's hard to answer all three; I, for example, have not worked much with numerical analysis, so I'll only answer the first one.
First, variables on the stack are not initialized for you. But there are faster ways to initialize them. In your case I'd advise using memset:
static void calcJac(UserData data, double J[151][151],N_Vector y)
{
memset((void*)J, 0, sizeof(double) * 151 * 151);
/* More code to populate J from data and y that runs very quickly */
}
memset is a fast library routine to fill a region of memory with a specific pattern of bytes. It just so happens that setting all bytes of a double to zero sets the double to zero, so take advantage of your library's fast routines (which will likely be written in assembler to take advantage of things like SSE).
Others have already answered some of your questions. On the subject of matrix multiplication: it is difficult to write a fast algorithm for this unless you know a lot about cache architecture and so on (the slowness is caused by the order in which you access array elements, which can cause thousands of cache misses).
You can try Googling for terms like "matrix-multiplication", "cache", "blocking" if you want to learn about the techniques used in fast libraries. But my advice is to just use a pre-existing maths library if performance is key.
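To give a flavor of the blocking technique, here is a minimal, untuned sketch of loop tiling for a square matrix product (N and BLOCK are my own placeholder values; production code would call a tuned BLAS routine instead):
#define N 151
#define BLOCK 32

/* C = A * B with loop tiling: each tile of A and B is reused while it is
   still resident in cache, cutting misses relative to the naive order. */
void matmul_blocked(const double A[N][N], const double B[N][N], double C[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            C[i][j] = 0.0;

    for (int ii = 0; ii < N; ii += BLOCK)
        for (int kk = 0; kk < N; kk += BLOCK)
            for (int jj = 0; jj < N; jj += BLOCK)
                for (int i = ii; i < ii + BLOCK && i < N; i++)
                    for (int k = kk; k < kk + BLOCK && k < N; k++)
                        for (int j = jj; j < jj + BLOCK && j < N; j++)
                            C[i][j] += A[i][k] * B[k][j];
}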
Initialization of an array to zero. When J is declared to be a double array, are the values of the array initialized to zero? If not, is there a fast way to set all the elements to zero?
It depends on where the array is allocated. If it is declared at file scope, or as static, then the C standard guarantees that all elements are set to zero. The same is guaranteed if you set the first element to a value upon initialization, i.e.:
double J[151][151] = {0}; /* set first element to zero */
By setting the first element to something, the C standard guarantees that all other elements in the array are set to zero, as if the array were statically allocated.
Practically for this specific case, I very much doubt it will be wise to allocate 151*151*sizeof(double) bytes on the stack no matter which system you are using. You will likely have to allocate it dynamically, and then none of the above matters. You must then use memset() to set all bytes to zero.
In the relatively slow loop below, is accessing a matrix that is contained in a structure 'data' the slow component, or is it something else about the loop?
You should ensure that the function called from it is inlined. Otherwise there isn't much else you can do to optimize the loop: what is optimal is highly system-dependent (ie how the physical cache memories are built). It is best to leave such optimization to the compiler.
You could of course obfuscate the code with manual optimization tricks such as counting down towards zero rather than up, or using ++i rather than i++, and so on. But the compiler really should be able to handle such things for you.
As for the matrix multiplication, I don't know the mathematically most efficient way, but I suspect it is of minor relevance to the efficiency of the code. The big time thief here is the double type. Unless you really need high accuracy, I'd consider using float or int to speed up the algorithm.
