Runtime of Initializing a Zero-Filled Array in C

If I were to define the following array using the zero-fill initialization syntax on the stack:
int arr[ 10 ] = { 0 };
... is the run time constant or linear?
My assumption is that the runtime is linear, based on the fact that calloc must walk over every byte to zero-fill it.
If you could also provide a why, and not just "it's order xxx", that would be tremendous!

The runtime is linear in the array size.
To see why, here's a sample implementation of memset, which initializes an array to an arbitrary value. At the assembly-language level, this is no different from what goes on in your code.
void *memset(void *dst, int val, size_t count) {
    unsigned char *start = dst;
    for (size_t i = 0; i < count; i++)
        *start++ = (unsigned char)val;
    return dst;
}
Of course, compilers will often use intrinsics to set multiple array elements at a time. Depending on the size of the array and things like alignment and padding, this might make the runtime over array length more like a staircase, with the step size based on the vector length. Over small differences in array size, this would effectively make the runtime constant, but the general pattern is still linear.

This is actually a tip-of-the-iceberg question. What you are really asking is: what is the order (Big O) of initializing an array? Essentially, the code loops through each element of the array and sets it to zero. You could write a for loop to do the same thing.
The order of that loop is O(n); that is, the time spent in the loop increases in proportion to the number of elements being initialized.
If the hardware supported an instruction that set all bytes from location X to Y to zero, and that instruction completed in M instruction cycles where M never changed regardless of the number of bytes being set, then that would be constant time, O(1).
In general, O(1) is referred to as constant time and O(n) as linear.
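For illustration, here is the equivalent hand-written loop (a minimal sketch), which makes the one-store-per-element cost explicit:
void zero_fill(int *arr, size_t n)
{
    for (size_t i = 0; i < n; i++)
        arr[i] = 0;   /* one store per element: time grows linearly with n */
}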

Related

Computing max and min efficiently on a data stream

I'm working in C with a stream of data. Basically I receive a column array of 6 elements every n milliseconds. I would like to compute the max and min value for each row of data.
To make this clear, this is what my data looks like (a toy example; in reality I'll have thousands of columns acquired):
[ 6] [-10] [ 5]
[ 1] [  5] [ 3]
[ 5] [ 30] [10]
[ 2] [-10] [ 0]
[-2] [  5] [10]
[-5] [  0] [ 1]
So basically (as I said) I receive a column of data every n milliseconds, and I want to compute the max and min value row-wise. So in my previous example my result would be:
max_values=[6,5,30,2,10,1]
min_values=[-10,1,5,-10,-2,-5]
I want to point out that I have no access to the full matrix, I can only work over single columns of 6 elements that I receive every n milliseconds.
This is my simple algorithm so far (I'm omitting the rest of the code since it's part of a bigger project):
for (int i = 0; i < 6; i++) {
    if (input[i] > temp_max[i]) {
        temp_max[i] = input[i];
    }
    if (input[i] < temp_min[i]) {
        temp_min[i] = input[i];
    }
}
Where input, temp_max and temp_min are all float arrays of dimension 6.
Basically my code executes this piece of code every time a new input array is available and updates the maximum and minimum accordingly.
Since I'm interested in performance (this is going to run on an embedded system), is there any way to improve this part of the code? Performing a comparison for every single element of the two arrays doesn't seem like the smartest idea.
With random input data (i.e. unordered data), it'll be pretty hard (aka impossible) to find min/max without a comparison per element.
You may get some minor improvement by seeding temp_max and temp_min directly from the very first column you receive (no comparisons are needed there), and by skipping the min test whenever the max test succeeds:
for (int i = 0; i < 6; i++) {
    if (input[i] > temp_max[i]) {
        temp_max[i] = input[i];
    }
    else // If the current element raised the max, it cannot also lower the min
    {
        if (input[i] < temp_min[i]) {
            temp_min[i] = input[i];
        }
    }
}
but I doubt this will be a significant improvement.
Branching is slow, especially on embedded systems, and so is scalar computation.
Fortunately, your target appears to be an ARM-based processor supporting the NEON SIMD instruction set (apparently one based on a 64-bit ARMv8 Cortex-A53). NEON can compute 4 32-bit floating-point operations at once. This should be much faster than the current code (which compilers apparently fail to vectorize).
Here is example code (untested; note that the intrinsics require the arm_neon.h header):
#include <arm_neon.h>

void minmax_optim(float temp_min[6], float temp_max[6], float input[6]) {
    /* Compute the first 4 floats */
    float32x4_t vInput = vld1q_f32(input);
    float32x4_t vMin = vld1q_f32(temp_min);
    float32x4_t vMax = vld1q_f32(temp_max);
    vMin = vminq_f32(vInput, vMin);
    vMax = vmaxq_f32(vInput, vMax);
    vst1q_f32(temp_min, vMin);
    vst1q_f32(temp_max, vMax);
    /* Remaining 2 floats */
    float32x2_t vLastInput = vld1_f32(input + 4);
    float32x2_t vLastMin = vld1_f32(temp_min + 4);
    float32x2_t vLastMax = vld1_f32(temp_max + 4);
    vLastMin = vmin_f32(vLastInput, vLastMin);
    vLastMax = vmax_f32(vLastInput, vLastMax);
    vst1_f32(temp_min + 4, vLastMin);
    vst1_f32(temp_max + 4, vLastMax);
}
The resulting code should be much faster. One can see on Godbolt that the number of instructions in this vectorized implementation is drastically smaller than in the reference implementation, with no conditional jump instructions at all.
You nailed it -- you have to keep temporary max and min arrays. Unfortunately, if we're talking strictly C, that appears to be the only possible algorithm, and thus the most performant one.
Since you've mentioned it's going to run on an embedded system (but omitted which), please make sure you have hardware floating-point support. If you don't, that's going to carry a high performance penalty. If you have high-end hardware, you can look for the availability of vector instructions, but that is platform-specific, possibly requiring assembly.
To my mind, the approach as such cannot be substantially improved, as the input is not available as a whole. That being said, the inner comparisons can be compacted. The assignments
if (input[i] > temp_max[i]) {
    temp_max[i] = input[i];
}
if (input[i] < temp_min[i]) {
    temp_min[i] = input[i];
}
can be improved to
if (input[i] > temp_max[i]) {
    temp_max[i] = input[i];
}
else if (input[i] < temp_min[i]) {
    temp_min[i] = input[i];
}
because if the current value replaces the temporary maximum, it cannot also replace the temporary minimum (assuming some sensible initialization).
This handles only the max, but it is easy to extend:
#define MAX(a,b,c) ((a) > (b) ? ((a) > (c) ? (a) : (c)) : ((b) > (c) ? (b) : (c)))
void rowmax(int *a, int *b, int *c, int *result, size_t size)
{
    for (size_t index = 0; index < size; index++)
    {
        result[index] = MAX(a[index], b[index], c[index]);
    }
}

Dynamically indexing an array in C

Is it possible to create arrays based on their index, as in
int x = 4;
int y = 5;
int someNr = 123;
int foo[x][y] = someNr;
dynamically/on the fly, without creating foo[0...3][0...4]?
If not, is there a data structure that allow me to do something similar to this in C?
No.
As written, your code makes no sense at all. You need foo to be declared somewhere, and then you can index into it with foo[x][y] = someNr;. But you can't just make foo spring into existence, which is what it looks like you are trying to do.
Either create foo with the correct sizes (only you can say what they are), int foo[16][16]; for example, or use a different data structure.
In C++ you could do a map<pair<int, int>, int>
Variable Length Arrays
Even if x and y were replaced by constants, you could not initialize the array using the notation shown. You'd need to use:
int fixed[3][4] = { someNr };
or similar (extra braces, perhaps; more values perhaps). You can, however, declare/define variable length arrays (VLA), but you cannot initialize them at all. So, you could write:
int x = 4;
int y = 5;
int someNr = 123;
int foo[x][y];
for (int i = 0; i < x; i++)
{
    for (int j = 0; j < y; j++)
        foo[i][j] = someNr + i * (x + 1) + j;
}
Obviously, you can't use x and y as indexes without writing (or reading) outside the bounds of the array. The onus is on you to ensure that there is enough space on the stack for the values chosen as the limits on the arrays (it won't be a problem at 3x4; it might be at 300x400 though, and will be at 3000x4000). You can also use dynamic allocation of VLAs to handle bigger matrices.
VLA support is mandatory in C99, optional in C11 and C18, and non-existent in strict C90.
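As a minimal sketch of that dynamic route (sizes here are just for illustration, and <stdlib.h> is assumed to be included), a pointer to a VLA gives you one contiguous heap block indexed like a 2D array:
int x = 300, y = 400;
int (*foo)[y] = malloc(x * sizeof *foo);  /* one contiguous x-by-y block */
if (foo != NULL)
{
    foo[5][7] = 123;   /* indexed like an ordinary 2D array */
    free(foo);
}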
Sparse arrays
If what you want is 'sparse array support', there is no built-in facility in C that will assist you. You have to devise (or find) code that will handle that for you. It can certainly be done; Fortran programmers used to have to do it quite often in the bad old days when megabytes of memory were a luxury and MIPS meant millions of instructions per second and people were happy when their computer could do double-digit MIPS (and the Fortran 90 standard was still years in the future).
You'll need to devise a structure and a set of functions to handle the sparse array. You will probably need to decide whether you have values in every row, or whether you only record the data in some rows. You'll need a function to assign a value to a cell, and another to retrieve the value from a cell. You'll need to think what the value is when there is no explicit entry. (The thinking probably isn't hard. The default value is usually zero, but an infinity or a NaN (not a number) might be appropriate, depending on context.) You'd also need a function to allocate the base structure (would you specify the maximum sizes?) and another to release it.
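As one illustration (the names and layout here are hypothetical, one of many workable designs), you could keep a list of occupied cells per row:
typedef struct Cell {
    int col;              /* column of this occupied cell */
    double value;
    struct Cell *next;    /* next occupied cell in the same row */
} Cell;

typedef struct {
    int rows, cols;       /* logical bounds of the sparse array */
    Cell **row;           /* row[i]: list of occupied cells, or NULL */
} Sparse;

double sparse_get(const Sparse *m, int r, int c)
{
    for (const Cell *p = m->row[r]; p != NULL; p = p->next)
        if (p->col == c)
            return p->value;
    return 0.0;           /* default value for cells never assigned */
}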
The most efficient way to create a dynamic index for an array is to create an empty array of the same data type as the array being indexed.
Let's imagine we are using integers for the sake of simplicity. You can then stretch the concept to any other data type.
The ideal index depth will depend on the length of the data to index and will be somewhere close to the length of the data.
Let's say you have 1 million 64-bit integers in the array to index.
First of all you should sort the data and eliminate duplicates. That's easy to achieve using qsort() (the C standard library's quicksort) and some duplicate-removal function such as
#include <stdint.h>
#include <string.h>

uint64_t remove_dupes(char **unord_arr, char **ord_arr, uint64_t arr_size)
{
    uint64_t i, j = 0;
    for (i = 1; i < arr_size; i++)
    {
        if (strcmp(unord_arr[i], unord_arr[i-1]) != 0) {
            strcpy(ord_arr[j], unord_arr[i-1]);
            j++;
        }
        if (i == arr_size - 1) {
            strcpy(ord_arr[j], unord_arr[i]);
            j++;
        }
    }
    return j;
}
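For the sorting step itself, a possible qsort() comparator might look like this (a sketch; cmp_str is a name introduced here, matching the string-style entries used by remove_dupes above):
static int cmp_str(const void *a, const void *b)
{
    /* a and b point to char* elements of the array */
    return strcmp(*(char * const *)a, *(char * const *)b);
}

/* usage: qsort(unord_arr, arr_size, sizeof(char *), cmp_str); */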
Adapt remove_dupes() above to your needs; you should free() the unordered array once the function has finished copying into the ordered one. The function above is very fast, though note that it returns zero entries when the array to order contains a single element; that's probably something you can live with.
Once the data is sorted and unique, create an index with a length close to that of the data. It does not need to be an exact length, although sticking to powers of 10 will make everything easier in the case of integers.
uint64_t *idx = calloc((size_t)pow(10, indexdepth), sizeof(uint64_t));
This will create an empty index array.
Then populate the index. Traverse the array being indexed just once, and every time you detect a change in the leading significant figures (as many of them as the index depth), record the position where that new prefix first appeared.
If you choose an indexdepth of 2, you will have 10² = 100 possible values in your index, typically going from 0 to 99.
When you detect a number that starts with 10 (e.g. 103456), you add an entry to the index. Let's say that 103456 was detected at position 733; your index entry would be:
index[10] = 733;
The next entry, beginning with 11, goes in the next index slot. Let's say the first number beginning with 11 is found at position 2023:
index[11] = 2023;
And so on.
When you later need to find some number in your original array of 1 million entries, you don't have to iterate over the whole array; you just check where in your index the first number with the same two leading significant digits is stored. Entry index[10] tells you where the first number starting with 10 is stored. You can then iterate forward until you find your match.
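As a rough sketch of the population step (build_index is a name introduced here; it assumes <stdint.h>, an indexdepth of 2, and keys of a fixed digit width, e.g. zero-padded as discussed below, so that equal prefixes form contiguous runs in the sorted data):
void build_index(const uint64_t *data, uint64_t n, uint64_t *index)
{
    uint64_t prev = UINT64_MAX;        /* sentinel: no prefix seen yet */
    for (uint64_t pos = 0; pos < n; pos++) {
        uint64_t prefix = data[pos];
        while (prefix >= 100)          /* keep the two leading decimal digits */
            prefix /= 10;
        if (prefix != prev) {
            index[prefix] = pos;       /* first position with this prefix */
            prev = prefix;
        }
    }
}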
In my example I employed a small index, so the average number of iterations you will need to perform is 1000000 / 100 = 10000.
If you enlarge your index to something close to the length of the data, the number of iterations will tend to 1, making any search blazing fast.
What I like to do is write a simple routine that tells me the ideal depth of the index given the type and length of the data to index.
Please note that in the example I have posed, 64-bit numbers are indexed by their first indexdepth significant figures, so 10 and 100001 will be stored in the same index segment. That's not a problem on its own; nonetheless, each master has his small book of secrets: treating numbers as fixed-length hexadecimal strings can help keep a strict numerical order.
You don't have to change the base, though; you could consider 10 to be 0000010 to keep it in the 00 index segment and keep base-10 numbers ordered. Using different numerical bases is in any case trivial in C, which is of great help for this task.
As you make your index depth larger, the number of entries per index segment is reduced.
Please do note that programming, especially at a low level like C, consists in large part of understanding the trade-off between CPU cycles and memory use.
Creating the proposed index is a way to reduce the number of CPU cycles required to locate a value, at the cost of using more memory as the index grows. This is nonetheless the way to go nowadays, as massive amounts of memory are cheap.
As SSD speeds get closer to that of RAM, storing indexes in files is worth considering. In any case, modern OSs tend to load into RAM as much as they can, so using files would end up performing similarly.

Optimising C for performance vs memory optimisation using multidimensional arrays

I am struggling to decide between two optimisations for building a numerical solver for the Poisson equation.
Essentially, I have a two-dimensional array of which I require n doubles in the first row, n/2 in the second, n/4 in the third, and so on...
Now my difficulty is deciding whether or not to use a contiguous 2D array grid[m][n], which for a large n would have many unused zeroes but would probably reduce the chance of a cache miss. The other, more memory-efficient method would be to dynamically allocate an array of pointers to arrays of decreasing size. This is considerably more efficient in terms of memory storage, but would it potentially hinder performance?
I don't think I clearly understand the trade-offs in this situation. Could anybody help?
For reference, I made a plot of the memory requirements in each case (not reproduced here).
There is no hard and fast answer to this one. If your algorithm needs more memory than you expect to be given then you need to find one which is possibly slower but fits within your constraints.
Beyond that, the only option is to implement both and then compare their performance. If saving memory results in a 10% slowdown, is that acceptable for your use? If the version using more memory is 50% faster but only runs on the biggest computers, will it be used? These are the questions we have to grapple with in Computer Science. But you can only look at them once you have numbers. Otherwise you are just guessing, and a fair amount of the time our intuition about optimization is not correct.
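If it helps, here is a minimal timing sketch for such a comparison (solve_contiguous and solve_jagged are hypothetical stand-ins for your two implementations):
#include <stdio.h>
#include <time.h>

static double seconds_taken(void (*fn)(void))
{
    clock_t start = clock();
    fn();                    /* run the candidate implementation */
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

/* usage: printf("contiguous: %f s\n", seconds_taken(solve_contiguous)); */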
Build a custom array that will follow the rules you have set.
The implementation will use a simple 1D contiguous array. You will need a function that returns the start of a given row. Something like this:
int* Get( int* array , int n , int row ) // might contain logical errors
{
    int pos = 0 ;
    while( row-- )
    {
        pos += n ;
        n /= 2 ;
    }
    return array + pos ;
}
Where n is the same n you described and is rounded down on every iteration.
You will have to call this function only once per entire row.
This function will never take more than O(log n) time, but if you want you can replace it with a single expression: http://en.wikipedia.org/wiki/Geometric_series#Formula
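A sketch of that closed form (valid when n is a power of two, so every division in the loop version is exact):
int* GetFast( int* array , int n , int row )
{
    /* pos(row) = n + n/2 + ... + n/2^(row-1) = 2n - n/2^(row-1) */
    return array + ( row == 0 ? 0 : 2 * n - ( n >> ( row - 1 ) ) );
}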
You could use a single array and just calculate your offset yourself:
size_t get_offset(int n, int row, int column) {
    size_t offset = column;
    while (row--) {
        offset += n;
        n /= 2;  /* each row is half as long as the one above */
    }
    return offset;
}
double *array = calloc(get_offset(n, 64, 0), sizeof(double));
access via
array[get_offset(n, row, column)]

Why does C support negative array indices?

From this post in SO, it is clear that C supports negative indices.
Why support such a potential memory violation in a program?
Shouldn't the compiler throw a Negative Index warning at least? (am using GCC)
Or is this calculation done in runtime?
EDIT 1: Can anybody hint at its uses?
EDIT 2: regarding 3): using loop counters inside the [] of arrays/pointers implies run-time calculation of indices.
The calculation is done at runtime.
Negative indices don't necessarily have to cause a violation, and have their uses.
For example, let's say you have a pointer that is currently pointing to the 10th element in an array. Now, if you need to access the 8th element without changing the pointer, you can do that easily by using a negative index of -2.
char data[] = "01234567890123456789";
char* ptr = &data[9];
char c = ptr[-2]; // 7
Here is an example of use.
An Infinite Impulse Response filter is calculated partially from recent previous output values. Typically, there will be some array of input values and an array where output values are to be placed. If the current output element is y[i], then y[i] may be calculated as y[i] = a0*x[i] + a1*x[i-1] + a2*y[i-1] + a3*y[i-2].
A natural way to write code for this is something like:
void IIR(float *x, float *y, size_t n)
{
    /* a0..a3 are the filter coefficients, assumed defined elsewhere */
    for (size_t i = 0; i < n; ++i)
        y[i] = a0*x[i] + a1*x[i-1] + a2*y[i-1] + a3*y[i-2];
}
Observe that when i is zero, y[i-1] and y[i-2] have negative indices. In this case, the caller is responsible for creating an array, setting the initial two elements to “starter values” for the output (often either zero or values held over from a previous buffer), and passing a pointer to where the first new value is to be written. Thus, this routine, IIR, normally receives a pointer into the middle of an array and uses negative indices to address some elements.
Why support such a potential memory violation in a program?
Because it follows from pointer arithmetic, and may be useful in certain cases.
Shouldn't the compiler throw a Negative Index warning at least? (am using GCC)
The same reason the compiler won't warn you when you access array[10] when the array has only 10 elements. Because it leaves that work to the programmers.
Or is this calculation done in runtime?
Yes, the calculation is done in runtime.
Elaborating on Taymon's answer:
float arr[10];
float *p = &arr[2];
p[-2]
is now perfectly OK. I haven't seen a good use of negative indices, but why should the standard exclude it if it is in general undecidable whether you are pointing outside of a valid range.
OP: Why support ... a potential memory violation?
It has potential uses, for as OP says it is a potential violation and not a certain memory violation. C is about allowing users to do many things, including all the rope they need to hang themselves.
OP: ... throw a Negative Index warning ...
If concerned, use unsigned index or better yet, use size_t.
OP ... calculation done in runtime?
Yes, quite often as in a[i], where i is not a constant.
OP: hint at its uses?
Example: one is processing a point in an array of points (Pt) and wants to determine if the mid-point is a candidate for removal because it is coincident. Assume the calling function has already determined that Mid is neither the first nor the last point.
static int IsCoincident(Pt *Mid) {
    Pt *Left = &Mid[-1];   // fixed negative index
    Pt *Right = &Mid[+1];
    return foo(Left, Mid, Right);
}
Array subscripts are just syntactic sugar for dereferencing of pointers to arbitrary places in memory. The compiler can't warn you about negative indexes because it doesn't know at compile time where a pointer will be pointing to. Any given pointer arithmetic expression might or might not result in a valid address for memory access.
a[b] does the same thing as *(a+b). Since the latter allows the negative b, so does the former.
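A tiny illustration of that equivalence (values are arbitrary):
int v[5] = {10, 20, 30, 40, 50};
int *mid = &v[2];
int a = mid[-1];       /* 20 */
int b = *(mid + -1);   /* also 20: a[b] is defined as *(a + b) */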
Example of using negative array indices.
I use negative indices to check message protocols. For example, one protocol format looks like:
<nnn/message/f>
or, equally valid:
<nnn/message>
The parameter f is optional and must be a single character if supplied.
If I want to get to the value of character f, I first get a pointer to the > character:
char * end_ptr = strchr(msg, '>');
char f_char = '1'; /* default value */
Now I check if f is supplied and extract it (here is where the negative array index is used):
if (end_ptr[-2] == '/')
{
    f_char = end_ptr[-1];
}
Note that I've left out error checking and other code that is not relevant to this example.

Optimizing C loops

I'm new to C from many years of Matlab for numerical programming. I've developed a program to solve a large system of differential equations, but I'm pretty sure I've done something stupid as, after profiling the code, I was surprised to see three loops that were taking ~90% of the computation time, despite the fact they are performing the most trivial steps of the program.
My question is in three parts based on these expensive loops:
Initialization of an array to zero. When J is declared to be a double array, are the values of the array initialized to zero? If not, is there a fast way to set all the elements to zero?
void spam() {
    double J[151][151];
    /* Other relevant variables declared */
    calcJac(data, J, y);
    /* Use J */
}
static void calcJac(UserData data, double J[151][151], N_Vector y)
{
    /* The first expensive loop */
    int iter, jter;
    for (iter = 0; iter < 151; iter++) {
        for (jter = 0; jter < 151; jter++) {
            J[iter][jter] = 0;
        }
    }
    /* More code to populate J from data and y that runs very quickly */
}
During the course of solving I need to solve matrix equations defined by P = I - gamma*J. The construction of P is taking longer than solving the system of equations it defines, so something I'm doing is likely in error. In the relatively slow loop below, is accessing a matrix contained in the structure 'data' the slow component, or is it something else about the loop?
for (iter = 1; iter < 151; iter++) {
    for (jter = 1; jter < 151; jter++) {
        P[iter-1][jter-1] = -gamma*(data->J[iter][jter]);
    }
}
Is there a best practice for matrix multiplication? In the loop below, Ith(v,iter) is a macro for getting the iter-th component of a vector held in the N_Vector structure 'v' (a data type used by the Sundials solvers). Particularly, is there a best way to get the dot product between v and the rows of J?
Jv_scratch = 0;
int iter, jter;
for (iter = 1; iter < 151; iter++) {
    for (jter = 1; jter < 151; jter++) {
        Jv_scratch += J[iter][jter]*Ith(v,jter);
    }
    Ith(Jv,iter) = Jv_scratch;
    Jv_scratch = 0;
}
1) No, they're not. You can memset the array as follows:
memset( J, 0, sizeof( double ) * 151 * 151 );
or you can use an array initialiser:
double J[151][151] = { 0.0 };
2) Well, you are using a fairly complex calculation to compute the position within P and the position within J.
You may well get better performance by stepping through them as pointers:
for (iter = 1; iter < 151; iter++)
{
    double *pP = &P[iter-1][0];            /* row iter-1 of P */
    const double *pJ = &data->J[iter][1];  /* row iter of J, starting at column 1 */
    for (jter = 1; jter < 151; jter++, pP++, pJ++)
    {
        *pP = -gamma * *pJ;
    }
}
This way you move various of the array index calculation outside of the loop.
3) The best practice is to try and move as many calculations out of the loop as possible. Much like I did on the loop above.
First, I'd advise you to split up your question into three separate questions. It's hard to answer all three; I, for example, have not worked much with numerical analysis, so I'll only answer the first one.
First, variables on the stack are not initialized for you. But there are faster ways to initialize them. In your case I'd advise using memset:
static void calcJac(UserData data, double J[151][151], N_Vector y)
{
    memset((void*)J, 0, sizeof(double) * 151 * 151);
    /* More code to populate J from data and y that runs very quickly */
}
memset is a fast library routine to fill a region of memory with a specific pattern of bytes. It just so happens that setting all bytes of a double to zero sets the double to zero, so take advantage of your library's fast routines (which will likely be written in assembler to take advantage of things like SSE).
Others have already answered some of your questions. On the subject of matrix multiplication: it is difficult to write a fast algorithm for this unless you know a lot about cache architecture and so on (the slowness is caused by the order in which you access array elements, which can produce thousands of cache misses).
You can try Googling for terms like "matrix-multiplication", "cache", "blocking" if you want to learn about the techniques used in fast libraries. But my advice is to just use a pre-existing maths library if performance is key.
Initialization of an array to zero. When J is declared to be a double array, are the values of the array initialized to zero? If not, is there a fast way to set all the elements to zero?
It depends on where the array is allocated. If it is declared at file scope, or as static, then the C standard guarantees that all elements are set to zero. The same is guaranteed if you set the first element to a value upon initialization, i.e.:
double J[151][151] = {0}; /* set first element to zero */
By setting the first element to something, the C standard guarantees that all other elements in the array are set to zero, as if the array were statically allocated.
Practically for this specific case, I very much doubt it will be wise to allocate 151*151*sizeof(double) bytes on the stack no matter which system you are using. You will likely have to allocate it dynamically, and then none of the above matters. You must then use memset() to set all bytes to zero.
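A minimal sketch of that dynamic route (assuming <stdlib.h> and <string.h> are included):
double (*J)[151] = malloc(151 * sizeof *J);   /* one contiguous 151x151 block */
if (J != NULL) {
    memset(J, 0, 151 * sizeof *J);            /* all bytes zero => all doubles 0.0 */
    /* ... use J, e.g. pass it to calcJac ... */
    free(J);
}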
In the relatively slow loop below, is accessing a matrix that is contained in a structure 'data' the slow component, or is it something else about the loop?
You should ensure that any function called from inside the loop is inlined. Otherwise there isn't much else you can do to optimize the loop: what is optimal is highly system-dependent (i.e. how the physical cache memories are built). It is best to leave such optimization to the compiler.
You could of course obfuscate the code with manual optimizations such as counting down towards zero rather than up, or using ++i rather than i++, and so on. But the compiler really should be able to handle such things for you.
As for matrix multiplication, I don't know of the mathematically most efficient way, but I suspect it is of minor relevance to the efficiency of the code. The big time thief here is the double type. Unless you really have need for high accuracy, I'd consider using float or int to speed up the algorithm.
