What is faster in C: structs or arrays?

I want to implement (what represents abstractly) a two dimensional 4x4 matrix. All the code I write for matrix multiplication et cetera will be entirely "unrolled" as it were -- that is to say, I will not be using loops to access and write data entries in the matrix.
My question is: In C, would it be faster to use a struct as such:
typedef struct {
    double e0, e1, e2, e3, e4, e5, e6, e7,
           e8, e9, e10, e11, e12, e13, e14, e15;
} My4x4Matrix;
Or would this be faster:
typedef double My4x4Matrix[16];
Given that I will be accessing each matrix element individually as such:
My4x4Matrix a,b,c;
// (Some initialization of a and b.)
...
c.e0=a.e0+b.e0;
c.e1=a.e1+b.e1;
...
Or
My4x4Matrix a,b,c;
// (Some initialization of a and b.)
...
c[0]=a[0]+b[0];
c[1]=a[1]+b[1];
...
Or are they exactly the same speed?

Any decent compiler will generate the exact same code, byte-for-byte. However, using arrays allows you a lot more flexibility; when accessing the matrix elements, you can choose whether you want to access fixed locations or address positions with variables.
I also highly question your choice to "unwind" (unroll?) all the operations by hand. Any good compiler can fully unroll loops with a constant number of iterations for you, and can perhaps even generate SIMD code and/or optimally schedule the order of instructions. You'll have a hard time doing better by hand, and you'll end up with code that's hideous to read. The fact that you asked this question suggests to me that you're probably not sufficiently experienced to do better than even a naive optimizing compiler.
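For illustration, here is what the loop form could look like, using the array typedef from the question; with optimization enabled (say gcc -O2), a compiler can typically unroll and vectorize this on its own (the function name is mine):
#include <stddef.h>

typedef double My4x4Matrix[16];

/* A plain loop; an optimizing compiler can fully unroll this and
   emit SIMD instructions without any manual help. */
void matrix_add(My4x4Matrix c, const My4x4Matrix a, const My4x4Matrix b)
{
    for (size_t i = 0; i < 16; i++)
        c[i] = a[i] + b[i];
}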

Struct elements (fields) can only be accessed by their names explicitly specified in the program's source, which means that every time you access a field the actual field must be selected and hardcoded at compile time. If you wanted to implement the same thing with arrays, that would mean that you would use explicit constant compile-time array indices (as in your example). In this case the performance of the two will be exactly the same and the code generated will be exactly the same (excluding from consideration "malicious" compilers).
However, note that arrays provide you with an extra degree of freedom: if necessary, you can select array elements by a run-time index. This is something that's not possible with structs. Only you know whether it matters to you.
On the other hand, note also that arrays in C are not assignable, which means that you'll be forced to use memcpy to copy your array-based My4x4Matrix. With the struct-based version, normal language-level copying will work. With arrays, this issue can be worked around by wrapping the actual array in a struct, as in the sketch below.
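A minimal sketch of that workaround (the wrapper name MatrixWrap is invented):
typedef struct {
    double e[16];   /* the actual storage */
} MatrixWrap;

void copy_demo(void)
{
    MatrixWrap a = {{0}};
    MatrixWrap b;

    b = a;          /* plain assignment now works */
    b.e[5] = 1.0;   /* and run-time indexing is still available */
}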

I guess both are the same speed. The difference between a struct and an array is just its meaning (in human terms). Both compile down to the same memory accesses at fixed offsets.

I would say the best way is to create a test to try it yourself. Results may vary based on system environments and compilers.

Related

Fixed size arrays in Haskell

I was trying to write a library for linear algebra operations in Haskell. In order to be able to define safe operations for matrices and vectors I wanted to encode their dimensions in their types. After some research I found that using DataKinds one is able to do that, similar to the way it's done here. For example:
data Vector (n :: Nat) a
dot :: Num a => Vector n a -> Vector n a -> a
In the aforementioned article, as well as in some libraries, the size of the vectors is a phantom type and the vector type itself is a wrapper around an Array. While trying to figure out if there is an array type with its size at the type level in the standard library, I started wondering about the underlying representation of arrays. From what I could gather from this commentary on GHC memory layout, arrays need to store their size on the heap, so a 3-dimensional vector would need to take up one more word than necessary. Of course we could use the following definition:
data Vector3 a = Vector3 a a a
which might be fine if we only care about 3D geometry, but it doesn't allow for vectors of arbitrary size and also it makes indexing awkward.
So, my question is this. Wouldn't it be useful, and a potential memory optimization, to have an array type with statically known size in the standard library? As far as I understand, the only thing it would need is a different info table, which would store the size, instead of the size being stored at each heap object. Also, the compiler could choose between Array and SmallArray automatically.
Wouldn't it be useful and a potential memory optimization to have an array type with statically known size in the standard library?
Sure. I suspect if you wrote up your use case carefully and implemented this, GHC HQ would accept a patch. You might want to do the writeup first and double-check that they're into it to avoid wasting time on a patch they won't accept, though; I certainly don't speak for them.
Also, the compiler could choose between Array and SmallArray automatically.
I'm not an expert here, but I kinda doubt this. Usually supporting polymorphism means you need a uniform representation.

Using a giant workspace array for a Fortran subroutine

Quite often when I look at legacy Fortran code for linear algebra subroutines, I notice this:
Instead of working with a bunch of separate arrays, they will concatenate all their arrays into one big workspace array and use pointers to demarcate where a variable begins.
They even concatenate independent non-array variables into arrays. Are there benefits to doing this, and should I be doing this if I want to write optimized code?
No, don't do that if you want to keep a sane mind. This is a practice from the 1960s-1980s, when dynamic allocation was not possible and people wanted only a small number of working arrays in the argument list.
In old subroutines you had a long list of arguments and then one or two working arrays:
call SUB(N1, N2, N3, M1, M2, M3, A, B, C, WRK, IWRK)
if you needed to pass 10 working arrays instead of one, the call would become unwieldy.
But in the 21st century, the most important thing is to keep your code readable and clear, and only after that to optimize it.
BTW, placing some quantities too close together in memory can even be detrimental, due to false sharing.
That does not mean you should fragment your memory too much, but it makes sense to keep stuff together when you will indeed access it sequentially. That's why structures of arrays are used instead of arrays of structures, as sketched below.
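A quick illustration of that difference (the field and type names are invented):
/* Array of structures (AoS): the x, y, z of one item sit together. */
struct particle { double x, y, z; };
struct particle aos[1000];

/* Structure of arrays (SoA): all x values are contiguous, which is
   what you want when a loop sweeps over just one component. */
struct particles {
    double x[1000];
    double y[1000];
    double z[1000];
};
struct particles soa;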
In general (independent of the programming language that is used): having "consecutive" blocks of well, anything is often helpful.
The operating system, or even the hardware, might be able to benefit from having a single huge section of memory to deal with, compared to looking at 50 or 100 different locations.
A good starter for such discussions would be this question for example.
But I agree 100% with the other answer: unless you get massive performance gains out of using such techniques, you should always prefer to write "clean" (aka readable) code. And that translates to avoiding such practices.

Array notation vs pointer notation in C

Is there any advantage to using pointer notation over array notation? I realize that there may be some special cases where pointer notation is better, but it seems to me that array notation is clearer. My professor told us that he prefers pointer notation "because it's C", but it's not something he will be marking. And I know that there are differences with declaring strings as character arrays vs declaring a pointer as a string - I'm just talking about in general looping through an array.
If you write a straightforward loop, both array and pointer forms typically compile to the same machine code.
There are differences, especially with non-constant loop exit conditions, but they only matter if you are trying to optimize the loop for a specific compiler and architecture.
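For instance, these two functions typically compile to identical machine code at any reasonable optimization level (the names are mine, for illustration):
#include <stddef.h>

/* Array notation. */
double sum_array(const double *a, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Pointer notation; usually the same machine code as above. */
double sum_pointer(const double *a, size_t n)
{
    double s = 0.0;
    for (const double *p = a; p != a + n; p++)
        s += *p;
    return s;
}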
So, how about we consider a real world example that relies on both?
These types implement a double-precision floating-point matrix of dynamically determined size, with separate reference-counted data storage:
struct owner {
    long refcount;
    size_t size;
    double data[]; /* C99 flexible array member */
};

struct matrix {
    long rows;
    long cols;
    long rowstep;
    long colstep;
    double *origin;
    struct owner *owner;
};
The idea is that when you need a matrix, you describe it using a local variable of type struct matrix. All data referred to is stored in dynamically allocated struct owner structures, in the C99 flexible array member. After you no longer need the matrix, you must explicitly "drop" it. This allows multiple matrices to refer to the same data: you can even have separate row, column, or diagonal vectors, with any change to one immediately reflected in all the others (because they refer to the same data values).
When a matrix is associated with data, either by creating an empty matrix or by referring to data already referred to by another matrix, the owner structure's refcount is incremented. Whenever a matrix is dropped, the refcount of the owner structure it refers to is decremented. The owner structure is freed when the refcount drops to zero. This means you only need to remember to "drop" each matrix you used, and the data referred to will be released as soon as it is no longer needed, but never too early.
This all assumes a single-threaded process; multithreaded handling is quite a bit more complicated.
To access the element of struct matrix m at row r, column c, assuming 0 <= r < m.rows and 0 <= c < m.cols, you use m.origin[r*m.rowstep + c*m.colstep].
If you want to transpose a matrix, you simply swap m.rows and m.cols, and m.rowstep and m.colstep. All that changes, is the order in which the data (stored in the owner structure) is read.
(Note that origin points to the double which appears at row 0, column 0, in the matrix; and that rowstep and colstep can be negative. This allows all kinds of weird "views" to the otherwise dull regular data, like mirrors and diagonals and so on.)
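A minimal sketch of that transpose, assuming the struct matrix definition above (the function name is invented):
/* Transpose in O(1): no data moves, only the view changes. */
void matrix_transpose(struct matrix *m)
{
    long tmp;
    tmp = m->rows;    m->rows    = m->cols;    m->cols    = tmp;
    tmp = m->rowstep; m->rowstep = m->colstep; m->colstep = tmp;
}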
If we did not have the C99 flexible array member -- say, we only had pointers, and no array notation at all -- the owner structure's data member would have to be a pointer. That would mean an additional indirection at the hardware level (slowing down data accesses a bit). We would either need to allocate the memory pointed to by data separately, or use tricks to point to an address following the owner structure itself, suitably aligned for a double.
Multidimensional arrays do have their uses -- basically, when the sizes of all dimensions (or all but one dimension) are known -- and it's nice for the compiler to take care of the indexing, but that does not mean they are always easier than methods using pointers. For example, in the above matrix structure case, we can always define some helper preprocessor macros, like
#define MATRIXELEM(m, r, c) ((m).origin[(r)*(m).rowstep + (c)*(m).colstep])
which admittedly has the downside that it evaluates the first parameter, m, three times. (It means that MATRIXELEM(m++,0,0) would actually try to increment m three times.) In this particular case, m is normally a local variable of struct matrix type, which should minimize surprises. One could have e.g.
struct matrix m1, m2;
/* Stuff that initializes m1 and m2, and makes sure they point
   to valid matrix data */
MATRIXELEM(m1, 0, 0) = MATRIXELEM(m2, 0, 0);
The "extra" parentheses in such macros ensure that if you use a calculation, for example i + 4*j as row, the index calculation is correct ((i + 4*j)*m.rowstep and not i + 4*j*m.rowstep). In preprocessor macros, those parentheses are not really "extra" at all. In addition to ensure the correct calculation, having the "extra" parentheses also tell other programmers that the macro writer has been careful in avoiding such arithmetic-related bugs. (I for one consider it "good form" to put the parentheses there, even in cases where they are not needed for syntax unambiquity, if it conveys that "assurance" to other developers reading the code.)
And this, after all this text, leads to my most important point: Some things are easier expressed and understood by us human programmers using array notation than pointer notation, and vice versa. "Foo"[1] is pretty obviously equal to 'o', whereas *("Foo"+1) is not nearly as obvious. (Then again, neither is 1["Foo"], but you can blame the C standardization folks for that.)
Based on the examples above, I consider the two notations complementary; they do have a large overlap, especially in simple loops, in which case it is okay to just pick one. But being able to utilize both notations, and to pick one not based on one's proficiency in one of them but on one's opinion as to what makes the most sense for readability and maintainability, is an important skill for any C programmer, in my not very humble opinion.
Actually, if you pass an array argument to a function in C, you actually pass a pointer to its beginning. This doesn't really pass an array in the common sense: first, because passing an array would include passing its actual length, and second, because passing an array (as a value) would imply copying it.
In other words, you really pass an iterator pointing to the array's beginning (like std::vector::begin() in C++), but you pretend that you pass the array itself. It's very confusing, in fact. So using pointers represents what is really happening in a much clearer way, and it definitely should be preferred.
There may be some advantages to array notation too, but I don't think they outweigh the drawbacks mentioned. First, using array notation emphasizes the difference between a pointer to a single value and a pointer to a contiguous block. And then, you may specify the expected size of the passed array for your own reference. But that size isn't actually passed to expressions or functions, or checked in any way, which is very confusing -- see the sketch below.
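A small demonstration of that last point; the size written in the parameter declaration is documentation only, neither passed nor checked (the function name f is arbitrary):
#include <stdio.h>

/* All three declarations mean the same thing to the compiler:
   f receives a double*, not an array. The "[16]" is not checked. */
void f(double a[16]);
void f(double a[]);
void f(double *a);

void f(double *a) { printf("first element: %f\n", a[0]); }

int main(void)
{
    double small[4] = {1.0, 2.0, 3.0, 4.0};
    f(small);   /* compiles without complaint despite the "[16]" above */
    return 0;
}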

What are the penalties of generic functions versus function pointer arrays in C?

I'm writing a matrix library (part of SciRuby) with multiple storage types ('stypes') and multiple data types ('dtypes'). For example, a matrix's stype may currently be dense, yale (AKA 'csr'), or list-of-lists; and its dtype may be int8, int16, int32, int64, float32, float64, complex64, etc.
It's super easy to write a template processor in Ruby or sed which takes a basic function (like sparse matrix multiplication) and creates a custom version for each possible dtype. I could even write such a template to handle two different dtypes, say if we wanted to multiply an int32 by a float64.
The same can be done in certain cases for different stypes. Eventually, though, you could end up with a very large set of functions, many of which never even get used in the course of most people's use.
It's also easy to use function pointer arrays to enable access to these functions -- and imagining even a 3-dimensional function pointer array is not too hard:
MultFuncs[lhs->stype][lhs->dtype][rhs->dtype](lhs->shape[0], rhs->shape[1], lhs->data, rhs->data, result->data);
// This might point to some function like this:
// i32_f64_dense_mult(size_t, size_t, int32_t*, float64*, float64*);
The extreme alternative to function pointer arrays, of course, which would be incredibly complicated to code and maintain, is hierarchical switch or if/else statements:
switch(lhs->stype) {
case STYPE_SPARSE:
    switch(lhs->dtype) {
    case DTYPE_INT32:
        switch(rhs->dtype) {
        case DTYPE_FLOAT64:
            i32_f64_mult(lhs->shape[0], rhs->shape[1], lhs->ija, rhs->ija, lhs->a, rhs->a, result->data);
            break;
        // ... and so on ...
It also seems that this would be O(s·d²), where s = number of stypes and d = number of dtypes, for every operation, whereas the function pointer array would be O(r), where r = number of dimensions in the array.
But there's also a third option.
The third option is to use function pointer arrays for common operations (e.g., copying from one unknown type to another):
SetFuncs[lhs->dtype][rhs->dtype](5,   // copy five consecutive items
    &to,                              // destination
    dtype_sizeof[lhs->dtype],         // dtype_sizeof is a const size_t array giving sizeof(int8_t), sizeof(int16_t), etc.
    &from,                            // source
    dtype_sizeof[rhs->dtype]);
And then to call that from a generic sparse matrix multiplication function, which might be declared like this:
void generic_sparse_multiply(size_t* ija, size_t* ijb, void* a, void* b, int dtype_a, int dtype_b);
And that would use SetFuncs[dtype_a][dtype_b] to reference the correct assignment function, for example. The downside, then, is that you might have to implement a whole bunch of these -- IncrementFuncs, DecrementFuncs, MultFuncs, AddFuncs, SubFuncs, etc. -- because you'd never know what types to expect.
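A sketch of what such a table could look like; the enum values, function names, and the simplified signature here are all invented for illustration, not SciRuby's actual API:
#include <stddef.h>
#include <stdint.h>

enum { DTYPE_INT32, DTYPE_FLOAT64, NUM_DTYPES };

/* One copy/convert function per (destination, source) dtype pair. */
typedef void (*set_func)(size_t n, void *to, const void *from);

static void set_i32_from_f64(size_t n, void *to, const void *from)
{
    int32_t *t = to;
    const double *f = from;
    for (size_t i = 0; i < n; i++)
        t[i] = (int32_t)f[i];
}

static void set_f64_from_i32(size_t n, void *to, const void *from)
{
    double *t = to;
    const int32_t *f = from;
    for (size_t i = 0; i < n; i++)
        t[i] = f[i];
}

/* The 2-D table is indexed directly by the two dtype enums. */
static const set_func SetFuncs[NUM_DTYPES][NUM_DTYPES] = {
    [DTYPE_INT32][DTYPE_FLOAT64] = set_i32_from_f64,
    [DTYPE_FLOAT64][DTYPE_INT32] = set_f64_from_i32,
    /* remaining pairs omitted here */
};
A generic routine then dispatches with a single indexed load, e.g. SetFuncs[dtype_a][dtype_b](n, to, from);, instead of walking nested switches.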
So, finally, my questions:
1. What is the cost, if any, of having enormous multi-dimensional const arrays of function pointers? Large library or executable? Slow load time? etc.
2. Does use of generics like IncrementFuncs, SetFuncs, etc. (which all probably depend on memcpy or typecasts) present barriers to compile-time optimization?
3. If one were to use switch statements as described above, would these be optimized out by modern compilers? Or would they be evaluated every single time?
I realize this is an incredibly complicated array of questions.
If you can simply refer me to a good resource and prefer not to answer directly, that's perfectly fine. I used the Google extensively before posting this, but wasn't quite sure what search terms to use.
First of all, try to reduce the complexity of the function(s). You should be able to have a declaration like
Result_t function (Param_t*);
where Param_t is a struct containing all the things you pass around. To handle generic types, include an enum in the struct telling which type is used, and a void* to the data, as sketched below.
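A sketch of that declaration style (all names here are invented):
#include <stddef.h>

typedef enum { T_INT32, T_FLOAT64 } Dtype_t;

typedef struct {
    Dtype_t dtype;  /* tells which type 'data' points to */
    void   *data;   /* the actual payload */
    size_t  rows;
    size_t  cols;
} Param_t;

typedef int Result_t;

Result_t function(Param_t *p);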
1. What is the cost, if any, of having enormous multi-dimensional const arrays of function pointers? Large library or executable? Slow load time? etc.
Definitely a larger executable. Load time depends on what system the code is for. If it is for a RAM-based system (PC etc.), then the load time might increase, but it shouldn't have any major impact on performance. Though of course it depends on how large "enormous" is :)
2. Does use of generics like IncrementFuncs, SetFuncs, etc. (which all probably depend on memcpy or typecasts) present barriers to compile-time optimization?
Probably not; there's only so much the compiler can optimize. When dealing with generic data types in C, it often boils down to memcpy() in the end, which in itself hopefully is implemented to be as fast as copying gets.
3. If one were to use switch statements as described above, would these be optimized out by modern compilers? Or would they be evaluated every single time?
Ironically, the compiler would probably optimize it into something like an array of function pointers. However, the compiler likely cannot predict the nature of the data, especially if it gets set at runtime.

How to best sort a portion of a circular buffer?

I have a circular, statically allocated buffer in C, which I'm using as a queue for a breadth-first search. I'd like to have the top N elements in the queue sorted. It would be easy to just use a regular qsort() - except it's a circular buffer, and the top N elements might wrap around. I could, of course, write my own sorting implementation that uses modular arithmetic and knows how to wrap around the array, but I've always thought that while writing sorting functions is a good exercise, it is something better left to libraries.
I thought of several approaches:
1. Use a separate linear buffer - first copy the elements from the circular buffer, then apply qsort, then copy them back. Using an additional buffer means an additional O(N) space requirement, which brings me to
2. Sort the "top" and "bottom" halves using qsort, and then merge them using the additional buffer
3. Same as 2, but do the final merge in-place (I haven't found much on in-place merging, but the implementations I've seen don't seem worth the reduced space complexity)
On the other hand, spending an hour contemplating how to elegantly avoid writing my own quicksort, instead of adding those 25 (or so) lines might not be the most productive either...
Correction: Made a stupid mistake of switching DFS and BFS (I prefer writing a DFS, but in this particular case I have to use a BFS), sorry for the confusion.
Further description of the original problem:
I'm implementing a breadth first search (for something not unlike the fifteen puzzle, just more complicated, with about O(n^2) possible expansions in each state, instead of 4). The "bruteforce" algorithm is done, but it's "stupid" - at each point, it expands all valid states, in a hard-coded order. The queue is implemented as a circular buffer (unsigned queue[MAXLENGTH]), and it stores integer indices into a table of states. Apart from two simple functions to queue and dequeue an index, it has no encapsulation - it's just a simple, statically allocated array of unsigned's.
Now I want to add some heuristics. The first thing I want to try is to sort the expanded child states after expansion ("expand them in a better order") - just like I would if I were programming a simple best-first DFS. For this, I want to take part of the queue (representing the most recent expanded states), and sort them using some kind of heuristic. I could also expand the states in a different order (so in this case, it's not really important if I break the FIFO properties of the queue).
My goal is not to implement A*, or a depth first search based algorithm (I can't afford to expand all states, but if I don't, I'll start having problems with infinite cycles in the state space, so I'd have to use something like iterative deepening).
I think you need to take a big step back from the problem and try to solve it as a whole - chances are good that the semi-sorted circular buffer is not the best way to store your data. If it is, then you're already committed and you will have to adapt the buffer to sort the elements - whether that means performing an occasional sort with an outside library, or doing it when elements are inserted, I don't know. But at the end of the day it's going to be ugly, because a FIFO and a sorted buffer are fundamentally different.
Previous answer, which assumes your sort library has a robust and feature-filled API (as requested in your question, this does not require you to write your own mod sort or anything - it depends on the library supporting arbitrarily located data, usually through a callback function; if your sort doesn't support linked lists, it can't handle this):
The circular buffer has already solved this problem using % (mod) arithmetic. qsort() etc. don't care about the locations in memory - they just need a scheme to address the data in a linear manner.
They work as well for linked lists (which are not linear in memory) as they do for "real" linear, non-circular arrays.
So if you have a circular array with 100 entries, and you find you need to sort the top 10, and the top ten happen to wrap around the end of the array, then you feed the sort the following two bits of information:
- The function to locate an array item is (x % 100)
- The items to be sorted are at locations 95 to 105
The function will convert the addresses the sort uses into indices into the real array, and the fact that the array wraps around is hidden. Although it may look weird to sort an array past its bounds, a circular array, by definition, has no bounds. The % operator handles that for you, and you might as well be referring to that part of the array as 1295 to 1305 for all it cares.
Bonus points for having an array with 2^n elements.
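Standard qsort() cannot take such an addressing callback, but the idea is easy to demonstrate with a small hand-rolled sort over the mod-indexed view; this is a sketch under the assumptions above (100-entry buffer, region 95 to 105), with invented names:
#include <stddef.h>

#define BUFSIZE 100   /* with 2^n entries, the compiler turns % into a mask */

/* Insertion-sort the logical positions [first, last) of a circular
   buffer; the % operator hides the wrap-around entirely. */
void sort_circular(int buf[BUFSIZE], size_t first, size_t last)
{
    for (size_t i = first + 1; i < last; i++) {
        int key = buf[i % BUFSIZE];
        size_t j = i;
        while (j > first && buf[(j - 1) % BUFSIZE] > key) {
            buf[j % BUFSIZE] = buf[(j - 1) % BUFSIZE];
            j--;
        }
        buf[j % BUFSIZE] = key;
    }
}
Calling sort_circular(buf, 95, 105) sorts the ten elements even though they straddle the physical end of the array.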
Additional points of consideration:
It sounds to me that you're using a sorting library which is incapable of sorting anything other than a linear array - so it can't sort linked lists, or arrays with anything other than simple ordering. You really only have three choices:
You can re-write the library to be more flexible (ie, when you call it you give it a set of function pointers for comparison operations, and data access operations)
You can re-write your array so it somehow fits your existing libraries
You can write custom sorts for your particular solution.
Now, for my part I'd re-write the sort code so it was more flexible (or duplicate it and edit the new copy so you have sorts which are fast for linear arrays, and sorts which are flexible for non-linear arrays)
But the reality is that right now your sort library is so simple you can't even tell it how to access data that is non linearly stored.
If it's that simple, there should be no hesitation to adapting the library itself to your particular needs, or adapting your buffer to the library.
Trying an ugly kludge, like somehow turning your buffer into a linear array, sorting it, and then putting it back in is just that - an ugly kludge that you're going to have to understand and maintain later. You're going to 'break' into your FIFO and fiddle with the innards.
-Adam
I'm not seeing exactly the solution you asked for in C. You might consider one of these ideas:
If you have access to the source for your libc's qsort(), you might copy it and simply replace all the array access and indexing code with appropriately generalized equivalents. This gives you some modest assurance that the underlying sort is efficient and has few bugs. No help with the risk of introducing your own bugs, of course. Same big-O as the system qsort, but possibly with a worse multiplier.
If the region to be sorted is small compared to the size of the buffer, you could use the straight-ahead linear sort, guarding the call with a test for wrap-around and doing the copy-to-linear-buffer, sort, then copy-back routine only if needed. This introduces an extra O(n) operation in the cases that trip the guard (for n the size of the region to be sorted), which makes the average O(n²/N) < O(n).
I see that C++ is not an option for you. ::sigh:: I will leave this here in case someone else can use it.
If C++ is an option you could (subclass the buffer if needed and) overload the [] operator to make the standard sort algorithms work. Again, should work like the standard sort with a multiplier penalty.
Perhaps a priority queue could be adapted to solve your issue.
You could rotate the circular queue until the subset in question no longer wraps around. Then just pass that subset to qsort like normal. This might be expensive if you need to sort frequently or if the array element size is very large. But if your array elements are just pointers to other objects then rotating the queue may be fast enough. And in fact if they are just pointers then your first approach might also be fast enough: making a separate linear copy of a subset, sorting it, and writing the results back.
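A sketch of the copy-out variant of that first approach (the buffer size, element type, and function names here are illustrative):
#include <stdlib.h>

#define QSIZE 100

static int cmp_unsigned(const void *a, const void *b)
{
    unsigned x = *(const unsigned *)a;
    unsigned y = *(const unsigned *)b;
    return (x > y) - (x < y);
}

/* Copy the (possibly wrapping) region [first, first+n) into a
   scratch array, qsort it, and copy the result back. n <= QSIZE. */
void sort_region(unsigned queue[QSIZE], size_t first, size_t n)
{
    unsigned tmp[QSIZE];
    for (size_t i = 0; i < n; i++)
        tmp[i] = queue[(first + i) % QSIZE];
    qsort(tmp, n, sizeof tmp[0], cmp_unsigned);
    for (size_t i = 0; i < n; i++)
        queue[(first + i) % QSIZE] = tmp[i];
}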
Do you know about the rules regarding optimization? You can google them (you'll find a few versions, but they all say pretty much the same thing, DON'T).
It sounds like you are optimizing without testing. That's a huge no-no. On the other hand, you're using straight C, so you are probably on a restricted platform that requires some level of attention to speed, so I expect you need to skip the first two rules because I assume you have no choice:
Rules of optimization:
1. Don't optimize.
2. If you know what you are doing, see rule #1.
You can go on to the more advanced rules:
Rules of optimization (cont.):
3. If you have a spec that requires a certain level of performance, write the code unoptimized and write a test to see if it meets that spec. If it meets it, you're done. NEVER write code taking performance into consideration until you have reached this point.
4. If you complete step 3 and your code does not meet the specs, recode it, leaving your original "most obvious" code in there as comments, and retest. If it still does not meet the requirements, throw it away and use the unoptimized code.
5. If your improvements made the tests pass, ensure that the tests remain in the codebase and are re-run, and that your original code remains in there as comments.
Okay, so finally--I'm not saying this because I read it somewhere. I've spent DAYS trying to untangle some god-awful messes that other people coded because it was "Optimized"--and the really funny part is that 9 times out of 10, the compiler could have optimized it better than they did.
I realize that there are times when you will NEED to optimize, all I'm saying is write it unoptimized, test and recode it. It really won't take you much longer--might even make writing the optimized code easier.
The only reason I'm posting this is because almost every line you've written concerns performance, and I'm worried that the next person to see your code is going to be some poor sap like me.
How about something like the example below? It easily sorts a part, or whatever range you want, without needing much extra memory.
It takes only two pointers, a status bit, and a counter for the for loop.
#include <stdio.h>
#include <stdlib.h>

#define _PRINT_PROGRESS
#define N 10

typedef unsigned char BYTE;

BYTE buff[N] = {4,5,2,1,3,5,8,6,4,3};
BYTE *a = buff;
BYTE *b = buff;
BYTE changed = 0;

int main(void)
{
    BYTE n = 0;
    do
    {
        b++;                    /* b runs one element ahead of a */
        changed = 0;
        for (n = 0; n < (N-1); n++)
        {
            if (*a > *b)
            {
                /* XOR swap of *a and *b (a and b never alias here) */
                *a ^= *b;
                *b ^= *a;
                *a ^= *b;
                changed = 1;
            }
            a++;
            b++;
        }
        a = buff;
        b = buff;
#ifdef _PRINT_PROGRESS
        for (n = 0; n < N; n++)
            printf("%d", buff[n]);
        printf("\n");
#endif
    } while (changed);          /* repeat passes until nothing moved */

    system("pause");            /* Windows-specific; keeps console open */
    return 0;
}
