Copying array of structs with memcpy vs direct approach [duplicate] - c

This question already has an answer here:
memcpy vs assignment in C
(1 answer)
Closed 9 years ago.
Suppose pp is a pointer to an array of structs of length n. [was dynamically allocated]
Suppose I want to create a copy of that array of structs and make a pointer to it, the following way:
struct someStruct* pp2 = malloc(_appropriate_size_);
memcpy(pp2, pp, _appropriate_length_);
I can also make a loop and do pp2[i] = pp[i] for 0 <= i < n.
What is the difference between these approaches, and which is better and why?

There is no definitive answer for all architectures. You need to do profiling to figure out what method is best.
However, IMHO I would imagine that memcpy would be faster, simply because somebody has taken the time to tune it for your particular architecture/platform and is able to use particular nuances to speed things up.

The former uses identifiers that are reserved in C: names prefixed with _ are reserved for the implementation at file scope (here they are clearly placeholders, but avoid such names in real code). The latter relies on struct assignment, which has been standard since C89. Assuming neither of these factors causes issues, there is no functional difference.
Better isn't well defined. If you define "better" as more optimal, then there is no definitive answer, as one implementation may produce poor performance for both, while another may produce poor performance for one and perfectly optimal code for the other, or even the same code for both. This is pretty unlikely to be a bottleneck in your algorithm, anyway...

I would expect memcpy to be faster - it is usually tuned for the underlying platform and may even use DMA-initiated transfers (bypassing L1/L2 cache copies). The for-loop may involve extra transfers. However, it depends on how smart the underlying compiler is - if it spots a statically known value for n, it may replace the loop with a call to memcpy anyway. It is worth timing the routines or checking the assembly code, as Mystical mentioned.
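To make the two approaches concrete, here is a minimal sketch; the struct members are invented for illustration, and both copies end up with identical contents:

```c
#include <stdlib.h>
#include <string.h>

struct someStruct { int a; double b; };  /* members made up for illustration */

/* Copy n structs with one memcpy. Returns NULL on allocation failure. */
struct someStruct *copy_memcpy(const struct someStruct *pp, size_t n) {
    struct someStruct *pp2 = malloc(n * sizeof *pp2);
    if (pp2)
        memcpy(pp2, pp, n * sizeof *pp2);
    return pp2;
}

/* Copy n structs with element-wise struct assignment. */
struct someStruct *copy_loop(const struct someStruct *pp, size_t n) {
    struct someStruct *pp2 = malloc(n * sizeof *pp2);
    if (pp2)
        for (size_t i = 0; i < n; i++)
            pp2[i] = pp[i];
    return pp2;
}
```

Either way the caller owns the result and must free it; the observable contents are the same, so any difference is purely in generated code.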

Related

What is the difference between two different swapping functions?

I would like to know the performance difference between two pieces of code. What are the advantages and disadvantages?
Code 1:
temp = a;
a = b;
b = temp;
Code 2:
a = a + b;
b = a - b;
a = a - b;
The advantages of the first technique are that it is a universal idiom which is obvious and correct. It will work everywhere, on variables of any type. It is quite likely to be recognized by an optimizing compiler and replaced by an actual 'swap' instruction, if available. So besides being more clear and more correct, the first technique is likely to be more efficient, also.
The advantages of the second technique are that it avoids the use of a temporary variable, and that it is a deliciously obscure trick which is beloved by those who incessantly collect obscure tricks, and pose misguided "gotcha" interview questions involving obscure tricks, and (for all I know) who make their own programs less maintainable, less portable, and less reliable by cluttering them with obscure tricks.
The disadvantages of the first technique are: None.
(Theoretically, one might say there's a disadvantage in that it uses a temporary variable, but really, that's no disadvantage at all, because temporary variables are free. I don't think there's anyone on the planet who is still coding for a processor so limited in memory and registers that "saving" a temporary variable in this sort of way is something to actually worry about.)
The disadvantages of the second technique are that it is harder to write, harder for the reader to understand, and likely less efficient (perhaps significantly so). It "works" only on arithmetic types, not structures or other types. It won't work (it will quietly corrupt data) if it should happen to be used in an attempt to swap data with itself. (More on this possibility later.) And if those aren't all bad enough, it is likely to be fundamentally buggy even under "ordinary" circumstances, since it could overflow, and with floating-point types it could alter one or both values slightly due to roundoff error, and with pointer types it's undefined if the pointers being swapped do not point within the same object.
You asked specifically about performance, so let's say a few more words about that. (Disclaimer: I am not an expert on microoptimization; I tend to think about instruction-level performance in rather abstract, handwavey terms.)
The first technique uses three assignments. The second technique uses an addition and two subtractions. On many machines an arithmetic operation takes the same number of cycles as a simple value assignment, so in many cases the performance of the two techniques will be identical. But it's hard to imagine how the second technique could ever be more efficient, while it's easy to imagine how the first technique could be more efficient. In particular, as I mentioned already, the first technique is easier for a compiler to recognize and turn into a more-efficient SWP instruction, if the target processor has one.
And now, some digressions. The second technique as presented here is a less-delicious variant of the traditional, deliciously obscure trick for swapping two variables without using a temporary. The traditional, deliciously obscure trick for swapping two variables without using a temporary is:
a ^= b;
b ^= a;
a ^= b;
Once upon a time it was fashionable in some circles to render these techniques in an even more deliciously obscure way:
a ^= b ^= a ^= b; /* WRONG */
a += b -= a -= b; /* WRONG */
But these renditions (while, yes, being absolutely exquisitely deliciously obscure if you like that sort of thing) have the additional crashing disadvantage that they represent undefined behavior, since they try to modify a multiple times in the same expression without an intervening sequence point. (See also the canonical SO question on that topic.)
In fairness, I have to mention that there is one actual circumstance under which the first technique's use of a temporary variable can be a significant disadvantage, and the second technique's lack of one can therefore be an actual advantage. That one circumstance is if you are trying to write a generic 'swap' macro, along the lines of
#define Swap(a, b) (a = a + b, b = a - b, a = a - b)
The idea is that you can use this macro anywhere, and on variables of any type, and (since it's a macro, and therefore magic) you don't even have to use & on the arguments you call it with, as you would if it were a function. But in traditional C, at least, if you wanted to write a Swap macro like this, it was essentially impossible to do so using technique 1, because there was no way to declare the necessary temporary variable.
You weren't asking about this sub-problem, but since I brought it up, I have to say that the solution (although it is eternally frustrating to the lovers of delicious obscurity) is to just not attempt to write a "generic" macro to swap two values in the first place. You can't do it in C. (As a matter of fact, you could do it in C++, with the new definition of auto, and these days I guess C has some new way of writing generic macros, too.)
And there is actually an additional, crashing problem when you try to write a 'swap' macro this way, which is that it will not work — it will set one or both variables to 0 instead of swapping the values — if the caller ever tries to swap a value with itself. You might say that's not a problem, since maybe nobody would ever write Swap(x, x), but in a less-than-perfectly-optimal sorting routine they might very easily write Swap(a[i], a[j]) where sometimes i happened to be equal to j, or Swap(*p, *q) where sometimes the pointer p happened to be equal to q.
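The self-swap failure mode described above is easy to demonstrate. A minimal sketch (the Swap macro is the one from above; swap_at is a made-up helper standing in for a sort routine's inner swap):

```c
#define Swap(a, b) (a = a + b, b = a - b, a = a - b)

/* Swap a[i] and a[j] with the macro; return the value left at a[i].
   When i == j, both macro operands name the same object, so
   a = a + b doubles it and b = a - b then zeroes it. */
int swap_at(int *a, int i, int j) {
    Swap(a[i], a[j]);
    return a[i];
}
```

With distinct indices the macro swaps as expected; with i == j the element is silently zeroed, which is exactly the Swap(a[i], a[j])-when-i-equals-j hazard.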
See also the C FAQ List, questions 3.3b, 10.3 and 20.15c.
Always use the first one. The second one can introduce subtle bugs. If the variables are of type int and a+b is greater than INT_MAX then the addition will yield undefined behavior.
When it comes to performance, the difference is likely barely measurable.

Is dividing the task into functions beneficial or harmful? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I am working with embedded systems (mostly ARM Cortex M3/M4) using the C language, and I was wondering what the advantages/disadvantages are of dividing a task into many functions, such as going from;
void Handle_Something(void)
{
// do Task-1
// do Task-2
// do Task-3
//etc.
}
to
void Handle_Something(void)
{
// Handle_Task1();
// Handle_Task2();
//etc.
}
How can these two approaches be examined with respect to stack usage and overall processing speed, and which is safer/better, and for what reason? (You can assume this is outside of an ISR.)
From what I know, memory on the stack is allocated/deallocated for local variables in each call/return cycle, so dividing the task seems reasonable in terms of memory usage. But when doing this, I sometimes get HardFaults from different sources (mostly bus or undefined-instruction errors) whose cause I couldn't figure out.
Also, working speed is very crucial for many applications in my field, so I do need to know which method provides faster responses.
I would appreciate some enlightenment. Thanks everybody in advance.
This is what's known as "premature optimization".
In the old days when compilers were horrible, they couldn't inline functions by themselves. So a keyword inline was added to C - similar non-standard versions also existed before the year 1999. It was used to tell a bad compiler how it should generate code better.
Nowadays this is mostly history. Compilers are better than programmers at determining what and when to inline. They may, however, struggle when a called function is located in a different "translation unit" (basically, in a different .c file). But in your case I take it this is not the case; Handle_Task1() etc. can be regarded as functions in the same file.
With the above in mind:
How can these two approaches be examined with respect to stack usage and overall processing speed
They are to be regarded as identical. They use the same stack space and take the same time to execute.
Unless you have a bad, older compiler - in which case function calls always take extra space and execution time. Since you are working with modern MCUs, this should not be the case, or you desperately need a better compiler.
As a rule of thumb, it is always better practice to split up larger functions in several smaller, for the sake of readability and maintenance. Even in hard real-time systems, there exist very few cases where function call overhead is an actual bottleneck even when bad compilers are used.
Memory on the stack isn't allocated/deallocated from some complex memory pool. The stack pointer is simply increased/decreased - an operation that is basically free in all but the tightest/smallest loops imaginable (and those will probably be optimized by the compiler).
Don't group together functions because they could reuse variables e.g. don't create a bunch of int tempInt; long tempLong; variables you use throughout your entire program. A variable should serve only a single purpose and its scope should be kept as tight as possible. Also see: is it good or bad to reuse the variables?
Expanding on that, keeping the scope of all variables as local as possible might even cause your compiler to keep the variables in a cpu register only. A shortly used variable might actually never be allocated!
Try to limit functions to only a singly purpose and try avoiding side effects: if you avoid global variables a function becomes easier to test, optimize and understand as each time you call it with the exact same set of arguments it will preform the exact same action. Have a look at: Why are global variables bad, in a single threaded, non-os, embedded application
Each solution has advantages and disadvantages.
The first approach allows the code to execute (a priori) faster, because the generated assembly won't contain call/jump instructions. However, you have to take readability into account: mixing different kinds of functionality in the same function (or creating large functions) is not a good idea from a coding-guidelines point of view.
The second solution can be easier to understand, because each function contains a single simple task, and it is easier to document (that is, you don't have to explain different "purposes" in the same function). As I said, this solution is slower because your "scheduler" contains calls; nevertheless you can declare the simple tasks as inline, and provided you split the code into simple, well-documented tasks, the compiler will generate the same assembly as the first approach, that is, avoiding the jumps.
Another point is memory use. If your simple tasks are called from different parts of the code, the first solution, and the second solution with inline, are worse (in terms of code size) than the second solution without inline, because an inlined function body is duplicated at every call site.
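The split-with-inlining approach discussed above can be sketched as follows; the task bodies and the counter are invented purely for illustration. With any modern compiler at -O1 or above, calls like these are typically inlined, so the split version costs the same stack space and time as the monolithic one:

```c
static int counter;  /* stands in for whatever state the tasks touch */

/* Small, single-purpose helpers; static inline lets the compiler
   fold them into the caller with no call overhead. */
static inline void Handle_Task1(void) { counter += 1; }
static inline void Handle_Task2(void) { counter += 2; }

void Handle_Something(void) {
    Handle_Task1();
    Handle_Task2();
}

int get_counter(void) { return counter; }
```

Readability and testability improve, while the generated code is usually identical to writing both task bodies inline in Handle_Something.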
Working with modules is always more efficient in terms of error handling, debugging and re-reading. Consider some heavyweight libraries (SLAM, PCL etc.): they are used as external functions and they don't cause a significant loss of performance (tbh, sometimes it's almost impossible to embed such large functions into your code). You may face slightly higher stack use, as @Colin commented.

C performance and choice of functions parameters

I'm making a little bignum library as an exercise (it is my first little project; I'm a newbie). I'm using the C language.
I defined a structure number in this way:
typedef struct number {
    char *mantissa;
    long exponent;
    enumSign sign;
} number;
(I'm including the structure because I don't know if size matters here),
and some functions to do basic arithmetic operations.
My question is:
Is it more efficient if I use:
number do_sum(number n, number q)
or,
void do_sum(number *n, number *q, number *result)
?
I tried to record the time of execution in both cases (the functions being almost identical) but the results were not consistent.
Could you please explain also what happens in both cases?
In order to find the answer to questions of this kind, you need to
Learn how to view the assembly output of your compiler in each case. All decent C compilers have an option to produce assembly output. If you do that, you will see that one version is considerably more complicated than the other, both for the caller and for the callee. The version which is more complicated is the one returning a struct. And greater complexity usually (but not always) means worse performance.
Use a profiler to try each case and see which one performs better. Spoiler: the version that returns a struct will perform worse than the version that accepts a pointer to an existing struct.
Performance notwithstanding, C is quite fast either way, so I would recommend that you do not pay so much attention to performance while learning to code. The kind of performance that tends to matter is algorithmic performance, which is language agnostic. The performance of individual low level operations hardly ever matters much.
In general, unless the particular implementation calls for otherwise, it is preferred to pass pointers to large structs rather than structs themselves.
In case you are passing the whole struct into the function, a copy of it is created in memory, which takes time (and RAM space). On the other hand, if you are passing a pointer, only an integer that contains an address gets copied. It probably does not matter for a struct of your size, because it's so small (hence no observed difference), but you may see a difference when handling large structs on slow embedded systems.
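The two signatures from the question can be sketched side by side. This is a minimal sketch with a simplified stand-in for the asker's number type (the mantissa pointer and the sign enum are reduced to keep it self-contained, and the "sum" body is a placeholder):

```c
/* Simplified stand-in for the question's number struct. */
typedef struct number {
    long exponent;
    int sign;   /* stands in for enumSign */
} number;

/* Version 1: pass and return by value - whole structs are copied
   at the call boundary. */
number do_sum_byval(number n, number q) {
    number r;
    r.exponent = n.exponent + q.exponent;  /* placeholder arithmetic */
    r.sign = n.sign;
    return r;
}

/* Version 2: pass pointers - only addresses cross the call boundary,
   and the result is written into caller-owned storage. */
void do_sum_byptr(const number *n, const number *q, number *result) {
    result->exponent = n->exponent + q->exponent;
    result->sign = n->sign;
}
```

Both compute the same result; the difference is only in how much data is copied per call, which is what the answers above are weighing.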

Compiler behavior?

I am reviewing some source code and I was wondering if the following is thread-safe. I have heard of compiler and CPU instruction/read reordering (would it have something to do with branch prediction?), and the Data->unsafe_variable variable below can be modified at any time by another thread.
My question is: depending on how the compiler/CPU reorders reads/writes, would it be possible for the code below to fetch Data->unsafe_variable twice? (see 2nd snippet)
Note: I do not worry about the first access - any data can be there as long as it does not pass the 'if'. I am just concerned by the possibility that the data would be fetched another time after the 'if'. I was also wondering if the cast to volatile here would help prevent a double fetch?
int function(void* Data) {
    // Data is allocated on the heap
    // What it contains at this point is not important
    size_t _varSize = ((volatile DATA *)Data)->unsafe_variable;
    if (_varSize > x * y)
    {
        return FALSE;
    }
    // I do not want Data->unsafe_variable to be fetched once this point is reached,
    // I want to use the value "supposedly" stored in _varSize
    // Would any compiler/CPU reordering allow it to be double fetched?
    size_t size = _varSize - t * q;
    function_xy(size);
    return TRUE;
}
Basically I do not want the program to behave like this for security reasons:
_varSize = ((volatile DATA *)Data)->unsafe_variable;
if (_varSize > x * y)
{
    return FALSE;
}
size_t size = ((volatile DATA *)Data)->unsafe_variable - t * q;
function10(size);
I am simplifying here, and they cannot use a mutex. However, would it be safer to use _ReadWriteBarrier() or MemoryBarrier() after the first line instead of a volatile cast? (VS compiler)
Edit: Giving slightly more context to the code.
The code is broken for many reasons. I'll just point out one of the more subtle ones as others have pointed out the more obvious ones. The object is not volatile. Casting a pointer to a pointer to a volatile object doesn't make the object volatile, it just lies to the compiler.
But there's a much bigger point -- you are going about this totally the wrong way. You are supposed to be checking whether the code is correct, that is, whether it is guaranteed to work. You aren't clever enough, nobody is, to think of every possible way the system might fail to do what you assume it will do. So instead, just don't make those assumptions.
Thinking about things like CPU read re-ordering is totally wrong. You should expect the CPU to do what, and only what, it is required to do. You should definitely not think about specific mechanisms by which it might fail, but only whether it is guaranteed to work.
What you are doing is like trying to figure out if an employee is guaranteed to show up for work by checking if he had his flu shot, checking if he is still alive, and so on. You can't check for, or even think of, every possible way he might fail to show up. So if you find that you have to check those kinds of things, then it's not guaranteed, and relying on it is broken. Period.
You cannot make reliable code by saying "the CPU doesn't do anything that can break this, so it's okay". You can make reliable code by saying "I make sure my code doesn't rely on anything that isn't guaranteed by the relevant standards."
You are provided with all the tools you need to do the job, including memory barriers, atomic operations, mutexes, and so on. Please use them.
You are not clever enough to think of every way something not guaranteed to work might fail. And you have a plethora of things that are guaranteed to work. Fix this code, and if possible, have a talk with the person who wrote it about using proper synchronization.
This sounds a bit ranty, and I apologize for that. But I've seen too much code that used "tricks" like this that worked perfectly on the test machines but then broke when a new CPU came out, a new compiler, or a new version of the OS. Fixing code like this can be an incredible pain because these hacks hide the actual synchronization requirements. The right answer is almost always to code clearly and precisely what you actually want, rather than to assume that you'll get it because you don't know of any reason you won't.
This is valuable advice from painful experience.
The standard(s) are clear. If any thread may be modifying the object, all accesses, in all threads, must be synchronized, or you have undefined behavior.
The only portable solution for C++ is C++11 atomics, available in the upcoming VS 2012.
As for C, I do not know if recent C standards bring portable facilities - I am not following that - but as you are using Visual Studio, it does not matter anyway, as Microsoft is not implementing recent C standards.
Still, if you know you are developing for Visual Studio, you can rely on guarantees provided by this compiler, which apply to both C and C++. Some of them are implicit (accessing volatile variables also implies certain memory barriers), some are explicit, like the _MemoryBarrier intrinsic.
The whole topic of the memory model is discussed in depth in Lockless Programming Considerations for Xbox 360 and Microsoft Windows, this should give you a good overview. Beware: the topic you are entering is full of hard topics and nasty surprises.
Note: Relying on volatile is not portable, but if you are using old C / C++ standards, there is no portable solution anyway; therefore, be prepared to face reimplementing this for a different platform should the need ever arise. When writing portable threaded code, volatile is considered almost useless:
For multi-threaded programming, there are two key issues that volatile is often mistakenly thought to address:
atomicity
memory consistency, i.e. the order of a thread's operations as seen by another thread.
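For completeness: under C11 the single-fetch requirement can be expressed portably with atomics, by loading the shared field exactly once into a local and using only that local afterwards. A minimal sketch - the DATA layout, the x/y bound and the function name are assumptions standing in for the original code's context:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    atomic_size_t unsafe_variable;  /* may be written by another thread */
} DATA;

/* Returns false if the shared value is out of range. The shared field
   is read exactly once; every later use goes through the local
   snapshot, which the compiler cannot legally replace with a re-read. */
bool check_and_use(DATA *d, size_t x, size_t y, size_t *out) {
    size_t snapshot = atomic_load(&d->unsafe_variable);  /* single fetch */
    if (snapshot > x * y)
        return false;
    *out = snapshot;
    return true;
}
```

Unlike the volatile cast, this is defined behavior under the C11 memory model even with concurrent writers, which is the guarantee the answers above are insisting on.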

Micro-optimizations in C: which ones are there? Are any really useful? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I understand most of the micro-optimizations out there but are they really useful?
Exempli gratia: does doing ++i instead of i++, or using while(1) versus for(;;), really result in performance improvements (either in memory footprint or CPU cycles)?
So the question is, what micro-optimizations can be done in C? Are they really useful?
You should rely on your compiler to optimise this stuff. Concentrate on using appropriate algorithms and writing reliable, readable and maintainable code.
The day tclhttpd, a webserver written in Tcl, one of the slowest scripting languages, managed to outperform Apache, a webserver written in C, one of the supposedly fastest compiled languages, was the day I was convinced that micro-optimizations pale in comparison to using a faster algorithm/technique*.
Never worry about micro-optimizations until you can prove in a debugger that it is the problem. Even then, I would recommend first coming here to SO and ask if it is a good idea hoping someone would convince you not to do it.
It is counter-intuitive, but very often code, especially tight nested loops or recursion, is optimized by adding code rather than removing it. The gaming industry has come up with countless tricks to speed up nested loops, using filters to avoid unnecessary processing. Those filters add significantly more instructions than the difference between i++ and ++i.
*note: We have learned a lot since then. The realization that a slow scripting language can outperform compiled machine code because spawning threads is expensive led to the developments of lighttpd, NginX and Apache2.
There's a difference, I think, between a micro-optimization, a trick, and alternative means of doing something. It can be a micro-optimization to use ++i instead of i++, though I would think of it as merely avoiding a pessimization, because when you pre-increment (or decrement) the compiler need not insert code to keep track of the current value of the variable for use in the expression. If using pre-increment/decrement doesn't change the semantics of the expression, then you should use it and avoid the overhead.
A trick, on the other hand, is code that uses a non-obvious mechanism to achieve a result faster than a straight-forward mechanism would. Tricks should be avoided unless absolutely needed. Gaining a small percentage of speed-up is generally not worth the damage to code readability unless that small percentage reflects a meaningful amount of time. Extremely long-running programs, especially calculation-heavy ones, or real-time programs are often candidates for tricks because the amount of time saved may be necessary to meet the systems performance goals. Tricks should be clearly documented if used.
Alternatives, are just that. There may be no performance gain or little; they just represent two different ways of expressing the same intent. The compiler may even produce the same code. In this case, choose the most readable expression. I would say to do so even if it results in some performance loss (though see the preceding paragraph).
I think you do not need to think about these micro-optimizations, because most of them are done by the compiler. These things only make code more difficult to read.
Remember: premature optimization is the root of all evil.
To be honest, that question, while valid, is not relevant today - why?
Compiler writers are a lot smarter than they were 20 years ago. Rewind back in time and these optimizations would have been very relevant: we were all working with old 80286/386 processors, and coders would often resort to tricks to squeeze even more bytes out of the compiled code.
Today, processors are fast and compiler writers know the intimate details of the instruction sets to make everything work, taking advantage of pipelining, multiple cores and acres of RAM. Remember, with an 80386 processor there would be 4MB of RAM, and if you were lucky, 8MB was considered superior!
The paradigm has shifted: it used to be about squeezing every byte out of compiled code; now it is more about programmer productivity and getting the release out the door sooner.
In the above I was describing the nature of the processors and compilers of that era: the Intel 80x86 processor family and Borland/Microsoft compilers.
If you can easily see that two different code sequences produce identical results, without making assumptions about the data other than what's present in the code, then the compiler can too, and generally will.
It's only when the transformation from one to the other is highly non-obvious, or requires assuming something that you may know to be true but the compiler has no way to infer (e.g. that an operation cannot overflow, or that two pointers will never alias even though they aren't declared with the restrict keyword), that you should spend time thinking about these things. Even then, the best thing to do is usually to find a way to inform the compiler about the assumptions that it can make.
If you do find specific cases where the compiler misses simple transformations, 99% of the time you should just file a bug against the compiler and get on with working on more important things.
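A small sketch of "informing the compiler" rather than hand-optimizing: with restrict, the compiler is told the two pointers cannot alias, which makes the loop below eligible for vectorization without any source-level trickery (the function name is made up for illustration):

```c
#include <stddef.h>

/* Without restrict, the compiler must assume dst and src might overlap
   and generate conservative element-by-element code. With restrict,
   it may load, add and store several elements at a time. */
void add_arrays(float *restrict dst, const float *restrict src, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] += src[i];
}
```

The behavior is unchanged; only the compiler's freedom to optimize grows. (Calling it with overlapping pointers would, of course, be undefined.)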
Keeping the fact that memory is the new disk in mind will likely improve your performance far more than applying any of those micro-optimizations.
For a slightly more pragmatic take on the question of ++i vs. i++ (at least in a C++ context) see http://llvm.org/docs/CodingStandards.html#micro_preincrement.
If Chris Lattner says it, I've got to pay attention. ;-)
You would do better to consider every program you write primarily as a language in which you communicate your ideas, intentions and reasoning to other human beings who will have to bug-fix, reuse and understand it. They will spend more time on decoding garbled code than any compiler or runtime system will do executing it.
To summarise, say what you mean in the clearest way, using the common idioms of the language in question.
For these specific examples in C, for(;;) is the idiom for an infinite loop and "i++" is the usual idiom for "add one to i" unless you use the value in an expression, in which case it depends whether the value with the clearest meaning is the one before or after the increment.
Here's real optimization, in my experience.
Someone on SO once remarked that micro-optimization was like "getting a haircut to lose weight". On American TV there is a show called "The Biggest Loser" where obese people compete to lose weight. If they were able to get their body weight down to a few grams, then getting a haircut would help.
Maybe that's overstating the analogy to micro-optimization, because I have seen (and written) code where micro-optimization actually did make a difference, but when starting off there is a lot more to be gained by simply not solving problems you don't have.
x ^= y;
y ^= x;
x ^= y;
++i should be preferred over i++ in situations where you don't use the return value, because it better represents the semantics of what you are trying to do (increment i) rather than any possible optimisation (it might be slightly faster, and is probably not worse).
Generally, loops that count towards zero are faster than loops that count towards some other number. I can imagine a situation where the compiler can't make this optimization for you, but you can make it yourself.
Say that you have an array of length x, where x is some very big number, and you need to perform some operation on each element. Further, let's say that you don't care what order these operations occur in. You might do this...
int i;
for (i = 0; i < x; i++)
doStuff(array[i]);
But, you could get a little optimization by doing it this way instead -
int i;
for (i = x-1; i != 0; i--)
{
doStuff(array[i]);
}
doStuff(array[0]);
The compiler doesn't do it for you because it can't assume that order is unimportant.
MaR's example code is better. Consider this, assuming doStuff() returns an int:
int i = x;
while (i != 0)
{
--i;
printf("%d\n",doStuff(array[i]));
}
This is ok as long as printing the array contents in reverse order is acceptable, but the compiler can't decide that for you.
Whether this is an optimization is hardware-dependent. From what I remember about writing assembler (many, many years ago), counting up rather than counting down to zero requires an extra machine instruction each time you go through the loop.
If your test is something like (x < y), then evaluation of the test goes something like this:
subtract y from x, storing the result in some register r1
test r1, to set the n and z flags
branch based on the values of the n and z flags
If your test is ( x != 0), you can do this:
test x, to set the z flag
branch based on the value of the z flag
You get to skip a subtract instruction for each iteration.
On many architectures, including x86, the subtract or decrement instruction itself sets the flags based on its result (e.g. dec sets the zero flag), so the compiler can branch directly after the decrement and skip the separate compare entirely - which is exactly why counting down to zero can be cheaper.