Is this type of expression in C valid (on all compilers)?
If it is, is this good C?
char cloneString (char clone[], char string[], int Length)
{
if(!Length)
Length=64
;
int i = 0
;
while((clone[i++] = string[i]) != '\0', --Length)
;
clone[i] = '\0';
printf("cloneString = %s\n", clone);
};
Would this be better, worse, indifferent?
char *cloneString (char clone[], char string[], int Length)
{
if(!Length)
Length=STRING_LENGTH
;
char *r = clone
;
while
( //(clone[i++] = string[i]) != '\0'
*clone++ = *string++
, --Length
);
*clone = '\0';
return clone = r
;
printf("cloneString = %s\n", clone);
};
Stack Overflow wants me to add more text to this question! Okay! I'm concerned about:
a.) expressions such as c==(a=b)
b.) performance between indexing vs pointer
Any comments?
Thanks so much.
Yes, it's syntactically valid on all compilers (though semantically valid on none), and no, it isn't considered good C. Most developers will agree that the comma operator is a bad thing, and most will agree that a single line of code should do only one specific thing. This while loop does four, and has undefined behavior:
it increments i;
it assigns string[i] to clone[i++] (undefined behavior: the increment of i and the read of i in string[i] are unsequenced relative to each other);
it checks that string[i] isn't 0 (but discards the result of the comparison);
it decrements Length, and terminates the loop if Length == 0 after being decremented.
Not to mention that assuming that Length is 64 if it wasn't provided is a terrible idea and leaves plenty of room for more undefined behavior that can easily be exploited to crash or hack the program.
I see that you wrote it yourself and that you're concerned about performance, and this is apparently the reason you're sticking everything together. Don't. Code made short by squeezing statements together isn't faster than longer code that keeps the statements separate. It still does the same number of things. In your case, you're introducing bugs by squeezing things together.
The code has Undefined Behavior:
The expression
(clone[i++] = string[i])
both modifies and accesses the object i from two different subexpressions in an unsequenced way, which is not allowed. A compiler might use the old value of i in string[i], or might use the new value of i, or might do something entirely different and unexpected.
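A minimal rework that keeps the copy-and-test structure but sequences things properly might look like this (a sketch, not a drop-in replacement for the whole function):
/* i is modified in its own statement, so nothing is unsequenced,
   and the loop still stops at the terminator or after Length characters. */
int i = 0;
while (Length-- && (clone[i] = string[i]) != '\0')
    i++;
clone[i] = '\0';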
Simple answer: no.
Why return char when the function has no return statement?
Why 64?
I assume that the two arrays are of length Length - add documentation to say this.
Why the ; on a new line and not after the statement?
...
Ok, so I decided to evolve my comments into an actual answer. Although this doesn't address the specific piece of code in your question, it answers the underlying issue, and I think you will find it illuminating, since you can apply this guide to your programming in general.
What I advocate, especially if you are just learning programming, is to focus on readability instead of small gimmicks that you think (or were told) improve speed or performance.
Let’s take a simple example. The idiomatic way to iterate through a vector in C (not in C++) is using indexing:
int i;
for (i = 0; i < size; ++i) {
v[i] = /* code */;
}
I was told when I started programming that v[i] is actually computed as *(v + i), so in generated assembly this breaks down into (please note that this discussion is simplified):
multiply i by sizeof(int)
add that result to the address of v
access the element at this computed address
So basically you have 3 operations.
Let’s compare this with accessing via pointers:
int *p;
for (p = v; p != v + size; ++p) {
*p = /*..*/;
}
This has the advantage that *p actually expands to just one instruction:
access the element at the address p.
Two extra instructions don't seem like much, but if your program spends most of its time in this loop (either an extremely large size or many calls to the functions containing the loop), you realise that the second version makes your program almost 3 times faster. That is a lot. So if you are like me when I started, you will choose the second variant. Don't!
So the first version has readability (you explicitly describe that you access the i-th element of vector v), while the second uses a gimmick to the detriment of readability (you merely say that you access a memory location). Now this might not be the best example of unreadable code, but the principle is valid.
So why do I tell you to use the first version: until you have a firm grasp of concepts like caches, branching, induction variables (and a lot more) and how they apply in real-world compilers and program performance, you should steer clear of these gimmicks and rely on the compiler to do the optimizations. Compilers are very smart and will generate the same code for both variants (with optimization enabled, of course). So the second variant differs only in readability and is identical performance-wise to the first.
Another example:
const char * str = "Some string";
int i;
// variant 1:
for (i = 0; i < strlen(str); ++i) {
// code
}
// variant 2:
int l = strlen(str);
for (i = 0; i < l; ++i) {
// code
}
The natural way would be to write the first variant. You might think that the second improves performance because you call the function strlen on each iteration of the loop. You know that getting the length of a string means iterating through the whole string until you reach the end, so a call to strlen effectively adds an inner loop. Ouch, that has to slow the program down. Not necessarily: the compiler can hoist the call out of the loop, because it always produces the same result. Actually, you can even do harm by introducing a new variable, which has to be assigned a register from a very limited register pool (a slightly extreme example, but there is nevertheless a point to be made here).
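To see when the hoist fails, consider a variant where the buffer is writable (a sketch; the truncation is just for illustration): because the body can store a '\0' into the string, strlen(str) really can change between iterations, and the compiler must re-evaluate it each time.
char str[] = "Some string";   /* writable, unlike the const version above */
size_t i;
for (i = 0; i < strlen(str); ++i) {
    if (str[i] == ' ')
        str[i] = '\0';        /* truncates the string: its length changes */
}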
Don’t spend your energy on things like this until much later.
Let me show you something else that will further illustrate that any assumptions you make about performance will most likely be false and misleading (I am not trying to tell you that you are a bad programmer, far from it, just that as you learn, you should invest your energy in something other than performance):
Let’s multiply two matrices:
for (k = 0; k < n; ++k) {
for (i = 0; i < n; ++i) {
for (j = 0; j < n; ++j) {
r[i][j] += a[i][k] * b[k][j];
}
}
}
versus
for (k = 0; k < n; ++k) {
for (j = 0; j < n; ++j) {
for (i = 0; i < n; ++i) {
r[i][j] += a[i][k] * b[k][j];
}
}
}
The only difference between the two is the order in which the operations get executed. They are the exact same operations (number, kind and operands), just in a different order. The result is equivalent (addition is commutative), so on paper they should take the EXACT same amount of time to execute. In practice, even with optimizations enabled (some very smart compilers can, however, reorder the loops), the second example can be up to 2-3 times slower than the first: in the first version the innermost loop walks r and b along contiguous rows, which the cache rewards, while in the second version the innermost loop jumps a whole row ahead on every step. And even the first variant is still a long, long way from being optimal (in regards to speed).
So basic point: worry about UB as the other answers show you, don’t worry about performance at this stage.
The second block of code is better.
The line
printf("cloneString = %s\n", clone);
there will never get executed, since there is a return statement before it.
To make your code a bit more readable, change
while
(
*clone++ = *string++
, --Length
);
to
while ( Length > 0 )
{
*clone++ = *string++;
--Length;
}
This is probably a better approach to your problem:
#include <stdio.h>
#include <string.h>
void cloneString(char *clone, char *string)
{
    size_t len = strlen(string);
    for (size_t i = 0; i != len; i++)
        clone[i] = string[i];
    clone[len] = '\0';   /* terminate the copy before printing it */
    printf("Clone string: %s\n", clone);
}
That being said, there's already a standard function to do that:
char *strncpy(char *dest, const char *source, size_t n)
dest is the destination string, and source is the string to be copied. The function copies at most n characters, and note that it does not add a terminating '\0' of its own if source contains n or more characters.
So, your code will be:
#include <stdio.h>
#include <string.h>
void cloneString(char *clone, char *string)
{
strncpy(clone, string, strlen(string) + 1); /* +1 so the '\0' is copied too */
printf("Clone string: %s\n", clone);
}
I have an array of ints and I'd like to set all values in the array to 'x' every time a function is called.
I've looked at memset, but that would only work for an array of bytes, I think.
I could do the obvious for loop, but I'm guessing there is a standard lib function out there that will accomplish this better. Anyone know?
Just loop it, pretty much. Or memset to 0, if you know the value is zero (and similarly for other values whose bit representation you know). There won't be a standard lib solution, since the standard lib can't know of particular user types.
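For completeness, the loop is short enough anyway; a sketch (the helper name fill_int_array is mine):
#include <stddef.h>

/* Set every element of an int array to the same value. */
void fill_int_array(int *a, size_t n, int x)
{
    for (size_t i = 0; i < n; i++)
        a[i] = x;
}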
If you're on an x86 system, you can use some assembly for this. For instance, in gcc:
__asm__ __volatile__(
    "rep stosb"
    : "+c"(count), "+D"(array)   /* rep stosb consumes ECX/EDI */
    : "a"('x')                   /* the fill byte goes in AL */
    : "memory"
);
Should do the trick.
rep stosb takes the value in AL and assigns it to consecutive memory locations pointed to by ES:EDI. The number of the locations is specified in ECX.
As an aside, in recent processors Intel has made many efforts to improve the performance of MOVSB and STOSB, so this is a good way to go about it.
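If you do go this route, it is safer to wrap the asm in a small function so the constraints stay honest (a sketch for GCC/Clang on x86; fill_bytes is a name I made up):
#include <stddef.h>

/* Fill n bytes at dst with value using rep stosb.
   "+c"/"+D" mark ECX/EDI (RCX/RDI) as read-write, since rep stosb
   consumes both; the fill byte is passed in AL via the "a" constraint. */
static void fill_bytes(void *dst, unsigned char value, size_t n)
{
    __asm__ __volatile__(
        "rep stosb"
        : "+c"(n), "+D"(dst)
        : "a"(value)
        : "memory");
}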
In addition to memset and looping (which are both O(n) time), it can actually be done in O(1), but at the cost of triple the amount of memory and more expensive look-ups later on.
This article describes how it can be done.
The idea is to maintain an additional stack (logically; implemented as an array plus a pointer to the top) and an additional array. The additional array records, for each element, the position in the stack at which it was first written (a number from 0 to n), and the stack records which elements have already been written.
When you access array[i]: if additionalArray[i] < top && stack[additionalArray[i]] == i, the value is array[i]; otherwise it is the "initialized" (default) value.
When doing array[i] = x, if the element was not written yet (as checked above), you set additionalArray[i] = top, stack[top] = i, and increase top.
This results in O(1) initialization, but as said, it requires additional memory, and each access is more expensive.
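A compact C sketch of the scheme (all names are mine, not from the article):
#define N 1000
#define DEFAULT_VAL 42          /* the value every slot "starts" with  */

static int data[N];             /* the actual values                   */
static size_t from[N];          /* from[i]: position of i in the stack */
static size_t stack[N];         /* indices that were really written    */
static size_t top = 0;

/* An entry counts as written only if the stack confirms it. */
static int is_set(size_t i)
{
    return from[i] < top && stack[from[i]] == i;
}

static int get(size_t i)
{
    return is_set(i) ? data[i] : DEFAULT_VAL;
}

static void set(size_t i, int x)
{
    if (!is_set(i)) {
        from[i] = top;
        stack[top++] = i;
    }
    data[i] = x;
}
(These arrays are static here, so the reads are well-defined in C; the real point of the trick is that it works even over genuinely uninitialized memory.)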
The logic below will help you.
...
int a[100] = {0};
int b = 5;
memset_ex(a, sizeof(a), &b, sizeof(int)); /* pass the size in bytes */
...
void memset_ex(void *buf, int buf_size, void *value, int size_of_type)
{
    int i = 0;
    /* copy the pattern into each slot; buf_size is in bytes */
    for (i = 0; i <= (buf_size - size_of_type); i += size_of_type)
    {
        memcpy((char *)buf + i, value, size_of_type);
    }
}
I'm learning C and saw the first loop listed below in the book I'm reading. I'm curious what the difference is between the two, as I am used to using the second one and can't figure out the difference, even though they return different results.
for(i = 0; i < 10; ++i){}
for(i = 0; i <= 10; i++){}
The first one iterates to 9, the second iterates to 10. That's all.
The pre-/post-increment operation makes no difference.
Un-optimized code generated for both versions:
for(int i = 0; i < 10; ++i)
00E517AE mov dword ptr [i],0
00E517B5 jmp wmain+30h (0E517C0h)
00E517B7 mov eax,dword ptr [i]
00E517BA add eax,1
00E517BD mov dword ptr [i],eax
00E517C0 cmp dword ptr [i],0Ah
00E517C4 jge wmain+53h (0E517E3h)
{
}
for(int i = 0; i <= 10; i++)
00E517E3 mov dword ptr [i],0
00E517EA jmp wmain+65h (0E517F5h)
00E517EC mov eax,dword ptr [i]
00E517EF add eax,1
00E517F2 mov dword ptr [i],eax
00E517F5 cmp dword ptr [i],0Ah
00E517F9 jg wmain+88h (0E51818h)
{
}
So, even here, there is no performance penalty. The claim that i++ is slower than ++i is just not true (at least in this context, where it makes no difference). It would matter for, say, int y = i++, but there the two would do different things, which is not the case here. The performance issue might have been valid on compilers from 20 years ago, but not anymore.
The pre-/post-increment operation makes a difference when you use its value in an assignment.
Say
i=10;
j = i++;
Here the value of i will be 11, but the value of j will be 10, because i is incremented after its value is assigned to j, i.e. post-increment.
i=10;
j = ++i;
Here the value of i will be 11, and the value of j will also be 11, because i is incremented before its value is assigned to j, i.e. pre-increment.
In the first you have a pre-increment, and the second has a post-increment.
The only real difference is the condition, i.e. the first checks up to 9 and the second up to 10.
In both loops the choice of increment operator makes no difference in this case.
The first one will run 10 times. The second one will run 11 times.
Most people have already stated that the number of iterations differs by one, and that the pre-/post-increment makes no difference here.
For C, I would say the first loop is what you more commonly come across. I think the reason is that C uses zero-based arrays, and thus the length of the array (or string, as that's an array of chars) is not a valid index into it (that would be out of bounds). Thus, when looping through an array of length 10, as in this example, the first loop is the more logical one, since you can safely use i as an index into the array. The second loop would result in an error (probably a segmentation fault).
You say you're used to the second one. I don't know why, but I assume it comes from some other programming language, or from the fact that in math, loops (summations and such) run up to the limit inclusive (but then often start at one). Zero-based indices can be slightly frustrating in such cases.
In short, in my experience, you'll find the first loop more often, but there are plenty of use-cases for the second.
As for the ++i versus i++: I'm inclined to the latter, since this part of the for statement happens at the end of the loop, so a postfix notation feels more logical. But once again, that doesn't really matter.
gprof is not working properly on my system (MinGW) so I'd like to know which one of the following snippets is more efficient, on average.
I'm aware that internally C compilers convert everything into pointer arithmetic, but nevertheless I'd like to know if any of the following snippets has any significant advantage over the others.
The array has been allocated dynamically in contiguous memory as a 1D array and may be re-allocated at run time (it's for a simple board game, in which the player is allowed to re-define the board's size as often as he wants).
Please note that i & j must be calculated and passed into the function set_cell() in every loop iteration (gridType is a simple struct with a few ints and a pointer to another cell struct).
Thanks in advance!
Allocate memory
grid = calloc( (nrows * ncols), sizeof(gridType) );
Snippet #1 (parse sequentially as 1D)
gridType *gp = grid;
register int i=0 ,j=0; // we need to pass those in set_cell()
if ( !grid )
return;
for (gp=grid; gp < grid+(nrows*ncols); gp++)
{
set_cell( gp, i, j, !G_OPENED, !G_FOUND, value, NULL );
if (j == ncols-1) { // last col of current row has been reached
j=0;
i++;
}
else // last col of current row has NOT been reached
j++;
}
Snippet #2 (parse as 2D array, using pointers only)
gridType *gp1, *gp2;
if ( !grid )
return;
for (gp1=grid; gp1 < grid+nrows; gp1+=ncols)
for (gp2=gp1; gp2 < gp1+ncols; gp2++)
set_cell( gp2, (gp1-grid), (gp2-gp1), !G_OPENED, !G_FOUND, value, NULL );
Snippet #3 (parse as 2D, using counters only)
register int i,j; // we need to pass those in set_cell()
for (i=0; i<nrows; i++)
for (j=0; j<ncols; j++)
set_cell( &grid[i * ncols + j], i, j, !G_OPENED, !G_FOUND, value, NULL);
Free memory
free( grid );
EDIT:
I fixed #2 from gp1++) to gp1+=ncols) in the 1st loop, after Paul's correction (thx!)
For anything like this, the answer is going to depend on the compiler and the machine you're running it on. You could try each of your code snippets and measure how long each one takes.
However, this is a prime example of premature optimization. The best thing to do is to pick the snippet which looks the clearest and most maintainable. You'll get much more benefit from doing that in the long run than from any savings you'd make from choosing the one that's fastest on your machine (which might not be fastest on someone else's anyway!)
Well, snippet 2 doesn't exactly work. You need different incrementing behavior; the outer loop should read for (gp1 = grid; gp1 < grid + (nrows * ncols); gp1 += ncols).
Of the other two, any compiler that's paying attention will almost certainly convert snippet 3 into something equivalent to snippet 1. But really, there's no way to know without profiling them.
Also, remember the words of Knuth: "Premature optimization is the ROOT OF ALL EVIL. I have seen more damage done in the name of 'optimization' than for all other causes combined, including sheer, wrongheaded stupidity." People who write compilers are smarter than you (unless you're secretly Knuth or Hofstadter), so let the compiler do its job and you can get on with yours. Trying to write "clever" optimized code will usually just confuse the compiler, preventing it from writing even better, more optimized code.
This is the way I'd write it. IMHO it's shorter, clearer and simpler than any of your ways.
int i, j;
gridType *gp = grid;
for (i = 0; i < nrows; i++)
for (j = 0; j < ncols; j++)
set_cell( gp++, i, j, !G_OPENED, !G_FOUND, value, NULL );
gprof not working isn't a real excuse. You can still set up a benchmark and measure execution time.
You might not be able to measure any difference on modern CPUs until nrows*ncols gets very large or the reallocation happens very often, so you might end up optimizing the wrong part of your code.
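If you do want numbers, a minimal clock()-based harness is enough (a sketch; fill_grid() is a stand-in name for whichever snippet you wrap in a function):
#include <stdio.h>
#include <time.h>

clock_t t0 = clock();
for (int run = 0; run < 1000; run++)
    fill_grid(grid, nrows, ncols, value);   /* hypothetical wrapper */
printf("1000 runs: %.3f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);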
This certainly is micro-optimization as the most runtime will most probably be spent in set_cell and everything else could be optimized to the same or very similar code by the compiler.
You don't know until you measure it.
Any decent compiler may produce the same code for all of them; even if it doesn't, the effects of caching, pipelining, predictive branching and other clever stuff mean that simply guessing the number of instructions isn't enough.
Our computer science teacher once said that for some reason it is faster to count down than to count up.
For example, if you need to use a for loop and the loop index is not used anywhere (like printing a line of N *s to the screen).
I mean that code like this:
for (i = N; i >= 0; i--)
putchar('*');
is faster than:
for (i = 0; i < N; i++)
putchar('*');
Is it really true? And if so, does anyone know why?
Is it really true? And if so, does anyone know why?
In ancient days, when computers were still chipped out of fused silica by hand, when 8-bit microcontrollers roamed the Earth, and when your teacher was young (or your teacher's teacher was young), there was a common machine instruction called decrement and skip if zero (DSZ). Hotshot assembly programmers used this instruction to implement loops. Later machines got fancier instructions, but there were still quite a few processors on which it was cheaper to compare something with zero than to compare with anything else. (It's true even on some modern RISC machines, like PPC or SPARC, which reserve a whole register to be always zero.)
So, if you rig your loops to compare with zero instead of N, what might happen?
You might save a register
You might get a compare instruction with a smaller binary encoding
If a previous instruction happens to set a flag (likely only on x86 family machines), you might not even need an explicit compare instruction
Are these differences likely to result in any measurable improvement on real programs on a modern out-of-order processor? Highly unlikely. In fact, I'd be impressed if you could show a measurable improvement even on a microbenchmark.
Summary: I smack your teacher upside the head! You shouldn't be learning obsolete pseudo-facts about how to organize loops. You should be learning that the most important thing about loops is to be sure that they terminate, produce correct answers, and are easy to read. I wish your teacher would focus on the important stuff and not mythology.
Here's what might happen on some hardware depending on what the compiler can deduce about the range of the numbers you're using: with the incrementing loop you have to test i<N each time round the loop. For the decrementing version, the carry flag (set as a side effect of the subtraction) may automatically tell you if i>=0. That saves a test per time round the loop.
In reality, on modern pipelined processor hardware, this stuff is almost certainly irrelevant as there isn't a simple 1-1 mapping from instructions to clock cycles. (Though I could imagine it coming up if you were doing things like generating precisely timed video signals from a microcontroller. But then you'd write in assembly language anyway.)
In the Intel x86 instruction set, building a loop that counts down to zero can usually be done with fewer instructions than a loop that counts up to a non-zero exit condition. Specifically, the ECX register is traditionally used as a loop counter in x86 asm, and the instruction set has the special jcxz/jecxz jump instructions, which test the CX/ECX register for zero and jump based on the result of the test.
However, the performance difference will be negligible unless your loop is already very sensitive to clock cycle counts. Counting down to zero might shave 4 or 5 clock cycles off each iteration of the loop compared to counting up, so it's really more of a novelty than a useful technique.
Also, a good optimizing compiler these days should be able to convert your count up loop source code into count down to zero machine code (depending on how you use the loop index variable) so there really isn't any reason to write your loops in strange ways just to squeeze a cycle or two here and there.
Yes..!!
Counting from N down to 0 is slightly faster than counting from 0 to N, in the sense of how the hardware handles the comparison.
Note the comparison in each loop
i>=0
i<N
Most processors have a compare-with-zero instruction, so the first one will be translated into machine code roughly as:
Load i
Compare and jump if Less than or Equal zero
But the second one needs to load N from memory each time:
load i
load N
Sub i and N
Compare and jump if Less than or Equal zero
So it is not because of counting down or up, but because of how your code is translated into machine code.
So counting from 10 to 100 is the same as counting from 100 to 10,
but counting from i=100 to 0 is faster than from i=0 to 100 - in most cases,
and counting from i=N to 0 is faster than from i=0 to N.
Note that nowadays compilers may do this optimization for you (if they are smart enough).
Note also that the pipeline can cause a Belady's-anomaly-like effect (one cannot be sure which will be better).
Lastly, please note that the two for loops you have presented are not equivalent: the first prints one more *.
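For completeness, a down-counting loop that really is equivalent to for (i = 0; i < N; i++) would be:
for (i = N; i-- > 0; )   /* runs exactly N times: i takes N-1 down to 0 */
    putchar('*');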
Related:
Why does n++ execute faster than n=n+1?
In C to pseudo-assembly:
for (i = 0; i < 10; i++) {
foo(i);
}
turns into
clear i
top_of_loop:
call foo
increment i
compare 10, i
jump_less top_of_loop
while:
for (i = 10; i >= 0; i--) {
foo(i);
}
turns into
load i, 10
top_of_loop:
call foo
decrement i
jump_not_neg top_of_loop
Note the lack of the compare in the second pseudo-assembly. On many architectures there are flags that are set by arithmetic operations (add, subtract, multiply, divide, increment, decrement) which you can use for jumps. These often give you what is essentially a comparison of the result of the operation with 0 for free. In fact, on many architectures
x = x - 0
is semantically the same as
compare x, 0
Also, the compare against 10 in my example could result in worse code. The 10 may have to live in a register, so if registers are in short supply that costs something, and it may result in extra code to move things around or to reload the 10 every time through the loop.
Compilers can sometimes rearrange the code to take advantage of this, but it is often difficult, because they cannot always be sure that reversing the direction of the loop is semantically equivalent.
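A sketch of a loop no compiler may legally reverse, because each iteration reads the previous one's result:
/* Prefix sum: a[i] depends on a[i-1], so iterating downwards
   would compute something different. */
for (i = 1; i < n; i++)
    a[i] += a[i - 1];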
Counting down is faster in a case like this:
for (i = someObject.getAllObjects.size(); i >= 0; i--) {…}
because someObject.getAllObjects.size() executes once at the beginning.
Sure, similar behaviour can be achieved by calling size() outside the loop, as Peter mentioned:
size = someObject.getAllObjects.size();
for (i = 0; i < size; i++) {…}
What matters much more than whether you're increasing or decreasing your counter is whether you're going up memory or down memory. Most caches are optimized for going up memory, not down memory. Since memory access time is the bottleneck that most programs today face, this means that changing your program so that you go up memory might result in a performance boost even if this requires comparing your counter to a non-zero value. In some of my programs, I saw a significant improvement in performance by changing my code to go up memory instead of down it.
Skeptical? Just write a program to time loops going up/down memory. Here's the output that I got:
Average Up Memory = 4839 mus
Average Down Memory = 5552 mus
Average Up Memory = 18638 mus
Average Down Memory = 19053 mus
(where "mus" stands for microseconds) from running this program:
#include <chrono>
#include <iostream>
#include <random>
#include <vector>
using namespace std;
//Sum all numbers going up memory.
template<class Iterator, class T>
inline void sum_abs_up(Iterator first, Iterator one_past_last, T &total) {
T sum = 0;
auto it = first;
do {
sum += *it;
it++;
} while (it != one_past_last);
total += sum;
}
//Sum all numbers going down memory.
template<class Iterator, class T>
inline void sum_abs_down(Iterator first, Iterator one_past_last, T &total) {
T sum = 0;
auto it = one_past_last;
do {
it--;
sum += *it;
} while (it != first);
total += sum;
}
//Time how long it takes to make num_repititions identical calls to sum_abs_down().
//We will divide this time by num_repititions to get the average time.
template<class T>
chrono::nanoseconds TimeDown(vector<T> &vec, const vector<T> &vec_original,
size_t num_repititions, T &running_sum) {
chrono::nanoseconds total{0};
for (size_t i = 0; i < num_repititions; i++) {
auto start_time = chrono::high_resolution_clock::now();
sum_abs_down(vec.begin(), vec.end(), running_sum);
total += chrono::high_resolution_clock::now() - start_time;
vec = vec_original;
}
return total;
}
template<class T>
chrono::nanoseconds TimeUp(vector<T> &vec, const vector<T> &vec_original,
size_t num_repititions, T &running_sum) {
chrono::nanoseconds total{0};
for (size_t i = 0; i < num_repititions; i++) {
auto start_time = chrono::high_resolution_clock::now();
sum_abs_up(vec.begin(), vec.end(), running_sum);
total += chrono::high_resolution_clock::now() - start_time;
vec = vec_original;
}
return total;
}
template<class Iterator, typename T>
void FillWithRandomNumbers(Iterator start, Iterator one_past_end, T a, T b) {
random_device rnd_device;
mt19937 generator(rnd_device());
uniform_int_distribution<T> dist(a, b);
for (auto it = start; it != one_past_end; it++)
*it = dist(generator);
return ;
}
template<class Iterator>
void FillWithRandomNumbers(Iterator start, Iterator one_past_end, double a, double b) {
random_device rnd_device;
mt19937_64 generator(rnd_device());
uniform_real_distribution<double> dist(a, b);
for (auto it = start; it != one_past_end; it++)
*it = dist(generator);
return ;
}
template<class ValueType>
void TimeFunctions(size_t num_repititions, size_t vec_size = (1u << 24)) {
auto lower = numeric_limits<ValueType>::min();
auto upper = numeric_limits<ValueType>::max();
vector<ValueType> vec(vec_size);
FillWithRandomNumbers(vec.begin(), vec.end(), lower, upper);
const auto vec_original = vec;
ValueType sum_up = 0, sum_down = 0;
auto time_up = TimeUp(vec, vec_original, num_repititions, sum_up).count();
auto time_down = TimeDown(vec, vec_original, num_repititions, sum_down).count();
cout << "Average Up Memory = " << time_up/(num_repititions * 1000) << " mus\n";
cout << "Average Down Memory = " << time_down/(num_repititions * 1000) << " mus"
<< endl;
return ;
}
int main() {
size_t num_repititions = 1 << 10;
TimeFunctions<int>(num_repititions);
cout << '\n';
TimeFunctions<double>(num_repititions);
return 0;
}
Both sum_abs_up and sum_abs_down do the same thing (sum the vector of numbers) and are timed the same way with the only difference being that sum_abs_up goes up memory while sum_abs_down goes down memory. I even pass vec by reference so that both functions access the same memory locations. Nevertheless, sum_abs_up is consistently faster than sum_abs_down. Give it a run yourself (I compiled it with g++ -O3).
It's important to note how tight the loop that I'm timing is. If a loop's body is large (has a lot of code) then it likely won't matter whether its iterator goes up or down memory since the time it takes to execute the loop's body will likely completely dominate. Also, it's important to mention that with some rare loops, going down memory is sometimes faster than going up it. But even with such loops it was never the case that going up memory was always slower than going down (unlike small-bodied loops that go up memory, for which the opposite is frequently true; in fact, for a small handful of loops I've timed, the increase in performance by going up memory was 40+%).
The point is, as a rule of thumb, if you have the option, if the loop's body is small, and if there's little difference between having your loop go up memory instead of down it, then you should go up memory.
FYI vec_original is there for experimentation, to make it easy to change sum_abs_up and sum_abs_down in a way that makes them alter vec while not allowing these changes to affect future timings. I highly recommend playing around with sum_abs_up and sum_abs_down and timing the results.
On some older CPUs there are/were instructions like DJNZ == "decrement and jump if not zero". This allowed for efficient loops where you loaded an initial count value into a register and then you could effectively manage a decrementing loop with one instruction. We're talking 1980s ISAs here though - your teacher is seriously out of touch if he thinks this "rule of thumb" still applies with modern CPUs.
Is it faster to count down than up?
Maybe. But far more than 99% of the time it won't matter, so you should use the most 'sensible' test for terminating the loop, and by sensible, I mean that it takes the least amount of thought by a reader to figure out what the loop is doing (including what makes it stop). Make your code match the mental (or documented) model of what the code is doing.
If the loop is working its way up through an array (or list, or whatever), an incrementing counter will often match up better with how the reader might be thinking about what the loop is doing - code your loop this way.
But if you're working through a container that has N items, and are removing the items as you go, it might make more cognitive sense to work the counter down.
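For example, removing matching elements from a plain array in place (a sketch; should_remove, items and count are illustrative names): counting down means each removal only shifts elements you have already visited.
for (size_t i = count; i-- > 0; ) {
    if (should_remove(items[i])) {
        /* close the gap; everything moved lives above i */
        memmove(&items[i], &items[i + 1],
                (count - i - 1) * sizeof items[0]);
        count--;
    }
}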
A bit more detail on the 'maybe' in the answer:
It's true that on most architectures, testing for a calculation resulting in zero (or going from zero to negative) requires no explicit test instruction - the result can be checked directly. If you want to test whether a calculation results in some other number, the instruction stream will generally have to have an explicit instruction to test for that value. However, especially with modern CPUs, this test will usually add less than noise-level additional time to a looping construct. Particularly if that loop is performing I/O.
On the other hand, if you count down from zero, and use the counter as an array index, for example, you might find the code working against the memory architecture of the system - memory reads will often cause a cache to 'look ahead' several memory locations past the current one in anticipation of a sequential read. If you're working backwards through memory, the caching system might not anticipate reads of a memory location at a lower memory address. In this case, it's possible that looping 'backwards' might hurt performance. However, I'd still probably code the loop this way (as long as performance didn't become an issue) because correctness is paramount, and making the code match a model is a great way to help ensure correctness. Incorrect code is as unoptimized as you can get.
So I would tend to forget the professor's advice (of course, not on his test though - you should still be pragmatic as far as the classroom goes), unless and until the performance of the code really mattered.
Bob,
Not until you are doing micro-optimizations, at which point you will have the manual for your CPU to hand. Further, if you were doing that sort of thing, you probably wouldn't need to ask this question anyway. :-) But your teacher evidently doesn't subscribe to that idea...
There are 4 things to consider in your loop example:
for (i=N;
i>=0; //thing 1
i--) //thing 2
{
putchar('*'); //thing 3
}
Comparison
Comparison is (as others have indicated) relevant to particular processor architectures. There are more types of processors than those that run Windows. In particular, there might be an instruction that simplifies and speeds up comparisons with 0.
Adjustment
In some cases, it is faster to adjust up or down. Typically a good compiler will figure it out and redo the loop if it can. Not all compilers are good though.
Loop Body
You are doing I/O with putchar, which (eventually) means system calls. That is massively slow. Plus, you are rendering onto the screen (indirectly). That is even slower. Think a 1000:1 ratio or more. In this situation, the loop body totally and utterly outweighs the cost of the loop adjustment/comparison.
Caches
A cache and memory layout can have a large effect on performance. In this situation, it doesn't matter. However, if you were accessing an array and needed optimal performance, it would behoove you to investigate how your compiler and your processor lay out memory accesses, and to tune your software to make the most of that. The stock example is the one given in relation to matrix multiplication.
It can be faster.
On the NIOS II processor I'm currently working with, the traditional for loop
for(i=0;i<100;i++)
produces the assembly:
ldw r2,-3340(fp) %load i to r2
addi r2,r2,1 %increase i by 1
stw r2,-3340(fp) %save value of i
ldw r2,-3340(fp) %load value again (???)
cmplti r2,r2,100 %compare if less than 100
bne r2,zero,0xa018 %jump
If we count down
for(i=100;i--;)
we get assembly that needs two fewer instructions:
ldw r2,-3340(fp)
addi r3,r2,-1
stw r3,-3340(fp)
bne r2,zero,0xa01c
If we have nested loops, where the inner loop is executed a lot, we can have a measurable difference:
int i,j,a=0;
for(i=100;i--;){
for(j=10000;j--;){
a = j+1;
}
}
If the inner loop is written as above, the execution time is 0.12199999999999999734 seconds.
If the inner loop is written the traditional way, the execution time is 0.17199999999999998623 seconds. So the loop counting down is about 30% faster.
But: this test was made with all GCC optimizations turned off. If we turn them on, the compiler is actually smarter than this hand optimization and even keeps the value in a register during the whole loop, and we would get assembly like
addi r2,r2,-1
bne r2,zero,0xa01c
In this particular example the compiler even notices that the variable a will always be 1 after the loop executes, and skips the loops altogether.
However, I have experienced that sometimes, if the loop body is complex enough, the compiler is not able to do this optimization, so the safest way to always get fast loop execution is to write:
register int i;
for(i=10000;i--;)
{ ... }
Of course this only works if it does not matter that the loop is executed in reverse, and, as Betamoo said, only if you are counting down to zero.
Regardless of the direction, always use the prefix form (++i instead of i++)!
for (i=N; i>=0; --i)
or
for (i=0; i<N; ++i)
Explanation: http://www.eskimo.com/~scs/cclass/notes/sx7b.html
Furthermore you can write
for (i=N; i; --i)
But I would expect modern compilers to be able to do exactly these optimizations.
It is an interesting question, but as a practical matter I don't think it's important, and it does not make one loop any better than the other.
According to this wikipedia page: Leap second, "...the solar day becomes 1.7 ms longer every century due mainly to tidal friction." But if you are counting days until your birthday, do you really care about this tiny difference in time?
It's more important that the source code is easy to read and understand. Those two loops are a good example of why readability is important -- they don't loop the same number of times.
I would bet that most programmers read (i = 0; i < N; i++) and understand immediately that this loops N times. A loop of (i = 1; i <= N; i++), for me anyway, is a little less clear, and with (i = N; i > 0; i--) I have to think about it for a moment. It's best if the intent of the code goes directly into the brain without any thinking required.
Strangely, it appears that there IS a difference. At least in PHP. Consider the following benchmark:
<?php
print "<br>".PHP_VERSION;
$iter = 100000000;
$i=$t1=$t2=0;
$t1 = microtime(true);
for($i=0;$i<$iter;$i++){}
$t2 = microtime(true);
print '<br>$i++ : '.($t2-$t1);
$t1 = microtime(true);
for($i=$iter;$i>0;$i--){}
$t2 = microtime(true);
print '<br>$i-- : '.($t2-$t1);
$t1 = microtime(true);
for($i=0;$i<$iter;++$i){}
$t2 = microtime(true);
print '<br>++$i : '.($t2-$t1);
$t1 = microtime(true);
for($i=$iter;$i>0;--$i){}
$t2 = microtime(true);
print '<br>--$i : '.($t2-$t1);
Results are interesting:
PHP 5.2.13
$i++ : 8.8842368125916
$i-- : 8.1797409057617
++$i : 8.0271911621094
--$i : 7.1027431488037
PHP 5.3.1
$i++ : 8.9625310897827
$i-- : 8.5790238380432
++$i : 5.9647901058197
--$i : 5.4021768569946
If someone knows why, it would be nice to know :)
EDIT: The results are the same even if you start counting not from 0 but from some other arbitrary value. So it is probably not only the comparison with zero that makes the difference?
What your teacher said was an oblique statement without much clarification.
It is NOT that decrementing is faster than incrementing, but that you can create a much, much faster loop with decrement than with increment.
Without going on at length about it, and assuming the loop counter isn't needed for anything else: what matters below is just speed and the loop count (non-zero).
Here is how most people implement a loop with 10 iterations:
int i;
for (i = 0; i < 10; i++)
{
//something here
}
For 99% of cases it is all one may need, but alongside PHP, Python and JavaScript there is a whole world of time-critical software (usually embedded, OSes, games, etc.) where CPU ticks really matter, so look briefly at the assembly code of:
int i;
for (i = 0; i < 10; i++)
{
//something here
}
after compilation (without optimisation), the compiled version may look like this (VS2015):
-------- C7 45 B0 00 00 00 00 mov dword ptr [i],0
-------- EB 09 jmp labelB
labelA 8B 45 B0 mov eax,dword ptr [i]
-------- 83 C0 01 add eax,1
-------- 89 45 B0 mov dword ptr [i],eax
labelB 83 7D B0 0A cmp dword ptr [i],0Ah
-------- 7D 02 jge out1
-------- EB EF jmp labelA
out1:
The whole loop is 8 instructions (26 bytes). In it there are actually 6 instructions (17 bytes) with 2 branches. Yes, yes, I know it can be done better (it's just an example).
Now consider this frequent construct, which you will often find written by embedded developers:
i = 10;
do
{
//something here
} while (--i);
It also iterates 10 times (yes, I know the value of i is different compared with the for loop shown, but we care about the iteration count here).
This may be compiled into this:
00074EBC C7 45 B0 01 00 00 00 mov dword ptr [i],1
00074EC3 8B 45 B0 mov eax,dword ptr [i]
00074EC6 83 E8 01 sub eax,1
00074EC9 89 45 B0 mov dword ptr [i],eax
00074ECC 75 F5 jne main+0C3h (074EC3h)
5 instructions (18 bytes) and just one branch. Actually, there are 4 instructions in the loop (11 bytes).
The best thing is that some CPUs (x86/x64-compatible included) have an instruction that decrements a register, then compares the result with zero and branches if it is non-zero. Virtually all PC CPUs implement this instruction. Using it, the loop is actually just one (yes, one) 2-byte instruction:
00144ECE B9 0A 00 00 00 mov ecx,0Ah
label:
// something here
00144ED3 E2 FE loop label (0144ED3h) // decrement ecx and jump to label if not zero
Do I have to explain which is faster?
Now, even if a particular CPU does not implement the above instruction, all that is required to emulate it is a decrement followed by a conditional jump taken while the result of the previous instruction is not zero.
So, regardless of some cases that you may point out in a comment about why I am wrong, etc., I EMPHASISE: YES, IT IS BENEFICIAL TO LOOP DOWNWARDS, if you know how, why and when.
PS. Yes, I know that a wise compiler (with the appropriate optimisation level) will rewrite a for loop with an ascending loop counter into a do..while equivalent for constant loop iterations... (or unroll it)...
No, that's not really true. One situation where it could be faster is when you would otherwise be calling a function to check the bounds during every iteration of a loop.
for (int i = myCollection.size() - 1; i >= 0; i--)
{
...
}
But if it's less clear to do it that way, it's not worthwhile. In modern languages, you should use a foreach loop when possible, anyway. You specifically mention the case where you should use a foreach loop -- when you don't need the index.
The point is that when counting down you don't need to check i >= 0 separately from decrementing i. Observe:
for (i = 5; i--;) {
alert(i); // alert boxes showing 4, 3, 2, 1, 0
}
Both the comparison and decrementing i can be done in the one expression.
See other answers for why this boils down to fewer x86 instructions.
As to whether it makes a meaningful difference in your application, well I guess that depends on how many loops you have and how deeply nested they are. But to me, it's just as readable to do it this way, so I do it anyway.
Now, I think you've had enough assembly lectures :) I would like to present another reason for the top-to-bottom approach.
The reason to go from the top is very simple. In the body of the loop, you might accidentally change the boundary, which might end in incorrect behaviour or even a non-terminating loop.
Look at this small portion of Java code (the language does not matter, I guess, for this point):
System.out.println("top->down");
int n = 999;
for (int i = n; i >= 0; i--) {
n++;
System.out.println("i = " + i + "\t n = " + n);
}
System.out.println("bottom->up");
n = 1;
for (int i = 0; i < n; i++) {
n++;
System.out.println("i = " + i + "\t n = " + n);
}
So my point is that you should consider preferring to go from the top down, or to have a constant as the boundary.
At an assembler level, a loop that counts down to zero is generally slightly faster than one that counts up to a given value. If the result of a calculation is equal to zero, most processors will set a zero flag. If subtracting one makes a calculation wrap around past zero, this will normally change the carry flag (on some processors it will set it, on others it will clear it), so the comparison with zero comes essentially for free.
This is even more true when the number of iterations is not a constant but a variable.
In trivial cases the compiler may be able to optimise the count direction of a loop automatically but in more complex cases it may be that the programmer knows that the direction of the loop is irrelevant to the overall behaviour but the compiler cannot prove that.