Which code executes faster? - C

These are two pieces of code:
code 1:
int d;
d=0;
d=a+b;
print d+c+e;
code 2:
print a+b+c+e;
I am learning C programming, and I have some doubts about the execution of this code.
Which code executes faster and uses less memory?

Given what you have posted,
Example 1
int d;
d=0;
d=a+b;
/* print d+c+e;*/
printf("%i\n", d+c+e);
Example 2
/* print a+b+c+e; */
printf("%i\n", a+b+c+e);
Which is faster is tricky. If your compiler optimizes d away in Example 1, the two are equivalent. On the other hand, if your compiler can't determine that the d=0 assignment is discarded (and it may not), then it can't decide that d is really const int d = a+b;, and the examples will not be equivalent, with Example 2 being (slightly) faster.
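To make that concrete, here is a minimal sketch of what Example 1 reduces to once the compiler proves the d=0 store is dead (assuming a, b, c, and e are ints already in scope):

/* Example 1 after dead-store elimination: the 0 is never read, */
/* so d is effectively a constant initialized to a+b.           */
const int d = a + b;
printf("%i\n", d + c + e); /* the same additions Example 2 performs */

At that point both examples compile down to the same three additions, and neither is faster.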

Related

Is there a case where bitwise swapping won't work?

At school, several years ago, I had to write a swap function that swaps two integers. I wanted to do this using bitwise operations without using a third variable, so I came up with this:
void swap(int *a, int *b) {
    *a = *a ^ *b;
    *b = *a ^ *b;
    *a = *a ^ *b;
}
I thought it was good, but when my function was tested by the school's correction program, it found an error (of course, when I asked, they didn't want to tell me what it was), and still today I don't know what didn't work. So I wonder: in which case would this method fail?
I wanted to do this using bitwise operations without using a third variable
Do you mind if I ask why? Was there a practical reason for this limitation, or was it just an intellectual puzzle?
when my function was tested by the school's correction program it found an error
I can't be sure what the correction program was complaining about, but one class of inputs this sort of solution is known to fail on is exemplified by
int x = 5;
swap(&x, &x);
printf("%d\n", x);
This prints 0, not 5. When both pointers refer to the same object, the first statement computes x ^ x, which is 0; the value is destroyed at that point, and the remaining two XORs just leave it at 0.
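A common fix, as a minimal sketch, is to guard against aliasing before doing the XOR dance:

void swap(int *a, int *b) {
    if (a != b) { /* when both pointers alias the same object, there is nothing to do */
        *a = *a ^ *b;
        *b = *a ^ *b;
        *a = *a ^ *b;
    }
}

With the guard in place, swap(&x, &x) leaves x unchanged.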
You might say, "Why would anyone swap something with itself?"
They probably wouldn't, as I've shown it, but perhaps you can imagine that, in a mediocrely-written sort algorithm, it might end up doing the equivalent of
if (a[i] < a[j]) {
    /* they are in order */
} else {
    swap(&a[i], &a[j]);
}
Now, if it ever happens that i and j are the same, the swap function will wrongly zero out a[i].
See also What is the difference between two different swapping function?

Improvement in execution time by adding automatic variables

I was playing with the C language on my own and I was trying to write the fastest possible algorithm to find amicable numbers.
This is what I wrote (I've just started, so please do not suggest methods to improve the algorithm, since I want to write it on my own):
#include <stdio.h>
#include <time.h>

#define MAX (200000)

int dividersSum(int);
void amicable();

int main() {
    clock_t start = clock();
    amicable();
    double executionTime = ((double)clock() - start) / CLOCKS_PER_SEC;
    printf("\nEXECUTION TIME: %lf", executionTime);
    return 0;
}

int dividersSum(int n) {
    int i, sum;
    for (sum = 1, i = 2; i <= n / 2; i++) {
        if (!(n % i)) {
            sum += n / i;
        }
    }
    return sum;
}

void amicable() {
    int a, divSum, tot = 0;
    for (a = 1; a < MAX; a++) {
        divSum = dividersSum(a);
        if (divSum > a && dividersSum(divSum) == a) {
            printf("\n\t%d\t\t%d", a, dividersSum(a));
            tot++;
        }
    }
    printf("\n\nTOT: %d", tot);
}
Now, this works fine. Or, at least, probably not that fine since it took exactly 40 seconds to complete, but it works.
But if I change this line:
int i, sum;
Into this:
int i, sum, a = 4, b = 4, c = 4, d = 4, e = 4, f = 4;
It "drastically" improves. It takes 36 seconds to complete.
I get these execution times from the console timer. I know it isn't accurate at all (as soon as I have the chance to work on this algorithm again, I'll try the time.h library), but I tried the two versions of the code over 50 times, and I always get 40 or more seconds for the "normal" version and 36 or less for the other one.
I've also tried to change the machine where I run the program, but it always takes about 10% less to execute the modified version.
Of course this makes no sense to me (I'm pretty new to programming, and I've Googled it but found nothing, partly because I don't really know what to look for...). The only thing I can think of is a compiler optimization (I use the hated Dev-C++), but which optimization? And if this is the case, why doesn't the compiler apply the same optimization to the "normal" code, since it would make it faster?
Oh, if you're wondering why I tried declaring random variables, the reason is that I wanted to test whether using more variables caused a measurable slowdown. I now know it is a very poor way to test this, but as I said at the beginning of the post, I was "playing"...
Well, I asked my teacher at university. He ran the two versions on his machine, and he was fairly surprised at first (43 seconds for the "normal" one and 36 for the faster one).
Then he told me that he didn't know exactly why this happens, but he speculated that it is due to the way the compiler organizes the code: those extra variables probably force the compiler to store the code in different pages of memory, and that is what causes the difference.
Of course he wasn't sure about his answer, but it seems fair to me.
It's interesting how things like this can happen sometimes.
Moreover, as Brendan said in the comments section:
If the compiler isn't ignoring (or warning about) unused variables (even though it's a relatively trivial optimisation), then the compiler is poo (either bad in general or crippled by command line options), and the answer to your question should be "the compiler is poo" (e.g. fails to optimise the second version into exactly the same output code as the first version).
Of course, if someone thinks they have a better explanation, I would be happy to hear it!
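One concrete way to investigate, as a sketch (assuming GCC, which Dev-C++ uses as its backend): compile both versions to assembly with optimizations enabled and diff the listings. If the unused variables really are optimized away, the two listings should be identical. The file names here are hypothetical:

gcc -O2 -Wall -S normal.c -o normal.s
gcc -O2 -Wall -S extra_vars.c -o extra_vars.s
diff normal.s extra_vars.s

With -Wall, GCC will also warn about the unused variables, which tells you whether the compiler even noticed them.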

C function variable changing

I have some basic code for learning about function calls, but I did not understand something in it. I got confused when I compared my expected answer with the actual output.
My code is below:
#include <stdio.h>

void f(int a, int b, double c) {
    printf("%d \n", a);
    printf("%d \n", b);
    printf("%f \n", c);
}

int main(void) {
    int i = 0, x = 7;
    float a = 2.25;
    f(x = 5, x - 7, a);
    printf("\n\n");
    f(x = 6, x - 7, a);
    printf("\n\n");
    printf("%d %d\n", i, i++);
    printf("%d %d\n", i, ++i);
    return 0;
}
For the last two printf statements, my expected answer was:
0 0
1 1
But the actual output was:
1 0
2 2
Can you explain why?
It is undefined behavior in C. It may vary from one execution to another, or with many other things. The order of evaluation of function arguments is unspecified, so you can never explain the behavior you see by any standard rule; it could give a different result when you run it on a different machine in front of your teacher.
Better to write code that avoids this sort of ambiguity entirely.
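As a minimal sketch, perform the side effect in its own statement, so everything is sequenced before printf is called:

int old = i; /* capture the value before the increment */
i++;
printf("%d %d\n", i, old); /* both arguments are now well defined */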
The standard gives an example that is explicit about this, in 6.5.2.2p12:
In the function call
(*pf[f1()]) (f2(), f3() + f4())
the functions f1, f2, f3, and f4 may be called in any order. All side effects have to be
completed before the function pointed to by pf[f1()] is called.
In the same way, when you pass arguments, their evaluation order may vary, and your printf is another such example.
Check the slides from which you learned about this; there must be one on undefined behavior.
Can you explain why?
Because the compiler happened to evaluate the arguments from right to left. It was allowed to do it either way, and it produced code that did it like that. You know, it didn't want to die like Buridan's donkey ;-)
You may say that C compilers have no free will. True, they don't, but the generated code depends on many different things, such as compiler brand, version, command-line options, etc. And the C standard doesn't impose any constraints on C compilers in this particular case, so it's officially called "undefined behaviour". Just never do this.

Why is Haskell so slow compared to C for Fibonacci sequence?

I am just a beginner in Haskell, and I am writing code to display the first N numbers of the Fibonacci sequence. Here is my code in Haskell:
fib_seq 1 = 1:[]
fib_seq 2 = 1:1:[]
fib_seq n = sum(take 2 (fib_seq (n-1))):fib_seq (n-1)
When I run this code for higher numbers like fib_seq 40 in GHCi, it takes so long to evaluate that my computer hangs and I have to interrupt it. However, when I write the same exact logic in C (I just print instead of saving the values in a list),
#include <stdio.h>

int fib_seq(int n) {
    if (n == 1) return 1;
    else if (n == 2) return 1;
    else return fib_seq(n - 1) + fib_seq(n - 2);
}

void print_fib(int n) {
    if (n == 0) return;
    else printf("%i ", fib_seq(n));
    print_fib(n - 1);
}

int main(void) {
    print_fib(40);
    return 0;
}
The code is very fast: it takes about 1 second to run when compiled with GCC. Is Haskell supposed to be this much slower than C? I have looked up other answers on the internet, and they say something about memoization. I am beginning Haskell and I don't know what that means. What I am saying is that the C code and the Haskell code I wrote both do the same exact steps, yet Haskell is so much slower than C that it hangs my GHCi. A 1-2 second difference is something I would never worry about, and if C had taken the same exact time as Haskell, I would not worry either. But Haskell grinding to a halt while C finishes in 1 second is unacceptable.
The following program, compiled with ghc -O2 test.hs, is +/-2% the speed of the C code you posted, compiled with gcc -O2 test.c.
fib_seq :: Int -> Int
fib_seq 1 = 1
fib_seq 2 = 1
fib_seq n = fib_seq (n-1) + fib_seq (n-2)
main = mapM_ (print . fib_seq) [40,39..1]
Some comments:
Unlike you, I implemented the exact same logic. I doubt this is the real difference, though; see the remaining comments for much more likely causes.
I specified the same types as C uses for the arithmetic. You didn't, which is likely to run into two problems: using Integer instead of Int gives you arbitrary-precision arithmetic, and having a class-polymorphic type instead of a monomorphic one adds overhead on every function call.
I compiled. ghci is built to be interactive as quickly as possible, not to produce quick code.
I don't have the right version of llvm installed at the moment, but it will often crunch through heavily-numeric code like this much better than ghc's own codegen. I wouldn't be too surprised if it ended up being faster than gcc.
Of course using one of the many well-known better algorithms for fibonacci is going to trump all this nonsense.
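Since memoization keeps coming up: as a minimal sketch in C (so the idea is visible without any Haskell), memoization just means caching each result the first time it is computed, so the exponential tree of repeated calls collapses into linear work. The memo array below is my own illustration, not part of your original code:

#include <stdio.h>

static long long memo[91]; /* fib(n) for n > 90 would overflow long long */

long long fib(int n) {
    if (n <= 2) return 1;              /* base cases */
    if (memo[n] != 0) return memo[n];  /* already computed: reuse it */
    memo[n] = fib(n - 1) + fib(n - 2); /* compute once and cache */
    return memo[n];
}

int main(void) {
    for (int n = 40; n >= 1; n--)
        printf("%lld ", fib(n));
    printf("\n");
    return 0;
}

This version prints all 40 values instantly, whereas the naive version repeats the same subcomputations an exponential number of times.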
Guess what happens if "fib_seq (n-1)" is evaluated twice on each recursion.
And then try this:
fib_seq 1 = 1:[]
fib_seq 2 = 1:1:[]
fib_seq n = sum (take 2 f) : f
  where f = fib_seq (n-1)

How to understand a recursive function call for large inputs

The result of the following code is 0,1,2,0, which I fully understand after writing out every call explicitly. But I wonder whether there is an easier method to understand what a recursive function is trying to do and to find the result faster. I mean, we can't write out every call if a=1000.
#include <stdio.h>

void fun(int);
typedef int (*pf)(int, int);
int proc(pf, int, int);

int main()
{
    int a = 3;
    fun(a);
    return 0;
}

void fun(int n)
{
    if (n > 0)
    {
        fun(--n);
        printf("%d,", n);
        fun(--n);
    }
}
Your question isn't "what does this do?" but "how do I understand recursive functions for large values?".
Recursion is a great tool for certain kinds of problems. If, for some reason, you ever had to print that sequence of numbers, the above code would be a good way to solve the problem. Recursion is also used in contexts where you have a recursive structure (like a tree or a list) or are dealing with recursive input, like parsers.
You might see the code for a recursive function and think "what does this do?" but it's more likely that the opposite will happen: you will find a problem that you need to solve by writing a program. With experience you will learn to see which problems require a recursive solution, and that's a skill you must develop as a programmer.
The principle of recursion is that you perform a [usually simple] function repeatedly. So to understand a recursive function you usually need to understand only one step, and how to repeat it.
With the above code you don't necessarily need to answer "what output does this code give" but instead "how does it work, and what process does the code follow". You can do both on paper. By stepping through the algorithm you can usually gain insight into what it does. One factor that complicates this example is that it isn't tail-call recursive. This means you must do more work to understand the program.
To 'understand' any program you don't necessarily need to be able to simulate it and calculate the output, and the same applies here.
All you need to do is add some debug print statements to understand it a little better.
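Here is a sketch of how the instrumented version might look (my reconstruction; the exact statements used weren't shown, but this reproduces the trace below, with matching printfs for "start program" and "end program" in main):

void fun(int n)
{
    printf("before if, n is = %d\n", n);
    if (n > 0)
    {
        printf("before fun call, n is = %d\n", n);
        fun(--n);
        printf("printing = %d\n", n); /* stands in for the original printf("%d,", n) */
        fun(--n);
        printf("after second fun call, n is = %d\n", n);
    }
}

Running the program with a=3 prints: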
start program
before if, n is = 3
before fun call, n is = 3
before if, n is = 2
before fun call, n is = 2
before if, n is = 1
before fun call, n is = 1
before if, n is = 0
printing = 0
before if, n is = -1
after second fun call, n is = -1
printing = 1
before if, n is = 0
after second fun call, n is = 0
printing = 2
before if, n is = 1
before fun call, n is = 1
before if, n is = 0
printing = 0
before if, n is = -1
after second fun call, n is = -1
after second fun call, n is = 1
end program
