printf() is printing the wrong value - c

This is my full code. It's printing random negative values each time I run it, and I'm not sure what is wrong.
I'm on Ubuntu, compiling with "gcc -Wall -Wextra test.c".
#include <stdio.h>

int main(void)
{
    unsigned int x = 10;
    unsigned int y = 16;
    unsigned int p = x + y;
    printf("%d\n", &p);
    return 0;
}

You are passing the address of p. You need to pass the value.
printf("%d\n", p);
As you have it, your code is printing the address of p, whatever that happens to be.
In addition, since you are using unsigned int, you probably want the %u format specifier instead of %d.

Calculating the range of long in C

I am writing a program in C to calculate the range of different data types. Please look at the following code:
#include <stdio.h>

main()
{
    int a;
    long b;

    for (a = 0; a <= 0; --a)
        ;
    ++a;
    printf("INT_MIN: %d\n", a);

    for (a = 0; a >= 0; ++a)
        ;
    --a;
    printf("INT_MAX: %d\n", a);

    for (b = 0; b <= 0; --b)
        ;
    ++b;
    printf("LONG_MIN: %d\n", b);

    for (b = 0; b >= 0; ++b)
        ;
    --b;
    printf("LONG_MAX: %d\n", b);
}
The output was:
INT_MIN: -32768
INT_MAX: 32767
LONG_MIN: 0
LONG_MAX: -1
The program took a long pause to print the long values. I also put a printf inside the third loop to test the program (not mentioned here). I found that b did not exit the loop even when it became positive.
I used the same method of calculation. Why did it work for int but not for long?
You are using the wrong format specifier. Since b is of type long, use
printf("LONG_MIN: %ld\n", b);
In fact, if you enabled all warnings, the compiler would probably warn you, e.g.:
t.c:19:30: warning: format specifies type 'int' but the argument has type 'long' [-Wformat]
printf("LONG_MIN: %d\n", b);
In C it is undefined behaviour to decrement a signed integer below its minimum value (and similarly to increment it above its maximum value). Your program could do literally anything.
For example, gcc compiles your program to an infinite loop with no output.
The proper approach is:
#include <limits.h>
#include <stdio.h>

int main()
{
    printf("INT_MIN = %d\n", INT_MIN);
    // etc.
}
In
printf("LONG_MIN: %d\n", b);
the format specifier is %d, which works for int. It should be changed to %ld to print long integers, and the same goes for
printf("LONG_MAX: %d\n", b);
These statements should be
printf("LONG_MIN: %ld\n", b);
and
printf("LONG_MAX: %ld\n", b);
The loop-based approach may not work on all compilers (e.g. gcc); an easier approach is to use limits.h.
Also check Integer Limits.
As already stated, the code you provided invokes undefined behavior. Thus it could calculate what you want or launch nuclear missiles ...
The reason for the undefined behavior is the signed integer overflow that you are provoking in order to "test" the range of the data types.
If you just want to know the range of int, long and friends, then limits.h is the place to look for. But if you really want ...
[..] to calculate the range [..]
... for whatever reason, then you could do so with the unsigned variant of the respective type (though see the note at the end), and calculate the maximum like so:
unsigned long calculate_max_ulong(void) {
    unsigned long follow = 0;
    unsigned long lead = 1;
    while (lead != 0) {
        ++lead;
        ++follow;
    }
    return follow;
}
This only results in an unsigned integer wrap (from the max value to 0), which is not classified as undefined behavior. With the result from above, you can get the minimum and maximum of the corresponding signed type like so:
assert(sizeof(long) == sizeof(unsigned long));
unsigned long umax_h = calculate_max_ulong() / 2u;
long max = umax_h;
long min = -max - 1;
Assuming two's complement for signed types and that the unsigned type has exactly one more value bit than the signed type. See §6.2.6.2/2 (N1570, for example) for further information.

How to multiply float with integers in C?

When I execute this code, it prints 1610612736:
void main() {
    float a = 3.3f;
    int b = 2;
    printf("%d", a * b);
}
Why, and how do I fix this?
Edit: it's not even a matter of int versus float; if I replace int b = 2; by float b = 2.0f it prints the same silly result.
The result of multiplying a float and an int is a float. Besides that, it gets promoted to double when passed to printf. You need a %a, %e, %f or %g format. The %d format is used to print int types.
Editorial note: the return value of main should be int. Here's a fixed program:
#include <stdio.h>

int main(void)
{
    float a = 3.3f;
    int b = 2;

    printf("%a\n", a * b);
    printf("%e\n", a * b);
    printf("%f\n", a * b);
    printf("%g\n", a * b);
    return 0;
}
and its output:
$ ./example
0x1.a66666p+2
6.600000e+00
6.600000
6.6
Alternatively, you could do
printf("%d\n", (int)(a * b));
and this would print the result you're (kind of) expecting.
You should always explicitly cast the arguments to match the format string; otherwise you may see some weird values printed.

If unsigned types should not have negative values, why is it negative when I turn on all the bits?

#include <stdio.h>
#include <math.h>
#include <limits.h>

int main(void)
{
    unsigned long x = 0;

    x = x ^ ~x;
    printf("%d\n", x);

    x = (unsigned long)pow(2, sizeof(x) * 8);
    printf("%d\n", x);

    x = ULONG_MAX;
    printf("%d\n", x);

    return 0;
}
I am using Code::Blocks 12.11 and MinGW 4.7.0-1 on Windows 7, and for some reason I am having trouble getting my variable x to display the largest possible value. Why does this happen? I am sure x = ULONG_MAX should work, but it also prints -1, and surely that is not right! I tried compiling outside of Code::Blocks as well.
What am I missing here?
You have to print unsigned variables with the %u conversion. A long argument additionally needs the l length modifier, hence %lu in this case.
printf("%lu\n", x);

RAND_MAX exhibiting unusual behavior upon recasting

I wish to generate random numbers between 0 and 1. (Obviously, this has application elsewhere.)
My test code:
#include <stdlib.h>
#include <stdio.h>
#include <math.h>

int main() {
    double uR;
    srand(1);
    for (int i = 0; i < 5; i++) {
        uR = rand() / (RAND_MAX + 1.000);
        printf("%d \n", uR);
    }
}
And here's the output after the code is compiled with GCC:
gcc -ansi -std=c99 -o rand randtest.c
./rand
0
-251658240
910163968
352321536
-528482304
Upon inspection, it turns out that casting the integer RAND_MAX to a double has the effect of changing its value from 2147483647 to -4194304. This occurs regardless of the method used to change RAND_MAX to type double; so far, I've tried (double)RAND_MAX and double max = RAND_MAX as well.
Why does the number's value change? How can I stop that from happening?
You can't print a double with %d. If you use %f, it works just fine.
You are printing a double value as a decimal integer - which is causing you confusion.
Use %.6f or something similar.
You are passing a double (uR) to printf where it expects a signed int. You should either cast it or print it with %f:
printf("%d \n", (int)uR);

C: Expected output

#include <stdio.h>

int main()
{
    long long x = 0x8ce4b16b;
    long long y = x << 4;

    printf("%lx, %lx, abc\n", x, y);
    return 0;
}
I'm getting
8ce4b16b, 0, abc
Is this okay?
However, if I change the printf to printf("%lld, %lx, abc\n", x, y); the output becomes:
2363797867, ce4b16b0, abc
What could explain this behaviour? :(
Using an incorrect format specifier in printf invokes undefined behaviour. The correct format specifier for long long is %lld (or %llx in hexadecimal).
Also make sure that you don't have signed integer overflow in your code, because that is undefined behaviour too.
You should use printf("%llx, %llx, abc\n", x, y);. %lx is for long, not long long.
