Incorrect value printed by the 'pow' function in C

Why does the code below give 127 as output when it should be 128? I have tried to figure it out, but I don't understand why it's 127.
#include <stdio.h>
#include <math.h>

int main()
{
    signed char ch;
    int size, bits;

    size = sizeof(ch);
    bits = size * 8;
    printf("total bits is : %d\n", bits);
    printf("Range is : %u\n", (char)(pow((double)2, (double)(7))));
}

If you want 128 as the result, cast the pow() result to int instead of char, e.g.
printf("Range is : %u\n", (int)(pow((double)2, (double)(7)))); /* this prints 128 */
Why does this
printf("Range is : %u\n", (char)(pow((double)2, (double)(7))));
print 127? pow((double)2, (double)7) is 128, but that whole result is explicitly cast to char, and plain char is signed by default here, with a range of -128 to +127. 128 does not fit, so you end up with 127 (strictly speaking, converting an out-of-range floating-point value to a signed integer type is undefined behavior; clamping to 127 is simply what this platform does).
Side note: pow() is a floating-point function, as @Lundin suggested. In this particular case you can use
unsigned char ch = 1 << 7;
to get the same value with pure integer arithmetic.
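For reference, the range can also be read directly from <limits.h>, avoiding both pow() and the cast. A minimal sketch (a rewrite of the question's program, not code from the original answer):
#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* CHAR_BIT is the number of bits in a byte; SCHAR_MIN/SCHAR_MAX
       are the limits of signed char, so no floating point is needed */
    printf("total bits is : %d\n", (int)(sizeof(signed char) * CHAR_BIT));
    printf("Range is : %d to %d\n", SCHAR_MIN, SCHAR_MAX);
    return 0;
}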

Related

Why does the itoa function return 32 bits if the size of the variable is 16 bits?

The size of short int is 2 bytes (16 bits) on my 64-bit processor with the MinGW compiler, but when I convert a short int variable to a binary string using the itoa function, it returns a string of 32 bits:
#include <stdio.h>
#include <stdlib.h> /* itoa (non-standard) */

int main(){
    char buffer[50];
    short int a = -2;

    itoa(a, buffer, 2); /* converting a to binary */
    printf("%s %zu", buffer, sizeof(a));
}
Output
11111111111111111111111111111110 2
The answer is in understanding C's promotion of short types (and chars, too!) to int when those values are passed as arguments to a function, and understanding the consequences of sign extension.
This may be more understandable with a very simple example:
#include <stdio.h>

int main() {
    // Both are cast to unsigned types to avoid UB from sign-extended values
    printf( "%08X %08X\n", (unsigned)(-2), (unsigned short)(-2) );
    return 0;
}
/* Prints:
FFFFFFFE 0000FFFE
*/
Both parameters to printf() were, as usual, promoted to 32-bit ints. The left-hand value is -2 (decimal) in 32-bit notation. By using the cast to specify that the other parameter should not be subjected to sign extension, the printed value shows that it was treated as a 32-bit representation of the original 16-bit short.
itoa() is not available in my compiler for testing, but this should give the expected result:
itoa( (unsigned short)a, buffer, 2 );
Your problem is simple: refer to the itoa() manual and you will notice its prototype, which is
char * itoa(int n, char * buffer, int radix);
So it takes an int to be converted, and you are passing a short int; it is therefore converted from 2-byte width to 4-byte width, and that is why it prints 32 bits.
To solve this problem, you can simply shift the string left by 16 positions with the following simple for loop:
for (int i = 0; i < 17; ++i) {
    buffer[i] = buffer[i + 16]; /* copy the low 16 digits plus the terminator down */
}
That gives the expected result. Here is the edited version of your code:
#include <stdio.h>
#include <stdlib.h>

int main(){
    char buffer[50];
    short int a = -2;

    itoa(a, buffer, 2);
    /* drop the upper 16 digits that came from the promotion to int */
    for (int i = 0; i < 17; ++i) {
        buffer[i] = buffer[i + 16];
    }
    printf("%s %zu", buffer, sizeof(a));
}
And this is the output:
1111111111111110 2
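Since itoa() is non-standard, a portable alternative is to print the bits of the short directly. A minimal sketch, assuming a 16-bit short (the helper name print_short_bits is made up for illustration):
#include <stdio.h>

/* Print exactly the 16 bits of a short, most significant bit first.
   Converting through unsigned short avoids sign extension. */
static void print_short_bits(short v)
{
    unsigned short u = (unsigned short)v;
    for (int i = 15; i >= 0; --i)
        putchar(((u >> i) & 1) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    print_short_bits(-2); /* prints 1111111111111110 */
    return 0;
}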

Char automatically converts to int (I guess)

I have the following code:
char temp[] = { 0xAE, 0xFF };
printf("%X\n", temp[0]);
Why is the output FFFFFFAE, and not just AE?
I tried
printf("%X\n", 0b10101110);
And the output is correct: AE.
Suggestions?
The answer you're getting, FFFFFFAE, is a result of the char data type being signed on your platform. If you check the value, you'll notice that it's equal to -82, where -82 + 256 = 174, or 0xAE in hexadecimal.
The reason you get the correct output when you print 0b10101110 or even 174 directly is that you're using literal int values, whereas in your example the 0xAE is first stored in a signed char, where it is reinterpreted as a negative two's-complement value.
So in other words:
0 = 0 = 0x00
127 = 127 = 0x7F
128 = -128 = 0xFFFFFF80
129 = -127 = 0xFFFFFF81
174 = -82 = 0xFFFFFFAE
255 = -1 = 0xFFFFFFFF
256 = 0 = 0x00
To fix this "problem", you can declare the same array you initially did; just make sure to use an unsigned char array, and your values should print as you expect.
#include <stdio.h>
#include <stdlib.h>

int main()
{
    unsigned char temp[] = { 0xAE, 0xFF };

    printf("%X\n", temp[0]);
    printf("%d\n\n", temp[0]);

    printf("%X\n", temp[1]);
    printf("%d\n\n", temp[1]);

    return EXIT_SUCCESS;
}
Output:
AE
174
FF
255
https://linux.die.net/man/3/printf
According to the man page, %x or %X accepts an unsigned integer, so printf will read a full int's worth of bytes (typically 4) for that argument.
In any case, under most architectures you can't pass a parameter that is smaller than a word (i.e. int or long) in size, and in your case it will be converted to int.
In the first case, you're passing a char, so it is converted to int. Both are signed, so sign extension is performed, which is why you see the leading FFs.
In your second example, you're actually passing an int all the way, so no conversion is performed.
If you'd try:
printf("%X\n", (char) 0b10101110);
You'd see that FFFFFFAE will be printed.
When you pass a smaller-than-int data type (as char is) to a variadic function (as printf(3) is), the parameter undergoes the default argument promotions and is converted to int, preserving its value. What you observe is sign extension: as the most significant bit of the char variable is set, it is replicated into the three bytes needed to complete an int.
To solve this and keep the data in 8 bits, you have two possibilities:
Allow your signed char to convert to an int (with sign extension), then mask off bits 8 and above:
printf("%X\n", (int) my_char & 0xff);
Declare your variable as unsigned char, so the promoted value is non-negative:
unsigned char my_char;
...
printf("%X\n", my_char);
This code causes undefined behaviour. The argument to %X must have type unsigned int, but you supply char.
Undefined behaviour means that anything can happen; including, but not limited to, extra F's appearing in the output.
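For completeness, here is a minimal sketch contrasting the behaviours discussed in these answers; the %hhX length modifier (standard since C99) tells printf to treat the promoted argument as an unsigned char:
#include <stdio.h>

int main(void)
{
    char temp[] = { 0xAE, 0xFF };

    printf("%X\n", temp[0]);                  /* typically FFFFFFAE: sign-extended (strictly, UB) */
    printf("%X\n", temp[0] & 0xFF);           /* AE: high bits masked off */
    printf("%hhX\n", (unsigned char)temp[0]); /* AE: printed as an unsigned char */
    return 0;
}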

Character to binary function doesn't work as expected

I have made a function to translate a number to its binary form:
size_t atobin (char n)
{
    size_t bin = 0, pow;

    for (size_t c = 1; n > 0; c++)
    {
        pow = 1;
        for (size_t i = 1; i < c; i++) /* this loop computes the power of 10 */
            pow *= 10;
        bin += (n % 2) * pow;
        n /= 2;
    }
    return bin;
}
It works great for numbers 1 to 127, but for greater numbers (128 to 255) the result is 0... I've tried using the type long long unsigned int for each variable, but the result was the same. Does anyone have an idea why?
Plain char in C is signed on most implementations (strictly, whether it is signed is implementation-defined).
char is (almost always) 8 bits, and for signed char the MSB is used for the sign, so only 7 bits are left for the magnitude.
(0111 1111)2 = (127)10 is therefore the maximum value your function can work with, because you are passing a type that can hold at most 127.
If you use unsigned char, the MSB is not used as a sign bit; all 8 bits carry the value, giving a maximum of (1111 1111)2 = (255)10.
For a signed char, the range is -128 to +127.
For an unsigned char, the range is 0 to +255.
So even if you make the type of the passed parameter unsigned char, the maximum value it can hold is +255.
A bit more detail:
Q) What happens when you assign a value greater than 127 to your char parameter?
It is a signed char by default, 8 bits wide, and it can't hold the value. So what happens?
The result is implementation-defined.
Suppose the value is 130. In binary that is 10000010. On most implementations (two's complement) this becomes -126, so that will be the value of n.
Then n > 0 fails, the loop is never entered, and the function returns 0.
Now if you make the parameter unsigned char, it can hold values between 0 and 255 (inclusive), which is exactly what you want here.
Note:
Q) What happens when a value greater than 255 is stored in an unsigned char?
The value is reduced modulo (the maximum value an unsigned char can hold + 1), which is 256. The result of that modulo operation is what gets stored in the unsigned char.
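Putting that together, a minimal sketch of the fix: the same function with the parameter type changed to unsigned char, which is the only edit needed:
#include <stdio.h>

/* Same algorithm as above, but with an unsigned char parameter,
   so inputs 128..255 no longer turn negative. */
size_t atobin(unsigned char n)
{
    size_t bin = 0, pow;

    for (size_t c = 1; n > 0; c++)
    {
        pow = 1;
        for (size_t i = 1; i < c; i++)
            pow *= 10;
        bin += (n % 2) * pow;
        n /= 2;
    }
    return bin;
}

int main(void)
{
    printf("%zu\n", atobin(150)); /* prints 10010110 */
    return 0;
}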

Why does assigning an integer greater than 255 to an unsigned char give a different output?

#include <stdio.h>

int main()
{
    unsigned char c = 292;

    printf("%d\n", c);
    return 0;
}
The code above gives the output "36".
I wanted to know why this happens.
Because 292 does not fit in a variable of type unsigned char.
I suggest you compile this program:
#include <stdio.h>
#include <limits.h>

int main()
{
    unsigned char c = 292;

    printf("%d %d\n", c, UCHAR_MAX);
    return 0;
}
and check the output:
prog.c: In function 'main':
prog.c:5:21: warning: unsigned conversion from 'int' to 'unsigned char' changes value from '292' to '36' [-Woverflow]
unsigned char c =292;
^~~
36 255
So, UCHAR_MAX in my system is 255, and that's the largest value you are allowed to assign to c.
292 just overflows c; because it's an unsigned type with range 0 to 255, the value wraps around, giving you 292 - (255 + 1) = 36.
The size of the char data type is 1 byte, and for unsigned char its range is 0 to 255.
But here the initializer is more than 255 (c = 292 > 255).
Since the values wrap around modulo 256, c stores 292 - 256 = 36.
In effect you have initialized c = 36.
Finally, printf() fetches that value from memory and prints 36.
When you convert 292 to binary, you get 1 0010 0100 (9 bits).
But unfortunately, an unsigned char variable can store only 1 byte (8 bits).
So it keeps the low 8 bits, i.e. 0010 0100, which is equal to 36 in decimal.
Hope this helps
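Both explanations describe the same reduction. A minimal sketch showing that the modulo view and the low-8-bits view agree:
#include <stdio.h>

int main(void)
{
    int value = 292;
    unsigned char c = (unsigned char)value; /* explicit cast, so no -Woverflow warning */

    /* wrapping modulo 256 and keeping the low 8 bits give the same answer */
    printf("%d %d %d\n", c, value % 256, value & 0xFF); /* 36 36 36 */
    return 0;
}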

I have used the `unsigned char` type in order to get a data range up to 255, but that doesn't happen

I have simple code to convert from decimal to binary, as follows:
unsigned char number = 150, reminder = 0;

while (number > 0) {
    printf("number=%d, ", number);
    reminder = number % 2;
    number /= 2;
    printf("reminder=%d\n", reminder);
}
printf("\n");
The problem is that when I input a decimal number greater than 127, it gives me a binary number representing a negative value.
How can it give me a negative number when I used unsigned char, not plain char?
(online example)
Note: I'm using Visual Studio 2010.
For printf, use %u instead of %d:
%d is for printing signed numbers and %u is for unsigned numbers.
I think you need the following:
printf("reminder=%u\n", (unsigned int)reminder);
Does that work for you? For an explanation of why, see the discussion on this question.
Basically, a char is 8 bits long (1 byte), and it can hold values -128 to 127 as a signed char and 0 to 255 as an unsigned char; on any compiler, whether 16-bit or 32-bit, it occupies only 1 byte.
Now, the story behind getting a negative number is explained below.
Given the ranges of char shown above (both signed and unsigned), whenever a value exceeds the maximum, it wraps around to the start of the range.
For example, for a signed char, if the value exceeds 127 you get a negative number, i.e. the start of the signed char range: for 128 it becomes -128, for 129 it becomes -127, etc. (strictly, this is implementation-defined for signed types).
The same wrap-around happens for unsigned char.
Sample program:
#include <stdio.h>

int main()
{
    char ch1 = 128;          /* out of range: becomes -128 on most platforms */
    unsigned char ch2 = 257; /* out of range: wraps to 257 - 256 = 1 */

    printf("%d\n%d\n", ch1, ch2);
    return 0;
}
Output:
-128
1
Sample program 2:
#include <stdio.h>

int main(void)
{
    unsigned char ch;

    for (ch = 0; ch <= 255; ch++)
    {
        printf("%d-%c", ch, ch);
    }
}
This program should print the ASCII values and the corresponding characters. But it won't: it is an infinite loop. The reason is that ch has been defined as an unsigned char, and an unsigned char can never hold a value bigger than 255, so the condition ch <= 255 is always true. When ch is 255 and we perform ch++, it wraps around to 0 instead of reaching 256, and the loop starts over from the beginning.
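A common fix, sketched below, is to use a wider loop counter so the comparison with 255 can actually fail:
#include <stdio.h>

int main(void)
{
    /* int can hold 256, so i <= 255 eventually becomes false */
    for (int i = 0; i <= 255; i++)
    {
        printf("%d-%c\n", i, (unsigned char)i);
    }
    return 0;
}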
Hope this helps.
