Overflow in char to int conversion - C

#include <stdio.h>
void func(int a);
int main() {
    char ch = 256;
    func(ch);
    return 0;
}
void func(int a) {
    printf("%d\n", a);
}
The output of the above program is 0.
Can anyone explain why?

256 is too large for the char type on your machine,
so the value wraps around to 0, as the C standard describes (from N1570):
6.3.1.3 Signed and unsigned integers
1. When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
2. Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
3. Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
The 256 literal is an int, and char is an integer type that is either signed or unsigned, so one of the latter two behaviors above will occur.
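A quick way to see which of these cases applies on your implementation is to print the char range from <limits.h> next to the converted value (a minimal sketch; the exact output depends on whether plain char is signed on your platform):

#include <stdio.h>
#include <limits.h>

int main(void) {
    char fits = 100;       /* case 1: the value fits, so it is unchanged */
    char overflowed = 256; /* does not fit: modular wraparound if char is
                              unsigned, implementation-defined if signed
                              (typically also a wraparound, here to 0) */
    printf("char range: %d..%d\n", CHAR_MIN, CHAR_MAX);
    printf("fits = %d, overflowed = %d\n", fits, overflowed);
    return 0;
}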

Because a char is limited to 256 distinct values, your output is 0.
http://www.asciitable.com/ has the complete character-set details; char is limited to 256 values:
unsigned char 1 byte -> 0 to 255
signed char 1 byte -> -128 to 127

Trying to store 256 in 8 bits: in binary, 256 is 1 0000 0000, which needs 9 bits. The leading 1 is cut off, so only the 8 zero bits remain, and the stored value is 0.
#include <stdio.h>
void func(int a);
int main() {
    char ch = 256;
    printf("%c\n", ch); // nothing prints: that's what happens
                        // when we print 'NUL'
    func(ch);
    return 0;
}
void func(int a) {
    printf("%d\n", a);
}
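To see the truncation without going through a char at all, you can mask off the low 8 bits yourself (a small sketch of the same bit-level view):

#include <stdio.h>

int main(void) {
    printf("%d\n", 256 & 0xFF); /* 256 is 1 0000 0000; the low 8 bits are 0 */
    printf("%d\n", 257 & 0xFF); /* 257 is 1 0000 0001; the low 8 bits are 1 */
    return 0;
}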

The maximum value of a (signed) char is 127. The value 256 is 129 more than 127, and when we add 1 to 127 (the maximum value of char) it wraps to the minimum value, -128. So adding 129 to 127 results in 0:
for char,
127 + 1 = -128
127 + 129 = 0
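A minimal sketch of that wraparound, assuming the typical two's-complement behavior (strictly speaking, converting an out-of-range value to a signed char is implementation-defined):

#include <stdio.h>

int main(void) {
    signed char c = 127; /* CHAR_MAX on systems with 8-bit chars */
    c = c + 1;           /* 128 is out of range; typically wraps to -128 */
    printf("%d\n", c);
    c = 127;
    c = c + 129;         /* 127 + 129 = 256; typically wraps to 0 */
    printf("%d\n", c);
    return 0;
}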
So for the below program the output is 1.
#include <stdio.h>
void func(int a);
int main() {
    char ch = 257;
    func(ch);
    return 0;
}
void func(int a) {
    printf("%d\n", a);
}

If you compile this with gcc it actually tells you why:
testo.c: In function ‘main’:
testo.c:6:15: warning: overflow in implicit constant conversion [-Woverflow]
char ch = 256;
^

Related

Addition of two chars, for example char a = 'A' and b = 'B'

Why does this program output a negative value?
#include <stdio.h>
int main() {
    char a = 'a', b = 'b', c;
    c = a + b;
    printf("%d", c);
}
Shouldn't these values be converted into ASCII then added up?
On the most common platforms, char is a signed type which can represent values from -128 to +127. You appear to be using a platform on which char has these properties.
Note that on some platforms, char is unsigned, so that the range of representable values is 0 to 255. However, you do not appear to be using such a platform.
Adding 'a' to 'b' is the same as adding their ASCII values (97 and 98 respectively), for a result of 195. This result is not representable as a char on your platform. On two's-complement implementations, the most significant bit is the sign bit, so you get -61.
Using unsigned char gives the result that you expect for printable 7-bit ASCII characters.
#include <stdio.h>
int main() {
    char a = 'a', b = 'b';
    unsigned char c;
    c = a + b;
    printf("%d\n", a);
    printf("%d\n", b);
    printf("%d\n", c);
}
Outputs:
97
98
195
a = 97
b = 98
a + b = 195
195 is out of the signed 8-bit range (-128 ... 127):
195 = 0b11000011
This equals -61 in signed 8-bit representation.
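You can check that reading directly (a sketch assuming 8-bit chars and the usual wraparound on conversion):

#include <stdio.h>

int main(void) {
    int sum = 'a' + 'b';                     /* 97 + 98 = 195, computed as int */
    signed char narrowed = (signed char)sum; /* implementation-defined;
                                                typically 195 - 256 = -61 */
    printf("%d %d\n", sum, narrowed);        /* typically prints: 195 -61 */
    return 0;
}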
As explained by 3Dave, char is (on most platforms) a signed type, and adding two variables of that type can overflow and produce a negative result. Even if you use unsigned char, the sum can still overflow, just not to a negative value.

Why does the itoa function return 32 bits if the size of the variable is 16 bits?

The size of short int is 2 bytes (16 bits) on my 64-bit processor with the MinGW compiler, but when I convert a short int variable to a binary string using the itoa function,
it returns a string of 32 bits.
#include <stdio.h>
#include <stdlib.h> // MinGW declares the non-standard itoa() here
int main() {
    char buffer[50];
    short int a = -2;
    itoa(a, buffer, 2); // converting a to binary
    printf("%s %d", buffer, (int)sizeof(a));
}
Output
11111111111111111111111111111110 2
The answer is in understanding C's promotion of short types (and chars, too!) to ints when those values are passed as parameters to a function, and in understanding the consequences of sign extension.
This may be more understandable with a very simple example:
#include <stdio.h>
int main() {
    printf("%08X %08X\n", (unsigned)(-2), (unsigned short)(-2));
    // Both are cast to 'unsigned' to avoid UB
    return 0;
}
/* Prints:
FFFFFFFE 0000FFFE
*/
Both parameters to printf() were, as usual, promoted to 32-bit ints. The left-hand value is -2 (decimal) in 32-bit notation. By using the cast to specify that the other parameter should not be subjected to sign extension, the printed value shows that it was treated as a 32-bit representation of the original 16-bit short.
itoa() is not available in my compiler for testing, but this should give the expected result:
itoa( (unsigned short)a, buffer, 2 );
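Since itoa() is non-standard and not available everywhere, here is a portable sketch that formats the low 16 bits by hand (bit_string is a made-up helper name, not a library function):

#include <stdio.h>

/* Write the low 'bits' bits of v into buf, most significant bit first. */
static void bit_string(unsigned v, int bits, char *buf) {
    for (int i = 0; i < bits; ++i)
        buf[i] = ((v >> (bits - 1 - i)) & 1u) ? '1' : '0';
    buf[bits] = '\0';
}

int main(void) {
    char buffer[17];
    short a = -2;
    bit_string((unsigned short)a, 16, buffer); /* the cast avoids sign extension */
    printf("%s\n", buffer);                    /* 1111111111111110 */
    return 0;
}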
Your problem is simple: refer to the itoa() manual and you will notice its prototype, which is
char * itoa(int n, char * buffer, int radix);
So it takes an int to be converted; you are passing a short int, so it is promoted from a 2-byte width to a 4-byte width. That's why it prints 32 bits.
To solve this problem, you can simply shift the string left by 16 positions with the following simple for loop (the extra iteration also copies the terminating null byte):
for (int i = 0; i < 17; ++i) {
    buffer[i] = buffer[i + 16];
}
and it will give the desired result. Here is an edited version of your code:
#include <stdio.h>
#include <stdlib.h>
int main() {
    char buffer[50];
    short int a = -2;
    itoa(a, buffer, 2);
    for (int i = 0; i < 17; ++i) {
        buffer[i] = buffer[i + 16];
    }
    printf("%s %d", buffer, (int)sizeof(a));
}
and this is the output:
1111111111111110 2

What is the behavior when a char is compared with an unsigned short in C?

When I run the following program:
#include <stdio.h>

void func(unsigned short maxNum, unsigned short di)
{
    if (di == 0) {
        return;
    }
    char i;
    for (i = di; i <= maxNum; i += di) {
        printf("%u ", i);
    }
    printf("\n");
}

int main(int argc, char **argv)
{
    func(256, 100);
    return 0;
}
It is an endless loop, but I wonder: when a char is compared with an unsigned short, is the char converted to unsigned short? In this situation the char overflows and never becomes larger than maxNum. I really do not know how to explain the result of this program.
Implementation-defined behavior, undefined behavior, and CHAR_MAX < 256
Let us sort out:
... unsigned short maxNum
... unsigned short di

char i;
for (i = di; i <= maxNum; i += di) {
    printf("%u ", i);
}
char may be a signed char or an unsigned char. Let us assume it is signed.
unsigned short may have the same range as unsigned when both are 16-bit. Yet it is more common to find unsigned short as 16-bit and int, unsigned as 32-bit.
Other possibilities exist, yet let us go forward with the above two assumptions.
i = di could be interesting if the value assigned was outside the range of a char, but 100 is always within char range, so i is 100.
Each argument in i <= maxNum goes through the usual integer promotions, so the signed char i first becomes an int 100 and the 16-bit maxNum becomes an int 256. As 100 < 256 is true, the loop body is entered. Notice that i can never reach a value as large as 256, since CHAR_MAX is less than 256, even on later iterations. This explains the endless loop we see. But wait, there's more.
With printf("%u ", i);, printf() expects a matching unsigned argument. But i, as a type with less range than int, gets promoted to an int with the same value as part of a ... argument. Usually printing with a mismatched specifier and type is undefined behavior, with one exception: when the value is representable in both the signed and the unsigned type. Since the first value, 100, is representable in both, all is OK.
At the loop end, i += di is like i = i + di;. The addition arguments go through the usual integer promotions and become int 100 added to int 100. That sum is 200. So far nothing strange. Yet assigning 200 to a signed char converts it, as it is out of range. This is implementation-defined behavior. The assigned value could have been 0 or 1 or 2.... Typically, the value is wrapped around ("modded") by adding or subtracting 256 until it is in range: 100 + 100 - 256 --> -56.
But the 2nd printf("%u ", i); attempts to print -56, and that is undefined behavior.
Tip: enable all warnings. Good compilers will point out many of these problems and save you time.
I got the answer from http://www.idryman.org/blog/2012/11/21/integer-promotion/ : both char and unsigned short are promoted to int, which explains the behavior and the result of this program.
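A minimal way to make the loop terminate, assuming the intent was simply to count up to maxNum: give i a type that can represent every value of an unsigned short, so no narrowing conversion happens on the increment:

#include <stdio.h>

void func(unsigned short maxNum, unsigned short di)
{
    if (di == 0) {
        return;
    }
    /* unsigned int holds every unsigned short value, so i += di
       never goes through a narrowing conversion. */
    for (unsigned int i = di; i <= maxNum; i += di) {
        printf("%u ", i);
    }
    printf("\n");
}

int main(void)
{
    func(256, 100); /* prints: 100 200 */
    return 0;
}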

Incorrect value printed by the 'pow' function in C

Why does the below code give 127 as output, when it should be 128? I have tried to figure it out, but I don't understand why it is 127.
#include <stdio.h>
#include <math.h>
int main()
{
    signed char ch;
    int size, bits;
    size = sizeof(ch);
    bits = size * 8;
    printf("totals bits is : %d\n", bits);
    printf("Range is : %u\n", (char)(pow((double)2, (double)(7))));
}
If you want 128 as the result, then cast the pow() result to int instead of char, e.g.
printf("Range is : %u\n", (int)(pow((double)2, (double)(7)))); /* this prints 128 */
Why does this
printf("Range is : %u\n", (char)(pow((double)2, (double)(7))));
print 127? pow((double)2, (double)7) is 128, but that whole result value is explicitly cast to char, and by default char is signed, ranging from -128 to +127. 128 is out of that range, so the conversion is implementation-defined, and here it produces 127.
As a side note, pow() is a floating-point function, as @Lundin suggested, so its result may not be exact. You can use
unsigned char ch = 1 << 7;
to get the same value in this particular case.
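To check whether your C library's pow() is exact here, you can print the raw result with full precision and compare the integer routes (a sketch; link with -lm on many systems, and the exact digits depend on your math library):

#include <stdio.h>
#include <math.h>

int main(void) {
    double r = pow(2.0, 7.0);
    printf("%.17g\n", r);           /* 128 if exact, else e.g. 127.99999999999999 */
    printf("%d\n", (int)r);         /* truncates: 127 if r is slightly below 128 */
    printf("%d\n", (int)(r + 0.5)); /* round before truncating: 128 */
    printf("%d\n", 1 << 7);         /* pure integer arithmetic: 128 */
    return 0;
}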

When assigning an unsigned char an integer greater than 255, why does it give a different output?

#include <stdio.h>
int main()
{
    unsigned char c = 292;
    printf("%d\n", c);
    return 0;
}
The above code gives the output "36".
I wanted to know why this happens.
Because 292 does not fit in a variable of type unsigned char.
I suggest you compile this program:
#include <stdio.h>
#include <limits.h>
int main()
{
    unsigned char c = 292;
    printf("%d %d\n", c, UCHAR_MAX);
    return 0;
}
and check the output:
prog.c: In function 'main':
prog.c:5:21: warning: unsigned conversion from 'int' to 'unsigned char' changes value from '292' to '36' [-Woverflow]
unsigned char c =292;
^~~
36 255
So, UCHAR_MAX in my system is 255, and that's the largest value you are allowed to assign to c.
292 just overflows c; because it is an unsigned type with range 0 to 255, the value wraps around, giving you 292 - (255 + 1) = 36.
The size of the unsigned char type is 1 byte and its range is 0 to 255,
but here the initializer is more than 255 (c = 292 > 255).
The value wraps past 255 back to 0 and keeps counting, so c stores 292 - 256 = 36.
It means you have effectively initialized c = 36.
Finally, printf() fetches the value from memory and prints 36.
When you convert 292 to binary, you get 1 0010 0100 (9 bits).
But a char variable can store only 1 byte (8 bits),
so it keeps the last 8 bits: 0010 0100, which equals 36 in decimal.
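Both views (modular reduction and keeping the low 8 bits) give the same number, which a short sketch can confirm (expect a -Woverflow warning on the initializer, as shown above):

#include <stdio.h>

int main(void) {
    unsigned char c = 292;      /* wraps around; compilers typically warn */
    printf("%d\n", c);          /* 36 */
    printf("%d\n", 292 % 256);  /* 36: modular reduction */
    printf("%d\n", 292 & 0xFF); /* 36: keep the low 8 bits */
    return 0;
}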
Hope this helps
