What will be the output in C? [duplicate] - c

This question already has answers here:
Is char signed or unsigned by default?
(6 answers)
Integer conversions(narrowing, widening), undefined behaviour
(2 answers)
Range of char type values in C
(6 answers)
Closed 5 years ago.
I am having trouble working out the output of this code. Please help me find the output of the following code segment.
#include <stdio.h>

int main() {
    char c = 4;
    c = c * 200;
    printf("%d\n", c);
    return 0;
}
I want to know why the output is 32. Would you please tell me? I want the exact calculation.

Warning: long-winded answer ahead. Edited to reference the C standard and to be clearer and more concise with respect to the question being asked.
The correct answer for why you have 32 has been given a few times. Explaining the math using modular arithmetic is completely correct but might make it a little harder to grasp intuitively if you are new to programming. So, in addition to the existing correct answers, here's a visualization.
Your char is an 8-bit type, so it is made up of a series of 8 zeros and ones.
Looking at the raw bits in binary, when unsigned (let's leave signed types out of it for a moment as it will just confuse the point) your variable 'c' can take on values in the following range:
00000000 -> 0
11111111 -> 255
Now, c*200 = 800. This is of course larger than 255. In binary 800 looks like:
00000011 00100000
To represent this in memory you need at least 10 bits (note the two 1's in the upper byte). As an aside, the leading zeros don't explicitly need to be stored since they have no effect on the number. However, the next larger data type is 16 bits, and it's easier to show consistently sized groupings of bits anyway, so there it is.
Since the char type is limited to 8 bits and cannot represent the result, there needs to be a conversion. ISO/IEC 9899:1999 section 6.3.1.3 says:
6.3.1.3 Signed and unsigned integers
1 When a value with integer type is converted to another integer type other than _Bool, if
the value can be represented by the new type, it is unchanged.
2 Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.
3 Otherwise, the new type is signed and the value cannot be represented in it; either the
result is implementation-defined or an implementation-defined signal is raised.
So, if your new type is unsigned, then following rule #2, we repeatedly subtract one more than the max value of the new type (256) from 800 until we end up in the range of the new type: 800 - 256 = 544, 544 - 256 = 288, 288 - 256 = 32. This behaviour also happens to effectively truncate the result; as you can see, the higher bits which could not be represented have been discarded.
00100000 -> 32
The existing answers explain using the modulo operation, where 800 % 256 = 32. This is simply math that gives the remainder of a division operation. When we divide 800 by 256 we get 3 (because 256 fits into 800 at most three times) plus a remainder of 32. This is essentially the same as applying rule #2 here.
Hopefully this clarifies why you get a result of 32. However, as has been correctly pointed out, if the destination type is signed we're looking at rule #3, which says the behaviour is implementation-defined. Since the standard also says that the plain char type you are using may be signed or unsigned (and that this is implementation-defined) your particular case is then implementation-defined. However, in practice you will typically see the same behaviour where you lose the higher bits and hence you will still generally get 32.
Extending this a bit, if you were to have a signed 8-bit destination type, and you were to run your code with c=c*250 instead, you would have:
00000011 11101000 -> 1000
and you will probably find that after the conversion to the smaller signed type the result is similarly truncated as:
11101000
which in a signed type is interpreted as -24 for most systems which use two's complement. Indeed this is what happens when I run this on gcc, but again this is not guaranteed by the language itself.
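If you want to check this yourself, here is a minimal sketch (not part of the original question) that contrasts an explicitly unsigned char, where rule #2 guarantees the modulo-256 reduction, with an explicitly signed char, where the conversion is implementation-defined:
#include <stdio.h>

int main(void) {
    unsigned char u = 4;                       /* explicitly unsigned, so rule #2 applies */
    u = u * 200;                               /* 800 reduced modulo 256 -> 32 */
    printf("unsigned char result: %d\n", u);   /* 32 */
    printf("800 %% 256 = %d\n", 800 % 256);    /* 32, the same reduction */

    signed char s = 4;                         /* explicitly signed: converting 800 is implementation-defined */
    s = s * 200;
    printf("signed char result:   %d\n", s);   /* typically 32 as well, but not guaranteed */
    return 0;
}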

Related

Range of datatypes in C

I was experimenting in C with the range of different data types and I stumbled into this problem. We know that the max value of the int data type is 2147483647. So I tried to assign a larger value to the int data type, 21474836481234. It was a way larger number than the maximum value that can be stored in an int. When I printed it, the output came out as "1234". I don't understand how this number was printed. Can anyone explain it to me? Thank you!
#include <stdio.h>
#include <limits.h>

void main()
{
    int a;
    a = 214748364812323;
    printf("a= %d", a);
}
It was a way larger number than the maximum value that can be stored in an int. When I printed it, the output came out as "1234". I don't understand how this number was printed.
The hexadecimal representation of 21474836481234 is 0x1388000004D2, which means that the low 32 bits is 000004D2, i.e. 1234 in decimal. When you assigned 0x1388000004D2 to a type that can only store 32 bits, it's the low 32 bits that get stored, so you end up with 000004D2.
Try doing the same thing with a smaller type. char is only 8 bits, so try storing a larger value like 0x10A and you'll see that the resulting value is only 0x0A.
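For instance, a small sketch along the lines of that suggestion (using unsigned char so the reduction is well defined; your compiler may warn about the narrowing):
#include <stdio.h>

int main(void) {
    unsigned char c = 0x10A;    /* 0x10A needs 9 bits; only the low 8 bits survive */
    printf("c = 0x%02X\n", c);  /* prints c = 0x0A */
    return 0;
}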
When given an out of range value for a signed type, "the result is implementation-defined or an implementation-defined signal is raised."
This setup gets 1234 for 21474836481234 and 12323 for 214748364812323. These are the int values produced by treating the least significant 32 bits of the inputs as int values.
max value of int datatype is 2147483647
That may be true for your program, but it's not true in general. The standard only guarantees that the range will be at least -32,767 .. +32,767 and at least the range of a signed short int (and of a signed char).
The Wikipedia page about C data types is quite good.
So when I printed it, the output came out as "1234". I don't understand how this number was printed. Can anyone explain it to me?
214748364812323 is a valid constant, yet it is outside OP's int range; its type is perhaps long or long long.
Assigning that wider than int type to an int incurs:
Otherwise, the new type is signed and the value cannot be represented in it; either the result is
implementation-defined or an implementation-defined signal is raised. C17dr § 6.3.1.3 3
a took on some value. OP reports 1234 for the constant 21474836481234 - that is valid C. Often it is the lower bits of the value, but it may differ on other implementations.
Save time. Enable all compiler warnings to be alerted to troublesome code like int a; a = 214748364812323;
Strictly speaking, the result of this conversion is implementation-defined in C (see the standard wording quoted above). For fun I tried it in gcc and got the following warning:
t.c: In function ‘main’:
t.c:5:1: warning: overflow in implicit constant conversion [-Woverflow]
a=214748364812323;
This is the compiler telling me that an overflow is occurring when trying to assign the value to a.
I get 12323 as the result.
Decimal 214748364812323 = 0xC35000003023. int on my system is 32 bits, so it looks like gcc kept only the least significant 32 bits of the value, 0x00003023, which is the typical truncation behavior.
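As a sketch of what "keeping the least significant 32 bits" means here (assuming a 32-bit int and a 64-bit long long):
#include <stdio.h>
#include <stdint.h>

int main(void) {
    long long big = 214748364812323LL;           /* 0xC35000003023 */
    int32_t low = (int32_t)(big & 0xFFFFFFFF);   /* keep only the low 32 bits: 0x00003023 */
    printf("%d\n", low);                         /* 12323 */
    return 0;
}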

Signed integer and unsigned integer [duplicate]

This question already has answers here:
Why Do We have unsigned and signed int type in C?
(4 answers)
Closed 4 years ago.
I am studying the C language and there are two different integer types, signed and unsigned.
Signed integers can represent both positive and negative numbers. Why do we need unsigned integers then?
One word answer is "range"!
When you declare a signed integer, it takes 4 bytes / 32 bits of memory (on a typical 32-bit system).
Out of those 32 bits, 1 bit is for the sign and the other 31 bits represent the magnitude. That means you can represent any number from -2,147,483,648 to 2,147,483,647, i.e. a magnitude of up to about 2^31.
What if you want to use 2,147,483,648? Go for an 8-byte type? But if you are not interested in negative numbers, isn't that a waste of 4 bytes?
If you use unsigned int, all 32 bits represent your number, as you don't need to spare 1 bit for the sign. Hence, with unsigned int, you can go from 0 to 4,294,967,295, i.e. 2^32 - 1.
The same applies to the other integer types.
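A quick way to see these ranges on your own system is to print the macros from <limits.h> (a small sketch):
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* The exact ranges are implementation-defined; these macros report your system's values. */
    printf("int:          %d .. %d\n", INT_MIN, INT_MAX);
    printf("unsigned int: 0 .. %u\n", UINT_MAX);
    return 0;
}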
The reason is that an integer always has a fixed size. On most systems, an int is 32 bits large.
So no matter whether you have a signed or an unsigned integer, it always takes the same amount of memory. And that's where signed and unsigned differ: the range.
Where a 32-bit unsigned integer has a range of 0 to 4294967295 (2³²-1), the signed integer has a range of -2147483648 to 2147483647.
unsigned types have an extra value bit, allowing them a maximum magnitude of 2^(CHAR_BIT * sizeof(type)) - 1 for positive values. This is why types like size_t, which are meant to store sizes of files, strings, arrays, etc., are unsigned.
With signed integers, one bit is reserved for the sign, so if an int is 32-bits long, you only get 31 bits to store the magnitude of the number. An unsigned int does not have this restriction; the MSB is used for magnitude as well, but it comes at the expense of no longer being able to be negative.
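To make that concrete, here is a small sketch (assuming a 32-bit two's-complement int, which virtually every current platform uses) that reads the same bit pattern, with only the MSB set, both ways:
#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned int pattern = 0x80000000u;                /* only the most significant bit set */
    int as_signed;
    memcpy(&as_signed, &pattern, sizeof as_signed);    /* reinterpret the bits without a conversion */
    printf("as unsigned: %u\n", pattern);              /* 2147483648: the MSB adds magnitude */
    printf("as signed:   %d\n", as_signed);            /* -2147483648: the MSB acts as the sign */
    return 0;
}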
Signed integer overflow is undefined by the C standard, whereas unsigned arithmetic is guaranteed to wrap around modulo 2^N, so incrementing the maximum value brings you back to zero. For example, the following code invokes undefined behavior in C:
int a = INT_MAX;
a++;
Whereas this is guaranteed to wrap-around back to zero:
unsigned int a = UINT_MAX;
a++;
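Here is a self-contained sketch of the well-defined unsigned case; the signed version is left as a comment because its behavior is undefined:
#include <stdio.h>
#include <limits.h>

int main(void) {
    unsigned int a = UINT_MAX;
    a++;                                /* well-defined: wraps around to 0 */
    printf("UINT_MAX + 1 -> %u\n", a);  /* prints 0 */

    /* int b = INT_MAX; b++;  -- signed overflow: undefined behavior, do not rely on it */
    return 0;
}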
Unsigned types are generally better for performing bit operations on.
There are two or three reasons, given that C tries to offer the programmer the greatest range of possibilities.
The first is that an unsigned integer can hold roughly double the (positive) range of its signed counterpart. And we don't want to waste a single bit, right?
The second is that a protocol, or some data structure a program must cope with, can use unsigned values, so it is handy to have that data type.
The third is that processors actually have unsigned types, so the C language makes them available. There may be algorithms which rely on wrap-around behavior, for example.
There can still be other motivations; I probably don't remember them all.
Personally, I make large use of unsigned integers in embedded applications. For example, using a single unsigned char as an index into a circular buffer of 256 elements makes it simple and fast to increment the index without checking for overflow, because when the index wraps, it does exactly what I want it to do (reset to zero). Again, there are probably many other situations; this is just the first that comes to my mind.
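A minimal sketch of that pattern (the names here are illustrative, not from the answer; it relies on unsigned char being 8 bits wide):
#include <stdio.h>

#define BUF_SIZE 256   /* must match the range of unsigned char for the trick to work */

static int buffer[BUF_SIZE];
static unsigned char head = 0;   /* wraps from 255 back to 0 automatically */

static void push(int value) {
    buffer[head] = value;
    head++;   /* no explicit "head %= BUF_SIZE" needed: unsigned char wraps modulo 256 */
}

int main(void) {
    for (int i = 0; i < 300; i++)   /* more pushes than slots, so the index wraps */
        push(i);
    printf("head is now %d\n", head);   /* 300 % 256 = 44 */
    return 0;
}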
It's all about memory. Unsigned types are used to represent a greater positive range without using a larger amount of memory.
Numbers are stored on the computer in binary form. Signed types almost always use a representation called two's complement, in which the most significant bit acts as the sign bit instead of contributing its usual positive weight.
This means that a signed type can only use N - 1 of its N bits for the magnitude, with the remaining bit determining the sign of the value, while an unsigned type can use all of its available bits to store the value, with the drawback of not being able to represent negative numbers.

assigning 128 to char variable in c

The output comes out to be the 32-bit 2's complement of 128, that is, 4294967168. How?
#include <stdio.h>

int main()
{
    char a;
    a = 128;
    if (a == -128)
    {
        printf("%u\n", a);
    }
    return 0;
}
Compiling your code with warnings turned on gives:
warning: overflow in conversion from 'int' to 'char' changes value from '128' to '-128' [-Woverflow]
which tells you that the assignment a = 128; isn't well defined on your platform.
The standard says:
6.3.1.3 Signed and unsigned integers
1 When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
2 Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
3 Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
So we can't know what is going on as it depends on your system.
However, if we do some guessing (and note this is just a guess):
128 in 8 bits would be 0b1000.0000
so when you call printf, the char is converted to int and there will be a sign extension like:
0b1000.0000 ==> 0b1111.1111.1111.1111.1111.1111.1000.0000
which, printed as unsigned, represents the number 4294967168.
The sequence of steps that got you there is something like this:
You assign 128 to a char.
On your implementation, char is signed char and has a maximum value of 127, so 128 overflows.
Your implementation interprets 128 as 0x80. It uses two’s-complement math, so (int8_t)0x80 represents (int8_t)-128.
For historical reasons (relating to the instruction sets of the DEC PDP minicomputers on which C was originally developed), C promotes signed types shorter than int to int in many contexts, including variadic arguments to functions such as printf(), which aren’t bound to a prototype and still use the old argument-promotion rules of K&R C instead.
On your implementation, int is 32 bits wide and also two’s-complement, so (int)-128 sign-extends to 0xFFFFFF80.
When you make a call like printf("%u", x), the runtime interprets the int argument as an unsigned int.
As an unsigned 32-bit integer, 0xFFFFFF80 represents 4,294,967,168.
The "%u\n" format specifier prints this out without commas (or other separators) followed by a newline.
This is all legal, but so are many other possible results. The code is buggy and not portable.
Make sure you don’t overflow the range of your type! (Or if that’s unavoidable, overflow for unsigned scalars is defined as modular arithmetic, so it’s better-behaved.) The workaround here is to use unsigned char, which has a range from 0 to (at least) 255, instead of char.
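A sketch of that workaround, if you really do want to store 128 in a one-byte variable:
#include <stdio.h>

int main(void) {
    unsigned char a = 128;              /* fits: unsigned char holds at least 0..255 */
    printf("%u\n", (unsigned int)a);    /* prints 128, no implementation-defined conversion */
    return 0;
}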
First of all, as I hope you understand, the code you've posted is full of errors, and you would not want to depend on its output. If you were trying to perform any of these manipulations in a real program, you would want to do so in some other, more well-defined, more portable way.
So I assume you're asking only out of curiosity, and I answer in the same spirit.
Type char on your machine is probably a signed 8-bit quantity. So its range is from -128 to +127. So +128 won't fit.
When you try to jam the value +128 into a signed 8-bit quantity, you probably end up with the value -128 instead. And that seems to be what's happening for you, based on the fact that your if statement is evidently succeeding.
So next we try to take the value -128 and print it as if it were an unsigned int, which on your machine is evidently a 32-bit type. It can hold numbers in the range 0 to 4294967295, which obviously does not include -128. But unsigned integers typically behave nicely modulo their range, so if we add 4294967296 to -128 we get 4294967168, which is precisely the number you saw.
Now that we've worked through this, let's resolve in future not to jam numbers that won't fit into char variables, or to print signed quantities with the %u format specifier.

Cyclic Nature of char datatype [duplicate]

This question already has answers here:
Simple Character Interpretation In C
(9 answers)
Closed 5 years ago.
I have been learning C and came across a topic called the cyclic nature of data types in C.
Here is an example:
char c = 125;
c = c + 10;
printf("%d", c);
The output is -121.
The logic given was:
125 + 1 = 126
125 + 2 = 127
125 + 3 = -128
125 + 4 = -127
125 + 5 = -126
125 + 6 = -125
125 + 7 = -124
125 + 8 = -123
125 + 9 = -122
125 + 10 = -121
This is due to the cyclic nature of the char data type. Why does char exhibit this cyclic nature? How is it possible for char?
On your system, char is signed char. When a signed integral type overflows, the result is undefined: it may or may not be cyclic. Although on most machines, which perform 2's complement arithmetic, you will observe this cyclic behaviour.
The char data type is a signed type on your implementation. As such it can store values in the range -128 to 127. When you store a value greater than 127, you end up with a value that might be negative or positive, depending on how large the stored value is and what kind of platform you are working on.
Signed integer overflow is undefined behavior in C; only unsigned integers are guaranteed to wrap around.
char isn’t special in this regard (besides its implementation-defined signedness), all conversions to signed types usually exhibit this “cyclic nature”. However, there are undefined and implementation-defined aspects of signed overflow, so be careful when doing such things.
What happens here:
In the expression
c=c+10
the operands of + are subject to the usual arithmetic conversions. They include integer promotion, which converts all values to int if all values of their type can be represented as an int. This means, the left operand of + (c) is converted to an int (an int can hold every char value1)). The result of the addition has type int. The assignment implicitly converts this value to a char, which happens to be signed on your platform. An (8-bit) signed char cannot hold the value 135 so it is converted in an implementation-defined way 2). For gcc:
For conversion to a type of width N, the value is reduced modulo 2^N to be within range of the type; no signal is raised.
Your char has a width of 8, 2^8 is 256, and 135 ≡ -121 (mod 256) (cf. e.g. 2's complement on Wikipedia).
You didn’t say which compiler you use, but the behaviour should be the same for all compilers (there aren’t really any non-2’s-complement machines anymore and with 2’s complement, that’s the only reasonable signed conversion definition I can think of).
Note, that this implementation-defined behaviour only applies to conversions, not to overflows in arbitrary expressions, so e.g.
int n = INT_MAX;
n += 1;
is undefined behaviour and used for optimizations by some compilers (e.g. by optimizing such statements out), so such things should definitely be avoided.
A third case (unrelated here, but for the sake of completeness) is unsigned integer types: no overflow occurs (there are exceptions, however, e.g. bit-shifting by more than the width of the type); the result is always reduced modulo 2^N for a type with precision N.
Related:
A simple C program output is not as expected
Allowing signed integer overflows in C/C++
1 At least for 8-bit chars, signed chars, or ints with higher precision than char, so virtually always.
2 The C standard says (C99 and C11 (n1570) 6.3.1.3 p.3) “[…] either the result is implementation-defined or an implementation-defined signal is raised.” I don’t know of any implementation raising a signal in this case. But it’s probably better not to rely on that conversion without reading the compiler documentation.
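To see the two behaviours side by side, here is a small sketch (the signed result is implementation-defined, the unsigned one is guaranteed):
#include <stdio.h>

int main(void) {
    signed char sc = 125;
    sc = sc + 10;                  /* 135 converted to signed char: implementation-defined, typically -121 */
    printf("signed char:   %d\n", sc);

    unsigned char uc = 125;
    uc = uc + 10;                  /* 135 fits in unsigned char, so no reduction is needed */
    printf("unsigned char: %d\n", uc);

    printf("135 reduced into [-128, 127]: %d\n", 135 - 256);   /* -121 */
    return 0;
}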

C bit field variables are printing unexpected values

#include <stdio.h>

struct m
{
    int parent:3;
    int child:3;
    int mother:2;
};

void main()
{
    struct m son = {2, -6, 5};
    printf("%d %d %d", son.parent, son.child, son.mother);
}
Can anybody please help me understand why the output of the program is 2 2 1?
Taking out all but the significant bits for the fields shown:
parent: 3 bits (1-sign bit + 2 more), value 010, result 2
child: 3 bits (1-sign bit + 2 more), value 010, result 2
mother: 2 bits (1 sign bit + 1 more), value 01, result 1
Details
It bears pointing out that your structure fields are declared as int bit-field values. By C99-§6.7.2,2, the following types are all equivalent: int, signed, or signed int. Therefore, your structure fields are signed. By C99-§6.2.6.2,2, one of your bits shall be consumed in representing the "sign" of the variable (negative or positive). Further, the same section states that, excluding the sign bit, the remaining bits' representation must correspond to the associated unsigned type of the remaining bit count. C99-§6.7.2,1 clearly defines how each of these bits represents a power of 2. Therefore, the only bit that is normally used as the sign bit is the most significant bit (it's the only one that is left, but I'm quite sure that if this is an inaccurate interpretation of the standard I'll hear about it in due time). That you are assigning a negative number as one of your test values suggests you may be aware of this, but many people newly exposed to bit fields are not. Thus, it bears noting.
The following sections of the C99 standard are referenced in the remainder of this answer. The first deals with promotions to different types, the next with the valuation and potential value change (if any). The last is important in understanding how a bit-field's int type is determined.
C99-§6.3.1.1: Boolean, characters, and integers
2: If an int can represent all values of the original type (as restricted by the width, for a bit-field), the value is converted to an int; otherwise, it is converted to an unsigned int. These are called the integer promotions. All other types are unchanged by the integer promotions.
C99-§6.3.1.3 Signed and unsigned integers
When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
C99-§6.7.2.1 Structure and Union Specifiers
10: A bit-field is interpreted as having a signed or unsigned integer type consisting of the specified number of bits. If the value 0 or 1 is stored into a nonzero-width bit-field of type _Bool, the value of the bit-field shall compare equal to the value stored; a _Bool bit-field has the semantics of a _Bool.
Consider the regular int bit representation of your test values. The following are on a 32-bit int implementation:
value : s bits
2 : 0 0000000 00000000 00000000 00000010 <== note bottom three bits
-6 : 1 1111111 11111111 11111111 11111010 <== note bottom three bits
5 : 0 0000000 00000000 00000000 00000101 <== note bottom two bits
Walking through each of these, applying the requirements from the standard references above.
int parent:3: The first field is a 3-bit signed int, and is being assigned the decimal value 2. Does the rvalue type, int, encompass the lvalue type, int:3? Yes, so the types are good. Does the value 2 fit within the range of the lvalue type? Well, 2 can easily fit in an int:3, so no value mucking is required either. The first field works fine.
int child:3: The second field is also a 3-bit signed int, this time being assigned the decimal value -6. Once again, does the rvalue type (int) fully encompass the lvalue type (int:3)? Yes, so again the types are fine. However, the minimum bit count required to represent -6, a signed value, is 4 bits (1010), accounting for the most significant bit as the sign bit. Therefore the value -6 is out of range of the allowable storage of a 3-bit signed bit-field, and the result is implementation-defined per §6.3.1.3-3.
int mother:2: The final field is a 2-bit signed int, this time being assigned the decimal value 5. Once again, does the rvalue type (int) fully encompass the lvalue type (int:2)? Yes, so again the types are fine. However, once again we're faced with a value that cannot fit within the target type. The minimum bit count needed to represent a signed positive 5 is four (0101). We only have two to work with. Therefore, the result is once again implementation-defined per §6.3.1.3-3.
Therefore, if I take this correctly, the implementation in this case simply hacks off all but the bits needed to fill the declared bit depth. And the result of that hackery is what you now have: 2 2 1.
Note
It is entirely possible I flipped the order of the promotion incorrectly (it is easy for me to get lost in the standard, as I am dyslexic and flip things in my head periodically). If that is the case, I would ask anyone with a stronger interpretation of the standard to please point this out to me and I will amend the answer accordingly.
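As a sketch of that "hack off the upper bits" interpretation (typical two's-complement behaviour, not something the standard guarantees):
#include <stdio.h>

int main(void) {
    /* Keep only the field's low bits, then sign-extend from the field's own top bit. */
    int child = -6;
    int low3 = child & 0x7;       /* low 3 bits of ...11111010 -> 010 = 2 */
    if (low3 & 0x4)
        low3 -= 8;                /* would sign-extend if bit 2 were set; here it is not */
    printf("child as a 3-bit field:  %d\n", low3);   /* 2 */

    int mother = 5;
    int low2 = mother & 0x3;      /* low 2 bits of ...00000101 -> 01 = 1 */
    if (low2 & 0x2)
        low2 -= 4;                /* would sign-extend if bit 1 were set; here it is not */
    printf("mother as a 2-bit field: %d\n", low2);   /* 1 */
    return 0;
}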
The bit-field sizes for child and mother are too small to contain the constant values you are assigning to them, so they overflow.
Note that during compilation you will get the following warnings:
test.c: In function ‘main’:
test.c:18:11: warning: overflow in implicit constant conversion
test.c:18:11: warning: overflow in implicit constant conversion
That is because you have defined your fields as 3 bits of an int rather than a whole int. The overflow means the values cannot be stored in 3 bits of memory.
So if you use the following structure definition, you will avoid the warnings during compilation and you will get the expected values:
struct m
{
    int parent;
    int child;
    int mother;
};
You can't represent -6 in only 3 bits; similarly, you can't represent 5 in only two bits.
Is there a reason why parent, child, and mother need to be bit fields?
child requires a minimum of 4 bits to hold -6 (1010), and mother requires a minimum of 4 bits to hold 5 as a signed value (0101).
Only the last 3 bits of child are kept, so it prints 2, and only the last 2 bits of mother are kept, so it prints 1.
You may think that child requires only 3 bits to store -6, but it actually requires 4 bits including the sign bit. Negative values are stored in 2's complement form.
The binary equivalent of 6 is 110.
The one's complement of 6 is 001.
The two's complement of 6 is 010.
The sign bit is added at the MSB.
So the value of -6 is 1010. The 3-bit field keeps only the low 3 bits (010), dropping the sign bit.
First of all:
5 is 101 in binary, so it doesn't fit in 2 bits.
-6 also doesn't fit in 3 bits.
Your bit fields are too small. Also: is this just an exercise, or are you trying to (prematurely) optimize? You're not going to be very happy with the result... it'll be quite slow.
I finally found the reason after applying thought :-)
2 is represented in binary as 0010
-6 as 1010
and five as 0101
Now 2 can be represented by using only 3 bits so it gets stored as 010
-6 would get stored in 3 bits as 010
and five would get stored in 2 bits as 01
So the final output would be 2 2 1
Thanks everyone for your replies!
