What does the C standard say about bitwise XOR of two signed integers? [duplicate]

This question already has answers here:
Are the results of bitwise operations on signed integers defined?
(4 answers)
Closed 9 years ago.
Which ANSI C standard says something about bitwise XOR of two signed integers? I tried to read the standard, but it is vast.
Is XOR of two signed integers valid according to the C standard? What will happen to the sign bit of the result?

Bitwise operations operate on bits, that is, on the actual 1's and 0's of the data. Therefore, sign is irrelevant; the result of the operation is what you'd get from writing out the series of 1's and 0's that represents each integer and then XORing each corresponding pair of bits. (Even type information is irrelevant, as long as the operands are integers of some sort, as opposed to doubles, pointers, etc.)
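As a minimal sketch (assuming a 32-bit two's-complement int, which is what virtually all current implementations use), the following XORs two signed ints and prints the raw bit patterns, so you can see that the sign bit is XORed like any other bit:

#include <stdio.h>

int main(void) {
    int a = 12;      /* 0x0000000C on a 32-bit int               */
    int b = -5;      /* 0xFFFFFFFB in two's complement           */
    int r = a ^ b;   /* XOR acts on every bit, sign bit included */

    /* Cast to unsigned so %X prints the full bit pattern safely. */
    printf("a = %d (0x%08X)\n", a, (unsigned int)a);
    printf("b = %d (0x%08X)\n", b, (unsigned int)b);
    printf("r = %d (0x%08X)\n", r, (unsigned int)r);   /* -9 (0xFFFFFFF7) here */
    return 0;
}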

Related

What will be the output in C? [duplicate]

This question already has answers here:
Is char signed or unsigned by default?
(6 answers)
Integer conversions(narrowing, widening), undefined behaviour
(2 answers)
Range of char type values in C
(6 answers)
Closed 5 years ago.
I am having trouble finding the output of this code. Please help me work out the output of the following code segment.
#include <stdio.h>
int main() {
    char c = 4;
    c = c * 200;
    printf("%d\n", c);
    return 0;
}
I want to know why the output is 32. Would you please tell me? I want the exact calculation.
Warning: long-winded answer ahead. Edited to reference the C standard and to be clearer and more concise with respect to the question being asked.
The correct answer for why you have 32 has been given a few times. Explaining the math using modular arithmetic is completely correct but might make it a little harder to grasp intuitively if you are new to programming. So, in addition to the existing correct answers, here's a visualization.
Your char is an 8-bit type, so it is made up of a series of 8 zeros and ones.
Looking at the raw bits in binary, when unsigned (let's leave signed types out of it for a moment, as they would just confuse the point), your variable 'c' can take on values in the following range:
00000000 -> 0
11111111 -> 255
Now, c*200 = 800, which is of course larger than 255. In binary, 800 looks like:
00000011 00100000
To represent this in memory you need at least 10 bits (see the two 1's in the upper byte). As an aside, the leading zeros don't explicitly need to be stored since they have no effect on the number. However, the next largest data type is 16 bits, and it's easier to show consistently sized groupings of bits anyway, so there it is.
Since the char type is limited to 8 bits and cannot represent the result, there needs to be a conversion. ISO/IEC 9899:1999 section 6.3.1.3 says:
6.3.1.3 Signed and unsigned integers
1 When a value with integer type is converted to another integer type other than _Bool, if
the value can be represented by the new type, it is unchanged.
2 Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.
3 Otherwise, the new type is signed and the value cannot be represented in it; either the
result is implementation-defined or an implementation-defined signal is raised.
So, if your new type is unsigned, then following rule #2 we repeatedly subtract one more than the max value of the new type (256) from 800 until we end up in the range of the new type, at 32. This behaviour also happens to effectively truncate the result: as you can see, the higher bits which could not be represented have been discarded.
00100000 -> 32
The existing answers explain using the modulo operation, where 800 % 256 = 32. This is simply math that gives the remainder of a division operation. When we divide 800 by 256 we get 3 (because 256 fits into 800 at most three times) plus a remainder of 32. This is essentially the same as applying rule #2 here.
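As a small illustration of rule #2 (assuming an 8-bit unsigned char, i.e. UCHAR_MAX == 255), here is the same calculation with an explicitly unsigned char, where the conversion is fully defined and matches the modulo view:

#include <stdio.h>

int main(void) {
    unsigned char c = 4;
    c = c * 200;                /* 800 reduced per rule #2: 800 - 256 - 256 - 256 = 32 */
    printf("%d\n", c);          /* 32 */
    printf("%d\n", 800 % 256);  /* 32: the remainder view gives the same answer */
    return 0;
}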
Hopefully this clarifies why you get a result of 32. However, as has been correctly pointed out, if the destination type is signed we're looking at rule #3, which says the behaviour is implementation-defined. Since the standard also says that the plain char type you are using may be signed or unsigned (and that this is implementation-defined), your particular case is implementation-defined. In practice, however, you will typically see the same behaviour where you lose the higher bits, and hence you will still generally get 32.
Extending this a bit, if you were to have a signed 8-bit destination type, and you were to run your code with c=c*250 instead, you would have:
00000011 11101000 -> 1000
and you will probably find that after the conversion to the smaller signed type the result is similarly truncated as:
11101000
which in a signed type is interpreted as -24 on most systems, which use two's complement. Indeed, this is what happens when I run it with gcc, but again this is not guaranteed by the language itself.
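Here is a sketch of that variant. Both the signedness of plain char and the out-of-range conversion are implementation-defined, so the results in the comments are only what a typical two's-complement implementation with a signed 8-bit char produces:

#include <stdio.h>

int main(void) {
    char c = 4;
    c = c * 250;          /* 1000 does not fit in 8 bits: rule #3, implementation-defined */
    printf("%d\n", c);    /* typically -24 if char is signed, 232 if it is unsigned */
    return 0;
}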

What's the difference between int and unsigned-int value representations in bits in C [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
I noticed by experiment that in an unsigned int the value of a number is represented in 32 bits even if the number only needs 1 bit of space; the rest of the bits take 0 as a value. Whereas in an int, the value seems to be put into just the bits it needs, with 1 more bit added for the sign. Can someone please explain this to me?
Sure. You're mistaken.
The C standard specifies that, as corresponding unsigned and signed integer types, unsigned int and (signed) int require the same amount of storage (C2011 6.2.5/6). The standard does not specify the exact sizes of these types, but 32 bits is a common choice. If the representation of an unsigned int takes 32 bits in a given C implementation, then so does the representation of that implementation's int.
Furthermore, although C allows a choice from among 3 alternative styles of negative-value representation, the correspondence between signed and unsigned integer representations is defined so that the value bits in the representation of an int -- those that are neither padding bits nor the one sign bit -- represent the same place value as the bits in the same position of the corresponding unsigned integer type (C2011, 6.2.6.2/2). Thus, the representation of a signed integer with non-negative value can be reinterpreted as the corresponding unsigned integer type without changing its numeric value.
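As a minimal sketch of that correspondence (assuming, as is typical, that there are no padding bits), the object representation of a non-negative int can be reinterpreted as an unsigned int without changing the value:

#include <stdio.h>
#include <string.h>

int main(void) {
    int s = 1;          /* non-negative signed value */
    unsigned int u;

    /* Copy the object representation of s into an unsigned int. */
    memcpy(&u, &s, sizeof u);

    printf("signed:   %d\n", s);   /* 1 */
    printf("unsigned: %u\n", u);   /* 1: same value bits, sign bit is 0 */
    return 0;
}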
Machines use fixed-length representations for numbers (at least common machines). Say your machine is 32-bit; that means it uses 32 bits for numbers and their arithmetic.
Usually you have an unsigned representation that can represent numbers from 0 to 2^32-1 (but every number uses 32 bits) and a 32-bit two's-complement representation for numbers from -2^31 to 2^31-1 (such a representation uses the most significant bit for the sign). But whatever the encoding, a number always uses the same number of bits regardless of its value.
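A quick sketch that prints those ranges for whatever implementation compiles it, using the limits from <limits.h> rather than assuming an exact width:

#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("unsigned int: 0 .. %u\n", UINT_MAX);
    printf("int:          %d .. %d\n", INT_MIN, INT_MAX);
    return 0;
}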
The answer is very language-dependent, and in some languages (like C++) it also depends on the target CPU architecture.
Many languages store both int and unsigned int in 32 bits of space.
"BigInt" support for numbers of unknown size exists in many languages; such types behave much as you describe, expanding based on the requirements of the number being stored.
Some languages, like Ruby, automatically convert between the two as the math operations demand.

What happens when I apply the unary "-" operator to an unsigned integer? [duplicate]

This question already has answers here:
Assigning negative numbers to an unsigned int?
(14 answers)
Closed 6 years ago.
This should be a pretty simple question but I can't seem to find the answer in my textbook and can't find the right keywords to find it online.
What does it mean when you have a negative sign in front of an unsigned int?
Specifically, if x is an unsigned int equal to 1, what is the bit value of -x?
Per the C standard, arithmetic on unsigned integers is performed modulo 2^N, where N is the bit width. So, for a 32-bit integer, the negation will be taken mod 2^32 = 4294967296.
For a 32-bit number, then, the value you'll get if you negate a number n is going to be 0-n = 4294967296-n. In your specific case, assuming unsigned int is 32 bits wide, you'd get 4294967296-1 = 4294967295 = 0xffffffff (the number with all bits set).
The relevant text in the C standard is in §6.2.5/9:
a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type
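A minimal sketch, assuming a 32-bit unsigned int:

#include <stdio.h>

int main(void) {
    unsigned int x = 1;
    unsigned int y = -x;     /* reduced modulo 2^32 -> 4294967295 */

    printf("%u\n", y);       /* 4294967295 */
    printf("0x%08X\n", y);   /* 0xFFFFFFFF: all bits set */
    return 0;
}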
It will wrap around, i.e. if your unsigned int is 16 bits, -x will be 65535. The bit value will be 1111111111111111 (16 ones).
If unsigned int is 32 bits, -x will be 4294967295.
When you apply the "-", what you get is the two's complement of the integer.

Number of bits in an integer [duplicate]

This question already has answers here:
Is int in C Always 32-bit?
(8 answers)
Is the size of C "int" 2 bytes or 4 bytes?
(13 answers)
Closed 9 years ago.
The number of bits in an integer in C is compiler- and machine-dependent. What is meant by this? Does the number of bits in an int vary with different C compilers and different processor architectures? Can you illustrate what it means?
This Wikipedia article gives a good overview: http://en.wikipedia.org/wiki/Word_(data_type)
Types such as integers are represented in hardware. Hardware changes, and so do the sizes of certain types. The more bits in a type, the larger the number you can store (for integers) or the more precision (for floating-point types).
There are some types which specify an exact number of bits, such as int16_t from <stdint.h>.
It means exactly what it says and what you said in your own words.
For example, on some compilers and with some platforms, an int is 32 bits, on other compilers and platforms an int is 64 bits.
I remember long ago when I was programming on the Commodore Amiga, there were two different C compilers available from two different manufacturers. On one compiler, an int was 16 bits, on the other compiler an int was 32 bits.
You can use sizeof to determine how many bytes an int is on your compiler.
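For instance, a minimal sketch that reports the width of int on whatever implementation compiles it; CHAR_BIT comes from <limits.h>, and exact-width types such as int16_t come from <stdint.h> (they are optional in the standard, but present on common platforms):

#include <stdio.h>
#include <limits.h>
#include <stdint.h>

int main(void) {
    printf("int is %zu bytes, %zu bits\n",
           sizeof(int), sizeof(int) * CHAR_BIT);
    printf("int16_t is %zu bits\n",
           sizeof(int16_t) * CHAR_BIT);   /* exactly 16 wherever it exists */
    return 0;
}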

Left shift and right shift operations using negative numbers on positive integers [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Left shifting with a negative shift count
On a 16-bit compiler, why does 32<<-3 or 32>>-1 result in 0?
What is the major reason for such behaviour?
From K&R:
The shift operators << and >> perform left and right shifts of their left operand by the number of bit positions given by the right operand, which must be non-negative.
In other words, a negative shift count is not permitted; in standard C it is undefined behaviour, and the 0 you observe is just what that particular compiler happens to produce.
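A minimal sketch of the well-defined cases: the shift count must be non-negative and less than the width of the promoted left operand, so the commented-out lines below are undefined behaviour and merely happened to produce 0 on that 16-bit compiler:

#include <stdio.h>

int main(void) {
    printf("%d\n", 32 << 3);   /* 256: well-defined */
    printf("%d\n", 32 >> 1);   /* 16:  well-defined */

    /* Undefined behaviour: negative shift counts.
     *   printf("%d\n", 32 << -3);
     *   printf("%d\n", 32 >> -1);
     */
    return 0;
}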
