if ((x & 1) == 0)
    printf("EVEN!\n");
else
    printf("ODD!\n");
Why does x & 1 always give 1 when the number is odd? I mean, what is happening in memory during this operation? Can anyone explain it?
Every odd number in binary has the lowest-order bit set, and x & 1 tests exactly that bit. Hence the check is true precisely for the odd numbers.
If you don't see why the first statement is true then just convert a few decimal numbers to binary and it should be clear.
1 == 00000001b
3 == 00000011b
5 == 00000101b
7 == 00000111b
etc
Then just to be sure, do the same for a few even numbers to verify that the lowest order bit is always 0:
2 == 00000010b
4 == 00000100b
6 == 00000110b
8 == 00001000b
etc
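If you want to watch the pattern yourself rather than converting by hand, here is a minimal sketch that prints each number next to its lowest-order bit:

#include <stdio.h>

int main(void)
{
    /* Print each number next to its lowest-order bit. */
    for (int n = 1; n <= 8; n++)
        printf("%d -> lowest bit %d (%s)\n", n, n & 1,
               (n & 1) ? "odd" : "even");
    return 0;
}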
It comes down to binary representation. Every binary number is a sum of powers of 2, and the only power of 2 that is odd is 1 (2 to the power of 0). Take the number 12, for example: its binary representation is 1100. When you apply the mask of 1, it looks like this:
1100
&0001
0000
If you pick an odd number, for example 3, you have:
0011
&0001
0001
As you can see, only the last binary digit can make a number odd, and 1 is 00000001.
No matter what sizeof(int) is on a platform, the number 1 is represented with all bits set to 0 except the least significant bit, which is set to 1. Let's take the most common representation -- a 32-bit number. On such a platform, 1 is:
00000000 00000000 00000000 00000001
If you perform a bitwise AND operation of any other number with 1, all the bits except the least significant bit will always be 0. The only time the least significant bit of the result will be 1 is when the least significant bit of the other number is also 1.
Those numbers are:
00000000 00000000 00000000 00000001, which is 1
00000000 00000000 00000000 00000011, which is 3
00000000 00000000 00000000 00000101, which is 5
00000000 00000000 00000000 00000111, which is 7
and so on. As you can see from the pattern, the least significant bit of every odd number is 1, and the least significant bit of every even number is 0. Hence,
x & 1 is 1 for all odd values of x
x & 1 is 0 for all even values of x
Related
I'm reading Modern C (version Feb 13, 2018) and on page 42 it says that the bit with index 4 is the least significant bit. Shouldn't the bit with index 0 be the least significant bit? (Same question about the MSB.) Which is right? What's the correct terminology?
Their definition of "most significant bit" and "least significant bit" is misleading:
8-bit binary number:  1 1 1 1 0 0 0 0
Bit number:           7 6 5 4 3 2 1 0
                      |     |       |
                      |     |       least significant bit
                      |     |
                      |     least significant bit that is 1
                      |
                      most significant bit that is 1, and also simply the most significant bit
The book's definition does not align with common/typical/mainstream/correct usage. See Wikipedia, for instance:
In computing, the least significant bit (LSB) is the bit position in a binary integer giving the units value, that is, determining whether the number is even or odd.
The book, on the other hand, seems to consider only bits that are 1, so that in an 8-bit byte representing the number 16, which we can write:
00010000
the bit that is 1 has index 4 (it's b4 in the book's notation), and then it claims that that particular number's LSB is four.
The proper definition uses LSB to denote the bit whose place value is 1, i.e. the units bit, and with that definition the LSB is always the rightmost bit. This latter definition is more useful, and I really think the book is wrong.
They're using an unusual definition of LSB and MSB, which only refers to the bits that are set to 1. So in the case of 240, the first 1 bit is b4, not b0, because b0 through b3 are all 0.
I'm not sure why the book considers this definition of LSB/MSB to be useful. It's not generally interesting for integers, although it does come into play in floating point. Floating point numbers are scaled so integers above 1 have the low-order zero bits shifted away, and the exponent is incremented to make up for this (conversely, fractions have their high-order bits shifted away, and the exponent is decremented).
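To make the conventional indexing concrete, here is a minimal sketch that labels the bits of 240 as b7 down to b0; under the mainstream definition, b0 is the LSB whether or not it happens to be 1:

#include <stdio.h>

int main(void)
{
    unsigned char x = 240;  /* binary 11110000, the value from the example */

    /* Conventional indexing: bit index 0 is the LSB, set or not. */
    for (int i = 7; i >= 0; i--)
        printf("b%d = %d\n", i, (x >> i) & 1);
    return 0;
}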
Suppose I have a signed char variable num = 15 and I do num = ~num. Then, as per this post, we will get -16.
~(00001111) = (11110000)
But if I consider the MSB as the sign bit, shouldn't the answer be -112? How come this results in -16? Why are the second and third set bits from the left being ignored?
Can anyone please clarify.
EDIT
I want a more detailed explanation of why the following program results in -16 and not -112:
#include <stdio.h>

int main(void)
{
    char num = 15;
    num = ~num;
    printf("%d\n", num);
    return 0;
}
I expected it as 1(1110000) = -(1110000) = -112
~(00001111) = (11110000)
What you're doing is treating the MSb (Most Significant bit) as a flag that says "put a - sign" and then reading the rest of the bits normally. Instead, in two's complement, the MSb is a flag to do two things: put a - sign, and then NOT the value and add one, before reading out the rest of the bits.
This comes from the overflow/underflow nature of fixed-length bit values:
00000010 - 1 = 00000001 (2-1=1)
00000001 - 1 = 00000000 (1-1=0)
00000000 - 1 = 11111111 (0-1=-1)
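You can see this behaviour concretely; a minimal sketch, assuming the usual two's-complement signed char:

#include <stdio.h>

int main(void)
{
    signed char num = 15;   /* 00001111 */
    num = ~num;             /* 11110000 in two's complement */

    /* Prints -16: the pattern 11110000 is -128 + 64 + 32 + 16 = -16. */
    printf("~15 as signed char: %d\n", num);

    /* The same result via the identity ~n == -(n + 1). */
    printf("-(15 + 1) = %d\n", -(15 + 1));
    return 0;
}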
C allows for three different representations of signed integers, but the most common is "two's complement". However, I'll briefly discuss "one's complement" as well to illustrate how there is a relationship between them.
One's complement
One's complement integers are split into sign and value bits. To use the 8-bit representation of the integer 19 as an example:
S|64 32 16 8 4 2 1
0| 0 0 1 0 0 1 1 = 19
Using the bitwise complement operator ~ flips all of the bits of the integer, including the sign bit:
S|64 32 16 8 4 2 1
1| 1 1 0 1 1 0 0 = ~19
When the sign bit is set, the interpretation of 1 and 0 bits is reversed (0=on, 1=off), and the value is considered negative. This means the value above is:
-(16 + 2 + 1) = -19
Two's complement
Unlike one's complement, an integer is not divided into a sign bit and value bits. Instead, what is regarded as a sign bit adds -2^(b - 1) to the rest of the value, where b is the number of bits. To use the example of an 8-bit representation of ~19 again:
-128 64 32 16 8 4 2 1
1 1 1 0 1 1 0 0 = ~19
-128 + 64 + 32 + 8 + 4
= -128 + 108
= -(128 - 108)
= -20
The relationship between them
The value of -19 is 1 more than -20 arithmetically, and this follows a generic pattern in which any value of -n in two's complement is always one more than the value of ~n, meaning the following always holds true for a value n:
-n = ~n + 1
~n = -n - 1 = -(n + 1)
This means that you can simply look at the 5-bit value 15, negate it and subtract 1 to get ~15:
~15 = (-(15) - 1)
= -16
-16 for a 5-bit value in two's complement is represented as:
-16 8 4 2 1
1 0 0 0 0 = -16
Flipping the bits using the ~ operator yields the original value 15:
-16 8 4 2 1
0 1 1 1 1 = ~(-16) = -(-16 + 1) = -(-15) = 15
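The identity is easy to check mechanically; a minimal sketch that verifies it over a small range, assuming two's complement:

#include <stdio.h>

int main(void)
{
    /* Verify that ~n == -n - 1 holds across a range of values. */
    for (int n = -5; n <= 5; n++)
        printf("n = %2d   ~n = %3d   -n - 1 = %3d\n", n, ~n, -n - 1);
    return 0;
}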
Restrictions
I feel I should mention arithmetic overflow regarding two's complement. I'll use the example of a 2-bit signed integer to illustrate. There are 2^2=4 values for a 2-bit signed integer: -2, -1, 0, and 1. If you attempt to negate -2, it won't work:
-2 1
1 0 = -2
Writing +2 in plain binary yields 1 0, the same as the representation of -2 above. Because of this, +2 is not possible for a 2-bit signed integer. Using the equations above also reveals the same issue:
// Flip the bits to obtain the value of ~(-2)
~(-2) = -(-2 + 1)
~(-2) = 1
// Substitute 1 in place of ~(-2) to find the result of -(-2)
-(-2) = ~(-2) + 1
-(-2) = 1 + 1
-(-2) = 2
While this makes sense mathematically, the fact is that 2 is outside the representable range of values (only -2, -1, 0, and 1 are allowed). That is, adding 1 to 01 (1) results in 10 (-2). There's no way to magically add an extra bit in hardware to yield a new sign bit position, so instead you get an arithmetic overflow.
In more general terms, you cannot negate an integer in which only the sign bit is set with a two's complement representation of signed integers. On the other hand, you cannot even represent a value like -2 in a 2-bit one's complement representation because you only have a sign bit, and the other bit represents the value 1; you can only represent the values -1, -0, +0, and +1 with one's complement.
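To see the wrap-around without invoking undefined behaviour on a real int, you can simulate the narrow width with masking; this sketch models the 2-bit case above (the helper as_2bit is a hypothetical name introduced here):

#include <stdio.h>

/* Interpret the low 2 bits of v as a 2-bit two's-complement value. */
static int as_2bit(unsigned v)
{
    v &= 0x3u;                        /* keep only two bits          */
    return (v & 0x2u) ? (int)v - 4    /* sign bit set: subtract 2^2  */
                      : (int)v;
}

int main(void)
{
    unsigned neg2 = 0x2u;             /* bit pattern 10, i.e. -2     */
    unsigned negated = ~neg2 + 1u;    /* flip the bits and add one   */

    /* Negating -2 wraps straight back to the pattern 10, i.e. -2. */
    printf("-(-2) in 2 bits = %d\n", as_2bit(negated));
    return 0;
}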
Given the following C code:
int x = atoi(argv[1]);
int y = (x & -x);
if (x==y)
printf("Wow... they are the same!\n");
What values of x will result in "Wow... they are the same!" getting printed? Why?
It generally depends, but I will assume that your architecture represents signed numbers in two's complement (sometimes called U2 format); everything below is false if it doesn't. Let's have an example. We take 3, whose representation is:
0011
and -3, which is computed as:
~ 0011
+ 1
-------
1101
and we AND them:
1101
& 0011
------
0001
so:
0011 != 0001
That's what is happening under the hood: x is compared against x & -x. You have to find the numbers for which the two are equal; I don't know the full set upfront, but by working through examples like this you can predict the pattern.
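If you don't want to work the pattern out by hand, a small exploratory loop (a minimal sketch, assuming two's complement) makes it visible:

#include <stdio.h>

int main(void)
{
    /* Print x next to x & -x; the two match exactly when x is 0
       or a power of two. */
    for (int x = 0; x <= 16; x++)
        printf("x = %2d   x & -x = %2d%s\n", x, x & -x,
               (x == (x & -x)) ? "   <-- same" : "");
    return 0;
}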
The question is asking about the binary & operator and 2's complement arithmetic.
I would look at how numbers are represented in 2's complement and what the binary & operator does.
Assuming a 2's complement representation for negative numbers, the only values for which this is true are positive numbers of the form 2^n where n >= 0, and 0.
When you take the 2's complement of a number, you flip all bits and then add one. So the least significant bit of x and -x will always match. The next bit won't match unless the prior bit carried over, and the same holds for each bit after that.
An int is typically 32 bits; however, I'll use 5 bits in the following examples for simplicity.
For example, 5 is 00101. Flipping all bits gives us 11010, then adding 1 gives us 11011. Then 00101 & 11011 = 00001. The only bit that matches a set bit is the last one, so 5 doesn't work.
Next we'll try 12, which is 01100. Flipping the bits gives us 10011, then adding 1 gives us 10100. Then 01100 & 10100 = 00100. Because of the carry-over the third bit is set, however the second bit is not, so 12 doesn't work either.
So the most significant bit which is set won't match unless all lower bits carry over when 1 is added. This is true only for numbers with one bit set, i.e. powers of 2.
If we now try 8, which is 01000, flipping the bits gives us 10111 and adding 1 gives us 11000. And 01000 & 11000 = 01000. In this case the result has the same single bit set as the original number, so the condition holds.
Negative numbers cannot satisfy this condition because positive numbers have the most significant bit set to 0, while negative numbers have the most significant bit set to 1. So a bitwise AND of a number and its negative will always have the most significant bit set to 0, meaning this number cannot be negative.
0 is a special case since it is its own negative. 0 & 0 = 0, so it also satisfies this condition.
Another special case is the smallest number you can represent. In the case of a 5-bit number this is -16, which is represented by 10000. Flipping all the bits gives you 01111 and adding 1 gives you 10000, which is the same number. On the surface it seems this number also satisfies the condition, however this is an overflow condition and implementations may not handle this case correctly. See this link for more details.
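Putting this answer together: since x & -x isolates the lowest set bit, x == (x & -x) is the classic power-of-two test (with 0 also passing). A minimal sketch, with matches as a hypothetical helper name:

#include <stdio.h>
#include <stddef.h>

/* x & -x isolates the lowest set bit; it equals x only when x has
   at most one bit set, i.e. when x is 0 or a positive power of two. */
static int matches(int x)
{
    return x == (x & -x);
}

int main(void)
{
    int tests[] = { 0, 1, 2, 3, 8, 12, 16, -2, -16 };
    for (size_t i = 0; i < sizeof tests / sizeof tests[0]; i++)
        printf("%3d -> %s\n", tests[i],
               matches(tests[i]) ? "same" : "different");
    return 0;
}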
What does the following condition effectively check in C:
if(a & (1<<b))
I have been wracking my brains but I can't find a pattern.
Any help?
Also, I have seen this used a lot in competitive programming; could anyone explain when and why it is used?
It is checking whether the bth bit of a is set.
1<<b shifts a single set bit left by b positions, so that only the bit in position b is set.
Then & performs a bitwise AND. Since 1<<b has exactly one bit set, either that bit is also set in a, in which case the result is 1<<b (nonzero), or it isn't, in which case the result is 0.
In mathematical terms, this condition verifies whether a's binary representation contains 2^b. In terms of bits, it checks whether bit b of a is set to 1 (the least significant bit is numbered zero).
Recall that shifting 1 to the left by b positions produces a mask consisting of all zeros and a single 1 in position b counting from the right. The value of this mask is 2^b.
When you perform a bitwise AND with such a mask, the result is non-zero if, and only if, a's binary representation contains 2^b.
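In code, this test is usually wrapped in a small helper; a minimal sketch (bit_is_set is a hypothetical name introduced here):

#include <stdio.h>

/* Nonzero when bit b (counting from 0 at the least significant end)
   of a is set. */
static int bit_is_set(unsigned a, unsigned b)
{
    return (a & (1u << b)) != 0;
}

int main(void)
{
    unsigned a = 12;  /* binary 1100 */

    printf("bit 2 of 12: %d\n", bit_is_set(a, 2));  /* prints 1 */
    printf("bit 0 of 12: %d\n", bit_is_set(a, 0));  /* prints 0 */
    return 0;
}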
Let's say, for example, a = 12 (binary: 1100) and you want to check whether the third bit (binary digits are read from right to left) is set to 1. To do that you can use the bitwise & operator, which works as follows:
1 & 0 = 0
0 & 1 = 0
0 & 0 = 0
1 & 1 = 1
To check if the third bit in a is set to 1 we can do:
1100
0100 &
------
0100 (4 in decimal) True
If a = 8 (binary: 1000), on the other hand:
1000
0100 &
------
0000 (0 in decimal) False
Now to get the value 0100 we can left shift 1 by 2 (1 << 2), which appends two zeros on the right and gives us 100; in binary, leading zeros don't change the value, so 100 is the same as 0100.
I want to ask about a C operator from this code. My friend asked about it, but I have never seen this operator:
binfo_out.biSizeImage = ( ( ( (binfo_out.biWidth * binfo_out.biBitCount) + 31) & ~31) / 8) * abs(out_bi.biHeight);
What does the operator & ~31 mean? Can anybody explain this?
The & operator is a bitwise AND. The ~ operator is a bitwise NOT (i.e. inverts the bits). As 31 is binary 11111, ~31 is binary 1111111....111100000 (i.e. a number which is all ones, but has five zeroes at the end). Anding a number with this thus clears the least significant five bits, which (if you think about it) rounds down to a multiple of 32.
What does the whole thing do? Note it adds 31 first. This has the effect that the whole thing rounds something UP to the next multiple of 32.
This might be used, for instance, to calculate how many bits are needed to store something when you can only allocate storage in 32-bit quantities, as there is going to be some wastage in the last 32-bit word.
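As a quick check of the rounding behaviour, here is a minimal sketch that rounds a few values up to the next multiple of 32:

#include <stdio.h>

int main(void)
{
    /* (n + 31) & ~31 rounds n up to the next multiple of 32. */
    int samples[] = { 1, 31, 32, 33, 100 };
    for (int i = 0; i < 5; i++)
        printf("%3d -> %3d\n", samples[i], (samples[i] + 31) & ~31);
    return 0;
}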
31 in binary is 11111, so ~31 is five zeros (00000) preceded by all 1's. ANDing with it therefore zeroes the last 5 bits, i.e. it masks the last 5 bits out.
Here ~ is the NOT operator, i.e. it gives the one's complement, and & is the AND operator.
& is the bitwise AND operator. It ANDs every pair of corresponding bits of the operands on both of its sides. As an example, it does the following:
Let char be a type of 8 bits.
unsigned char a = 5;
unsigned char b = 12;
Their bit representation would be as follows:
a --> 0 0 0 0 0 1 0 1 // 5
b --> 0 0 0 0 1 1 0 0 // 12
And the bitwise AND of those would be:
a & b --> 0 0 0 0 0 1 0 0 // 4
Now, the ~ is the bitwise NOT operator, and it negates every single bit of the operand it prefixes. In an example, it does the following:
With the same a from the previous example, the ~a would be:
~a --> 1 1 1 1 1 0 1 0 // 250
Now with all this knowledge, x & ~31 would be the bitwise AND of x and ~31, where the bit representation of ~31 looks like this:
~31 --> 1111 1111 1111 1111 1111 1111 1110 0000 // -32 on my end
So the result would be whatever x has in its bits, with its last 5 bits cleared.
& ~31
means the bitwise AND of the operand on the left of & with the bitwise NOT of 31.
http://en.wikipedia.org/wiki/Bitwise_operation
The number 31 in binary is 11111, and ~ in this case is the unary one's complement operator. So assuming a 4-byte int:
~31 = 11111111 11111111 11111111 11100000
The & is the bitwise AND operator. So you're taking the value of:
((out_bi.biWidth * out_bi.biBitCount) + 31)
And performing a bitwise AND with the above value, which is essentially blanking the 5 low-order bits of the left-hand result.
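Tying it back to the original expression, here is a minimal sketch of the same computation with standalone variables (the values are made up for illustration; the names follow the BITMAPINFOHEADER fields used in the question):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Hypothetical sample values standing in for the struct fields. */
    long biWidth    = 101;  /* pixels per row         */
    long biBitCount = 24;   /* bits per pixel         */
    long biHeight   = 50;   /* rows; may be negative  */

    /* Round the row's bit count up to a multiple of 32 (so each
       scanline is 4-byte aligned), convert to bytes, then multiply
       by the number of rows. */
    long rowBits   = ((biWidth * biBitCount) + 31) & ~31L;
    long sizeImage = (rowBits / 8) * labs(biHeight);

    printf("biSizeImage = %ld bytes\n", sizeImage);
    return 0;
}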