I've been stuck for a couple of days on a bonus problem my professor gave:
compute x ^ y using only the ~ and & operators.
Assume the machine uses two's complement, 32-bit representations of integers.
I've tried many different combinations, and also tried to write out the logic of the operator ^, but it hasn't been working out. Any hints or help would be much appreciated!
The XOR operator can in fact be written as a combination of those two. I'll put this in two steps:
A NAND B = NOT(A AND B)
A XOR B = (A NAND (A NAND B)) NAND (B NAND (A NAND B))
As described in this answer on Math Stack Exchange:
https://math.stackexchange.com/questions/38473/is-xor-a-combination-of-and-and-not-operators
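As a minimal sketch in C (the helper names nand and xor_nand are mine, not from the assignment):

unsigned nand(unsigned a, unsigned b) { return ~(a & b); }  /* NOT(A AND B) */

unsigned xor_nand(unsigned a, unsigned b) {
    unsigned n = nand(a, b);                /* A NAND B */
    return nand(nand(a, n), nand(b, n));    /* the construction above */
}

Calling xor_nand(6u, 3u) yields 5, the same as 6 ^ 3.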
First, suppose you had each of the &, |, and ~ operators available to you. Could you implement ^ that way?
Next, see if you can find a way to express | purely in terms of & and ~.
Finally, combine those ideas together.
Good luck!
You could try drawing the truth tables for XOR, OR, and AND:
a b a^b
0 0 0
0 1 1
1 0 1
1 1 0
a b a|b
0 0 0
0 1 1
1 0 1
1 1 1
a b a&b
0 0 0
0 1 0
1 0 0
1 1 1
Next, find how to use | and & to build this.
a|b gives the first three rows correctly; a&b matches only the last row. If we negate a&b, it can be used as a mask to clear that unwanted line! So we could phrase XOR as:
(a or b) but not when (a and b)
There is no "but" in Boolean algebra, so it becomes an AND, which leads to this:
(a|b)&~(a&b)
Edit:
It was pointed out that I was answering the wrong question (| is not allowed). Use De Morgan's law to build the OR:
~(~a & ~b)
which gives the final answer:
~(~a&~b)&~(a&b)
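As a quick sanity check, that expression can be wrapped in a function and tested against the built-in ^ (a small sketch; the name xor2 is mine):

#include <assert.h>

int xor2(int a, int b) { return ~(~a & ~b) & ~(a & b); }  /* | built via De Morgan */

int main(void) {
    for (int a = 0; a < 16; a++)
        for (int b = 0; b < 16; b++)
            assert(xor2(a, b) == (a ^ b));  /* holds for every pair */
    return 0;
}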
I have a question: is bitwise ANDing transitive, particularly in C and C++?
Say res = (1 & 2 & 3 & 4). Is this the same as res1 = (1 & 2), res2 = (3 & 4), and
res = (res1 & res2)? Will the results be the same?
Yes, bitwise AND is transitive as you've used the term.
It's perhaps easier to think of things as a stack of bits. So if we have four 4-bit numbers, we can do something like this:
unsigned A = 0xB;
unsigned B = 0x3;
unsigned C = 0x1;
unsigned D = 0xF;
If we simply stack them up:
A 1 0 1 1
B 0 0 1 1
C 0 0 0 1
D 1 1 1 1
Then the result of a bitwise AND looks at one column at a time, and produces a 1 for that column if and only if there's a 1 for every line in that column, so in the case above, we get: 0 0 0 1, because the last column is the only one that's all ones.
If we split that in half to get:
A 1 0 1 1
B 0 0 1 1
A&B 0 0 1 1
And:
C 0 0 0 1
D 1 1 1 1
C&D 0 0 0 1
Then AND those intermediate results together:
A&B 0 0 1 1
C&D 0 0 0 1
End 0 0 0 1
Our result is still going to be the same: anywhere there's a zero in a column, that'll produce a zero in the intermediate result, which will produce a zero in the final result.
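In code, with the values from above (a minimal check; both expressions print 1, matching the 0 0 0 1 result):

#include <stdio.h>

int main(void) {
    unsigned A = 0xB, B = 0x3, C = 0x1, D = 0xF;
    printf("%x\n", A & B & C & D);      /* all at once: 1 */
    printf("%x\n", (A & B) & (C & D));  /* split in half first: 1 */
    return 0;
}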
The term you're looking for is associative. We generally wouldn't call a binary operator "transitive". And yes, & and | are both associative, by default. Obviously, you could overload the operators to be something nonsensical, but the default implementations will be associative. To see this, consider one-bit values a, b, and c and note that
(a & b) & c == a & (b & c)
because both will be 1 if and only if all three inputs are 1. And this is the operation that is being applied pointwise to each bit in your integer values. The same is true of |, with the roles of 1 and 0 swapped: both sides are 0 if and only if all three inputs are 0.
There are also some issues to consider if your integers are signed, as the behavior is dependent on the underlying bit representation.
Please explain why doing the following gives me the second bit of the number stored in i, in its internal representation.
(i & 2) / 2;
Doing i & 2 masks out all but the second bit in i. [1]
That means the expression evaluates to either 0 or 2 (binary 00 and 10 respectively).
Dividing that by 2 gives either 0 or 1 which is effectively the value of the second bit in i.
For example, if i = 7 i.e. 0111 in binary:
i & 2 gives 0010.
0010 is 2 in decimal.
2/2 gives 1 i.e. 0001.
[1] & is the bitwise AND operator in C.
i & 2 masks out all but the second bit.
Dividing it by 2 is the same as shifting down 1 bit.
e.g.
i = 01100010
(i & 2) == (i & 00000010) = 00000010
(i & 2) / 2 == (i & 2) >> 1 = 00000001
The & operator is bitwise AND: for each bit, the result is 1 only if the corresponding bits of both arguments are 1. Since the only 1 bit in the number 2 is the second-lowest bit, a bitwise AND with 2 will force all the other bits to 0. The result of (i & 2) is either 2 if the second bit in i is set, or 0 otherwise.
Dividing by 2 just changes the result to 1 instead of 2 when the second bit of i is set. It isn't necessary if you're just concerned with whether the result is zero or nonzero.
2 is 10 in binary. & is a bitwise conjunction. So, i & 2 isolates the second-from-the-end bit of i. And dividing by 2 is the same as bit-shifting by 1 to the right, which moves that bit into the last position, giving its value.
Actually, shifting to the right would be better here, as it clearly states your intent. So, this code would be normally written like this: (i & 0x02) >> 1
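The same idea generalizes to any bit position (a hedged sketch; bit 0 here is the least significant bit):

#include <stdio.h>

int main(void) {
    unsigned i = 7;                       /* binary 0111 */
    unsigned n = 1;                       /* position of the second-lowest bit */
    unsigned bit = (i >> n) & 1;          /* shift the bit down, then mask */
    printf("%u %u\n", bit, (i & 2) / 2);  /* prints 1 1: same value */
    return 0;
}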
What does the following condition effectively check in C :
if(a & (1<<b))
I have been wracking my brains but I can't find a pattern.
Any help?
Also I have seen this used a lot in competitive programming, could anyone explain when and why this is used?
It is checking whether the bth bit of a is set.
1<<b shifts a single set bit to the left b times, so that only the bit in position b is set.
Then the & performs a bitwise AND. Since we already know the only bit that is set in 1<<b, either that bit is also set in a, in which case we get 1<<b, or it isn't, in which case we get 0.
In mathematical terms, this condition checks whether a's binary representation contains 2^b (exponentiation here, not XOR). In terms of bits, it checks whether bit b of a is set to 1 (bits are numbered from zero, starting at the least significant bit).
Recall that shifting 1 to the left by b positions produces a mask consisting of all zeros and a single 1 in position b, counting from the right. The value of this mask is 2^b.
When you perform a bitwise AND with such a mask, the result is non-zero if, and only if, a's binary representation contains 2^b.
Let's say, for example, a = 12 (binary: 1100) and you want to check whether the third bit (bits are read from right to left) is set to 1. To do that, you can use the bitwise & operator, which works as follows:
1 & 0 = 0
0 & 1 = 0
0 & 0 = 0
1 & 1 = 1
To check if the third bit in a is set to 1 we can do:
1100
0100 &
------
0100 (4 in decimal) True
If a = 8 (binary: 1000), on the other hand:
1000
0100 &
------
0000 (0 in decimal) False
Now, to get the 0100 value, we can left-shift 1 by 2 (1 << 2), which appends two zeros on the right, giving 100; in binary, leading zeros don't change the value, so 100 is the same as 0100.
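Putting the whole check into runnable C, using the a = 12 example (a small sketch):

#include <stdio.h>

int main(void) {
    unsigned a = 12;    /* binary 1100 */
    unsigned b = 2;     /* zero-based position of the third bit */
    if (a & (1u << b))  /* 1100 & 0100 == 0100, non-zero */
        printf("bit %u of %u is set\n", b, a);
    else
        printf("bit %u of %u is clear\n", b, a);
    return 0;
}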
I came across a common programming interview problem: given a list of unsigned integers, find the one integer which occurs an odd number of times in the list. For example, if given the list:
{2,3,5,2,5,5,3}
the solution would be the integer 5, since it occurs 3 times in the list while the other integers occur an even number of times.
My original solution involved setting up a sorted array, then iterating through the array: For each odd element I would add the integer, while for each even element I would subtract; the end sum was the solution as the other integers would cancel out.
However, I discovered that a more efficient solution existed by simply performing an XOR on each element -- you don't even need a sorted array! That is to say:
2^3^5^2^5^5^3 = 5
I recall from a Discrete Structures class I took that the Associative Property applies to the XOR operation, and that's why this solution works:
a^a = 0
and:
a^a^a = a
Though I remember that the Associative Property works for XOR, I'm having trouble finding a logical proof for this property specific to XOR (most logic proofs on the Internet seem more focused on the AND and OR operations). Does anyone know why the Associative Property applies to the XOR operation?
I suspect it involves an XOR identity containing AND and/or OR.
The associative property says that (a^b)^c = a^(b^c). Since XOR is bitwise (the bits in the numbers are treated in parallel), we merely need to consider XOR for a single bit. Then the proof can be done by examining all possibilities:
abc (a^b) (a^b)^c (b^c) a^(b^c)
000 0 0 0 0
001 0 1 1 1
010 1 1 1 1
011 1 0 0 0
100 1 1 0 1
101 1 0 1 0
110 0 0 1 0
111 0 1 0 1
Since the third column, (a^b)^c, is identical to the fifth column, a^(b^c), the associative property holds.
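The same exhaustive check is easy to mechanize (a minimal sketch that walks the eight rows of the table):

#include <assert.h>

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            for (int c = 0; c <= 1; c++)
                assert(((a ^ b) ^ c) == (a ^ (b ^ c)));
    return 0;
}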
Since a ^ b == (~a & b) | (a & ~b), you can prove that
(a ^ b) ^ c = (~((~a & b) | (a & ~b)) & c) | (((~a & b) | (a & ~b)) & ~c)
and
a ^ (b ^ c) = (a & ~((~b & c) | (b & ~c))) | (~a & ((~b & c) | (b & ~c)))
are equal (expanding both yields the same sum-of-products form).
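Associativity (together with commutativity and a ^ a == 0) is exactly what makes the interview trick work: the pairs cancel no matter where they sit in the list. A sketch with the list from the question:

#include <stdio.h>

int main(void) {
    unsigned v[] = {2, 3, 5, 2, 5, 5, 3};
    unsigned acc = 0;
    for (int i = 0; i < 7; i++)
        acc ^= v[i];      /* even occurrences cancel out */
    printf("%u\n", acc);  /* prints 5 */
    return 0;
}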
I am receiving a number N where N is a 4-bit integer and I need to change its LSB to 1 without changing the other 3 bits in the number using C.
Basically, all must read XXX1.
So let's say n = 2; the binary would be 0010. I would change the LSB to 1, making the number 0011.
I am struggling with finding a combination of operations that will do this. I am working with: !, ~, &, |, ^, <<, >>, +, -, =.
This has really been driving me crazy and I have been playing around with >>/<< and ~ and starting out with 0xF.
Try
number |= 1;
This should set the LSB to 1 regardless of what the number is. Why? Because the bitwise OR (|) operator does exactly what its name suggests: it ORs the two numbers bit by bit. So if you have, say, 1010b and 1b (10 and 1 in decimal), the operator does this:
1 0 1 0
OR 0 0 0 1
= 1 0 1 1
And that's exactly what you want.
For your information, the
number |= 1;
statement is equivalent to
number = number | 1;
Use x = x | 0x01; to set the LSB to 1
A visualization
? ? ? ? ? ? ? ?
OR
0 0 0 0 0 0 0 1
----------------------
? ? ? ? ? ? ? 1
Therefore the other bits stay the same, while the LSB is set to 1.
Use the bitwise or operator |. It looks at two numbers bit by bit, and returns the number generated by performing an OR with each bit.
int n = 2;
n = n | 1;
printf("%d\n", n); // prints the number 3
In binary, 2 = 0010, 3 = 0011, and 1 = 0001
0010
OR 0001
-------
0011
If n is not 0
n | !!n
works.
If n is 0, then !n is what you want.
UPDATE
The fancy one-liner :P
n = n ? n | !!n : !n;
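For what it's worth, the one-liner produces the same result as the plain number |= 1 for every value; a quick check over all 4-bit inputs (a sketch, since the question says N is 4 bits):

#include <assert.h>

int main(void) {
    for (int n = 0; n < 16; n++)
        assert((n ? n | !!n : !n) == (n | 1));
    return 0;
}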