I came across a common programming interview problem: given a list of unsigned integers, find the one integer which occurs an odd number of times in the list. For example, if given the list:
{2,3,5,2,5,5,3}
the solution would be the integer 5, since it occurs 3 times in the list while the other integers each occur an even number of times.
My original solution involved setting up a sorted array and then iterating through it: on each odd occurrence of an integer I would add it to a running sum, and on each even occurrence I would subtract it; the end sum was the solution, since the other integers would cancel out.
However, I discovered that a more efficient solution existed by simply performing an XOR on each element -- you don't even need a sorted array! That is to say:
2^3^5^2^5^5^3 = 5
I recall from a Discrete Structures class I took that the Associative Property is applicable to the XOR operation, and that's why this solution works:
a^a = 0
and:
a^a^a = a
Though I remember that the Associative Property works for XOR, I'm having trouble finding a logical proof for this property specific to XOR (most logic proofs on the Internet seem more focused on the AND and OR operations). Does anyone know why the Associative Property applies to the XOR operation?
I suspect it involves an XOR identity containing AND and/or OR.
The associative property says that (a^b)^c = a^(b^c). Since XOR is bitwise (the bits in the numbers are treated in parallel), we merely need to consider XOR for a single bit. Then the proof can be done by examining all possibilities:
a b c | a^b | (a^b)^c | b^c | a^(b^c)
------+-----+---------+-----+--------
0 0 0 |  0  |    0    |  0  |    0
0 0 1 |  0  |    1    |  1  |    1
0 1 0 |  1  |    1    |  1  |    1
0 1 1 |  1  |    0    |  0  |    0
1 0 0 |  1  |    1    |  0  |    1
1 0 1 |  1  |    0    |  1  |    0
1 1 0 |  0  |    0    |  1  |    0
1 1 1 |  0  |    1    |  0  |    1
Since the (a^b)^c column is identical to the a^(b^c) column, the associative property holds.
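If you prefer to let the machine do the case analysis, here is a minimal brute-force check in C (a sketch of my own, not part of the original answer) that verifies the identity for every 8-bit triple; since XOR is bitwise, this covers the single-bit argument as well:

#include <assert.h>
#include <stdio.h>

int main(void) {
    /* Exhaustively verify (a^b)^c == a^(b^c) for all 8-bit values. */
    for (unsigned a = 0; a < 256; a++)
        for (unsigned b = 0; b < 256; b++)
            for (unsigned c = 0; c < 256; c++)
                assert(((a ^ b) ^ c) == (a ^ (b ^ c)));
    puts("XOR is associative on 8-bit values");
    return 0;
}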
As long as a ^ b == (~a & b) | (a & ~b), you can prove that
(a ^ b) ^ c = ~((~a & b) | (a & ~b)) & c | ((~a & b) | (a & ~b)) & ~c
and
a ^ (b ^ c) = a & ~((~b & c) | (b & ~c)) | ~a & ((~b & c) | (b & ~c))
are equal (expand both and compare term by term).
I've been stuck for a couple of days on a bonus problem my professor gave:
give x^y using only ~ and &
Assume the machine uses two's complement, 32-bit representations of integers.
I've tried many different combinations, and also tried to write out the logic of the operator ^, but it hasn't been working out. Any hints or help would be much appreciated!
The XOR operator can in fact be written as a combination of those two. I'll put this in two steps:
A NAND B = NOT(A AND B)
A XOR B = (A NAND (A NAND B)) NAND (B NAND (A NAND B))
As described before on math:
https://math.stackexchange.com/questions/38473/is-xor-a-combination-of-and-and-not-operators
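Translating that NAND construction into C gives a small sketch (nand and xor_via_nand are illustrative names of mine, not standard functions); the assertion loop checks it against the built-in ^ for all 8-bit pairs:

#include <assert.h>

/* NAND built from only ~ and & */
static unsigned nand(unsigned a, unsigned b) { return ~(a & b); }

/* A XOR B = (A NAND (A NAND B)) NAND (B NAND (A NAND B)) */
static unsigned xor_via_nand(unsigned a, unsigned b) {
    unsigned n = nand(a, b);
    return nand(nand(a, n), nand(b, n));
}

int main(void) {
    for (unsigned a = 0; a < 256; a++)
        for (unsigned b = 0; b < 256; b++)
            assert(xor_via_nand(a, b) == (a ^ b));
    return 0;
}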
First, suppose you had each of the &, |, and ~ operators available to you. Could you implement ^ that way?
Next, see if you can find a way to express | purely in terms of & and ~.
Finally, combine those ideas together.
Good luck!
You could try to draw the truth tables for XOR, OR, and AND:
a b a^b
0 0 0
0 1 1
1 0 1
1 1 0
a b a|b
0 0 0
0 1 1
1 0 1
1 1 1
a b a&b
0 0 0
0 1 0
1 0 0
1 1 1
Next, find out how to use | and & to build this. a|b gives the first three rows correctly, while a&b matches only the last row; if we negate a&b, it can be used to mask out that unwanted row! So we could phrase XOR as:
(a or b) but not (a and b)
There is no "but not" in Boolean algebra, so it becomes an AND with a negation, which leads to this:
(a|b)&~(a&b)
Edit:
It was pointed out that I was answering the wrong question, since | is not allowed. Use De Morgan's law to build the OR:
~(~a & ~b)
which gives the answer:
~(~a&~b)&~(a&b)
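As a quick sanity check of that formula, here is a tiny C sketch of my own (the values are arbitrary):

#include <stdio.h>

int main(void) {
    unsigned x = 6, y = 3;                  /* 0110 and 0011 in binary */
    unsigned r = ~(~x & ~y) & ~(x & y);     /* XOR from only ~ and & */
    printf("%u ^ %u = %u (built-in: %u)\n", x, y, r, x ^ y);  /* both give 5 */
    return 0;
}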
I have a question: is bitwise AND transitive, particularly in C and C++?
Say res = (1 & 2 & 3 & 4). Is this the same as computing res1 = (1 & 2) and res2 = (3 & 4), and then
res = (res1 & res2)? Will the results be the same?
Yes, bitwise AND is transitive as you've used the term.
It's perhaps easier to think of things as a stack of bits. So if we have four 4-bit numbers, we can do something like this:
A = 0xB;
B = 0x3;
C = 0x1;
D = 0xf;
If we simply stack them up:
A 1 0 1 1
B 0 0 1 1
C 0 0 0 1
D 1 1 1 1
Then the result of a bitwise AND looks at one column at a time, and produces a 1 for that column if and only if there's a 1 for every line in that column, so in the case above, we get: 0 0 0 1, because the last column is the only one that's all ones.
If we split that in half to get:
A 1 0 1 1
B 0 0 1 1
A&B 0 0 1 1
And:
C 0 0 0 1
D 1 1 1 1
C&D 0 0 0 1
Then AND those intermediate results:
A&B 0 0 1 1
C&D 0 0 0 1
End 0 0 0 1
Our result is still going to be the same--anywhere there's a zero in a column, that'll produce a zero in the intermediate result, which will produce a zero in the final result.
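Here is the asker's exact example as a small C sketch, showing that the regrouping gives the same result:

#include <assert.h>
#include <stdio.h>

int main(void) {
    int res  = 1 & 2 & 3 & 4;   /* evaluated left to right */
    int res1 = 1 & 2;           /* 0001 & 0010 = 0 */
    int res2 = 3 & 4;           /* 0011 & 0100 = 0 */
    assert(res == (res1 & res2));
    printf("both groupings give %d\n", res);   /* prints 0 */
    return 0;
}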
The term you're looking for is associative. We generally wouldn't call a binary operator "transitive". And yes, & and | are both associative, by default. Obviously, you could overload the operators to be something nonsensical, but the default implementations will be associative. To see this, consider one-bit values a, b, and c and note that
(a & b) & c == a & (b & c)
because both will be 1 if and only if all three inputs are 1. And this is the operation that is being applied pointwise to each bit in your integer values. The same is true of |, simply replacing 1 with 0.
There are also some issues to consider if your integers are signed, as the behavior is dependent on the underlying bit representation.
I have been going over a Perl book I recently purchased, and while reading I noticed a block of code that confused me:
use integer;
$value = 257;
while ($value) {
    unshift @digits, (0..9, 'a'..'f')[$value & 15];
    $value /= 16;
}
print @digits;
The book mentions the purpose is to reverse the order of the digits; however, being new to Perl, I am having trouble figuring out what [$value & 15] is doing.
It's a bitwise AND operation.
What it's doing is performing a bitwise AND between the value 15 and whatever value is contained in $value.
Since 15 is 0b1111, the result is simply the value of the lower 4 bits of $value.
Ex:
$value = 21
which has a binary representation of: 0b10101
Performing a bitwise AND with 15 zeroes every bit outside the lower 4 bits, while the lower 4 bits pass through unchanged.
The result is:
0b10101
&
0b 1111
-------
0b00101 = 5
Looking up the truth tables for bitwise operations will help with stuff like this in the future, but when performing an AND, a result bit is 1 only when both corresponding input bits are 1, and 0 otherwise:
V1 | V2 | V1 & V2
-----------------
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1
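For comparison, here is a rough C analogue of the Perl loop (a sketch of my own, not from the book), which makes it clearer that & 15 simply extracts the low hex digit on each pass:

#include <stdio.h>

int main(void) {
    const char hex[] = "0123456789abcdef";
    unsigned value = 257;
    char digits[16];
    int n = sizeof digits;

    digits[--n] = '\0';
    do {
        digits[--n] = hex[value & 15];  /* low 4 bits select one hex digit */
        value /= 16;                    /* same as value >>= 4 for unsigned */
    } while (value);

    printf("%s\n", &digits[n]);         /* prints 101, i.e. 257 in hex */
    return 0;
}

Filling the buffer from the back plays the role of Perl's unshift: the digits come out least significant first, so prepending them restores the usual order.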
What operation does the following C statement perform?
star = star ^ 0b00100100;
(A) Toggles bits 2 and 5 of the variable star.
(B) Clears all bits except bits 2 and 5 of the variable star.
(C) Sets all bits except bits 2 and 5 of the variable star.
(D) Multiplies value in the variable star with 0b00100100.
I'm still clueless about this. Can someone help me out?
The XOR operator (also called "addition modulo 2") is defined like this:
a b a^b
-----------
0 0 0
0 1 1
1 0 1
1 1 0
So a^0 leaves a intact while a^1 toggles it.
For multiple-bit values, the operation is performed bitwise, i.e. between corresponding bits of the operands.
If you know how XOR works, and you know that ^ is XOR in C, then this should be pretty simple. You should know that XOR flips the bits of one operand wherever a 1 is set in the other; bits 2 and 5 of 0b00100100 are set, therefore it will flip those bits.
From a "during the test" standpoint, let's say you need to prove this to yourself. You really don't need to know the initial value of star to answer the question; if you know how ^ works, then just throw anything in there:
00100100
^10101010 (star's made up value)
---------
10001110 (star's new value)
bit position: | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
|---|---|---|---|---|---|---|---
star's new v: | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0
|---|---|---|---|---|---|---|---
star's old v: | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0
Then check your answers again, did it:
(A) Toggles bits 2 and 5 of the variable star. (Yes)
(B) Clears all bits except bits 2 and 5 of the variable star. (Nope)
(C) Sets all bits except bits 2 and 5 of the variable star. (Nope)
(D) Multiplies value in the variable star with 0b00100100. (Does 36 x 170 = 142? Nope)
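To check the same thing outside an exam, here is a tiny C sketch using the same made-up starting value as above:

#include <stdio.h>

int main(void) {
    unsigned char star = 0xAA;   /* 10101010, star's made-up value */
    star = star ^ 0x24;          /* 0x24 == 0b00100100: bits 2 and 5 set */
    printf("0x%02X\n", star);    /* prints 0x8E, i.e. 10001110 */
    return 0;
}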
It is (A) toggles bits 2 and 5.
The following is the truth table for the XOR operation:
x y x^y
0 0 0
1 0 1
0 1 1
1 1 0
You can see from the table that x XOR 0 = x and x XOR 1 = !x.
XOR is a bitwise operation, so it operates on individual bits. Therefore if you XOR star with some constant, it will toggle the 1 bits in the constant.
The exclusive OR has this truth table:
A B A^B
-----------
1 1 0
1 0 1
0 1 1
0 0 0
We can see that if B is true (1) then A is flipped (toggled), and if it's false (0) A is left alone. So the answer is (A).
The XOR operator returns 0 if both inputs are the same, and returns 1 if the inputs differ. For example, the truth table:
a=1 b=1 => a^b=0,
a=0 b=0 => a^b=0,
a=0 b=1 => a^b=1,
a=1 b=0 => a^b=1.
Well, XOR is a binary operator that works on the bits of two numbers.
The rule of XORing: for the same bits the answer is 0, and for different bits the answer is 1.
let
a= 1 0 1 0 1 1
b= 0 1 1 0 1 0
--------------
c= 1 1 0 0 0 1
--------------
Compare the bits of a and b one by one:
if they are the same, put 0; otherwise put 1.
XOR is also commonly used to find the unique value in a given set of duplicates:
just XOR all the numbers together and you will get the unique one (provided only a single unique value is present).
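Tying this back to the interview problem at the top of the page, here is a minimal C sketch of that trick (find_odd_one is an illustrative name of mine):

#include <stdio.h>

/* XOR-fold the list: pairs cancel (x ^ x == 0, x ^ 0 == x),
 * leaving the one value that occurs an odd number of times. */
static unsigned find_odd_one(const unsigned *a, size_t n) {
    unsigned acc = 0;
    for (size_t i = 0; i < n; i++)
        acc ^= a[i];
    return acc;
}

int main(void) {
    unsigned list[] = {2, 3, 5, 2, 5, 5, 3};
    printf("%u\n", find_odd_one(list, sizeof list / sizeof list[0]));  /* prints 5 */
    return 0;
}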
I have written a linear solver employing Householder reflections/transformations in ANSI C which solves Ax=b given A and b. I want to use it to find the eigenvector associated with an eigenvalue, like this:
(A-lambda*I)x = 0
The problem is that the 0 vector is always the solution that I get (before someone says it, yes I have the correct eigenvalue with 100% certainty).
Here's an example which pretty accurately illustrates the issue:
Given A-lambda*I (example just happens to be Hermitian):
1 2 0 | 0
2 1 4 | 0
0 4 1 | 0
Householder reflections/transformations will yield something like this:
# # # | 0
0 # # | 0
0 0 # | 0
Back substitution will find that the solution is {0,0,0}, obviously.
It's been a while since I've written an eigensolver, but I seem to recall that the trick was to refactor it from (A - lambda*I) * x = 0 to A*x = lambda*x. Then your Householder or Givens steps will give you something like:
# # # | #
0 # # | #
0 0 1 | 1
...from which you can back substitute without reaching the degenerate 0 vector. Usually you'll want to deliver x in normalized form as well.
My memory is quite rusty here, so I'd recommend checking Golub & Van Loan for the definitive answer. There are quite a few tricks involved in getting this to work robustly, particularly for the non-symmetric case.
This is basically the same answer as Drew's, but explained a bit differently.
If A is the matrix
1 2 0
2 1 4
0 4 1
then the eigenvalues are lambda = 1, 1+sqrt(20), 1-sqrt(20). Let us take for simplicity lambda = 1. Then the augmented matrix for the system (A - lambda*I) * x = 0 is
0 2 0 | 0
2 0 4 | 0
0 4 0 | 0
Now you do the Householder / Givens to reduce it to upper triangular form. As you say, you get something of the form
# # # | 0
0 # # | 0
0 0 # | 0
However, the last # should be zero (or almost zero). Exactly what you get depends on the details of the transformations you do, but if I do it by hand I get
2 0 4 | 0
0 2 0 | 0
0 0 0 | 0
Now you do backsubstitution. In the first step, you solve the equation in the last row. However, this equation does not yield any information, so you can set x[2] (the last element of the vector x) to any value you want. If you set it to zero and continue the back-substitution with that value, you get the zero vector. If you set it to one (or any nonzero value), you get a nonzero vector. The idea behind Drew's answer is to replace the last row with 0 0 1 | 1 which sets x[2] to 1.
Round-off error means that the last #, which should be zero, is probably not quite zero but some small value like 1e-16. This can be ignored: just take it as zero and set x[2] to one.
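In code, that back-substitution convention might look like the following C99 sketch (my own, not the asker's solver; tol is the threshold below which a pivot is treated as zero):

#include <math.h>

/* Back substitution on an upper-triangular system U*x = 0.
 * A numerically zero pivot marks a free variable, which we set to 1;
 * otherwise solve U[i][i]*x[i] + sum = 0 for x[i].
 * Assumes at most one zero pivot, as in the example above. */
static void back_substitute(int n, double U[n][n], double x[n], double tol)
{
    for (int i = n - 1; i >= 0; i--) {
        double sum = 0.0;
        for (int j = i + 1; j < n; j++)
            sum += U[i][j] * x[j];
        if (fabs(U[i][i]) < tol)
            x[i] = 1.0;              /* free variable: any nonzero value works */
        else
            x[i] = -sum / U[i][i];
    }
}

On the triangular system above this yields x = (-2, 0, 1), which you can verify satisfies A*x = x for lambda = 1; you would then normalize it.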
Obligatory warning: I assume you are implementing this for fun or educational purposes. If you need to find eigenvectors in serious code, you are better off using code written by others as this stuff is tricky to get right.