Truth table for XOR function for non-binary bases

I know that the XOR operator in base 2 for two bits A and B is (A+B)%2. In other words, it's addition modulo 2.
If I want to compute the truth table for the XOR operation in a ternary system (base 3), is it the same as addition modulo 3?
E.g., in a base 3 system, does 2 XOR 2 = 1 (since (2+2)%3 = 1)?
I read this link, which indicated that 2 XOR 2 in a base 3 system is 2, and I can't understand the formula behind that.
In general, for any base x, is the XOR operation for that base addition modulo x?

While I don't know that XOR has technically been defined in higher bases, the properties of XOR can be maintained in higher bases such that:
a ⊕ b ⊕ b = a
a ⊕ b ⊕ a = b
As the blog post shows, using (base - (a + b) % base) % base works. The part you were missing is the first instance of base. In the example of 2 ⊕ 2 in base 3, we get (3 - (2 + 2) % 3) % 3, which does give 2. This formula only works with single-digit numbers. If you want to extend it to multiple digits, you would apply the same formula to each pair of digits, just as standard XOR in binary does it bitwise.
For example, 185 ⊕ 42 in base 10 when run for each pair of digits (i.e. hundreds, tens, ones) gives us:
(10 - (1 + 0) % 10) % 10 => 9
(10 - (8 + 4) % 10) % 10 => 8
(10 - (5 + 2) % 10) % 10 => 3
Or 983 when put together. If you run 983 ⊕ 42, you'll find it comes back out to 185.
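For illustration, here is a minimal Python sketch of this digit-wise extension (the function name xor_base is mine, not from the blog post):

def xor_base(a, b, base):
    # Apply (base - (x + y) % base) % base to each pair of digits
    result, place = 0, 1
    while a > 0 or b > 0:
        x, y = a % base, b % base
        result += ((base - (x + y) % base) % base) * place
        a, b, place = a // base, b // base, place * base
    return result

print(xor_base(185, 42, 10))   # 983
print(xor_base(983, 42, 10))   # 185 -- applying it twice recovers the original
print(xor_base(2, 2, 3))       # 2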

Well, XOR stands for eXclusive OR, and it is a logical operation. This operation is only defined in binary logic. In your link, the author defines a completely different operator which works the same as XOR for the binary base. You may call it an "extension" of the XOR operation for bases greater than 2. But as you mentioned in your question, there are multiple ways to do this extension, and each way will preserve some properties of the "original" XOR and lose some others. For example, you can stick to the observation that
a ⊕ b = (a + b) mod 2
In this case your "extended XOR" for base 3 would produce an output of 1 for inputs 2, 2. But this XOR3 will no longer satisfy other equations that hold for standard XOR, e.g. these:
a ⊕ b ⊕ b = a
a ⊕ b ⊕ a = b
If you choose to "save" those, you will get the operation from your link. You can also preserve a different property of XOR, say
a ⊕ a = 0
and get yet another operation that is different from the former two.
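To make the trade-off concrete, here is a small Python sketch (my own illustration) of three such base-3 extensions, each preserving a different property of binary XOR:

base = 3
add_mod = lambda a, b: (a + b) % base                   # keeps a ⊕ b = (a + b) mod base
neg_add = lambda a, b: (base - (a + b) % base) % base   # keeps a ⊕ b ⊕ b = a
sub_mod = lambda a, b: (a - b) % base                   # keeps a ⊕ a = 0

print(add_mod(2, 2), neg_add(2, 2), sub_mod(2, 2))      # 1 2 0
print(neg_add(neg_add(1, 2), 2))                        # 1 -- inverse property holds
print(add_mod(add_mod(1, 2), 2))                        # 2 -- inverse property fails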
So the short answer is: the phrase "XOR function for non-binary bases" doesn't make sense. The XOR operator is only defined in binary logic. If you want to extend it to non-binary bases, non-integer numbers, complex numbers, or whatever, you can do so and define some extension function with whatever behavior and whatever "truth table" you want. But this extension won't be the XOR function anymore.

Related

Hash tables (dispersion) representation on paper

Let's say that I have a function h(x) = [½(x mod 8)]. When I draw the hash table (dispersion), will the indexes be from 0->7 or 0->3? (The [] means it takes only the integer part.)
I'm expecting it to be 0->3, but some of my classmates say it's 0->7.
First, x mod 8 (x % 8 in C) is always between 0 and 7 inclusive. Multiplying this by ½ and taking the integer part gives you 0 to 3, so 0->3 is the right answer.
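A one-line brute-force check in Python confirms this (just enumerating the question's h(x)):

# h(x) = floor((x mod 8) / 2) over many x hits exactly the indexes 0..3
print(sorted({(x % 8) // 2 for x in range(100)}))   # [0, 1, 2, 3]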

DFA for binary numbers that have a remainder of 1 when divided by 3

I need a DFA for the set of all strings beginning with a 1 that, interpreted as the binary representation of an integer, have a remainder of 1 when divided by 3.
For example, the binary number 1010 is decimal 10. When you divide 10 by 3 you get a remainder of 1, so 1010 is in the language. However, the binary number 1111 is decimal 15. When you divide 15 by 3 you get a remainder of 0, so 1111 is not in the language.
I've attached my DFA below. Could you please check it?
It looks correct to me.
You could make two simplifications:
q4 represents (mod 0), so you could make it the starting state and get rid of q0 and q5. (Unless you are required to reject strings beginning with a 0? Your question doesn't specify.)
q1 and q3 can be merged. They both represent (mod 1) and have the same transitions.
These two changes would leave you with exactly 3 states, one for each remainder.
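If it helps, here is a minimal Python sketch of that simplified 3-state machine (the code is mine, not the question's diagram, and it ignores the leading-1 requirement just like the simplification above):

def accepts(s):
    # State = (value of the bits read so far) mod 3; accept on remainder 1.
    # Reading a bit doubles the value so far and adds the bit, all mod 3.
    state = 0
    for bit in s:
        state = (2 * state + int(bit)) % 3
    return state == 1

print(accepts("1010"))   # True:  10 % 3 == 1
print(accepts("1111"))   # False: 15 % 3 == 0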

Determine the adjacency of two Fibonacci numbers

I have many Fibonacci numbers. If I want to determine whether two Fibonacci numbers are adjacent or not, one basic approach is as follows:
Get the index of the first Fibonacci number, say i1.
Get the index of the second Fibonacci number, say i2.
Get the absolute value of i1-i2, that is |i1-i2|.
If the value is 1, then return true;
else return false.
In the first and second steps, it may take many comparisons to find the correct index by searching an array.
The third step needs one subtraction and one absolute-value operation.
I want to know whether there is another approach to quickly determine the adjacency of two Fibonacci numbers.
I don't care whether this question is solved mathematically or by any hacking technique.
If anyone has an idea, please let me know. Thanks a lot!
There is no need to find the index of both numbers.
Given that the two numbers belong to the Fibonacci series, if their difference is greater than the smaller of the two, then they are not adjacent. Otherwise they are.
This is because the Fibonacci series follows this rule:
F(n) = F(n-1) + F(n-2), where F(n) > F(n-1) > F(n-2).
So F(n) - F(n-1) = F(n-2), and F(n-2) < F(n-1), i.e. the difference between two adjacent terms is less than the smaller of them.
The difference between two adjacent Fibonacci numbers will always be less than the smaller number among them.
Note: this only holds if the numbers belong to the Fibonacci series.
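As a minimal sketch of that test in Python (my own, assuming both inputs really are Fibonacci numbers; the smallest pairs such as 1, 1 and 1, 2 are ambiguous edge cases because 1 appears twice in the sequence):

def adjacent_fibs(a, b):
    # Adjacent iff their difference is smaller than the smaller number
    lo, hi = min(a, b), max(a, b)
    return hi - lo < lo

print(adjacent_fibs(8, 13))   # True
print(adjacent_fibs(8, 21))   # False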
Simply calculate the difference between them. If it is smaller than the smaller of the two numbers, they are adjacent; if it is bigger, they are not.
Each triplet in the Fibonacci sequence a, b, c conforms to the rule
c = a + b
So for every pair of adjacent Fibonaccis (x, y), the difference between them (y-x) is equal to the value of the previous Fibonacci, which of course must be less than x.
If two Fibonaccis, say (x, z), are not adjacent, then their difference must be greater than the smaller of the two. At minimum (if they are one Fibonacci apart), the difference equals the Fibonacci between them, which is of course greater than the smaller of the two numbers.
For (a, b, c, d): since c = a + b and d = b + c, then d - b = (b + c) - b = c.
By Binet's formula, the nth Fibonacci number is approximately phi**n/sqrt(5), where phi is the golden ratio. You can use base-phi logarithms to recover the index easily:
from math import log, sqrt

def fibs(n):
    nums = [1, 1]
    for i in range(n - 2):
        nums.append(sum(nums[-2:]))
    return nums

phi = (1 + sqrt(5)) / 2

def fibIndex(f):
    return round(log(sqrt(5) * f, phi))
To test this:
for f in fibs(20): print(fibIndex(f),f)
Output:
2 1
2 1
3 2
4 3
5 5
6 8
7 13
8 21
9 34
10 55
11 89
12 144
13 233
14 377
15 610
16 987
17 1597
18 2584
19 4181
20 6765
Of course,
def adjacentFibs(f, g):
    return abs(fibIndex(f) - fibIndex(g)) == 1
This fails with 1, 1 -- but there is little point in adding explicit special-case logic for such an edge case; add it in if you want.
At some stage, floating-point round-off error will become an issue. For that, you would need to replace math.log with an integer log algorithm (e.g. one that involves binary search).
On Edit:
I concentrated on the question of how to recover the index (and I will keep the answer, since that is an interesting problem in its own right), but as @LeandroCaniglia points out in their excellent comment, this is overkill if all you want to do is check whether two Fibonacci numbers are adjacent: another consequence of Binet's formula is that sufficiently large adjacent Fibonacci numbers have a ratio which differs from phi by a negligible amount. You could do something like:
def adjFibs(f, g):
    f, g = min(f, g), max(f, g)
    if g <= 34:
        return adjacentFibs(f, g)
    else:
        return abs(g/f - phi) < 0.01
This assumes that they are indeed Fibonacci numbers. The index-based approach can be used to verify that they are (calculate the index and then use the full-fledged Binet's formula with that index).
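For example, a quick check using the functions above:

print(adjFibs(55, 89), adjFibs(55, 144))   # True False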

Can anyone explain why a '>> 2' shift means 'divided by 4' in C code?

I know and understand the result.
For example:
7 (decimal) = 00000111 (binary)
and 7 >> 2 = 00000001 (binary)
00000001 (binary) is the same as 7 / 4 = 1
So 7 >> 2 = 7 / 4
But I'd like to know how this logic was created.
Can anyone elaborate on this logic?
(Maybe it just popped up in a genius' head?)
And are there any other similar tricks like this?
It didn't just "pop up" in a genius' head. Right-shifting a binary number divides it by 2, and left-shifting multiplies it by 2. This is because 10 is 2 in binary. Multiplying a number by 10 (be it binary, decimal, or hexadecimal) appends a 0 to the number, which is effectively a left shift. Similarly, dividing by 10 (or 2) removes a digit from the number, effectively a right shift. This is how the logic really works.
There is plenty of such bit-twiddlery (a word I invented a minute ago) in the computer world.
Here is one for starters: http://graphics.stanford.edu/~seander/bithacks.html
And this is my favorite book on bit-twiddlery: http://www.amazon.com/Hackers-Delight-Edition-Henry-Warren/dp/0321842685/ref=dp_ob_image_bk
It is actually defined that way in the C standard.
From section 6.5.7:
The result of E1 >> E2 is E1 right-shifted E2 bit positions. [...]
the value of the result is the integral part of the quotient of E1 / 2^E2
On most architectures, x >> 2 is only equal to x / 4 for non-negative numbers. For negative numbers, it usually rounds in the opposite direction.
Compilers have always been able to optimize x / 4 into x >> 2. This technique is called "strength reduction", and even the oldest compilers can do this. So there is no benefit to writing x / 4 as x >> 2.
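As a quick sanity check of that identity in Python (safe here because the values are non-negative, so shifting and dividing agree):

for x in range(64):
    assert x >> 2 == x // 4   # right shift by 2 == division by 4, for x >= 0
print("ok")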
Elaborating on Aniket Inge's answer:
Number: 307₁₀ = 100110011₂

How multiplying by 10 works in the decimal system:

10 * (307₁₀)
= 10 * (3*10^2 + 7*10^0)
= 3*10^(2+1) + 7*10^(0+1)
= 3*10^3 + 7*10^1
= 3070₁₀
= 307₁₀ << 1

Similarly, multiplying by 2 in binary:

2 * (100110011₂)
= 2 * (1*2^8 + 1*2^5 + 1*2^4 + 1*2^1 + 1*2^0)
= 1*2^(8+1) + 1*2^(5+1) + 1*2^(4+1) + 1*2^(1+1) + 1*2^(0+1)
= 1*2^9 + 1*2^6 + 1*2^5 + 1*2^2 + 1*2^1
= 1001100110₂
= 100110011₂ << 1
I think you are confused by the "2" in:
7 >> 2
and are thinking it should divide by 2.
The "2" here means shift the number ("7" in this case) "2" bit positions to the right.
Shifting a number "1"bit position to the right will have the effect of dividing by 2:
8 >> 1 = 4 // In binary: (00001000) >> 1 = (00000100)
and shifting a number "2"bit positions to the right will have the effect of dividing by 4:
8 >> 2 = 2 // In binary: (00001000) >> 2 = (00000010)
It's inherent in the binary number system used in computers.
A similar logic: left-shifting n times means multiplying by 2^n.
An easy way to see why it works is to look at the familiar decimal, base-ten number system: 050 is fifty; shift it to the right and it becomes 005, five, equivalent to dividing it by 10. The same goes for shifting left: 050 becomes 500, five hundred, equivalent to multiplying it by 10.
All the other numeral systems work the same way.
They do that because shifting is more efficient than actual division. You're just moving all the digits to the right or left, logically multiplying or dividing by 2 per shift.
If you're wondering why 7/4 = 1, that's because the remainder (3/4) is truncated off so that the result is an integer.
Just my two cents: I did not see any mention of the fact that shifting right does not always produce the same result as dividing by 2. Since right shifting rounds toward negative infinity and integer division rounds toward zero, some values (like -1 in two's complement) will just not work as expected when divided.
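A quick Python illustration of that mismatch (int(-7 / 4) mimics C's truncating division):

print(-7 >> 2)       # -2: shifting floors toward negative infinity
print(int(-7 / 4))   # -1: C-style integer division truncates toward zero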
It's because the >> and << operators shift the binary data.
Binary value 1000 is the double of binary value 0100
Binary value 0010 is the quarter of binary value 1000
You can call it an idea of a genius mind or just the need of the computer language.
To my belief, a computer as a device never really divides or multiplies numbers; rather, it only has logic for adding and for simply shifting bits from here to there. You can write an algorithm that tells your computer to multiply or subtract, but when the logic reaches the actual processing, the result will be an outcome of shifting or adding bits.
You can simply think that to get the result of a number divided by 4, the computer right-shifts the bits two places and gives the result:
7 in 8-bit binary = 00000111
Shift right 2 places = 00000001 // (which is indeed equal to decimal 1)
Further examples:
// We can divide 9 by 4 by right-shifting 2 places
9 in 8-bit binary = 00001001
Shift right 2 places: 00000010 // (which is equal to 9/4, or decimal 2)
A person with deep knowledge of assembly language programming can explain this with more examples. If you want to know the actual sense behind all this, I guess you need to study bit-level arithmetic and the assembly language of a computer.

mathematical equation for AND bitwise operation?

In a shift left operation for example,
5 << 1 = 10
10 << 1 = 20
then a mathematical equation can be made,
n << 1 = n * 2.
If there is an equation for a shift-left operation, then is it possible that there is also a mathematical equation for an AND operation, or any other bitwise operators?
There is no straightforward single operation that maps to every bitwise operation. However, they can all be simulated through iterative means (or one really long formula).
(a & b)
can be done with:
(((a/1 % 2) * (b/1 % 2)) * 1) +
(((a/2 % 2) * (b/2 % 2)) * 2) +
(((a/4 % 2) * (b/4 % 2)) * 4) +
...
(((a/n % 2) * (b/n % 2)) * n)
Where n is 2 raised to one less than the number of bits that a and b are composed of (i.e. the place value of their highest bit). This assumes integer division (the remainder is discarded).
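Here is that series as a runnable Python sketch (the helper name and the 32-bit width are my choices):

def and_arith(a, b, bits=32):
    # Bitwise AND built from division, modulo, multiplication, and addition only
    total = 0
    for k in range(bits):
        n = 2 ** k
        total += ((a // n % 2) * (b // n % 2)) * n
    return total

print(and_arith(0b1100, 0b1010), 0b1100 & 0b1010)   # 8 8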
That depends on what you mean by "mathematical equation". There is no easy arithmetic one.
If you look at it from a formal number-theoretic standpoint you can describe bitwise "and" (and "or" and "xor") using only addition, multiplication and -- and this is a rather big "and" from the lay perspective -- first-order predicate logic. But that is most certainly not what you meant, not least because these tools are enough to describe anything a computer can do at all.
Except in specific circumstances, it is not possible to describe bitwise operations in terms of other mathematical operations.
An AND operation with 2^n - 1 is the same as a modulus operation with 2^n. An AND operation with the complement of 2^n - 1 can be seen as a division by 2^n, a truncation, and a multiplication by the same.
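For example, in Python (for non-negative x; the ~ works here because Python integers are arbitrary-precision):

x, n = 173, 4
print(x & (2**n - 1), x % 2**n)               # 13 13   -- AND with 2^n - 1 is mod 2^n
print(x & ~(2**n - 1), (x // 2**n) * 2**n)    # 160 160 -- AND with its complement clears the low bits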
It depends on what you mean by "mathematical". If you are looking for simple school algebra, then the answer is no. But mathematics is not sacred; mathematicians define new operations and concepts all the time.
For example, you can represent 32-bit numbers as vectors of 32 booleans, and then define “AND” operation on them which does standard boolean “and” between their corresponding elements.
Yes, they are sums. Consider a binary word A of length n. It can be written as follows:
A = a0*2^0 + a1*2^1 + a2*2^2 + ... + an*2^n, where each ai is an element of {0, 1}.
Therefore, if an is a bit in A and bn is a bit in B, then:
A AND B = a0*b0*2^0 + a1*b1*2^1 + ... + an*bn*2^n
Similarly,
A XOR B = ((a0+b0) mod 2)*2^0 + ((a1+b1) mod 2)*2^1 + ... + ((an+bn) mod 2)*2^n
Consider now the identity (applied bit by bit):
A XOR 1 = NOT A
We now have the three operators we need (bitwise AND, bitwise XOR, and bitwise NOT).
From these two we can make anything we want.
For example, bitwise OR:
NOT[(NOT A) AND (NOT B)] = NOT[NOT(A OR B)] = A OR B
It's not guaranteed to be pretty, though.
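A quick check of that construction in Python (with an explicit 8-bit mask, since Python integers are unbounded):

MASK = 0xFF                   # work in 8-bit words
bnot = lambda a: ~a & MASK    # bitwise NOT within 8 bits
a, b = 0b1100, 0b1010
print(bnot(bnot(a) & bnot(b)), a | b)   # 14 14 -- not[(not A) and (not B)] == A or B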
In response to the comment that mod-2 arithmetic is not very basic: that's true in a sense. However, while it's common knowledge because of the prevalence of computers nowadays, the entire subject we are touching on here is not particularly "basic". The OP has grasped something fundamental.

There are finite algebraic structures, studied in the mathematical field known as "abstract algebra", such as addition and multiplication modulo n (where n is some number such as 2, 8, or 2^32). There are other structures built from binary operations (addition is a binary operation: it takes two operands and produces a result; so are multiplication and XOR), such as XOR, AND, bit shifts, etc., that are "isomorphic" to addition and multiplication over the integers mod n. That means they behave the same way: they are associative, distributive, etc. (although they may or may not be commutative; think of matrix multiplication).

It's hard to tell someone where to start looking for more information. I guess the best way would be to start with a book on formal mathematics (mathematical proofs); you need that to understand any advanced mathematics text. Then a text on abstract algebra. If you're a computer science major, you will get a lot of this in your classes. If you're a mathematics major, you will study these things in depth all in good time. If you're a history major, I'm not knocking history, I'm a History Channel junkie, but you should switch majors, because you're wasting your talents!
Here is a proof that for 2-bit bitwise operations you cannot describe & with just +, - and * (check this; I just came up with it now, so, who knows):
The question is, can we find a polynomial
x & y == P(x, y)
where
P(x, y) = a0_0 + a1_0*x + a0_1*y + a2_0*x^2 + ...
Here's what it would have to look like:
       0  1  2  3
      -----------
  0 |  0  0  0  0
  1 |  0  1  0  1
  2 |  0  0  2  2
  3 |  0  1  2  3
First, clearly a0_0 == 0. Next, you can see that if P is rewritten as

P(x, y) = xy*R(x, y) + Q(x, y)

where Q(x, y) = a1_0*x + a2_0*x^2 + ... + a0_1*y + a0_2*y^2 + ... collects the terms in x alone and y alone, then holding y at 0 while x varies over 0, 1, 2, 3 makes the xy*R term vanish, so Q(x, 0) must be 0 for each of those values. Likewise if x is held at 0 and y varied. So Q(x, y) may be set to 0 without loss of generality.
But now, since P(2, 2) must be 2, yet 2 * 2 == 0 in 2-bit (mod 4) arithmetic, every term of xy*R(x, y) vanishes at (2, 2), and the polynomial P cannot exist.
And I think this would generalize to more bits, too.
So the answer is: if you're looking for just +, * and -, no, you can't do it.
