I have the following problem on bit-string flicking.
Solve for X (a 5-bit string):
((10110 AND LCIRC-2 X) OR RCIRC-2 X) = 00010
I have no idea how to start
Represent each bit of X with a letter, A through E:
((10110 AND LCIRC-2 ABCDE) OR RCIRC-2 ABCDE) = 00010
((10110 AND CDEAB) OR RCIRC-2 ABCDE) = 00010
(C0EA0 OR RCIRC-2 ABCDE) = 00010
(C0EA0 OR DEABC) = 00010
We can tell that A, C, D, and E are all 0 (the first output bit gives C OR D = 0, the second gives 0 OR E = 0, and the third gives E OR A = 0)
Therefore, since the fourth output bit A OR B must equal 1 and A = 0, B must be 1
X = 01000
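As a sanity check, here is a small brute-force search in C (my own sketch, not part of the original problem; lcirc5 and rcirc5 are helper names I made up for the 5-bit circular shifts). It tries all 32 possible values of X and prints the ones that satisfy the equation:
#include <stdio.h>

/* 5-bit circular shifts, keeping only the low 5 bits (mask 0x1F) */
static unsigned lcirc5(unsigned x, int k) { return ((x << k) | (x >> (5 - k))) & 0x1F; }
static unsigned rcirc5(unsigned x, int k) { return ((x >> k) | (x << (5 - k))) & 0x1F; }

int main(void) {
    for (unsigned x = 0; x < 32; x++) {
        /* (10110 AND LCIRC-2 x) OR RCIRC-2 x, with 10110 = 0x16 */
        unsigned lhs = (0x16 & lcirc5(x, 2)) | rcirc5(x, 2);
        if (lhs == 0x02)                     /* 00010 */
            printf("X = %u%u%u%u%u\n", (x >> 4) & 1, (x >> 3) & 1,
                   (x >> 2) & 1, (x >> 1) & 1, x & 1);
    }
    return 0;                                /* should print only X = 01000 */
}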
So, while playing with the bitwise AND and bitwise OR operators, I noticed the following:
(a & b) + (a | b) = a + b
and there is a corresponding proof which mainly relies on the fact that, for any two bits a and b,
(a,b) = (0,0) --> (a&b, a|b) = (0, 0) = (a, b)
(a,b) = (0,1) --> (a&b, a|b) = (0, 1) = (a, b)
(a,b) = (1,0) --> (a&b, a|b) = (0, 1) = (b, a)
(a,b) = (1,1) --> (a&b, a|b) = (1, 1) = (b, a)
Now, I was wondering - is this a mere coincidence, or are these bitwise operations actually used in this way? I don't think that computers actually compute addition in this way, since it would be a recursive definition... but it seems too nice of a property to have been random!
AND and OR are idempotent and commutative:
a & a = a
a | a = a
a & b = b & a
a | b = b | a
They also absorb each other:
a & (a | b) = a
a | (a & b) = a
Thus:
a + b = (a | a) + (b & b)
a + b = (a | (a & b)) + (b & (a | b))
(a | a) + (b & b) = (a | (a & b)) + (b & (a | b))
a + b = (a & b) + (a | b)
Another way to see it is that, for two bits x and y, x + y = x | y and x & y = 0 unless both bits are set, in which case x & y supplies the extra bit that x | y counts only once.
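If you want to convince yourself numerically, here is a quick exhaustive check of my own (not part of the proof above) over all pairs of 8-bit values:
#include <assert.h>
#include <stdio.h>

int main(void) {
    /* exhaustively verify (a & b) + (a | b) == a + b for all 8-bit pairs */
    for (unsigned a = 0; a < 256; a++)
        for (unsigned b = 0; b < 256; b++)
            assert((a & b) + (a | b) == a + b);
    puts("identity holds for all 8-bit pairs");
    return 0;
}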
When a and b can only take the values 0 or 1, it is not surprising that (a & b) + (a | b) == a + b.
If both a and b are 1, then (a & b) and (a | b) are both 1, so you have 1 + 1 on each side of the ==.
Otherwise (a & b) is obviously 0, so you are comparing a | b with a + b; since there is no possible carry in a + b, and a and b are booleans, using + or | gives the same result.
Note that when a and b are only 0 or 1, a & b equals a * b in all cases.
(a & b) + (a | b) = a + b is indeed true in general, not just for 0 and 1.
Another way to look at it: whenever the bits of a and b at a particular position differ (so one of them is zero and the other is one), then in (a & b) + (a | b) the 0 is put on the left-hand side of the + and the 1 is put on the right-hand side. If the bits are the same, it makes no difference which side they land on.
It's like a more granular form of min(a, b) + max(a, b), at the bit level instead of the word level.
Reordering bits like that has no effect on the sum. Consider that both a and b are already sums, of the form a[0] + 2*a[1] + 4*a[2] .... a + b is a bigger sum, and (a & b) + (a | b) merely reorders the terms of that sum.
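To illustrate that bit-level reordering, here is a small sketch of mine (the values 0xB5 and 0x6C are arbitrary). At every position, a & b holds the smaller of the two bits and a | b holds the larger, so the terms of the sum are merely redistributed:
#include <stdio.h>

int main(void) {
    unsigned a = 0xB5, b = 0x6C;            /* arbitrary example values */
    for (int i = 7; i >= 0; i--) {
        unsigned ai = (a >> i) & 1, bi = (b >> i) & 1;
        unsigned lo = ((a & b) >> i) & 1, hi = ((a | b) >> i) & 1;
        /* lo is min(ai, bi) and hi is max(ai, bi) at every position */
        printf("bit %d: a=%u b=%u  a&b=%u a|b=%u\n", i, ai, bi, lo, hi);
    }
    return 0;
}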
I don't understand Exercise 2-9 in K&R's The C Programming Language, chapter 2, section 2.10:
Exercise 2-9. In a two's complement number system, x &= (x-1) deletes the rightmost 1-bit in x . Explain why. Use this observation to write a faster version of bitcount .
the bitcount function is:
/* bitcount: count 1 bits in x */
int bitcount(unsigned x)
{
    int b;
    for (b = 0; x != 0; x >>= 1)
        if (x & 01)
            b++;
    return b;
}
As far as I can tell, the function checks whether the rightmost bit is 1, counts it, and then shifts that bit out.
I can't understand why x & (x-1) deletes the rightmost 1-bit.
For example, suppose x is 1010 and x-1 is 1001 in binary, and x&(x-1) would be 1011, so the rightmost bit would be there and would be one, where am I wrong?
Also, the exercise mentioned two's complement, does it have something to do with this question?
Thanks a lot!!!
First, you need to believe that K&R are correct.
Second, you may have misunderstood the wording.
Let me clarify it for you. The rightmost 1-bit does not mean the rightmost bit, but the rightmost bit that is 1 in the binary representation.
Let's arbitrarily assume that x is xxxxxxx1000 (each x can be 0 or 1). Then, counting from the right, the fourth bit is the "rightmost 1-bit". With this understanding, let's continue with the problem.
Why does x &= (x-1) delete the rightmost 1-bit?
In a two's complement number system, -1 is represented by the all-ones bit pattern.
So x-1 is actually x + (-1), which is xxxxxxx1000 + 11111111111. Here comes the tricky part.
To the right of the rightmost 1-bit, every 0 becomes 1 (0 + 1 = 1, no carry); the rightmost 1-bit itself becomes 0 and produces a carry of 1. That carry keeps propagating to the left and is finally discarded as overflow; meanwhile, every 'x' bit stays the same, because x + 1 + 1 (carry) yields bit x with another carry.
Then x & (x-1) will delete the rightmost 1-bit.
Hope you can understand it now.
Thanks.
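To make the carry argument above concrete, here is a small trace program of my own (the value 44 is arbitrary) that prints x, x-1, and x & (x-1) in binary:
#include <stdio.h>

/* print a label followed by the low 8 bits of x */
static void print_bin(const char *label, unsigned x) {
    printf("%-10s", label);
    for (int i = 7; i >= 0; i--)
        putchar((x >> i) & 1 ? '1' : '0');
    putchar('\n');
}

int main(void) {
    unsigned x = 44;                        /* 00101100: rightmost 1-bit is bit 2 */
    print_bin("x", x);                      /* 00101100 */
    print_bin("x - 1", x - 1);              /* 00101011: bits below flip, rightmost 1 becomes 0 */
    print_bin("x & (x-1)", x & (x - 1));    /* 00101000: rightmost 1-bit cleared */
    return 0;
}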
Here is a simple way to explain it. Let's arbitrarily assume that number Y is xxxxxxx1000 (x can be 0 or 1).
xxxxxxx1000 - 1 = xxxxxxx0111
xxxxxxx1000 & xxxxxxx0111 = xxxxxxx0000 (See, the "rightmost 1" is gone.)
So the number of repetitions of Y &= (Y-1) before Y becomes 0 will be the total number of 1's in Y.
Why does x & (x-1) clear the lowest-order 1-bit? Just try it and see:
If the lowest-order bit is 1, x has a binary representation a...b1 and x-1 is a...b0, so the bitwise AND gives a...b0: the common bits are left unchanged by the AND, and 1 & 0 is 0.
Otherwise x has a binary representation a...b10...0; x-1 is a...b01...1, and for the same reason as above x & (x-1) will be a...b00...0, again clearing the lowest-order 1-bit.
So instead of scanning all bits to find which ones are 0 and which ones are 1, you just iterate the operation x = x & (x-1) until x is 0: the number of steps is the number of 1 bits. It is more efficient than the naive implementation because, statistically, you use about half the number of steps.
Example of code:
int bitcount(unsigned int x) {
    int nb = 0;
    while (x != 0) {
        x &= x - 1;   /* clear the lowest set bit */
        nb++;
    }
    return nb;
}
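For completeness, here is a minimal driver of my own (assuming the bitcount function above is in the same file) that exercises it on a few values:
#include <stdio.h>

int bitcount(unsigned int x);               /* the version defined above */

int main(void) {
    unsigned tests[] = { 0u, 1u, 10u, 255u, 0x80000000u };
    for (int i = 0; i < 5; i++)
        printf("bitcount(%u) = %d\n", tests[i], bitcount(tests[i]));
    return 0;                               /* expected counts: 0 1 2 8 1 */
}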
I know I'm already very late (≈ 3.5 yrs), but your example has a mistake.
x = 1010 = 10
x - 1 = 1001 = 9
1010 & 1001 = 1000
So, as you can see, it deleted the rightmost 1-bit in 10.
7 = 111
6 = 110
5 = 101
4 = 100
3 = 011
2 = 010
1 = 001
0 = 000
Observe that, at the position of the rightmost 1 in any number, the bit at that same position in that number minus one is 0. Thus ANDing x with x-1 resets (i.e. sets to 0) the rightmost 1-bit.
7 & 6 = 111 & 110 = 110 = 6
6 & 5 = 110 & 101 = 100 = 4
5 & 4 = 101 & 100 = 100 = 4
4 & 3 = 100 & 011 = 000 = 0
3 & 2 = 011 & 010 = 010 = 2
2 & 1 = 010 & 001 = 000 = 0
1 & 0 = 001 & 000 = 000 = 0
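The clearing pattern in the table above can also be reproduced programmatically; a small sketch of mine:
#include <stdio.h>

/* print the low 3 bits of v */
static void bin3(unsigned v) {
    for (int i = 2; i >= 0; i--)
        putchar((v >> i) & 1 ? '1' : '0');
}

int main(void) {
    /* reproduce the table above: n & (n-1) clears the rightmost 1-bit of n */
    for (unsigned n = 7; n >= 1; n--) {
        printf("%u & %u = ", n, n - 1);
        bin3(n); printf(" & "); bin3(n - 1); printf(" = ");
        bin3(n & (n - 1));
        printf(" = %u\n", n & (n - 1));
    }
    return 0;
}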
I would like to ask: let A, B, and C be arbitrary binary numbers. After computing C = A & B (& is the AND operator), is there any possibility of recovering A from B and C?
I know that some of the information in A is lost through the operation. Can we form a function like B <...> C = A, and how complex would it be?
For example:
A = 0011
B = 1010
C = A & B = 0010
The 2nd bit (from the right) of C is 1, i.e. the 2nd bit of both A and B must be 1. However, the other bits lack the information needed to be recovered.
Thank you in advance.
No, it's not possible. You can see this from the truth table for AND:
A B C (A & B)
0 0 0
0 1 0
1 0 0
1 1 1
Suppose you know that B is 0 and C is 0. A could be either 1 or 0, so it cannot be deduced from B and C.
You can recover only the bits of A that have 1s in the corresponding bits of B. Where B has a zero, it does not matter what A has in the corresponding position, because the bit in C will be zero anyway:
A = 1xx0x011x0
B = 1001011101
----------
C = 1000001100
Positions of A marked with x can be zeros or ones; the information in them is going to be lost either way.
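Here is a tiny demonstration of my own (the constants are arbitrary) that two values of A differing only where B has a 0 produce exactly the same C, so that lost information really cannot be recovered:
#include <stdio.h>

int main(void) {
    unsigned B  = 0x2D5;                    /* 1011010101 */
    unsigned A1 = 0x23C;                    /* 1000111100 */
    unsigned A2 = 0x21C;                    /* 1000011100: differs from A1 only where B has a 0 */
    printf("A1 & B = %X\n", A1 & B);
    printf("A2 & B = %X\n", A2 & B);        /* same result: the differing bit is masked out */
    return 0;
}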
Assuming you are just talking about binary logic, not C variables, then no.
Consider:
a=0111, b=1010 therefore c=0010
So you have b=1010, c=0010 so now how can you find a?
The leftmost bit of c is 0 and the corresponding bit of b is 1, so we know the bit of a must be 0.
The second bit of c is 0 and the bit of b is 0, so you can't tell what it was in a (either 1 or 0 leads to a 0 in c).
At this point we've proven you can't do it.
No, because there isn't a unique solution. Any value of A that agrees with C in the positions where B has a 1 satisfies the equation, regardless of the other bits.
This is a question about equations. It is not possible, as the number of degrees of freedom is not zero. It is the same as asking: a + b = 10 -- what is a and what is b?
You can't recover A, but you can write A = (X & ~B) ^ C. Here, X can be anything (and it gives all the A's).
Of course this will work only for B and C such that C & ~B == 0.
This is a parametrized solution. Example in Python:
>>> A = 32776466
>>> B = 89773888
>>> C = A & B
>>> C
22020352
>>> X = 1234567890 # arbitrary value
>>> U = (X & ~B) ^ C
>>> U
1238761874
>>> U & B # same result as A & B
22020352
I'm supposed to match the worded descriptions to the bitwise operations. W is one less than the total number of bits in a's and b's data type, so if a is 32 bits long, W is 31. Here are the worded descriptions:
1. One’s complement of a
2. a.
3. a&b.
4. a * 7.
5. a / 4 .
6. (a<0)?1:-1.
and here are the bitwise descriptions:
a. ~(~a | (b ^ (MIN_INT + MAX_INT)))
b. ((a^b) & ~b) | (~(a^b) & b)
c. 1 + (a<<3) + ~a
d. (a<<4) + (a<<2) + (a<<1)
e. ((a<0) ? (a+3) : a) >> 2
f. a ^ (MIN_INT + MAX_INT)
g. ~((a | (~a+1)) >> W) & 1
h. ~((a >> W) << 1)
i. a >> 2
I have a few of them solved, namely:
a. ~(~a | (b ^ (MIN_INT + MAX_INT))) = a & b
b. ((a^b) & ~b) | (~(a^b) & b) = a
c. 1 + (a<<3) + ~a = 7 * a
d. (a<<4) + (a<<2) + (a<<1) = 16*a + 4*a + 2*a = 22*a
e. ((a<0) ? (a+3) : a) >> 2 = (a<0) ? (a/4 + 3/4) : a/4 = a/4 + ((a<0) ? 3/4 : 0)
f. a ^ (MIN_INT + MAX_INT) = ~a
i. a >> 2 = a/4
So basically all I need help with are g and h
g. ~((a | (~a+1)) >> W) & 1
h. ~((a >> W) << 1)
If you wouldn't mind, could you also provide an explanation?
I think this is what is going on with g:
g. ~((a | (~a+1)) >> W) & 1 = ~((a | (two's complement of a)) >> W) & 1
= ~(sign of (a | two's complement of a)) & 1 = ~(-a) & 1
but this could be 1 or 0 so I don't think I did this right.
and for this one:
h. ~((a >> W) << 1) = ~((sign of a) << 1) = ~((sign of a) * 2)
and I don't know where to go from there...
Thank you for your help!!!
For g, consider that (a|~a) sets all bits to 1, so:
~((a|~a) >> W) & 1
~(all_ones >> W) & 1
~1 & 1
0
The only way adding 1 to ~a could possibly affect this result is if the addition flipped the most significant bit of ~a (because of the right shift by W). That can only happen if a is 0 or 2^W. In the latter case we get the same result as above, because the top bit of (a|X) will always be set. However, when a is 0, ~a+1 (the two's complement of 0) is also 0, and the final result of the entire expression is instead 1.
Therefore, g is 1 when a is zero, otherwise it is 0 (i.e. - g is equivalent to the C expression a == 0). That seemingly doesn't match any of your worded descriptions. Indeed, I don't see how any expression (X & 1) possibly matches any of your worded descriptions. None of your worded descriptions matches an expression that evaluates to only 0 or 1 (for all values of a, b).
For h, consider that if a is negative, then its top most bit is set. Because a is signed, right shifting it 31 positions drags the sign bit across all 32 bits of a. Then left shifting it one position sets the least significant bit to 0. Complementing that yields 1. If a is non-negative, then its top most bit is 0 and right shifting that 31 positions yields 0. Left shifting that 1 position still yields 0. Complementing that yields all bits set, which is the 2's complement rep of -1. Therefore, h is equivalent to (a < 0 ? 1 : -1) or #6 of your worded descriptions.
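Here is a quick test harness of my own for both conclusions. It assumes a 32-bit int, two's complement, and arithmetic right shift, just as the exercise does; note that shifting negative values this way is technically implementation-defined or undefined in ISO C, but it matches the exercise's intent:
#include <stdio.h>

#define W 31   /* one less than the bit width of int, per the problem statement */

int main(void) {
    int samples[] = { -5, -1, 0, 1, 7, 2147483647 };
    for (int i = 0; i < 6; i++) {
        int a = samples[i];
        int g = ~((a | (~a + 1)) >> W) & 1;  /* claimed: 1 iff a == 0 */
        int h = ~((a >> W) << 1);            /* claimed: (a < 0) ? 1 : -1 */
        printf("a = %11d   g = %d (a==0 gives %d)   h = %2d (expected %2d)\n",
               a, g, (a == 0), h, a < 0 ? 1 : -1);
    }
    return 0;
}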
So, I got a strange problem.
Below is my code; it is a simple Euler method for integrating a linear system of ODEs.
function [V, h, n, b, z] = hodgkin_huxley_CA1(t, Iapp, t_app)
%function based on the CA1 pyramidal neuron model:
h = zeros(length(t));
n = zeros(length(t));
b = zeros(length(t));
z = zeros(length(t));
V = zeros(length(t));
% Initial conditions
h(1) = 0.9771;
n(1) = 0.0259;
b(1) = 0.1787;
z(1) = 8.0222e-04;
V(1) = -71.2856;
% Euler method
delta_t = t(2) - t(1);
for i=1:(length(t)-1)
    h(i+1) = h(i) + delta_t*feval(@h_prime,V(i),h(i));
    n(i+1) = n(i) + delta_t*feval(@n_prime,V(i),n(i));
    b(i+1) = b(i) + delta_t*feval(@b_prime,V(i),b(i));
    z(i+1) = z(i) + delta_t*feval(@z_prime,V(i),z(i));
    minf = m_inf(V(i));
    pinf = p_inf(V(i));
    ainf = a_inf(V(i));
    if (t(i) >= t_app(1) && t(i) <= t_app(2))
        I = Iapp;
    else
        I = 0;
    end;
    V(i+1) = V(i) + delta_t*feval(@V_prime,V(i),h(i),n(i),b(i),z(i),minf,pinf,ainf,I);
end;
So, this function returns 5 arrays: V, h, n, b and z. The problem is that if I index V(1), V(2), V(3), ..., I get the expected result. But when I tell MATLAB to print the whole array, I see all values as 0.
So, if I plot this array, I get 2 curves: one that is the right one, and one that is zero.
Note that this also happens to all the other variables h, n, b and z.
Anyone knows what may be happening?
Your outputs are meant to be vectors; however, you are initializing them as square matrices.
You can simply replace:
h = zeros(length(t));
n = zeros(length(t));
b = zeros(length(t));
z = zeros(length(t));
V = zeros(length(t));
With
h = zeros(length(t),1);
n = zeros(length(t),1);
b = zeros(length(t),1);
z = zeros(length(t),1);
V = zeros(length(t),1);
The zeros function, with one input, creates a 2D square matrix of that size. With two or more inputs, it interprets the inputs as the sizes of the individual dimensions. For example:
>> x = zeros(3)
x =
0 0 0
0 0 0
0 0 0
>> x = zeros(3,1)
x =
0
0
0