I'm working on a puzzle right now: trying to write
if (x==5 || x==7)
with bitwise operations (in C). I've been working on it for a while and can't figure it out.
Any help would be appreciated! Thanks.
PS: this isn't homework; I'm studying for a test.
EDIT: the format would be something like
if (x _ _), with a bitwise operation in the blanks.
SORRY, I need to specify: it can only be two characters (an operator and a numeric value),
so %8, for example.
7d = 111b and 5d = 101b
So bit 0 must be on, bit 1 is don't care, bit 2 must be on and bits 3-31 must be off.
So, mask out bit 1 and test for 101b
So your test becomes ((x & ~2) == 5).
Then look up "Karnaugh maps" (on Bing or Wikipedia, say) so you can do your own expression reduction.
Tom's answer below is simpler. You could write
((x & 5) == 5)
which may be marginally faster, but note that it also matches values such as 13 or 15 that have extra bits set, so it is only equivalent to the original test if x is known to stay in the 0-7 range. Perhaps I should have used a Karnaugh map!
You can AND it with '101' and you get the same results for both 5 and 7, which is 101.
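To see the caveat concretely, here is a small, purely illustrative C check (not part of the original thread) that brute-forces a range of values and prints any x where a bitwise test disagrees with the logical one. It shows that (x & ~2) == 5 matches the original condition exactly, while (x & 5) == 5 also accepts 13, 15, 21 and so on:

#include <stdio.h>

int main(void) {
    for (int x = 0; x < 32; x++) {
        int logical = (x == 5 || x == 7);
        int masked  = ((x & ~2) == 5);  /* ignore bit 1, require the rest to be 101b */
        int anded   = ((x & 5) == 5);   /* only checks that bits 0 and 2 are set */
        if (logical != masked || logical != anded)
            printf("x=%2d  logical=%d  (x&~2)==5 -> %d  (x&5)==5 -> %d\n",
                   x, logical, masked, anded);
    }
    return 0;
}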
I am very new to C programming and we are studying AND/OR truth tables. I get the general idea of how something like 0xC & 5 comes out to 4, but I am very confused about the purpose of this. What is the end goal with trues and falses? What is the question you are seeking to answer when you evaluate such expressions? I'm sorry if this is a dumb question, but I feel like it is so basic that no one has a clear answer for it.
You need to understand the difference between & and &&; they are very different things.
`&` is a bitwise operation: 0b1111 & 0b1001 = 0b1001
i.e. it performs a logical AND on each pair of bits. You give the example
0x0C & 0x05 = 0b00001100 & 0b00000101 = 0b00000100 = 4
&& is a way of combining logical tests; it's close to the English word 'and':
if(a==4 && y ==9) xxx();
xxx will only be executed if a is 4 and y is 9.
What creates confusion is that logical values in C map to 0 (false) and non-zero (true). So for example try this:
int a = 6;
int j = (a == 4);
you will see that j is 0 (because the condition a==4 is false).
So you will see things like this:
if(a & 0x01) yyy();
this performs the & operation and checks whether the result is zero or not. So yyy will be executed if the last bit of a is one, i.e. if a is odd.
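Here is a minimal, self-contained C program (a sketch based on the snippets above, not code from the question) that prints the results of the bitwise and logical forms so the difference is visible:

#include <stdio.h>

int main(void) {
    int a = 0x0C;   /* 0b00001100 */
    int b = 0x05;   /* 0b00000101 */

    /* Bitwise AND: operates on each pair of bits. */
    printf("a & b  = %d\n", a & b);    /* 0b00000100 = 4 */

    /* Logical AND: treats each operand as true (non-zero) or false (zero). */
    printf("a && b = %d\n", a && b);   /* 1, because both operands are non-zero */

    /* A comparison yields 0 or 1, which is why it can be stored in an int. */
    int j = (a == 4);
    printf("j      = %d\n", j);        /* 0, since a is 12 */

    /* Using & directly as a condition: tests the lowest bit, i.e. oddness. */
    if (a & 0x01)
        printf("a is odd\n");
    else
        printf("a is even\n");
    return 0;
}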
So here is the problem: I'm building arrays of 128 bits, in C, to represent beat divisions. Those divisions can be logically AND/OR/XOR'd against one another, so you can have results like below. The problem I'm having is how to determine when a pattern repeats, and what the start/end indices of the first repeated section are, so that I can loop over just that section to prevent strange things happening when I reach the max (currently 128).
It seems like I'm going to need to increase the size to 256 or larger to account for situations where more complex nested logic creates patterns that don't repeat for a while. I'm looking for advice on how to detect the pattern algorithmically within an array of bits.
2: 01010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101
3: 00100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100100
3 OR 2: 01110101110101110101110101110101110101110101110101110101110101110101110101110101110101110101110101110101110101110101110101110101
3 AND 2: 00000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100
3 XOR 2: 01110001110001110001110001110001110001110001110001110001110001110001110001110001110001110001110001110001110001110001110001110001
|| || || || || ...
5: 00001000010000100001000010000100001000010000100001000010000100001000010000100001000010000100001000010000100001000010000100001000
5 OR (3 XOR 2): 01111001110001110001110011110101111001110001110001110011110101111001110001110001110011110101111001110001110001110011110101111001
|| || || || ||
5 OR (3 OR 2): 01111101110101110101110111110101111101110101110101110111110101111101110101110101110111110101111101110101110101110111110101111101
|| || || ||
5: 00001000010000100001000010000100001000010000100001000010000100001000010000100001000010000100001000010000100001000010000100001000
6: 00000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100000100
5 OR 6: 00001100010100100101000110000100001100010100100101000110000100001100010100100101000110000100001100010100100101000110000100001100
|| || || || ||
7: 00000010000001000000100000010000001000000100000010000001000000100000010000001000000100000010000001000000100000010000001000000100
7 XOR (5 OR 6): 00001110010101100101100110010100000100010000100111000111000100101100000100101101000110000110001101010100000101010110001100001000
shoot, 7 XOR (5 OR 6) doesn't repeat within 128 bits..
8: 00000001000000010000000100000001000000010000000100000001000000010000000100000001000000010000000100000001000000010000000100000001
16: 00000000000000010000000000000001000000000000000100000000000000010000000000000001000000000000000100000000000000010000000000000001
To provide a little more context, this is for a logical clock divider that I've written for a musical module (https://llllllll.co/t/chrono-sage-meadowphysics-logical-clock-divider-v1-2-5/27182), and the issue I'm trying to solve is getting these logical/nested combinations of beat divisions to line up so there is no stutter when a pattern wraps around.
Ian Abbott commented that the "repeating pattern should be no longer than the least common multiple of M and N" (M and N being the two metres, I assume, although you've shown examples combining more than two). In fact, the naive combination of more than one metre is exactly a repeating pattern of length equal to their least common multiple.
But it looks like you are making additional, arbitrary choices, indicated by the lines in the examples, about where you'd like the "downbeat." On the face of it, without more clarification, we would be hard pressed to tell a computer what these criteria are. What are you thinking about (or better yet, hearing) that advises you to place the downbeats where you did, rather than at the start in the examples? Or is the issue that we don't have access to the original metres that were combined?
3/4 x . x|x . x|x . x|x . x
4/4 x . x x|x . x x|x . x x
(lcm = 12)
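To make the least-common-multiple observation concrete: if each input pattern repeats every M and N steps, any bitwise combination of them repeats with period lcm(M, N), and for more than two metres you take the lcm of all of them. Here is a hedged C sketch of how the loop length could be computed (the function names are hypothetical, not taken from the module's code):

#include <stdio.h>

/* Greatest common divisor via Euclid's algorithm. */
static unsigned gcd(unsigned a, unsigned b) {
    while (b != 0) {
        unsigned t = a % b;
        a = b;
        b = t;
    }
    return a;
}

/* Least common multiple of two periods. */
static unsigned lcm(unsigned a, unsigned b) {
    return a / gcd(a, b) * b;
}

int main(void) {
    /* Example: combining divisions 7, 5 and 6, as in "7 XOR (5 OR 6)".
       The combined pattern repeats every lcm(7, lcm(5, 6)) = 210 steps,
       which is why it cannot repeat within 128 bits. */
    unsigned loop_length = lcm(7, lcm(5, 6));
    printf("loop length = %u\n", loop_length);   /* prints 210 */
    return 0;
}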
Is there some advantage of writing
t = linspace(0,20,21)
over
t = 0:1:20
?
I understand the former produces a vector, as the latter does.
Can anyone give me a situation where linspace is useful over t = 0:1:20?
It's not just the usability. Though the documentation says:
The linspace function generates linearly spaced vectors. It is
similar to the colon operator :, but gives direct control over the
number of points.
that is not the whole story. The main difference, and advantage, of linspace is that it generates a vector of integers with the desired length (or 100 by default) and scales it afterwards to the desired range. The colon operator : creates the vector directly by accumulating increments.
Imagine you need to define bin edges for a histogram, and in particular you need the bin edge 0.35 to be exactly in its right place:
edges = [0.05:0.10:.55];
X = edges == 0.35
edges = 0.0500 0.1500 0.2500 0.3500 0.4500 0.5500
X = 0 0 0 0 0 0
does not define the right bin edge, but:
edges = linspace(0.05,0.55,6); %// 6 = (0.55-0.05)/0.1+1
X = edges == 0.35
edges = 0.0500 0.1500 0.2500 0.3500 0.4500 0.5500
X = 0 0 0 1 0 0
does.
Well, it's basically a floating point issue, which can be avoided by linspace, since a single division of an integer is not as delicate as the cumulative sum of floating point numbers. But as Mark Dickinson pointed out in the comments:
You shouldn't rely on any of the computed values being exactly what you expect. That is not what linspace is for.
In my opinion it's a matter of how likely you are to run into floating point issues and how much you can reduce their probability, or how small you can set the tolerances. Using linspace can reduce the probability of these issues occurring; it's not a guarantee.
That's the code of linspace:
n1 = n-1
c = (d2 - d1).*(n1-1) % opposite signs may cause overflow
if isinf(c)
    y = d1 + (d2/n1).*(0:n1) - (d1/n1).*(0:n1)
else
    y = d1 + (0:n1).*(d2 - d1)/n1
end
To sum up: linspace and colon are reliable at doing different tasks. linspace tries to ensure (as the name suggests) linear spacing, whereas colon tries to ensure symmetry.
In your special case, as you create a vector of integers, there is no advantage to linspace (apart from usability), but when it comes to delicate floating point tasks, there may be.
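The construction difference can also be sketched in a few lines of C (used here only to make the arithmetic explicit; this is a simplification of both methods, not MATLAB's actual code). One value is built by multiplying a precomputed step, the other by scaling the integer index and dividing once at the end; whether a particular target such as 0.35 is hit exactly depends on the values and the rounding, so treat the output as illustrative rather than a guarantee:

#include <stdio.h>

int main(void) {
    const int n = 6;                    /* number of points */
    const double d1 = 0.05, d2 = 0.55;  /* interval endpoints */
    const double step = (d2 - d1) / (n - 1);

    for (int i = 0; i < n; i++) {
        /* "colon-like": start point plus i copies of the rounded step */
        double by_step  = d1 + i * step;
        /* "linspace-like": scale the integer index, one division at the end */
        double by_scale = d1 + (double)i * (d2 - d1) / (n - 1);
        printf("%d: %.17g  %.17g  diff=%g\n",
               i, by_step, by_scale, by_step - by_scale);
    }
    return 0;
}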
The answer of Sam Roberts provides some additional information and clarifies further things, including some statements of MathWorks regarding the colon operator.
linspace and the colon operator do different things.
linspace creates a vector of integers of the specified length, and then scales it down to the specified interval with a division. In this way it ensures that the output vector is as linearly spaced as possible.
The colon operator adds increments to the starting point, and subtracts decrements from the end point to reach a middle point. In this way, it ensures that the output vector is as symmetric as possible.
The two methods thus have different aims, and will often give very slightly different answers, e.g.
>> a = 0:pi/1000:10*pi;
>> b = linspace(0,10*pi,10001);
>> all(a==b)
ans =
0
>> max(a-b)
ans =
3.5527e-15
In practice, however, the differences will often have little impact unless you are interested in tiny numerical details. I find linspace more convenient when the number of gaps is easy to express, whereas I find the colon operator more convenient when the increment is easy to express.
See this MathWorks technical note for more detail on the algorithm behind the colon operator. For more detail on linspace, you can just type edit linspace to see exactly what it does.
linspace is useful where you know the number of elements you want rather than the size of the "step" between them. So if, as a contrived example, I said make a vector with 360 elements between 0 and 2*pi, it's either going to be
linspace(0, 2*pi, 360)
or if you just had the colon operator you would have to manually calculate the step size:
0:(2*pi - 0)/(360-1):2*pi
linspace is just more convenient
For a simple real world application, see this answer where linspace is helpful in creating a custom colour map
I am learning about left shift operators, and to multiply a number by 10 I am using this code.
long int num=a<<3+a<<1;
so that the number a is first multiplied by 8 and then by 2, and adding the two gives a*10, which is stored in num.
But it's giving some strange results: for 5 it's 2560, for 6 it's 6144.
Can anyone please explain what's wrong with that implementation?
You have a problem with precedence: the order in which operators are applied. + binds more tightly than <<, so:
a<<3+a<<1
actually means: (a << (3+a)) << 1
for 5 that is 5 << 8 << 1 which is 2560 :)
You need: (a<<3) + (a<<1)
See: http://www.swansontec.com/sopc.html for clarification.
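Here is a small, self-contained C check (not from the original post) showing both the unparenthesised and the parenthesised versions, so you can see where 2560 and 6144 come from:

#include <stdio.h>

int main(void) {
    for (long a = 5; a <= 6; a++) {
        /* Parsed as (a << (3 + a)) << 1, because + binds more tightly than <<. */
        long wrong = a << 3 + a << 1;
        /* Parentheses force the intended (a*8) + (a*2) = a*10. */
        long right = (a << 3) + (a << 1);
        printf("a=%ld  a<<3+a<<1 = %ld   (a<<3)+(a<<1) = %ld\n", a, wrong, right);
    }
    return 0;
}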
The expression you are using is actually parsed this way:
num = a << (3+a) << 1;
Separate the two applications of the shift operator by using parentheses, like:
num = (a<<3) + (a<<1);
What about the compiler warning: suggest parentheses around ‘+’ inside ‘<<’?
+ is processed before <<.
Use (a<<3)+(a<<1)
The << operator has lower precedence than the + operator (rule of thumb: Unary, Arithmetic, Relational, Logical),
so use parentheses:
int num = (a<<3) + (a<<1);
Pretty simple, really. I want to negate an integer which is represented in 2's complement, and to do so, I need to first flip all the bits in the byte. I know this is simple with XOR--just use XOR with a bitmask 11111111. But what about without XOR? (i.e. just AND and OR). Oh, and in this crappy assembly language I'm using, NOT doesn't exist. So no dice there, either.
You can't build a NOT gate out of AND and OR gates.
As I was asked to explain, here it is, nicely formatted. Say you have any number of AND and OR gates, and your inputs are A, 0 and 1. The cases worth tabulating are the ones that pair A with one of the three signals (pairs of constants obviously just produce constants), and with two gates that makes six possibilities:
Operation   Result
A AND A     A
A AND 1     A
A AND 0     0
A OR A      A
A OR 1      1
A OR 0      A
So after you feed any of your signals into the first gate, your new set of signals is still just A, 0 and 1. Therefore any combination of these gates and signals will only ever get you A, 0 and 1. If your final output is A, then for both values of A it won't equal !A; if your final output is 0, then A = 0 is a value for which your output is not !A, and likewise for a final output of 1.
Edit: that monotonicity comment is also correct! Let me repeat it here: if you change any of the inputs of AND / OR from 0 to 1, then the output won't decrease. Therefore, if you claim to build a NOT gate, I will change your input from 0 to 1; your output can't decrease, but it should, which is a contradiction.
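That monotonicity argument can be checked mechanically. Here is a small, purely illustrative C program that enumerates the single-bit cases and confirms that AND and OR never decrease when an input is raised from 0 to 1, while NOT does; since a composition of monotone gates is itself monotone, no wiring of AND and OR can ever compute NOT:

#include <stdio.h>

int main(void) {
    const char *names[] = { "AND", "OR", "NOT(a)" };
    for (int g = 0; g < 3; g++) {
        int monotone = 1;
        /* For every pair (a, b) and every way of raising inputs from 0 to 1,
           check that the gate's output does not decrease. */
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                for (int a2 = a; a2 <= 1; a2++)
                    for (int b2 = b; b2 <= 1; b2++) {
                        int before = (g == 0) ? (a & b)   : (g == 1) ? (a | b)   : !a;
                        int after  = (g == 0) ? (a2 & b2) : (g == 1) ? (a2 | b2) : !a2;
                        if (after < before) monotone = 0;
                    }
        printf("%s is %smonotone\n", names[g], monotone ? "" : "not ");
    }
    return 0;
}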
Does (foo & ~bar) | (~foo & bar) do the trick?
Edit: Oh, NOT doesn't exist. Didn't see that part!