How to set a particular bit to 1 in an array of, say, 16 bits (15:0) and clear the remaining bits at the same time

I have an array of, let's say, 16 bits (15:0). I have a registered 4-bit variable, say 'pos', that changes based on other conditions. Depending on the value of pos I want to set one bit and clear all the remaining bits to 0. For example, if pos = 5, bit 5 should become 1 and all others should be cleared: 0000 0000 0010 0000 is the desired value. This should be synthesizable in SystemVerilog.
I am able to set the desired bit, but clearing the remaining bits has been a challenge.

Use the logical shift operator to set the bit; shifting a single 1 left by pos leaves every other bit cleared:
value = 16'b1 << pos;

1) See the other answer that uses the shift.
2) For more complex cases, for example if you want to keep the right or left part of another value after or before the 1, you can simply assign the full answer/result for each of the 16 possible values of pos with a case statement.
3) You can also use a loop over all bits, giving each bit its own logical equation: 1 if its index equals the counter, 0 if above, the source bit if below, and so on (see the sketch below for the per-bit idea).
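Item 3 can be sketched concretely. The question asks for SystemVerilog, but the per-bit equation is language-independent, so here is a small C illustration (my own throwaway example, not synthesizable code) that builds the 16-bit one-hot value two ways, once with the shift and once with a per-bit equation, and checks that they agree:
#include <stdint.h>
#include <stdio.h>

int main(void) {
    for (unsigned pos = 0; pos < 16; pos++) {
        uint16_t by_shift = (uint16_t)(1u << pos);        // approach 1: shift a single 1 into place
        uint16_t by_equation = 0;
        for (unsigned i = 0; i < 16; i++)                 // approach 3: one equation per bit
            by_equation |= (uint16_t)((i == pos) << i);   // bit i is 1 exactly when i == pos
        printf("pos=%2u  shift=%04x  equation=%04x  %s\n",
               pos, by_shift, by_equation,
               by_shift == by_equation ? "match" : "MISMATCH");
    }
    return 0;
}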

Related

How to configure my CAN filter in list mode?

I have written some code to transmit/receive CAN messages and I am having some issues with my filter. Firstly I'm going to say that I understand mask mode and have managed to get it working with the following configuration:
uint16_t id = 0x12; // 0001 0010
uint16_t mask = 0xFC; // 1111 1100
sFilterConfig.FilterBank=0;
sFilterConfig.FilterMode=CAN_FILTERMODE_IDMASK;
sFilterConfig.FilterScale=CAN_FILTERSCALE_32BIT;
sFilterConfig.FilterIdHigh=id<<5;
sFilterConfig.FilterIdLow=0;
sFilterConfig.FilterMaskIdHigh=mask<<5;
sFilterConfig.FilterMaskIdLow=0;
sFilterConfig.FilterFIFOAssignment=0;
sFilterConfig.FilterActivation=ENABLE;
HAL_CAN_ConfigFilter(&hcan1, &sFilterConfig);
This accepts messages with ID 0x1X, where X is 0 to 3. I don't really understand the purpose of the final 2 bits of the ID, since they are irrelevant to the mask; is my thinking there correct? Anyway, that's not the main issue.
Now, having read through RM0090, I'm trying to build a filter that will accept messages with IDs 0x120 to 0x1FA with the code below:
uint16_t id = 0x120; // 0001 0010 0000
uint16_t mask = 0x1FA; // 0001 1111 1010
sFilterConfig.FilterBank=0;
sFilterConfig.FilterMode=CAN_FILTERMODE_IDLIST;
sFilterConfig.FilterScale=CAN_FILTERSCALE_16BIT;
sFilterConfig.FilterIdHigh=mask<<5;
sFilterConfig.FilterIdLow=id<<5;
sFilterConfig.FilterMaskIdHigh=0;//mask<<5;
sFilterConfig.FilterMaskIdLow=0;
sFilterConfig.FilterFIFOAssignment=0;
sFilterConfig.FilterActivation=ENABLE;
HAL_CAN_ConfigFilter(&hcan1, &sFilterConfig);
It doesn't work as expected; it only seems to accept IDs 0x120 and 0x00. Is my understanding of list mode incorrect, or my filter implementation, or both?
EDIT:
My understanding of mask/list mode was wrong. I understand how to use masks, but I thought list mode could be used to create a range of acceptable IDs; it turns out list mode only captures a couple of specific IDs. I found this page quite helpful.
As the page I linked above says you can only get ranges in the form 2^N - (2^(N-1) - 1).
My question now becomes: what is the point of mask low/high and filter ID low/high? Initially I thought maybe it's the lower/higher 16 bits of the 32-bit register, but each low/high variable is already uint32, so that idea didn't make sense to me. Any clarity will be appreciated.
Cheers!
I guess you are mixing up the filter and the mask:
The filter mask is used to determine which bits in the identifier of the received frame are compared with the filter:
If a mask bit is set to zero, the corresponding ID bit is automatically accepted, regardless of the value of the filter bit.
If a mask bit is set to one, the corresponding ID bit is compared with the value of the filter bit; if they match, it is accepted, otherwise the frame is rejected.
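To make the mask-mode rule concrete, here is a small C sketch (the helper name is mine, not part of the HAL) of the acceptance test the filter effectively performs: only the ID bits whose mask bit is 1 are compared with the filter.
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

// True if rx_id passes a mask-mode filter: every ID bit whose mask bit is 1
// must match the filter bit; bits whose mask bit is 0 are don't-cares.
static bool id_accepted(uint16_t rx_id, uint16_t filter_id, uint16_t mask) {
    return ((rx_id ^ filter_id) & mask) == 0;
}

int main(void) {
    // The working example from the question: id 0x12, mask 0xFC.
    for (uint16_t rx = 0x10; rx <= 0x15; rx++)
        printf("0x%02X -> %s\n", rx,
               id_accepted(rx, 0x12, 0xFC) ? "accepted" : "rejected");
    return 0;   // 0x10..0x13 are accepted, 0x14 and 0x15 are rejected
}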

Generate permutations with k fixed bits

Suppose I have several N-bit numbers where K (1 < K < N) bits are fixed (i.e. 0 or 1). My goal is to generate all possible permutations.
Example: N = 3, K = 1 (middle bit is fixed to '0'). Then possible permutations are
000
001
100
101
Let's say I have number X=000 and array fixed={-1,0,-1} that stores information of fixed bits (-1 = bit not fixed, 0 or 1 = fixed).
A simple solution is to generate all permutations
000, 001, ..., 111, loop through each one bit by bit, and test whether all the fixed bits have the correct value (stored in fixed). If at least one fixed bit differs from the corresponding value in fixed, that permutation is removed from the result.
This is, however, inefficient because it goes through 2^N instead of 2^(N-K) permutations. Is there an algorithm or approach to this problem that needs only the 2^(N-K) permutations (which end up directly in the result)?
A simple bit trick solves this problem efficiently.
Make two binary masks:
A, where all fixed bits are cleared (both fixed zeros and fixed ones!) and the other bits are set
B, where the fixed ones are set
For example, x01x gives A = 1001, B = 0010.
Traverse all submasks of A and set the fixed ones with B before output:
sm = A
repeat:
    out = sm or B
    //use out bit combination
    if (sm = 0) stop    //otherwise the all-zero submask would be skipped
    sm = (sm - 1) & A
This method generates all the needed bit combinations without any excess steps.
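A minimal C sketch of the same trick, assuming 16-bit values and using the A/B masks from the x01x example (function and variable names are mine):
#include <stdint.h>
#include <stdio.h>

// Print every value that has the fixed ones from B set, the other fixed bits
// clear, and every combination of the free bits marked by A.
static void enumerate(uint16_t A, uint16_t B) {
    uint16_t sm = A;
    for (;;) {
        printf("%04x\n", sm | B);       // set the fixed ones before output
        if (sm == 0)                    // the all-zero submask is the last one
            break;
        sm = (sm - 1) & A;              // step to the next submask of A
    }
}

int main(void) {
    enumerate(0x9, 0x2);                // pattern x01x: prints 000b, 000a, 0003, 0002
    return 0;
}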

What is the difference between bitmask and bitmap in C

What is the conceptual difference between them? I know a bitmap is sort of like bitfields in a struct..
struct{
int bit1: 1;
int bit2: 1;
int bit3: 1;
};
So in that case, is a bitmask something we define as an enum?
A bitmask is an integer type that is used to "mask" certain bits when performing bitwise operations. For example, the bitmask 0xFFFFFFFF might be used to mask a 32-bit unsigned value because you want to operate on all bits at once, whereas 0x00000001 would only operate on the very last bit. You often see bitmasks defined as the 'flipped' version and then flipped using ~.
A bitmap, on the other hand, is a set of variables each mapped to an individual bit. There are many ways of achieving this, your struct is one (common) example of a bitmap.
You might put various masks in an enum to give yourself easier access to them, but it's not strictly necessary to do so.
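For instance, here is a small C sketch (names are my own) of masks kept in an enum and then used to set, clear, and test bits in a flags variable:
#include <stdint.h>
#include <stdio.h>

enum {                        // bitmasks, one bit per flag
    FLAG_READY   = 1u << 0,   // 0x01
    FLAG_ERROR   = 1u << 1,   // 0x02
    FLAG_TIMEOUT = 1u << 2    // 0x04
};

int main(void) {
    uint32_t flags = 0;
    flags |= FLAG_READY | FLAG_TIMEOUT;     // set bits with a mask
    flags &= ~FLAG_TIMEOUT;                 // clear a bit with the flipped (~) mask
    printf("ready=%d error=%d timeout=%d\n",
           (flags & FLAG_READY)   != 0,     // test bits with a mask
           (flags & FLAG_ERROR)   != 0,
           (flags & FLAG_TIMEOUT) != 0);    // prints ready=1 error=0 timeout=0
    return 0;
}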
A bitmap is more the data itself, which is made up of bits.
One example could be: say there are 8 friends in a group, and the group performs various activities together.
The group's participation in each activity can then be represented by a "bitmap", made up of bits (one for each friend).
e.g.
Skii - 10110000 <<<<friend 5,6 and 8 will go to Skii
Movie - 10011000 <<< friend 4,5 and 8 will go to movie
College- 11111111 <<<all friends will go to college
A bitmask is more of a skeleton for the bitmap; a bitmask is used to set and get bit values in the bitmap.
friend1 - 00000001<<<< bitmask for friend 1
friend2 - 00000010 <<<bitmask for friend 2
friend5 - 00010000
(Note: I found it awkward to address friend0 :), purists may consider everything as n-1.)
Now, using the bitmap and the bitmasks, we can determine:
Is friend1 going to Skii?
Skii & friend1 <<<< this would be zero, so no
Is friend5 going to the Movie?
Movie & friend5 <<< non-zero, so yes
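The friends example translates directly into C; a sketch (all names are mine) with the bitmaps as data and the bitmasks as an enum:
#include <stdint.h>
#include <stdio.h>

// Bitmasks: one bit per friend (friend1 is bit 0, ..., friend8 is bit 7).
enum {
    FRIEND1 = 1u << 0,
    FRIEND4 = 1u << 3,
    FRIEND5 = 1u << 4,
    FRIEND8 = 1u << 7
};

int main(void) {
    // Bitmaps: the data, one participation bit per friend.
    uint8_t skii  = 0xB0;   // 1011 0000: friends 5, 6 and 8
    uint8_t movie = 0x98;   // 1001 1000: friends 4, 5 and 8

    printf("friend1 going to Skii?  %s\n", (skii  & FRIEND1) ? "yes" : "no");  // no
    printf("friend5 going to Movie? %s\n", (movie & FRIEND5) ? "yes" : "no");  // yes
    return 0;
}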

Perfmon, how to combine FirstValueA and FirstValueB?

I am using Performance Monitor to collect the counter data and save it to the DB. Here is the DB structure defined on MSDN: http://msdn.microsoft.com/en-us/library/windows/desktop/aa371915(v=VS.85).aspx
Based on DB structure, here is the definition of the FirstValueA:
Combine this 32-bit value with the value of FirstValueB to create the
FirstValue member of PDH_RAW_COUNTER. FirstValueA contains the low
order bits.
And the FirstValueB:
Combine this 32-bit value with the value of FirstValueA to create the
FirstValue member of PDH_RAW_COUNTER. FirstValueB contains the high
order bits.
The fields FirstValueA and FirstValueB should be combined to create the FirstValue, and similarly the SecondValue.
How do you combine FirstValueA and FirstValueB to get the FirstValue in SQL Server?
So what they're saying is that you need to commingle the two, like this:
//for reference, this is 32 bits
12345678901234567890123456789012
000000000000000000000FirstValueA
000000000000000000000FirstValueB
What it says is we need to combine the two. It says that A is the low order, and B is the high order.
Let's refer to Wikipedia for http://en.wikipedia.org/wiki/Least_significant_bit and see that the low order is on the --> right, and the high order is on the <-- left.
low order -> right
high order <- left
A -> right
B <- left
So we're going to end up with (our previous example)
//for reference, this is 32 bits
12345678901234567890123456789012
000000000000000000000FirstValueA
000000000000000000000FirstValueB
becomes
//the combined value is 64 bits
1234567890123456789012345678901234567890123456789012345678901234
000000000000000000000FirstValueB000000000000000000000FirstValueA
Now, that doesn't work if the values actually look like this:
//for reference, this is 64 bits
1234567890123456789012345678901234567890123456789012345678901234
1001101100110100101011010001010100101000010110000101010011101010
//the above string of 1's and 0's is more like what the example values really look like
What you're given is not two binary strings, but two integers. So you have to multiply the left (high-order) value by 2^32 and add it to the right (low-order) value. (That's a 64-bit field, by the way.)
Let's examine, though, why the low-order bits are on the right and the high-order bits are on the left:
Binary is written just like Arabic numerals. In Arabic numerals, the number:
123456
means one hundred twenty-three thousand, four hundred fifty-six. The one hundred thousand is the most significant part (given that we would shorten this to "just over one hundred thousand dollars" rather than "a lot over 6 dollars"), and the six is the part we most freely drop. So we could say that for this number:
123 is the value that contains the high-order digits, and 456 is the value that contains the low-order digits. Here we would multiply the high part by 10^3 before adding them together (this is a mathematical fact, not a guess, so trust me on this), because it would look like this:
123
456
and so the same works for the binary:
//for reference, this is 32 bits
12345678901234567890123456789012
000000000000000000000FirstValueB
000000000000000000000FirstValueA
tl;dr:
Multiply B by 2^32 and add to A
Console.WriteLine("{0} {1} {2} : {3} {4}", p.CategoryName, p.InstanceName, p.CounterName, p.RawValue, p.CounterType.GetHashCode());
float FirstValue = p.NextValue();
Console.WriteLine("FirstValueA :{0}", (ulong)FirstValue & 4294967295);
Console.WriteLine("FirstValueB :{0}", (ulong)FirstValue >> 32);
Console.WriteLine("SecondValueA :{0}", p.NextSample().TimeStamp & 4294967295);
Console.WriteLine("SecondValueB :{0}", p.NextSample().TimeStamp >> 32);

Bitmask to flip bits ... without XOR?

Pretty simple, really. I want to negate an integer which is represented in 2's complement, and to do so, I need to first flip all the bits in the byte. I know this is simple with XOR--just use XOR with a bitmask 11111111. But what about without XOR? (i.e. just AND and OR). Oh, and in this crappy assembly language I'm using, NOT doesn't exist. So no dice there, either.
You can't build a NOT gate out of AND and OR gates.
As I was asked to explain, here it is, nicely formatted. Let's say you have any number of AND and OR gates. Your inputs are A, 0 and 1. There are six cases worth checking: the three pairs that involve A (A with A, A with 0, A with 1) times the two gates; combining only the constants 0 and 1 never produces anything new. Now:
Operation   Result
A AND A     A
A AND 1     A
A AND 0     0
A OR A      A
A OR 1      1
A OR 0      A
So after you feed any of your signals into the first gate, your new set of signals is still just A, 0 and 1. Therefore any combination of these gates and signals will only ever get you A, 0 and 1. If your final output is A, then for both values of A it won't equal !A. If your final output is the constant 0, then A = 0 is a value for which your output is not !A; the same goes for the constant 1.
Edit: that monotonicity comment is also correct! Let me repeat it here: if you change any input of an AND / OR network from 0 to 1, the output won't decrease. Therefore, if you claim to have built a NOT gate, I will change your input from 0 to 1; your output can't decrease, but it should -- that's a contradiction.
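As a quick sanity check of the closure argument above, here is a small C sketch (entirely my own) that encodes each one-input function as a 2-bit truth table, starts from {A, 0, 1}, closes the set under AND and OR, and confirms that NOT A is never produced:
#include <stdbool.h>
#include <stdio.h>

// Truth tables over one input A, encoded in 2 bits:
// bit 0 = output when A=0, bit 1 = output when A=1.
// So: constant 0 = 0, NOT A = 1, A = 2, constant 1 = 3.
int main(void) {
    bool reachable[4] = { false, false, false, false };
    reachable[0] = reachable[2] = reachable[3] = true;   // start with 0, A, 1

    bool changed = true;
    while (changed) {                                    // close the set under AND and OR
        changed = false;
        for (int f = 0; f < 4; f++)
            for (int g = 0; g < 4; g++) {
                if (!reachable[f] || !reachable[g]) continue;
                int results[2] = { f & g, f | g };       // bitwise ops act pointwise on truth tables
                for (int k = 0; k < 2; k++)
                    if (!reachable[results[k]]) { reachable[results[k]] = true; changed = true; }
            }
    }
    printf("NOT A reachable from {A, 0, 1} using AND/OR only: %s\n",
           reachable[1] ? "yes" : "no");                 // prints "no"
    return 0;
}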
Does (foo & ~bar) | (~foo & bar) do the trick?
Edit: Oh, NOT doesn't exist. Didn't see that part!
