Qt Creator + GDB + MinGW - bitwise AND question - C

I tried to see how the results of a bitwise AND operation get evaluated. I am using Qt Creator + GDB + MinGW on Windows.
I did a simple test:
#define BITMASK_CAN_JUMP 1 << 0 // 0x0001
#define BITMASK_CAN_WALK 1 << 1 // 0x0010
unsigned int data = BITMASK_CAN_JUMP | BITMASK_CAN_WALK;
...
if (data & BITMASK_CAN_WALK) {
    printf("%d", data & BITMASK_CAN_WALK);
    printf("can walk\n");
}
...
Setting a watch in GDB on (data & 0x0010) gives me the value 0, because 0x0010 = 0b10000, which is correct. The if condition evaluates to true because the expression gets evaluated as 2. To me it seems like the debugger acts correctly by treating 0x0010 as a hexadecimal value, while the program itself gets some kind of implicit conversion, like converting the value behind data to a hexadecimal value. I don't understand why data doesn't get converted to hexadecimal too when using GDB.
Could somebody clear up the situation for me?
Best
Tom

Those are the correct values of the defines:
#define BITMASK_CAN_JUMP 1 << 0 // 0b0001
#define BITMASK_CAN_WALK 1 << 1 // 0b0010
You might at least want to put parentheses around the 1 << 0 and 1 << 1, but that's another story. Anyway, data becomes 0b10 | 0b01, which is 0b11, or 3 in decimal.
(data & BITMASK_CAN_WALK) is 0b11 & 0b10, which is 0b10, or 2 in decimal.
Therefore, the if (data & BITMASK_CAN_WALK) branch is taken, because the value is not 0 (in C++ it would be implicitly converted to true), and it prints the aforementioned 2. If you change the format specifier to %#010x you'll see that it indeed is 0x2.
Nothing here has the value 0x0010 (16 in decimal), maybe there's a display bug if you're seeing that value.
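For reference, here is a minimal compilable sketch of the corrected version, with parenthesized defines and a %#010x specifier to print the value in hex (the main wrapper is added just for illustration):
#include <stdio.h>

#define BITMASK_CAN_JUMP (1 << 0) /* 0b0001 == 1 */
#define BITMASK_CAN_WALK (1 << 1) /* 0b0010 == 2 */

int main(void)
{
    unsigned int data = BITMASK_CAN_JUMP | BITMASK_CAN_WALK; /* 0b0011 == 3 */

    if (data & BITMASK_CAN_WALK) {
        /* prints "2 0x00000002": the same value in decimal and in hex */
        printf("%u %#010x\n", data & BITMASK_CAN_WALK, data & BITMASK_CAN_WALK);
        printf("can walk\n");
    }
    return 0;
}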

What is the meaning of (a & b) >> c in this SystemC code? [duplicate]

When I read SystemC code, I found a function that returns an int like this:
static inline int rp_get_busaccess_response(struct rp_pkt *pkt)
{
    return (pkt->busaccess_ext_base.attributes & RP_BUS_RESP_MASK) >>
           RP_BUS_RESP_SHIFT;
}
pkt->busaccess_ext_base.attributes is defined as uint64_t.
RP_BUS_RESP_MASK and RP_BUS_RESP_SHIFT are defined as:
enum {
    RP_RESP_OK = 0x0,
    RP_RESP_BUS_GENERIC_ERROR = 0x1,
    RP_RESP_ADDR_ERROR = 0x2,
    RP_RESP_MAX = 0xF,
};
enum {
    RP_BUS_RESP_SHIFT = 8,
    RP_BUS_RESP_MASK = (RP_RESP_MAX << RP_BUS_RESP_SHIFT),
};
What is the meaning of this function's return value?
Thanks!
a & b is a bitwise operation: it performs a logical AND on each pair of bits. Say you have 262 & 261; this translates to 100000110 & 100000101 and the result is 100000100 (260). The logic behind the result is that each 1 AND 1 gives 1, whereas 1 AND 0 and 0 AND 0 give 0. These are the usual logical operations, just performed at bit level:
  100000110
& 100000101
-----------
  100000100
In (a & b) >> c, >> shifts the bits of the resulting value of a & b to the right by c positions. For example, take the previous result 100000100 with a c value of 8: all bits shift right by 8 and the result is 000000001. The leftmost 1 bit of the original value becomes the rightmost bit, whereas the third 1 bit from the right of the original value is shifted away.
With this knowledge in mind and looking at the function, we can see that the RP_BUS_RESP_MASK constant is a mask that selects the field of bits in the 9th through 12th positions (counting from the right, i.e. the first four bits of the second byte) by setting those mask bits to 1 (RP_RESP_MAX << RP_BUS_RESP_SHIFT translates to 1111 << 8, resulting in 111100000000). This preserves the bit values in that range and sets all the other bits of pkt->busaccess_ext_base.attributes to 0 when the bitwise & is performed against the mask. Finally it shifts this field to the right by RP_BUS_RESP_SHIFT (8).
It basically extracts the first four bits of the second byte of pkt->busaccess_ext_base.attributes and returns the result as an integer.
What is it for specifically? You must consult the documentation if it exists, or try to understand its use in the global context. From what I can see, this belongs to LibSystemCTLM-SoC (in case you didn't know).
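As a rough, self-contained sketch of the same mask-and-shift extraction (the enum values are copied from the question, but the standalone function name and the test value are illustrative, not the library's own code):
#include <stdint.h>
#include <stdio.h>

enum {
    RP_RESP_MAX       = 0xF,
    RP_BUS_RESP_SHIFT = 8,
    RP_BUS_RESP_MASK  = (RP_RESP_MAX << RP_BUS_RESP_SHIFT), /* 0x0F00 */
};

/* Keep only bits 8..11 of 'attributes' and move them down to bits 0..3. */
static inline int get_busaccess_response(uint64_t attributes)
{
    return (int)((attributes & RP_BUS_RESP_MASK) >> RP_BUS_RESP_SHIFT);
}

int main(void)
{
    uint64_t attributes = 0x0000000000000200ull; /* response code 0x2 in bits 8..11 */
    printf("%d\n", get_busaccess_response(attributes)); /* prints 2 */
    return 0;
}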
The function extracts the first 4 bits of the second byte of the 8-byte (64-bit) attribute. For example, for the attribute 0xFFFF FFFF FFFF FAFF it extracts those 4 bits, resulting in 0x0A:
First it creates the mask, which is RP_BUS_RESP_MASK = 0x0F00.
Next it applies the mask to the attribute: pkt->busaccess_ext_base.attributes & 0x0F00, resulting in 0x0A00 for the example.
Next it shifts that value 8 bits to the right, leading to 0x0A.
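A quick check of that worked example (the attribute value is just the one from the example above):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t attr = 0xFFFFFFFFFFFFFAFFull;
    printf("%#llx\n", (unsigned long long)((attr & 0x0F00) >> 8)); /* prints 0xa */
    return 0;
}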

Weird problem in Fujitsu Softune IDE - wrong calculation of 11 bit CAN ID

The code below is part of my code to read CAN ID in Rx callback:
tmpp = (((0x07 << 0x1D)|(0x368 << 0x12)) & 0x1FFC0000); //unsigned long long int tmpp - equal to 0xDA00000
if (CAN0_IF2ARB0_ID == tmpp) {
//do some action
}
The problem is that while the 29-bit CAN ID is 0xDA00000, the condition is not true. But when I directly set tmpp = 0xDA00000, the program successfully enters the if block. In fact, the calculation tmpp = (((0x07 << 0x1D)|(0x368 << 0x12)) & 0x1FFC0000); seems to have some problem (the value should be 0xDA00000, but in Softune it is not calculated correctly). I would be grateful if you could help me find the problem. Thanks.
0x07 is an int - perhaps even a 16-bit int. Use at least unsigned long constants for values to be shifted into a 32-bit value.
// tmpp = (((0x07 << 0x1D)|(0x368 << 0x12)) & 0x1FFC0000);
tmpp = ((0x07ul << 0x1D) | (0x368ul << 0x12)) & 0x1FFC0000u;
Left-shifting an integer constant such as 1 is almost always a bug, because integer constants in C have a type just like variables, and in most cases that type is int. Since int is a signed type, we cannot left-shift data into the sign bit without invoking undefined behavior. 0x07 << 0x1D does exactly that: it shifts data into bits 31 (the sign bit), 30 and 29.
Solve this by always adding a u suffix to all your integer constants.
Furthermore, you shouldn't use "magic numbers" but named constants. And in case you mean to shift something 29 bits, use decimal notation 29 since that's self-documenting code.
Your fixed code should look something like this (replace "MASKn" with something meaningful):
#define MASK1 (0x07u << 29)
#define MASK2 (0x368u << 18)
#define MASK3 (MASK1 | MASK2)
#define MASK4 0x1FFC0000u
if (CAN0_IF2ARB0_ID == (MASK3 & MASK4))
Also, an extended CAN identifier doesn't use those bits 31, 30, 29... so I have no idea what you are even doing here. If you seek to calculate some value for CAN acceptance filtering etc., then it would seem you have managed to confuse yourself with the original use of hex constants for the shift amounts.
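As a small host-side sketch of the fixed calculation (just to verify the arithmetic, not Softune-specific code; it should print da00000):
#include <stdio.h>

int main(void)
{
    /* Unsigned constants, so the shift by 29 never touches a sign bit. */
    unsigned long tmpp = ((0x07ul << 29) | (0x368ul << 18)) & 0x1FFC0000ul;
    printf("%lx\n", tmpp); /* prints da00000 */
    return 0;
}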

A bit value verification

I want to verify two bits (for example the bit number 3 and 5) values of a uint8
if their value is 0, I want to return 1
uint8 a;
if ((a & !(1<<3)) && (a & !(1<<5)))
{
Instructions..
}
Is this code correct?
No, your code won't work the way you want. The ! operator yields only 0 or 1, so the information about which bit was actually non-zero is lost. You may use something like this:
if (!(a & ((1 << 3) | (1 << 5)))) {
    /* ... */
}
At the first stage you create a mask with the | operator. This mask has non-zero bits only at the positions you are interested in. Then the mask is combined with the tested value via &. As a result you get 0 only if the value has zero bits at the tested positions. Then just invert that 0 into 1 with ! to obtain a true condition.
It is not correct.
The ! operator is a boolean NOT, not a bitwise NOT.
So, if you want to check that bits 3 and 5 are both zero, you should write:
uint8 a;
...
if (!(a & (1<<3)) && !(a & (1<<5)))
{
Instructions..
}
Further optimisation of the expression in the if is possible.
This is trivial if you don't attempt to write it as a single, messy expression. There is no advantage in doing so - contrary to popular belief, mashing as many operators as possible into a single line is actually very bad practice. It destroys readability and you gain no performance benefit whatsoever.
So start by creating a bit mask:
uint8_t mask = (1<<3) | (1<<5);
(The parentheses are actually not needed, but not everyone can cite the C operator precedence table in their sleep, so this is recommended style.)
Then check the data against the mask:
if (data & mask) // if any bit contains value 1
    return 0;
else             // if no bit contains value 1
    return 1;
Which, if you will, can be rewritten as a boolean expression:
return !(data & mask);
The complete function could look like this:
bool check_bits (uint8_t data)
{
    uint8_t mask = (1<<3) | (1<<5);
    return !(data & mask);
}
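For completeness, a quick usage check of that function (the main wrapper and the test values are just for illustration):
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

bool check_bits (uint8_t data)
{
    uint8_t mask = (1<<3) | (1<<5);
    return !(data & mask);
}

int main(void)
{
    printf("%d\n", check_bits(0x00)); /* 1: bits 3 and 5 are both 0 */
    printf("%d\n", check_bits(0x08)); /* 0: bit 3 is set            */
    printf("%d\n", check_bits(0xC7)); /* 1: only other bits are set */
    return 0;
}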
Your expression is wrong: you should not negate the masks this way, and you must ignore the other bits, so don't use negation at all. Simply:
(a & (1<<3)) + (a & (1<<5))
gives 0 if both are 0s.
Assuming that a actually is initialized, the expression will not work as you expect. The logical not operator ! gives you a one or a zero (boolean true or false), which you then use in a bitwise AND operation. That will not give you the correct result.
I suppose you meant to use the bitwise complement operator ~ instead, as in ~(1 << 3). Not that it would work anyway, as that would just check that any of the other bits in a is non-zero.
Instead check whether the bit is one, and then turn the logic around with the logical not operator !, as in !(a & 1 << 3).
No. The ! operator does logical negation, and since 1<<3 is not zero, !(1<<3) is zero. That means a & !(1<<3) will always be zero, and therefore the condition will never be true.
I think masking is a good way to do what you want:
uint8 a;
/* assign something to a */
return (a & ((1 << 3) | (1 << 5))) == 0;
a & ((1 << 3) | (1 << 5)) is a value in which the 3rd and 5th bits (0-origin) of a keep their original values and all other bits are turned to zero. Checking whether that value is zero means checking whether all of the bits under test are zero, without caring about the other bits. The == operator returns 1 if the two operands are equal and 0 otherwise.
If you want to test for some combination of BIT_A and BIT_B (or however many bits you have), you can do this:
#define BIT_A (1 << 3)
#define BIT_B (1 << 5)
...
#define BIT_Z (1 << Z)
...
/* |here you put all bits |here you put only the ones you want set */
/* V V */
if ((a & (BIT_A | BIT_B | ... | BIT_Z)) == (BIT_A | ... | BIT_I | ...))
{
/* here you will know that bits BIT_A,..., BIT_I,... will **only**
* be set in the mask of (BIT_A | BIT_B | ... | BIT_Z) */
}
since with a & (BIT_A | BIT_B | ... ) you force all bits not in the set to zero, so only the bits in the set keep their values. With the second mask, you generate a bit pattern with only the bits of the set you want to be set (and, of course, the bits that are not in the set forced to zero), so if you compare both values for equality, you get the expected result.
NOTE
As an answer to your question: the particular case in which you want all the bits equal to one is handled by making both masks equal. In your case, you want to check that both bits are zero, so your test is (the second mask has no bits set, so it is zero):
if ((a & ((1 << 3) | (1 << 5))) == 0) { ...
(All bits in the second mask are zero, as required, and both the third and the fifth bits are set in the first mask.) This can be written in a more compact form as:
if (!(a & 0x28)) { /* 0x28 is hexadecimal for binary 00101000, with the bits you require */
WHY THE CODE YOU WROTE IS NOT CORRECT
First, you mix logical operators like ! with bitmasks, making !(1<<3) evaluate to 0 (1<<3 is different from 0, so it is true, and negating it gives 0), and the same goes for the !(1<<5) subexpression. Masking a with those values gives you a & 0 ==> 0 and a & 0 ==> 0, and ANDing both together gives 0 && 0 ==> 0. So the result of your expression is always 0 -- always false, independent of the original value of a.
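A concrete, compilable instance of that pattern with just two bits (BIT_A, BIT_B and the test value are illustrative):
#include <stdint.h>
#include <stdio.h>

#define BIT_A (1u << 3)
#define BIT_B (1u << 5)

int main(void)
{
    uint8_t a = 0x08; /* bit 3 set, bit 5 clear */

    /* Within the {BIT_A, BIT_B} set, require exactly BIT_A to be set. */
    if ((a & (BIT_A | BIT_B)) == BIT_A)
        printf("only BIT_A is set within the mask\n");

    /* The case from the question: both bits clear. */
    if ((a & (BIT_A | BIT_B)) == 0)
        printf("bits 3 and 5 are both clear\n");

    return 0;
}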

C - function arguments and "<<" operator

I'm sorry the title isn't specific, but I don't know what this is called. Here is my question: in this code snippet, there are constants defined like this:
#define WS_NONE 0
#define WS_RECURSIVE (1 << 0)
#define WS_DEFAULT WS_RECURSIVE
#define WS_FOLLOWLINK (1 << 1) /* follow symlinks */
#define WS_DOTFILES (1 << 2) /* per unix convention, .file is hidden */
#define WS_MATCHDIRS (1 << 3) /* if pattern is used on dir names too */
and there is a function defined like this:
int walk_recur(char *dname, regex_t *reg, int spec)
He passes the constants (WS_DEFAULT and WS_MATCHDIRS) to the function combined with "|":
walk_dir(".", ".\\.c$", WS_DEFAULT|WS_MATCHDIRS);
this is how he uses the arguments:
if ((spec & WS_RECURSIVE))
walk_recur(fn, reg, spec);
if (!(spec & WS_MATCHDIRS)) continue;
If WS_RECURSIVE is passed to the function, the first if statement will be true. I don't get how the << operator works and how the (spec & WS_RECURSIVE) expression ends up being true. And how can he pass different constants combined with "|", and then use the spec value as if it were equal to the passed constants? How is that possible?
And sorry for my bad English.
It's a very common idiom for treating a single integer value as a collection of individual bits. C doesn't have direct support for bit arrays, so we use bitwise operators to set and clear the bits.
The << operator is a left-shift operator. For example:
1 << 0 == 1
1 << 1 == 2
1 << 2 == 4
1 << 3 == 8
1 << n for any non-negative n (within range) is a power of 2. Each bit in an integer value represents a power of 2, and any integer value can be treated as a unique sum of powers of 2.
| is the bitwise OR operator; it's used to combine multiple 1-bit values (powers of 2) into a single integer value:
(1 << 0) | (1 << 3) == 1 | 8
1 | 8 == 9
Here we combine bit zero (representing the value 1) and bit three (representing the value 8) into a single value 9. (We could have used + rather than | in this case, but in general using | avoids problems when some power of 2 is given more than once.)
Now we can test whether a bit is set using the bitwise and operator &:
int n = (1<<0) | (1<<3);
if (n & (1<<3)) {
    printf("Bit 3 is set\n");
}
else {
    printf("Bit 3 is not set\n");
}
Now we can define macros so we don't have to write 1<<0 and 1<<3 all over the place:
#define WS_RECURSIVE (1 << 0)
...
#define WS_MATCHDIRS (1 << 3)
int n = WS_RECURSIVE | WS_MATCHDIRS;
// n == 9
if (n & WS_RECURSIVE) {
    // the WS_RECURSIVE bit is set
}
if (!(n & WS_MATCHDIRS)) {
    // the WS_MATCHDIRS bit is *not* set
}
You could also define macros to simplify setting and testing bits (SET_BIT(), IS_SET(), etc.), but most C programmers don't bother to do so. Symbolic names for the bit values are important for code readability, but once you understand how the bitwise operators work, and more importantly how the common idioms for setting, clearing, and testing bits are written, the raw operators are readable enough.
It's usually better to use unsigned rather than signed integer types; the behavior of the bitwise operators on signed types can be tricky in some cases.
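For instance, the flag macros from the question could be written with unsigned constants simply by adding a u suffix (a sketch, not the original author's code):
#define WS_RECURSIVE  (1u << 0)
#define WS_FOLLOWLINK (1u << 1)
#define WS_DOTFILES   (1u << 2)
#define WS_MATCHDIRS  (1u << 3)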
The << operator is a bitwise left shift.
For example, 1 << 0 translates to 1 'left-shifted by' 0 bits. This is effectively a no-op, as 1 left-shifted by 0 bits is still the value 1.
To further clarify, let's look at the bitwise representation of a number (let's say the number is a 16-bit value, to illustrate):
1 -> 0b'0000000000000001
1 << 1 would be
2 -> 0b'0000000000000010
And so on.
The | operator is a bitwise or, so the WS_DEFAULT | WS_MATCHDIRS is translated to:
0b'0001 | 0b'1000
This yields the value 0b'1001 which is then passed to the walk_dir.
If you pass in only WS_RECURSIVE, the spec & WS_RECURSIVE test performs a bitwise AND of two identical non-zero values, which yields that same value, so the test is true.
AND Truth Table
0 & 0 = 0
0 & 1 = 0
1 & 0 = 0
1 & 1 = 1
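Putting the two answers together, here is a minimal self-contained sketch of the same flag-passing pattern (walk_dir_demo is a stand-in for the real walker, not the original code):
#include <stdio.h>

#define WS_NONE       0
#define WS_RECURSIVE  (1 << 0)
#define WS_DEFAULT    WS_RECURSIVE
#define WS_FOLLOWLINK (1 << 1)
#define WS_DOTFILES   (1 << 2)
#define WS_MATCHDIRS  (1 << 3)

/* 'spec' carries all the options in one int; each option is tested with '&'. */
static void walk_dir_demo(int spec)
{
    if (spec & WS_RECURSIVE)
        printf("recursive walk requested\n");
    if (!(spec & WS_MATCHDIRS))
        printf("pattern not applied to directory names\n");
}

int main(void)
{
    walk_dir_demo(WS_DEFAULT | WS_MATCHDIRS); /* spec == 0b1001 == 9 */
    return 0;
}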

What is bitwise OR used here for?

descriptor = limit & 0x000F0000;
descriptor |= (flag << 8) & 0x00F0FF00;
descriptor |= (base >> 16) & 0x000000FF;
descriptor |= base & 0xFF000000;
I understand that the AND operation is used for masking certain bits, but what is the OR operation used for here? Please elaborate.
This is part of the code for creating a Global Descriptor Table.
If you look at just a single bit, the truth table is given by
0 | 0 == 0
0 | 1 == 1
1 | 0 == 1
1 | 1 == 1
So, bitwise or sets a bit if and only if that bit is set in at least one of the operands.
When you use bitwise OR on a variable with more than a single bit, the above truth table is applied in a bitwise fashion.
So, suppose that you had two variables whose binary representations were
001101
011001
When you combine them with bitwise or, you collect all the bits that are set in either variable. So the result is
011101
The bitwise or operator is commonly used to add new flags to a set of bit flags. The value is used to represent a mathematical set. Each bit is assigned a particular meaning, that is associated with a member of the universal set. When the bit is 1, that member is included in the set, and when the bit is 0, the associated member is not in the set.
So, let us have a very simple example with a universal set having two members. Let us call the variable, controlState. Bit 0 represents the visible property, and bit 1 represents the enabled property. So, you can define flags like so
const int visibleFlag = 1; // 01 in binary
const int enabledFlag = 2; // 10 in binary
Then you can build the controlState variable like this:
int controlState = 0; // empty set
if (isVisible)
controlState |= visibleFlag;
if (isEnabled)
controlState |= enabledFlag;
It gets more interesting if you don't know whether or not a particular bit is set. So, you can ensure that the visible bit is set like this:
controlState = ...; // could set visible flag, or not ...
controlState |= visibleFlag;
It does not matter whether the original value of controlState included the flag or not. After this operation, it will be set for sure, and no other flags altered.
This is what is happening in your code example. So,
descriptor = limit & 0x000F0000;
initializes descriptor. Then
descriptor |= (flag << 8) & 0x00F0FF00;
adds (flag << 8) & 0x00F0FF00. And so on.
What the code you've shown is doing is constructing descriptor by selecting different parts of it from other masked expressions.
Notice that the constants that limit, (flag << 8), (base >> 16) and base are being ANDed with, when themselves ORed together, produce 0xFFFFFFFF.
The point of the OR is to say, "the first 8 bits come from (base >> 16), the next 8 bits from flag << 8, the next 4 from limit, the next 4 from flag << 8 and the last 8 from base." So finally, descriptor looks like this:
d[7], d[6], b[5], a[4], b[3], b[2], c[1], c[0]
Where each comma separated variable is a hexadecimal digit, and a, b, c, and d are
limit, (flag << 8), (base >> 16) and base respectively. (The commas are just there for readability, they stand for concatenation of the digits).
The use of |= here is essentially shorthand for the following
descriptor = descriptor | ((flag << 8) & 0x00F0FF00);
descriptor is a collection of values packed together as bitfields. This code is building it up from four values (limit, flag, and two parts of base). Each step shifts the value to the correct bit position and then ANDs it with a mask to ensure the bits don't spill over into other positions. The A |= B operator expands to A = A | B and merges together all of the individual results. This could also be done using a struct with bitfields, although perhaps with less portability.
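A hedged sketch of how those four ORs assemble such a value, with illustrative inputs (loosely modelled on a flat code segment; in the real GDT code this would be only part of the full 64-bit descriptor):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t base  = 0x00000000; /* example inputs, not from the original code */
    uint32_t limit = 0x000FFFFF;
    uint32_t flag  = 0xC09A;

    uint32_t descriptor;
    descriptor  = limit & 0x000F0000;        /* limit bits 19:16      */
    descriptor |= (flag << 8) & 0x00F0FF00;  /* flags and access byte */
    descriptor |= (base >> 16) & 0x000000FF; /* base bits 23:16       */
    descriptor |= base & 0xFF000000;         /* base bits 31:24       */

    printf("%#010x\n", (unsigned)descriptor); /* prints 0x00cf9a00 */
    return 0;
}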
The bitwise OR operator | (which copies a bit if it is set in either operand) is used here to OR descriptor with the right-hand operand of the assignment and store the result back into descriptor. It is equivalent to
descriptor = descriptor | ((flag << 8) & 0x00F0FF00);
Truth table of the OR operation:
For x = 1 1 0 0 and Y = 1 0 1 0, OR works as follows:
    x = 1 1 0 0
    Y = 1 0 1 0
x | Y = 1 1 1 0
