I am using a C-like script for a Bluegiga chip, and their scripting language does not have the ~ operator in the compiler.
Is there any way to work with bits using pure math?
For example, I read a byte and I need to clear bit 1 and set bit 2.
The following bitwise operations are supported:

Operation               Symbol
AND                     &
OR                      |
XOR                     ^
Shift left              <<
Shift right             >>

The following mathematical operators are supported:

Operation               Symbol
Addition                +
Subtraction             -
Multiplication          *
Division                /
Less than               <
Less than or equal      <=
Greater than            >
Greater than or equal   >=
Equals                  =
Not equals              !=
Just use the OR and AND operations. To do that operation:
initial byte: 0000 0001
clear bit 1:   0000 0001 & 1111 1110 --> result 0000 0000 (the 1st bit of the second operand must be 0 to clear the bit)
now set bit 2: 0000 0000 | 0000 0010 --> result 0000 0010 (the 2nd bit of the second operand must be 1 to set the bit)
Note that with these operations you only change the specific bit; all the other bits keep their value.
Also, you can obtain the second operand as follows:
for the set operation on bit n, the second operand is 2^n
for the clear operation on bit n, the second operand is 1111 1111 XOR 2^n (in this case the XOR with 1111 1111 is used as the NOT operation)
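For example, in C-like syntax (a minimal sketch; BGScript's syntax differs, but the same &, |, ^ operators are available), the example above could be written without ~ like this:

#include <stdio.h>

int main(void)
{
    unsigned int byte = 0x01;   /* initial byte: 0000 0001 */

    byte = byte & 0xFE;         /* clear: 0000 0001 & 1111 1110 -> 0000 0000 */
    byte = byte | 0x02;         /* set:   0000 0000 | 0000 0010 -> 0000 0010 */

    /* The clear mask itself can be built without ~ : 0xFF ^ 0x01 == 0xFE */
    printf("%#04x\n", byte);    /* prints 0x02 */
    return 0;
}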
If you are missing the ~ operator, you can make your own using XOR and a constant.
#include <stdio.h>
int main()
{
    unsigned int s = 0xFFFFFFFF;
    printf("%#x", 0xFF ^ s);            // XOR with the all-ones constant is equivalent to ~
    unsigned int byte = 0x4;
    printf("%#x", 0x5 & (byte ^ s));    // clear those bits
    return 0;
}
When you have ~ it is easy to clear the bits.
Clearing a (single) bit is also equivalent to SET followed by INVERT (i.e. OR followed by XOR). Thus:
aabbccdd <-- original value
00000110 OR
00000010 XOR
--------
aabbc10d <-- result (I'm counting the bits from 7 down to 0)
This approach has the benefit of being scalable from byte to the native integer size without the burden of calculating the mask for AND operation.
Performing an XOR against -1 will invert all the bits in an integer.
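A short C sketch of both tricks (the value 0xA7 and the choice of bit 1 are only examples; the inverted result assumes a 32-bit unsigned int):

#include <stdio.h>

int main(void)
{
    unsigned int x    = 0xA7;      /* 1010 0111 */
    unsigned int mask = 1u << 1;   /* bit 1 */

    /* Clear bit 1 by setting it first (OR), then flipping it (XOR). */
    unsigned int cleared = (x | mask) ^ mask;    /* 1010 0101 = 0xa5 */

    /* XOR against all ones (-1 as unsigned) inverts every bit, like ~ would. */
    unsigned int inverted = x ^ 0xFFFFFFFFu;     /* 0xffffff58 if unsigned int is 32 bits */

    printf("%#x %#x\n", cleared, inverted);
    return 0;
}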
Related
I saw this code called "Counting bits set, Brian Kernighan's way". I am puzzled as to how "bitwise and'ing" an integer with its decrement works to count set bits. Can someone explain this?
unsigned int v; // count the number of bits set in v
unsigned int c; // c accumulates the total bits set in v
for (c = 0; v; c++)
{
v &= v - 1; // clear the least significant bit set
}
Walkthrough
Let's walk through the loop with an example: let's set v = 42, which is 0010 1010 in binary.
First iteration: c=0, v=42 (0010 1010).
Now v-1 is 41 which is 0010 1001 in binary.
Let's compute v & v-1:
  0010 1010
& 0010 1001
  ---------
  0010 1000
Now v&v-1's value is 0010 1000 in binary or 40 in decimal. This value is stored into v.
Second iteration: c=1, v=40 (0010 1000). Now v-1 is 39, which is 0010 0111 in binary. Let's compute v & v-1:
  0010 1000
& 0010 0111
  ---------
  0010 0000
Now v&v-1's value is 0010 0000 which is 32 in decimal. This value is stored into v.
Third iteration: c=2, v=32 (0010 0000). Now v-1 is 31, which is 0001 1111 in binary. Let's compute v & v-1:
  0010 0000
& 0001 1111
  ---------
  0000 0000
Now v&v-1's value is 0.
Fourth iteration: c=3, v=0. The loop terminates. c contains 3, which is the number of bits set in 42.
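Here is the same loop as a complete, runnable C program for v = 42, matching the walkthrough above (a minimal sketch):

#include <stdio.h>

int main(void)
{
    unsigned int v = 42;    /* 0010 1010: three bits set */
    unsigned int c;         /* accumulates the total bits set in v */

    for (c = 0; v; c++)
        v &= v - 1;         /* clear the least significant set bit */

    printf("%u\n", c);      /* prints 3 */
    return 0;
}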
Why it works
You can see that going from v to v-1 turns the least significant set bit (i.e. the rightmost bit that is a 1) from 1 to 0 and turns all the bits to the right of it from 0 to 1.
When you do a bitwise AND between v and v-1, the bits to the left of that bit are the same in v and v-1, so the AND leaves them unchanged. That bit itself and all the bits to the right of it differ between v and v-1, so the resulting bits are 0.
In our original example of v=42 (0010 1010) the least significant set bit is the second bit from the right. You can see that v-1 has the same bits as 42 except the last two: the 0 became a 1 and the 1 became a 0.
Similarly, for v=40 (0010 1000) the least significant set bit is the fourth bit from the right. When computing v-1 (0010 0111) you can see that the left four bits remain unchanged while the right four bits are inverted (zeroes became ones and ones became zeroes).
The effect of v = v & (v-1) is therefore to clear the least significant set bit of v and leave the rest unchanged. When all set bits have been cleared this way, v is 0 and we have counted all of them.
Each time through the loop one bit is counted, and one bit is cleared (set to zero).
How this works: when you subtract one from a number, you change the least significant set bit to a zero, and the even less significant bits to one. That part doesn't matter, though, because those bits are zero in the values you're decrementing, so they will be zero after the AND operation anyway.
XXX1 => XXX0
XX10 => XX01
X100 => X011
etc.
Let A = a_{n-1} a_{n-2} ... a_1 a_0 be the number whose set bits we want to count, and let k be the index of the rightmost bit that is 1.
Hence A = a_{n-1} a_{n-2} ... a_{k+1} 1 00...0 = A_k + 2^k, where A_k = a_{n-1} a_{n-2} ... a_{k+1} 0 00...0.
As 2^k - 1 = 00...0 11...1 (k ones), we have
A - 1 = A_k + 2^k - 1 = a_{n-1} a_{n-2} ... a_{k+1} 0 11...1
Now perform the bitwise AND of A and A-1:

  a_{n-1} a_{n-2} ... a_{k+1} 1 00...0    (A)
& a_{n-1} a_{n-2} ... a_{k+1} 0 11...1    (A-1)
  ------------------------------------
  a_{n-1} a_{n-2} ... a_{k+1} 0 00...0    (A & (A-1) = A_k)

So A & (A-1) is identical to A except that its rightmost set bit has been cleared, which proves the validity of the method.
I don't fully understand how the "-" operator affects the following code:
#define COMP(x) ((x) & -(x))
unsigned short a = 0xA55A;
unsigned short b = 0x0400;
Could someone explain what COMP(a) and COMP(b) are and how they are calculated?
(x) & -(x) is equal to the lowest bit set in x when using 2's complement for representing binary numbers.
This means COMP(a) == 0x0002; and COMP(b) == 0x0400;
the "-" sign negative the value of the short parameter in a two's complement way. (in short, turn all 0 to 1, 1 to 0 and then add 1)
so 0xA55A in binary is 1010 0101 0101 1010
then -(0xA55A) in binary is 0101 1010 1010 0110
ANDing them together gives you 0000 0000 0000 0010 (0x0002).
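If you want to verify this yourself, a minimal sketch in C (the casts to unsigned are only there so the %#x conversion matches its argument):

#include <stdio.h>

#define COMP(x) ((x) & -(x))

int main(void)
{
    unsigned short a = 0xA55A;
    unsigned short b = 0x0400;

    printf("%#x\n", (unsigned)COMP(a));   /* prints 0x2 */
    printf("%#x\n", (unsigned)COMP(b));   /* prints 0x400 */
    return 0;
}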
-(x) negates x. Negation in two's complement is the same as ~x+1 (bitflip+1)
as the code below shows:
#include <stdio.h>
#include <stdint.h>
int prbits(uintmax_t X, int N /* how many bits to print (from the low end) */)
{
    int n = 0;
    uintmax_t shift = (uintmax_t)1 << (N - 1);
    for (; shift; n++, shift >>= 1)
        putchar((X & shift) ? '1' : '0');
    return n;
}
int main()
{
    prbits(0xA55A, 16), puts("");
    //1010010101011010
    prbits(~0xA55A, 16), puts("");
    //0101101010100101
    prbits(~0xA55A + 1, 16), puts("");
    //0101101010100110
    prbits(-0xA55A, 16), puts("");
    //0101101010100110 (same)
}
When you AND a value with its bit-flipped value, you get 0. When you AND a value with its bit-flipped value + 1 (which is its negated value), you get the first nonzero bit from the right.
Why? If the rightmost bit of ~x is 1, adding 1 to it yields 0 with carry = 1. You repeat this while the rightmost bits are 1, zeroing those bits. Once you hit a zero (which is a 1 in x, since you're adding 1 to ~x), it gets turned into 1 with carry = 0, so the addition ends. To the right of that position you have zeros, to the left you have the bit-flips. You AND this with the original and you get the first nonzero bit from the right.
Basically, what COMP does is AND two operands: the value in its original form and the negation of that value.
CPUs typically handle signed numbers using 2's complement. 2's complement splits the range of an n-bit type in two: 2^(n-1) non-negative values (0 to 2^(n-1) - 1) and 2^(n-1) negative values (-2^(n-1) to -1).
The MSB (left-most bit) represents the sign of the numeric data
e.g.
0111 -> +7
0110 -> +6
0000 -> +0
1111 -> -1
1110 -> -2
1100 -> -4
So what COMP does, by ANDing the positive and the negative version of the numeric data, is isolate the lowest set bit (the right-most 1).
I wrote some sample code that can help you understand here:
http://coliru.stacked-crooked.com/a/935c3452b31ba76c
I'm working on a real-time protocol that adds a timestamp to each transmitted packet, and I don't understand what the following lines of code mean. Thanks for the help.
// ts for timestamp
unsigned int ts;
if(ts & 0xffff0000){
// do something
}
Given the fact they're using binary-and (&), the intent seems to be to check if any of the 16 high bits are set.
Binary-and examines the bits at each position in both numbers; if they're both 1, then the result has a 1 bit in that same position. Otherwise the result has a 0 in that position:
0b 001001001001001001001001001001 (first number, usually a variable)
0b 010101010101010101010101010101 (second number, usually a "mask")
=================================
0b 000001000001000001000001000001 (result)
If this is used as the condition of an if-block, such as if (x & mask), then the if-block is entered if x has any of the same bits as mask set. For 0xFFFF0000, the block will be entered if any of the high 16 bits are set.
That is effectively the same as if (ts > 65535) (assuming unsigned int is no wider than 32 bits), but apparently the intent is to deal with bits rather than with the actual value.
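A small check of that equivalence (a sketch assuming unsigned int is exactly 32 bits; the sample values are arbitrary):

#include <stdio.h>

int main(void)
{
    unsigned int samples[] = { 0u, 1u, 65535u, 65536u, 0xA5A50000u };
    size_t i;

    for (i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        unsigned int ts = samples[i];
        /* Both conditions agree for every 32-bit value of ts. */
        printf("ts=%#010x  (ts & 0xffff0000) != 0: %d   ts > 65535: %d\n",
               ts, (ts & 0xffff0000u) != 0, ts > 65535u);
    }
    return 0;
}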
0xffff0000 serves as a bit mask here.
ts & 0xffff0000 is true as a condition when some bit in the upper 16 bits of ts is 1. Put another way, when ts >= 2^16.
This if statement checks whether any of the upper 16 bits of ts is high. If so, the body is executed.
The body is executed only if ts >= 0x00010000.
An intuitive way to understand this.
  **** **** **** ****     //the upper 16 bits of ts
& 1111 1111 1111 1111     //the upper 16 bits of 0xffff0000
If any of the upper 16 bits of ts is set, then the result above won't be zero.
If they are all 0, the result above will be 0000 0000 0000 0000.
For the lower 16 bits of ts, no matter what their values are, the result of the binary-and will be 0:
**** **** **** ****
& 0000 0000 0000 0000
=0000 0000 0000 0000
So if the upper 16 bits of ts contain at least one 1 bit, then ts & 0xffff0000 > 0 (which means ts >= 0b1 0000 0000 0000 0000, i.e. 2^16); otherwise ts & 0xffff0000 == 0.
Similarly, ts & 1 is commonly used to test whether ts is an odd number.
unsigned long set;
/* set has already been modified at this point */
set >>= 1;
I found this in a kernel system call, but I don't understand how it works.
The expression set >>= 1; means set = set >> 1;, that is, shift the bits of set right by 1 (this is the compound-assignment form of the >> bitwise right-shift operator; see Bitwise Shift Operators).
Suppose if set is:
BIT NUMBER 31 n=27 m=17 0
▼ ▼ ▼ ▼
set = 0000 1111 1111 1110 0000 0000 0000 0000
Then after set >>= 1; the variable set becomes:
BIT NUMBER 31 n=26 m=16 0
▼ ▼ ▼ ▼
set = 0000 0111 1111 1111 0000 0000 0000 0000
Notice how the bit positions have shifted.
Note an interesting point: because set is unsigned long, this >> operation is a logical (unsigned) shift; a logical shift does not preserve a number's sign bit.
Additionally, because you are shifting all bits to the right (towards the less significant end), one right shift divides the number by two.
Check this code (just to demonstrate the last point):

#include <stdio.h>

int main(){
    unsigned long set = 268304384UL;
    set >>= 1;
    printf(" set :%lu \n", set);
    set = 268304384UL;
    set /= 2;
    printf(" set :%lu \n", set);
    return 0;
}
And output:
set :134152192
set :134152192
(Note: this doesn't mean that >> and / are the same thing in general.)
Similarly, you have the <<= operator for left shift. Check the other available bitwise operators and compound assignment operators, and also see the difference between a signed/arithmetic shift and an unsigned/logical shift.
This "right-shift"s the value by one bit. If you move all the bits of an integer to the right by 1 then you effectively "divide by 2" because binary is a base-2 numbering system.
Imagine you have the number 12 in binary:
1100 = 12 in binary
110 = 6 in binary (1100 right-shifted)
Just like if you moved all of the digits in a base-10 number right by one you would be dividing by 10.
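The same point in code (a trivial sketch):

#include <stdio.h>

int main(void)
{
    unsigned int x = 12;       /* 1100 in binary */
    printf("%u\n", x >> 1);    /* prints 6: 110 in binary */
    printf("%u\n", x >> 2);    /* prints 3: shifting by 2 divides by 4 */
    return 0;
}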
Every binary operator can be combined with =. In all cases
dest op= expression
is equivalent to
dest = dest op expression
(except if dest has any side effects, they only take place once).
So this means that
set>>=1;
is equivalent to:
set = set >> 1;
Since >> is the binary right-shift operator, it means to shift the value in set right by 1 bit.
This shifts the bits to the right by 1, which is equivalent to dividing by 2. For more information on bit shifting, refer to http://msdn.microsoft.com/en-us/library/f96c63ed(v=vs.80).aspx
The above statement performs a right shift by one bit. Refer to bitwise operations in C at this link: http://www.cprogramming.com/tutorial/bitwise_operators.html
So I was messing around with Bit-Twiddling in C, and I came across an interesting output:
#include <stdio.h>

int main()
{
    int a = 0x00FF00FF;
    int b = 0xFFFF0000;
    int res = (~b & a);
    printf("%.8X\n", (res << 8) | (b >> 24));
}
And the output from this statement is:
FFFFFFFF
I expected the output to be
0000FFFF
But why wasn't it? Am I missing something with bit-shifting here?
TLDR: Your integer b is negative, so when you shift it right the sign bit (which is 1) is copied into the vacated positions. Therefore when you shift b right by 24 places you end up with 0xFFFFFFFF.
Longer explanation:
Assuming that on your platform int is 32 bits and a signed integer is represented in 2's complement, the value 0xFFFF0000 stored into a signed integer variable ends up as a negative number. (If int were wider than 32 bits, 0xFFFF0000 would simply fit as a positive value and b would not be negative.)
Shifting a negative number right is implementation defined by the standard (C99 / N1256, section 6.5.7.5):
The result of E1 >> E2 is E1 right-shifted E2 bit positions. [...] If E1
has a signed type and a negative value, the resulting value is
implementation defined.
That means a particular compiler can choose what happens in a particular situation, but it should be noted in the compiler manual what the effect is.
There tend to be two sets of shift instructions in many processors, a logical shift and an arithmetic shift. The logical shift right will shift bits and fill the exposed bits with zeros. Arithmetic shifts right (assuming 2's complement again) will fill the exposed bits with the same bit value of the most significant bit so that it ends up with a result that is consistent with using shifts as a divide by 2. (For example, -4 >> 1 == 0xFFFFFFFC >> 1 == 0xFFFFFFFE == -2.)
In your case it appears that the compiler implementor has chosen to use arithmetic shifts when applied to signed integers and so the result of shifting a negative value to the right remains a negative value. In terms of bit patterns 0xFFFF0000 >> 24 gives 0xFFFFFFFF.
Unless you are absolutely sure of what you are doing it is best to perform bitwise operations only on unsigned types as their internal representation can safety be treated as a collection of bits. You probably also want to make sure any numeric values you use in that case are unsigned by appending the unsigned suffix to your number.
Right-shifting negative values (like b) can be defined in two different ways: logical shift, which pads the value with zeroes on the left (which yields a positive number when shifting a nonzero amount), and arithmetic shift, which pads the value with ones (always yielding a negative number). Which definition is used in C is implementation-defined, and your compiler apparently uses arithmetic shift, so b >> 24 is 0xFFFFFFFF.
b >> 24 gives 0xFFFFFFFF because the right shift of a negative signed number pads with the sign bit.
List = (res << 8) | (b >> 24)
a = 0x00FF00FF = 0000 0000 1111 1111 0000 0000 1111 1111
b = 0xFFFF0000 = 1111 1111 1111 1111 0000 0000 0000 0000
~b = 0x0000FFFF = 0000 0000 0000 0000 1111 1111 1111 1111
~b & a = 0x000000FF = 0000 0000 0000 0000 0000 0000 1111 1111, = res
res << 8 = 0x0000FF00 = 0000 0000 0000 0000 1111 1111 0000 0000
b >> 24 = 0xFFFFFFFF = 1111 1111 1111 1111 1111 1111 1111 1111
List = 0xFFFFFFFF = 1111 1111 1111 1111 1111 1111 1111 1111
The golden rule: Never ever mix signed numbers with bitwise operators.
Change all ints to unsigned ints. Just as a precaution, change all literals to unsigned too.
#include <stdint.h>
uint32_t a = 0x00FF00FFu;
uint32_t b = 0xFFFF0000u;
uint32_t res = (~b & a);
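For completeness, here is a runnable version of that fix (a sketch; with unsigned arithmetic the right shift is a logical shift, so the program prints the expected 0000FFFF):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint32_t a = 0x00FF00FFu;
    uint32_t b = 0xFFFF0000u;
    uint32_t res = (~b & a);

    /* b >> 24 is now 0x000000FF, not 0xFFFFFFFF. */
    printf("%.8" PRIX32 "\n", (uint32_t)((res << 8) | (b >> 24)));   /* prints 0000FFFF */
    return 0;
}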