What is the purpose of "ts & 0xffff0000"? - c

I'm working on a real-time protocol that adds a timestamp to each transmitted packet, and I don't understand what the following lines of code mean. Thanks for the help.
// ts for timestamp
unsigned int ts;
if (ts & 0xffff0000) {
    // do something
}

Given that they're using binary AND (&), the intent seems to be to check whether any of the 16 high bits are set.
Binary AND examines the bits at each position in both numbers; if both are 1, the result has a 1 bit in that same position. Otherwise the result has a 0 in that position:
0b 001001001001001001001001001001 (first number, usually a variable)
0b 010101010101010101010101010101 (second number, usually a "mask")
=================================
0b 000001000001000001000001000001 (result)
If this is used as the condition of an if-block, such as if (x & mask), then the if-block is entered if x has any of the same bits as mask set. For 0xFFFF0000, the block will be entered if any of the high 16 bits are set.
That is effectively the same as if (ts > 65535) (if int is 32 bits or less), but apparently the intent is to deal with bits rather than the actual value.
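For concreteness, here is a minimal, self-contained sketch (the timestamp values are made up, not from the asker's protocol) showing when the condition fires:

#include <stdio.h>

int main(void)
{
    unsigned int small_ts = 0x00001234;  /* no bit above 15 is set */
    unsigned int large_ts = 0x00010000;  /* bit 16 set, i.e. 65536 */

    if (small_ts & 0xffff0000)
        printf("small_ts has a high bit set\n");  /* never printed */
    if (large_ts & 0xffff0000)
        printf("large_ts has a high bit set\n");  /* printed */

    return 0;
}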

0xffff0000 serves as a bit mask here.
ts & 0xffff0000 is true as a condition when some bit in the upper 16 bits of ts is 1. Put another way, when ts >= 2^16.

This if statement checks whether any of the upper 16 bits of ts is set. If so, the body is executed.
The body is executed only if ts >= 0x00010000.

An intuitive way to understand this:
**** **** **** ****   // the upper 16 bits of ts
& 1111 1111 1111 1111   // the upper 16 bits of 0xffff0000
If any of the upper 16 bits of ts is set, the result above won't be zero.
If they are all 0, the result above will be 0000 0000 0000 0000.
For the lower 16 bits of ts, no matter what those bits are, the result of the binary AND will be 0, because the lower 16 bits of the mask are all 0:
**** **** **** ****
& 0000 0000 0000 0000
= 0000 0000 0000 0000
So if the upper 16 bits of ts contain at least one 1 bit, then ts & 0xffff0000 > 0 (which means ts >= 0b1 0000 0000 0000 0000, i.e. 2^16); otherwise ts & 0xffff0000 == 0.
Similarly, ts & 1 is often used to test whether ts is an odd number.
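As a quick sketch of that parity test (the value of ts here is arbitrary, just for illustration):

#include <stdio.h>

int main(void)
{
    unsigned int ts = 7;  /* arbitrary example value */
    printf("%u is %s\n", ts, (ts & 1) ? "odd" : "even");  /* prints: 7 is odd */
    return 0;
}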

Related

Can someone explain why this works to count set bits in an unsigned integer?

I saw this code called "Counting bits set, Brian Kernighan's way". I am puzzled as to how bitwise-AND'ing an integer with its decrement works to count set bits. Can someone explain this?
unsigned int v; // count the number of bits set in v
unsigned int c; // c accumulates the total bits set in v
for (c = 0; v; c++)
{
    v &= v - 1; // clear the least significant bit set
}
Walkthrough
Let's walk through the loop with an example: let's set v = 42, which is 0010 1010 in binary.
First iteration: c=0, v=42 (0010 1010).
Now v-1 is 41 which is 0010 1001 in binary.
Let's compute v & v-1:
0010 1010
& 0010 1001
.........
0010 1000
Now v&v-1's value is 0010 1000 in binary or 40 in decimal. This value is stored into v.
Second iteration: c=1, v=40 (0010 1000). Now v-1 is 39, which is 0010 0111 in binary. Let's compute v & v-1:
0010 1000
& 0010 0111
.........
0010 0000
Now v&v-1's value is 0010 0000 which is 32 in decimal. This value is stored into v.
Third iteration: c=2, v=32 (0010 0000). Now v-1 is 31, which is 0001 1111 in binary. Let's compute v & v-1:
0010 0000
& 0001 1111
.........
0000 0000
Now v&v-1's value is 0.
Fourth iteration: c=3, v=0. The loop terminates. c contains 3, which is the number of bits set in 42.
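Putting the loop into a small, self-contained program (a sketch; the wrapper name popcount_kernighan is mine, not from the original snippet) reproduces the walkthrough:

#include <stdio.h>

/* Count set bits by repeatedly clearing the lowest set bit (Kernighan's method). */
static unsigned int popcount_kernighan(unsigned int v)
{
    unsigned int c;
    for (c = 0; v; c++)
    {
        v &= v - 1;  /* clear the least significant bit set */
    }
    return c;
}

int main(void)
{
    printf("%u\n", popcount_kernighan(42));  /* prints 3, matching the walkthrough */
    return 0;
}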
Why it works
You can see that, in the binary representation of v-1, the lowest set bit of v (i.e. the rightmost bit that is a 1) has changed from 1 to 0 and all the bits to the right of it from 0 to 1.
When you do a bitwise AND between v and v-1, the bits to the left of the lowest set bit are the same in v and v-1, so the bitwise AND leaves them unchanged. The lowest set bit and all bits to the right of it differ between the two values, so the resulting bits there are 0.
In our original example of v=42 (0010 1010) the lowest set bit is the second bit from the right. You can see that v-1 has the same bits as 42 except the last two: the 0 became a 1 and the 1 became a 0.
Similarly, for v=40 (0010 1000) the lowest set bit is the fourth bit from the right. When computing v-1 (0010 0111) you can see that the left four bits remain unchanged while the right four bits become inverted (zeroes become ones and ones become zeroes).
The effect of v = v & (v-1) is therefore to clear the lowest set bit of v and leave the rest unchanged. When all bits have been cleared this way, v is 0 and we have counted all the set bits.
Each time through the loop one bit is counted, and one bit is cleared (set to zero).
How this works is: when you subtract one from a number you change the lowest set bit to a zero and all the less significant bits to one -- though that part doesn't matter, because those bits are zero in the value you're decrementing, so they will be zero after the AND operation anyway.
XXX1 => XXX0
XX10 => XX01
X100 => X011
etc.
Let A = a_{n-1} a_{n-2} ... a_1 a_0 be the number whose set bits we want to count, and let k be the index of the rightmost bit that is 1.
Hence A = a_{n-1} a_{n-2} ... a_{k+1} 1 0...0 = A_k + 2^k, where A_k = a_{n-1} a_{n-2} ... a_{k+1} 0 0...0.
As 2^k - 1 = 0...0 1 1...1 (k ones), we have
A - 1 = A_k + 2^k - 1 = a_{n-1} a_{n-2} ... a_{k+1} 0 1...1
Now perform the bitwise AND of A and A-1:
  a_{n-1} a_{n-2} ... a_{k+1} 1 0...0   (A)
& a_{n-1} a_{n-2} ... a_{k+1} 0 1...1   (A-1)
= a_{n-1} a_{n-2} ... a_{k+1} 0 0...0   (A & (A-1) = A_k)
So A & (A-1) is identical to A, except that its rightmost set bit has been cleared, which proves the validity of the method.
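A tiny sketch (the value of A is arbitrary) confirming that A & (A-1) clears only the rightmost set bit:

#include <stdio.h>

int main(void)
{
    unsigned int A = 0xB8;         /* 1011 1000: rightmost set bit is bit 3 */
    printf("%#x\n", A & (A - 1));  /* prints 0xb0, i.e. 1011 0000 */
    return 0;
}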

A question about C operations: return 1 when all bits of byte i of x equal 1; 0 otherwise

You are asked to complete the following C function:
/* Return 1 when all bits of byte i of x equal 1; 0 otherwise. */
int allBits_ofByte_i(unsigned x, int i) {
return _____________________ ;
}
My solution: !!(x&(0xFF << (i<<3)))
The correct answer to this question is:
!~(~0xFF | (x >> (i << 3)))
Can someone explain it?
Also, can someone take a look at my answer, is it right?
The expression !~(~0xFF | (x >> (i << 3))) is evaluated as follows.
i << 3 multiplies i by 8 to get a number of bits, which will be 0, 8, 16, or 24, depending on which byte the caller wants to test. This is actually the number of bits to ignore, as it is the number of bits that are less significant than the byte we're interested in.
(x >> ...) shifts the test value right to eliminate the low bits that we're not interested in. The 8 bits of interest are now the lowest 8 bits in the unsigned value we're evaluating. Note that other higher bits may or may not be set.
(~0xFF | ...) sets all 24 bits above the 8 we're interested in, but does not alter those 8 bits. (~0xFF is a shorthand for 0xFFFFFF00, and yes, arguably 0xFFu should be used).
~(...) flips all bits. This will result in a value of zero if every bit was set, and a non-zero value in every other case.
!(...) logically negates the result. This will result in a value of 1 only if every bit was set during step 3. In other words, every bit in the 8 bits we were interested in was set. (The other 24 bits were set in step 3.)
The algorithm can be summed up as, set the 24 bits we're not interested in, then verify that 32 bits are set.
Your answer took a slightly different approach, which was to shift the 0xFF mask left rather than shift the test value right. That was my first thought for how to approach the problem too! But your logical negation doesn't verify that every bit is set, which is why your answer wouldn't produce correct results in all cases.
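To make that concrete, here is a small test sketch (the wrapper anyBit_ofByte_i is my name for the attempted solution, not from the question) comparing the two expressions; a byte that is only partially set exposes the difference:

#include <stdio.h>

/* Accepted answer: 1 only if every bit of byte i of x is 1. */
static int allBits_ofByte_i(unsigned x, int i) {
    return !~(~0xFF | (x >> (i << 3)));
}

/* The asker's attempt: actually returns 1 if ANY bit of byte i of x is 1. */
static int anyBit_ofByte_i(unsigned x, int i) {
    return !!(x & (0xFFu << (i << 3)));
}

int main(void)
{
    /* Byte 1 of 0x12FF34 is 0xFF: both report 1. */
    printf("%d %d\n", allBits_ofByte_i(0x12FF34u, 1), anyBit_ofByte_i(0x12FF34u, 1));  /* 1 1 */
    /* Byte 1 of 0x120F34 is 0x0F: only the attempted version still reports 1. */
    printf("%d %d\n", allBits_ofByte_i(0x120F34u, 1), anyBit_ofByte_i(0x120F34u, 1));  /* 0 1 */
    return 0;
}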
x is of unsigned integer type. Let's say that x is (often) 32 bits.
One byte consists of 8 bits, so x has 4 bytes in this case, numbered 0, 1, 2 or 3.
The byte numbering used by the solution (counting from the least significant byte) can be pictured as follows:
x => bbbbbbbb bbbbbbbb bbbbbbbb bbbbbbbb
i =>     3        2        1        0
I will try to break it down:
!~ ( ~0xFF | ( x >> (i << 3) ) )
i can be either 0, 1, 2 or 3. So i << 3 would give you 0, 8, 16 or 24 (i << n is like multiplying by 2^n; it shifts i to the left by n positions, filling with 0s).
Note that 0, 8, 16 and 24 are the byte segments: 0-7, 8-15, 16-23, 24-31
This is used to move the byte of interest into place:
x >> (i << 3) shifts x to the right by that amount (0, 8, 16 or 24 bits), so that the byte denoted by the i parameter now occupies the rightmost 8 bits.
Until now you manipulated x so that the byte you are interested in is located on the right most 8 bits (the right most byte).
~0xFF is the inversion of 0000 0000 0000 0000 0000 0000 1111 1111 which gives you 1111 1111 1111 1111 1111 1111 0000 0000
The bitwise or operator is applied to the two results above, which would result in
1111 1111 1111 1111 1111 1111 abcd efgh - the letters being the bits of the corresponding byte of x.
~1111 1111 1111 1111 1111 1111 abcd efgh will turn into 0000 0000 0000 0000 0000 0000 ABCD EFGH - the capital letters being the inverse of the lower letters' values.
!0000 0000 0000 0000 0000 0000 ABCD EFGH is a logical operation. !n is 1 if n is 0, and 0 otherwise.
So you get a 1 if all the inverted bits of the corresponding byte were 0000 0000 (i.e. the byte is 1111 1111).
Otherwise you get a 0.
In the C programming language a result of 0 corresponds to a boolean false value. And a result different than 0 corresponds to a boolean true value.

How to set and clear bits without ~ operator

I am using a C-like script for a Bluegiga chip, and their scripting language does not have the ~ operator in the compiler.
Is there any way to work with bits using pure math?
For example, I read a byte and I need to clear bit 1 and set bit 2.
The following bitwise operations are supported:
Operation Symbol
AND &
OR |
XOR ^
Shift left <<
Shift right >>
The following mathematical operators are supported:
Operation Symbol
Addition: +
Subtraction: -
Multiplication: *
Division: /
Less than: <
Less than or equal: <=
Greater than: >
Greater than or equal: >=
Equals: =
Not equals: !=
Just use OR and AND operations. To do that operation:
initial byte:   0000 0001
clear bit 1:    0000 0001 & 1111 1110 --> result 0000 0000 (the 1st bit of the second operand must be 0 to clear the bit)
now set bit 2:  0000 0000 | 0000 0010 --> result 0000 0010 (the 2nd bit of the second operand must be 1 to set the bit)
Note that these operations only change the specific bit; all the other bits keep their value.
Also, you can obtain the second operand as follows:
for the set operation on bit n, the second operand is 2^n
for the clear operation on bit n, the second operand is 1111 1111 XOR 2^n (here the XOR with 1111 1111 plays the role of the NOT operation).
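As a sketch in plain C restricted to the operators listed above (the helper names set_bit and clear_bit are mine), the recipe could look like this:

#include <stdio.h>

/* Set bit n (counting from 0) using only OR and shift. */
static unsigned char set_bit(unsigned char value, int n) {
    return value | (1 << n);
}

/* Clear bit n without ~: build the mask as all-ones XOR the single bit. */
static unsigned char clear_bit(unsigned char value, int n) {
    return value & (0xFF ^ (1 << n));
}

int main(void)
{
    unsigned char b = 0x01;   /* initial byte: 0000 0001 */
    b = clear_bit(b, 0);      /* clear the lowest bit -> 0000 0000 */
    b = set_bit(b, 1);        /* set the next bit     -> 0000 0010 */
    printf("%#x\n", b);       /* prints 0x2 */
    return 0;
}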
If you are missing the ~ operator, you can make your own using XOR and a constant.
#include <stdio.h>

int main()
{
    unsigned int s = 0xFFFFFFFF;
    printf("%#x\n", 0xFF ^ s);          // XOR with the all-ones constant is equivalent to ~0xFF
    unsigned int byte = 0x4;
    printf("%#x\n", 0x5 & (byte ^ s));  // clear the bits of 0x5 that are set in byte
    return 0;
}
When you have ~ it is easy to clear the bits.
Clearing a (single) bit is also equivalent to SET followed by INVERT (i.e. OR then XOR). Thus:
aabbccdd <-- original value
00000110 OR
00000010 XOR
--------
aabbc10d <-- result (I'm counting the bits from 7 down to 0)
This approach has the benefit of being scalable from a byte to the native integer size without the burden of calculating the mask for the AND operation.
Performing an XOR against -1 will invert all the bits of an integer.

Logic of a bit masking XOR code

I have some code that combines two hex values and then stores the result in a new unsigned char. The code looks like the following:
unsigned char OldSw = 0x1D;
unsigned char NewSw = 0xF0;
unsigned char ChangedSw;
ChangedSw = (OldSw ^ ~NewSw) & ~OldSw;
So what I know is:
0x1D = 0001 1101
0xF0 = 1111 0000
I'm confused about what the ChangedSw line is doing. I know it will give the output 0x02 but I cannot figure out how it's doing it.
ChangedSw = (OldSw ^ ~NewSw) & ~OldSw;
It means "zero one part of OldSw and inverse other part". NewSw indicates what bits of OldSw to zero and what bits to inverse. Namely, 1's in NewSw indicate bits to be zeroed, 0's indicate bits to be inverted.
This operation implemented in two steps.
Step 1. Invert bits.
(OldSw ^ ~NewSw):
0001 1101
^ 0000 1111
---------
0001 0010
See, we inverted bits which were 0's in original NewSw.
Step 2. Zero bits which were not inverted in previous step.
& ~OldSw:
0001 0010
& 1110 0010
---------
0000 0010
See, it doesn't change the inverted bits, but it zeroes all the rest.
The first part, (OldSw ^ ~NewSw), gives 0x12, i.e. 0001 0010. So when ANDed with ~OldSw (1110 0010),
the operation will be something like this:
0001 0010
1110 0010
----------
0000 0010
So the output will be 2. The tilde operator (~) is the one's complement (bitwise NOT).
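A minimal sketch (same values as in the question) to verify the result:

#include <stdio.h>

int main(void)
{
    unsigned char OldSw = 0x1D;  /* 0001 1101 */
    unsigned char NewSw = 0xF0;  /* 1111 0000 */
    unsigned char ChangedSw = (OldSw ^ ~NewSw) & ~OldSw;
    printf("%#x\n", ChangedSw);  /* prints 0x2 */
    return 0;
}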

Flags in C: set/clear/toggle

I am confused as to what the following code does. I understand that Line 1 sets a flag, Line 2 clears a flag, and Line 3 toggles a flag:
#include <stdio.h>

#define SCC_150_A 0x01
#define SCC_150_B 0x02
#define SCC_150_C 0x04

unsigned int flags = 0;

int main() {
    flags |= SCC_150_A;            // Line 1
    flags &= ~SCC_150_B;           // Line 2
    flags ^= SCC_150_C;            // Line 3
    printf("Result: %d\n", flags); // Line 4
}
What I don't understand is what the output of Line 4 would be? What is the effect of setting/clearing/toggling the flags on 0x01 0x02 and 0x04?
The macros define constants that each require a single bit to be represented:
macro hex binary
======================
SCC_150_A 0x01 001
SCC_150_B 0x02 010
SCC_150_C 0x04 100
Initially flags is 0.
Then it has:
Bit 0 set by the bitwise OR.
Bit 1 cleared by the bitwise AND with the inverse of SCC_150_B.
Bit 2 toggled (turning it from 0 to 1).
The final result is thus 101 in binary, or 5 in decimal.
First of all, I'm going to use binary numbers, because it's easier to explain with them. In the end it's the same with hexadecimal numbers. Also note that I shortened the variable to unsigned char to have a shorter value to write down (8 bits vs. 32 bits). The end result is similar, just without the leading digits.
Let's start with the values:
0x01 = 0000 0001
0x02 = 0000 0010
0x04 = 0000 0100
So after replacing the constant/macro, the first line would essentially be this:
flags |= 0000 0001
This performs a bitwise or operation, a bit in the result is 1, if any of the input values is 1 at that position. Due to the initial value of flags being 0, this will work just like an assignment or addition (which it won't in general, keep that in mind).
flags: 0000 0000
op: 0000 0001
----------------
or: 0000 0001
The result is flags being set to 0000 0001.
flags &= ~0000 0010
Here we've got two operations. First there's ~, the bitwise complement operator, which essentially flips all bits of the value; therefore 0000 0010 becomes 1111 1101 (0xfd in hex). Then the bitwise AND operator is applied, where a result bit is set to 1 only if both input values are 1 at that position as well. As you can see, this essentially causes the second bit from the right to be set to 0 without touching any other bit.
flags: 0000 0001
op: 1111 1101
----------------
and: 0000 0001
Due to this, the result of this operation is 0000 0001 (0x01 in hex).
flags ^= 0000 0100
The last operation is the bitwise exclusive or (xor), which will set a bit to 1 only if the input bits don't match (i.e. they're different). This leads to the simple behavior of toggling the bits set in the operands.
flags: 0000 0001
op: 0000 0100
----------------
xor: 0000 0101
In this case the result will be 0000 0101 (0x05 in hex).
For clarification on the last operation, because I think xor might be the hardest to understand here, let's toggle it back:
flags: 0000 0101
op: 0000 0100
----------------
xor: 0000 0001
As you can see, the third bit from the right is equal in both inputs, so the result will be 0 rather than 1.
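A short sketch (not from the original question) showing that applying the same XOR twice restores the value:

#include <stdio.h>

#define SCC_150_C 0x04

int main(void)
{
    unsigned int flags = 0x05;  /* the result after Lines 1-3 */
    flags ^= SCC_150_C;         /* toggle bit 2 off again */
    printf("%#x\n", flags);     /* prints 0x1 */
    return 0;
}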
