Why is ~0xF equal to 0xFFFFFFF0 on a 32-bit machine? - c

Why is ~0xF equal to 0xFFFFFFF0?
Also, how is ~0xF && 0x01 = 1? Maybe I don't get 0x01 either.

Question 1
Why is ~0xF equal to 0xFFFFFFF0?
First, this result means you ran this on a machine where int is 32 bits wide. That means 0xF is actually 0x0000000F in hexadecimal,
and that means 0xF is
0000 0000 0000 0000 0000 0000 0000 1111 in binary representation.
The ~ operator means the NOT operation. It changes every 0 to 1 and every 1 to 0 in the binary representation. That would make ~0xF:
1111 1111 1111 1111 1111 1111 1111 0000 in binary representation.
And that is actually 0xFFFFFFF0.
Note that if you do this on a machine where int is 16 bits, the answer for ~0xF would be 0xFFF0.
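If you want to check this yourself, here is a minimal sketch (assuming a platform where int is 32 bits, as in the question):

#include <stdio.h>

int main(void)
{
    /* 0xF has type int; with a 32-bit int, ~ flips all 32 bits */
    printf("%X\n", (unsigned)~0xF);   /* prints FFFFFFF0 */
    return 0;
}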
Question 2
You wrote the wrong statement; it should be 0xF & 0x1. Note that 0x1, 0x01, 0x001, and 0x0001 are all the same. So let's change these hexadecimal numbers to binary representation:
0xF would be:
0000 0000 0000 0000 0000 0000 0000 1111
and 0x1 would be:
0000 0000 0000 0000 0000 0000 0000 0001
The & operation follows the following rules:
0 & 0 = 0
0 & 1 = 0
1 & 0 = 0
1 & 1 = 1
So doing that to every bit, you get the result:
0000 0000 0000 0000 0000 0000 0000 0001
which is actually 0x1.
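To double-check, here is a one-file sketch of that bitwise AND:

#include <stdio.h>

int main(void)
{
    /* ...1111 & ...0001 keeps only the lowest bit */
    printf("%X\n", 0xF & 0x1);   /* prints 1 */
    return 0;
}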
Additional
| means bitwise OR operation. It follows:
0 | 0 = 0
0 | 1 = 1
1 | 0 = 1
1 | 1 = 1
^ means bitwise XOR operation. It follows:
0 ^ 0 = 0
0 ^ 1 = 1
1 ^ 0 = 1
1 ^ 1 = 0
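For example, here is a small sketch applying &, |, and ^ to the same pair of values (0xF and 0x9 are my own example values, not from the question):

#include <stdio.h>

int main(void)
{
    unsigned a = 0xF, b = 0x9;        /* 1111 and 1001 */
    printf("a & b = %X\n", a & b);    /* 1001 -> 9 */
    printf("a | b = %X\n", a | b);    /* 1111 -> F */
    printf("a ^ b = %X\n", a ^ b);    /* 0110 -> 6 */
    return 0;
}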

If you store 0xF in a 32-bit "int", all 32 bits flip, so ~0xF = 0xFFFFFFF0.
See this page, which gives a good explanation:
http://teaching.idallen.com/cst8214/08w/notes/bit_operations.txt

You're complementing 0xF, which flips all of the bits to their inverse. So for example, with 8 bits you have: 0xF = 00001111. If you complement that, it turns into 11110000.
Since you're using 32 bits, the leading 1s just extend all the way out: 1111 .... 0000
For your second question, you're using a logical AND, not a bitwise AND. Those two behave entirely differently.

It sounds like your confusion is that you believe 0xF is the same as 0b1111111111111111. It is not, it is 0b0000000000001111.

~0xF inverts all its bits, going
from 0x0000000F = 00000000000000000000000000001111 (32 bits)
to 0xFFFFFFF0 = 11111111111111111111111111110000 (32 bits)
a && b is 1 if both a and b are non-zero, and ~0xF and 0x01 are both non-zero.

In C, ~0xF can never be equal to 0xFFFFFFF0. The former is a negative number (in any of the three signed representations C allows) and the latter is a positive number. However, if both are converted to a 32-bit unsigned type on a twos-complement implementation, the converted values will be equal.
As for ~0xF && 0x01, the && operator is logical and, not bitwise and.
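To see the difference, here is a small sketch contrasting the two operators on the same operands:

#include <stdio.h>

int main(void)
{
    /* logical AND: both operands are non-zero, so the result is 1 */
    printf("%d\n", ~0xF && 0x01);   /* prints 1 */

    /* bitwise AND: ...11110000 & ...00000001 share no set bit */
    printf("%d\n", ~0xF & 0x01);    /* prints 0 */
    return 0;
}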

Related

What does the bit_test() function do?

I'm reading Programming in C, 4th edn by Stephen Kochan.
Exercise: Write a function called bit_test() that takes two arguments: an unsigned int and a bit number n. Have the function return 1 if bit number n is on inside the word, and 0 if it is off. Assume that bit number 0 references the leftmost bit inside the integer. Also write a corresponding function called bit_set() that takes two arguments: an unsigned int and a bit number n. Have the function return the result of turning bit n on inside the integer.
This is one of the exercise's answers on their forum.
12-5
-----
/* test bit n in word to see if it is on;
   assumes words are 32 bits long */
int bit_test (unsigned int word, int n)
{
    if ( n < 0 || n > 31 )
        return 0;
    if ( (word >> (31 - n)) & 0x1 )
        return 1;
    else
        return 0;
}

unsigned int bit_set (unsigned int word, int n)
{
    if ( n < 0 || n > 31 )
        return 0;
    return word | (1u << (31 - n));   /* 1u: shifting a signed 1 into bit 31 is undefined */
}
Now I tried to understand it, and as per my understanding it always returns 0. What does this function actually do?
It just checks whether a bit is set or not.
It assumes that unsigned int is stored in 32 bits on that particular system.
Why the check?
The check is needed to make it safe (we never shift by a negative amount or by more than 31): the first would be an error (undefined behavior), and the second would be useless, so the function just returns 0 for an out-of-range bit number.
What does it really do in (word >> (31 - n)) & 0x1?

x x y x x x x x x
0 1 2 3 4 5 6 7 8
      |---------|
        8-2=6

(Here I considered 9-bit words instead of 32; in your case it will be 31-3=28.)
So right-shift it by 6 bits:

0 0 0 0 0 0 x x y
Now how do you check whether it is set or not?

  0 0 0 0 0 0 x x y
& 0 0 0 0 0 0 0 0 1
  -----------------
  0 0 0 0 0 0 0 0 y

If that bit is set, the result will be 1; else it will be 0.
What does bit_set do?
It returns the word with the nth bit set.
So if you input
0001 0010 1
and the bit to set is 0 (you want to set the bit at position 0), then you will get
1001 0010 1
return word | (1u << (31 - n));

Let the word be 0001 1001 1.
You want to set bit 2 [0-indexing]:

  0001 1001 1
| 0010 0000 0
  -----------
  0011 1001 1

You have to apply a bitwise OR with that mask value.
How do you get that value? Here we just want this number:

0010 0000 0
   |------|
   6 shifts needed (left shift)

1 << (8-2) ---> is how you get it.
Or in your case, 1u << (31 - n).
Now I see what you are getting wrong:
You considered 25, which is

0000 0000 0000 0000 0000 0000 0001 1001

The bit in the 3rd (0-indexed) position from the left is this:

000[0] 0000 0000 0000 0000 0000 0001 1001

This bit is unset, i.e. 0.
Try the 28th position of the number 25 and you will get 1 as the answer.
The problem statement has us identify the leftmost, or highest order bit as n = 0, and the rightmost, or lowest order bit as n = 31.
The bit_test() function shifts the test bit to the lowest order position and does a bitwise AND to find if the test bit was set. For example, to test if the bit n = 0 is set for the bit pattern:
1111 1111 1111 1111 1111 1111 1111 1111
there is a shift to the right (word >> (31 - 0)):
0000 0000 0000 0000 0000 0000 0000 0001
then the bitwise AND with 0x1 evaluates to 1, indicating that the n = 0 bit was set.
The bit_set() function shifts a bit-pattern with only the lowest order bit set to the left so that only the bit indicated by n is set, and then combines this bit pattern with the input number using a bitwise OR to set the n bit. If the input number is 0, and n = 3, then the lowest order bit of the bit pattern for 1 (or 0x1):
0000 0000 0000 0000 0000 0000 0000 0001
is shifted to the left (1 << (31 - 3)):
0001 0000 0000 0000 0000 0000 0000 0000
and combined with the bit-pattern for 0 using a bitwise OR:
0001 0000 0000 0000 0000 0000 0000 0000
The result is that the n bit of the input number is set to 1.
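To tie this back to the numbers discussed above, here is a small standalone sketch; the function bodies are condensed copies of the definitions quoted earlier, so the file compiles on its own:

#include <stdio.h>

/* condensed copies of the bit_test()/bit_set() definitions above */
int bit_test(unsigned int word, int n)
{
    if (n < 0 || n > 31)
        return 0;
    return (word >> (31 - n)) & 0x1;
}

unsigned int bit_set(unsigned int word, int n)
{
    if (n < 0 || n > 31)
        return 0;
    return word | (1u << (31 - n));
}

int main(void)
{
    unsigned int word = 25;   /* 0000 ... 0001 1001 in binary */

    /* bit 0 is the leftmost bit, so for 25 the bits at
       positions 27, 28, and 31 (counted from the left) are on */
    printf("%d\n", bit_test(word, 3));    /* prints 0 */
    printf("%d\n", bit_test(word, 28));   /* prints 1 */

    /* turn on the leftmost bit of 0 */
    printf("%u\n", bit_set(0u, 0));       /* prints 2147483648 (0x80000000) */
    return 0;
}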

Logic of a bit masking XOR code

I have code that combines two sets of hex numbers and then stores the result in a new unsigned char. The code looks like the following:

unsigned char OldSw = 0x1D;
unsigned char NewSw = 0xF0;
unsigned char ChangedSw;

ChangedSw = (OldSw ^ ~NewSw) & ~OldSw;
So what I know is:
0x1D = 0001 1101
0xF0 = 1111 0000
I'm confused about what the ChangedSw line is doing. I know it will give the output 0x02, but I cannot figure out how it is doing it.
ChangedSw = (OldSw ^ ~NewSw) & ~OldSw;
It means "zero one part of OldSw and inverse other part". NewSw indicates what bits of OldSw to zero and what bits to inverse. Namely, 1's in NewSw indicate bits to be zeroed, 0's indicate bits to be inverted.
This operation implemented in two steps.
Step 1. Invert bits.
(OldSw ^ ~NewSw):

  0001 1101
^ 0000 1111
  ---------
  0001 0010

See, we inverted the bits which were 0's in the original NewSw.
Step 2. Zero bits which were not inverted in previous step.
& ~OldSw:

  0001 0010
& 1110 0010
  ---------
  0000 0010

See, it doesn't change the inverted bits, but zeroes all the rest.
The first part, OldSw ^ ~NewSw, gives 0x12, i.e. 0001 0010. So when it is ANDed with ~OldSw (1110 0010),
the operation will be something like this:

  0001 0010
& 1110 0010
  ---------
  0000 0010
So the output will be 2. The tilde operator is 1's complement.
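Here is a minimal sketch to verify the intermediate and final values (OldSw, NewSw, and ChangedSw come from the question; the Step1 name is mine):

#include <stdio.h>

int main(void)
{
    unsigned char OldSw = 0x1D;   /* 0001 1101 */
    unsigned char NewSw = 0xF0;   /* 1111 0000 */

    unsigned char Step1     = OldSw ^ (unsigned char)~NewSw;   /* 0001 0010 */
    unsigned char ChangedSw = Step1 & (unsigned char)~OldSw;   /* 0000 0010 */

    printf("step 1: 0x%02X\n", Step1);       /* prints 0x12 */
    printf("result: 0x%02X\n", ChangedSw);   /* prints 0x02 */
    return 0;
}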

Flags in C/ set-clear-toggle

I am confused as to what the following code does. I understand that Line 1 sets a flag, Line 2 clears a flag, and Line 3 toggles a flag:
#include <stdio.h>

#define SCC_150_A 0x01
#define SCC_150_B 0x02
#define SCC_150_C 0x04

unsigned int flags = 0;

int main(void) {
    flags |= SCC_150_A;              // Line 1
    flags &= ~SCC_150_B;             // Line 2
    flags ^= SCC_150_C;              // Line 3
    printf("Result: %u\n", flags);   // Line 4
}
What I don't understand is what the output of Line 4 would be. What is the effect of setting/clearing/toggling the flags 0x01, 0x02, and 0x04?
The macros define constants that each require a single bit to be represented:
macro hex binary
======================
SCC_150_A 0x01 001
SCC_150_B 0x02 010
SCC_150_C 0x04 100
Initially flags is 0.
Then it has:
Bit 0 set by the bitwise OR.
Bit 1 cleared by the bitwise AND with the inverse of SCC_150_B.
Bit 2 toggled (turning it from 0 to 1).
The final result is thus 101 in binary, or 5 in decimal.
First of all, I'm going to use binary numbers, because it's easier to explain with them; in the end it's the same with hexadecimal numbers. Also note that I shortened the variable to unsigned char to have a shorter value to write down (8 bits vs. 32 bits). The end result is similar, just without the leading digits.
Let's start with the values:
0x01 = 0000 0001
0x02 = 0000 0010
0x04 = 0000 0100
So after replacing the constant/macro, the first line would essentially be this:
flags |= 0000 0001
This performs a bitwise OR operation: a bit in the result is 1 if either of the input values is 1 at that position. Because the initial value of flags is 0, this works just like an assignment or addition here (which it won't in general; keep that in mind).
flags: 0000 0000
op: 0000 0001
----------------
or: 0000 0001
The result is flags being set to 0000 0001.
flags &= ~0000 0010
Here we've got two operations. First there's ~, the bitwise complement operator, which flips all bits of the value; therefore 0000 0010 becomes 1111 1101 (0xfd in hex). Then you're using the bitwise AND operator, where a result bit is set to 1 only if both input values are 1 at that position as well. As you can see, this causes the second bit from the right to be set to 0 without touching any other bit.
flags: 0000 0001
op: 1111 1101
----------------
and: 0000 0001
Due to this, the result of this operation is 0000 0001 (0x01 in hex).
flags ^= 0000 0100
The last operation is the bitwise exclusive or (xor), which will set a bit to 1 only if the input bits don't match (i.e. they're different). This leads to the simple behavior of toggling the bits set in the operands.
flags: 0000 0001
op: 0000 0100
----------------
xor: 0000 0101
In this case the result will be 0000 0101 (0x05 in hex).
For clarification on the last operation, because I think xor might be the hardest to understand here, let's toggle it back:
flags: 0000 0101
op: 0000 0100
----------------
xor: 0000 0001
As you can see, the third bit from the right is equal in both inputs, so the result will be 0 rather than 1.
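These three idioms generalize to any flag bit. Here is a sketch of the usual helper macros (the macro names are my own, not from the question):

#include <stdio.h>

#define FLAG_SET(f, b)    ((f) |=  (b))   /* force bit(s) b to 1 */
#define FLAG_CLEAR(f, b)  ((f) &= ~(b))   /* force bit(s) b to 0 */
#define FLAG_TOGGLE(f, b) ((f) ^=  (b))   /* flip bit(s) b       */
#define FLAG_TEST(f, b)   (((f) & (b)) != 0)

int main(void)
{
    unsigned int flags = 0;
    FLAG_SET(flags, 0x01);
    FLAG_CLEAR(flags, 0x02);
    FLAG_TOGGLE(flags, 0x04);
    printf("Result: %u, 0x04 set: %d\n", flags, FLAG_TEST(flags, 0x04));
    /* prints: Result: 5, 0x04 set: 1 */
    return 0;
}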

set the m-bit to n-bit [closed]

I have a 32-bit number and, without using a loop, I want to set the bits from m to n.
For example:
m may be the 2nd, 5th, 9th, or 10th bit.
n may be the 11th, 22nd, or 27th bit.
I assume (m < n).
Please help me. Thanks.
Suppose bits are numbered from LSB to MSB:

BIT NUMBER  31                                    0
            ▼                                     ▼
number bits 0000 0000 0000 0000 0000 0000 0001 0101
            ▲    ▲            ▲                 ▲
           MSB   |            |                LSB
                n=27         m=17
LSB - Least Significant Bit (numbered 0)
MSB - Most Significant Bit (numbered 31)
In the figure above, I have shown how bits are numbered from LSB to MSB.
Notice the relative positions of n and m where n > m.
To set (all ones) bits from n to m
To set to 1 all bits from position m to n (where n > m) in a 32-bit number, you need a 32-bit mask in which all bits from n to m are 1 and the remaining bits are 0.
For example, to set all bits from m=17 to n=27 we need a mask like:

BIT NUMBER  31   n=27        m=17                 0
            ▼    ▼            ▼                   ▼
mask      = 0000 1111 1111 1110 0000 0000 0000 0000
And if we have any 32-bit number, then by bitwise OR (|) with this mask we can set to 1 all bits from m to n. All other bits will be unchanged.
Remember OR works like:
x | 1 = 1 , and
x | 0 = x
where x value can be either 1 or 0.
So by doing:
num32bit = num32bit | mask;
we can set n to m bit 1 and remaining bits will be unchanged. For example, suppose, num32bit = 0011 1001 1000 0000 0111 1001 0010 1101,
then:
0011 1001 1000 0000 0111 1001 0010 1101   <--- num32bit
0000 1111 1111 1110 0000 0000 0000 0000   <--- mask
---------------------------------------   <--- bitwise OR operation
0011 1111 1111 1110 0111 1001 0010 1101   <--- new number
     ▲            ▲
     |-----------|
     all bits are 1 here; the remaining
     bits are unchanged from num32bit
This is what I mean by:
num32bit = num32bit | mask;
How to make the mask?
To make a mask in which all bits are 1 from n to m and others are 0, we need three steps:
Create mask_n: all bits on the right side from n=27 are one.

BIT NUMBER  31   n=27                             0
            ▼    ▼                                ▼
mask_27   = 0000 1111 1111 1111 1111 1111 1111 1111

In programming this can be created by right-shifting ~0u by 4.
And why 4? Because 4 = 32 - n - 1 ==> 31 - 27 ==> 4.
Also note: the complement (~) of 0 has all bits one, and we need an unsigned right shift in C. (Understand the difference between signed and unsigned right shift: a signed right shift of a negative value may drag copies of the sign bit in.)
Create mask_m: all bits on the left side from m=17 are one.

BIT NUMBER  31               m=17                 0
            ▼                 ▼                   ▼
mask_17   = 1111 1111 1111 1110 0000 0000 0000 0000

This one can be created by left-shifting ~0u by m=17, so that the lowest 17 bits become zero.

Create mask: bitwise AND of the above two, mask = mask_n & mask_m:

mask      = 0000 1111 1111 1110 0000 0000 0000 0000
                 ▲            ▲
BIT NUMBER       27           17
And below is my getMask(n, m) function that returns an unsigned number that looks like the mask in step 3:

#define BYTE 8
typedef char byte;   // bits in a char == BYTE

unsigned getMask(unsigned n, unsigned m){
    byte noOfBits = sizeof(unsigned) * BYTE;
    unsigned mask_n = ~0u >> (noOfBits - n - 1),
             mask_m = ~0u << m,
             mask   = mask_n & mask_m;   // bitwise AND of the two sub-masks
    return mask;
}
To test my getMask() I have also written a main() function and a binary() function, which prints a given number in binary format.
#include <stdio.h>
#include <stdlib.h>

unsigned getMask(unsigned n, unsigned m);   /* defined above */
void binary(unsigned);

int main(){
    unsigned num32bit = 964720941u;
    unsigned mask = 0u;
    unsigned rsult32bit;
    int i = 51;

    mask = getMask(27, 17);
    rsult32bit = num32bit | mask;   // set bits n..m to 1

    printf("\nSize of int is = %zu bits, and "
           "Size of unsigned = %zu e.g.\n",
           sizeof(int) * BYTE, sizeof(unsigned) * BYTE);
    printf("dec= %-4u, bin= ", 21u);
    binary(21);
    printf("\n\n%s %u\n\t  ", "num32bit =", num32bit);
    binary(num32bit);
    printf("mask\t  ");
    binary(mask);
    while(i--) printf("-");
    printf("\n\t  ");
    binary(rsult32bit);
    printf("\n");
    return EXIT_SUCCESS;
}

void binary(unsigned dec){
    int i = 0,
        left = sizeof(unsigned) * BYTE - 1;
    for(i = 0; left >= 0; left--, i++){
        printf("%d", !!(dec & (1u << left)));
        if(!((i + 1) % 4)) printf(" ");
    }
    printf("\n");
}
This test code runs like this (the output is the same as in the example explained above):

Output of code:
-----------------
$ gcc b.c
$ ./a.out

Size of int is = 32 bits, and Size of unsigned = 32 e.g.
dec= 21  , bin= 0000 0000 0000 0000 0000 0000 0001 0101

num32bit = 964720941
          0011 1001 1000 0000 0111 1001 0010 1101
mask      0000 1111 1111 1110 0000 0000 0000 0000
---------------------------------------------------
          0011 1111 1111 1110 0111 1001 0010 1101
Additionally, you can write getMask() in a shorter form, as follows:

unsigned getMask(unsigned n, unsigned m){
    byte noOfBits = sizeof(unsigned) * BYTE;
    return (~0u >> (noOfBits - n - 1)) & (~0u << m);
}
Note: I removed redundant parentheses to clean up the code. You never need to memorize operator precedence, since you can always override it using (), but a good programmer still refers to the precedence table to write neat code.
A better approach may be to write a macro as below (CHAR_BIT comes from <limits.h>):

#define _NO_OF_BITS (sizeof(unsigned) * CHAR_BIT)
#define MASK(n, m) ((~0u >> (_NO_OF_BITS - (n) - 1)) & (~0u << (m)))

And call it like:

result32bit = num32bit | MASK(27, 17);
To reset (all zeros) bits from n to m
To reset to 0 all bits from n to m, and leave the rest unchanged, you just need the complement (~) of the mask:

mask    0000 1111 1111 1110 0000 0000 0000 0000
~mask   1111 0000 0000 0001 1111 1111 1111 1111   <-- complement
Also, instead of the | operator, the & operator is required to set bits to zero.
Remember AND works like:
x & 0 = 0 , and
x & 1 = x
where the x value can be either 1 or 0.
Because we already have the bitwise complement ~ operator and the & operator, we just need to do:
rsult32bit = num32bit & ~MASK(27, 17);
And it will work like:

num32bit = 964720941
        0011 1001 1000 0000 0111 1001 0010 1101
~mask   1111 0000 0000 0001 1111 1111 1111 1111
---------------------------------------------------   bitwise AND
        0011 0000 0000 0000 0111 1001 0010 1101
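Putting both directions together, here is a short sketch using the MASK macro from above (repeated here so the file compiles standalone):

#include <stdio.h>
#include <limits.h>   /* CHAR_BIT */

#define _NO_OF_BITS (sizeof(unsigned) * CHAR_BIT)
#define MASK(n, m) ((~0u >> (_NO_OF_BITS - (n) - 1)) & (~0u << (m)))

int main(void)
{
    unsigned num32bit = 964720941u;   /* 0x3980792D */

    printf("set   bits 17..27: %08X\n", num32bit |  MASK(27, 17));   /* 3FFE792D */
    printf("clear bits 17..27: %08X\n", num32bit & ~MASK(27, 17));   /* 3000792D */
    return 0;
}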

Complementing binary numbers

When I complement 1 (~1), I get the output as -2. How is this done internally?
I first assumed that the bits are inverted, so 0001 becomes 1110, and then 1 is added to it, so it becomes 1111, which is stored. How is the number then retrieved?
Well, no. When you complement 1, you just invert the bits:
1 == 0b00000001
~1 == 0b11111110
And that's -2 in two's complement, which is the way your computer internally represents negative numbers. See http://en.wikipedia.org/wiki/Two's_complement but here are some examples:
-1 == 0b11111111
-2 == 0b11111110
....
-128== 0b10000000
+127== 0b01111111
....
+2 == 0b00000010
+1 == 0b00000001
0 == 0b00000000
What do you mean by "when I complement 1 (~1)"? There is what is called ones' complement, and there is what is called two's complement. Two's complement is more common (it is used on most computers), as it allows negative numbers to be added and subtracted using the same algorithm as positive numbers.
Two's complement is created by taking the binary representation of the positive number, switching every bit from 1 to 0 and from 0 to 1, and then adding one:
5 0000 0101
4 0000 0100
3 0000 0011
2 0000 0010
1 0000 0001
0 0000 0000
-1 1111 1111
-2 1111 1110
-3 1111 1101
-4 1111 1100
-5 1111 1011
etc.
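If you want to see this on your own machine, here is a quick sketch:

#include <stdio.h>

int main(void)
{
    signed char x = ~1;        /* 0000 0001 -> 1111 1110 */
    printf("%d\n", x);         /* prints -2: that pattern read as two's complement */

    /* negation = invert all the bits, then add one */
    printf("%d\n", (~5) + 1);  /* prints -5 */
    return 0;
}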
