What does the bit_test() function do? - C

I'm reading Programming in C, 4th edn by Stephen Kochan.
Exercise: Write a function called bit_test() that takes two arguments: an unsigned int and a bit number n. Have the function return 1 if bit number n is on inside the word, and 0 if it is off. Assume that bit number 0 references the leftmost bit inside the integer. Also write a corresponding function called bit_set() that takes two arguments: an unsigned int and a bit number n. Have the function return the result of turning bit n on inside the integer.
This is one of the exercise's answers on their forum.
12-5
-----
/* test bit n in word to see if it is on
   assumes words are 32 bits long */
int bit_test (unsigned int word, int n)
{
    if ( n < 0 || n > 31 )
        return 0;

    if ( (word >> (31 - n)) & 0x1 )
        return 1;
    else
        return 0;
}

unsigned int bit_set (unsigned int word, int n)
{
    if ( n < 0 || n > 31 )
        return 0;

    return word | (1 << (31 - n));
}
Now I tried to understand it, and as per my understanding it always returns 0. What does this function actually do?

It just checks whether a bit is set or not.
It assumes that an unsigned int is stored in 32 bits on that particular system.
Why the check?
The check is needed to make it safe (we never shift by a negative amount or by more than 31): a negative shift is an error, and a bit number above 31 is meaningless, so both cases simply return 0.
What does (word >> (31 - n)) & 0x1 really do?
x x y x x x x x x
0 1 2 3 4 5 6 7 8
|-----------|
8-2=6
(Here I considered 9-bit words instead of 32. In your case it would be 31-3=28.)
So right-shift it by 6 bits:
0 0 0 0 0 0 x x y
Now how to check if it is set or not?
0 0 0 0 0 0 x x y
& 0 0 0 0 0 0 0 0 1
________________________
0 0 0 0 0 0 0 0 y
If that bit is set, the result is 1; otherwise it is 0.
What does bit_set do?
It returns the word with the nth bit set.
So if you input
0001 0010 1
and the bit to set is 0 (you want to set the bit at position 0), then you will get
1001 0010 1
return word | (1 << (31 - n));
let the word be 0001 1001 1
You want to set bit 2 [0 indexing]
0001 1001 1
| 0010 0000 0
0011 1001 1
You have to apply a bitwise OR with that value.
How to get that value?
Here we just want this number
0010 0000 0
|-------|
6 shifts needed (left shift)
1 << (8-2) ---> is how you get it.
Or in your case 1<<(31-n)
Now I see where your reasoning went wrong.
You considered 25:
0000 0000 0000 0000 0000 0000 0001 1001
The bit at the 3rd position (0-indexed from the left) is this one:
000[0] 0000 0000 0000 0000 0000 0001 1001
This bit is unset, i.e. 0.
Try the 28th (or the 31st) position of the number 25 and you will get 1 as the answer.

The problem statement has us identify the leftmost, or highest order bit as n = 0, and the rightmost, or lowest order bit as n = 31.
The bit_test() function shifts the test bit to the lowest order position and does a bitwise AND to find if the test bit was set. For example, to test if the bit n = 0 is set for the bit pattern:
1111 1111 1111 1111 1111 1111 1111 1111
there is a shift to the right (word >> 31 - 0):
0000 0000 0000 0000 0000 0000 0000 0001
then the bitwise AND with 0x1 evaluates to 1, indicating that the n = 0 bit was set.
The bit_set() function shifts a bit-pattern with only the lowest order bit set to the left so that only the bit indicated by n is set, and then combines this bit pattern with the input number using a bitwise OR to set the n bit. If the input number is 0, and n = 3, then the lowest order bit of the bit pattern for 1 (or 0x1):
0000 0000 0000 0000 0000 0000 0000 0001
is shifted to the left (1 << 31 - 3):
0001 0000 0000 0000 0000 0000 0000 0000
and combined with the bit-pattern for 0 using a bitwise OR:
0001 0000 0000 0000 0000 0000 0000 0000
The result is that the n bit of the input number is set to 1.
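To see the two functions together, here is a minimal driver (my own sketch, not from the book or the forum) that assumes the bit_test() and bit_set() definitions quoted above are compiled alongside it:

#include <stdio.h>

int bit_test(unsigned int word, int n);          /* the forum versions quoted above */
unsigned int bit_set(unsigned int word, int n);

int main(void)
{
    unsigned int x = 25;                 /* 0000 ... 0001 1001 */

    printf("%d\n", bit_test(x, 3));      /* bit 3 from the left is 0  -> prints 0 */
    printf("%d\n", bit_test(x, 28));     /* bit 28 from the left is 1 -> prints 1 */
    printf("%u\n", bit_set(0u, 3));      /* 1 << 28 -> prints 268435456 */
    return 0;
}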

Related

A question to C operations :return 1 when all bits of byte i of x equal 1; 0 otherwise

You are asked to complete the following C function:
/* Return 1 when all bits of byte i of x equal 1; 0 otherwise. */
int allBits_ofByte_i(unsigned x, int i) {
return _____________________ ;
}
My solution: !!(x&(0xFF << (i<<3)))
The correct answer to this question is:
!~(~0xFF | (x >> (i << 3)))
Can someone explain it?
Also, can someone take a look at my answer, is it right?
The expression !~(~0xFF | (x >> (i << 3))) is evaluated as follows.
i<<3 multiplies i by 8 to get a number of bits, which will be 0, 8, 16, or 24, depending on which byte the caller wants to test. This is actually the number of bits to ignore, as it is the number of bits that are less significant than the byte we're interested in.
(x >> ...) shifts the test value right to eliminate the low bits that we're not interested in. The 8 bits of interest are now the lowest 8 bits in the unsigned value we're evaluating. Note that other higher bits may or may not be set.
(~0xFF | ...) sets all 24 bits above the 8 we're interested in, but does not alter those 8 bits. (~0xFF is a shorthand for 0xFFFFFF00, and yes, arguably 0xFFu should be used).
~(...) flips all bits. This will result in a value of zero if every bit was set, and a non-zero value in every other case.
!(...) logically negates the result. This will result in a value of 1 only if every bit was set during step 3. In other words, every bit in the 8 bits we were interested in was set. (The other 24 bits were set in step 3.)
The algorithm can be summed up as: set the 24 bits we're not interested in, then verify that all 32 bits are set.
Your answer took a slightly different approach, which was to shift the 0xFF mask left rather than shift the test value right. That was my first thought for how to approach the problem too! But !!(...) only tells you whether at least one bit of that byte is set, not whether all of them are, which is why your answer wouldn't produce correct results in all cases.
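If you want to keep the shifted-mask idea, one way to repair it (a sketch of mine, not the official answer) is to compare the masked byte against the full mask instead of merely testing for non-zero; the function name below is hypothetical, and it assumes a 32-bit unsigned and i in 0..3:

int allBits_ofByte_i_masked(unsigned x, int i) {
    unsigned mask = 0xFFu << (i << 3);    /* 0xFF moved up to byte i */
    return (x & mask) == mask;            /* 1 only if every bit of byte i is 1 */
}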
x is of unsigned integer type. Let's say that x is (often) 32 bit.
One byte consists of 8 bits. So x has 4 bytes in this case: 0, 1, 2 or 3
According to the solution, the bytes of x are numbered like this (byte 0 is the least significant byte):
x => bbbbbbbb bbbbbbbb bbbbbbbb bbbbbbbb
i =>    3        2        1        0
I will try to break it down:
!~ ( ~0xFF | ( x >> (i << 3) ) )
i can be either 0, 1, 2 or 3. So i << 3 would either give you 0, 8, 16 or 24. (i << n is like multiplying by 2^n; it means shift i to the left n times putting 0).
Note that 0, 8, 16 and 24 are the byte segments: 0-7, 8-15, 16-23, 24-31
This is used as the shift amount: x >> (i<<3) shifts x to the right by that many bits (0, 8, 16, or 24), so that the byte selected by the i parameter now occupies the rightmost 8 bits.
Until now you manipulated x so that the byte you are interested in is located on the right most 8 bits (the right most byte).
~0xFF is the inversion of 0000 0000 0000 0000 0000 0000 1111 1111 which gives you 1111 1111 1111 1111 1111 1111 0000 0000
The bitwise or operator is applied to the two results above, which would result in
1111 1111 1111 1111 1111 1111 abcd efgh - the letters being the bits of the corresponding byte of x.
~1111 1111 1111 1111 1111 1111 abcd efgh will turn into 0000 0000 0000 0000 0000 0000 ABCD EFGH - the capital letters being the inverse of the lower letters' values.
!0000 0000 0000 0000 0000 0000 ABCD EFGH is a logical operation. !n is 1 if n is 0, and 0 otherwise.
So you get a 1 if all the inverted bits of the corresponding byte were 0000 0000 (i.e. the byte is 1111 1111).
Otherwise you get a 0.
In the C programming language a result of 0 corresponds to a boolean false value. And a result different than 0 corresponds to a boolean true value.
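As a quick sanity check of the accepted expression, here is a small test harness of my own, wrapping it in the required function signature:

#include <stdio.h>

/* Return 1 when all bits of byte i of x equal 1; 0 otherwise. */
int allBits_ofByte_i(unsigned x, int i) {
    return !~(~0xFF | (x >> (i << 3)));
}

int main(void) {
    printf("%d\n", allBits_ofByte_i(0x12FF5678u, 2));   /* byte 2 is 0xFF -> prints 1 */
    printf("%d\n", allBits_ofByte_i(0x12FE5678u, 2));   /* byte 2 is 0xFE -> prints 0 */
    return 0;
}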

How to return middle bits?

I searched online and the common solution for returning x bits from start seems to be:
mask = ((1 << x) - 1 ) << start
then, mask & value.
However, I'm confused on how this works still.
If I have a number 0101 1100 and I want to return the two bits from positions 5 and 6 (11)
mask = ((1<<2)-1) << 5
1<<2 = 0000 0100, and subtracting 1 yields 0000 0011 then, shifting 5 is 0110 0000
If I take 0110 0000 & 0101 1100 (the original), I get 0100 0000
This is not the answer I want, so what am I doing wrong?
Assuming the usual right-to-left bit numbering, the 11 you want is at bits 2 and 3 (or 3 and 4, depending on whether you count from 0 or 1).
Think of it this way. If you want to get a specific amount of bits from the middle of a binary value, then you can first shift it to the right n times where n is the index of the first bit of interest.
Shifting 0101 1100 to the right twice yields 0001 0111
Here, we're interested in the first two bits, so we can simply call 0001 0111 & 3 because 3 = 00000011.
Therefore, the formula for this specific example is (b >> 2) & 3 where b is the binary value.
If you want the value at their current location, you can call 0101 1100 & 12 because 12 = 00001100, which returns 0000 1100.
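The shift-then-mask recipe generalizes directly; here is a sketch (the helper name extract_bits is my own, and positions are counted from the least significant bit):

/* extract `width` bits of `value`, starting at bit position `start` (LSB = bit 0) */
unsigned extract_bits(unsigned value, unsigned start, unsigned width) {
    unsigned mask = (1u << width) - 1u;   /* `width` low bits set, e.g. width = 2 -> 11 */
    return (value >> start) & mask;       /* move the field down to bit 0, then mask it */
}
/* extract_bits(0x5C, 2, 2) == 3 : bits 2 and 3 of 0101 1100 are 11 */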
Bit positions are typically counted from least- to most-significant, i.e. right-to-left.
0101 1100
---- ----
7654 3210
The bits you want are bits 2 and 3.
For example, the input num is 92 = 0101 1100,
initial pos = 1,
ending pos = 4.
Now your task is to find the number between the 1st and 4th positions (LSB to MSB); the output should be 11, meaning 3.
First build the ones between these two positions; for this, loop from the starting position to the ending position and do
sum = sum | 1 << i; // sum = 12 (1100)
Next, if you do sum & num, you will get the bits between the given positions, which is 1100; but you need 11, so do 1100 >> 2 (which is n1+1).
Here is my solution:
int sum = 0, res;
int num = 92;               /* num is your input number */
int n1 = 1, n2 = 4;         /* n1 is the starting position and n2 is the ending position */

for (int i = n1 + 1; i < n2; i++)
    sum = sum | (1 << i);   /* build the mask of ones, here 1100 = 12 */

res = sum & num;            /* we get the bits in the middle */
res = res >> (n1 + 1);      /* shift res down to the correct position */
Finally, print res.
As advised by many, you should start your positioning from the right.
num      =  0101 1100
Positions:  8765 4321
Then the given solution work's just fine.
However, a more detailed approach by myself would be:
To get the values of the bits at position 3 and 4 ('11'), I would do the following:
Let's say you want the bit at position k. Shift a 1, k-1 positions to the left and evaluate it with the &-operator. Shift it back, then save the result in an array with the length of how many bits you want. Repeat the process for the rest.
int mask = 1 << (k - 1);
int bitAtPosK = (num & mask) >> (k - 1);
... do this in a loop and save the results.
Visualisation for Position 3:
num = 0101 1100
1 << 2 results in 0000 0100
and
0101 1100
0000 0100
--------- &
0000 0100
0000 0100 >> 2 results in 1
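The loop described above might look like the following sketch (the array name and bounds are my own choices; positions are counted from 1 at the right, as in this answer):

int num = 0x5C;                   /* 0101 1100 */
int firstPos = 3, lastPos = 4;    /* we want the bits at positions 3 and 4 */
int bits[2];

for (int k = firstPos; k <= lastPos; k++) {
    int mask = 1 << (k - 1);                        /* a single 1 at position k */
    bits[k - firstPos] = (num & mask) >> (k - 1);   /* the isolated bit, shifted back to 0 or 1 */
}
/* bits[0] == 1 (position 3), bits[1] == 1 (position 4) */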

How can I get the value of the least significant bit in a number?

I'm working on a programming project and one of things I need to do is write a function that returns a mask that marks the value of the least significant 1 bit. Any ideas on how I can determine the value using bitwise operators?
ex:
0000 0000 0000 0000 0000 0000 0110 0000 = 96
What can I do with the # 96 to turn it into:
0000 0000 0000 0000 0000 0000 0010 0000 = 32
I've been slamming my head against the wall for hours trying to figure this out any help would be greatly appreciated!
x &= -x; /* clears all but the lowest bit of x */
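Why this works (a worked example with the 96 from the question, assuming two's complement): -x equals ~x + 1, so in -x every bit below the lowest set bit of x is 0, the lowest set bit itself stays 1, and every bit above it is flipped; the AND therefore keeps only that lowest set bit.

unsigned x      = 96;       /* 0000 ... 0110 0000 */
unsigned neg    = -x;       /* 1111 ... 1010 0000  (two's complement of 96) */
unsigned lowest = x & neg;  /* 0000 ... 0010 0000  == 32 */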
More readable code:
int leastSignificantBit(int number)
{
    int index = 0;

    /* note: this assumes number != 0, otherwise the loop never ends */
    while ((~number) & 1) {
        number >>= 1;
        index++;
    }
    return 1 << index;
}
To be sure you get the right bit/value:
The value at the least significant bit position = x & 1
The value of the isolated least significant 1 = x & -x
The zero-based index of the isolated least significant 1 = log2(x & -x)
Here's how it looks in JavaScript:
let x = 0b1101000;
console.log(x & 1); // 0 (the farthest-right bit)
console.log(x & -x); // 8 (the farthest-right 1 by itself)
console.log(Math.log2(x & -x)); // 3 (the zero-based index of the farthest-right 1)

set the m-bit to n-bit [closed]

I have a 32-bit number and, without using a for loop, I want to set the bits from bit m to bit n.
For example:
bit m may be the 2nd, 5th, 9th, or 10th bit;
bit n may be the 22nd, 27th, or 11th bit.
I assume (m < n).
Please help me. Thanks.
Suppose bits are numbered from LSB to MSB:
BIT NUMBER   31                                    0
             ▼                                     ▼
number bits  0000 0000 0000 0000 0000 0000 0001 0101
             ▲         ▲            ▲             ▲
            MSB        |            |            LSB
                       |            |
                     n=27         m=17
LSB - Least Significant Bit (numbered 0)
MSB - Most Significant Bit (numbered 31)
In the figure above, I have shown how bits are numbered from LSB to MSB.
Notice the relative positions of n and m where n > m.
To set (all one) bits from n to m
To set all bits from position m to n (where n > m) to 1 in a 32-bit number,
you need a 32-bit mask in which all bits from n to m are 1 and the remaining bits are 0.
For example, to set all bits from m=17 to n=27 we need a mask like:
BIT NUMBER   31   n=27        m=17                 0
             ▼    ▼           ▼                    ▼
mask =       0000 1111 1111 1110 0000 0000 0000 0000
If we bitwise OR (|) this mask with any 32-bit number, all bits from m to n become 1 and all other bits remain unchanged.
Remember OR works like:
x | 1 = 1 , and
x | 0 = x
where x value can be either 1 or 0.
So by doing:
num32bit = num32bit | mask;
we can set bits n to m to 1, and the remaining bits stay unchanged. For example, suppose num32bit = 0011 1001 1000 0000 0111 1001 0010 1101,
then:
0011 1001 1000 0000 0111 1001 0010 1101   <--- num32bit
0000 1111 1111 1110 0000 0000 0000 0000   <--- mask
---------------------------------------        Bitwise OR operation
0011 1111 1111 1110 0111 1001 0010 1101   <--- new number
     ▲           ▲
     |-----------|  all these bits are 1 here;
                    the remaining bits come from num32bit
This is what I mean by:
num32bit = num32bit | mask;
How to make the mask?
To make a mask in which all bits are 1 from n to m and others are 0, we need three steps:
Create mask_n: all bits to the right of (and including) n=27 are one.
BIT NUMBER   31   n=27                             0
             ▼    ▼                                ▼
mask_27 =    0000 1111 1111 1111 1111 1111 1111 1111
In a program this can be created by right-shifting ~0 (all ones) 4 times (>>).
And why 4?
4 = 32 - n - 1 ==> 31 - 27 ==> 4
Also note: the complement (~) of 0 has all bits set to one,
and we need an unsigned right shift in C.
Understand the difference between signed and unsigned right shifts.
Create mask_m: all bits to the left of (and including) m=17 are one.
BIT NUMBER   31                  m=17              0
             ▼                   ▼                 ▼
mask_17 =    1111 1111 1111 1110 0000 0000 0000 0000
Create mask: bitwise AND of the two masks above, mask = mask_n & mask_m:
mask =       0000 1111 1111 1110 0000 0000 0000 0000
                  ▲            ▲
BIT NUMBER        27           17
And below is my getMask(n, m) function, which returns an unsigned number that looks like the mask in step 3.
#define BYTE 8
typedef char byte;   // number of bits in a char == BYTE

unsigned getMask(unsigned n,
                 unsigned m){
    byte noOfBits = sizeof(unsigned) * BYTE;
    unsigned mask_n = ((unsigned)~0u) >> (noOfBits - n - 1),  // ones from bit n down to bit 0
             mask_m = (~0u) << m,                             // ones from bit 31 down to bit m
             mask   = mask_n & mask_m;                        // bitwise & of the two sub-masks
    return mask;
}
To test my getMask() I have also written a main() function and a binary() function, which prints a given number in binary format.
#include <stdio.h>
#include <stdlib.h>   /* for EXIT_SUCCESS */

void binary(unsigned);

int main(){
    unsigned num32bit = 964720941u;
    unsigned mask = 0u;
    unsigned rsult32bit;
    int i = 51;

    mask = getMask(27, 17);
    rsult32bit = num32bit | mask;   // set bits n to m to 1

    printf("\nSize of int is = %zu bits, and "
           "Size of unsigned = %zu e.g.\n", sizeof(int) * BYTE,
                                            sizeof(unsigned) * BYTE);
    printf("dec= %-4u, bin= ", 21);
    binary(21);
    printf("\n\n%s %u\n\t ", "num32bit =", num32bit);
    binary(num32bit);
    printf("mask\t ");
    binary(mask);
    while(i--) printf("-");
    printf("\n\t ");
    binary(rsult32bit);
    printf("\n");
    return EXIT_SUCCESS;
}

void binary(unsigned dec){
    int i = 0,
        left = sizeof(unsigned) * BYTE - 1;

    for(i = 0; left >= 0; left--, i++){
        printf("%d", !!(dec & (1u << left)));
        if(!((i + 1) % 4)) printf(" ");
    }
    printf("\n");
}
This test code runs as follows (the output is the same as the example explained above):
Output of code:
-----------------
$ gcc b.c
:~$ ./a.out
Size of int is = 32 bits, and Size of unsigned = 32 e.g.
dec= 21 , bin= 0000 0000 0000 0000 0000 0000 0001 0101
num32bit = 964720941
0011 1001 1000 0000 0111 1001 0010 1101
mask 0000 1111 1111 1110 0000 0000 0000 0000
---------------------------------------------------
0011 1111 1111 1110 0111 1001 0010 1101
:~$
Additionally, you can write the getMask() function in a shorter form, with just two statements, as follows:
unsigned getMask(unsigned n,
                 unsigned m){
    byte noOfBits = sizeof(unsigned) * BYTE;
    return ((unsigned)~0u >> (noOfBits - n - 1)) &
           (~0u << m);
}
Note: I removed redundant parentheses to clean up the code. You can always override precedence with explicit (), but a good programmer still keeps the precedence table in mind and writes neat code.
A better approach may be to write a macro as below:
#include <limits.h>   /* for CHAR_BIT */

#define _NO_OF_BITS (sizeof(unsigned) * CHAR_BIT)
#define MASK(n, m)  (((unsigned)~0u >> (_NO_OF_BITS - (n) - 1)) & \
                     (~0u << (m)))
And call like:
result32bit = num32bit | MASK(27, 17);
To reset (all zero) bits from n to m
To reset all bits from n to m to 0 and leave the rest unchanged, you just need the complement (~) of the mask:
 mask   0000 1111 1111 1110 0000 0000 0000 0000
~mask   1111 0000 0000 0001 1111 1111 1111 1111   <-- complement
Also, instead of the | operator, the & operator is required to set bits to zero.
Remember, AND works like:
x & 0 = 0 , and
x & 1 = x
where x can be either 1 or 0.
Because we already have the bitwise complement operator ~ and the AND operator &, we just need to do:
rsult32bit = num32bit & ~MASK(27, 17);
And it will work like:
num32bit = 964720941
0011 1001 1000 0000 0111 1001 0010 1101
~mask    1111 0000 0000 0001 1111 1111 1111 1111
---------------------------------------------------
0011 0000 0000 0000 0111 1001 0010 1101

Why is ~0xF equal to 0xFFFFFFF0 on a 32-bit machine?

Why is ~0xF equal to 0xFFFFFFF0?
Also, how is ~0xF && 0x01 = 1? Maybe I don't get 0x01 either.
Question 1
Why is ~0xF equal to 0xFFFFFFF0?
First, this means you run this on a 32-bit machine. That means 0xF is actually 0x0000000F in hexadecimal,
And that means 0xF is
0000 0000 0000 0000 0000 0000 0000 1111 in binary representation.
The ~ operator is the bitwise NOT operation. It changes every 0 to 1 and every 1 to 0 in the binary representation. That would make ~0xF:
1111 1111 1111 1111 1111 1111 1111 0000 in binary representation.
And that is actually 0xFFFFFFF0.
Note that if you do this on a 16-bit machine, the answer of ~0xF would be 0xFFF0.
Question 2
You wrote the wrong statement; it should be 0xF & 0x1. Note that 0x1, 0x01, 0x001, and 0x0001 are all the same. So let's convert these hexadecimal numbers to binary representation:
0xF would be:
0000 0000 0000 0000 0000 0000 0000 1111
and 0x1 would be:
0000 0000 0000 0000 0000 0000 0000 0001
The & operation follows the following rules:
0 & 0 = 0
0 & 1 = 0
1 & 0 = 0
1 & 1 = 1
So doing that to every bit, you get the result:
0000 0000 0000 0000 0000 0000 0000 0001
which is actually 0x1.
Additional
| means bitwise OR operation. It follows:
0 | 0 = 0
0 | 1 = 1
1 | 0 = 1
1 | 1 = 1
^ means bitwise XOR operation. It follows:
0 ^ 0 = 0
0 ^ 1 = 1
1 ^ 0 = 1
1 ^ 1 = 0
You can get more information here.
If you store 0xF in a 32-bit int and complement it, all 32 bits flip, so ~0xF = 0xFFFFFFF0.
See this:
http://teaching.idallen.com/cst8214/08w/notes/bit_operations.txt
They give a good explanation
You're complementing 0xF, which flips every bit to its inverse. So, for example, with 8 bits you have 0xF = 00001111. If you complement that, it turns into 11110000.
Since you're using 32 bits, the leading ones just extend all the way out: 1111 .... 0000
For your second question, you're using a logical AND, not a bitwise AND. Those two behave entirely differently.
It sounds like your confusion is that you believe 0xF is the same as 0b1111111111111111. It is not, it is 0b0000000000001111.
~0xF inverts all its bits, going
from 0x0000000F = 00000000000000000000000000001111 (32 bits)
to 0xFFFFFFF0 = 11111111111111111111111111110000 (32 bits)
a && b is 1 if both a and b are non-zero, and ~0xF and 0x01 are both non-zero.
In C, ~0xF can never be equal to 0xFFFFFFF0. The former is a negative number (in any of the three signed representations C allows) and the latter is a positive number. However, if both are converted to a 32-bit unsigned type on a twos-complement implementation, the converted values will be equal.
As for ~0xF && 0x01, the && operator is logical and, not bitwise and.
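A small sketch of my own that prints the two values discussed in this thread, assuming a 32-bit int:

#include <stdio.h>

int main(void) {
    printf("%X\n", (unsigned)~0xF);    /* FFFFFFF0: every bit of 0x0000000F flipped    */
    printf("%d\n", ~0xF && 0x01);      /* 1: && is logical AND, both operands non-zero */
    return 0;
}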
