int a = 12;
For example, the binary of 12 is 1100, so the answer should be 3, since the 3rd bit from the right is the lowest set bit.
I want the position of the rightmost (least significant) set bit of a. Can anyone tell me how I can do this?
NOTE: I want the position only; I don't want to set or reset the bit, so this is not a duplicate of the other bit-manipulation questions on Stack Overflow.
This answer, Unset the rightmost set bit, shows both how to get and how to unset the rightmost set bit of an unsigned integer, or of a signed integer represented in two's complement.
Get the rightmost set bit:
x & -x
// or
x & (~x + 1)
Unset the rightmost set bit:
x &= x - 1
// or
x -= x & -x // rhs is rightmost set bit
Why it works:
x:            [leading bits] 1 [all 0]
~x:           [reversed leading bits] 0 [all 1]
~x + 1 or -x: [reversed leading bits] 1 [all 0]
x & -x:       [all 0] 1 [all 0]
E.g., let x = 112, using 8 bits for simplicity, though the idea is the same for integers of any size.
// example for get rightmost set bit
x: 01110000
~x: 10001111
-x or ~x + 1: 10010000
x & -x: 00010000
// example for unset rightmost set bit
x: 01110000
x-1: 01101111
x & (x-1): 01100000
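A minimal C sketch putting both one-liners together, using unsigned int and the example value 112 from above:

#include <stdio.h>

int main(void)
{
    unsigned int x = 112;               /* 0111 0000 */
    unsigned int lowest = x & -x;       /* 0001 0000: value of the rightmost set bit */
    unsigned int cleared = x & (x - 1); /* 0110 0000: x with the rightmost set bit unset */

    printf("x = %u, rightmost set bit value = %u, x with it cleared = %u\n",
           x, lowest, cleared);         /* prints 112, 16, 96 */
    return 0;
}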
Finding the (0-based) index of the least significant set bit is equivalent to counting how many trailing zeros a given integer has. Depending on your compiler there are builtin functions for this, for example gcc and clang support __builtin_ctz.
For MSVC you would need to implement your own version; this answer to a different question shows a solution making use of MSVC intrinsics.
Given that you are looking for the 1-based index, you simply need to add 1 to ctz's result in order to achieve what you want.
int a = 12;
int least_bit = __builtin_ctz(a) + 1; // least_bit = 3
Note that this operation is undefined if a == 0. Furthermore there exist __builtin_ctzl and __builtin_ctzll which you should use if you are working with long and long long instead of int.
One can use a property of two's complement here.
The quickest way to form the two's complement of a number by hand is to keep everything up to and including the rightmost set bit and flip every bit to the left of it.
For example: consider a 4 bit system
/* Number in binary */
4 = 0100
/* 2s complement of 4 */
complement = 1100
/* which is nothing but */
complement == -4
/* Result */
4 & (-4) = 0100
Notice that there is only one set bit, and it is at the position of the rightmost set bit of 4.
Similarly, we can generalise this for any n.
n & (-n) will contain only one set bit, which is at the position of the rightmost set bit of n.
Since there is only one set bit in n&(-n), it is a power of 2.
So finally we can get the bit position by:
log2(n&(-n))+1
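A minimal sketch of that formula in C (it assumes n != 0 and uses log2 from <math.h>, so link with -lm):

#include <stdio.h>
#include <math.h>

int main(void)
{
    unsigned int n = 12;                        /* 1100 */
    unsigned int isolated = n & -n;             /* 0100 = 4, always a power of two */
    int pos = (int)log2((double)isolated) + 1;  /* 1-based position of the rightmost set bit */

    printf("%d\n", pos);                        /* prints 3 */
    return 0;
}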
The rightmost set bit of n can be obtained using the formula:
n & ~(n-1)
This works because when you calculate (n-1), all the zeros below the rightmost set bit become 1, and the rightmost set bit itself becomes 0.
Then you take the NOT of it, which leaves you with the following:
x = (inverted upper bits of the original number) + (the rightmost 1 bit) + (trailing zeros)
Now, if you do (n & x), you get what you need, since the only bit that is 1 in both n and x is that rightmost set bit.
Phewwwww .. :sweat_smile:
http://www.catonmat.net/blog/low-level-bit-hacks-you-absolutely-must-know/
helped me understand this.
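For what it's worth, n & ~(n-1) gives the same value as the n & -n used in the other answers; a quick sketch to check, for unsigned n:

#include <stdio.h>

int main(void)
{
    unsigned int n = 112;                    /* 0111 0000 */
    printf("%u %u\n", n & ~(n - 1), n & -n); /* both print 16 */
    return 0;
}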
There is a neat trick in Knuth 7.1.3 where you multiply by a "magic" number (found by a brute-force search) that maps the first few bits of the number to a unique value for each position of the rightmost bit, and then you can use a small lookup table. Here is an implementation of that trick for 32-bit values, adapted from the nlopt library (MIT/expat licensed).
/* Return position (0, 1, ...) of rightmost (least-significant) one bit in n.
*
* This code uses a 32-bit version of algorithm to find the rightmost
* one bit in Knuth, _The Art of Computer Programming_, volume 4A
* (draft fascicle), section 7.1.3, "Bitwise tricks and
* techniques."
*
* Assumes n has a 1 bit, i.e. n != 0
*
*/
static unsigned rightone32(uint32_t n)
{
    const uint32_t a = 0x05f66a47; /* magic number, found by brute force */
    static const unsigned decode[32] = { 0, 1, 2, 26, 23, 3, 15, 27, 24, 21, 19, 4, 12, 16, 28, 6,
                                         31, 25, 22, 14, 20, 18, 11, 5, 30, 13, 17, 10, 29, 9, 8, 7 };
    n = a * (n & (-n));
    return decode[n >> 27];
}
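A small usage sketch of my own (it assumes rightone32() from above is pasted into the same file):

#include <stdio.h>
#include <stdint.h>

/* rightone32() as defined above goes here */

int main(void)
{
    printf("%u\n", rightone32(12u));  /* 12 = 1100, prints 2 (0-based index) */
    printf("%u\n", rightone32(112u)); /* 112 = 0111 0000, prints 4 */
    return 0;
}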
Try this
int set_bit = n ^ (n&(n-1));
Explanation:
As noted in this answer, n&(n-1) unsets the last set bit.
So, if we unset the last set bit and XOR the result with the number, then by the nature of the XOR operation the last set bit becomes 1 and all the other bits come out as 0.
1. Subtract 1 from the number: (a-1)
2. Take its bitwise complement: ~(a-1)
3. AND it with the original number:
int last_set_bit = a & ~(a-1)
The reason for the subtraction is that after taking the complement, the bit at the position of the original rightmost set bit is 1 (and everything below it is 0), so the AND leaves only that rightmost set bit.
Check whether a & 1 is 0. If it is, shift right by one and repeat until it isn't. The number of shifts you performed is the (0-based) position of the rightmost set bit.
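A minimal sketch of that loop (it assumes a != 0, otherwise it would never terminate):

#include <stdio.h>

int main(void)
{
    unsigned int a = 12;    /* 1100 */
    int shifts = 0;

    while ((a & 1) == 0) {  /* loop assumes a != 0 */
        a >>= 1;
        ++shifts;
    }
    printf("%d\n", shifts); /* prints 2; add 1 for the 1-based position */
    return 0;
}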
You can isolate the rightmost set bit (its value, not its index) by taking the bitwise XOR of n and (n & (n-1)):
int rightmost_set_bit = n ^ (n & (n-1));
I inherited this one, with a note that it came from HAKMEM (try it out here). It works on both signed and unsigned integers, logical or arithmetic right shift. It's also pretty efficient.
#include <stdio.h>

int rightmost1(int n) {
    int pos, temp;
    for (pos = 0, temp = ~n & (n - 1); temp > 0; temp >>= 1, ++pos);
    return pos;
}

int main()
{
    int pos = rightmost1(16);
    printf("%d", pos);
}
You must check all 32 bits starting at index 0 and working your way to the left. If you can bitwise-and your a with a one bit at that position and get a non-zero value back, it means the bit is set.
#include <limits.h>

int last_set_pos(int a) {
    for (int i = 0; i < sizeof a * CHAR_BIT; ++i) {
        if (a & (1u << i)) return i;   // 1u avoids undefined behaviour when i == 31
    }
    return -1; // a == 0
}
On typical systems int will be 32 bits, but using sizeof a * CHAR_BIT gets you the right number of bits in a even if it is a different size.
Following dbush's solution, try this:
int rightMostSet(int a){
    if (!a) return -1;      // no bit is set
    int i = 0;
    while ((a & 1) == 0) {  // parentheses needed: == binds tighter than &
        i++;
        a >>= 1;
    }
    return i;
}
return log2(((num-1)^num)+1);
Explanation with an example: 12 = 1100
num - 1 = 11 = 1011
num ^ (num-1) = 12 ^ 11 = 7 (0111)
(num ^ (num-1)) + 1 = 8 (1000)
log2(8) = 3 (the answer).
x & ~(x-1) isolates the lowest bit that is one.
#include <stdio.h>
#include <math.h>

int main(int argc, char **argv)
{
    int setbit;
    unsigned long d;
    unsigned long n1;
    unsigned long n = 0xFFF7;
    double nlog2 = log(2);

    while (n)
    {
        n1 = (unsigned long)n & (unsigned long)(n - 1);
        d = n - n1;
        n = n1;
        setbit = log(d) / nlog2;
        printf("Set bit: %d\n", setbit);
    }
    return 0;
}
And the result is as below.
Set bit: 0
Set bit: 1
Set bit: 2
Set bit: 4
Set bit: 5
Set bit: 6
Set bit: 7
Set bit: 8
Set bit: 9
Set bit: 10
Set bit: 11
Set bit: 12
Set bit: 13
Set bit: 14
Set bit: 15
Let x be your integer input.
Bitwise AND it with 1.
If x is even, its lowest bit is 0, so x & 1 gives 0.
If x is odd, its lowest bit is 1, so x & 1 gives 1.
if ((x & 1) == 0)
{
    std::cout << "The rightmost bit is 0, i.e. even\n";
}
else
{
    std::cout << "The rightmost bit is 1, i.e. odd\n";
}
Alright, number systems are just logarithms and exponents, so I'll walk through an approach that makes sense to me.
I would prefer you read this first, because it explains how I interpret logarithms.
The operation x & -x gives you the value whose only set bit is the rightmost set bit of x (for example 0001000 or 0000010). In my interpretation of logarithms, that value is what you end up with after repeatedly growing at a rate of 2. We are interested in the number of binary digits of this value, because that count minus 1 is exactly the (0-based) bit position of the set bit (bit counts begin at 0, digit counts begin at 1). The number of digits is the number of doublings plus 1. Since we only need the bit position, not the digit count, and since x & -x is always an exact power of 2 (so there is no fractional part to worry about, unlike a number such as 65), taking log2 of x & -x gives the bit position directly. An earlier answer already mentions this; I just wanted to write down why it really works.
P.S.: You could also count the number of binary digits and subtract 1 to get the bit position.
I need to extract a specific part (a number of bits) of a short data type in C.
For example, I have 45 in binary as 101101 and I just want the 2 bits in the middle, such as (10).
I started with C only 2 days ago, so I don't know a lot of functions yet.
How do I extract them?
Please search for bitwise operations for more general information, and bit masking for your specific question. I wouldn't recommend jumping straight to bits if you are new to programming, though.
The solution will change slightly depending on whether your input has a fixed length. If it doesn't, you need to arrange your mask accordingly, or use a different method; this is probably the simplest way.
In order to get specific bits that you want, you can use bitmasking.
E.g. you have 101101 and you want those middle two bits. If you & this with 001100, only the bits that are 1 in the mask remain unchanged in the source; all the other bits are set to 0. Effectively, you are left with only the bits you are interested in.
If you don't know what & (bitwise AND) is: for each bit position it yields 1 only if both operands have a 1 there, and 0 otherwise.
input : 1 0 1 1 0 1
mask : 0 0 1 1 0 0
result : 0 0 1 1 0 0
In C syntax, we can do it like this:
unsigned int input = 45;
unsigned int mask = 0b001100; // I don't know if this is standard notation. May not work with all compilers
// or
unsigned int mask = 12; // This is equivalent
unsigned int result = input & mask; // result contains ...001100
As you can see, we filtered out the bits we wanted. The next step depends on what you want to do with those bits.
At this point, the result 001100 corresponds to 12, which by itself is probably not very useful. What you can do is move those bits around. To get rid of the 0s on the right, we can shift the value 2 bits to the right. For this, we use the >> operator.
0 0 1 1 0 0 >> 2 ≡ 0 0 0 0 1 1
result = result >> 2; // result contains ...011
From there, you can set a bool variable to store each of them being 1 or 0.
unsigned char flag1 = result & 0b01; // or just 1
unsigned char flag2 = result & 0b10; // or just 2
You could do this without shifting at all, but this way it's clearer.
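Putting the masking and shifting together, a minimal runnable sketch (I use the hex literal 0xC here rather than 0b001100, since 0b prefixes are not standard C):

#include <stdio.h>

int main(void)
{
    unsigned int input = 45;                    /* 101101 */
    unsigned int mask = 0xC;                    /* 001100 */
    unsigned int result = (input & mask) >> 2;  /* keep the middle two bits, drop the trailing zeros */

    printf("%u\n", result);                     /* prints 3, i.e. binary 11 */
    return 0;
}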
You need to mask the bits that you want to extract. Suppose you want to create a mask with the lowest 4 bits set; you can do that with:
(1 << 4) - 1
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

void print_bin(short n)
{
    unsigned long i = CHAR_BIT * sizeof(n);
    while (i--)
        putchar('0' + ((n >> i) & 1));
    printf("\n");
}

int main()
{
    short num = 45;   /* Binary 101101 */
    short mask = 4;   /* Extract 4 bits */
    short start = 0;  /* Start from the rightmost bit, position 0 */
    print_bin((num >> start) & ((1 << mask) - 1)); /* Prints 1101 */

    mask = 2;         /* 2 bits */
    start = 1;        /* Start from the bit at position 1 */
    print_bin((num >> start) & ((1 << mask) - 1)); /* Prints 10 */
    return 0;
}
Output:
0000000000001101
0000000000000010
I'm trying to find the positions of the two 1's in a 64-bit number. In this case the ones are at the 0th and 63rd positions. The code here returns 0 and 32, which is only half right. Why does this not work?
#include<stdio.h>

void main()
{
    unsigned long long number=576460752303423489;
    int i;
    for (i=0; i<64; i++)
    {
        if ((number & (1 << i))==1)
        {
            printf("%d ",i);
        }
    }
}
There are two bugs on the line
if ((number & (1 << i))==1)
which should read
if (number & (1ull << i))
Changing 1 to 1ull means that the left shift is done on a value of type unsigned long long rather than int, and therefore the bitmask can actually reach positions 32 through 63. Removing the comparison to 1 is because the result of number & mask (where mask has only one bit set) is either mask or 0, and mask is only equal to 1 when i is 0.
However, when I make that change, the output for me is 0 59, which still isn't what you expected. The remaining problem is that 576460752303423489 (decimal) = 0800 0000 0000 0001 (hexadecimal). 0 59 is the correct output for that number. The number you wanted is 9223372036854775809 (decimal) = 8000 0000 0000 0001 (hex).
Incidentally, main is required to return int, not void, and needs an explicit return 0; as its last action (unless you are doing something more sophisticated with the return code). Yes, C99 lets you omit that. Do it anyway.
Because (1 << i) is a 32-bit int value on the platform you are compiling and running on. This then gets sign-extended to 64 bits for the & operation with the number value, resulting in bit 31 being duplicated into bits 32 through 63.
Also, you are comparing the result of the & to 1, which isn't correct. It will not be 0 if the bit is set, but it won't be 1.
Shifting a 32-bit int by 32 is undefined.
Also, your input number is incorrect. The bits set are at positions 0 and 59 (or 1 and 60 if you prefer to count starting at 1).
The fix is to use (1ull << i), or otherwise to right-shift the original value and & it with 1 (instead of left-shifting 1). And of course if you do left-shift 1 and & it with the original value, the result won't be 1 (except for bit 0), so you need to compare != 0 rather than == 1.
#include<stdio.h>

int main()
{
    unsigned long long number = 576460752303423489;
    int i;
    for (i=0; i<64; i++)
    {
        if ((number & (1ULL << i))) // here
        {
            printf("%d ", i);
        }
    }
}
The first fix is to use 1ULL so the constant has type unsigned long long. The second is in the if statement: you don't want to compare against 1, since that comparison would only be true for the rightmost bit.
Output: 0 59
It's correct, because 576460752303423489 is equal to 0x0800000000000001.
The problem could have been avoided in the first place by shifting the value under test to the right, instead of shifting the literal 1 to the left:
if ((variable >> other_variable) & 1)
...
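A sketch of that approach applied to the loop from the question (shifting the value instead of the literal 1, so no 1ULL is needed):

#include <stdio.h>

int main(void)
{
    unsigned long long number = 576460752303423489ULL; /* bits 0 and 59 set */
    int i;

    for (i = 0; i < 64; i++) {
        if ((number >> i) & 1)
            printf("%d ", i);   /* prints 0 59 */
    }
    printf("\n");
    return 0;
}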
I know the question is old and already has multiple correct answers, and this should really be a comment, but it's a bit too long for one. I advise you to encapsulate the bit-checking logic in a macro, and not to hard-code the number 64 but to compute it instead. Take a look here for a quite comprehensive collection of bit-manipulation hacks.
#include <stdio.h>
#include <limits.h>

#define CHECK_BIT(var,pos) ((var) & (1ULL<<(pos)))

int main(void)
{
    unsigned long long number = 576460752303423489;
    int pos = sizeof(unsigned long long) * CHAR_BIT;
    while ((pos--) > 0) {      /* walks from bit 63 down to bit 0 */
        if (CHECK_BIT(number, pos))
            printf("%d ", pos);
    }
    return 0;
}
Rather than resorting to bit manipulation, one can use compiler facilities to perform bit analysis tasks in the most efficient manner (using only a single CPU instruction in many cases).
For example, gcc and clang provide those handy routines:
__builtin_popcountll() - number of bits set in the 64b value
__builtin_clzll() - number of leading zeroes in the 64b value
__builtin_ctzll() - number of trailing zeroes in the 64b value
__builtin_ffsll() - one plus the index of the least significant set bit in the 64b value (0 if the value is 0)
Other compilers have similar mechanisms.
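For example, with gcc or clang (note these are compiler extensions, not standard C, and the ctz/clz variants are undefined for a zero argument):

#include <stdio.h>

int main(void)
{
    unsigned long long n = 576460752303423489ULL;      /* bits 0 and 59 set */

    printf("popcount: %d\n", __builtin_popcountll(n)); /* 2 */
    printf("ctz:      %d\n", __builtin_ctzll(n));      /* 0 */
    printf("clz:      %d\n", __builtin_clzll(n));      /* 4 */
    printf("ffs:      %d\n", __builtin_ffsll(n));      /* 1 (1-based index of the lowest set bit) */
    return 0;
}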
Reading the book "C - A reference manual (Fifth Edition)", I stumbled upon this piece of code (each integer in the SET is represented by a bit position):
typedef unsigned int SET;
#define emptyset ((SET) 0)
#define first_set_of_n_elements(n) (SET)((1<<(n))-1)

/* next_set_of_n_elements(s): Given a set of n elements,
   produce a new set of n elements. If you start with the
   result of first_set_of_n_elements(k) and then at each
   step apply next_set_of_n_elements to the previous result,
   and keep going until a set is obtained containing m as a
   member, you will have obtained a set representing all
   possible ways of choosing k things from m things. */
SET next_set_of_n_elements(SET x) {
    /* This code exploits many unusual properties of unsigned arithmetic. As an illustration:
       if x == 001011001111000, then
       smallest       == 000000000001000
       ripple         == 001011010000000
       new_smallest   == 000000010000000
       ones           == 000000000000111
       returned value == 001011010000111
       The overall idea is that you find the rightmost
       contiguous group of 1-bits. Of that group, you slide the
       leftmost 1-bit to the left one place, and slide all the
       others back to the extreme right.
       (This code was adapted from HAKMEM.) */
    SET smallest, ripple, new_smallest, ones;
    if (x == emptyset) return x;
    smallest = (x & -x);
    ripple = x + smallest;
    new_smallest = (ripple & -ripple);
    ones = ((new_smallest / smallest) >> 1) - 1;
    return (ripple | ones);
}
I'm lost at the calculation of 'ones', and its significance in the calculation. Although I can understand the calculation mathematically, I cannot understand why this works, or how.
On a related note, the authors of the book claim that the calculation for first_set_of_n_elements "exploits the properties of unsigned subtractions". How is (2^n)-1 an "exploit"?
The smallest computation gets the first non-0 bit of your int. How does it work?
Let n be the bit length of your int. The negative of a number x (bits b_{n-1}...b_0) is computed in such a way that when you add x to -x, you get 2^n. Since your integer is only n bits long, the carried-out bit is discarded and you obtain 0.
Now, let b'_{n-1}...b'_0 be the binary representation of -x.
Since x + (-x) must equal 2^n, when you meet the first 1 bit of x (say at position i), the corresponding bit of -x will also be set to 1, and when adding the numbers you get a carry.
To obtain 2^n, this carry must propagate through all the bits until the end of the bit sequence of your int. Thus, the bits of -x at each position j with i < j < n satisfy:
b_j + b'_j + 1 = 10 (binary)
Then, from the above we can infer that:
b_j = NOT(b'_j), and thus b_j & b'_j = 0
On the other hand, the bits b'_j of -x at positions j with 0 <= j < i satisfy:
b_j + b'_j = 0 or 10 (binary)
Since all the corresponding b_j are 0, the only option is b'_j = 0.
Thus, the only bit that is 1 in both x and -x is the one at position i.
In your example :
x = 001011001111000
-x = 110100110001000
Thus,
0.0.1.0.1.1.0.0.1.1.1.1.0.0.0
1.1.0.1.0.0.1.1.0.0.0.1.0.0.0 AND
\=====================/
0.0.0.0.0.0.0.0.0.0.0.1.0.0.0
The ripple then turns every contiguous "1" from position i upward (bit i included) to 0, and the first following 0 bit to 1 (due to the carry propagation). That's why your ripple is:
r(x) = 0.0.1.0.1.1.0.1.0.0.0.0.0.0.0
ones is computed from the division of smallest(r(x)) by smallest(x). Since smallest(x) is an integer with only a single bit set, at position i, you have:
(smallest(r(x)) / smallest(x)) >> 1 = smallest(r(x)) >>(i+1)
The resulting integer also has only one bit set to 1, say at index p; subtracting 1 from this value gives you an integer ones such that:
For each j such that 0 <= j < p,
ones_j = 1
For each j such that p <= j < n,
ones_j = 0
Finally, the return value is the integer such that :
The first subsequence of 1-bit of the argument is set to 0.
All the 0-bit before the subsequence are set to 1.
The first 0-bit after the subsequence is set to 1.
The remaining bits are left unchanged
Then, I can't explain the remaining part since I did not understand the sentence :
a set is obtained containing m as a member
First of all, this code is rather obscure and doesn't look like anything worth spending time pondering over; it will only yield useless knowledge.
The "exploit" is that the code relies on implementation-defined behavior of various arithmetic rules.
001 0110 0111 1000 is a 15-bit number. Why the author uses 15-bit numbers instead of 16, I don't know. Seems like a typo remaining even after 5 editions.
If we put a minus sign in front of that binary number on a two's complement system (explanation of two's complement here), it will turn into 1110 1001 1000 1000, because the compiler preserves the decimal value of the number (5752) and translates it to its negative equivalent (-5752). (However, the actual data type will remain unsigned int, so if you tried to print it you would get the garbage number 59784.)
0001 0110 0111 1000
AND 1110 1001 1000 1000
= 0000 0000 0000 1000
The C standard does not enforce two's complement, so the code in that book is not portable.
It's a little misleading, because it actually exploits 2's complement. First, the calculation of smallest:
In 2's complement representation, for the x in the comments -x is 110100110001000. Focus on the least significant bit of x that is a one; since two's complement is essentially 1's complement plus 1, that bit will be set in both x and -x and no other bit position after it (on the way to the LSB) will have that property. That's how you get the smallest bit set.
ripple is pretty straightforward and is named as such because it propagates a carry up toward the MSB; new_smallest then follows from the description above.
ones is the number we should add to the ripple in order to continue choosing n elements, picture it below:
ones: 11 next set: 100011
ones: 1 next set: 100101
ones: 0 next set: 100110
Running it will indeed show you all the ways of choosing n bits out of CHAR_BIT * sizeof(int) - 1 items (CHAR_BIT * sizeof(int) bits are needed because -x of an n-bit number needs at worst n+1 bits to be represented).
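For completeness, a small driver sketch of my own: it repeats the core steps of next_set_of_n_elements so it compiles standalone, and replaces the "containing m as a member" stopping condition with a fixed bit limit.

#include <stdio.h>

typedef unsigned int SET;

/* Same core steps as next_set_of_n_elements() in the question,
   repeated here so the sketch compiles on its own. */
static SET next_set(SET x)
{
    SET smallest = x & -x;
    SET ripple = x + smallest;
    SET new_smallest = ripple & -ripple;
    SET ones = ((new_smallest / smallest) >> 1) - 1;
    return ripple | ones;
}

int main(void)
{
    SET s = (SET)((1u << 3) - 1);  /* first_set_of_n_elements(3) == 0b111 */

    /* Enumerate all 20 three-element subsets of {0, 1, ..., 5}. */
    while (!(s & (1u << 6))) {     /* stop once element 6 would become a member */
        printf("%02x\n", s);       /* 07 0b 0d 0e 13 15 16 19 1a 1c ... */
        s = next_set(s);
    }
    return 0;
}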
First, here is an example of the output we can get with n=4. The idea is that we start with the n least significant bits set to '1', and then we iterate through all the combinations of numbers with the same count of bits set to '1':
1111
10111
11011
11101
11110
100111
101011
101101
101110 (*)
110011
110101
110110
111001
111010
111100
1000111
1001011
1001101
It works the following way. I will use the number marked with the star above as an example:
101110
We isolate the lowest set bit, as explained in the other answers.
  101110
& 010010
= 000010
We "move" that lowest set bit one position to the left by adding it to the original number. If the bit immediately to its left is '0', this is easy to understand, as the subsequent operations will do nothing. If that left bit is '1', we get a carry which propagates to the left. The problem in this last case is that the number of '1' bits changes, so we have to set some '1's back to keep their count constant.
101110
+ 000010
= 110000
To do so, we retrieve the lowest set bit of the new result, and by dividing it by the previous lowest set bit, we get a power of two telling how far the carry has propagated. The '>> 1' followed by '- 1' converts this into the right number of '1's at the lowest positions:
010000
/ 000010
= 001000
>> 1
- 1
= 000011
We finally OR the result of the addition and the ones.
110011
I would say that the "exploit" is the unsigned change of sign in the operation (x & -x).
How do I check if an integer is even or odd using bitwise operators?
Consider what being "even" and "odd" means in "bit" terms. Since binary integer data is stored with bits indicating multiples of 2, the lowest-order bit will correspond to 2^0, which is of course 1, while all of the other bits will correspond to multiples of 2 (2^1 = 2, 2^2 = 4, etc.). Gratuitous ASCII art:
NNNNNNNN
||||||||
|||||||+-- bit 0, value = 1 (2^0)
||||||+--- bit 1, value = 2 (2^1)
|||||+---- bit 2, value = 4 (2^2)
||||+----- bit 3, value = 8 (2^3)
|||+------ bit 4, value = 16 (2^4)
||+------- bit 5, value = 32 (2^5)
|+-------- bit 6, value = 64 (2^6)
+--------- bit 7 (highest-order bit), value = 128 (2^7) for unsigned numbers,
           value = -128 (-2^7) for signed numbers (2's complement)
I've only shown 8 bits there, but you get the idea.
So you can tell whether an integer is even or odd by looking only at the lowest-order bit: If it's set, the number is odd. If not, it's even. You don't care about the other bits because they all denote multiples of 2, and so they can't make the value odd.
The way you look at that bit is by using the AND operator of your language. In C and many other languages syntactically derived from B (yes, B), that operator is &. In BASICs, it's usually And. You take your integer, AND it with 1 (which is a number with only the lowest-order bit set), and if the result is not equal to 0, the bit was set.
I'm intentionally not actually giving the code here, not only because I don't know what language you're using, but because you marked the question "homework." :-)
In C (and most C-like languages)
if (number & 1) {
// It's odd
}
if (number & 1)
number is odd
else // (number & 1) == 0
number is even
For example, let's take integer 25, which is odd.
In binary, 25 is 00011001. Notice that the least significant bit b_0 is 1.
00011001
00000001 (00000001 is 1 in binary)
&
--------
00000001
Just a footnote to Jim's answer.
In C#, unlike C, the result of a bitwise AND is an int that won't implicitly convert to bool in an if condition, so you'd want to write:
if ((number & 1) == 1) {
// It's odd
}
if(x & 1) // '&' is a bit-wise AND operator
printf("%d is ODD\n", x);
else
printf("%d is EVEN\n", x);
Examples:
For 9:
9 -> 1 0 0 1
1 -> & 0 0 0 1
-------------------
result-> 0 0 0 1
So 9 AND 1 gives us 1, as the right most bit of every odd number is 1.
For 14:
14 -> 1 1 1 0
1 -> & 0 0 0 1
------------------
result-> 0 0 0 0
So 14 AND 1 gives us 0, as the right most bit of every even number is 0.
Also, in Java you would have to write if ((number & 1) == 1) { /* then odd */ }, because in Java and C#-like languages an int is not converted to a boolean. You have to use a relational operator to produce a boolean value, i.e. true or false, unlike C and C++-like languages, which treat any non-zero value as true.
You can do it simply using bitwise AND & operator.
if(num & 1)
{
//I am odd number.
}
Read more over here - Checking even odd using bitwise operator in C
Check Number is Even or Odd using XOR Operator
Number = 11
1011 - 11 in Binary Format
^ 0001 - 1 in Binary Format
----
1010 - 10 in Binary Format
Number = 14
1110 - 14 in Binary Format
^ 0001 - 1 in Binary Format
----
1111 - 15 in Binary Format
As can be observed, XOR of a number with 1 increments it by 1 if it is even, and decrements it by 1 if it is odd.
Code:
if((n^1) == (n+1))
cout<<"even\n";
else
cout<<"odd\n";
#include <iostream>
#include <algorithm>
#include <vector>

void BitConvert(int num, std::vector<int> &array){
    while (num > 0){
        array.push_back(num % 2);
        num = num / 2;
    }
}

void CheckEven(int num){
    std::vector<int> array;
    BitConvert(num, array);
    if (array.empty() || array[0] == 0) // array is empty when num == 0, which is even
        std::cout << "Number is even";
    else
        std::cout << "Number is odd";
}

int main(){
    int num;
    std::cout << "Enter a number:";
    std::cin >> num;
    CheckEven(num);
    std::cout << std::endl;
    return 0;
}
In Java,
if((num & 1)==0){
//its an even num
}
//otherwise its an odd num
This is an old question, however the other answers have left this out.
In addition to using num & 1, you can also use (num | 1) > num (the parentheses matter, since > binds more tightly than | in C).
This works because if the number is odd, num | 1 is the same value, since the ones bit was already set; if the number is even, the ones bit wasn't set, so setting it makes the new value greater by one.
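A quick sketch of that check (again note the parentheses around num | 1):

#include <stdio.h>

int main(void)
{
    int num = 25;

    if ((num | 1) > num)
        printf("%d is even\n", num);
    else
        printf("%d is odd\n", num);   /* 25 | 1 == 25, so this branch runs */
    return 0;
}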
Approach 1: Short and no need for explicit comparison with 1
if (number & 1) {
// number is odd
}
else {
// number is even
}
Approach 2: Needs an extra bracket and explicit comparison with 0
if((num & 1) == 0){ // Note: Bracket is MUST around num & 1
// number is even
}
else {
// number is odd
}
What happens if you omit the parentheses in the above code?
if(num & 1 == 0) { } // wrong way of checking even or not!!
becomes
if(num & (1 == 0)) { } // == is higher precedence than &
https://en.cppreference.com/w/cpp/language/operator_precedence
Jon Bentley, in Column 1 of his book Programming Pearls, introduces a technique for sorting a sequence of non-zero positive integers using bit vectors.
I have taken the program bitsort.c from here and pasted it below:
/* Copyright (C) 1999 Lucent Technologies */
/* From 'Programming Pearls' by Jon Bentley */

/* bitsort.c -- bitmap sort from Column 1
 * Sort distinct integers in the range [0..N-1]
 */

#include <stdio.h>

#define BITSPERWORD 32
#define SHIFT 5
#define MASK 0x1F
#define N 10000000

int a[1 + N/BITSPERWORD];

void set(int i)
{
    int sh = i>>SHIFT;
    a[i>>SHIFT] |= (1<<(i & MASK));
}

void clr(int i)  { a[i>>SHIFT] &= ~(1<<(i & MASK)); }
int  test(int i) { return a[i>>SHIFT] & (1<<(i & MASK)); }

int main()
{
    int i;
    for (i = 0; i < N; i++)
        clr(i);
    /* Replace above 2 lines with below 3 for word-parallel init
    int top = 1 + N/BITSPERWORD;
    for (i = 0; i < top; i++)
        a[i] = 0;
    */
    while (scanf("%d", &i) != EOF)
        set(i);
    for (i = 0; i < N; i++)
        if (test(i))
            printf("%d\n", i);
    return 0;
}
I understand what the functions clr, set and test are doing and explain them below (please correct me if I am wrong here):
clr clears the ith bit
set sets the ith bit
test returns the value at the ith bit
Now, I don't understand how the functions do what they do. I am unable to figure out all the bit manipulation happening in those three functions.
The first 3 constants are inter-related. BITSPERWORD is 32. This you'd want to set based on your compiler+architecture. SHIFT is 5, because 2^5 = 32. Finally, MASK is 0x1F which is 11111 in binary (ie: the bottom 5 bits are all set). Equivalently, MASK = BITSPERWORD - 1.
The bitset is conceptually just an array of bits. This implementation actually uses an array of ints, and assumes 32 bits per int. So whenever we want to set, clear or test (read) a bit we need to figure out two things:
which int (of the array) is it in
which of that int's bits are we talking about
Because we're assuming 32 bits per int, we can just divide by 32 (and truncate) to get the array index we want. Dividing by 32 (BITSPERWORD) is the same as shifting to the right by 5 (SHIFT). So that's what the a[i>>SHIFT] bit is about. You could also write this as a[i/BITSPERWORD] (and in fact, you'd probably get the same or very similar code assuming your compiler has a reasonable optimizer).
Now that we know which element of a we want, we need to figure out which bit. Really, we want the remainder. We could do this with i%BITSPERWORD, but it turns out that i&MASK is equivalent. This is because BITSPERWORD is a power of 2 (2^5 in this case) and MASK is the bottom 5 bits all set.
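A tiny sketch of those two equivalences, using the same constants as the book's code:

#include <stdio.h>

#define BITSPERWORD 32
#define SHIFT 5
#define MASK 0x1F

int main(void)
{
    int i = 173;  /* arbitrary bit index */

    printf("word: %d %d\n", i / BITSPERWORD, i >> SHIFT); /* both print 5  */
    printf("bit:  %d %d\n", i % BITSPERWORD, i & MASK);   /* both print 13 */
    return 0;
}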
Basically it is an optimized bucket sort:
reserve a bit array of n bits.
clear the bit array (first for in main).
read the items one by one (they must all be distinct).
set the i'th bit in the bit array if the read number is i.
iterate the bit array.
if the bit is set then print the position.
Or in other words (for numbers less than 10, sorting the 3 numbers 4, 6, 2):
start with an empty 10 bit array (aka one integer usually)
0000000000
read 4 and set the bit in the array..
0000100000
read 6 and set the bit in the array
0000101000
read 2 and set the bit in the array
0010101000
iterate the array and print every position in which the bits are set to one.
2, 4, 6
sorted.
Starting with set():
A right shift of 5 is the same as dividing by 32. It does that to find which int the bit is in.
MASK is 0x1f, or 31. ANDing the index with it gives the bit position within that int; it's the same as the remainder of dividing the index by 32.
Shifting 1 left by the bit index ("1<<(i & MASK)") results in an integer which has just 1 bit in the given position set.
ORing sets the bit.
The line "int sh = i>>SHIFT;" is a wasted line, because they didn't use sh again beneath it, and instead just repeated "i>>SHIFT"
clr() is basically the same as set, except instead of ORing with 1<<(i & MASK) to set the bit, it ANDs with the inverse to clear the bit. test() ANDs with 1<<(i & MASK) to test the bit.
The bit sort will also remove duplicates from the list, because it will only count up to 1 per integer. A sort that uses integers instead of bits to count more than 1 of each is called a counting sort.
The bit magic is used as a special addressing scheme that works well with row sizes that are powers of two.
Try to understand this (note: I'd rather say bits-per-row than bits-per-word, since we're talking about a bit matrix here):
// supposing an int of 1 bit would exist...
int1 bits[BITSPERROW * N]; // an array of N x BITSPERROW elements

// set bit at x,y:
int linear_address = y*BITSPERROW + x;
bits[linear_address] = 1; // or 0

// 0 1 2 3 4 5 6 7 8 9 10 11 ... 31
// . . . . . . . . . . . .  .
// . . . . X . . . . . . .  .        -> x = 4, y = 1 => i = (1*32 + 4)
The statement linear_address = y*BITSPERROW + x also means that x = linear_address % BITSPERROW and y = linear_address / BITSPERROW.
When you optimize this in memory by using 1 word of 32 bits per row, you get the fact that a bit at column x can be set using
int bitrow = 0;
bitrow |= 1 << (x);
Now when we iterate over the bits, we have the linear address, but need to find the corresponding word.
int column = linear_address % BITSPERROW;
int bit_mask = 1 << column; // meaning for the xth column,
// you take 1 and shift that bit x times
int row = linear_address / BITSPERROW;
So to set the i'th bit, you can do this:
bits[ i / BITSPERROW ] |= 1 << ( i % BITSPERROW );
An extra gotcha is that the modulo operator can be replaced by a bitwise AND, and the / operator can be replaced by a shift, when the second operand is a power of two.
a % BITSPERROW == a & ( BITSPERROW - 1 ) == a & MASK
a / BITSPERROW == a >> ( log2(BITSPERROW) ) == a >> SHIFT
This ultimately boils down to the very dense notation, hard to understand if you're not used to bit twiddling:
a[ i >> SHIFT ] |= ( 1 << (i&MASK) );
But I don't see the algorithm working for e.g. 40 bits per word.
Quoting the excerpts from Bentleys' original article in DDJ, this is what the code does at a high level:
/* phase 1: initialize set to empty */
for (i = 0; i < n; i++)
    bit[i] = 0

/* phase 2: insert present elements */
for each i in the input file
    bit[i] = 1

/* phase 3: write sorted output */
for (i = 0; i < n; i++)
    if bit[i] == 1
        write i on the output file
A few doubts:
1. Why is there a need for 32 bits?
2. Can we do this in Java by creating a HashMap with keys from 0000000 to 9999999
and values 0 or 1 based on the presence/absence of the number? What are the implications
of such a program?