Next set of n elements in set using bitwise operators - c

Reading the book "C - A reference manual (Fifth Edition)", I stumbled upon this piece of code (each integer in the SET is represented by a bit position):
typedef unsigned int SET;
#define emptyset ((SET) 0)
#define first_set_of_n_elements(n) (SET)((1<<(n))-1)
/* next_set_of_n_elements(s): Given a set of n elements,
produce a new set of n elements. If you start with the
result of first_set_of_n_elements(k) and then at each
step apply next_set_of_n_elements to the previous result,
and keep going until a set is obtained containing m as a
member, you will have obtained a set representing all
possible ways of choosing k things from m things. */
SET next_set_of_n_elements(SET x) {
/* This code exploits many unusual properties of unsigned arithmetic. As an illustration:
if x == 001011001111000, then
smallest == 000000000001000
ripple == 001011010000000
new_smallest == 000000010000000
ones == 000000000000111
returned value == 001011010000111
The overall idea is that you find the rightmost
contiguous group of 1-bits. Of that group, you slide the
leftmost 1-bit to the left one place, and slide all the
others back to the extreme right.
(This code was adapted from HAKMEM.) */
SET smallest, ripple, new_smallest, ones;
if (x == emptyset) return x;
smallest = (x & -x);
ripple = x + smallest;
new_smallest = (ripple & -ripple);
ones = ((new_smallest / smallest) >> 1) -1;
return (ripple | ones);
}
I'm lost at the calculation of 'ones' and its significance. Although I can follow the arithmetic, I cannot understand why this works, or how.
On a related note, the authors of the book claim that the calculation for first_set_of_n_elements "exploits the properties of unsigned subtractions". How is (2^n)-1 an "exploit"?

The smallest computation gets the first non-zero bit of your int. How does it work?
Let n be the bit length of your int. The negative of a number x (bits b_{n-1}...b_0) is computed in such a way that when you add x to -x, you get 2^n. Since your integer is only n bits long, the carry out of the top bit is discarded and you obtain 0.
Now, let b'_{n-1}...b'_0 be the binary representation of -x.
Since x + (-x) must equal 2^n, when you meet the first 1-bit of x (say at position i), the corresponding bit of -x will also be set to 1, and when adding the numbers you get a carry.
To obtain 2^n, this carry must propagate through all the bits until the end of the bit sequence of your int. Thus, the bit of -x at each position j with i < j < n satisfies:
b_j + b'_j + 1 = 10 (binary)
From this we can infer that:
b_j = NOT(b'_j), and thus b_j & b'_j = 0
On the other hand, the bits b'_j of -x located at positions j such that 0 <= j < i satisfy:
b_j + b'_j = 0 or 10 (binary)
Since all the corresponding b_j are set to 0, the only option is b'_j = 0.
Thus, the only bit that is 1 in both x and -x is the one at position i.
In your example :
x = 001011001111000
-x = 110100110001000
Thus,
0.0.1.0.1.1.0.0.1.1.1.1.0.0.0
1.1.0.1.0.0.1.1.0.0.0.1.0.0.0 AND
\=====================/
0.0.0.0.0.0.0.0.0.0.0.1.0.0.0
The ripple then turns every contiguous "1" after position i (bit i included) to 0, and the first following 0 bit to 1 (due to the carry propagation). That's why your ripple is:
r(x) = 0.0.1.0.1.1.0.1.0.0.0.0.0.0.0
ones is computed as the division of smallest(r(x)) by smallest(x). Since smallest(x) is an integer with only a single bit set to 1, at position i, you have:
(smallest(r(x)) / smallest(x)) >> 1 = smallest(r(x)) >> (i+1)
The resulting integer also has only one bit set to 1, at some index p; thus, subtracting 1 from this value gives you an integer ones such that:
For each j such that 0 <= j < p,
ones_j = 1
For each j such that p <= j < n,
ones_j = 0
Finally, the return value is the integer such that:
The rightmost run of 1-bits of the argument is cleared.
The 0-bit just above that run is set to 1.
The lowest bits are set to 1, one fewer of them than the length of the cleared run.
The remaining bits are left unchanged.
Then, I can't explain the remaining part since I did not understand the sentence :
a set is obtained containing m as a member
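If it helps, here is a small self-contained test harness I wrote (it is not from the book) that prints each intermediate value of the book's function for the example above, so you can watch smallest, ripple, new_smallest and ones being formed:
#include <stdio.h>

typedef unsigned int SET;

static void print_bits(const char *label, SET v) {
    printf("%-13s", label);
    for (int i = 14; i >= 0; i--)            /* 15 bits, as in the book's example */
        putchar((v >> i) & 1u ? '1' : '0');
    putchar('\n');
}

int main(void) {
    SET x = 0x1678;                           /* 001011001111000 */
    SET smallest     = x & -x;
    SET ripple       = x + smallest;
    SET new_smallest = ripple & -ripple;
    SET ones         = ((new_smallest / smallest) >> 1) - 1;

    print_bits("x", x);
    print_bits("smallest", smallest);
    print_bits("ripple", ripple);
    print_bits("new_smallest", new_smallest);
    print_bits("ones", ones);
    print_bits("result", ripple | ones);
    return 0;
}
Running it reproduces exactly the values listed in the book's comment.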

First of all, this code is rather obscure and doesn't look like anything worth spending time pondering upon; it will only yield useless knowledge.
The "exploit" is that the code relies on implementation-defined behavior of various arithmetic rules.
001 0110 0111 1000 is a 15-bit number. Why the author uses 15-bit numbers instead of 16, I don't know; it seems like a typo remaining even after five editions.
If we put a minus sign in front of that binary number on a two's complement system (explanation of two's complement here), it will turn into 1110 1001 1000 1000. Because the compiler will preserve the decimal presentation of the number (5752) and translate it to its negative equivalent (-5752). (However, the actual data type will remain unsigned int, so if you tried to print it you would get the garbage number 59784.)
0001 0110 0111 1000
AND 1110 1001 1000 1000
= 0000 0000 0000 1000
The C standard does not enforce two's complement, so the code in that book is not portable.

It's a little misleading, because it actually exploits 2's complement. First, the calculation of smallest:
In 2's complement representation, for the x in the comments -x is 110100110001000. Focus on the least significant bit of x that is a one; since two's complement is essentially 1's complement plus 1, that bit will be set in both x and -x and no other bit position after it (on the way to the LSB) will have that property. That's how you get the smallest bit set.
ripple is pretty straightforward and is named as such because it propagates ones to the MSB, and smallest_ripple follows from the description above.
ones is the number we should add to the ripple in order to continue choosing n elements, picture it below:
ones: 11 next set: 100011
ones: 1 next set: 100101
ones: 0 next set: 100110
Running it will indeed show you all the ways of choosing n bits out of CHAR_BIT * sizeof(int) - 1 items (CHAR_BIT * sizeof(int) bits are needed because -x of an n-bit number needs at worst n+1 bits to be represented).
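To see that concretely, here is a quick check I put together (not part of the original answer): starting from first_set_of_n_elements(k) and applying the function until a set containing bit m appears, it counts the sets produced and compares that count with the binomial coefficient C(m, k). The helper names below are my own.
#include <stdio.h>

typedef unsigned int SET;

static SET next_set(SET x) {                 /* the book's next_set_of_n_elements */
    if (x == 0) return x;
    SET smallest = x & -x;
    SET ripple = x + smallest;
    SET new_smallest = ripple & -ripple;
    SET ones = ((new_smallest / smallest) >> 1) - 1;
    return ripple | ones;
}

static unsigned long choose(unsigned m, unsigned k) {  /* C(m, k), small values only */
    unsigned long r = 1;
    for (unsigned i = 1; i <= k; i++)
        r = r * (m - k + i) / i;
    return r;
}

int main(void) {
    unsigned k = 3, m = 8;                    /* choose 3 things from 8 things */
    unsigned long count = 0;
    for (SET s = (SET)((1u << k) - 1); !(s & (1u << m)); s = next_set(s))
        count++;                              /* count sets until bit m shows up */
    printf("sets produced: %lu, C(%u,%u) = %lu\n", count, m, k, choose(m, k));
    return 0;
}
Both numbers come out as 56, which is exactly "all possible ways of choosing k things from m things" in the book's phrasing.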

First, here is an example of the output we can get with n=4. The idea is that we start with n LSBs set to '1', and then we iterate through all the combinations of numbers with the same count of bits set to '1':
1111
10111
11011
11101
11110
100111
101011
101101
101110 (*)
110011
110101
110110
111001
111010
111100
1000111
1001011
1001101
It works the following way. I will use the starred number above as an example:
101110
We get the lowest bit set to '1' (x & -x), as clearly explained in other answers.
101110
& 010010
= 000010
We "move" the LSB one position to the left by adding it to the original number. If the bit immediately on the left is '0', this is easy to understand, as the subsequent operations will do nothing. If this left bit is '1', we get a carry which will propagate to the left. The problem with this last case is that the numbers of '1' will change, so we have to set back some '1' to keep their count constant.
101110
+ 000010
= 110000
To do so, we retrieve the lowest set bit of the new result, and by dividing it by the previous lowest set bit, we get a power of two telling us how far the carry has propagated. The '>> 1' and '- 1' then turn this into the right number of '1' bits at the lowest positions:
010000
/ 000010
= 001000
>> 1
- 1
= 000011
We finally OR the result of the addition and the ones.
110011

I would say that the "exploit" is the unsigned change of sign in the operation (x & -x).

Related

Value of x when s = x >> 31 and x = (s & ~x) | (~s & x);

If x were to equal 12 in a 32-bit scenario, x would be a run of 0's down to the LSBs 0000 1100. If the above were to run, I believe I would get 0000 1100 back. Am I wrong?
Along with that, what if I were to use x = -1? Wouldn't s = 1, and then would (s & ~x) look like (0001 & 0000) and (~s & x) like (1110 & 1111)? Thanks.
I thought that x = -1 would mean x >> 31 would be like 0001 (output 1), but I don't know if the above is correct.
The typical implementation of a right shift of a signed integer is an arithmetic shift. Different implementations are unfortunately still allowed, though rare, and they're not relevant to understanding this code (it ignores such possibilities anyway). Two's complement integers are now mandatory (in C23: "The sign representation defined in this document is called two’s complement. Previous revisions of this document additionally allowed other sign representations") so I'm not going to do the usual consideration of hypothetical integer representations that haven't been seen since the stone age.
By assumption the number of bits in an int is 32, so shifting an int right by 31 makes every bit of the result a copy of the sign bit. So if x was negative, s would be -1.
x = (s & ~x) | (~s & x) is a verbose way to spell out x ^= s. XORing x by 0 leaves it the same as before, XORing it by -1 inverts all the bits. Taking into account that s = x < 0 ? -1 : 0, effectively the computation does this:
if (x < 0)
x = ~x; // equivalent to: x = -x - 1;
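A tiny demonstration of that equivalence (my own example values; it assumes a 32-bit int and the usual arithmetic right shift, just as the question does):
#include <stdio.h>

int main(void) {
    int values[] = { 12, -1, 0, -5 };
    for (int i = 0; i < 4; i++) {
        int x = values[i];
        int s = x >> 31;                     /* 0 for non-negative, -1 for negative */
        int muxed = (s & ~x) | (~s & x);     /* the expression from the question    */
        int xored = x ^ s;                   /* the claimed equivalent              */
        printf("x = %3d  s = %2d  (s&~x)|(~s&x) = %3d  x^s = %3d\n",
               x, s, muxed, xored);
    }
    return 0;
}
For 12 both expressions give 12 back; for -1 they give 0 and for -5 they give 4, i.e. -x - 1 whenever x is negative.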

SWAR byte counting methods from 'Bit Twiddling Hacks' - why do they work?

Bit Twiddling Hacks contains the following macros, which count the number of bytes in a word x that are less than, or greater than, n:
#define countless(x,n) \
(((~0UL/255*(127+(n))-((x)&~0UL/255*127))&~(x)&~0UL/255*128)/128%255)
#define countmore(x,n) \
(((((x)&~0UL/255*127)+~0UL/255*(127-(n))|(x))&~0UL/255*128)/128%255)
However, it doesn't explain why they work. What's the logic behind these macros?
Let's try for intuition on countmore.
First, ~0UL/255*(127-n) is a clever way of copying the value 127-n to all bytes in the word in parallel. Why does it work? ~0 is 255 in all bytes. Consequently, ~0/255 is 1 in all bytes. Multiplying by (127-n) does the "copying" mentioned at the outset.
The term ~0UL/255*127 is just a special case of the above where n is zero. It copies 127 into all bytes. That's 0x7f7f7f7f if words are 4 bytes. "Anding" with x zeros out the high order bit in each byte.
That's the first term, ((x)&~0UL/255*127). The result is the same as x except the high bit in each byte is zeroed.
The second term ~0UL/255*(127-(n)) is as above: 127-n copied to each byte.
For any given byte x[i], adding the two terms gives us 127-n+x[i] if x[i]<=127. This quantity will have the high order bit set whenever x[i]>n. It's easiest to see this as adding two 7-bit unsigned numbers. The result "overflows" into the 8th bit because the result is 128 or more.
So it looks like the algorithm is going to use the 8th bit of each byte as a boolean indicating x[i]>n.
So what about the other case, x[i]>127? Here we know the byte is more than n because the algorithm stipulates n<=127. The 8th bit ought to be always 1. Happily, the sum's 8th bit doesn't matter because the next step "or"s the result with x. Since x[i] has the 8th bit set to 1 if and only if it's 128 or larger, this operation "forces" the 8th bit to 1 just when the sum might provide a bad value.
To summarize so far, the "or" result has the 8th bit set to 1 in its i'th byte if and only if x[i]>n. Nice.
The next operation &~0UL/255*128 sets everything to zero except all those 8th bits of interest. It's "anding" with 0x80808080...
Now the task is to find the number of these bits set to 1. For this, countmore uses some basic number theory. First it shifts right 7 bits so the bits of interest are b0, b8, b16... The value of this word is
b0 + b8*2^8 + b16*2^16 + ...
A beautiful fact is that 1 == 2^8 == 2^16 == ... mod 255. In other words, each 1 bit is 1 mod 255. It follows that finding mod 255 of the shifted result is the same as summing b0+b8+b16+...
Yikes. We're done.
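If you want to convince yourself the macro matches a naive per-byte loop, here is a quick brute-force check I wrote (it is not from Bit Twiddling Hacks itself, and it assumes an 8-byte unsigned long as on typical 64-bit platforms):
#include <stdio.h>

#define countmore(x,n) \
(((((x)&~0UL/255*127)+~0UL/255*(127-(n))|(x))&~0UL/255*128)/128%255)

static unsigned naive_countmore(unsigned long x, unsigned n) {
    unsigned count = 0;
    for (size_t i = 0; i < sizeof x; i++)           /* compare each byte directly */
        if (((x >> (8 * i)) & 0xFF) > n)
            count++;
    return count;
}

int main(void) {
    unsigned long x = 0x1234567811335577UL;
    unsigned n = 0x50;                              /* count bytes greater than 0x50 */
    printf("macro: %lu  naive: %u\n", countmore(x, n), naive_countmore(x, n));
    return 0;
}
Both print 4 here; the macro requires n <= 127, as the explanation above notes.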
Let's analyse countless macro. We can simplify this macro as following code:
#define A(n) (0x0101010101010101UL * (0x7F+n))
#define B(x) (x & 0x7F7F7F7F7F7F7F7FUL)
#define C(x,n) (A(n) - B(x))
#define countless(x,n) (( C(x,n) & ~x & 0x8080808080808080UL) / 0x80 % 0xFF )
A(n) will be:
A(0) = 0x7F7F7F7F7F7F7F7F
A(1) = 0x8080808080808080
A(2) = 0x8181818181818181
A(3) = 0x8282828282828282
....
And for B(x), each byte of x will mask with 0x7F.
If we suppose x = 0xb0b1b2b3b4b5b6b7 and n = 0, then C(x,n) will equal 0x(7F-b0)(7F-b1)(7F-b2)..., i.e. each byte is 0x7F minus the corresponding byte of x (no borrows can cross byte boundaries, because each byte of B(x) is at most 0x7F).
For example, suppose x = 0x1234567811335577 and n = 0x50. So:
A(0x50) = 0xCFCFCFCFCFCFCFCF
B(0x1234567811335577) = 0x1234567811335577
C(0x1234567811335577, 0x50) = 0xBD9B7957BE9C7A58
~(0x1234567811335577) = 0xEDCBA987EECCAA88
0xEDCBA987EECCAA88 & 0x8080808080808080UL = 0x8080808080808080
C(0x1234567811335577, 0x50) & 0x8080808080808080 = 0x8080000080800000
(0x8080000080800000 / 0x80) % 0xFF = 4 // counts the bytes whose high bit is set, i.e. the bytes of x that are less than 0x50
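Here is the same worked example as a compilable program, with a naive per-byte comparison next to it as a sanity check (my own test code; like the walkthrough above it assumes a 64-bit unsigned long):
#include <stdio.h>

#define countless(x,n) \
(((~0UL/255*(127+(n))-((x)&~0UL/255*127))&~(x)&~0UL/255*128)/128%255)

static unsigned naive_countless(unsigned long x, unsigned n) {
    unsigned count = 0;
    for (size_t i = 0; i < sizeof x; i++)           /* compare each byte directly */
        if (((x >> (8 * i)) & 0xFF) < n)
            count++;
    return count;
}

int main(void) {
    unsigned long x = 0x1234567811335577UL;
    unsigned n = 0x50;                              /* count bytes less than 0x50 */
    printf("macro: %lu  naive: %u\n", countless(x, n), naive_countless(x, n));
    return 0;
}
Both print 4, matching the hand calculation above (0x12, 0x34, 0x11 and 0x33 are the bytes below 0x50).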

Finding if a value falls within a range, using bitwise operators in C

So I am working on this method, but am restricted to using ONLY these operators:
<<, >>, !, ~, &, ^, |, +
I need to find if a given int parameter can be represented using 2's complement arithmetic in a given amount of bits.
Here is what I have so far:
int validRange(int val, int bits){
int minInRange = ~(1<<(bits + ~0 ))+1; //the smallest 2's comp value possible with this many bits
int maxInRange = (1<<(bits+~0))+~0; //largest 2's comp value possible
..........
}
This is what I have so far, and all I need to do now is figure out how to tell if minInRange <= val <= maxInRange. I wish I could use the greater-than or less-than operators, but we are not allowed. What is the bitwise way to check this?
Thanks for any help!
Two's complement negative numbers always have a '1' in their high bit.
You can convert from negative to positive (and vice versa) by converting from FF -> 00 -> 01. That is, invert the bits, add 1. (01 -> FE -> FF also works: invert the bits, add 1)
A positive number can be represented if the highest set bit in the number is within your range. (nbits - 1: 7 bits for an 8 bit signed char, etc.)
I'm not sure if your constraints allow you to use arrays. They would speed up some things but can be replaced with loops or if statements.
Anyway, if 1 << (NUM_INT_BITS-1) is set on your input, then it's negative.
Invert, add one.
Now, consider 0. Zero is a constant, and it's always the same no matter how many bits. But if you invert 0, you get "all the bits" which changes by architecture. So, ALL_BITS = ~0.
If you want to know if a positive number can be represented in 2 bits, check to see if any bits greater than or equal to bit 2 are set. Example:
two_bits = 0b00000011
any_other_bits = ~two_bits # Result: 0b11...11100
if positive_number & any_other_bits
this number is too fat for these bits!
But how do you know what ~two_bits should be? Well, it's "all set bits except the bottom however-many". And you can construct that by starting with "all set bits" and shifting them upwards (aka, "left") however-many places:
any_other_bits = ~0 << 2 # where "2" is the number of bits to check
All together now:
if (val & ((unsigned)INT_MAX + 1))  /* sign bit set, so val is negative */
    val = ~val;                     /* ~val fits in (bits-1) value bits exactly when val does */
mask = ~0 << (bits + ~0);           /* bits + ~0 is bits - 1, without using '-' */
too_wide = val & mask;
return !too_wide;
To test if a number can be represented as an N-bit 2's complement number, simply test that either
the number bitwise-ANDed with the complement of a word with the low (N-1) bits set is equal to zero,
OR the high InputBitWidth-(N-1) bits of the number are all 1s.
mask=(1<<(bits-1))-1; return ( !(val&~mask) | !((val&~mask)^~mask) );

How to get position of right most set bit in C

int a = 12;
for eg: binary of 12 is 1100 so answer should be 3 as 3rd bit from right is set.
I want the position of the rightmost set bit of a. Can anyone tell me how I can do so?
NOTE: I want the position only; here I don't want to set or reset the bit. So it is not a duplicate of any question on Stack Overflow.
This answer, Unset the rightmost set bit, tells both how to get and how to unset the rightmost set bit for an unsigned integer or a signed integer represented as two's complement.
get rightmost set bit,
x & -x
// or
x & (~x + 1)
unset rightmost set bit,
x &= x - 1
// or
x -= x & -x // rhs is rightmost set bit
why it works
x:            [leading bits] 1 [all 0]
~x:           [inverted leading bits] 0 [all 1]
~x + 1 or -x: [inverted leading bits] 1 [all 0]
x & -x:       [all 0] 1 [all 0]
e.g., let x = 112, and choose 8 bits for simplicity, though the idea is the same for any size of integer.
// example for get rightmost set bit
x: 01110000
~x: 10001111
-x or ~x + 1: 10010000
x & -x: 00010000
// example for unset rightmost set bit
x: 01110000
x-1: 01101111
x & (x-1): 01100000
Finding the (0-based) index of the least significant set bit is equivalent to counting how many trailing zeros a given integer has. Depending on your compiler there are builtin functions for this, for example gcc and clang support __builtin_ctz.
For MSVC you would need to implement your own version, this answer to a different question shows a solution making use of MSVC intrinsics.
Given that you are looking for the 1-based index, you simply need to add 1 to ctz's result in order to achieve what you want.
int a = 12;
int least_bit = __builtin_ctz(a) + 1; // least_bit = 3
Note that this operation is undefined if a == 0. Furthermore there exist __builtin_ctzl and __builtin_ctzll which you should use if you are working with long and long long instead of int.
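If you are on a compiler without such builtins, a simple (if slower) portable fallback is easy to write. This sketch is mine, not a standard-library API:
/* Portable fallback: 0-based index of the lowest set bit; -1 if v == 0. */
static int ctz_portable(unsigned int v) {
    if (v == 0) return -1;
    int pos = 0;
    while ((v & 1u) == 0) {   /* shift until the lowest bit is 1 */
        v >>= 1;
        pos++;
    }
    return pos;
}
For the question's example, ctz_portable(12) + 1 gives 3.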
One can use the properties of 2's complement here.
The fastest way to find the 2's complement of a number is to take the rightmost set bit and flip everything to the left of it.
For example: consider a 4 bit system
/* Number in binary */
4 = 0100
/* 2s complement of 4 */
complement = 1100
/* which nothing but */
complement == -4
/* Result */
4 & (-4) = 0100
Notice that there is only one set bit, and it is at the position of the rightmost set bit of 4.
Similarly we can generalise this for n.
n&(-n) will contain only one set bit which is actually at the rightmost set bit position of n.
Since there is only one set bit in n&(-n), it is a power of 2.
So finally we can get the bit position by:
log2(n&(-n))+1
The rightmost set bit of n can be obtained using the formula:
n & ~(n-1)
This works because when you calculate (n-1), all the trailing zeros up to the rightmost set bit become 1, and the rightmost set bit itself becomes 0.
Then you take a NOT of it, which leaves you with the following:
x = ~(higher bits of the original number), followed by the rightmost 1-bit, followed by trailing zeros
Now, if you do (n & x), you get what you need, as the only bit that is 1 in both n and x is the rightmost bit.
Phewwwww .. :sweat_smile:
http://www.catonmat.net/blog/low-level-bit-hacks-you-absolutely-must-know/
helped me understand this.
There is a neat trick in Knuth 7.1.3 where you multiply by a "magic" number (found by a brute-force search) that maps the first few bits of the number to a unique value for each position of the rightmost bit, and then you can use a small lookup table. Here is an implementation of that trick for 32-bit values, adapted from the nlopt library (MIT/expat licensed).
/* Return position (0, 1, ...) of rightmost (least-significant) one bit in n.
*
* This code uses a 32-bit version of algorithm to find the rightmost
* one bit in Knuth, _The Art of Computer Programming_, volume 4A
* (draft fascicle), section 7.1.3, "Bitwise tricks and
* techniques."
*
* Assumes n has a 1 bit, i.e. n != 0
*
*/
static unsigned rightone32(uint32_t n)
{
const uint32_t a = 0x05f66a47; /* magic number, found by brute force */
static const unsigned decode[32] = { 0, 1, 2, 26, 23, 3, 15, 27, 24, 21, 19, 4, 12, 16, 28, 6, 31, 25, 22, 14, 20, 18, 11, 5, 30, 13, 17, 10, 29, 9, 8, 7 };
n = a * (n & (-n));
return decode[n >> 27];
}
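A quick usage example (my addition) to show the convention it returns; it assumes the rightone32 function above is pasted into the same file:
#include <stdint.h>
#include <stdio.h>

/* rightone32() as defined above */

int main(void) {
    printf("%u\n", rightone32(12u));   /* 12 = 1100  -> prints 2 (0-based index) */
    printf("%u\n", rightone32(16u));   /* 16 = 10000 -> prints 4                 */
    return 0;
}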
Try this
int set_bit = n ^ (n&(n-1));
Explanation:
As noted in this answer, n&(n-1) unsets the last set bit.
So, if we unset the last set bit and XOR that with the number, then by the nature of the XOR operation the last set bit becomes 1 and all the other bits come out 0.
1- Subtract 1 from the number: (a-1)
2- Take its complement: ~(a-1)
3- AND it with the original number:
int last_set_bit = a & ~(a-1);
The reason behind the subtraction is that after taking the complement, the bit at the rightmost set position is 1 and everything below it is 0, so the AND leaves only the last set bit.
Check if a & 1 is 0. If so, shift right by one until it's not zero. The number of times you shift is how many bits from the right is the rightmost bit that is set.
You can isolate the rightmost set bit by taking the bitwise XOR of n and (n & (n-1)); note that this gives the value of that bit (a power of two), not its index:
int bit = n ^ (n & (n-1));
I inherited this one, with a note that it came from HAKMEM (try it out here). It works on both signed and unsigned integers, logical or arithmetic right shift. It's also pretty efficient.
#include <stdio.h>
int rightmost1(int n) {
int pos, temp;
for (pos = 0, temp = ~n & (n - 1); temp > 0; temp >>= 1, ++pos);
return pos;
}
int main()
{
int pos = rightmost1(16);
printf("%d", pos);
}
You must check all 32 bits starting at index 0 and working your way to the left. If you can bitwise-and your a with a one bit at that position and get a non-zero value back, it means the bit is set.
#include <limits.h>
int last_set_pos(int a) {
for (int i = 0; i < sizeof a * CHAR_BIT; ++i) {
if (a & (0x1 << i)) return i;
}
return -1; // a == 0
}
On typical systems int will be 32 bits, but doing sizeof a * CHAR_BIT will get you the right number of bits in a even if it's a different size
According to dbush's solution, try this:
int rightMostSet(int a){
    if (!a) return -1; //means there isn't any 1-bit
    int i=0;
    while((a & 1) == 0){   /* parentheses needed: == binds tighter than & */
        i++;
        a >>= 1;           /* shift a in place; a>>1 alone discards the result */
    }
    return i;
}
return log2(((num-1)^num)+1);
explanation with example: 12 = 1100
num-1 = 11 = 1011
num ^ (num-1) = 12 ^ 11 = 7 (0111)
(num ^ (num-1)) + 1 = 8 (1000)
log2(1000) = 3 (answer).
x & ~(x-1) isolates the lowest bit that is one.
#include <stdio.h>
#include <math.h>
int main(int argc, char **argv)
{
int setbit;
unsigned long d;
unsigned long n1;
unsigned long n = 0xFFF7;
double nlog2 = log(2);
while(n)
{
n1 = (unsigned long)n & (unsigned long)(n -1);
d = n - n1;
n = n1;
setbit = (int)(log(d) / nlog2 + 0.5); /* round, so floating-point error can't truncate e.g. 2.9999 to 2 */
printf("Set bit: %d\n", setbit);
}
return 0;
}
And the result is as below.
Set bit: 0
Set bit: 1
Set bit: 2
Set bit: 4
Set bit: 5
Set bit: 6
Set bit: 7
Set bit: 8
Set bit: 9
Set bit: 10
Set bit: 11
Set bit: 12
Set bit: 13
Set bit: 14
Set bit: 15
Let x be your integer input.
Bitwise AND by 1.
If it's even, i.e. the lowest bit is 0, then 0 & 1 returns you 0.
If it's odd, i.e. the lowest bit is 1, then 1 & 1 returns you 1.
if ((x & 1) == 0)
{
std::cout << "The rightmost bit is 0 ie even \n";
}
else
{
std::cout<< "The rightmost bit is 1 ie odd \n";
}
Alright, so number systems are just logarithms and exponents at work, so I'll dive into an approach that really makes sense to me.
I would prefer you read this first, because I write there about how I interpret logarithms.
When you perform the x & -x operation, it gives you the value with only the rightmost set bit kept (for example, it can be 0001000 or 0000010). In my way of interpreting logarithms, this value is the final result of growing at a rate of 2. We are interested in the number of binary digits in this value: subtract 1 from that count and you have the (0-based) position of the set bit. The digit count is exactly the number of doublings plus 1 (in accordance with my logic, or just the formula mentioned in the previous link). But since we want the bit position rather than the digit count, and since x & -x is always an exact power of 2 (never something like 65), we don't have to worry about fractional values: just take the base-2 logarithm of x & -x and you get the bit position. I did see an answer before that mentioned this, but diving down into why it really works was something I felt like writing down.
P.S: You could also count the number of digits and then subtract 1 from it to get the bit-count.

Homework - C bit puzzle - Perform % using C bit operations (no looping, conditionals, function calls, etc)

I'm completely stuck on how to do this homework problem and looking for a hint or two to keep me going. I'm limited to 20 operations (= doesn't count in this 20).
I'm supposed to fill in a function that looks like this:
/* Supposed to do x%(2^n).
For example: for x = 15 and n = 2, the result would be 3.
Additionally, if positive overflow occurs, the result should be the
maximum positive number, and if negative overflow occurs, the result
should be the most negative number.
*/
int remainder_power_of_2(int x, int n){
int twoToN = 1 << n;
/* Magic...? How can I do this without looping? We are assuming it is a
32 bit machine, and we can't use constants bigger than 8 bits
(0xFF is valid for example).
However, I can make a 32 bit number by ORing together a bunch of stuff.
Valid operations are: << >> + ~ ! | & ^
*/
return theAnswer;
}
I was thinking maybe I could shift the twoToN over left... until I somehow check (without if/else) that it is bigger than x, and then shift back to the right once... then xor it with x... and repeat? But I only have 20 operations!
Hint: In the decimal system, to take a number modulo a power of 10 you just keep the last few digits and zero out the others. E.g. 12345 % 100 = 00045 = 45. Well, in a computer, numbers are binary, so you have to zero out binary digits (bits). So look at the various bit manipulation operators (&, |, ^) to do so.
Since binary is base 2, remainders mod 2^N are exactly represented by the rightmost bits of a value. For example, consider the following 32 bit integer:
00000000001101001101000110010101
This has the two's complement value of 3461525. The remainder mod 2 is exactly the last bit (1). The remainder mod 4 (2^2) is exactly the last 2 bits (01). The remainder mod 8 (2^3) is exactly the last 3 bits (101). Generally, the remainder mod 2^N is exactly the last N bits.
In short, you need to be able to take your input number, and mask it somehow to get only the last few bits.
A tip: say you're using mod 64. The value of 64 in binary is:
00000000000000000000000001000000
The modulus you're interested in is the last 6 bits. I'll provide you a sequence of operations that can transform that number into a mask (but I'm not going to tell you what they are, you can figure them out yourself :D)
00000000000000000000000001000000 // starting value
11111111111111111111111110111111 // ???
11111111111111111111111111000000 // ???
00000000000000000000000000111111 // the mask you need
Each of those steps equates to exactly one operation that can be performed on an int type. Can you figure them out? Can you see how to simplify my steps? :D
Another hint:
00000000000000000000000001000000 // 64
11111111111111111111111111000000 // -64
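Spelling those hints out under the allowed operator set (<< >> + ~ ! | & ^, no '-'), this is one way to build the mask. It is my own sketch of the steps above and, like the hints, it only handles non-negative x:
#include <stdio.h>

int main(void) {
    int x = 12345;             /* example input                        */
    int n = 6;                 /* we want x % 64, i.e. x % (1 << n)    */

    int pow2 = 1 << n;         /* 00000000...01000000                  */
    int inv  = ~pow2;          /* 11111111...10111111                  */
    int neg  = inv + 1;        /* 11111111...11000000  (this is -64)   */
    int mask = ~neg;           /* 00000000...00111111                  */

    printf("%d\n", x & mask);  /* prints 57, which is 12345 % 64       */
    return 0;
}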
Since your divisor is always a power of two, it's easy.
uint32_t remainder(uint32_t number, uint32_t power)
{
power = 1 << power;
return (number & (power - 1));
}
Suppose you input number as 5 and divisor as 2
`00000000000000000000000000000101` number
AND
`00000000000000000000000000000001` divisor - 1
=
`00000000000000000000000000000001` remainder (what we expected)
Suppose you input number as 7 and divisor as 4
`00000000000000000000000000000111` number
AND
`00000000000000000000000000000011` divisor - 1
=
`00000000000000000000000000000011` remainder (what we expected)
This only works as long as the divisor is a power of two (except for divisor = 1), so use it carefully.
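A quick way to exercise the idea (my own test program; the helper is renamed mod_pow2 here to avoid clashing with the standard remainder() from <math.h>):
#include <stdio.h>
#include <stdint.h>

static uint32_t mod_pow2(uint32_t number, uint32_t power) {
    return number & ((1u << power) - 1);   /* number % (2^power) */
}

int main(void) {
    printf("%u\n", mod_pow2(15u, 2u));   /* 15 % 4 == 3, the question's example */
    printf("%u\n", mod_pow2(7u, 2u));    /* 7 % 4 == 3                          */
    printf("%u\n", mod_pow2(5u, 1u));    /* 5 % 2 == 1                          */
    return 0;
}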
