How does modulus of a smaller dividend and larger divisor work? - c

7 % 3 = 1 (remainder 1)
how does
3 % 7 (remainder ?)
work?

The remainder of 3/7 is 3: 7 goes into 3 zero times, leaving 3 over, so 3 % 7 = 3.

7 goes into 3? zero times with 3 left over.
quotient is zero. Remainder (modulus) is 3.

Conceptually, I think of it this way. By definition, your dividend must be equal to (quotient * divisor) + modulus
Or, solving for modulus: modulus = dividend - (quotient * divisor)
Whenever the dividend is less than the divisor, the quotient is zero, which means the modulus is simply equal to the dividend.
To illustrate with OP's values:
modulus of 3 and 7 = 3 - (0 * 7) = 3
To illustrate with other values:
1 % 3:
1 - (0 * 3) = 1
2 % 3:
2 - (0 * 3) = 2

The same way. The quotient is 0 (3 / 7 with fractional part discarded). The remainder then satisfies:
(a / b) * b + (a % b) = a
(3 / 7) * 7 + (3 % 7) = 3
0 * 7 + (3 % 7) = 3
(3 % 7) = 3
This is defined in C99 §6.5.5, Multiplicative operators.
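To see both pieces together, here is a minimal, self-contained C check of that identity (nothing beyond standard C is assumed):
#include <stdio.h>

int main(void)
{
    int a = 3, b = 7;
    printf("%d / %d = %d\n", a, b, a / b);                     /* 0 */
    printf("%d %% %d = %d\n", a, b, a % b);                    /* 3 */
    printf("identity holds: %d\n", (a / b) * b + a % b == a);  /* 1 */
    return 0;
}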

7 divided by 3 is 2 with a remainder of 1
3 divided by 7 is 0 with a remainder of 3

As long as both values are positive (and the dividend is smaller than the divisor), the remainder will be equal to the dividend. If one or both are negative, you get reminded that % is really the remainder operator, not the modulus operator. A modulus is always non-negative, but a remainder can be negative.

(7 * 0) + 3 = 3; therefore, the remainder is 3.

a % q = r means there is an x such that q * x + r = a.
So, 7 % 3 = 1 because 3 * 2 + 1 = 7,
and 3 % 7 = 3 because 7 * 0 + 3 = 3

It seems you forgot to mention the surprising case, when the dividend is smaller and negative:
-3 % 7
In C99 this yields -3, because integer division truncates toward zero; in languages with floored division (Python, for example) the result would be 4.
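A minimal C snippet illustrating that behaviour, plus the usual way to recover a non-negative residue when one is needed (the second line is just an illustration of that common idiom):
#include <stdio.h>

int main(void)
{
    printf("-3 %% 7 = %d\n", -3 % 7);                           /* -3: C99 truncates toward zero */
    printf("non-negative residue = %d\n", ((-3 % 7) + 7) % 7);  /* 4 */
    return 0;
}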

For my brain to understand this kind of question, I always convert it to a real-world object. For example, for 3 % 7, represent the 3 as a 3-inch-wide metal hole and the 7 as a 7-inch-wide metal screw. Can you fit the 7-inch screw into the 3-inch hole? Of course not, so the whole 3 is left over and the answer is 3. It doesn't matter if the screw is 1000 or a million inches wide: how many times does it fit into a 3-inch hole? Zero times, so the remainder is still 3.

The simplest and most effective rule to remember:
whenever the dividend is less than the divisor, the modulus is just the dividend.
Let's formulate this:
if x < y, then x % y = x
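As a quick sanity check of that rule, a minimal loop in C (only the standard % operator is involved):
#include <assert.h>

int main(void)
{
    for (int y = 1; y < 100; y++)
        for (int x = 0; x < y; x++)
            assert(x % y == x);  /* dividend smaller than divisor: remainder is the dividend */
    return 0;
}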


Fastest way to compute sum of first set bit over consecutive integers?

Edit: I wish SO let me accept 2 answers because neither is complete without the other. I suggest reading both!
I am trying to come up with a fast implementation of a function that, given an unsigned 32-bit integer x, returns the sum of 2^trailing_zeros(i) for i=1..x-1, where trailing_zeros is the count-trailing-zeros operation, defined as returning the number of 0 bits after the least significant 1 bit. This seems like the kind of problem that should lend itself to a clever bit manipulation implementation that takes the same number of instructions regardless of the input, but I haven't been able to derive it.
Mathematically, 2^trailing_zeros(i) is equivalent to the largest power of 2 that exactly divides i. So we are summing those largest factors for 1..x-1.
i                     |  1  2  3  4  5  6  7  8  9 10
----------------------+------------------------------
2^trailing_zeroes(i)  |  1  2  1  4  1  2  1  8  1  2
----------------------+------------------------------
Sum (desired value)   |  0  1  3  4  8  9 11 12 20 21
It is a little easier to see the structure of 2^trailing_zeroes(i) if we 'plot' the values -- horizontal position increasing from left to right corresponding to i and vertical position increasing from top to bottom corresponding to trailing_zeroes(i).
(only i = 1..16 shown, with '.' marking empty positions; the plot continues the same way further to the right)
 i :  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16
      1  .  1  .  1  .  1  .  1  .  1  .  1  .  1  .
      .  2  .  .  .  2  .  .  .  2  .  .  .  2  .  .
      .  .  .  4  .  .  .  .  .  .  .  4  .  .  .  .
      .  .  .  .  .  .  .  8  .  .  .  .  .  .  .  .
      .  .  .  .  .  .  .  .  .  .  .  .  .  .  . 16
Here it is easier to see the pattern that 2's are always 4 apart, 8's are always 16 apart, etc. However, each pattern starts at a different time -- 8's don't begin until i=8, 16 doesn't begin until i=16, etc. If you don't take into account that the patterns don't start right away you can come up with formulas that don't work -- for example you might think to determine the number of 8's going into the total you should just compute floor(x/16) but i=25 is far enough to the right to include both of the first two 8s.
The best solution I have come up with so far is:
Set n = floor(log2(x)). This can be computed quickly using the count leading zeros operation. This tells us the highest power of two that is going to be involved in the sum.
Set sum = 0
for i = 0..n
sum += floor((x - 2^i) / 2^(i+1))*2^i + 2^i
The way this works is that for each power it calculates the horizontal distance on the plot between x and the first appearance of that power (e.g. the distance between x and the first 8 is x-8), then divides by the distance between repeating instances of that power (e.g. floor((x-8)/16)), which gives how many times that power appeared after its first appearance; multiplying gives the sum for that power (e.g. floor((x-8)/16)*8). Then we add one instance of the given power because that calculation excludes the very first time that power appears.
In practice this implementation should be pretty fast, because the division/floor can be done with a right shift and the powers of two can be formed by shifting 1 to the left. However, it seems like it should still be possible to do better. This implementation loops more for larger inputs, up to 32 times (it's O(log2(n)); ideally we want O(1) without a gigantic lookup table using up all the CPU cache). I've been eyeing the BMI/BMI2 intrinsics but I don't see an obvious way to apply them.
Although my goal is to implement this in a compiled language like C++ or Rust with real bit shifting and intrinsics, I've been prototyping in Python. Included below is my script that includes the implementation I described, z(x), and the code for generating the plot, tower(x).
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from math import pow, floor, log, ceil

def trailing_zeros(x):
    # number of 0 bits after the least significant 1 bit
    return len(bin(x).split('b')[-1].split('1')[-1])

def f(x):
    # brute force: sum of 2^trailing_zeros(i) for i = 1..x-1
    s = 0
    for i in range(1, x):
        s += pow(2, trailing_zeros(i))
    return s

def g(x):
    return sum([pow(2, i) * floor((x + pow(2, i) - 1) / pow(2, i + 1)) for i in range(0, 32)])

def h(x):
    s = 0
    extra = 0
    extra_s = 0
    for i in range(0, 32):
        num = (x + pow(2, i) - 1)
        den = pow(2, i + 1)
        fraction = num / den
        floored = floor(num / den)
        power = pow(2, i)
        product = power * floored
        if product == 0:
            break
        s += product
        extra += (fraction - floored)
        extra_s += power * fraction
        #print(f"i={i} s={s} num={num} den={den} fraction={fraction} floored={floored} power={power} product={product} extra={extra} extra_s={extra_s}")
    return s

def z(x):
    # the implementation described above: one term per power of two up to floor(log2(x))
    upper_bound = floor(log(x, 2)) if x > 0 else 0
    s = 0
    for i in range(upper_bound + 1):
        num = (x - pow(2, i))
        den = pow(2, i + 1)
        fraction = num / den
        floored = floor(fraction)
        added = pow(2, i)
        s += floored * added
        s += added
        print(f"i={i} s={s} upper_bound={upper_bound} num={num} den={den} floored={floored} added={added}")
    return s
    # return sum([floor((x - pow(2,i))/pow(2,i+1) + pow(2,i)) for i in range(floor(log(x, 2)))])

def tower(x):
    # 'plot' of 2^trailing_zeros(i) for i = 1..x-1
    table = [[" " for i in range(x)] for j in range(ceil(log(x, 2)))]
    for i in range(1, x):
        p = trailing_zeros(i)
        table[p][i] = 2 ** p
    for row in table:
        for col in row:
            print(col, end='')
        print()

# h(9000)
for i in range(1, 16):
    tower(i)
    print((i, f(i), g(i), h(i), z(i - 1)))
Based on the method of Eric Postpischil, here is a way to do it without a loop.
Note that every bit is being multiplied by its position, and the results are summed (sort of, except there is also a factor of 0.5 in it; let's put that aside for now). Let's call the values that are being added up "the partial products", just to call them something (it's not really accurate, but I can't come up with anything better). If we transpose that a little, it is built up like this: the lowest bit of every partial product is the lowest bit of the position of every bit, multiplied by that bit. Single-bit products are a bitwise AND, and the lowest bits of the positions are 0,1,0,1, etc., so this works out to x & 0xAAAAAAAA. The second bit of every partial product is x & 0xCCCCCCCC (and has a "weight" of 2, so it must be multiplied by 2), and so on.
Then the whole thing needs to be shifted right by 1, to account for the factor of 0.5
So in total:
unsigned CountCumulativeTrailingZeros(unsigned x)
{
    --x;
    unsigned sum = x;
    sum += (x >> 1) & 0x55555555;
    sum += x & 0xCCCCCCCC;
    sum += (x & 0xF0F0F0F0) << 1;
    sum += (x & 0xFF00FF00) << 2;
    sum += (x & 0xFFFF0000) << 3;
    return sum;
}
For an additional explanation, here is a more visual example. Let's temporarily drop the factor of 0.5 again, it doesn't fundamentally change the algorithm but adds some complication.
First I write above every bit of v (some example value), the position of that bit in binary (p0 is the least significant bit of the position, p1 the second bit etc). Read the ps vertically, every column is a number:
p0: 10101010101010101010101010101010
p1: 11001100110011001100110011001100
p2: 11110000111100001111000011110000
p3: 11111111000000001111111100000000
p4: 11111111111111110000000000000000
v : 00000000100001000000001000000000
So for example bit 9 is set, and it has (reading from bottom to top) 01001 above it (9 in binary).
What we want to do (why this works has been explained by Eric's answer), is take the indexes of the bits that are set, shift them to their corresponding positions, and add them. In this case, they are already at their own positions (by construction, the numbers were written at their own positions), so there is no shift, but they still need to be filtered so only the numbers that correspond to set bits survive. This is what I meant by the "single bit products": take a bit of v and multiply it by the corresponding bits of p0, p1, etc.
You can also look at that as multiplying the bit value by its index, i.e. 2^bit * bit, as mentioned in the comments. That is not how it is done here, but it is effectively what is done.
Back to the example, applying bitwise-AND results in these partial products:
pp0: 00000000100000000000001000000000
pp1: 00000000100001000000000000000000
pp2: 00000000100000000000000000000000
pp3: 00000000000000000000001000000000
pp4: 00000000100001000000000000000000
v : 00000000100001000000001000000000
The only values that are left are 01001, 10010, 10111, and they are at their corresponding positions (so, already shifted to where they need to go).
Those values must be added, while keeping them at their positions. They don't need to be extracted from the strange form which they are in; addition is freely reorderable (associative and commutative), so it's OK to add all the least significant bits of the partial products to the sum first, then all the second bits, and so on. But they have to be added with the right "weight": after all, a set bit in pp0 corresponds to a 1 at that position, but a set bit in pp1 really corresponds to a 2 at that position (since it's the second bit of the number that it is part of). So pp0 is used directly, but pp1 is shifted left by 1, pp2 is shifted left by 2, etc.
The factor of 0.5 must still be accounted for, which I did mostly by shifting over the bits of the partial products by one less than what their weight would imply. pp0 was shifted left by zero, so it must be shifted right by 1 now. This could be done with less complication by just putting return sum >> 1; at the end, but that would reduce the range of values that the function can handle before running into integer wrapping modulo 2^32 (also it would cost an extra operation, and doing it the weird way does not).
Observe that if we count from 1 to x instead of to x−1, we have a pattern:
 x    sum   sum/x
 1      1    1
 2      3    1.5
 4      8    2
 8     20    2.5
16     48    3
So we can easily calculate the sum for any power of two p as p•(1 + ½b), where b is the power (equivalently, the number of the bit that is set, or log2 of the power). We can see this by induction: if the sum from 1 to 2^b is 2^b•(1 + ½b) (which it is for b = 0), then the sum from 1 to 2^(b+1) reprises the individual term contributions twice, except that the last term adds 2^(b+1) instead of 2^b, so the sum is 2•2^b•(1 + ½b) − 2^b + 2^(b+1) = 2^(b+1)•(1 + ½b) + ½•2^(b+1) = 2^(b+1)•(1 + ½(b+1)).
Further, between any two powers of two, the lower bits reprise the previous partial sums. Thus, for any x, we can compute the cumulative number of trailing zeros by summing the sums for the set bits in it. Recalling that this provides the sum for numbers from 1 to x, we adjust to get the desired sum from 1 to x−1 by subtracting one from x before the computation:
#include <limits.h>  /* for CHAR_BIT */

unsigned CountCumulative(unsigned x)
{
    --x;
    unsigned sum = 0;
    for (unsigned bit = 0; bit < sizeof x * CHAR_BIT; ++bit)
        sum += (x & 1u << bit) * (1 + bit * .5);
    return sum;
}
We can terminate the loop when x is exhausted:
unsigned CountCumulative(unsigned x)
{
    --x;
    unsigned sum = 0;
    for (unsigned bit = 0; x; ++bit, x >>= 1)
        sum += ((x & 1) << bit) * (1 + bit * .5);
    return sum;
}
As harold points out, we can factor out the 1, as summing the value of each bit of x equals x:
unsigned CountCumulative(unsigned x)
{
    --x;
    unsigned sum = x;
    for (unsigned bit = 0; x; ++bit, x >>= 1)
        sum += ((x & 1) << bit) * bit * .5;
    return sum;
}
Then eliminate the floating-point:
unsigned CountCumulative(unsigned x)
{
    unsigned sum = --x;
    for (unsigned bit = 0; x; ++bit, x >>= 1)
        sum += ((x & 1) << bit) / 2 * bit;
    return sum;
}
Note that when bit is zero, ((x & 1) << bit) / 2 will lose the fraction, but this is irrelevant as * bit makes the contribution zero anyway. For all other values of bit, (x & 1) << bit is even, so the division does not lose anything.
This will overflow unsigned at some point, so one might want to use a wider type for the calculations.
More Code Golf
Another way to add half the values of the bits of x repeatedly depending on their bit position is to shift x (to halve its bit values) and then add that repeatedly while removing successive bits from low to high:
unsigned CountCumulative(unsigned x)
{
    unsigned sum = --x;
    for (unsigned bit = 0; x >>= 1; ++bit)
        sum += x << bit;
    return sum;
}
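A quick way to sanity-check any of the variants above is to compare against the brute-force definition. This is only a sketch: it assumes one of the CountCumulative definitions above is pasted alongside it, and it stays well below the point where unsigned would overflow.
#include <stdio.h>

unsigned CountCumulative(unsigned x);  /* any of the definitions above */

/* Brute-force reference: sum of the lowest set bit of i for i = 1..x-1.
   i & -i isolates the lowest set bit, i.e. 2^trailing_zeros(i). */
static unsigned NaiveSum(unsigned x)
{
    unsigned sum = 0;
    for (unsigned i = 1; i < x; i++)
        sum += i & -i;
    return sum;
}

int main(void)
{
    for (unsigned x = 1; x <= 100000; x++)
        if (CountCumulative(x) != NaiveSum(x))
        {
            printf("mismatch at x = %u\n", x);
            return 1;
        }
    puts("all values agree");
    return 0;
}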

Round division of unsigned integers with no overflow

I'm looking for an overflow-safe method to perform round division of unsigned integers.
I have this:
uint roundDiv(uint n, uint d)
{
    return (n + d / 2) / d;
}
But unfortunately, the expression n + d / 2 may overflow.
I think that I will have to check whether or not n % d is smaller than d / 2.
But d / 2 itself may truncate (when d is odd).
So I figured I should check whether or not n % d * 2 is smaller than d.
Or even without a logical condition, rely on the fact that n % d * 2 / d is either 0 or 1:
uint roundDiv(uint n, uint d)
{
    return n / d + n % d * 2 / d;
}
This works well, however once again, n % d * 2 may overflow.
Is there any custom way to achieve round integer division which is overflow-safe?
Update
I have come up with this:
uint roundDiv(uint n, uint d)
{
    if (n % d < (d + d % 2) / 2)
        return n / d;
    return n / d + 1;
}
Still, the expression d + d % 2 may overflow.
return n/d + (d-d/2 <= n%d);
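Spelled out as a full function (same uint typedef as in the question), this is just a restatement of the one-liner above:
uint roundDiv(uint n, uint d)
{
    /* d - d/2 equals ceil(d/2) and cannot overflow, so the comparison
       rounds exact halves up without ever forming n + d/2. */
    return n / d + (d - d / 2 <= n % d);
}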
The way to avoid overflow at any stage is, as OP stated, to compare the remainder with half the divisor, but the result isn't quite as obvious as it first seems. Here are some examples, with the assumption that 0.5 would round up. First with an odd divisor:
Numerator  Divisor  Required  Quotient  Remainder  Half divisor  Quot < Req?
    3         3        1         1          0           1            no
    4         3        1         1          1           1            no
    5         3        2         1          2           1            yes
    6         3        2         2          0           1            no
Above, the only increment needed is when d / 2 < remainder. Now with an even divisor:
Numerator  Divisor  Required  Quotient  Remainder  Half divisor  Quot < Req?
    4         4        1         1          0           2            no
    5         4        1         1          1           2            no
    6         4        2         1          2           2            yes
    7         4        2         1          3           2            yes
    8         4        2         2          0           2            no
But here, the increment is needed when d / 2 <= remainder.
Summary:
You need a different condition depending on odd or even divisor.
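If you want to convince yourself that the earlier one-liner satisfies both conditions, a brute-force check against a reference computed in a wider type is cheap. This sketch assumes uint is unsigned int, as in the question:
#include <assert.h>
#include <stdio.h>

typedef unsigned int uint;

static uint roundDiv(uint n, uint d)
{
    return n / d + (d - d / 2 <= n % d);
}

int main(void)
{
    /* (2n + d) / (2d) is the usual "add half, then divide" form,
       evaluated in 64 bits so it cannot overflow for these inputs. */
    for (uint n = 0; n < 2000; n++)
        for (uint d = 1; d < 2000; d++)
            assert(roundDiv(n, d) == (uint)((2ull * n + d) / (2ull * d)));
    puts("ok");
    return 0;
}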

Generate random even number in range [m, n]

I was looking for C code to generate random even numbers in the range [start, end]. I tried
int random = ((start + rand() % (end - start) / 2)) * 2;
This won't work, for example if the range is [0, 4], both 0 & 4 included
int random = (0 + rand() % (4 - 0) / 2) * 2
=> (rand() % 2) * 2
=> 0, 2, ... (never includes 4) but expectation = 0, 2, 4 ...
On the other hand, if I use
int random = ((start + rand() % (end - start) / 2) + 1) * 2;
This won't work, for example,
int random = (0 + (rand() % (4 - 0) / 2) + 1) * 2
=> ((rand() % 4 / 2) + 1) * 2
=> 2, 4, ... (never includes 0) but expectation = 0, 2, 4 ...
Any clue how to get rid of this problem?
You complicated it too much. Since you're using rand() and the modulo operator, I'm assuming that you will not be using this for cryptographic or security purposes, but as a simple even number generator.
The formula I have found for generating a random even number in the range of [0, 2n] is to use
s = (rand() % (n + 1)) * 2
An example code:
#include <stdio.h>
#include <stdlib.h>

int main() {
    int i, s;
    for (i = 0; i < 100; i++) {
        s = (rand() % 3) * 2;
        printf("%d ", s);
    }
}
And it gave me the following output:
2 2 0 2 4 2 2 0 0 2 4 2 4 2 4 2 0 0 2 2 4 4 0 0 4 4 4 2 2 2 4 0 0 0 4 0 2 2 2 2 0 0 0 4 4 2 4 4 4 0 4 2 2 4 4 0 4 4 2 2 0 0 4 0 4 4 2 0 2 4 0 0 0 0 4 0 4 4 0 4 2 0 0 4 4 0 0 4 4 2 0 0 4 0 2 2 2 0 0 4 0 2 4 2
Best regards!
rand() % x will generate a number in the range [0,x) so if you want the range [0,x] then use rand() % (x+1)
Common notation for ranges is to use [] for inclusive and () for exclusive, so [a,b) would be a range such that a is included but not b.
So in your case, just use (rand() % 3)*2 to get random numbers among {0,2,4}
If you want even numbers in the range [m,n], then use ((m/2) + rand() % ((n-m+2)/2))*2
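A minimal program exercising that formula (assuming m and n are both even, as in the examples above), counting how often each value comes up for [0, 4]:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    int m = 0, n = 4;            /* inclusive endpoints, both even */
    int count[3] = {0, 0, 0};

    srand((unsigned)time(NULL));
    for (int i = 0; i < 30000; i++) {
        int r = (m / 2 + rand() % ((n - m + 2) / 2)) * 2;
        count[r / 2]++;          /* r is 0, 2 or 4 */
    }
    printf("0: %d   2: %d   4: %d\n", count[0], count[1], count[2]);
    return 0;
}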
I do not trust the mod operator for random numbers. I prefer
start + ((1 + stop - start) * rand()) / (1 + RAND_MAX)
which only relies on the distribution of rand() in the interval [0, ..., RAND_MAX] and not on any distribution of rand() % n in the interval [0, ..., n-1].
Note: If you use this expression you should add appropriate casts to avoid multiplication overflow.
Note also
ISO/IEC 9899:201x (p.346):
There are no guarantees as to the quality of the random sequence produced and some implementations are known to produce sequences with distressingly non-random low-order bits. Applications with particular requirements should use a generator that is known to be sufficient for their needs.
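Here is one way to write the scaled formula above with the casts that note asks for; rand_in_range is just an illustrative name, and the wide intermediate type is an assumption on my part:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Scaled rand() in [start, stop]; the long long intermediate keeps
   (1 + stop - start) * rand() from overflowing. */
static int rand_in_range(int start, int stop)
{
    return start + (int)(((long long)(1 + stop - start) * rand())
                         / ((long long)RAND_MAX + 1));
}

int main(void)
{
    srand((unsigned)time(NULL));
    /* Even numbers in [0, 20]: draw an index in [0, 10] and double it. */
    for (int i = 0; i < 10; i++)
        printf("%d ", 2 * rand_in_range(0, 10));
    putchar('\n');
    return 0;
}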
Just and-out the low bit, which makes it even:
n= (rand()%N)&(-2);
and to use a start/stop (a range), the values can be offset:
int n, start= 5, stop= 20+1;
n= ((rand()%(stop-start))+start)&(-2);
The rand() call generates a random number between 0 and RAND_MAX (this value is library-dependent, but is guaranteed to be at least 32767).
If the stop value must be included in the range of generated numbers, then add 1 to the stop value.
The code takes that value modulo the difference (stop - start) and then adds the start value, so the result is within the range [start, stop). As only even numbers are required, the low bit is then ANDed out to make the value even.
The ANDing-out is performed with a mask of all 1's except the lowest bit. As -1 is all 1's (0xFFF...FFFF), -2 is all 1's except the low bit (0xFFF...FFFE). The bitwise AND operation (&) with this mask clears the low bit, leaving an even number in the range [start, stop). QED.

C rand() dice issue

I'm new to C and I'm reading a book about it. I just came across the rand() function. The book states that using rand() returns a random number from 0 to 32767. It also states that you can narrow the random numbers by using % (modulus operator) to do so.
Here is an example: the following expression puts a random number from 1 to 6 in the variable dice
dice = (rand() % 5) + 1;
I'm unable to get a remainder of 5, as any number from 0 to 32767 modulo 5 gives 0 to 4, but never 5.
Shouldn't it be % 6 in the above statement instead?
For example, if I choose randomly a number between 0 and 32767, let's say 75, then:
75 % 5 == 0
76 % 5 == 1
77 % 5 == 2
78 % 5 == 3
79 % 5 == 4
80 % 5 == 0
Etc.
So regardless of the random number between 0 and 32767, the remainder will never be 5, so it will not be possible to get a 6 for the dice (as per the above statement).
Not sure if you will understand what I mean but your help would be much appreciated.
dice = (rand() % 5) + 1;
This will generate a random number between 1 to 5, inclusive, as you have analyzed. The % 5 in the book is probably just a typo. To get 1 to 6 it needs to be % 6.
First you have to understand how modulo (%) works. If you divide, say, 10 by 5 you get 2 with a remainder of 0, hence 10 % 5 == 0. The possible remainders when you take a number modulo 5 are 0 to 4. In general, the possible remainders when you divide by x run from 0 to x-1. In your case the dice program needs numbers in the range 1 to 6 (the faces of a die), so you mod by 6 and add 1 to give the necessary shift: (rand() % 6) + 1.
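Putting the corrected expression into a complete program (seeding with time(NULL) is my addition; the slight modulo bias is ignored here, as in the book):
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    srand((unsigned)time(NULL));       /* seed once per run */
    for (int i = 0; i < 10; i++) {
        int dice = rand() % 6 + 1;     /* rand() % 6 is 0..5, +1 gives 1..6 */
        printf("%d ", dice);
    }
    putchar('\n');
    return 0;
}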

generate random number in a range [l u] [duplicate]

Possible Duplicate:
How to generate a random number from within a range - C
I saw the following code in Programming Pearls:
int randint(int l, int u)
{
    return l + (RAND_MAX*rand() + rand()) % (u-l+1);
}
Can anyone help me explain it?
Can we just use
return l + rand() % (u-l+1);
Thanks,
The problem with using rand() % n to get a number between 0 and n-1 is that it has some bias when n is not an exact divisor of RAND_MAX. The higher the value of n, the stronger this bias becomes.
To illustrate why this happens, let's imagine that rand() were implemented with a six-sided die. So RAND_MAX would be 5. We want to use this die to generate random numbers between 0 and 3, so we do this:
x = rand() % 4
What's the value of x for each of the six outcomes of rand?
0 % 4 = 0
1 % 4 = 1
2 % 4 = 2
3 % 4 = 3
4 % 4 = 0
5 % 4 = 1
As you can see, the numbers 0 and 1 will be generated twice as often as the numbers 2 and 3.
When your use case doesn't permit bias, this is a better way to calculate a random number:
(int)((double)rand() / (double)RAND_MAX * (double)n)
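An alternative that avoids the bias entirely is rejection sampling, i.e. throwing away the top sliver of rand()'s range; this is a standard technique, not part of the answer above, and the function name is made up for illustration:
#include <stdlib.h>

/* Uniform value in [0, n-1] for 0 < n <= RAND_MAX, by rejecting the
   partial block at the top of rand()'s range. */
static int unbiased_rand_below(int n)
{
    int limit = RAND_MAX - RAND_MAX % n;   /* largest multiple of n <= RAND_MAX */
    int r;
    do {
        r = rand();
    } while (r >= limit);
    return r % n;
}
Then l + unbiased_rand_below(u - l + 1) gives an unbiased value in [l, u].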
Yes, that is OK (check that u > l), and you can do just this:
return l + (RAND_MAX*rand()) % (u-l+1);
Explanation:
If we would like to generate, with uniform distribution, a random integer in [0, N] where N > 0, we would use:
return (RAND_MAX*rand()) % (N+1);
Since the range is shifted by a constant value l in your case, we just have to add it to the final result.
python model:
>>> import random
>>> import sys
>>> for i in xrange(20):
...     int(random.random()*sys.maxint%4)
0
1
2
3
1
1
2
2
3
0
3
3
0
2
3
3
1
2
2
3

Resources