This is a follow-on from a previously posted question:
How to generate a random number in C?
I wish to be able to generate a random number from within a particular range, such as 1 to 6 to mimic the sides of a die.
How would I go about doing this?
All the answers so far are mathematically wrong. Returning rand() % N does not uniformly give a number in the range [0, N) unless N divides the length of the interval into which rand() returns (i.e. is a power of 2). Furthermore, one has no idea whether the residues of rand() modulo N are independent: it's possible that they go 0, 1, 2, ..., which is uniform but not very random. The only assumption it seems reasonable to make is a Poisson-process-like one: any two nonoverlapping subintervals of the same size are equally likely and independent. For a finite set of values, this implies a uniform distribution and also ensures that the values of rand() are nicely scattered.
This means that the only correct way of changing the range of rand() is to divide it into boxes; for example, if RAND_MAX == 11 and you want a range of 1..6, you should assign {0,1} to 1, {2,3} to 2, and so on. These are disjoint, equally-sized intervals and thus are uniformly and independently distributed.
The suggestion to use floating-point division is mathematically plausible but suffers from rounding issues in principle. Perhaps double is high-enough precision to make it work; perhaps not. I don't know and I don't want to have to figure it out; in any case, the answer is system-dependent.
The correct way is to use integer arithmetic. That is, you want something like the following:
#include <stdlib.h> // For random(), RAND_MAX

// Assumes 0 <= max <= RAND_MAX
// Returns in the closed interval [0, max]
long random_at_most(long max) {
  unsigned long
    // max <= RAND_MAX < ULONG_MAX, so this is okay.
    num_bins = (unsigned long) max + 1,
    num_rand = (unsigned long) RAND_MAX + 1,
    bin_size = num_rand / num_bins,
    defect   = num_rand % num_bins;

  long x;
  do {
    x = random();
  }
  // This is carefully written not to overflow
  while (num_rand - defect <= (unsigned long)x);

  // Truncated division is intentional
  return x/bin_size;
}
The loop is necessary to get a perfectly uniform distribution. For example, if you are given random numbers from 0 to 2 and you want only ones from 0 to 1, you just keep pulling until you don't get a 2; it's not hard to check that this gives 0 or 1 with equal probability. This method is also described in the link that nos gave in their answer, though coded differently. I'm using random() rather than rand() as it has a better distribution (as noted by the man page for rand()).
If you want to get random values outside the default range [0, RAND_MAX], then you have to do something tricky. Perhaps the most expedient is to define a function random_extended() that pulls n bits (using random_at_most()) and returns in [0, 2**n), and then apply random_at_most() with random_extended() in place of random() (and 2**n - 1 in place of RAND_MAX) to pull a random value less than 2**n, assuming you have a numerical type that can hold such a value. Finally, of course, you can get values in [min, max] using min + random_at_most(max - min), including negative values.
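A minimal sketch of that extension, assuming glibc's random(), whose range [0, 2^31 - 1] supplies 31 uniform bits per call (this variant concatenates raw draws rather than pulling bits through random_at_most()):

#include <stdint.h>
#include <stdlib.h>

/* Sketch: concatenate two 31-bit random() draws into one value that is
 * uniform on [0, 2**62). Apply the same box/rejection logic as
 * random_at_most(), with 2**62 - 1 standing in for RAND_MAX. */
uint64_t random_extended(void) {
    uint64_t hi = (uint64_t)random(); /* 31 bits */
    uint64_t lo = (uint64_t)random(); /* 31 bits */
    return (hi << 31) | lo;
}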
Following on from @Ryan Reich's answer, I thought I'd offer my cleaned-up version. The first bounds check isn't required given the second bounds check, and I've made it iterative rather than recursive. It returns values in the range [min, max], where max >= min and 1 + max - min < RAND_MAX.
#include <stdlib.h> // rand(), RAND_MAX

unsigned int rand_interval(unsigned int min, unsigned int max)
{
    int r;
    const unsigned int range = 1 + max - min;
    const unsigned int buckets = RAND_MAX / range;
    const unsigned int limit = buckets * range;

    /* Create equal size buckets all in a row, then fire randomly towards
     * the buckets until you land in one of them. All buckets are equally
     * likely. If you land off the end of the line of buckets, try again. */
    do
    {
        r = rand();
    } while (r >= limit);

    return min + (r / buckets);
}
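For the die in the original question, usage is then simply (seed once at startup; time() needs <time.h>):

srand(time(NULL));              /* seed once, at program start */
int roll = rand_interval(1, 6); /* uniform on [1, 6] */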
Here is a formula to use if you know the max and min values of a range, and you want to generate numbers within that range, inclusive:
r = (rand() % (max + 1 - min)) + min
unsigned int randr(unsigned int min, unsigned int max)
{
    /* divide by RAND_MAX + 1.0 so that scaled is in [0, 1); dividing by
     * RAND_MAX alone can make scaled == 1.0 and the result max + 1 */
    double scaled = (double)rand() / ((double)RAND_MAX + 1.0);
    return (max - min + 1) * scaled + min;
}
See here for other options.
Wouldn't you just do:
srand(time(NULL));
int r = ( rand() % 6 ) + 1;
% is the modulo operator. Essentially it will just divide by 6 and return the remainder, which ranges from 0 to 5.
For those who understand the bias problem but can't stand the unpredictable run-time of rejection-based methods, this series produces a progressively less biased random integer in the [0, n-1] interval:
r = n / 2;
r = (rand() * n + r) / (RAND_MAX + 1);
r = (rand() * n + r) / (RAND_MAX + 1);
r = (rand() * n + r) / (RAND_MAX + 1);
...
It does so by synthesising a high-precision fixed-point random number of i * log_2(RAND_MAX + 1) bits (where i is the number of iterations) and performing a long multiplication by n.
When the number of bits is sufficiently large compared to n, the bias becomes immeasurably small.
It does not matter if RAND_MAX + 1 is less than n (as in this question), or if it is not a power of two, but care must be taken to avoid integer overflow if RAND_MAX * n is large.
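As a sketch under that caveat, the long multiplication can be done in a 64-bit intermediate (assuming RAND_MAX < 2^32); three iterations, as in the series above:

#include <stdint.h>
#include <stdlib.h>

/* Sketch: progressively less biased integer in [0, n-1].
 * Each iteration folds log2(RAND_MAX + 1) more random bits into r;
 * 64-bit intermediates avoid the RAND_MAX * n overflow noted above. */
int rand_n(int n) {
    uint64_t rand_values = (uint64_t)RAND_MAX + 1;
    uint64_t r = (uint64_t)(n / 2);
    for (int i = 0; i < 3; i++)
        r = ((uint64_t)rand() * (uint64_t)n + r) / rand_values;
    return (int)r;
}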
Here is a slightly simpler algorithm than Ryan Reich's solution:
#include <stdint.h> // uint32_t, uint64_t
#include <stdlib.h> // rand(), RAND_MAX

/// Begin and end are *inclusive*; => [begin, end]
uint32_t getRandInterval(uint32_t begin, uint32_t end) {
    uint32_t range = (end - begin) + 1;
    uint32_t limit = ((uint64_t)RAND_MAX + 1) - (((uint64_t)RAND_MAX + 1) % range);

    /* Imagine range-sized buckets all in a row, then fire randomly towards
     * the buckets until you land in one of them. All buckets are equally
     * likely. If you land off the end of the line of buckets, try again. */
    uint32_t randVal = rand();
    while (randVal >= limit) randVal = rand();

    /// Return the position you hit in the bucket + begin as random number
    return (randVal % range) + begin;
}
Example (RAND_MAX := 16, begin := 2, end := 7)
=> range := 6 (1 + end - begin)
=> limit := 12 (RAND_MAX + 1) - ((RAND_MAX + 1) % range)
The limit is always a multiple of the range,
so we can split it into range-sized buckets:
Possible-rand-output: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Buckets: [0, 1, 2, 3, 4, 5][0, 1, 2, 3, 4, 5][X, X, X, X, X]
Buckets + begin: [2, 3, 4, 5, 6, 7][2, 3, 4, 5, 6, 7][X, X, X, X, X]
1st call to rand() => 13
→ 13 is not in the bucket-range anymore (>= limit), while-condition is true
→ retry...
2nd call to rand() => 7
→ 7 is in the bucket-range (< limit), while-condition is false
→ Get the corresponding bucket-value 1 (randVal % range) and add begin
=> 3
In order to avoid the modulo bias (suggested in other answers) you can always use:
arc4random_uniform(MAX - MIN + 1) + MIN
where "MAX" is the inclusive upper bound and "MIN" is the inclusive lower bound; arc4random_uniform(n) returns a uniform value in [0, n), which is why the + 1 is needed. For example, for numbers between 10 and 20:
arc4random_uniform(20 - 10 + 1) + 10
arc4random_uniform(11) + 10
Simple solution and better than using "rand() % N".
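For the original die question this becomes a one-liner. Note that arc4random_uniform() is a BSD/macOS API; on Linux it typically requires libbsd:

#include <stdint.h>
#include <stdlib.h> /* arc4random_uniform() on BSD/macOS */

uint32_t roll = arc4random_uniform(6) + 1; /* uniform on [1, 6]; no seeding needed */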
While Ryan is correct, the solution can be much simpler based on what is known about the source of the randomness. To re-state the problem:
There is a source of randomness, outputting integer numbers in range [0, MAX) with uniform distribution.
The goal is to produce uniformly distributed random integer numbers in range [rmin, rmax] where 0 <= rmin < rmax < MAX.
In my experience, if the number of bins (or "boxes") is significantly smaller than the range of the original numbers, and the original source is cryptographically strong, there is no need to go through all that rigmarole: simple modulo division suffices (like output = rnd.next() % (rmax+1), if rmin == 0) and produces random numbers that are distributed uniformly "enough", without any loss of speed. The key factor is the randomness source (i.e., kids, don't try this at home with rand()).
Here's an example/proof of how it works in practice. I wanted to generate random numbers from 1 to 22, having a cryptographically strong source that produced random bytes (based on Intel RDRAND). The results are:
Rnd distribution test (22 boxes, numbers of entries in each box):
1: 409443 4.55%
2: 408736 4.54%
3: 408557 4.54%
4: 409125 4.55%
5: 408812 4.54%
6: 409418 4.55%
7: 408365 4.54%
8: 407992 4.53%
9: 409262 4.55%
10: 408112 4.53%
11: 409995 4.56%
12: 409810 4.55%
13: 409638 4.55%
14: 408905 4.54%
15: 408484 4.54%
16: 408211 4.54%
17: 409773 4.55%
18: 409597 4.55%
19: 409727 4.55%
20: 409062 4.55%
21: 409634 4.55%
22: 409342 4.55%
total: 100.00%
This is as close to uniform as I need for my purpose (fair dice throw, generating cryptographically strong codebooks for WWII cipher machines such as http://users.telenet.be/d.rijmenants/en/kl-7sim.htm, etc). The output does not show any appreciable bias.
Here's the source of the cryptographically strong (true) random number generator:
Intel Digital Random Number Generator
and sample code that produces 64-bit (unsigned) random numbers.
int rdrand64_step(unsigned long long int *therand)
{
    unsigned long long int foo;
    int cf_error_status;
    asm("rdrand %%rax; \
         mov $1, %%edx; \
         cmovae %%rax, %%rdx; \
         mov %%edx, %1; \
         mov %%rax, %0;" : "=r"(foo), "=r"(cf_error_status) : : "%rax", "%rdx");
    *therand = foo;
    return cf_error_status;
}
I compiled it on Mac OS X with clang-6.0.1 (straight), and with gcc-4.8.3 using the "-Wa,q" flag (because GAS does not support these new instructions).
As said before, modulo isn't sufficient because it skews the distribution. Here's my code, which masks off bits and uses them to ensure the distribution isn't skewed.
#include <stdint.h> // uint32_t
#include <stdlib.h> // rand()

static uint32_t randomInRange(uint32_t a, uint32_t b) {
    uint32_t v;
    uint32_t range;
    uint32_t upper;
    uint32_t lower;
    uint32_t mask;

    if (a == b) {
        return a;
    }
    if (a > b) {
        upper = a;
        lower = b;
    } else {
        upper = b;
        lower = a;
    }
    range = upper - lower;

    mask = 0;
    //XXX calculate range with log and mask? nah, too lazy :).
    while (1) {
        if (mask >= range) {
            break;
        }
        mask = (mask << 1) | 1;
    }

    while (1) {
        v = rand() & mask;
        if (v <= range) {
            return lower + v;
        }
    }
}
The following simple code lets you look at the distribution:
#include <stdio.h>
#include <stdint.h>

int main() {
    unsigned long long int i;
    unsigned int n = 10;
    unsigned int numbers[n];

    for (i = 0; i < n; i++) {
        numbers[i] = 0;
    }
    for (i = 0; i < 10000000; i++) {
        uint32_t r = randomInRange(0, n - 1); /* matches the function above */
        if (r >= n) {
            printf("bug: rand out of range %u\n", (unsigned int)r);
            return 1;
        }
        numbers[r] += 1;
    }
    for (i = 0; i < n; i++) {
        printf("%llu: %u\n", i, numbers[i]);
    }
}
This will return a floating-point number in the range [0, 1] (note that RAND_MAX is specified for rand(); on glibc, random() happens to have the same maximum value, 2^31 - 1):
#define rand01() (((double)random())/((double)(RAND_MAX)))
I'm trying to write a program that calculates decimal digits of π to 1000 digits or more.
To practice low-level programming for fun, the final program will be written in assembly, on an 8-bit CPU that has no multiplication or division and only performs 16-bit additions. To ease the implementation, it's desirable to use only 16-bit unsigned integer operations and an iterative algorithm. Speed is not a major concern, and fast multiplication and division are beyond the scope of this question, so don't consider those issues either.
Before implementing it in assembly, I'm still trying to figure out a usable algorithm in C on my desktop computer. So far, I found that the following series is reasonably efficient and relatively easy to implement:
pi = 2 + (1/3) * (2 + (2/5) * (2 + (3/7) * (2 + ... + (i/(2i + 1)) * (2 + ...))))
The formula is derived from the Leibniz series using a convergence-acceleration technique. To derive it, see Computing the Digits in π by Carl D. Offner (https://cs.umb.edu/~offner/files/pi.pdf), pages 19-26; the final formula is shown on page 26, and the constant term 2 at the greatest term is explained on page 54. The paper also describes an advanced iterative algorithm, but I didn't use it here.
If one evaluates the series using many (e.g. 5000) terms, it's possible to get thousands of digits of π easily, and I found this series is also easy to evaluate iteratively using the following algorithm:
Algorithm
1. First, rearrange the formula to obtain its constant terms from an array.
2. Fill the array with 2 to start the first iteration, so that the new formula resembles the original one.
3. Let carry = 0.
4. Start from the greatest term. Obtain one term (2) from the array, multiply the term by PRECISION to perform a fixed-point division by 2 * i + 1, and save the remainder as the new term in the array. Then add the next term. Now decrement i, go to the next term, and repeat until i == 1. Finally add the final term x_0.
5. Because a 16-bit integer is used, PRECISION is 10, hence 2 decimal digits are obtained, but only the first digit is valid. Save the second digit as carry. Show the first digit plus carry.
6. x_0 is the integer 2; it should not be added in the successive iterations, so clear it.
7. Go to step 4 to calculate the next decimal digit, until we have all the digits we want.
Implementation 1
Translating this algorithm to C:
#include <stdio.h>
#include <stdint.h>

#define N 2160
#define PRECISION 10

uint16_t terms[N + 1] = {0};

int main(void)
{
    /* initialize the initial terms */
    for (size_t i = 0; i < N + 1; i++) {
        terms[i] = 2;
    }

    uint16_t carry = 0;
    for (size_t j = 0; j < N / 4; j++) {
        uint16_t numerator = 0;
        uint16_t denominator;
        uint16_t digit;

        for (size_t i = N; i > 0; i--) {
            numerator += terms[i] * PRECISION;
            denominator = 2 * i + 1;
            terms[i] = numerator % denominator;
            numerator /= denominator;
            numerator *= i;
        }
        numerator += terms[0] * PRECISION;
        digit = numerator / PRECISION + carry;
        carry = numerator % PRECISION;
        printf("%01u", digit);

        /* constant term 2, only needed for the first iteration. */
        terms[0] = 0;
    }
    putchar('\n');
}
The code can calculate π to 31 decimal digits, until it makes an error.
31415926535897932384626433832794
10 <-- wrong
Sometimes digit + carry is greater than 9, so an extra carry is needed. If we are very unlucky, there may even be a double carry, triple carry, and so on. We use a ring buffer to store the last 4 digits. If an extra carry is detected, we output a backspace to erase the previous digit, perform a carry, and reprint the digits. This is just an ugly proof-of-concept solution, irrelevant to my question about overflow, but for completeness, here it is. Something better will be implemented in the future.
Implementation 2 with Repeated Carry
#include <stdio.h>
#include <stdint.h>

#define N 2160
#define PRECISION 10
#define BUF_SIZE 4

uint16_t terms[N + 1] = {0};

int main(void)
{
    /* initialize the initial terms */
    for (size_t i = 0; i < N + 1; i++) {
        terms[i] = 2;
    }

    uint16_t carry = 0;
    uint16_t digit[BUF_SIZE];
    int8_t idx = 0;

    for (size_t j = 0; j < N / 4; j++) {
        uint16_t numerator = 0;
        uint16_t denominator;

        for (size_t i = N; i > 0; i--) {
            numerator += terms[i] * PRECISION;
            denominator = 2 * i + 1;
            terms[i] = numerator % denominator;
            numerator /= denominator;
            numerator *= i;
        }
        numerator += terms[0] * PRECISION;
        digit[idx] = numerator / PRECISION + carry;

        /* over 9, needs at least one carry op. */
        if (digit[idx] > 9) {
            for (int i = 1; i <= 4; i++) {
                if (i > 3) {
                    /* allow up to 3 consecutive carry ops */
                    fprintf(stderr, "ERROR: too many carry ops!\n");
                    return 1;
                }
                /* erase a digit */
                putchar('\b');
                /* carry */
                digit[idx] -= 10;
                idx--;
                if (idx < 0) {
                    idx = BUF_SIZE - 1;
                }
                digit[idx]++;
                if (digit[idx] < 10) {
                    /* done! reprint the digits */
                    for (int j = 0; j <= i; j++) {
                        printf("%01u", digit[idx]);
                        idx++;
                        if (idx > BUF_SIZE - 1) {
                            idx = 0;
                        }
                    }
                    break;
                }
            }
        }
        else {
            printf("%01u", digit[idx]);
        }
        carry = numerator % PRECISION;
        terms[0] = 0;

        /* put an element to the ring buffer */
        idx++;
        if (idx > BUF_SIZE - 1) {
            idx = 0;
        }
    }
    putchar('\n');
}
Great, now the program can correctly calculate 534 digits of π, until it makes an error.
3141592653589793238462643383279502884
1971693993751058209749445923078164062
8620899862803482534211706798214808651
3282306647093844609550582231725359408
1284811174502841027019385211055596446
2294895493038196442881097566593344612
8475648233786783165271201909145648566
9234603486104543266482133936072602491
4127372458700660631558817488152092096
2829254091715364367892590360011330530
5488204665213841469519415116094330572
7036575959195309218611738193261179310
5118548074462379962749567351885752724
8912279381830119491298336733624406566
43086021394946395
22421 <-- wrong
16-bit Integer Overflow
It turns out that, during the calculation of the largest terms at the beginning, the intermediate values get quite large, since the divisors at the beginning are in the range of ~4000. When evaluating the series, numerator actually starts to overflow in the multiplication immediately.
The integer overflow is insignificant when calculating the first 500 digits, but starts to get worse and worse, until it gives an incorrect result.
Changing uint16_t numerator = 0 to uint32_t numerator = 0 can solve this problem and calculate π to 1000+ digits.
However, as I mentioned before, my target platform is an 8-bit CPU that only has 16-bit operations. Is there a trick to solve the 16-bit integer overflow issue I'm seeing here, using only one or more uint16_t values? If multiple-precision arithmetic cannot be avoided, what is the simplest method to implement it here? I know I somehow need to introduce an extra 16-bit "extension word", but I'm not sure how to implement it.
And thanks in advance for your patience in following the long context here.
Take a look at this related QA:
Baking-Pi Challenge - Understanding & Improving
It's using the Bailey–Borwein–Plouffe formula (see the Wikipedia link), which is better suited to integer arithmetic.
The real challenge, however, would be:
How do I convert a very long binary number to decimal?
as you probably want to print the number in decimal base...
Also, if you need carry in a higher-level language than asm, take a look at this:
Cant make value propagate through carry
You can modify it to handle as many carry bits as you need (if still less than the data type's bit width).
[Edit1] BBP example in C++/VCL
I used this formula (taken from the Wiki page linked above; it is the single-fraction BBP form, whose coefficients appear in pi_BBP below):
pi = sum[k=0..inf] (1/16^k) * (120k^2 + 151k + 47) / (512k^4 + 1024k^3 + 712k^2 + 194k + 15)
converted to fixed point...
//---------------------------------------------------------------------------
AnsiString str_hex2dec(const AnsiString &hex)
{
char c;
AnsiString dec="",s;
int i,j,l,ll,cy,val;
int i0,i1,i2,i3,sig;
sig=+1; l=hex.Length();
if (l) { c=hex[l]; if (c=='h') l--; if (c=='H') l--; }
i0=0; i1=l; i2=0; i3=l;
for (i=1;i<=l;i++) // scan for parts of number
{
char c=hex[i];
if (c=='-') sig=-sig;
if ((c=='.')||(c==',')) i1=i-1;
if ((c>='0')&&(c<='9')) { if (!i0) i0=i; if ((!i2)&&(i>i1)) i2=i; }
if ((c>='A')&&(c<='F')) { if (!i0) i0=i; if ((!i2)&&(i>i1)) i2=i; }
if ((c>='a')&&(c<='f')) { if (!i0) i0=i; if ((!i2)&&(i>i1)) i2=i; }
}
l=0; s=""; if (i0) for (i=i0;i<=i1;i++)
{
c=hex[i];
if ((c>='0')&&(c<='9')) c-='0';
else if ((c>='A')&&(c<='F')) c-='A'-10;
else if ((c>='a')&&(c<='f')) c-='a'-10;
for (cy=c,j=1;j<=l;j++)
{
val=(s[j]<<4)+cy;
s[j]=val%10;
cy =val/10;
}
while (cy>0)
{
l++;
s+=char(cy%10);
cy/=10;
}
}
if (s!="")
{
for (j=1;j<=l;j++) { c=s[j]; if (c<10) c+='0'; else c+='A'-10; s[j]=c; }
for (i=l,j=1;j<i;j++,i--) { c=s[i]; s[i]=s[j]; s[j]=c; }
dec+=s;
}
if (dec=="") dec="0";
if (sig<0) dec="-"+dec;
if (i2)
{
dec+='.';
s=hex.SubString(i2,i3-i2+1);
l=s.Length();
for (i=1;i<=l;i++)
{
c=s[i];
if ((c>='0')&&(c<='9')) c-='0';
else if ((c>='A')&&(c<='F')) c-='A'-10;
else if ((c>='a')&&(c<='f')) c-='a'-10;
s[i]=c;
}
ll=((l*1234)>>10); // num of decimals to compute
for (cy=0,i=1;i<=ll;i++)
{
for (cy=0,j=l;j>=1;j--)
{
val=s[j];
val*=10;
val+=cy;
s[j]=val&15;
cy=val>>4;
}
dec+=char(cy+'0');
for (;;)
{
if (!l) break;
if (s[l]) break;
l--;
}
if (!l) break;
}
}
return dec;
}
//---------------------------------------------------------------------------
AnsiString pi_BBP() // https://en.wikipedia.org/wiki/Bailey–Borwein–Plouffe_formula
{
const int N=100; // 32*N bit uint arithmetics
int sh;
AnsiString s;
uint<N> pi,a,b,k,k2,k3,k4;
for (pi=0,sh=(N<<5)-8,k=0;sh>=0;k++,sh-=4)
{
k2=k*k;
k3=k2*k;
k4=k3*k;
a =k2* 120;
a+=k * 151;
a+= 47;
b =k4* 512;
b+=k3*1024;
b+=k2* 712;
b+=k * 194;
b+= 15;
a<<=sh;
pi+=a/b;
}
pi<<=4;
s=pi.strhex();
s=s.Insert(".",2);
return str_hex2dec(s);
}
//---------------------------------------------------------------------------
The code uses VCL AnsiString, which is a self-allocating string, and my uint<N> template, which provides unsigned integer arithmetic of 32*N bit width based on my ALU32. As you can see, you only need big-integer division, addition, and multiplication for this (all the other stuff is doable on normal integers).
Here is the decadic result versus a 1000-digit π reference:
ref: 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989
BBP: 3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172535940812848111745028410270193852110555964462294895493038196442881097566593344612847564823378678316527120190914564856692346034861045432664821339360726024914127372458700660631558817488152092096282925409171536436789259036001133053054882046652138414695194151160943305727036575959195309218611738193261179310511854807446237996274956735188575272489122793818301194912983367336244065664308602139494639522473719070217986094370277053921717629317675238467481846766940513200056812714526356082778577134275778960917363717872146844090122495343014654958537105079227968925892354201995611212902196086403441815981362977477130996051870721134999999837297804995105973173281609631859502445945534690830264252230825334468503526193118817101000313783875288658753320838142061717766914730359825349042875546873115956286388235378759375195778185778048187
The computed bigint value is exported to a hex string and then converted to decadic base using str_hex2dec from the link above. The number of iterations depends on the target bit width.
The code is not optimized yet...
What about implementing 32-bit arithmetic?
For an addition, add the two high-order words (16 bits), then the two low-order words, test the carry out of the low-order addition, and carry into the high-order result if necessary.
If you can predict when overflow will occur, you can switch from 16- to 32-bit arithmetic where necessary.
The carry flag cannot be read directly in pure C and normally requires some inline assembly or an intrinsic function. For unsigned additions, however, you can detect the carry portably by checking whether the 16-bit sum wrapped below one of its operands, as in the sketch below.
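A minimal sketch of such an addition, with a 32-bit value kept as two 16-bit words; the struct and names are illustrative:

#include <stdint.h>

/* A 32-bit quantity stored as two 16-bit words. */
typedef struct { uint16_t hi, lo; } u32w;

/* Sketch: c = a + b using only 16-bit additions.
 * (c.lo < a.lo) is 1 exactly when the low-word addition wrapped,
 * i.e. it is the carry into the high word. */
static u32w add32(u32w a, u32w b) {
    u32w c;
    c.lo = (uint16_t)(a.lo + b.lo);
    c.hi = (uint16_t)(a.hi + b.hi + (c.lo < a.lo));
    return c;
}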
Otherwise, you can be inspired by this answer: https://codereview.stackexchange.com/a/37178/39646
There is a trick:
Consider using an array for the numerators and another array for the denominators. Each position would represent the number of times that number is multiplied to get the actual number.
An example:
(1 * 2 * 3 * 7 * 7) / (3 * 6 * 8)
Would be represented as:
num[] = {1, 1, 1, 0, 0, 0, 2};
denom[] = {0, 0, 1, 0, 0, 1, 0, 1};
Then consider factorizing into prime numbers every number before storing it, so that you have lower numbers. Now you will need another array to store all the primes:
primes[] = {2, 3, 5, 7};
num[] = {1, 1, 0, 2};
denom[] = {4, 2, 0, 0};
This will allow you to store unimaginably big numbers, but you will sooner or later want to transform them back into numbers, so you will want to simplify this first. The way to do it is to accumulate factors[i] += num[i] - denom[i] for every field in the arrays, for every fraction in the series. You will want to simplify after each iteration, to minimize the risk of overflow.
factors[] = {-3, -1, 0, 2};
When you need the number, just do num *= pow(primes[i], factors[i]); if the factor is positive, or num /= pow(primes[i], -factors[i]); if it is negative, for every field in the arrays. (Do nothing if it is 0.)
num and denom are temporary arrays used to store a fraction; the array where the result is being stored is factors. Remember to memset the temporary arrays before every use.
This explanation is useful for any big fraction. To adapt it to your specific problem, you may need to use an integer power function, and also multiply by 10^something to turn the decimal part into an integral part. That is your mission, should you accept it :)
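A small sketch of the bookkeeping described above, for the fixed prime list {2, 3, 5, 7}; the names are illustrative, and each n is assumed to factor completely over that list:

#define NPRIMES 4
static const int primes[NPRIMES] = {2, 3, 5, 7};

/* Add n's prime exponents into exp[]: sign = +1 for a numerator
 * factor, -1 for a denominator factor. */
static void accumulate(int n, int sign, int exp[NPRIMES]) {
    for (int i = 0; i < NPRIMES; i++)
        while (n % primes[i] == 0) {
            exp[i] += sign;
            n /= primes[i];
        }
}

Feeding the worked example through it, accumulate(2, +1, factors), accumulate(3, +1, factors), two calls with 7, then accumulate(3, -1, factors), accumulate(6, -1, factors) and accumulate(8, -1, factors), leaves factors at {-3, -1, 0, 2}, matching the arrays above.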
I have an array of size 4, 9, 16 or 25 (according to the input), containing the numbers from 0 up to one less than the array size (so if the array size is 9, the biggest element is 8; the numbers start with 0),
and I would like some algorithm to generate a sort of checksum for the array, so that I can check that 2 arrays are equal without looping through the whole array and comparing each element one by one.
Where can I get this sort of information? I need something that is as simple as possible. Thank you.
edit: just to be clear on what I want:
-All the numbers in the array are distinct, so [0,1,1,2] is not valid because there is a repeated element (1)
-The position of the numbers matter, so [0,1,2,3] is not the same as [3,2,1,0]
-The array will contain the number 0, so this should also be taken into consideration.
EDIT:
Okay, I tried to implement Fletcher's algorithm here:
http://en.wikipedia.org/wiki/Fletcher%27s_checksum#Straightforward
int fletcher(int array[], int size) {
    int i;
    int sum1 = 0;
    int sum2 = 0;
    for (i = 0; i < size; i++) {
        sum1 = (sum1 + array[i]) % 255;
        sum2 = (sum2 + sum1) % 255;
    }
    return (sum2 << 8) | sum1;
}
To be honest, I have no idea what the return line does (it packs sum2 into the high byte and sum1 into the low byte of the result), but unfortunately, the algorithm does not work.
For arrays [2,1,3,0] and [1,3,2,0] I get the same checksum.
EDIT2:
okay here's another one, the Adler checksum
http://en.wikipedia.org/wiki/Adler-32#Example_implementation
#define MOD 65521 /* no trailing semicolon: it would break the macro in expressions */

unsigned long adler(int array[], int size) {
    int i;
    unsigned long a = 1;
    unsigned long b = 0;
    for (i = 0; i < size; i++) {
        a = (a + array[i]) % MOD;
        b = (b + a) % MOD;
    }
    return (b << 16) | a;
}
This also does not work.
Arrays [2,0,3,1] and [1,3,0,2] generate same checksum.
I'm losing hope here, any ideas?
Let's take the case of your array of 25 integers. You explain that it can contain any permutation of the unique integers 0 to 24. According to this page, there are 25! (25 factorial) possible permutations, that is 15511210043330985984000000. Far more than a 32-bit integer can hold.
The conclusion is that you will have collision, no matter how hard you try.
Now, here is a simple algorithm that accounts for position:

int checksum(int array[], int size) {
    unsigned int c = 0; // unsigned, so the rotate's right shift is well defined
    for (int i = 0; i < size; i++) {
        c += (unsigned int)array[i];
        c = c << 3 | c >> (32 - 3); // rotate a little
        c ^= 0xFFFFFFFFu;           // invert just for fun
    }
    return (int)c;
}
I think what you want is in the answer of the following thread:
Fast permutation -> number -> permutation mapping algorithms
You just take the number your permutation is mapped to and use that as your checksum. As there is exactly one checksum per permutation, there can't be a smaller checksum that is collision-free.
How about a weighted-sum checksum? Let's take an example for [0,1,2,3]. First pick a seed and a limit: let's pick 7 as the seed and 10000007 as the limit.
a[4] = {0, 1, 2, 3}
limit = 10000007, seed = 7
result = 0
result = ((result + a[0]) * seed) % limit = ((0 + 0) * 7) % 10000007 = 0
result = ((result + a[1]) * seed) % limit = ((0 + 1) * 7) % 10000007 = 7
result = ((result + a[2]) * seed) % limit = ((7 + 2) * 7) % 10000007 = 63
result = ((result + a[3]) * seed) % limit = ((63 + 3) * 7) % 10000007 = 462
Your checksum is 462 for that [0, 1, 2, 3].
The reference is http://www.codeabbey.com/index/wiki/checksum
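Wrapped up as a small C function (same seed 7 and limit 10000007 as in the example):

/* Sketch of the weighted-sum checksum from the example above:
 * fold each element in as result = ((result + a[i]) * seed) % limit. */
unsigned long weighted_checksum(const int a[], int size) {
    const unsigned long seed = 7, limit = 10000007;
    unsigned long result = 0;
    for (int i = 0; i < size; i++)
        result = ((result + (unsigned long)a[i]) * seed) % limit;
    return result; /* 462 for {0, 1, 2, 3}, as computed above */
}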
For an array of N unique integers from 1 to N, just adding up the elements will always be N*(N+1)/2. Therefore the only difference is in the ordering. If by "checksum" you imply that you tolerate some collisions, then one way is to sum the differences between consecutive numbers. So for example, the delta checksum for {1,2,3,4} is 1+1+1=3, but the delta checksum for {4,3,2,1} is -1+-1+-1=-3.
No requirements were given for collision rates or computational complexity, but if the above doesn't suit, then I recommend a position dependent checksum
From what I understand, your array contains a permutation of the numbers from 0 to N-1. One checksum which will be useful is the rank of the array in its lexicographic ordering. What does that mean? Given 0, 1, 2
You have the possible permutations
1: 0, 1, 2
2: 0, 2, 1
3: 1, 0, 2
4: 1, 2, 0
5: 2, 0, 1
6: 2, 1, 0
The checksum will be that first number (the rank), computed when you create the array. There are solutions proposed in
Find the index of a given permutation in the list of permutations in lexicographic order
which can be helpful, although it seems the best algorithm there was of quadratic complexity. To improve it to linear complexity you should cache the values of the factorials beforehand.
The advantage? ZERO collision.
EDIT: Computation
The value is like the evaluation of a polynomial, where factorials are used for the monomials instead of powers. So the function is
f(X0, ..., Xn-1) = X0 * 0! + X1 * 1! + X2 * 2! + ... + Xn-1 * (n-1)!
The idea is to use each value to narrow down a sub-range of permutations; with enough values you pinpoint a unique permutation.
Now for the implementation (like that of a polynomial; a sketch follows this list):
pre-compute 0! to (n-1)! at the beginning of the program
each time you set an array, use f(elements) to compute its checksum
you compare in O(1) using this checksum
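A sketch of that rank computation: the Lehmer code evaluated against precomputed factorials. The counting loop is kept quadratic for clarity, and n <= 20 is assumed so the rank fits in 64 bits (the 25-element case needs a wider integer, per the 25! observation earlier):

#include <stddef.h>
#include <stdint.h>

/* Sketch: lexicographic rank of a permutation a[] of 0..n-1.
 * rank = sum over i of (# of j > i with a[j] < a[i]) * (n-1-i)!
 * Distinct permutations get distinct ranks: zero collisions. */
uint64_t perm_rank(const int a[], size_t n) {
    uint64_t fact[21];
    fact[0] = 1;
    for (size_t i = 1; i < n; i++)
        fact[i] = fact[i - 1] * (uint64_t)i; /* precomputed factorials */

    uint64_t rank = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t smaller = 0;
        for (size_t j = i + 1; j < n; j++)
            if (a[j] < a[i])
                smaller++;
        rank += smaller * fact[n - 1 - i];
    }
    return rank;
}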
I am using a base-conversion algorithm to generate a permutation from a large integer (split into 32-bit words).
I use a relatively standard algorithm for this:
/* N = count, k is permutation index (0..N!-1), A[N] contains 0..N-1 */
i = 0;
while (N > 1) {
    swap A[i] and A[i + (k % N)]
    k = k / N
    N = N - 1
    i = i + 1
}
Unfortunately, the divide and modulo in each iteration add up, especially when moving to large integers. But it seems I could just use multiplication!
/* As before, N is count, k is index, A[N] contains 0..N-1 */
/* Split is arbitrarily 128 (bits), for my current choice of N */
/* "Adjust" is precalculated: (1 << Split) / (N!) */
a = k * Adjust; /* a can be treated as a fixed-point fraction */
i = 0;
while (N > 1) {
    a = a * N;
    index = a >> Split;
    a = a & ((1 << Split) - 1); /* actually, just zeroing a register */
    swap A[i] and A[i + index]
    N = N - 1
    i = i + 1
}
This is nicer, but doing large integer multiplies is still sluggish.
Question 1:
Is there a way of doing this faster?
Eg. Since I know that N*(N-1) is less than 2^32, could I pull out those numbers from one word, and merge in the 'leftovers'?
Or, is there a way to modify an arithmetic decoder to pull out the indices one at a time?
Question 2:
For the sake of curiosity: if I use multiplication to convert a number to base 10 without the adjustment, then the result is multiplied by (10^digits / 2^shift). Is there a tricky way to remove this factor while working with the decimal digits? Even with the adjustment factor, this seems like it would be faster; why wouldn't standard libraries use this instead of divide and mod?
Seeing that you are talking about numbers like 2^128/(N!), it seems that in your problem N is going to be rather small (N < 35 according to my calculations).
I suggest taking the original algorithm as a starting point; first switch the direction of the loop:
i = 2;
while (i < N) {
    swap A[N - 1 - i] and A[N - i + k % i]
    k = k / i
    i = i + 1
}
Now change the loop to do several permutations per iteration. I guess the speed of division is the same regardless of the number i, as long as i < 2^32.
Split the range 2...N-1 into sub-ranges so that the product of the numbers in each sub-range is less than 2^32:
2, 3, 4, ..., 12: product is 479001600
13, 14, ..., 19: product is 253955520
20, 21, ..., 26: product is 3315312000
27, 28, ..., 32: product is 652458240
33, 34, 35: product is 39270
Then, divide the long number k by the products instead of dividing by i. Each iteration will yield a remainder (less than 2^32) and a smaller number k. When you have the remainder, you can work with it in an inner loop using the original algorithm; which will now be faster because it doesn't involve long division.
Here is some code:
static const int rangeCount = 5;
static const int rangeLimit[rangeCount] = {13, 20, 27, 33, 36};
static uint32_t rangeProduct[rangeCount] = {
    479001600,
    253955520,
    3315312000,
    652458240,
    39270
};

for (int rangeIndex = 0; rangeIndex < rangeCount; ++rangeIndex)
{
    // The following two lines involve long division;
    // math libraries probably calculate both quotient and remainder
    // in one function call
    uint32_t rangeRemainder = k % rangeProduct[rangeIndex];
    k /= rangeProduct[rangeIndex];

    // A range starts where the previous range ended
    int rangeStart = (rangeIndex == 0) ? 2 : rangeLimit[rangeIndex - 1];

    // Iterate over range
    for (int i = rangeStart; i < rangeLimit[rangeIndex] && i < n; ++i)
    {
        // The following two lines involve a 32-bit division;
        // it produces both quotient and remainder in one Pentium instruction
        int remainder = rangeRemainder % i;
        rangeRemainder /= i;
        std::swap(permutation[n - 1 - i], permutation[n - i + remainder]);
    }
}
Of course, this code can be extended into more than 128 bits.
Another optimization could involve extraction of powers of 2 from the products of ranges; this might add a slight speedup by making the ranges longer. Not sure whether this is worthwhile (maybe for large values of N, like N=1000).
I don't know about better algorithms, but the ones you use seem pretty simple, so I don't really see how you can optimize the algorithm itself.
You may use alternative approaches:
use ASM (assembler): from my experience, after a long time trying to figure out how a certain algorithm should be written in ASM, it ended up being slower than the version generated by the compiler :) Probably because the compiler also knows how to lay out the code so the CPU cache is used more efficiently, and/or which instructions are actually faster in which situations (this was on GCC/Linux).
use multi-processing:
make your algorithm multithreaded, and make sure you run with the same number of threads as the number of available CPU cores (most CPUs nowadays have multiple cores / multithreading)
make your algorithm capable of running on multiple machines on a network, and devise a way of sending these numbers to the machines, so you can use their CPU power.
srand(time(NULL));
printf("%d", rand());
Gives a high-range random number (0-32000ish), but I only need about 0-63 or 0-127, though I'm not sure how to go about it. Any help?
rand() % (max_number + 1 - minimum_number) + minimum_number
So, for 0-65:
rand() % (65 + 1 - 0) + 0
(obviously you can leave the 0 off, but it's there for completeness).
Note that this will bias the randomness slightly, but probably not anything to be concerned about if you're not doing something particularly sensitive.
You can use this:
// note: the name random collides with POSIX random(); rename if that's in scope
int random(int min, int max) {
    return min + rand() / (RAND_MAX / (max - min + 1) + 1);
}
From the:
comp.lang.c FAQ list · Question 13.16
Q: How can I get random integers in a certain range?
A: The obvious way,
rand() % N /* POOR */
(which tries to return numbers from 0 to N-1) is poor, because the low-order bits of many random number generators are distressingly non-random. (See question 13.18.) A better method is something like
(int)((double)rand() / ((double)RAND_MAX + 1) * N)
If you'd rather not use floating point, another method is
rand() / (RAND_MAX / N + 1)
If you just need to do something with probability 1/N, you could use
if(rand() < (RAND_MAX+1u) / N)
All these methods obviously require knowing RAND_MAX (which ANSI #defines in <stdlib.h>), and assume that N is much less than RAND_MAX. When N is close to RAND_MAX, and if the range of the random number generator is not a multiple of N (i.e. if (RAND_MAX+1) % N != 0), all of these methods break down: some outputs occur more often than others. (Using floating point does not help; the problem is that rand returns RAND_MAX+1 distinct values, which cannot always be evenly divvied up into N buckets.) If this is a problem, about the only thing you can do is to call rand multiple times, discarding certain values:
unsigned int x = (RAND_MAX + 1u) / N;
unsigned int y = x * N;
unsigned int r;
do {
    r = rand();
} while (r >= y);
return r / x;
For any of these techniques, it's straightforward to shift the range, if necessary; numbers in the range [M, N] could be generated with something like
M + rand() / (RAND_MAX / (N - M + 1) + 1)
(Note, by the way, that RAND_MAX is a constant telling you what the fixed range of the C library rand function is. You cannot set RAND_MAX to some other value, and there is no way of requesting that rand return numbers in some other range.)
If you're starting with a random number generator which returns floating-point values between 0 and 1 (such as the last version of PMrand alluded to in question 13.15, or drand48 in question 13.21), all you have to do to get integers from 0 to N-1 is multiply the output of that generator by N:
(int)(drand48() * N)
Additional links
References: K&R2 Sec. 7.8.7 p. 168
PCS Sec. 11 p. 172
Quote from: http://c-faq.com/lib/randrange.html
Check here: http://c-faq.com/lib/randrange.html
For any of these techniques, it's straightforward to shift the range, if necessary; numbers in the range [M, N] could be generated with something like
M + rand() / (RAND_MAX / (N - M + 1) + 1)
Taking the modulo of the result, as the other posters have asserted, will give you something that's nearly random, but not perfectly so.
Consider this extreme example, suppose you wanted to simulate a coin toss, returning either 0 or 1. You might do this:
isHeads = ( rand() % 2 ) == 1;
Looks harmless enough, right? Suppose that RAND_MAX is only 2, so rand() returns 0, 1 or 2. It's much higher of course, but the point here is that there's a bias whenever the modulus doesn't evenly divide the number of values rand() can return. If you want high-quality random numbers, you're going to have a problem.
Consider my example. The possible outcomes are:
rand() | freq. | rand() % 2
-------+-------+-----------
   0   |  1/3  |     0
   1   |  1/3  |     1
   2   |  1/3  |     0
Hence, "tails" will happen twice as often as "heads"!
Mr. Atwood discusses this matter in this Coding Horror Article
The naive way to do it is:
int myRand = rand() % 66; // for 0-65
This will likely be a very slightly non-uniform distribution (depending on your maximum value), but it's pretty close.
To explain why it's not quite uniform, consider this very simplified example:
Suppose RAND_MAX is 3 (so rand() can return four values, 0 through 3) and you want a number from 0-2. The possible values you can get are shown in this table:
rand() | rand() % 3
---------+------------
0 | 0
1 | 1
2 | 2
3 | 0
See the problem? If your maximum value is not an even divisor of RAND_MAX, you'll be more likely to choose small values. However, since RAND_MAX is generally 32767, the bias is likely to be small enough to get away with for most purposes.
There are various ways to get around this problem; see here for an explanation of how Java's Random handles it.
rand() will return numbers between 0 and RAND_MAX, which is at least 32767.
If you want to get a number within a range, you can just use modulo.
int value = rand() % 66; // 0-65
For more accuracy, check out this article. It discusses why modulo is not necessarily good (bad distributions, particularly on the high end), and provides various options.
As others have noted, simply using a modulus will skew the probabilities for individual numbers so that smaller numbers are preferred.
A very ingenious and good solution to that problem is used in Java's java.util.Random class:
public int nextInt(int n) {
    if (n <= 0)
        throw new IllegalArgumentException("n must be positive");

    if ((n & -n) == n)  // i.e., n is a power of 2
        return (int)((n * (long)next(31)) >> 31);

    int bits, val;
    do {
        bits = next(31);
        val = bits % n;
    } while (bits - val + (n-1) < 0);
    return val;
}
It took me a while to understand why it works and I leave that as an exercise for the reader but it's a pretty concise solution which will ensure that numbers have equal probabilities.
The important part in that piece of code is the condition for the while loop, which rejects numbers that fall in the range of numbers which otherwise would result in an uneven distribution.
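For C, a hedged translation of the same rejection idea against rand(): the Java overflow test becomes an explicit comparison, since signed overflow in C is undefined rather than wrapping. Assumes 0 < n <= RAND_MAX:

#include <stdlib.h>

/* Sketch: uniform value in [0, n-1], mirroring Java's nextInt(n).
 * bits - val is the bottom of the bucket bits fell into; reject the
 * draw when that bucket would extend past RAND_MAX (an incomplete
 * bucket, whose values would otherwise occur too often). */
int next_int(int n) {
    int bits, val;
    do {
        bits = rand();
        val = bits % n;
    } while (bits - val > RAND_MAX - (n - 1));
    return val;
}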
double scale = 1.0 / ((double) RAND_MAX + 1.0);
int min, max;
...
rval = (int)(rand() * scale * (max - min + 1) + min);
Updated to not use a #define
double RAND(double min, double max)
{
    return (double)rand() / (double)RAND_MAX * (max - min) + min;
}
If you don't overly care about the 'randomness' of the low-order bits, just use rand() % HI_VAL.
Also:
(double)rand() / (double)RAND_MAX; // lazy way to get [0.0, 1.0)
This answer does not focus on the randomness but on the arithmetic order.
To get a number within a range, usually we can do it like this:
// the range is between [aMin, aMax]
double f = (double)rand() / RAND_MAX;
double result = aMin + f * (aMax - aMin);
However, there is a possibility that (aMax - aMin) overflows. E.g. aMax = 1, aMin = -DBL_MAX. A safer way is to write like this:
// the range is between [aMin, aMax]
double f = (double)rand() / RAND_MAX;
double result = aMin - f * aMin + f * aMax;
Based on this concept, something like this may cause a problem.
rand() % (max_number + 1 - minimum_number) + minimum_number
// 1. max_number + 1 might overflow
// 2. max_number + 1 - min_number might overflow
If you care about the quality of your random numbers, don't use rand(). Use some other PRNG, like the Mersenne Twister (http://en.wikipedia.org/wiki/Mersenne_twister) or one of the other high-quality PRNGs out there, and then just go with the modulus.
Just to add some extra detail to the existing answers.
The mod % operation will always perform a complete division and therefore yield a remainder less than the divisor.
x % y = x - (y * floor((x/y)))
An example of a random range finding function with comments:
uint32_t rand_range(uint32_t n, uint32_t m) {
    // size of range, inclusive
    const uint32_t length_of_range = m - n + 1;

    // add n so that we don't return a number below our range
    return (uint32_t)(rand() % length_of_range + n);
}
Another interesting property as per the above:
x % y = x, if x < y
const uint32_t value = rand_range(1, RAND_MAX); // results in rand() % RAND_MAX + 1
// TRUE for all x = RAND_MAX, where x is the result of rand()
assert(value == RAND_MAX);
2 cents (ok 4 cents):
n = rand()
x = result
l = limit
n/RAND_MAX = x/l
Refactor:
(l/1)*(n/RAND_MAX) = (x/l)*(l/1)
Gives:
x = l*n/RAND_MAX
int randn(int limit)
{
    /* caveat: limit * rand() can overflow int, and the result is limit
     * itself (one past the range) in the rare case rand() == RAND_MAX */
    return limit * rand() / RAND_MAX;
}

int i;
for (i = 0; i < 100; i++) {
    printf("%d ", randn(10));
    if (!(i % 16)) printf("\n");
}
> test
0
5 1 8 5 4 3 8 8 7 1 8 7 5 3 0 0
3 1 1 9 4 1 0 0 3 5 5 6 6 1 6 4
3 0 6 7 8 5 3 8 7 9 9 5 1 4 2 8
2 7 8 9 9 6 3 2 2 8 0 3 0 6 0 0
9 2 2 5 6 8 7 4 2 7 4 4 9 7 1 5
3 7 6 5 3 1 2 4 8 5 9 7 3 1 6 4
0 6 5
Just using rand() will give you the same random numbers every time you run the program, i.e. the first run produces some x, y and z, and running the program again produces the same x, y and z, as I have observed.
The solution I found to keep it unique every time is to use srand():
Here is the additional code,
#include <stdlib.h>
#include <time.h>

time_t t;
srand((unsigned) time(&t));
int rand_number = rand() % (65 + 1 - 0) + 0; // i.e. random numbers in the range 0-65
To set a range, you can use the formula: rand() % (max_number + 1 - minimum_number) + minimum_number
Hope it helps!
You can restrict the range by applying the modulo operator % to the result of rand(). For example:
rand() % 50
will give you a random number in the range 0 to 49. For your case, use rand() % 64 for 0-63 or rand() % 128 for 0-127.
I think the following does it semi-right. It's been a while since I've touched C. The idea is to use division, since modulus doesn't always give random results. I added 1 to RAND_MAX since there are that many possible values coming from rand, including 0. And since the range is also 0-inclusive, I added 1 there too. I think the math is arranged correctly to avoid integer-math problems (note the parentheses: without them, the division would bind tighter than the addition).

#define MK_DIVISOR(max) ((int)(((unsigned int)RAND_MAX + 1) / ((max) + 1)))

num = rand() / MK_DIVISOR(65);
Simpler alternative to @Joey's answer. If you decide to go with the % method, you need to do a reroll to get the correct distribution. However, you can skip rerolls most of the time, because you only need to avoid numbers that fall in the last bucket:
int rand_less_than(int max) {
    int last_bucket_min = RAND_MAX - RAND_MAX % max;
    int value;

    do {
        value = rand();
    } while (last_bucket_min <= value);

    return value % max;
}
See @Jaroslaw Pawlak's article for an explanation with diagrams: Random number generator using modulo
In case of RAND_MAX < max, you need to expand the generator: Expand a random range from 1–5 to 1–7
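Sketched for the classic 1-5 to 1-7 case from that link; rand5() is a hypothetical generator, uniform on [0, 4]:

int rand5(void); /* hypothetical: uniform on [0, 4] */

/* Sketch: two rand5() draws give a value uniform on [0, 24];
 * keep only [0, 20], which splits into 7 equal buckets of 3. */
int rand7(void) {
    int v;
    do {
        v = 5 * rand5() + rand5();
    } while (v >= 21);
    return v / 3; /* uniform on [0, 6]; add 1 for [1, 7] */
}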
#include <stdio.h>
#include <stdlib.h>
#include <time.h> // this line is necessary

int main() {
    srand(time(NULL)); // this line is necessary
    int random_number = rand() % 65; // [0-64]
    return 0;
}
For any range between min_num and max_num:
int random_number = rand() % (max_num + 1 - min_num) + min_num;