Generating a random number between [-1, 1] in C?

I have seen many questions on SO about this particular subject, but none of them answers it for me, so I thought I'd ask.
I want to generate a random number in the range [-1, 1]. How can I do this?

Use -1+2*((float)rand())/RAND_MAX
rand() generates integers in the range [0, RAND_MAX] inclusive, so ((float)rand())/RAND_MAX returns a floating-point number in [0, 1]. Multiplying by 2 stretches that to [0, 2], and adding -1 shifts it to [-1, 1].
EDIT: (adding relevant portions of the comment section)
On the limitations of this method:
((float)rand())/RAND_MAX returns a fraction from 0 to 1. Since the target range from -1 to 1 has a width of 2, I multiply that fraction by 2 and then add the minimum value you want, -1. This also tells you about the granularity of your random numbers: you will only ever see RAND_MAX + 1 distinct values.
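For illustration, here is the whole thing as a minimal, self-contained program (the srand seeding and the demo loop are my additions, not part of the answer above):
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    srand((unsigned)time(NULL));   /* seed once per program run */
    for (int i = 0; i < 5; ++i) {
        /* rand()/RAND_MAX is in [0,1]; scale by 2, shift by -1 to get [-1,1] */
        float r = -1 + 2 * ((float)rand()) / RAND_MAX;
        printf("%f\n", r);
    }
    return 0;
}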

If all you have is the Standard C library, then other people's answers are sensible. If you have POSIX functionality available to you, consider using the drand48() family of functions. In particular:
#define _XOPEN_SOURCE 600 /* Request non-standard functions */
#include <stdlib.h>
double f = +1.0 - 2.0 * drand48();
double g = -1.0 + 2.0 * drand48();
Note that the manual says:
The drand48() and erand48() functions shall return non-negative, double-precision, floating-point values, uniformly distributed over the interval [0.0,1.0).
If you strictly need [-1.0,+1.0] (as opposed to [-1.0,+1.0)), then you face a very delicate problem with how to extend the range.
The drand48() functions give you considerably more randomness than the typical implementation of rand(). However, if you need cryptographic randomness, none of these are appropriate; you need to look for 'cryptographically strong PRNG' (PRNG = pseudo-random number generator).
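For completeness, a runnable sketch of the drand48() route (the srand48() seeding and the loop are illustrative additions):
#define _XOPEN_SOURCE 600   /* request the POSIX drand48() family */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    srand48((long)time(NULL));   /* seed the 48-bit generator once */
    for (int i = 0; i < 5; ++i) {
        /* drand48() is in [0,1), so g is in [-1,1) */
        double g = -1.0 + 2.0 * drand48();
        printf("%f\n", g);
    }
    return 0;
}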

I had a similar question a while back and thought it might be more efficient to generate the fractional part directly. Some searching turned up an interesting fast floating-point trick: with some intimate knowledge of the internal representation of a float, you can build one without floating-point division, multiplication, or an int->float cast:
float sfrand( void )
{
    unsigned int a = (rand() << 16) | rand();  // we use the bottom 23 bits of the int,
                                               // so one 16-bit rand() won't cut it
    a = (a & 0x007fffff) | 0x40000000;         // random mantissa, exponent fixed to [2,4)
    return( *((float*)&a) - 3.0f );            // reinterpret the bits, then shift to [-1,1)
}
The first part generates a random float in [2^1, 2^2); subtract 3 and you have [-1, 1). This may of course be too intimate for some applications/developers, but it was just what I was looking for. The mechanism works well for any range whose width is a power of 2.
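If the pointer cast makes you nervous (it formally breaks C's strict-aliasing rules), the same bit trick can be expressed through memcpy. This variant is my own sketch, assuming a 32-bit unsigned int and IEEE-754 single-precision floats:
#include <stdlib.h>
#include <string.h>

/* Same trick as sfrand(), but type-punning through memcpy. */
float sfrand_memcpy(void)
{
    unsigned int a = ((unsigned int)rand() << 16) | (unsigned int)rand();
    a = (a & 0x007fffffu) | 0x40000000u;  /* random mantissa, exponent fixed to [2,4) */
    float f;
    memcpy(&f, &a, sizeof f);             /* reinterpret the bits as a float */
    return f - 3.0f;                      /* [2,4) - 3 = [-1,1) */
}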

For starters, you'll need the C library function rand(). This is in the stdlib.h header file, so you should put:
#include <stdlib.h>
near the beginning of your code. rand() will generate a random integer between zero and RAND_MAX, so dividing it by RAND_MAX / 2 will give you a number between zero and 2 inclusive. Subtract one, and you're onto your target range of -1 to 1.
However, if you simply do int n = rand() / (RAND_MAX / 2), you will find you don't get the answer you expect. That's because both rand() and RAND_MAX / 2 are integers, so integer arithmetic is used. To stop this from happening, some people use a float cast, but I would recommend avoiding casts by multiplying by 1.0.
You should also seed your random number generator using the srand() function. To get a different sequence each run, people often seed from the clock with srand(time(0)) (which needs time.h).
So, overall we have:
#include <stdlib.h>
#include <time.h>

srand(time(0));
double r = 1.0 * rand() / (RAND_MAX / 2) - 1;

While the accepted answer is fine in many cases, it leaves out "every other number", because it stretches an already discrete set of values by a factor of 2 to cover the [-1, 1] interval. By analogy, if you had a random number generator producing integers in [0, 10] and you wanted [0, 20], multiplying by 2 would span the range but could not cover it (it would skip all the odd numbers).
It probably has sufficiently fine grain for your needs, but the drawback can be statistically significant (and detrimental) in many applications, particularly Monte Carlo simulations and systems with sensitive dependence on initial conditions.
A method able to generate any representable floating-point number from -1 to 1 inclusive must effectively generate the digit sequence a1.a2a3a4a5... out to the limit of your floating-point precision; that is the only way to reach every possible float in the range (i.e., following the definition of the real numbers).

From the "The C Standard Library"
int rand(void) - Returns pseudo-random number in range 0 to RAND_MAX
RAND_MAX - Maximum value returned by rand().
So:
rand() will return a pseudo-random number in range 0 to RAND_MAX
rand() / (double) RAND_MAX will return a pseudo-random number in range 0 to 1
2 * (rand() / (double) RAND_MAX) will return a pseudo-random number in range 0 to 2
2 * (rand() / (double) RAND_MAX) - 1 will return a pseudo-random number in range -1 to 1
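Wrapped up as a function, that chain of steps might look like this (a minimal sketch; the function name is mine):
#include <stdlib.h>

/* Returns a pseudo-random double in [-1, 1]. */
double rand_minus1_to_1(void)
{
    return 2 * (rand() / (double)RAND_MAX) - 1;
}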

As others already noted, any attempts to simply transform the range of 'rand()' function from [0, RAND_MAX] into the desired [-1, +1] will produce a random number generator that can only generate a discrete set of floating-point values. For a floating-point generator the density of these values might be insufficient in some applications (if the implementation-defined value of RAND_MAX is not sufficiently large). If this is a problem, one can increase the aforementioned density exponentially by using two or more 'rand()' calls instead of one.
For example, by combining the results of two consecutive calls to 'rand()' one can obtain a pseudo-random number in [0, (RAND_MAX + 1)^2 - 1] range
#define RAND_MAX2 ((RAND_MAX + 1ul) * (RAND_MAX + 1ul) - 1)
unsigned long r2 = (unsigned long) rand() * (RAND_MAX + 1ul) + rand();
and later use the same method to transform it into a floating-point number in [-1, +1] range
double dr2 = r2 * 2.0 / RAND_MAX2 - 1;
Using this method one can chain together as many 'rand()' calls as necessary, keeping an eye on integer overflow, of course.
As a side note, this method of combining consecutive 'rand()' calls doesn't produce very high quality pseudo-random number generators, but it might work perfectly well for many purposes.
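Putting the fragments together, a complete sketch might look like the following; it assumes (RAND_MAX + 1)^2 - 1 fits in an unsigned long, which holds on typical 64-bit Unix systems but not necessarily elsewhere:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define RAND_MAX2 ((RAND_MAX + 1ul) * (RAND_MAX + 1ul) - 1)

int main(void)
{
    srand((unsigned)time(NULL));
    for (int i = 0; i < 5; ++i) {
        /* combine two rand() calls into one value in [0, RAND_MAX2] */
        unsigned long r2 = (unsigned long)rand() * (RAND_MAX + 1ul) + rand();
        double dr2 = r2 * 2.0 / RAND_MAX2 - 1;   /* map into [-1, +1] */
        printf("%f\n", dr2);
    }
    return 0;
}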

Related

Random integers in C, how bad is rand()%N compared to integer arithmetic? What are its flaws?

EDIT:
My question is: rand()%N is considered very bad, whereas the use of integer arithmetic is considered superior, but I cannot see the difference between the two.
People always mention:
low bits are not random in rand()%N,
rand()%N is very predictable,
you can use it for games but not for cryptography
Can someone explain if any of these points are the case here and how to see that?
The idea that the lower bits are not random should make the permutation entropy (PE) of the two cases I show differ, but that's not what happens.
I guess many like me would always avoid rand(), or rand()%N, because we've been taught that it is pretty bad. I was curious to see how "wrong" random integers generated with C's rand()%N actually are. This is also a follow-up to Ryan Reich's answer in How to generate a random integer number from within a range.
The explanation there sounds very convincing, to be honest; nevertheless, I thought I'd give it a try. So I compared the distributions in a VERY naive way: I ran both random generators for different numbers of samples and domains. I didn't see the point of computing a density instead of a histogram, so I just computed histograms, and, just by looking, I would say they both look equally uniform. Regarding the other point that was raised, about actual randomness (despite being uniformly distributed), I (again naively) computed the permutation entropy for these runs. It comes out the same for both sample sets, which tells us that there's no difference between the two regarding the ordering of occurrences.
So, for many purposes, it seems to me that rand()%N would be just fine. How can we see its flaws?
Here I show a very simple, inefficient, and not very elegant (but, I think, correct) way of computing these samples and getting the histograms together with the permutation entropies.
I show plots for domains (0,i) with i in {5,10,25,50,100} for different numbers of samples:
There's not much to see in the code, I guess, so I will leave both the C and the Matlab code for replication purposes.
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
int main(int argc, char *argv[]){
    unsigned long max = atoi(argv[2]);
    int samples = atoi(argv[3]);
    srand(time(NULL));
    if (atoi(argv[1]) == 1) {
        for (int i = 0; i < samples; ++i)
            printf("%lu\n", rand() % (max + 1));   /* result is unsigned long, so %lu */
    } else {
        for (int i = 0; i < samples; ++i) {
            unsigned long
                num_bins = (unsigned long) max + 1,
                num_rand = (unsigned long) RAND_MAX + 1,
                bin_size = num_rand / num_bins,
                defect   = num_rand % num_bins;
            long x;
            do {
                x = rand();
            } while (num_rand - defect <= (unsigned long) x);
            printf("%lu\n", x / bin_size);
        }
    }
    return 0;
}
And here is the Matlab code to plot this and compute the PEs (the recursion for the permutations is taken from: https://www.mathworks.com/matlabcentral/answers/308255-how-to-generate-all-possible-permutations-without-using-the-function-perms-randperm):
system('gcc randomTest.c -o randomTest.exe;');
max = 100;
samples = max*10000;
trials = 200;
system(['./randomTest.exe 1 ' num2str(max) ' ' num2str(samples) ' > file1'])
system(['./randomTest.exe 2 ' num2str(max) ' ' num2str(samples) ' > file2'])
a1=load('file1');
a2=load('file2');
uni = figure(1);
title(['Samples: ' num2str(samples)])
subplot(1,3,1)
h1 = histogram(a1,max+1);
title('rand%(max+1)')
subplot(1,3,2)
h2 = histogram(a2,max+1);
title('Integer arithmetic')
as=[a1,a2];
ns=3:8;
H = nan(numel(ns),size(as,2));
for op=1:size(as,2)
    x = as(:,op);
    for n=ns
        sequenceOcurrence = zeros(1,factorial(n));
        sequences = myperms(1:n);
        sequencesArrayIdx = sum(sequences.*10.^(size(sequences,2)-1:-1:0),2);
        for i=1:numel(x)-n
            [~,sequenceOrder] = sort(x(i:i+n-1));
            out = sequenceOrder'*10.^(numel(sequenceOrder)-1:-1:0).';
            sequenceOcurrence(sequencesArrayIdx == out) = sequenceOcurrence(sequencesArrayIdx == out) + 1;
        end
        chunks = length(x) - n + 1;
        ps = sequenceOcurrence/chunks;
        hh = sum(ps(logical(ps)).*log2(ps(logical(ps))));
        H(n,op) = hh/log2(factorial(n));
    end
end
subplot(1,3,3)
plot(ns,H(ns,:),'--*','linewidth',2)
ylabel('PE')
xlabel('Sequence length')
filename = ['all_' num2str(max) '_' num2str(samples) ];
export_fig(filename)
Due to the way modulo arithmetic works, if N is significant compared to RAND_MAX, doing %N will make some values considerably more likely than others. Imagine rand() yields 12 equally likely values, 0 through 11 (a toy RAND_MAX of 11), and N is 9. Then the chance of getting one of 0, 1, or 2 is 0.5, and the chance of getting one of 3, 4, 5, 6, 7, or 8 is also 0.5. The result is that you're twice as likely to get a 0 as a 4.
If N is an exact divisor of RAND_MAX + 1 this distribution problem doesn't happen, and if N is very small compared to RAND_MAX the issue becomes less noticeable. RAND_MAX may not be a particularly large value (possibly 2^15 - 1), making this problem worse than you might expect.
The alternative of doing (rand() * N) / (RAND_MAX + 1) also doesn't give an even distribution; however, it will be every mth value (for some m) that is more likely to occur, rather than the more likely values all sitting at the low end of the distribution.
If N is 75% of RAND_MAX, then the values in the bottom third of your distribution are twice as likely as the values in the top two thirds (since that is where the extra values map to).
The quality of rand() depends on your system's implementation. Some implementations have been very poor; OS X's man pages declare rand obsolete. The Debian man page says the following:
The versions of rand() and srand() in the Linux C Library use the same
random number generator as random(3) and srandom(3), so the lower-order
bits should be as random as the higher-order bits. However, on older
rand() implementations, and on current implementations on different
systems, the lower-order bits are much less random than the higher-
order bits. Do not use this function in applications intended to be
portable when good randomness is needed. (Use random(3) instead.)
Both approaches have their pitfalls, and your graphs are little more than a pretty verification of the central limit theorem! For a sensible implementation of rand():
% N suffers from a "pigeon-holing" effect if 1u + RAND_MAX is not a multiple of N
/((RAND_MAX + 1u)/N) does not, in general, evenly distribute the return of rand() across your range, due to integer truncation effects.
On balance, if N is small relative to RAND_MAX, I'd plump for % for its tractability. In any case, test your generator to see if it has the appropriate statistical properties for your application.
rand() % N is considered extremely poor not because the distribution is bad, but because the randomness is poor-to-nonexistent. (If anything the distribution will be too good.)
If N is not small with respect to RAND_MAX, both
rand() % N
and
rand() / (RAND_MAX / N + 1)
will have more or less the same, poor distribution -- certain values will occur with significantly higher probability than others.
Looking at distribution histograms won't show you that for some implementations, rand() % N has a much, much worse problem -- to show that you'd have to perform some correlations with previous values. (For example, try taking rand() % 2, then subtracting from the previous value you got, and plotting a histogram of the differences. If the difference is never 0, you've got a problem.)
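As a sketch of that experiment (my own illustration, not code from the question): count the successive differences of rand() % 2. A healthy generator gives roughly a 25/50/25 split over -1, 0, +1, while a generator whose low bit simply alternates leaves the 0 bin empty.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    srand((unsigned)time(NULL));
    long diff_count[3] = {0, 0, 0};   /* bins for differences -1, 0, +1 */
    int prev = rand() % 2;
    for (int i = 0; i < 1000000; ++i) {
        int cur = rand() % 2;
        diff_count[cur - prev + 1]++;
        prev = cur;
    }
    /* For a good generator, roughly 25% / 50% / 25%.
       If the middle bin is (near) zero, the low bit simply alternates. */
    printf("-1: %ld\n 0: %ld\n+1: %ld\n", diff_count[0], diff_count[1], diff_count[2]);
    return 0;
}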
I would like to say that the implementations for which rand()'s low-order bits aren't random are simply buggy. I'd like to think that all those buggy implementations would have disappeared by now. I'd like to think that programmers shouldn't have to worry about calling rand()%N any more. But, unfortunately, my wishes don't change the fact that this seems to be one of those bugs that never get fixed, meaning that programmers do still have to worry.
See also the C FAQ list, question 13.16.

Eliminating modulo bias: how is it achieved in the arc4random_uniform() function?

Modulo bias is a problem that arises when naively using the modulo operation to get pseudorandom numbers smaller than a given "upper bound".
Therefore as a C programmer I am using a modified version of the arc4random_uniform() function to generate evenly distributed pseudorandom numbers.
The problem is I do not understand how the function works, mathematically.
This is the function's explanatory comment, followed by a link to the full source code:
/*
* Calculate a uniformly distributed random number less than upper_bound
* avoiding "modulo bias".
*
* Uniformity is achieved by generating new random numbers until the one
* returned is outside the range [0, 2**32 % upper_bound). This
* guarantees the selected random number will be inside
* [2**32 % upper_bound, 2**32) which maps back to [0, upper_bound)
* after reduction modulo upper_bound.
*/
http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/lib/libc/crypt/arc4random_uniform.c?rev=1.1&content-type=text/x-cvsweb-markup
From the comment above we can define:
[2^32 % upper_bound, 2^32) - interval A
[0, upper_bound) - interval B
In order to work, the function relies on the fact that interval A maps to interval B.
My question is: mathematically, how come the numbers in interval A map uniformly to the ones in interval B? And is there a proof for this?
Sometimes it helps to start with an easily understood example, and then generalize from there. To keep things simple, let's imagine that arc4random returns a uint8_t instead of a uint32_t, so the output from arc4random is a number in the interval [0,256). And let's choose an upper_bound of 7.
Note that 7 does not divide evenly into 256
256 = 7 * 36 + 4
That means that naively using the modulo operation to get pseudorandom numbers smaller than 7 would result in the following probability distribution
37/256 for outcomes 0,1,2,3
36/256 for outcomes 4,5,6
That's what's known as modulo bias, outcomes 0,1,2,3 are more likely than outcomes 4,5,6.
To avoid modulo bias we could simply reject the values 252, 253, 254, 255, and generate a new number until the result is in the interval [0, 252). All the numbers in [0, 252) have equal probability (rejecting the higher numbers doesn't affect the distribution of the lower numbers). And since 7 divides evenly into 252, the resulting probability distribution is uniform:
36/252 for each of the outcomes 0,1,2,3,4,5,6
That's essentially what arc4random_uniform does, except that arc4random_uniform rejects numbers at the bottom of the range. Specifically, interval A would be
[2^8 % 7, 2^8) which is [4, 256)
After generating a number (call it N) in the interval [4,256) the final calculation is
outcome = N % 7
There are 252 numbers in the interval [4,256), and since 252 is a multiple of 7, every outcome on the interval [0,7) has equal probability.
That's how arc4random_uniform works, it rejects/retries on a small range of numbers, and the count of numbers in the remaining range is a multiple of the upper_bound. (Since the upper_bound is typically a small number compared to 2^32, the odds of having multiple retries for a single outcome is quite small.)
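Here is a sketch of that rejection logic in plain C. It is an illustration of the idea rather than the OpenBSD source, and random_u32() is a placeholder for a generator returning uniform 32-bit values:
#include <stdint.h>

/* random_u32() is assumed: a generator returning uniformly
   distributed uint32_t values, e.g. a CSPRNG. */
uint32_t random_u32(void);

uint32_t uniform_below(uint32_t upper_bound)
{
    if (upper_bound < 2)
        return 0;                      /* only one possible outcome */
    /* -upper_bound wraps to 2^32 - upper_bound, and
       (2^32 - ub) % ub == 2^32 % ub, so min is 2^32 % upper_bound */
    uint32_t min = -upper_bound % upper_bound;
    uint32_t r;
    do {
        r = random_u32();
    } while (r < min);                 /* reject the biased range [0, 2^32 % ub) */
    return r % upper_bound;
}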
But do you really care about modulo bias? In most cases, the answer is, "No". Consider our example with an upper bound of 7. The probability distribution for the naive modulo implementation is
613566757 / 4294967296 for outcomes 0,1,2,3
613566756 / 4294967296 for outcomes 4,5,6
which is a modulo bias of less than 0.0000002%.
So there's your choice: either spend a minuscule amount of time on retries to get a perfect distribution, or accept a minuscule error in the probability distribution to avoid the retries.

A line which randomizes a number between two values

My problem this time is not using a line of code but understanding it.
I received this line from my teacher to randomize a number between the MIN and MAX values, and it works perfectly, but I have tried to understand exactly how, and I just couldn't.
I would be happy if anyone could explain it to me step by step (please note I'm not 100% sure how the rand() function works).
Thanks!
int number = (rand() % (DICE_MAX - DICE_MIN +1)) + DICE_MIN; // Randomizing a value between 'DICE_MIN' and 'DICE_MAX', which are defined at the top of this program.
The function rand() generates a random (well, pseudo-random, to be precise) number. The int returned from it has a large range, so you need to scale it to the necessary range.
Assuming DICE_MIN to be 1 and DICE_MAX to be 6, you need to generate random integers in the range [1, 6]. There are 6 numbers in the range, and DICE_MAX - DICE_MIN + 1 = 6. So whatever integer you get from rand() the value of rand() % (DICE_MAX - DICE_MIN + 1) will be in the range [0, 5]. Adding the minimum of the required range DICE_MIN to it shifts the range to [1, 6].
This is a very widely practiced technique for generating random numbers in a given range.
rand:
Function: Random number generator.
Include: stdlib.h
Syntax: int rand(void);
Return Value: The function rand returns the generated pseudo random number.
Description: The rand function generates an integer between 0 and RAND_MAX (a symbolic constant defined in stdlib.h). Standard C states that the value of RAND_MAX must be at least 32767. If rand truly produces integers at random, every number between 0 and RAND_MAX has an equal probability of being chosen each time rand is called.
How does it work?
Take the example of rolling a (six-sided) die. The remainder operator % is used in conjunction with rand as:
rand() % 6;
to produce integers in the range 0 to 5. This is called scaling, and the number 6 is called the scaling factor. But we need to generate numbers from 1 to 6, so we shift the range by adding 1 to the result (1 + rand() % 6).
In general
n = a + rand() % b;
where a is the shifting value (equal to the first number in the desired range of consecutive integers, i.e., the lower bound) and b is the width of the desired range of consecutive integers.
In your provided snippet
int number = (rand() % (DICE_MAX - DICE_MIN +1)) + DICE_MIN;
DICE_MAX - DICE_MIN + 1 is the desired width and DICE_MIN is the shifting value.
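For concreteness, a tiny runnable demo of the same shift-and-scale pattern (the DICE_MIN/DICE_MAX values here are illustrative):
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define DICE_MIN 1
#define DICE_MAX 6

int main(void)
{
    srand((unsigned)time(NULL));
    for (int i = 0; i < 5; ++i) {
        /* rand() % width gives [0, width-1]; adding DICE_MIN shifts it to [DICE_MIN, DICE_MAX] */
        int number = (rand() % (DICE_MAX - DICE_MIN + 1)) + DICE_MIN;
        printf("%d\n", number);
    }
    return 0;
}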
Further reading: Using rand().

Generating a uniform distribution of INTEGERS in C

I've written a C function that I think selects integers from a uniform distribution with range [rangeLow, rangeHigh], inclusive. This isn't homework--I'm just using this in some embedded systems tinkering that I'm doing for fun.
In my test cases, this code appears to produce an appropriate distribution. I'm not feeling fully confident that the implementation is correct, though.
Could someone do a sanity check and let me know if I've done anything wrong here?
//uniform_distribution returns an INTEGER in [rangeLow, rangeHigh], inclusive.
int uniform_distribution(int rangeLow, int rangeHigh)
{
    int myRand = (int)rand();
    int range = rangeHigh - rangeLow + 1; //+1 makes it [rangeLow, rangeHigh], inclusive.
    int myRand_scaled = (myRand % range) + rangeLow;
    return myRand_scaled;
}
//note: make sure rand() was already initialized using srand()
P.S. I searched for other questions like this. However, it was hard to filter out the small subset of questions that discuss random integers instead of random floating-point numbers.
Let's assume that rand() generates a uniformly-distributed value I in the range [0..RAND_MAX],
and you want to generate a uniformly-distributed value O in the range [L,H].
Suppose I is in the range [0..32767] and O is in the range [0..2].
According to your suggested method, O = I%3. Note that in the given range, there are 10923 numbers for which I%3=0, 10923 numbers for which I%3=1, but only 10922 numbers for which I%3=2. Hence your method does not map values of I onto O uniformly.
As another example, suppose O is in the range [0..32766].
According to your suggested method, O = I%32767. Now you'll get O=0 for both I=0 and I=32767. Hence 0 is twice as likely as any other value; your method is again non-uniform.
The suggested way to generate a uniform mapping is as follows:
Calculate the number of bits that are needed to store a random value in the range [L,H]:
unsigned int nRange = (unsigned int)H - (unsigned int)L + 1;
unsigned int nRangeBits = (unsigned int)ceil(log((double)nRange) / log(2.)); /* needs <math.h> */
Generate nRangeBits random bits
this can easily be implemented by shifting-right the result of rand()
Ensure that the generated number is not greater than H-L.
If it is, repeat step 2.
Now you can map the generated number into O just by adding L.
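A sketch of those steps in C (my own rendering of the procedure, assuming RAND_MAX is one less than a power of two and that the range needs no more bits than rand() supplies):
#include <math.h>
#include <stdlib.h>

/* Count the random bits rand() provides, assuming RAND_MAX is 2^k - 1. */
static unsigned int rand_bits(void)
{
    unsigned int bits = 0;
    unsigned long m = RAND_MAX;
    while (m > 0) { bits++; m >>= 1; }
    return bits;
}

int uniform_distribution_bits(int L, int H)
{
    unsigned int nRange = (unsigned int)H - (unsigned int)L + 1;                  /* step 1 */
    unsigned int nRangeBits = (unsigned int)ceil(log((double)nRange) / log(2.));
    unsigned int v;
    do {
        v = (unsigned int)rand() >> (rand_bits() - nRangeBits);                   /* step 2 */
    } while (v > nRange - 1u);                                                    /* step 3 */
    return (int)((unsigned int)L + v);                                            /* shift into [L, H] */
}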
On some implementations, rand() did not provide good randomness on its lower order bits, so the modulus operator would not provide very random results. If you find that to be the case, you could try this instead:
int uniform_distribution(int rangeLow, int rangeHigh) {
    double myRand = rand()/(1.0 + RAND_MAX);
    int range = rangeHigh - rangeLow + 1;
    int myRand_scaled = (myRand * range) + rangeLow;
    return myRand_scaled;
}
Using rand() this way will produce a bias as noted by Lior. But, the technique is fine if you can find a uniform number generator to calculate myRand. One possible candidate would be drand48(). This will greatly reduce the amount of bias to something that would be very difficult to detect.
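For example, a variant substituting drand48() might look like this (a sketch; on POSIX systems drand48() returns uniform doubles in [0.0, 1.0)):
#define _XOPEN_SOURCE 600   /* for drand48() on POSIX systems */
#include <stdlib.h>

int uniform_distribution_drand48(int rangeLow, int rangeHigh)
{
    int range = rangeHigh - rangeLow + 1;
    /* drand48() is uniform over [0.0, 1.0), so the product lies in
       [0, range) and truncation buckets it (almost) evenly */
    return (int)(drand48() * range) + rangeLow;
}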
However, if you need something cryptographically secure, you should use an algorithm outlined in Lior's answer, assuming your rand() is itself cryptographically secure (the default one is probably not, so you would need to find one). Below is a simplified implementation of what Lior described. Instead of counting bits, we assume the range falls within RAND_MAX, and compute a suitable multiple. Worst case, the algorithm ends up calling the random number generator twice on average per request for a number in the range.
int uniform_distribution_secure(int rangeLow, int rangeHigh) {
    int range = rangeHigh - rangeLow + 1;
    int secureMax = RAND_MAX - RAND_MAX % range;   // largest multiple of range within RAND_MAX
    int x;
    // secure_rand() stands for your cryptographically secure generator
    do x = secure_rand(); while (x >= secureMax);
    return rangeLow + x % range;
}
I think it is known that rand() is not very good. It just depends on how good a source of "random" data you need.
http://www.azillionmonkeys.com/qed/random.html
http://www.linuxquestions.org/questions/programming-9/generating-random-numbers-in-c-378358/
http://forums.indiegamer.com/showthread.php?9460-Using-C-rand%28%29-isn-t-as-bad-as-previously-thought
I suppose you could write a test then calculate the chi-squared value to see how good your uniform generator is:
http://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test
Depending on your use (don't use this for your online poker shuffler), you might consider a LFSR
http://en.wikipedia.org/wiki/Linear_feedback_shift_register
It may be faster if you just want some pseudo-random output. Also, supposedly they can be uniform, although I haven't studied the math enough to back up that claim.
A version which corrects the distribution errors (noted by Lior), uses the high bits returned by rand(), and relies only on integer math (if that's desirable):
int uniform_distribution(int rangeLow, int rangeHigh)
{
    int range = rangeHigh - rangeLow + 1; //+1 makes it [rangeLow, rangeHigh], inclusive.
    int copies = RAND_MAX/range;          // we can fit n-copies of [0...range-1] into RAND_MAX
    // Use rejection sampling to avoid distribution errors
    int limit = range*copies;
    int myRand = -1;
    while (myRand < 0 || myRand >= limit) {
        myRand = rand();
    }
    return myRand/copies + rangeLow;      // note that this involves the high bits
}
//note: make sure rand() was already initialized using srand()
This should work well provided that range is much smaller than RAND_MAX; otherwise you'll be back to the problem that rand() isn't a good random number generator in terms of its low bits.

Create a random number less than a given max value

What I would love to do is create a function that takes a parameter which is the limit of what numbers the random generation should produce. I have experienced that some generators just repeat the generated number over and over again.
How can I make a generator that doesn't return the same number consecutively? Can someone please help me achieve my goal?
int randomGen(int max)
{
    int n;
    return n;
}
The simplest way to get uniformly distributed results from rand is something like this:
int limited_rand(int limit)
{
    int r, d = RAND_MAX / limit;
    limit *= d;
    do { r = rand(); } while (r >= limit);
    return r / d;
}
The result will be in the range 0 to limit-1, and each will occur with equal probability as long as the values 0 through RAND_MAX all had equal probability with the original rand function.
Other methods such as modular arithmetic or dividing without the loop I used introduce bias. Methods that go through floating point intermediates do not avoid this problem. Getting good random floating point numbers from rand is at least as difficult. Using my function for integers (or an improvement of it) is a good place to start if you want random floats.
Edit: Here's an explanation of what I mean by bias. Suppose RAND_MAX is 7 and limit is 5. Suppose (if this is a good rand function) that the outputs 0, 1, 2, ..., 7 are all equally likely. Taking rand()%5 would map 0, 1, 2, 3, and 4 to themselves, but map 5, 6, and 7 to 0, 1, and 2. This means the values 0, 1, and 2 are twice as likely to pop up as the values 3 and 4. A similar phenomenon happens if you try to rescale and divide, for instance using rand()*(double)limit/(RAND_MAX+1). Here, 0 and 1 map to 0, 2 and 3 map to 1, 4 maps to 2, 5 and 6 map to 3, and 7 maps to 4.
These effects are somewhat mitigated by the magnitude of RAND_MAX, but they can come back if limit is large. By the way, as others have said, with linear congruential PRNGs (the typical implementation of rand), the low bits tend to behave very badly, so using modular arithmetic when limit is a power of 2 may avoid the bias problem I described (since limit usually divides RAND_MAX+1 evenly in this case), but you run into a different problem in its place.
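Both mappings are easy to verify by brute force. A toy enumeration for the RAND_MAX = 7, limit = 5 example above (illustrative only) prints exactly the doubled counts described:
#include <stdio.h>

int main(void)
{
    int mod_count[5] = {0}, scale_count[5] = {0};
    /* enumerate all equally likely rand() outputs 0..7 (a toy RAND_MAX of 7) */
    for (int r = 0; r <= 7; ++r) {
        mod_count[r % 5]++;                   /* modulo mapping */
        scale_count[(int)(r * 5.0 / 8.0)]++;  /* rescale-and-divide mapping */
    }
    for (int v = 0; v < 5; ++v)
        printf("%d: mod=%d scale=%d\n", v, mod_count[v], scale_count[v]);
    return 0;
}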
How about this:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int randomGen(int limit)
{
    return rand() % limit;
}
/* ... */
int main()
{
    srand(time(NULL));
    printf("%d", randomGen(2041));
    return 0;
}
Any pseudo-random generator will repeat its values over and over again with some period. C only has rand(); if you use it, you should definitely initialize the random seed with srand(). But your platform probably has something better.
On POSIX systems there is a whole family of functions that you can find under the man drand48 page. They have a well-defined period and quality. You will probably find what you need there.
Without explicit knowledge of your platform's random generator, do not do rand() % max. The low-order bits of simple random number generators are usually not random at all.
Use instead (returns a number between min inclusive and max non-inclusive):
int randomIntegerInRange(int min, int max)
{
    double tmp = (double)rand() / (RAND_MAX - 1.);
    return min + (int)floor(tmp * (max - min));
}
Update: The solution above is biased (see the comments for an explanation), and will likely not produce uniform results. I am not deleting it since it is a natural example of what not to do. Please use the rejection methods recommended elsewhere in this thread.
