Why is my C code only generating every third random number?

I am trying to simulate the propagation of a worm across a network made of 100,000 computers. The simulation itself is very simple and I don't need any help except that for some reason, I am only getting every third random number.
Only the computers whose index modulo 1000 is less than 10 can be infected, so when 1000 computers are infected, the program should be done. For some reason, my program only ever reaches 329. When I lower the goal number and check the contents of the array, only every third computer has been changed, and it is a consistent pattern. For example, at the end of the array, only computers 98001, 98004, 98007, 99002, 99005, 99008 are changed even though the computers in between (98002, 98003, etc.) should be changed as well. The pattern holds all the way to the beginning of the array. When I try to get all 1000 changed, the program goes into an infinite loop, stuck at 329.
Edit: I just discovered that if I lower the NETSIZE to 10,000 and the goal in the while loop to 100, it doesn't skip anything. Does that mean the problem has something to do with a rounding error? Someone who knows more about C than me must know the answer.
Thanks.
#include <stdio.h>
#include <stdlib.h>
#define NETSIZE 100000
double rand01();
void initNetwork();
unsigned char network[NETSIZE];
int scanrate = 3;
int infectedCount;
int scans;
int ind;
int time;
int main(void) {
    initNetwork();
    time = 0;
    infectedCount = 1;
    while (infectedCount < 1000) { //changing 1000 to 329 stops the infinite loop
        scans = infectedCount * scanrate;
        for (int j = 0; j < scans; j++) {
            ind = (int) (rand01() * NETSIZE);
            if (network[ind] == 0) {
                network[ind] = 1;
                infectedCount++;
            }
        }
        time++;
    }
    for (int k = 0; k < NETSIZE; k++) {
        if (network[k] == 1) printf("%d at %d\n", network[k], k);
    }
}
double rand01() {
    double temp;
    temp = (rand() + 0.1) / (RAND_MAX + 1.0);
    return temp;
}
void initNetwork() {
    for (int i = 0; i < NETSIZE; i++) {
        if (i % 1000 < 10) {
            network[i] = 0;
        } else {
            network[i] = 2;
        }
    }
    network[1000] = 1;
}
With the above code, I expect the program to run until all 1000 vulnerable indexes have been changed from 0 to 1.

Converting comments into an answer.
What is RAND_MAX on your system? If it is a 15-bit or 16-bit value, you probably aren't getting good enough quantization when converted to double. If it is a 31-bit or bigger number, that (probably) won't be the issue. You need to investigate what values are generated by just the rand01() function with different seeds, plus the multiplication and cast to integer — simply print the results and sort -n | uniq -c to see how uniform the results are.
On my system RAND_MAX is only 32767. Do you think that might be why my results are not granular enough? Now that you've made me think about it, there would only be 32,767 possible values while my network array has 100,000 possible indexes, which corresponds roughly to the 1-in-3 results I am getting.
Yes, I think that is very probably the problem. You want 100,000 different values, but your random number generator can only generate about 33,000 different values, which is awfully close to your 1:3 metric. It also explains immediately why you got good results when you reduced the multiplier from 100,000 to 10,000.
You could try:
double rand01(void)
{
    assert(RAND_MAX == 32767);
    return ((rand() << 15) + rand()) / ((RAND_MAX + 1.0) * (RAND_MAX + 1.0));
}
Or you could use an alternative random number generator — for example, POSIX defines both the drand48() family of functions and
random(), with corresponding seed-setting functions where needed.
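As an illustration of that second option (a sketch only, assuming a POSIX system; the function name rand01_posix is mine), drand48() already returns a double in [0, 1), so no manual scaling by RAND_MAX is needed:
#include <stdlib.h>   /* drand48, srand48 */
#include <time.h>

double rand01_posix(void)
{
    return drand48();
}

/* seed once near the start of main(), e.g. srand48((long)time(NULL)); */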

Yeah, the problem I am having is that the RAND_MAX value on my system is only 32767 and I am trying to effectively spread that out over 100,000 values, which results in only about every third number ever showing up.
In my defense, the person who suggested the rand01() function has a PhD in Computer Science, but I think he ran this code on our school's main computer which probably has a much bigger RAND_MAX value.
@JonathanLeffler deserves credit for this solution.

Related

Matchmaking program in C?

The problem I am given is the following:
Write a program to discover the answer to this puzzle: "Let's say men and women are paid equally (from the same uniform distribution). If women date randomly and marry the first man with a higher salary, what fraction of the population will get married?"
From this site
My issue is that it seems that the percent married figure I am getting is wrong. Another poster asked this same question on the programmers exchange before, and the percentage getting married should be ~68%. However, I am getting closer to 75% (with a lot of variance). If anyone can take a look and let me know where I went wrong, I would be very grateful.
I realize, looking at the other question that was on the programmers exchange, that this is not the most efficient way to solve the problem. However, I would like to solve the problem in this manner before using more efficient approaches.
My code is below, the bulk of the problem is "solved" in the test function:
#include <cs50.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#define ARRAY_SIZE 100
#define MARRIED 1
#define SINGLE 0
#define MAX_SALARY 1000000
bool arrayContains(int* array, int val);
int test();
int main()
{
printf("Trial count: ");
int trials = GetInt();
int sum = 0;
for(int i = 0; i < trials; i++)
{
sum += test();
}
int average = (sum/trials) * 100;
printf("Approximately %d %% of the population will get married\n", average / ARRAY_SIZE);
}
int test()
{
srand(time(NULL));
int femArray[ARRAY_SIZE][2];
int maleArray[ARRAY_SIZE][2];
// load up random numbers
for (int i = 0; i < ARRAY_SIZE; i++)
{
femArray[i][0] = (rand() % MAX_SALARY);
femArray[i][1] = SINGLE;
maleArray[i][0] = (rand() % MAX_SALARY);
maleArray[i][1] = SINGLE;
}
srand(time(NULL));
int singleFemales = 0;
for (int k = 0; k < ARRAY_SIZE; k++)
{
int searches = 0; // count the unsuccessful matches
int checkedMates[ARRAY_SIZE] = {[0 ... ARRAY_SIZE - 1] = ARRAY_SIZE + 1};
while(true)
{
// ARRAY_SIZE - k is number of available people, subtract searches for people left
// checked all possible mates
if(((ARRAY_SIZE - k) - searches) == 0)
{
singleFemales++;
break;
}
int randMale = rand() % ARRAY_SIZE; // find a random male
while(arrayContains(checkedMates, randMale)) // ensure that the male was not checked earlier
{
randMale = rand() % ARRAY_SIZE;
}
checkedMates[searches] = randMale;
// male has a greater income and is single
if((femArray[k][0] < maleArray[randMale][0]) && (maleArray[randMale][1] == SINGLE))
{
femArray[k][1] = MARRIED;
maleArray[randMale][1] = MARRIED;
break;
}
else
{
searches++;
continue;
}
}
}
return ARRAY_SIZE - singleFemales;
}
bool arrayContains(int* array, int val)
{
for(int i = 0; i < ARRAY_SIZE; i++)
{
if (array[i] == val)
return true;
}
return false;
}
In the first place, there is some ambiguity in the problem as to what it means for the women to "date randomly". There are at least two plausible interpretations:
You cycle through the unmarried women, with each one randomly drawing one of the unmarried men and deciding, based on salary, whether to marry. On each pass through the available women, this probably results in some available men being dated by multiple women, and others being dated by none.
You divide each trial into rounds. In each round, you randomly shuffle the unmarried men among the unmarried women, so that each unmarried man dates exactly one unmarried woman.
In either case, you must repeat the matching until there are no more matches possible, which occurs when the maximum salary among eligible men is less than or equal to the minimum salary among eligible women.
In my tests, the two interpretations produced slightly different statistics: about 69.5% married using interpretation 1, and about 67.6% using interpretation 2. 100 trials of 100 potential couples each was enough to produce fairly low variance between runs, in the common (non-statistical) sense of the term; for example, the results from one set of 10 runs varied between 67.13% and 68.27%.
You appear not to take either of those interpretations, however. If I'm reading your code correctly, you go through the women exactly once, and for each one you keep drawing random men until either you find one that that woman can marry or you have tested every one. It should be clear that this yields a greater chance for women early in the list to be married, and that order-based bias will at minimum increase the variance of your results. I find it plausible that it also exerts a net bias toward more marriages, but I don't have a good argument in support.
Additionally, as I wrote in comments, you introduce some bias through the way you select random integers. The rand() function returns an int between 0 and RAND_MAX, inclusive, for RAND_MAX + 1 possible values. For the sake of argument, let's suppose those values are uniformly distributed over that range. If you use the % operator to shrink the range of the result to N possible values, then that result is still uniformly distributed only if N evenly divides RAND_MAX + 1, because otherwise more rand() results map to some values than map to others. In fact, this applies to any strictly mathematical transformation you might think of to narrow the range of the rand() results.
For the salaries, I don't see why you even bother to map them to a restricted range. RAND_MAX is as good a maximum salary as any other; the statistics gleaned from the simulation don't depend on the range of salaries, only on their uniform distribution.
For selecting random indices into your arrays, however, either for drawing men or for shuffling, you do need a restricted range, so you do need to take care. The best way to reduce bias in this case is to force the random numbers drawn to come from a range that is evenly divisible by the number of options, re-drawing as many times as necessary to ensure it:
/*
* Returns a random `int` in the half-open interval [0, upper_bound).
* upper_bound must be positive, and should not exceed RAND_MAX + 1.
*/
int random_draw(int upper_bound) {
    /* integer division truncates the remainder: */
    int rand_bound = (RAND_MAX / upper_bound) * upper_bound;
    for (;;) {
        int r = rand();
        if (r < rand_bound) {
            return r % upper_bound;
        }
    }
}
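For completeness (my addition, not part of the original answer), the same helper can drive an unbiased Fisher-Yates shuffle for the "rounds" interpretation; the function name shuffle_indices is hypothetical:
/* Shuffles the first n entries of idx[] in place, drawing each position
   with random_draw() so no modulo bias is introduced. */
void shuffle_indices(int idx[], int n)
{
    for (int i = n - 1; i > 0; i--) {
        int j = random_draw(i + 1);   /* uniform in [0, i] */
        int tmp = idx[i];
        idx[i] = idx[j];
        idx[j] = tmp;
    }
}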

Implementing a running average using a circular buffer array

I'm trying to implement a circular buffer in order to average a stream of data points generated by a pressure sensor in C running on an embedded controller. The idea is to store the last N pressure readings in the buffer while maintaining a running sum of the buffer. Average = sum / N. Should be trivial.
However, the average I'm seeing is a value that starts near the pressure reading (I preload the buffer registers with a typical value), but which subsequently trends towards zero. If I also display the sum, it too is dropping asymptotically to zero. If the pressure changes, the average moves away from zero in the direction of the pressure change, but returns to its zero trend as soon as the pressure stabilizes.
If anyone could spot the error I'm making, it would be very helpful.
#define ARRAYSIZE 100
double Sum; // variable for running sum
double Average; // variable for average
double PressureValue[ARRAYSIZE]; // declare value array
int i; // data array index
int main(void) {
    while (1)
    {
        if (i == ARRAYSIZE) i = 0;       // test index, reset if it reaches the upper boundary
        Sum = Sum - PressureValue[i];    // subtract the old datapoint from running sum
        PressureValue[i] = PRESSURE;     // replace previous loop datapoint with new data
        Sum = Sum + PressureValue[i];    // add back the new current value to the running sum
        Average = Sum / ARRAYSIZE;       // calculate average value = SUM / ARRAYSIZE
        ++i;                             // increment index
    }                                    // end while loop
}                                        // end main
The averaging code takes place in an interrupt handler; I'm reading the data from the pressure sensor via I2C with interrupts triggered at the end of each I2C communication phase. During the last phase, after the four bytes comprising the pressure data have been retrieved, they are assembled into a complete reading, and then converted to a decimal reading in PSI contained in the PRESSURE variable.
Obviously, this isn't a direct cut and paste from my code, but I didn't want anyone to have to wade through the whole thing, so I've limited it to just the stuff relevant to computing the average, and changed the variable names to be more readable. Still, I can't spot what I'm doing wrong.
Thanks for your attention!
Doug G.
I don't see anything obviously wrong with your code, but as you say, you're not providing all of it, so who knows what's happening in the rest of it (in particular, how and whether you're initializing i and Sum). The following works fine for me, and it is basically the same algorithm you have:
#include <stdio.h>
#include <stddef.h>
double PressureValue[8];
double Pressures[800];
int main(void) {
    const size_t array_size = sizeof(PressureValue) / sizeof(PressureValue[0]);
    const size_t num_pressures = sizeof(Pressures) / sizeof(Pressures[0]);
    size_t count = 0, i = 0;
    double average = 0;
    /* Initialize PressureValue to {0, 1, 2, 3, ...} */
    for ( size_t n = 0; n < array_size; ++n ) {
        PressureValue[n] = n;
    }
    double sum = ((array_size - 1) / (double) 2) * array_size;
    /* Initialize pressures to repeats of PressureValue */
    for ( size_t n = 0; n < num_pressures; ++n ) {
        Pressures[n] = n % array_size;
    }
    while ( count < num_pressures ) {
        if ( i == array_size )
            i = 0;
        sum -= PressureValue[i];
        PressureValue[i] = Pressures[count++];
        sum += PressureValue[i++];
    }
    average = sum / array_size;
    printf("Sum is %f\n", sum);
    printf("Counted %zu pressures\n", count);
    printf("Average is %f\n", average);
    return 0;
}
Outputs:
paul@local:~/src/c/scratch$ ./pressure
Sum is 28.000000
Counted 800 pressures
Average is 3.500000
paul@local:~/src/c/scratch$
Just one more possibility: when you say they are "converted to a decimal reading in PSI contained in the PRESSURE variable", and elsewhere, for that matter, make sure you're not getting things truncated to zero because of integer division. If you've got things "trending to zero" as you're adding more, that's something I'd be immediately suspicious of. A classic error in converting Fahrenheit to Celsius, for instance, would be to write c = (f - 32) * (5 / 9), where that (5 / 9) truncates to zero every time, and always leaves you with c == 0.
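A two-line illustration of that trap (my own example, not taken from the poster's code):
double f = 212.0;
double c_wrong = (f - 32) * (5 / 9);     /* 5 / 9 is integer division, so this is always 0 */
double c_right = (f - 32) * (5.0 / 9.0); /* floating-point division, gives 100 */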
Also, as a general rule, I understand that you "didn't want anyone to have to wade through the whole thing", but you'd be surprised how many times the real problem is not in the part of the code that you think it is. This is why it's important to provide an SSCCE to ensure that you can narrow down your code and actually isolate and reproduce the problem. If you try to narrow down your code and find that you can't isolate and reproduce the problem, then it's almost certain that your issue is not being caused by the thing you think is causing it.
It is also possible your code is working exactly as intended. If you are preloading your array with typical values outside of this loop and then running this code, you would get the behavior you are describing. If you are preloading the array, make sure you also preload the sum and average; otherwise you are essentially measuring gauge pressure, with your preloaded value as atmospheric pressure.
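As a minimal sketch of that last point (using the globals from the question; the function name preloadBuffer and the typical value are hypothetical), the running sum and index have to be initialized to match whatever the buffer is preloaded with:
void preloadBuffer(double typical)
{
    for (int n = 0; n < ARRAYSIZE; n++) {
        PressureValue[n] = typical;   /* preload every slot with a typical reading */
    }
    Sum = typical * ARRAYSIZE;        /* keep the running sum consistent with the contents */
    Average = typical;
    i = 0;
}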

why runtime error on online judge?

I am unable to understand why I am getting a runtime error with this code. The problem is that every even number >= 6 can be represented as the sum of two prime numbers.
My code is below. Thanks in advance. The problem link is http://poj.org/problem?id=2262
#include "stdio.h"
#include "stdlib.h"
#define N 1000000
int main()
{
long int i,j,k;
long int *cp = malloc(1000000*sizeof(long int));
long int *isprime = malloc(1000000*sizeof(long int));
//long int *isprime;
long int num,flag;
//isprime = malloc(2*sizeof(long int));
for(i=0;i<N;i++)
{
isprime[i]=1;
}
j=0;
for(i=2;i<N;i++)
{
if(isprime[i])
{
cp[j] = i;
j++;
for(k=i*i;k<N;k+=i)
{
isprime[k] = 0;
}
}
}
//for(i=0;i<j;i++)
//{
// printf("%d ",cp[i]);
//}
//printf("\n");
while(1)
{
scanf("%ld",&num);
if(num==0) break;
flag = 0;
for(i=0;i<j&&num>cp[i];i++)
{
//printf("%d ",cp[i]);
if(isprime[num-cp[i]])
{
printf("%ld = %ld + %ld\n",num,cp[i],num-cp[i]);
flag = 1;
break;
}
}
if(flag==0)
{
printf("Goldbach's conjecture is wrong.\n");
}
}
free(cp);
free(isprime);
return 0;
}
Two possibilities immediately spring to mind. The first is that the user input may be failing if whatever test harness is being used does not provide any input. Without knowing more detail on the harness, this is a guess at best.
You could check that by hard-coding a value rather than accepting one from standard input.
The other possibility is the rather large memory allocations being done. It may be that you're in a constrained environment which doesn't allow them.
A simple test for that is to drop the value of N (and, by the way, use it rather than the multiple hardcoded 1000000 figures in your malloc calls). A better way would be to check the return value from malloc to ensure it's not NULL. That should be done anyway.
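For example, a minimal sketch of what the top of main() might look like with both of those suggestions applied (my sketch, not a requirement of the judge):
long int *cp = malloc(N * sizeof *cp);
long int *isprime = malloc(N * sizeof *isprime);
if (cp == NULL || isprime == NULL) {
    fprintf(stderr, "out of memory\n");
    return 1;
}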
And, aside from that, you may want to check your Eratosthenes Sieve code. The first item that should be marked non-prime for the prime i is i + i rather than i * i as you have. I think it should be:
for (k = i + i; k < N; k += i)
The mathematical algorithm is actually okay starting at i * i, since any multiple of i less than i * i will already have been marked non-prime by virtue of the fact that it is a multiple of one of the primes previously checked (for example, when i is 7, the smaller multiples 14, 21, 28, 35 and 42 were already crossed off as multiples of 2, 3 and 5).
Your problem lies with integer overflow. At the point where i becomes 46_349, i * i is 2_148_229_801 which, if you have a 32-bit two's complement integer (maximum value of 2_147_483_647), will wrap around to -2_146_737_495.
When that happens, the loop keeps going since that negative number is still less than your limit, but using it as an array index is, shall we say, inadvisable :-)
The reason it works with i + i is that your limit is well short of INT_MAX / 2, so no overflow happens there.
If you want to make sure that this won't be a problem if you get up near INT_MAX / 2, you can use something like:
for (k = i + i; (k < N) && (k > i); k += i)
That extra check on k should catch the wraparound event, provided your wrapping follows the "normal" behaviour - technically, I think it's undefined behaviour to wrap but most implementations simply wrap two positives back to a negative due to the two's complement nature. Be aware then that this is actually non-portable, but what that means in practice is that it will only work on 99.999% of machines out there :-)
But, if you're a stickler for portability, there are better ways to prevent overflow in the first place. I won't go into them here other than to say they involve subtracting one of the terms being summed from INT_MAX and comparing the result to the other term being summed.
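A minimal sketch of that idea applied to the sieve loop (my sketch; shown with INT_MAX as in the text above, so substitute LONG_MAX from <limits.h> if k and i are long, as in the question):
for (k = i + i; k < N; ) {
    isprime[k] = 0;
    if (k > INT_MAX - i)   /* k + i would overflow, so stop before it can happen */
        break;
    k += i;
}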
The only way I can get this to give an error is if I enter a value greater than 1000000 or less than 1 to the scanf().
Like this:
ubuntu@amrith:/tmp$ ./x
183475666
Segmentation fault (core dumped)
ubuntu@amrith:/tmp$
But the reason for that should be obvious. Other than that, this code looks good.
Just trying to find what went wrong!
If sizeof(long int) is 4 bytes on the OS that you are using, then that is what causes this problem.
In the code:
for(k=i*i;k<N;k+=i)
{
    isprime[k] = 0;
}
Here, when you do k = i*i, for large values of i the value of k goes beyond 4 bytes and gets truncated, which may result in negative numbers, so the condition k<N is satisfied but with a negative number :). So you get a segmentation fault there.
It's good that you need only i+i, but if you need to increase the limit, take care of this problem.

Not getting proper output from Pollard's rho algorithm implementation

I don't know what I am doing wrong in trying to calculate prime factorizations using Pollard's rho algorithm.
#include<stdio.h>
#define f(x) x*x-1
int pollard( int );
int gcd( int, int);
int main( void ) {
    int n;
    scanf( "%d",&n );
    pollard( n );
    return 0;
}
int pollard( int n ) {
    int i=1,x,y,k=2,d;
    x = rand()%n;
    y = x;
    while(1) {
        i++;
        x = f( x ) % n;
        d = gcd( y-x, n);
        if(d!=1 && d!=n)
            printf( "%d\n", d);
        if(i == k) {
            y = x;
            k = 2 * k;
        }
    }
}
int gcd( int a, int b ) {
    if( b == 0)
        return a;
    else
        return gcd( b, a % b);
}
One immediate problem is, as Peter de Rivaz suspected, the
#define f(x) x*x-1
Thus the line
x = f(x)%n;
becomes
x = x*x-1%n;
and the precedence of % is higher than that of -, hence the expression is implicitly parenthesised as
x = (x*x) - (1%n);
which is equivalent to x = x*x - 1; (I assume n > 1, anyway it's x = x*x - constant;) and if you start with a value x >= 2, you have overflow before you had a realistic chance of finding a factor:
2 -> 2*2-1 = 3 -> 3*3 - 1 = 8 -> 8*8 - 1 = 63 -> 3968 -> 15745023 -> overflow if int is 32 bits
That doesn't immediately make it impossible that gcd(y-x,n) is a factor, though. It just makes it likely that at a stage where theoretically, you would have found a factor, the overflow destroys the common factor that mathematically would exist - more likely than a common factor introduced by overflow.
Overflow of signed integers is undefined behaviour, so there are no guarantees how the programme behaves, but usually it behaves consistently so the iteration of f still produces a well-defined sequence for which the algorithm in principle works.
Another problem is that y-x will frequently be negative, and then the computed gcd can also be negative - often -1. In that case, you print -1.
And then, it is a not too rare occurrence that iterating f from a starting value doesn't detect a common factor because the cycles modulo both prime factors (for the example of n a product of two distinct primes) have equal length and are entered at the same time. You make no attempt at detecting such a case; whenever gcd(|y-x|, n) == n, any further work in that sequence is pointless, so you should break out of the loop when d == n.
Also, you never check whether n is a prime, in which case trying to find a factor is a futile undertaking from the start.
Furthermore, after fixing f(x) so that the % n applies to the complete result of f(x), you have the problem that x*x still overflows for relatively small x (with the standard signed 32-bit ints, for x >= 46341), so factoring larger n may fail due to overflow. At least, you should use unsigned long long for the computations, so that overflow is avoided for n < 2^32. However, factorising such small numbers is typically done more efficiently with trial division. Pollard's Rho method and other advanced factoring algorithms are meant for larger numbers, where trial division is no longer efficient or even feasible.
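To make those fixes concrete, here is a minimal sketch (my own, not the poster's code, and assuming n fits in 32 bits): the whole step is reduced mod n in an unsigned 64-bit type so x*x cannot overflow, the difference is kept non-negative, and the search gives up when the gcd comes back equal to n:
typedef unsigned long long u64;

static u64 gcd_u64(u64 a, u64 b)           /* Euclid's algorithm */
{
    while (b != 0) {
        u64 t = a % b;
        a = b;
        b = t;
    }
    return a;
}

static u64 f_step(u64 x, u64 n)
{
    return (x * x + n - 1) % n;            /* (x*x - 1) mod n, kept non-negative */
}

/* Returns a non-trivial factor of n, or 0 if this starting value fails. */
u64 pollard_rho(u64 n)
{
    u64 x = 2, y = 2, d = 1;
    while (d == 1) {
        x = f_step(x, n);                  /* tortoise: one step */
        y = f_step(f_step(y, n), n);       /* hare: two steps */
        d = gcd_u64(x > y ? x - y : y - x, n);
    }
    return (d == n) ? 0 : d;               /* d == n means the cycle closed without a factor */
}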
I'm just a novice at C++, and I am new to Stack Overflow, so some of what I have written is going to look sloppy, but this should get you going in the right direction. The program posted here should generally find and return one non-trivial factor of the number you enter at the prompt, or it will apologize if it cannot find such a factor.
I tested it with a few semiprime numbers, and it worked for me. For 371156167103, it finds 607619 without any detectable delay after I hit the enter key. I didn't check it with larger numbers than this. I used unsigned long long variables, but if possible, you should get and use a library that provides even larger integer types.
Editing to add: the single call to the method f for X and the 2 such calls for Y are intentional and in accordance with the way the algorithm works. I thought to nest the call for Y inside another such call to keep it on one line, but I decided to do it this way so it's easier to follow.
#include "stdafx.h"
#include <stdio.h>
#include <iostream>
typedef unsigned long long ULL;
ULL pollard(ULL numberToFactor);
ULL gcd(ULL differenceBetweenCongruentFunctions, ULL numberToFactor);
ULL f(ULL x, ULL numberToFactor);
int main(void)
{
ULL factor;
ULL n;
std::cout<<"Enter the number for which you want a prime factor: ";
std::cin>>n;
factor = pollard(n);
if (factor == 0) std::cout<<"No factor found. Your number may be prime, but it is not certain.\n\n";
else std::cout<<"One factor is: "<<factor<<"\n\n";
}
ULL pollard(ULL n)
{
ULL x = 2ULL;
ULL y = 2ULL;
ULL d = 1ULL;
while(d==1||d==n)
{
x = f(x,n);
y = f(y,n);
y = f(y,n);
if (y>x)
{
d = gcd(y-x, n);
}
else
{
d = gcd(x-y, n);
}
}
return d;
}
ULL gcd(ULL a, ULL b)
{
if (a==b||a==0)
return 0; // If x==y or if the absolute value of (x-y) == the number to be factored, then we have failed to find
// a factor. I think this is not proof of primality, so the process could be repeated with a new function.
// For example, by replacing x*x+1 with x*x+2, and so on. If many such functions fail, primality is likely.
ULL currentGCD = 1;
while (currentGCD!=0) // This while loop is based on Euclid's algorithm
{
currentGCD = b % a;
b=a;
a=currentGCD;
}
return b;
}
ULL f(ULL x, ULL n)
{
return (x * x + 1) % n;
}
Sorry for the long delay getting back to this. As I mentioned in my first answer, I am a novice at C++, which will be evident in my excessive use of global variables, excessive use of BigIntegers and BigUnsigned where other types might be better, lack of error checking, and other programming habits on display which a more skilled person might not exhibit. That being said, let me explain what I did, then will post the code.
I am doing this in a second answer because the first answer is useful as a very simple demo of how simple a Pollard's Rho algorithm is to implement once you understand what it does. And what it does is to first take 2 variables, call them x and y, and assign them the starting values of 2. Then it runs x through a function, usually (x^2+1)%n, where n is the number you want to factor. And it runs y through the same function twice each cycle. Then the difference between x and y is calculated, and finally the greatest common divisor is found for this difference and n. If that number is 1, then you run x and y through the function again.
Continue this process until the GCD is not 1 or until x and y are equal again. If the GCD is found which is not 1, then that GCD is a non-trivial factor of n. If x and y become equal, then the (x^2+1)%n function has failed. In that case, you should try again with another function, maybe (x^2+2)%n, and so on.
Here is an example. Take 35, for which we know the prime factors are 5 and 7. I'll walk through Pollard Rho and show you how it finds a non-trivial factor.
Cycle #1: X starts at 2. Then using the function (x^2+1)%n, (2^2+1)%35, we get 5 for x. Y starts at 2 also, and after one run through the function, it also has a value of 5. But y always goes through the function twice, so the second run is (5^2+1)%35, or 26. The difference between x and y is 21. The GCD of 21 (the difference) and 35 (n) is 7. We have already found a prime factor of 35! Note that the GCD of any 2 numbers, even extremely large ones, can be found very quickly using Euclid's algorithm, and that's what the program I will post here does.
On the subject of the GCD function, I am using one library I downloaded for this program, a library that allows me to use BigIntegers and BigUnsigned. That library also has a GCD function built in, and I could have used it. But I decided to stay with the hand-written GCD function for instructional purposes. If you want to improve the program's execution time, it might be a good idea to use the library's GCD function because there are faster methods than Euclid, and the library may be written to use one of those faster methods.
Another side note. The .Net 4.5 library supports the use of BigIntegers and BigUnsigned also. I decided not to use that for this program because I wanted to write the whole thing in C++, not C++/CLI. You could get better performance from the .Net library, or you might not. I don't know, but I wanted to share that that is also an option.
I am jumping around a bit here, so let me start now by explaining in broad strokes what the program does, and lastly I will explain how to set it up on your computer if you use Visual Studio 11 (also called Visual Studio 2012).
The program allocates 3 arrays for storing the factors of any number you give it to process. These arrays are 1000 elements wide, which is excessive, maybe, but it ensures any number with 1000 prime factors or less will fit.
When you enter the number at the prompt, it assumes the number is composite and puts it in the first element of the compositeFactors array. Then it goes through some admittedly inefficient while loops, which use Miller-Rabin to check if the number is composite. Note this test can either say a number is composite with 100% confidence, or it can say the number is prime with extremely high (but not 100%) confidence. The confidence is adjustable by a variable confidenceFactor in the program. The program will make one check for every value between 2 and confidenceFactor, inclusive, so one less total check than the value of confidenceFactor itself.
The setting I have for confidenceFactor is 101, which does 100 checks. If it says a number is prime, the odds that it is really composite are 1 in 4^100, or the same as the odds of correctly calling the flip of a fair coin 200 consecutive times. In short, if it says the number is prime, it probably is, but the confidenceFactor number can be increased to get greater confidence at the cost of speed.
Here might be as good a place as any to mention that, while Pollard's Rho algorithm can be pretty effective factoring smaller numbers of type long long, the Miller-Rabin test to see if a number is composite would be more or less useless without the BigInteger and BigUnsigned types. A BigInteger library is pretty much a requirement to be able to reliably factor large numbers all the way to their prime factors like this.
When Miller Rabin says the factor is composite, it is factored, the factor stored in a temp array, and the original factor in the composites array divided by the same factor. When numbers are identified as likely prime, they are moved into the prime factors array and output to screen. This process continues until there are no composite factors left. The factors tend to be found in ascending order, but this is coincidental. The program makes no effort to list them in ascending order, but only lists them as they are found.
Note that I could not find any function (x^2+c)%n which will factor the number 4, no matter what value I gave c. Pollard Rho seems to have a very hard time with all perfect squares, but 4 is the only composite number I found which is totally impervious to it using functions in the format described. Therefore I added a check for an n of 4 inside the pollard method, returning 2 instantly if so.
So to set this program up, here is what you should do. Go to https://mattmccutchen.net/bigint/ and download bigint-2010.04.30.zip. Unzip this and put all of the .hh files and all of the C++ source files in your ~\Program Files\Microsoft Visual Studio 11.0\VC\include directory, excluding the Sample and C++ Testsuite source files. Then in Visual Studio, create an empty project. In the Solution Explorer, right click on the Resource Files folder and select Add... Existing Item. Add all of the C++ source files in the directory I just mentioned. Then, also in the Solution Explorer, right click the Source Files folder and add a new item, select C++ file, name it, and paste the below source code into it, and it should work for you.
Not to flatter overly much, but there are folks here on Stack Overflow who know a great deal more about C++ than I do, and if they modify my code below to make it better, that's fantastic. But even if not, the code is functional as-is, and it should help illustrate the principles involved in programmatically finding prime factors of medium sized numbers. It will not threaten the general number field sieve, but it can factor numbers with 12 - 14 digit prime factors in a reasonably short time, even on an old Core2 Duo computer like the one I am using.
The code follows. Good luck.
#include <string>
#include <stdio.h>
#include <iostream>
#include "BigIntegerLibrary.hh"
typedef BigInteger BI;
typedef BigUnsigned BU;
using std::string;
using std::cin;
using std::cout;
BU pollard(BU numberToFactor);
BU gcda(BU differenceBetweenCongruentFunctions, BU numberToFactor);
BU f(BU x, BU numberToFactor, int increment);
void initializeArrays();
BU getNumberToFactor ();
void factorComposites();
bool testForComposite (BU num);
BU primeFactors[1000];
BU compositeFactors[1000];
BU tempFactors [1000];
int primeIndex;
int compositeIndex;
int tempIndex;
int numberOfCompositeFactors;
bool allJTestsShowComposite;
int main ()
{
while(1)
{
primeIndex=0;
compositeIndex=0;
tempIndex=0;
initializeArrays();
compositeFactors[0] = getNumberToFactor();
cout<<"\n\n";
if (compositeFactors[0] == 0) return 0;
numberOfCompositeFactors = 1;
factorComposites();
}
}
void initializeArrays()
{
for (int i = 0; i<1000;i++)
{
primeFactors[i] = 0;
compositeFactors[i]=0;
tempFactors[i]=0;
}
}
BU getNumberToFactor ()
{
std::string s;
std::cout<<"Enter the number for which you want a prime factor, or 0 to quit: ";
std::cin>>s;
return stringToBigUnsigned(s);
}
void factorComposites()
{
while (numberOfCompositeFactors!=0)
{
compositeIndex = 0;
tempIndex = 0;
// This while loop finds non-zero values in compositeFactors.
// If they are composite, it factors them and puts one factor in tempFactors,
// then divides the element in compositeFactors by the same amount.
// If the element is prime, it moves it into tempFactors (zeros the element in compositeFactors)
while (compositeIndex < 1000)
{
if(compositeFactors[compositeIndex] == 0)
{
compositeIndex++;
continue;
}
if(testForComposite(compositeFactors[compositeIndex]) == false)
{
tempFactors[tempIndex] = compositeFactors[compositeIndex];
compositeFactors[compositeIndex] = 0;
tempIndex++;
compositeIndex++;
}
else
{
tempFactors[tempIndex] = pollard (compositeFactors[compositeIndex]);
compositeFactors[compositeIndex] /= tempFactors[tempIndex];
tempIndex++;
compositeIndex++;
}
}
compositeIndex = 0;
// This while loop moves all remaining non-zero values from compositeFactors into tempFactors
// When it is done, compositeFactors should be all 0 value elements
while (compositeIndex < 1000)
{
if (compositeFactors[compositeIndex] != 0)
{
tempFactors[tempIndex] = compositeFactors[compositeIndex];
compositeFactors[compositeIndex] = 0;
tempIndex++;
compositeIndex++;
}
else compositeIndex++;
}
compositeIndex = 0;
tempIndex = 0;
// This while loop checks all non-zero elements in tempIndex.
// Those that are prime are shown on screen and moved to primeFactors
// Those that are composite are moved to compositeFactors
// When this is done, all elements in tempFactors should be 0
while (tempIndex<1000)
{
if(tempFactors[tempIndex] == 0)
{
tempIndex++;
continue;
}
if(testForComposite(tempFactors[tempIndex]) == false)
{
primeFactors[primeIndex] = tempFactors[tempIndex];
cout<<primeFactors[primeIndex]<<"\n";
tempFactors[tempIndex]=0;
primeIndex++;
tempIndex++;
}
else
{
compositeFactors[compositeIndex] = tempFactors[tempIndex];
tempFactors[tempIndex]=0;
compositeIndex++;
tempIndex++;
}
}
compositeIndex=0;
numberOfCompositeFactors=0;
// This while loop just checks to be sure there are still one or more composite factors.
// As long as there are, the outer while loop will repeat
while(compositeIndex<1000)
{
if(compositeFactors[compositeIndex]!=0) numberOfCompositeFactors++;
compositeIndex ++;
}
}
return;
}
// The following method uses the Miller-Rabin primality test to prove with 100% confidence a given number is composite,
// or to establish with a high level of confidence -- but not 100% -- that it is prime
bool testForComposite (BU num)
{
BU confidenceFactor = 101;
if (confidenceFactor >= num) confidenceFactor = num-1;
BU a,d,s, nMinusOne;
nMinusOne=num-1;
d=nMinusOne;
s=0;
while(modexp(d,1,2)==0)
{
d /= 2;
s++;
}
allJTestsShowComposite = true; // assume composite here until we can prove otherwise
for (BI i = 2 ; i<=confidenceFactor;i++)
{
if (modexp(i,d,num) == 1)
continue; // if this modulus is 1, then we cannot prove that num is composite with this value of i, so continue
if (modexp(i,d,num) == nMinusOne)
{
allJTestsShowComposite = false;
continue;
}
BU exponent(1);
for (BU j(0); j.toInt()<=s.toInt()-1;j++)
{
exponent *= 2;
if (modexp(i,exponent*d,num) == nMinusOne)
{
// if the modulus is not right for even a single j, then break and increment i.
allJTestsShowComposite = false;
continue;
}
}
if (allJTestsShowComposite == true) return true; // proven composite with 100% certainty, no need to continue testing
}
return false;
/* not proven composite in any test, so assume prime with a possibility of error =
(1/4)^(number of different values of i tested). This will be equal to the value of the
confidenceFactor variable, and the "witnesses" to the primality of the number being tested will be all integers from
2 through the value of confidenceFactor.
Note that this makes this primality test cryptographically less secure than it could be. It is theoretically possible,
if difficult, for a malicious party to pass a known composite number for which all of the lowest n integers fail to
detect that it is composite. A safer way is to generate random integers in the outer "for" loop and use those in place of
the variable i. Better still if those random numbers are checked to ensure no duplicates are generated.
*/
}
BU pollard(BU n)
{
if (n == 4) return 2;
BU x = 2;
BU y = 2;
BU d = 1;
int increment = 1;
while(d==1||d==n||d==0)
{
x = f(x,n, increment);
y = f(y,n, increment);
y = f(y,n, increment);
if (y>x)
{
d = gcda(y-x, n);
}
else
{
d = gcda(x-y, n);
}
if (d==0)
{
x = 2;
y = 2;
d = 1;
increment++; // This changes the pseudorandom function we use to increment x and y
}
}
return d;
}
BU gcda(BU a, BU b)
{
if (a==b||a==0)
return 0; // If x==y or if the absolute value of (x-y) == the number to be factored, then we have failed to find
// a factor. I think this is not proof of primality, so the process could be repeated with a new function.
// For example, by replacing x*x+1 with x*x+2, and so on. If many such functions fail, primality is likely.
BU currentGCD = 1;
while (currentGCD!=0) // This while loop is based on Euclid's algorithm
{
currentGCD = b % a;
b=a;
a=currentGCD;
}
return b;
}
BU f(BU x, BU n, int increment)
{
return (x * x + increment) % n;
}
As far as I can see, Pollard Rho normally uses f(x) as (x*x+1) (e.g. in these lecture notes).
Your choice of x*x-1 appears not as good, as it often seems to get stuck in a loop:
x=0
f(x)=-1
f(f(x))=0

Generate a random number within range? [duplicate]

Possible Duplicate:
Generating Random Numbers in Objective-C
How do I generate a random number which is within a range?
This is actually a bit harder to get really correct than most people realize:
int rand_lim(int limit) {
    /* return a random number between 0 and limit inclusive. */
    int divisor = RAND_MAX / (limit + 1);
    int retval;
    do {
        retval = rand() / divisor;
    } while (retval > limit);
    return retval;
}
Attempts that just use % (or, equivalently, /) to get the numbers in a range almost inevitably introduce skew (i.e., some numbers will be generated more often than others).
As to why using % produces skewed results: unless the range you want is a divisor of RAND_MAX, skew is inevitable. If you start with small numbers, it's pretty easy to see why. Consider taking 10 pieces of candy (that we'll assume you can't cut, break, etc. into smaller pieces) and trying to divide it evenly between three children. Clearly it can't be done--if you hand out all the candy, the closest you can get is for two kids to get three pieces of candy, and one of them getting four.
There's only one way for all the kids to get the same number of pieces of candy: make sure you don't hand out the last piece of candy at all.
To relate this to the code above, let's start by numbering the candies from 1 to 10 and the kids from 1 to 3. The initial division says since there are three kids, our divisor is three. We then pull a random candy from the bucket, look at its number and divide by three and hand it to that kid -- but if the result is greater than 3 (i.e. we've picked out candy number 10) we just don't hand it out at all -- we discard it and pick out another candy.
Of course, if you're using a modern implementation of C++ (i.e., one that supports C++11 or newer), you should usually use one of the distribution classes from the standard library. The code above corresponds most closely with std::uniform_int_distribution, but the standard library also includes uniform_real_distribution as well as classes for a number of non-uniform distributions (Bernoulli, Poisson, normal, maybe a couple others I don't remember at the moment).
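As a small usage note (my addition; the wrapper name is hypothetical), a value in an arbitrary [min, max] range follows directly from rand_lim() above:
/* Uniform value in [min, max], built on the unbiased rand_lim() above. */
int rand_in_range(int min, int max)
{
    return min + rand_lim(max - min);
}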
int rand_range(int min_n, int max_n)
{
return rand() % (max_n - min_n + 1) + min_n;
}
For fractions:
double rand_range(double min_n, double max_n)
{
return (double)rand()/RAND_MAX * (max_n - min_n) + min_n;
}
For an integer value in the range [min,max):
double scale = (double) (max - min) / RAND_MAX;
int val = min + floor(rand() * scale);
I wrote this specifically in Obj-C for an iPhone project:
- (int) intInRangeMinimum:(int)min andMaximum:(int)max {
    if (min > max) { return -1; }
    int adjustedMax = (max + 1) - min; // arc4random returns within the set {min, (max - 1)}
    int random = arc4random() % adjustedMax;
    int result = random + min;
    return result;
}
To use:
int newNumber = [aClass intInRangeMinimum:1 andMaximum:100];
Add salt to taste
+(NSInteger)randomNumberWithMin:(NSInteger)min WithMax:(NSInteger)max {
    if (min > max) {
        int tempMax = max;
        max = min;
        min = tempMax;
    }
    int randomy = arc4random() % (max - min + 1);
    randomy = randomy + min;
    return randomy;
}
I use this method in a random number related class I made. Works well for my non-demanding needs, but may well be biased in some way.
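If that possible bias ever matters, one option worth noting (my addition, for platforms such as BSD, macOS, and iOS that provide it) is arc4random_uniform(), which performs the rejection step internally and so avoids the modulo bias of arc4random() % n:
/* Sketch: uniform value in [min, max] without modulo bias. */
uint32_t span = (uint32_t)(max - min + 1);
int unbiased = min + (int)arc4random_uniform(span);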
