Project Euler #179 - c

Problem: Find the number of integers 1 < n < 10^7, for which n and n + 1 have the same number of positive divisors. For example, 14 has the positive divisors 1, 2, 7, 14 while 15 has 1, 3, 5, 15.
I can't reach 10^7 because it is too big a number for my approach (and for me). How can I solve this problem in C?
#include <stdio.h>
#include <conio.h>

int divisorcount(int);

int main()
{
    int number, divisornumber1, divisornumber2, j = 0;

    /* compare divisor counts of the pair (number-1, number) for number-1 = 2..99 */
    for (number = 3; number <= 100; number++){
        divisornumber1 = divisorcount(number);
        divisornumber2 = divisorcount(number - 1);
        if (divisornumber1 == divisornumber2){
            printf("%d and %d\n", number - 1, number);
            j++;
        }
    }
    printf("\nThere are %d integers.", j);
    getch();
}

/* counts the divisors of num excluding num itself; both numbers of a pair are
   counted the same way, so the comparison is unaffected */
int divisorcount(int num)
{
    int i, divi = 0;
    for (i = 1; i <= num / 2; i++)
        if (num % i == 0)
            divi++;
    return divi;
}

As a hint to how to solve the problem within a minute: go through each number i from 2 to 10^7, loop through all multiples of i and increment that multiple's counter by 1 (1 can be ignored, since every number is a multiple of 1 and it adds the same amount everywhere). In the end you will have the number of divisors of each number in the array (check whether your compiler supports an array this large). A final linear scan then counts the matching neighbours.
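A minimal sketch of that sieve (array and variable names are my own); two nested loops fill in the divisor counts, then one scan compares neighbours. It uses about 40 MB for the count array:

```c
#include <stdio.h>
#include <stdlib.h>

#define LIMIT 10000000

int main(void)
{
    /* d[m] accumulates the number of divisors of m found by the sieve */
    int *d = calloc(LIMIT + 1, sizeof *d);
    if (d == NULL) return 1;

    /* every i from 2 to LIMIT bumps the count of all of its multiples;
       skipping 1 adds the same offset everywhere, so equality is unaffected */
    for (int i = 2; i <= LIMIT; i++)
        for (int m = i; m <= LIMIT; m += i)
            d[m]++;

    int count = 0;
    for (int n = 2; n < LIMIT; n++)      /* pairs (n, n + 1) with 1 < n < 10^7 */
        if (d[n] == d[n + 1])
            count++;

    printf("%d\n", count);
    free(d);
    return 0;
}
```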

Ever tried long long num = 100000000LL;? The type of an integer constant is determined by its value and suffix, not by the left-hand side, so for constants that don't fit in an int you have to add the LL. With this approach you should be able to handle larger values than normal integers; just change your functions and variables in a suitable way.
A long long is always at least 64 bits wide, which you can check on Wikipedia.
Hint: As someone mentioned in the comments, Project Euler is not about brute-forcing; that is a lame approach. Think about some better strategies. You might want to get help at math.stackexchange.
EDIT: I don't know why I thought that a uint32_t is not enough for 10^7 - sorry for that mistake.

To expand on nhahtdh's idea and make it even faster (at the cost of making it more complicated), build a sieve of the prime numbers up to sqrt(10^7), which is about 3162. The exponents in a number's prime factorisation determine its divisor count: the product of (exponent + 1) over all prime factors is the number of integers dividing the number. So you can set an array to all ones, then loop over each prime, multiplying in that prime's contribution (exponent + 1) at each position it divides.
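A rough sketch of that idea (variable names are mine, and it uses roughly 80 MB). Instead of a separate prime sieve it detects primes on the fly via the leftover values in rem[], but the structure is the same: multiply in (exponent + 1) for every prime power dividing each position, then account for the one possible prime factor above the square root:

```c
#include <stdio.h>
#include <stdlib.h>

#define N 10000000

int main(void)
{
    int *d   = malloc((N + 1) * sizeof *d);    /* divisor counts */
    int *rem = malloc((N + 1) * sizeof *rem);  /* part of n not yet factored */
    if (!d || !rem) return 1;

    for (int n = 0; n <= N; n++) { d[n] = 1; rem[n] = n; }

    for (int p = 2; p * p <= N; p++) {
        if (rem[p] != p) continue;             /* p is composite: a smaller prime already divided it */
        for (int m = p; m <= N; m += p) {
            int e = 0;
            while (rem[m] % p == 0) { rem[m] /= p; e++; }
            d[m] *= e + 1;                     /* contribution of p^e */
        }
    }
    for (int n = 2; n <= N; n++)
        if (rem[n] > 1) d[n] *= 2;             /* one prime factor above sqrt(N) remains */

    int count = 0;
    for (int n = 2; n < N; n++)
        if (d[n] == d[n + 1]) count++;
    printf("%d\n", count);

    free(d); free(rem);
    return 0;
}
```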

Related

How to optimize for loop in c by using square root (perfect, abundant, deficient)

Note: I left out irrelevant code
So I am currently working on CCC 1996 P1, where the whole point is to compute whether an integer input is a perfect, deficient, or abundant number. The code I have (left out above) works; however, I think it is too slow: it iterates through every number up to the input in order to find divisors, which is too inefficient. Anyway, I have been thinking about this for a while, but can't seem to come up with any way to optimize it.
I read online that you could replace i < n with i < sqrt(n) and then change the line where the score is accumulated to s += i + (n/i), or something of the sort; however, this doesn't seem to work for me. Any suggestions on how to make the code more efficient and decrease the run time? Currently the program runs for too long before reaching an output. Any help would be greatly appreciated, thanks!
Also, a number is defined as perfect if the sum of all its perfect divisors equal the number. A number is defined as abundant if the sum of all its perfect divisors > the number. A number is defined as deficient if the sum of all its perfect divisors < the number. The number itself does not count as a perfect divisor.
I am not too familiar with Big-O notation.
You should replace
score += i + ((sqrt(numInput))/i);
with
result = numInput/i;
score += (result == i || i == 1)? i : i + result;
The idea is that for each factor i less than the square root, numInput / i is also a factor, and it is always greater than or equal to the square root.
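Putting it together, here is a sketch of the whole check with the square-root bound (the original code was omitted, so the function and variable names are my own):

```c
#include <stdio.h>

/* classify n (> 1) as perfect, abundant, or deficient by summing its proper
   divisors; the loop only runs up to sqrt(n), adding each divisor pair once */
void classify(int n)
{
    int sum = 0;
    for (int i = 1; i * i <= n; i++) {
        if (n % i != 0) continue;
        int pair = n / i;
        /* pair == i: perfect square, count once; i == 1: skip n itself */
        sum += (pair == i || i == 1) ? i : i + pair;
    }
    if (sum == n)      printf("%d is a perfect number.\n", n);
    else if (sum > n)  printf("%d is an abundant number.\n", n);
    else               printf("%d is a deficient number.\n", n);
}

int main(void)
{
    classify(6);      /* perfect:   1 + 2 + 3 = 6 */
    classify(12);     /* abundant:  1 + 2 + 3 + 4 + 6 = 16 */
    classify(8);      /* deficient: 1 + 2 + 4 = 7 */
    return 0;
}
```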

advice on how to make my algorithm faster

Here is my code in C for problem #3 from Project Euler, where I have to find the largest prime factor of 600851475143.
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

bool is_prime(long int number){
    long int j;
    for (j = 2; j <= number / 2; j++){
        if (number % j == 0) return false;
        if (j == number / 2) return true;
    }
}

int main(){
    long int input;
    scanf("%d", &input);
    long int factor;
    int ans = 0;
    for (factor = input / 2; factor > 1; factor--){
        if (input % factor == 0 && is_prime(factor)) {
            ans = factor;
            break;
        }
    }
    printf("%d\n", ans);
    system("pause");
    return 0;
}
Although it works fine for small numbers, gradually it takes more and more time for it to give an answer. And, finally, for 600851475143 the code returns 0, which is obviously wrong.
Could anyone help?
thanks a lot.
A few things to consider:
As @Alex Reynolds pointed out, the number you're trying to factor might be so large that it can't fit in an int. You may need to use a long or a uint64_t to store the number. That alone might solve the problem.
Rather than checking each divisor and seeing which ones are prime, you might instead want to try this approach: set n to 600851475143. For each integer from 2 upward, try dividing n by that integer. If it divides cleanly, divide out all copies of that number from n and record the current integer as the largest prime factor so far. If you think about it a bit, you'll notice that the only divisors you'll consider this way are prime numbers. As a helpful hint - if n has no divisors (other than 1) less than √n, then it's prime. That might give you an upper bound on your search space that's much tighter than the divide-by-two trick you're using. (A sketch of this approach follows this list.)
Rather than increasing the divisor by one, try testing out 2 as a divisor and then only dividing by odd numbers (3, 5, 7, 9, 11, etc.) No even number other than 2 is prime, so this halves the number of numbers you need to divide by.
Alternatively, create a file storing all prime numbers up to √600851475143 by downloading a list of primes from the internet, then just test each one to see if any of them divide 600851475143 and take the biggest. :-)
Hope this helps!
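Here is a rough sketch of the divide-out-each-factor approach from the second point above (my own code, not the questioner's):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t n = 600851475143ULL;   /* the number from the problem */
    uint64_t largest = 1;

    for (uint64_t d = 2; d * d <= n; d++) {
        while (n % d == 0) {        /* d must be prime here: smaller factors were divided out */
            largest = d;
            n /= d;
        }
    }
    if (n > 1) largest = n;         /* whatever is left is a prime factor above sqrt */

    printf("%llu\n", (unsigned long long)largest);
    return 0;
}
```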
I suggest you improve the primality-check part of your code. The running time of your method is O(n^2), so you should use a more efficient algorithm, such as the well-known Miller–Rabin primality test, which runs in O(k·log^3 n).
I provide a pseudo code here for you and you can write the code on your own:
Input: n > 3, an odd integer to be tested for primality
Input: k, a parameter that determines the accuracy of the test
Output: composite if n is composite, otherwise probably prime

write n − 1 as 2^s · d with d odd by factoring powers of 2 from n − 1
WitnessLoop: repeat k times:
    pick a random integer a in the range [2, n − 2]
    x ← a^d mod n
    if x = 1 or x = n − 1 then do next WitnessLoop
    repeat s − 1 times:
        x ← x^2 mod n
        if x = 1 then return composite
        if x = n − 1 then do next WitnessLoop
    return composite
return probably prime
I provide a link for you to see an implementation in Python that also compares this algorithm with yours. BTW, there are many implementations of this algorithm all over the web, but I think writing it yourself may help you to understand it better.
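For reference, here is one possible C transcription of that pseudocode (my own sketch, restricted to n < 2^32 so the modular products fit in a uint64_t; a full 64-bit version would need 128-bit intermediates or a mulmod helper):

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>
#include <stdio.h>

/* modular exponentiation: a^d mod n, safe because n < 2^32 */
static uint64_t pow_mod(uint64_t a, uint64_t d, uint64_t n)
{
    uint64_t x = 1;
    a %= n;
    while (d) {
        if (d & 1) x = x * a % n;
        a = a * a % n;
        d >>= 1;
    }
    return x;
}

bool is_probably_prime(uint32_t n, int k)
{
    if (n < 4) return n == 2 || n == 3;
    if (n % 2 == 0) return false;

    uint32_t d = n - 1;
    int s = 0;
    while (d % 2 == 0) { d /= 2; s++; }        /* n - 1 = 2^s * d with d odd */

    for (int i = 0; i < k; i++) {
        uint64_t a = 2 + rand() % (n - 3);     /* witness in [2, n - 2] */
        uint64_t x = pow_mod(a, d, n);
        if (x == 1 || x == n - 1) continue;
        bool composite = true;
        for (int r = 0; r < s - 1; r++) {
            x = x * x % n;
            if (x == n - 1) { composite = false; break; }
        }
        if (composite) return false;
    }
    return true;                               /* probably prime */
}

int main(void)
{
    srand(12345);                                   /* any seed works for a sketch */
    printf("%d\n", is_probably_prime(6857, 10));    /* 1: 6857 is prime */
    printf("%d\n", is_probably_prime(6855, 10));    /* 0: 6855 = 5 * 1371 */
    return 0;
}
```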
Try the following code. It essentially implements the points in the accepted answer. The only improvement is that it skips all multiples of 2, 3, and 5 using wheel factorization http://en.wikipedia.org/wiki/Wheel_factorization
//find largest prime factor for x < 2^64
#include <stdio.h>
#include <stdint.h>

int main() {
    uint64_t x = 600851475143;
    int wheel[] = {4,2,4,2,4,6,2,6};

    while (x > 2 && x % 2 == 0) x /= 2;
    while (x > 3 && x % 3 == 0) x /= 3;
    while (x > 5 && x % 5 == 0) x /= 5;
    for (uint64_t j = 0, i = 7; i <= x / i; i += wheel[j++], j %= 8) {
        while (x > i && x % i == 0) x /= i;
    }
    printf("%llu\n", x);
}
Another thing that could be done is to pre-compute all primes less than 2^32 (rather than downloading them) and then only divide by the primes. The fastest method I know to do this is the Sieve of Eratosthenes. Here is a version using OpenMP which finds the primes up to 1 billion in less than one second http://create.stephan-brumme.com/eratosthenes/
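A plain, single-threaded sketch of that sieve (nowhere near as tuned as the linked OpenMP version, and with an arbitrary, much smaller LIMIT):

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define LIMIT 1000000

int main(void)
{
    bool *is_composite = calloc(LIMIT + 1, sizeof *is_composite);
    if (is_composite == NULL) return 1;

    for (long i = 2; i * i <= LIMIT; i++)
        if (!is_composite[i])
            for (long j = i * i; j <= LIMIT; j += i)
                is_composite[j] = true;     /* cross off multiples of the prime i */

    int count = 0;
    for (long i = 2; i <= LIMIT; i++)
        if (!is_composite[i]) count++;
    printf("%d primes up to %d\n", count, LIMIT);

    free(is_composite);
    return 0;
}
```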

Bit count for all numbers up to 1048576 is wrong [duplicate]

I want to convert all integers below 1048576 to binary and display all numbers which have the same number of bits set as unset. My program works fine when I use an array t of 20 integers, in which case cpt records the correct result.
However, when I use an array t of 40 integers (which means I want the numbers with 20 '1' bits and 20 '0' bits), the counter ends up as 1. What is wrong?
#include <stdio.h>

int main(){
    long int a;
    int r, j, i;
    long int aux;
    int z, u;
    long int cpt;
    int t[40];

    for (int k = 0; k < 40; k++) t[k] = 0;
    cpt = 0;
    for (a = 0; a < 1048576; a++){
        j = 0; u = 0; z = 0;
        aux = a;
        do{
            r = aux % 2;
            switch (r){
            case 0:
                t[j] = 0;
                aux = aux / 2;
                j++;
                break;
            case 1:
                t[j] = 1;
                aux = (aux - 1) / 2;
                j++;
                break;
            }
        } while (aux != 0);
        for (i = 0; i < 40; i++){
            if (t[i] == 0) z++;
            else u++;
        }
        if (z == u) cpt++;
    }
    printf("%ld", cpt);
    getchar();
}
Your loop only goes to 1048576, which is 2^20.
Don't you need to loop until 2^40?
Also, note that int may not be 40 bits wide.
Note:
The naive solution to check all numbers doesn't scale well. Perhaps you should consider a smarter solution?
Because only one number in the range [0, 1048576) has exactly as many 1 bits as 0 bits when counted in your 40-entry "bit" array.
The flaw in your logic is that you do not examine all numbers in a given range. For instance, when you want to examine all 40-bit integers, you need to iterate until 2^40 and not 2^20.
Lastly, this brute force solution won't work very well for your problem. Instead, try to consider the pattern that appears when you examine, on a piece of paper, the number of paths from the top-left node proceeding down or right in a small grid. Does one emerge? If you're math-inclined, you will instantly recognise it; otherwise, take a minute to look through the binomial coefficients.
As others have said, the main thing that is wrong is the algorithm you are using (exhaustive search); you would need to loop from 0 to 2^40-1 to iterate across the entire set of numbers to be tested (rather than to 2^20-1), which would be an impractical number of iterations, not least as there are faster ways.
Consider the maths of the problem: you want a 40 bit field, with 20 bits set to 1. So, you are choosing 20 things from 40. Think about the nCr (combination operator from permutations and combinations); that will give you the link to the binomial coefficients. Now think how you might write an algorithm to go through every combination.
Your code would also be more comprehensible if it did not have single letter variable names, and had some comments explaining what it was meant to be doing.
If you are having difficulty remembering the bit width of integer types, I suggest you use
#include <stdint.h>
and use types like int64_t which is guaranteed to be 64 bits. See:
http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/stdint.h.html
(Note that OP apparently wanted a list of the numbers with 20 bits set, rather than just the count of such numbers, so merely looking at binomial coefficients is insufficient.)
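For the counting side of the nCr suggestion, a rough sketch (this produces only the count, not the list the OP apparently wanted):

```c
#include <stdio.h>

/* C(n, r): number of ways to choose r items from n, computed with the
   multiplicative formula; each intermediate division is exact */
unsigned long long choose(int n, int r)
{
    unsigned long long result = 1;
    for (int i = 1; i <= r; i++)
        result = result * (n - r + i) / i;
    return result;
}

int main(void)
{
    /* number of 40-bit values with exactly twenty 1 bits */
    printf("%llu\n", choose(40, 20));   /* 137846528820 */
    return 0;
}
```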
try using a long long int:
unsigned long long int a;
To speed up and simplify a lot of the code, try also using popcount; it's a builtin that returns the number of 1 bits in x: http://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html.
// For long long ints there is a variant, __builtin_popcountll, which takes an unsigned long long.
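As a small illustration of the builtin (GCC/Clang specific), here is the 20-bit version of the count using __builtin_popcountll; the 40-bit case would still need 2^40 iterations, which is why the combinatorial route above is preferable:

```c
#include <stdio.h>

int main(void)
{
    /* count values below 2^20 with ten 1 bits and ten 0 bits */
    long long count = 0;
    for (unsigned long long a = 0; a < 1048576ULL; a++)
        if (__builtin_popcountll(a) == 10)     /* exactly ten of the 20 bits set */
            count++;
    printf("%lld\n", count);                   /* C(20, 10) = 184756 */
    return 0;
}
```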
BTW I have a question about the same euler problem #: Binary numbers with the same quantity of 0s and 1s

Does "n * (rand() / RAND_MAX)" make a skewed random number distribution?

I'd like to find an unskewed way of getting random numbers in C (although at most I'm going to be using it for values of 0-20, and more likely only 0-8). I've seen this formula but after running some tests I'm not sure if it's skewed or not. Any help?
Here is the full function used:
int randNum()
{
    return 1 + (int) (10.0 * (rand() / (RAND_MAX + 1.0)));
}
I seeded it using:
unsigned int iseed = (unsigned int)time(NULL);
srand (iseed);
The one suggested below refuses to work for me. I tried:
int greek;
for (j = 0; j < 50000; j++)
{
    greek = rand_lim(5);
    printf("%d, ", greek);
    greek = (int) (NUM * (rand() / (RAND_MAX + 1.0)));
    int togo = number[greek];
    number[greek] = togo + 1;
}
and it stops working and gives me the same number 50000 times when I comment out printf.
Yes, it's skewed, unless RAND_MAX + 1 happens to be a multiple of 10.
If you take the numbers from 0 to RAND_MAX (that's RAND_MAX + 1 values) and try to divide them into 10 piles, you really have only three possibilities:
RAND_MAX + 1 is a multiple of 10, and the piles come out even.
RAND_MAX + 1 is not a multiple of 10, and the piles come out uneven.
You split it into uneven groups to start with, but throw away all the "extras" that would make it uneven.
You rarely have control over RAND_MAX, and on common implementations it's 2^31 - 1 or 32767, so RAND_MAX + 1 is a power of two and certainly not a multiple of 10. That really only leaves 2 and 3 as possibilities.
The third option looks roughly like this:
[Edit: After some thought, I've revised this to produce numbers in the range 0...(limit-1), to fit with the way most things in C and C++ work. This also simplifies the code (a tiny bit).]
int rand_lim(int limit) {
    /* return a random number in the range [0..limit) */
    int divisor = RAND_MAX / limit;
    int retval;

    do {
        retval = rand() / divisor;
    } while (retval == limit);

    return retval;
}
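A quick usage sketch (my own, not part of the original answer): seed once at startup, then call rand_lim for each draw:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int rand_lim(int limit) {              /* same function as above */
    int divisor = RAND_MAX / limit;
    int retval;
    do {
        retval = rand() / divisor;
    } while (retval == limit);
    return retval;
}

int main(void)
{
    srand((unsigned int)time(NULL));   /* seed once, not inside the loop */

    int counts[9] = {0};
    for (int i = 0; i < 90000; i++)
        counts[rand_lim(9)]++;         /* draws values 0..8, roughly evenly */

    for (int i = 0; i < 9; i++)
        printf("%d: %d\n", i, counts[i]);
    return 0;
}
```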
For anybody who questions whether this method might leave some skew, I also wrote a rather different version, purely for testing. This one uses a decidedly non-random generator with a very limited range, so we can simply iterate through every number in the range. It looks like this:
#include <stdlib.h>
#include <stdio.h>

#define MAX 1009

int next_val() {
    // just return consecutive numbers
    static int v = 0;
    return v++;
}

int lim(int limit) {
    int divisor = MAX / limit;
    int retval;

    do {
        retval = next_val() / divisor;
    } while (retval == limit);

    return retval;
}

#define LIMIT 10

int main() {
    // we'll allocate extra space at the end of the array:
    int buckets[LIMIT + 2] = {0};
    int i;

    for (i = 0; i < MAX; i++)
        ++buckets[lim(LIMIT)];

    // and print one beyond what *should* be generated
    for (i = 0; i < LIMIT + 1; i++)
        printf("%2d: %d\n", i, buckets[i]);
}
So, we're starting with the numbers from 0 to 1008 (1009 is prime, so it won't be an exact multiple of any range we choose). That gives us 1009 numbers to split into 10 buckets. That should put 100 in each bucket, with the 9 leftovers (so to speak) getting "eaten" by the do/while loop. As it's written right now, it allocates and prints out an extra bucket. When I run it, I get exactly 100 in each of buckets 0..9, and 0 in bucket 10. If I comment out the do/while loop, I see 100 in each of 0..9, and 9 in bucket 10.
Just to be sure, I've re-run the test with various other numbers for both the range produced (mostly used prime numbers), and the number of buckets. So far, I haven't been able to get it to produce skewed results for any range (as long as the do/while loop is enabled, of course).
One other detail: there is a reason I used division instead of remainder in this algorithm. With a good (or even decent) implementation of rand() it's irrelevant, but when you clamp numbers to a range using division, you keep the upper bits of the input. When you do it with remainder, you keep the lower bits of the input. As it happens, with a typical linear congruential pseudo-random number generator, the lower bits tend to be less random than the upper bits. A reasonable implementation will throw out a number of the least significant bits already, rendering this irrelevant. On the other hand, there are some pretty poor implementations of rand around, and with most of them, you end up with better quality of output by using division rather than remainder.
I should also point out that there are generators that do roughly the opposite -- the lower bits are more random than the upper bits. At least in my experience, these are quite uncommon; generators whose upper bits are more random are considerably more common.

What is the time complexity of this multiplication algorithm?

For the classic interview question "How do you perform integer multiplication without the multiplication operator?", the easiest answer is, of course, the following linear-time algorithm in C:
int mult(int multiplicand, int multiplier)
{
    int result = multiplicand;
    for (int i = 1; i < multiplier; i++)
    {
        result += multiplicand;   /* repeated addition: multiplier copies of multiplicand in total */
    }
    return result;
}
Of course, there is a faster algorithm. If we take advantage of the property that bit shifting to the left is equivalent to multiplying by 2 to the power of the number of bits shifted, we can bit-shift up to the nearest power of 2, and use our previous algorithm to add up from there. So, our code would now look something like this:
#include <math.h>

/* integer floor of log base 2 (renamed so it doesn't clash with log2() in <math.h>) */
int ilog2( double n )
{
    return log(n) / log(2);
}

int mult(int multiplicand, int multiplier)
{
    int nearest_power = 1 << ilog2(multiplier);       /* largest power of 2 <= multiplier */
    int result = multiplicand << ilog2(multiplier);   /* multiplicand * nearest_power in one shift */
    for (int i = nearest_power; i < multiplier; i++)
    {
        result += multiplicand;                       /* add up the remainder */
    }
    return result;
}
I'm having trouble determining what the time complexity of this algorithm is. I don't believe that O(n - 2^(floor(log2(n)))) is the correct way to express this, although (I think?) it's technically correct. Can anyone provide some insight on this?
multiplier - nearest_power can be as large as half of multiplier, and as multiplier tends towards infinity the constant 0.5 there doesn't matter (not to mention we get rid of constants in Big O). The loop is therefore O(multiplier). I'm not sure about the bit-shifting.
Edit: I took more of a look around on the bit-shifting. As gbulmer says, it can be O(n), where n is the number of bits shifted. However, it can also be O(1) on certain architectures. See: Is bit shifting O(1) or O(n)?
However, it doesn't matter in this case! n > log2(n) for all valid n. So we have O(n) + O(multiplier) which is a subset of O(2*multiplier) due to the aforementioned relationship, and thus the whole algorithm is O(multiplier).
The point of finding the nearest power is that your function's runtime can get close to O(1). That happens when nearest_power is very close to multiplier, so the loop has almost nothing left to add.
Behind the scenes the whole "to the power of 2" is done with bit shifting.
So, to answer your question, the second version of your code is still worse case linear time: O(multiplier).
Your answer, O(n - 2^(floor(log2(n)))), is not incorrect either; it's just very precise and hard to evaluate quickly in your head when you want the bounds.
Edit
Let's look at the second posted algorithm, starting with:
int nearest_power = 1 << ilog2(multiplier);
I believe calculating the base-2 logarithm is, rather pleasingly, O(log2(multiplier)).
Then nearest_power lands in the interval [multiplier/2, multiplier]; the gap left for the loop is at most multiplier/2. This is the same as finding the highest set bit of a positive number.
So the for loop is O(multiplier/2), the constant of 1/2 comes out, so it is O(n)
On average, it is half the interval away, which would be O(multiplier/4). But that is just the constant 1/4 * n, so it is still O(n), the constant is smaller but it is still O(n).
A faster algorithm.
Our intuition is that we can multiply by an n-digit number in n steps.
In binary this is using 1-bit shift, 1-bit test and binary add to construct the whole answer. Each of those operations is O(1). This is long-multiplication, one digit at a time.
If we use O(1) operations per bit of n, an x-bit number, the whole thing is O(log2(n)), or O(x), where x is the number of bits in the number.
This is an O(log2(n)) algorithm:
int mult(int multiplicand, int multiplier) {
    int product = 0;

    while (multiplier) {
        if (multiplier & 1) product += multiplicand;
        multiplicand <<= 1;
        multiplier >>= 1;
    }

    return product;
}
It is essentially how we do long multiplication.
Of course, the wise thing to do is use the smaller number as the multiplier. (I'll leave that as an exercise for the reader :-)
This only works for positive values, but by testing and remembering the signs of the input, operating on positive values, and then adjusting the sign, it works for all numbers.
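A sketch of that sign handling, wrapping the shift-and-add loop above (it assumes neither input is INT_MIN, since abs() overflows there):

```c
#include <stdio.h>
#include <stdlib.h>

/* shift-and-add multiply for non-negative inputs (same loop as above) */
static int mult_unsigned(int multiplicand, int multiplier)
{
    int product = 0;
    while (multiplier) {
        if (multiplier & 1) product += multiplicand;
        multiplicand <<= 1;
        multiplier >>= 1;
    }
    return product;
}

/* multiply the magnitudes, then fix the sign afterwards */
int mult(int a, int b)
{
    int negative = (a < 0) != (b < 0);   /* negative iff exactly one input is negative */
    int product = mult_unsigned(abs(a), abs(b));
    return negative ? -product : product;
}

int main(void)
{
    printf("%d\n", mult(-6, 7));    /* -42 */
    printf("%d\n", mult(-6, -7));   /*  42 */
    return 0;
}
```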
