Divide times into two boxes and find the minimum difference - C

I have started to learn recursion and I am stuck on this simple problem. I believe there are more optimized ways to do this, but first I'm trying to learn the brute-force approach.
I have bag A and bag B and n items, each with some time (a float with two decimal places). The idea is to distribute the items between the two bags and obtain the minimum difference between the two bags' totals, trying all possible outcomes.
I thought about only one bag (let's say bag A), since the other bag will contain all the items that are not in bag A, and therefore the difference will be the absolute value of (total time sum - 2 * sum of the item times that are in bag A).
I'm calling my recursive function like this:
min = total_time;
recursive(0, items_number - 1, 0);
And the code for the function is this:
void recursive(int index, int step, float sum) {
    sum += items_time[index];
    float difference = fabs(total_time - 2 * sum);
    if (min > difference) {
        min = difference;
    }
    if (!(min == 0.00 || step == 1 || sum > middle_time)) {
        int i;
        for (i = 0; i < items_number; i++) {
            if (i != index) {
                recursive(i, step - 1, sum);
            }
        }
    }
}
Imagine I have 4 items with the times 1.23, 2.17, 2.95, 2.31.
I'm getting the result 0.30. I believe this is the correct result, but I'm almost certain that if it is, it's pure chance, because if I try bigger cases the program stalls after a while, probably because the recursion tree gets too big.
Can someone point me in some direction?

Okay, after the clarification, let me (hopefully) point you in a direction:
Let's assume that you know what n is, as mentioned in "n items". In your example, 2n was 4, making n = 2. Let's pick another n, let it be 3 this time, and our times shall be:
1.00
2.00
3.00
4.00
5.00
6.00
Now, we can already tell what the answer is; what you said is all correct: optimally each of the bags will have its n = 3 times summed up to middle_time, which is 21 / 2 = 10.5 in this case. Since integers can never sum up to a number with a decimal point, 10.5 : 10.5 can never be achieved in this example, but 10 : 11 can, and you can get 10 through 6.00 + 3.00 + 1.00 (3 elements), so... yeah, the answer is simply 1.
How would you let a computer calculate it? Well; recall what I said at the beginning:
Let us assume that you know what n is.
In that case a naive programmer would probably simply put all of this inside 2 or 3 nested for loops: 2 if he/she knew that the other half is determined as soon as you pick one half (by simply fixing the very first element in our group, since that element has to be in one of the groups), like you also know; 3 if he/she didn't know that. Let's do it with 2:
...
float difference;
int i;
for ( i = 1; i < items_number; i++ ) {
    sum = items_time[0] + items_time[i];
    int j;
    for ( j = i + 1; j < items_number; j++ ) {
        /* the sum of the group {0, i, j}; sum itself is not modified,
           so every group checked has exactly 3 elements */
        difference = fabs( total_time - 2 * ( sum + items_time[j] ) );
        if ( min > difference ) {
            min = difference;
        }
    }
}
...
Let me comment on the code a little for faster understanding: on the first cycle, it will add up the 0th time, the 1st time and then the 2nd time, as you may see; then it will do the same check you had made (calculate the difference and compare it with min). Let us call this the 012 group. The next group that will be checked will be 013, then 014, then 015; then 023, and so on... Every possible combination that splits the 6 into two 3s will be checked.
This operation shouldn't be at all tiresome for the computer. Even with this simple approach, the maximum number of tries will be the number of combinations of 3 you could have with 6 unique elements, divided by 2. In maths, people denote this as C(6, 3), which evaluates to (6 * 5 * 4) / (3 * 2 * 1) = 20; divided by 2, that's 10.
My guess is that the computer wouldn't make a problem of it even if n were 10, making the number of combinations as high as C(20, 10) / 2 = 92 378. It would, however, be a problem for you to write down 9 nested for loops by hand...
Anyway, the good thing is, you can recursively nest these loops. Here I will end my guidance; since you apparently are studying recursion already, it wouldn't be good for me to offer a full solution at this point. I can assure you that it is doable.
Also, the version I have made on my end can do it within a second for up to items_number = 22, without any optimizations; simply with brute force. That makes 352 716 combinations, and my machine is just a simple Windows tablet...
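For reference, here is a minimal sketch of what "recursively nesting those loops" might look like, assuming 2n items split into two groups of n and the question's globals (items_time, items_number, total_time, min); it is only an illustration, not a definitive implementation:

/* Pick the remaining members of the group that contains item 0, one index at
   a time, always moving forward so each combination is generated only once. */
void choose(int start, int left, float sum) {
    if (left == 0) {                                   /* the group is complete */
        float difference = fabs(total_time - 2 * sum);
        if (min > difference) {
            min = difference;
        }
        return;
    }
    int i;
    for (i = start; i < items_number; i++) {
        choose(i + 1, left - 1, sum + items_time[i]);  /* take item i, then recurse */
    }
}

/* called as: min = total_time; choose(1, items_number / 2 - 1, items_time[0]); */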

Your problem is called the Partition Problem. It is NP-hard, and past some point it will take a very long time to complete: the tree gets exponentially bigger as the number of cases to test grows.
The partition problem is well known and well documented on the internet. There exist some optimized solutions.

Your approach is not the naive brute-force approach, which would just walk through the list of items and put each one into bag A or bag B recursively, choosing the case with the minimum difference, for example:
double recurse(double arr[], int n, double l, double r)
{
    double ll, rr;
    if (n == 0) return fabs(l - r);
    ll = recurse(arr + 1, n - 1, l + *arr, r);
    rr = recurse(arr + 1, n - 1, l, r + *arr);
    if (ll > rr) return rr;
    return ll;
}
(This code is very naive - it doesn't quit early on clearly non-optimal cases, and it also wastes time by calculating every case twice, with bags A and B swapped. It is brute force, however.)
Your maximum recursion depth is the number of items n, and the recursive function is called O(2^n) times.
In your code, you can put the same item into a bag over and over:
for (i = 0; i < items_number; i++) {
    if (i != index) {
        recursive(i, step - 1, sum);
    }
}
This loop prevents you from re-adding the current item, but it will happily add items that were put into the bag at earlier recursion levels a second (or third) time. If you want to use that approach, you must keep state recording which item is in which bag.
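As an illustration (a sketch, not the asker's code) of keeping that state, one could add a used[] flag per item, assuming the question's globals (items_time, items_number, total_time, min) and a hypothetical MAX_ITEMS bound; note that it still visits the same bag contents once per ordering, which is why the simpler recurse() above is preferable:

#define MAX_ITEMS 64        /* hypothetical upper bound on the item count */
int used[MAX_ITEMS];        /* used[i] != 0 means item i is already in bag A */

void recursive2(float sum) {
    float difference = fabs(total_time - 2 * sum);
    if (min > difference) {
        min = difference;
    }
    int i;
    for (i = 0; i < items_number; i++) {
        if (!used[i]) {
            used[i] = 1;                       /* put item i into bag A */
            recursive2(sum + items_time[i]);
            used[i] = 0;                       /* take it back out */
        }
    }
}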
Also, I don't understand your step. You start with step - 1 and stop recursion when step == 1. That means you are considering n - 2 items. I understand that the other items are in the other bag, but that's a weird condition that won't let you find the solution to, say, {8.0, 2.4, 2.4, 2.8}.


What is the complexity of this? [duplicate]

Most people with a degree in CS will certainly know what Big O stands for.
It helps us to measure how well an algorithm scales.
But I'm curious, how do you calculate or approximate the complexity of your algorithms?
I'll do my best to explain it here in simple terms, but be warned that this topic takes my students a couple of months to finally grasp. You can find more information in Chapter 2 of the Data Structures and Algorithms in Java book.
There is no mechanical procedure that can be used to get the BigOh.
As a "cookbook", to obtain the BigOh from a piece of code you first need to realize that you are creating a math formula to count how many steps of computations get executed given an input of some size.
The purpose is simple: to compare algorithms from a theoretical point of view, without the need to execute the code. The lesser the number of steps, the faster the algorithm.
For example, let's say you have this piece of code:
int sum(int* data, int N) {
    int result = 0;               // 1
    for (int i = 0; i < N; i++) { // 2
        result += data[i];        // 3
    }
    return result;                // 4
}
This function returns the sum of all the elements of the array, and we want to create a formula to count the computational complexity of that function:
Number_Of_Steps = f(N)
So we have f(N), a function to count the number of computational steps. The input of the function is the size of the structure to process. It means that this function is called such as:
Number_Of_Steps = f(data.length)
The parameter N takes the data.length value. Now we need the actual definition of the function f(). This is done from the source code, in which each interesting line is numbered from 1 to 4.
There are many ways to calculate the BigOh. From this point forward we are going to assume that every sentence that doesn't depend on the size of the input data takes a constant number C of computational steps.
We are going to add up the individual number of steps of the function, and neither the local variable declaration nor the return statement depends on the size of the data array.
That means that lines 1 and 4 take C steps each, and the function is somewhat like this:
f(N) = C + ??? + C
The next part is to define the value of the for statement. Remember that we are counting the number of computational steps, meaning that the body of the for statement gets executed N times. That's the same as adding C, N times:
f(N) = C + (C + C + ... + C) + C = C + N * C + C
There is no mechanical rule to count how many times the body of the for gets executed; you need to count it by looking at what the code does. To simplify the calculations, we are ignoring the variable initialization, condition and increment parts of the for statement.
To get the actual BigOh we need the Asymptotic analysis of the function. This is roughly done like this:
Take away all the constants C.
From f() get the polynomium in its standard form.
Divide the terms of the polynomium and sort them by the rate of growth.
Keep the one that grows bigger when N approaches infinity.
Our f() has two terms:
f(N) = 2 * C * N ^ 0 + 1 * C * N ^ 1
Taking away all the C constants and redundant parts:
f(N) = 1 + N ^ 1
Since the last term is the one which grows bigger when N approaches infinity (think of limits), this is the BigOh argument, and the sum() function has a BigOh of:
O(N)
There are a few tricks to solve some tricky ones: use summations whenever you can.
As an example, this code can be easily solved using summations:
for (i = 0; i < 2*n; i += 2) { // 1
    for (j=n; j > i; j--) {    // 2
        foo();                 // 3
    }
}
The first thing you need to ask is the order of execution of foo(). While it is usually O(1), you need to ask your professors about it. O(1) means (almost, mostly) constant C, independent of the size N.
The for statement on sentence number one is tricky. While the index ends at 2 * N, the increment is done by two. That means that the first for gets executed only N times, and we need to divide the count by two.
f(N) = Summation(i from 1 to 2 * N / 2)( ... ) =
= Summation(i from 1 to N)( ... )
Sentence number two is even trickier, since it depends on the value of i. Take a look: the index i takes the values 0, 2, 4, 6, 8, ..., 2 * N, and the second for gets executed N times on the first pass, N - 2 on the second, N - 4 on the third... up to the N / 2 stage, at which the second for never gets executed.
On formula, that means:
f(N) = Summation(i from 1 to N)( Summation(j = ???)( ) )
Again, we are counting the number of steps. And by definition, every summation should always start at one and end at a number greater than or equal to one.
f(N) = Summation(i from 1 to N)( Summation(j from 1 to (N - (i - 1) * 2))( C ) )
(We are assuming that foo() is O(1) and takes C steps.)
We have a problem here: when i takes the value N / 2 + 1 upwards, the inner Summation ends at a negative number! That's impossible and wrong. We need to split the summation in two, being the pivotal point the moment i takes N / 2 + 1.
f(N) = Summation(i from 1 to N / 2)( Summation(j from 1 to (N - (i - 1) * 2))( C ) ) + Summation(i from 1 to N / 2)( C )
Since the pivotal moment i > N / 2, the inner for won't get executed, and we are assuming a constant C execution complexity on its body.
Now the summations can be simplified using some identity rules:
Summation(w from 1 to N)( C ) = N * C
Summation(w from 1 to N)( A (+/-) B ) = Summation(w from 1 to N)( A ) (+/-) Summation(w from 1 to N)( B )
Summation(w from 1 to N)( w * C ) = C * Summation(w from 1 to N)( w ) (C is a constant, independent of w)
Summation(w from 1 to N)( w ) = (N * (N + 1)) / 2
Applying some algebra:
f(N) = Summation(i from 1 to N / 2)( (N - (i - 1) * 2) * ( C ) ) + (N / 2)( C )
f(N) = C * Summation(i from 1 to N / 2)( (N - (i - 1) * 2)) + (N / 2)( C )
f(N) = C * (Summation(i from 1 to N / 2)( N ) - Summation(i from 1 to N / 2)( (i - 1) * 2)) + (N / 2)( C )
f(N) = C * (( N ^ 2 / 2 ) - 2 * Summation(i from 1 to N / 2)( i - 1 )) + (N / 2)( C )
=> Summation(i from 1 to N / 2)( i - 1 ) = Summation(i from 1 to N / 2 - 1)( i )
f(N) = C * (( N ^ 2 / 2 ) - 2 * Summation(i from 1 to N / 2 - 1)( i )) + (N / 2)( C )
f(N) = C * (( N ^ 2 / 2 ) - 2 * ( (N / 2 - 1) * (N / 2 - 1 + 1) / 2) ) + (N / 2)( C )
=> (N / 2 - 1) * (N / 2 - 1 + 1) / 2 =
(N / 2 - 1) * (N / 2) / 2 =
((N ^ 2 / 4) - (N / 2)) / 2 =
(N ^ 2 / 8) - (N / 4)
f(N) = C * (( N ^ 2 / 2 ) - 2 * ( (N ^ 2 / 8) - (N / 4) )) + (N / 2)( C )
f(N) = C * (( N ^ 2 / 2 ) - ( (N ^ 2 / 4) - (N / 2) )) + (N / 2)( C )
f(N) = C * (( N ^ 2 / 2 ) - (N ^ 2 / 4) + (N / 2)) + (N / 2)( C )
f(N) = C * ( N ^ 2 / 4 ) + C * (N / 2) + C * (N / 2)
f(N) = C * ( N ^ 2 / 4 ) + 2 * C * (N / 2)
f(N) = C * ( N ^ 2 / 4 ) + C * N
f(N) = C * 1/4 * N ^ 2 + C * N
And the BigOh is:
O(N²)
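If you want to sanity-check an analysis like this empirically, you can simply count the steps for a few values of N; a minimal sketch (where foo() is just a stand-in counter, an assumption for the experiment, not part of the original code) could look like this:

#include <stdio.h>

static long calls;
static void foo(void) { calls++; }   /* stand-in body: just count invocations */

int main(void) {
    int n;
    for (n = 100; n <= 1600; n *= 2) {
        int i, j;
        calls = 0;
        for (i = 0; i < 2 * n; i += 2)
            for (j = n; j > i; j--)
                foo();
        /* the counts should track the leading term N^2/4 from the analysis */
        printf("N = %5d  foo() calls = %9ld  N^2/4 = %9ld\n",
               n, calls, (long)n * n / 4);
    }
    return 0;
}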
Big O gives the upper bound for time complexity of an algorithm. It is usually used in conjunction with processing data sets (lists) but can be used elsewhere.
A few examples of how it's used in C code.
Say we have an array of n elements
int array[n];
If we wanted to access the first element of the array this would be O(1) since it doesn't matter how big the array is, it always takes the same constant time to get the first item.
x = array[0];
If we wanted to find a number in the list:
for(int i = 0; i < n; i++){
if(array[i] == numToFind){ return i; }
}
This would be O(n) since at most we would have to look through the entire list to find our number. The Big-O is still O(n) even though we might find our number the first try and run through the loop once because Big-O describes the upper bound for an algorithm (omega is for lower bound and theta is for tight bound).
When we get to nested loops:
for(int i = 0; i < n; i++){
for(int j = i; j < n; j++){
array[j] += 2;
}
}
This is O(n^2) since for each pass of the outer loop ( O(n) ) we have to go through the entire list again so the n's multiply leaving us with n squared.
This is barely scratching the surface but when you get to analyzing more complex algorithms complex math involving proofs comes into play. Hope this familiarizes you with the basics at least though.
While knowing how to figure out the Big O time for your particular problem is useful, knowing some general cases can go a long way in helping you make decisions in your algorithm.
Here are some of the most common cases, lifted from http://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions:
O(1) - Determining if a number is even or odd; using a constant-size lookup table or hash table
O(log n) - Finding an item in a sorted array with a binary search
O(n) - Finding an item in an unsorted list; adding two n-digit numbers
O(n^2) - Multiplying two n-digit numbers by a simple algorithm; adding two n×n matrices; bubble sort or insertion sort
O(n^3) - Multiplying two n×n matrices by the simple algorithm
O(c^n) - Finding the (exact) solution to the traveling salesman problem using dynamic programming; determining if two logical statements are equivalent using brute force
O(n!) - Solving the traveling salesman problem via brute-force search
O(n^n) - Often used instead of O(n!) to derive simpler formulas for asymptotic complexity
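To make one of these entries concrete, here is a small sketch of the O(log n) case: a binary search over a sorted int array, where halving the remaining range on every comparison is what produces the logarithm:

#include <stddef.h>

/* returns the index of key in the sorted array a[0..n-1], or -1 if absent */
int binary_search(const int *a, size_t n, int key) {
    size_t lo = 0, hi = n;             /* search the half-open range [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return (int)mid;
        if (a[mid] < key)  lo = mid + 1;   /* discard the lower half */
        else               hi = mid;       /* discard the upper half */
    }
    return -1;
}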
Small reminder: the big O notation is used to denote asymptotic complexity (that is, when the size of the problem grows to infinity), and it hides a constant.
This means that between an algorithm in O(n) and one in O(n^2), the fastest is not always the first one (though there always exists a value of n such that for problems of size > n, the first algorithm is the fastest).
Note that the hidden constant very much depends on the implementation!
Also, in some cases, the runtime is not a deterministic function of the size n of the input. Take sorting using quick sort for example: the time needed to sort an array of n elements is not a constant but depends on the starting configuration of the array.
There are different time complexities:
Worst case (usually the simplest to figure out, though not always very meaningful)
Average case (usually much harder to figure out...)
...
A good introduction is An Introduction to the Analysis of Algorithms by R. Sedgewick and P. Flajolet.
As you say, premature optimisation is the root of all evil, and (if possible) profiling really should always be used when optimising code. It can even help you determine the complexity of your algorithms.
Seeing the answers here, I think we can conclude that most of us do indeed approximate the order of the algorithm by looking at it and using common sense instead of calculating it with, for example, the master method, as we were taught at university.
With that said I must add that even the professor encouraged us (later on) to actually think about it instead of just calculating it.
Also I would like to add how it is done for recursive functions:
suppose we have a function like (scheme code):
(define (fac n)
(if (= n 0)
1
(* n (fac (- n 1)))))
which recursively calculates the factorial of the given number.
The first step is to try to determine the performance characteristic of the body of the function only; in this case, nothing special is done in the body, just a multiplication (or the return of the value 1).
So the performance for the body is: O(1) (constant).
Next try and determine this for the number of recursive calls. In this case we have n-1 recursive calls.
So the performance for the recursive calls is: O(n-1) (order is n, as we throw away the insignificant parts).
Then put those two together and you then have the performance for the whole recursive function:
1 * (n-1) = O(n)
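For comparison, the same reasoning applied to a C version (a sketch added here for illustration): the body costs O(1) per call and there are n recursive calls before the base case, so the whole function is O(n):

/* Factorial: one O(1) multiplication per level, n levels deep -> O(n).
   (The result overflows for n > 20, but only the call count matters here.) */
unsigned long long fac(unsigned int n) {
    if (n == 0)
        return 1;              /* base case: constant work */
    return n * fac(n - 1);     /* one more recursive call, O(1) extra work */
}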
Peter, to answer your raised issues; the method I describe here actually handles this quite well. But keep in mind that this is still an approximation and not a full mathematically correct answer. The method described here is also one of the methods we were taught at university, and if I remember correctly was used for far more advanced algorithms than the factorial I used in this example.
Of course it all depends on how well you can estimate the running time of the body of the function and the number of recursive calls, but that is just as true for the other methods.
If your cost is a polynomial, just keep the highest-order term, without its multiplier. E.g.:
O((n/2 + 1)*(n/2)) = O(n^2/4 + n/2) = O(n^2/4) = O(n^2)
This doesn't work for infinite series, mind you. There is no single recipe for the general case, though for some common cases, the following inequalities apply:
O(log N) < O(N) < O(N log N) < O(N^2) < O(N^k) < O(e^N) < O(N!)
I think about it in terms of information. Any problem consists of learning a certain number of bits.
Your basic tool is the concept of decision points and their entropy. The entropy of a decision point is the average information it will give you. For example, if a program contains a decision point with two branches, its entropy is the sum of the probability of each branch times the log2 of the inverse probability of that branch. That's how much you learn by executing that decision.
For example, an if statement having two branches, both equally likely, has an entropy of 1/2 * log(2/1) + 1/2 * log(2/1) = 1/2 * 1 + 1/2 * 1 = 1. So its entropy is 1 bit.
Suppose you are searching a table of N items, like N=1024. That is a 10-bit problem because log(1024) = 10 bits. So if you can search it with IF statements that have equally likely outcomes, it should take 10 decisions.
That's what you get with binary search.
Suppose you are doing linear search. You look at the first element and ask if it's the one you want. The probabilities are 1/1024 that it is, and 1023/1024 that it isn't. The entropy of that decision is 1/1024*log(1024/1) + 1023/1024 * log(1024/1023) = 1/1024 * 10 + 1023/1024 * about 0 = about .01 bit. You've learned very little! The second decision isn't much better. That is why linear search is so slow. In fact it's exponential in the number of bits you need to learn.
Suppose you are doing indexing. Suppose the table is pre-sorted into a lot of bins, and you use some or all of the bits in the key to index directly to the table entry. If there are 1024 bins, the entropy is 1/1024 * log(1024) + 1/1024 * log(1024) + ... for all 1024 possible outcomes. This is 1/1024 * 10 times 1024 outcomes, or 10 bits of entropy for that one indexing operation. That is why indexing search is fast.
Now think about sorting. You have N items, and you have a list. For each item, you have to search for where the item goes in the list, and then add it to the list. So sorting takes roughly N times the number of steps of the underlying search.
So sorts based on binary decisions having roughly equally likely outcomes all take about O(N log N) steps. An O(N) sort algorithm is possible if it is based on indexing search.
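As a sketch of such an O(N) sort based on indexing (assuming the keys are small non-negative integers below some known bound, which is exactly the extra information that makes the indexing possible), a counting sort looks like this:

#include <string.h>

#define KEY_LIMIT 1024   /* assumed bound on the key values */

void counting_sort(int *a, int n) {
    int bins[KEY_LIMIT];
    int i, k, out = 0;
    memset(bins, 0, sizeof bins);
    for (i = 0; i < n; i++)            /* O(N): drop each key into its bin */
        bins[a[i]]++;
    for (k = 0; k < KEY_LIMIT; k++)    /* O(KEY_LIMIT): read the bins back out in order */
        for (i = 0; i < bins[k]; i++)
            a[out++] = k;
}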
I've found that nearly all algorithmic performance issues can be looked at in this way.
Lets start from the beginning.
First of all, accept the principle that certain simple operations on data can be done in O(1) time, that is, in time that is independent of the size of the input. These primitive operations in C consist of
Arithmetic operations (e.g. + or %).
Logical operations (e.g., &&).
Comparison operations (e.g., <=).
Structure accessing operations (e.g. array-indexing like A[i], or pointer following with the -> operator).
Simple assignment such as copying a value into a variable.
Calls to library functions (e.g., scanf, printf).
The justification for this principle requires a detailed study of the machine instructions (primitive steps) of a typical computer. Each of the described operations can be done with some small number of machine instructions; often only one or two instructions are needed.
As a consequence, several kinds of statements in C can be executed in O(1) time, that is, in some constant amount of time independent of input. These simple statements include:
Assignment statements that do not involve function calls in their expressions.
Read statements.
Write statements that do not require function calls to evaluate arguments.
The jump statements break, continue, goto, and return expression, where expression does not contain a function call.
In C, many for-loops are formed by initializing an index variable to some value and incrementing that variable by 1 each time around the loop. The for-loop ends when the index reaches some limit. For instance, the for-loop
for (i = 0; i < n-1; i++)
{
    small = i;
    for (j = i+1; j < n; j++)
        if (A[j] < A[small])
            small = j;
    temp = A[small];
    A[small] = A[i];
    A[i] = temp;
}
uses index variable i. It increments i by 1 each time around the loop, and the iterations stop when i reaches n − 1.
However, for the moment, focus on the simple form of for-loop, where the difference between the final and initial values, divided by the amount by which the index variable is incremented tells us how many times we go around the loop. That count is exact, unless there are ways to exit the loop via a jump statement; it is an upper bound on the number of iterations in any case.
For instance, the for-loop iterates ((n − 1) − 0)/1 = n − 1 times, since 0 is the initial value of i, n − 1 is the highest value reached by i (i.e., when i reaches n − 1, the loop stops and no iteration occurs with i = n − 1), and 1 is added to i at each iteration of the loop.
In the simplest case, where the time spent in the loop body is the same for each iteration, we can multiply the big-oh upper bound for the body by the number of times around the loop. Strictly speaking, we must then add O(1) time to initialize the loop index and O(1) time for the first comparison of the loop index with the limit, because we test one more time than we go around the loop. However, unless it is possible to execute the loop zero times, the time to initialize the loop and test the limit once is a low-order term that can be dropped by the summation rule.
Now consider this example:
(1) for (j = 0; j < n; j++)
(2) A[i][j] = 0;
We know that line (1) takes O(1) time. Clearly, we go around the loop n times, as we can determine by subtracting the lower limit from the upper limit found on line (1) and then adding 1. Since the body, line (2), takes O(1) time, we can neglect the time to increment j and the time to compare j with n, both of which are also O(1). Thus, the running time of lines (1) and (2) is the product of n and O(1), which is O(n).
Similarly, we can bound the running time of the outer loop consisting of lines (2) through (4), which is
(2) for (i = 0; i < n; i++)
(3) for (j = 0; j < n; j++)
(4) A[i][j] = 0;
We have already established that the loop of lines (3) and (4) takes O(n) time. Thus, we can neglect the O(1) time to increment i and to test whether i < n in each iteration, concluding that each iteration of the outer loop takes O(n) time. The initialization i = 0 of the outer loop and the (n + 1)st test of the condition i < n likewise take O(1) time and can be neglected. Finally, we observe that we go around the outer loop n times, taking O(n) time for each iteration, giving a total O(n^2) running time.
A more practical example.
If you want to estimate the order of your code empirically rather than by analyzing the code, you could stick in a series of increasing values of n and time your code. Plot your timings on a log scale. If the code is O(x^n), the values should fall on a line of slope n.
This has several advantages over just studying the code. For one thing, you can see whether you're in the range where the run time approaches its asymptotic order. Also, you may find that some code that you thought was order O(x) is really order O(x^2), for example, because of time spent in library calls.
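A minimal sketch of that empirical approach in C (work() here is a stand-in O(n^2) body, purely an assumption for the demo): time the code for doubling values of n, print the pairs, and plot log(time) against log(n); the slope of the resulting line estimates the order:

#include <stdio.h>
#include <time.h>

static void work(int n) {
    volatile long sink = 0;            /* volatile so the loops aren't optimized away */
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            sink += i ^ j;
}

int main(void) {
    for (int n = 1000; n <= 16000; n *= 2) {
        clock_t t0 = clock();
        work(n);
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("%d,%f\n", n, secs);    /* paste into a spreadsheet and plot on log scales */
    }
    return 0;
}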
Basically, the thing that crops up 90% of the time is just analyzing loops. Do you have single, double, triple nested loops? Then you have O(n), O(n^2), O(n^3) running time.
Very rarely (unless you are writing a platform with an extensive base library, like for instance the .NET BCL or C++'s STL) will you encounter anything that is more difficult than just looking at your loops (for statements, while, goto, etc...).
Less useful generally, I think, but for the sake of completeness there is also a Big Omega Ω, which defines a lower-bound on an algorithm's complexity, and a Big Theta Θ, which defines both an upper and lower bound.
Big O notation is useful because it's easy to work with and hides unnecessary complications and details (for some definition of unnecessary). One nice way of working out the complexity of divide and conquer algorithms is the tree method. Let's say you have a version of quicksort with the median procedure, so you split the array into perfectly balanced subarrays every time.
Now build a tree corresponding to all the arrays you work with. At the root you have the original array, the root has two children which are the subarrays. Repeat this until you have single element arrays at the bottom.
Since we can find the median in O(n) time and split the array in two parts in O(n) time, the work done at each node is O(k) where k is the size of the array. Each level of the tree contains (at most) the entire array so the work per level is O(n) (the sizes of the subarrays add up to n, and since we have O(k) per level we can add this up). There are only log(n) levels in the tree since each time we halve the input.
Therefore we can upper bound the amount of work by O(n*log(n)).
However, Big O hides some details which we sometimes can't ignore. Consider computing the Fibonacci sequence with
a = 0;
b = 1;
for (i = 0; i < n; i++) {
    tmp = b;
    b = a + b;
    a = tmp;
}
and let's just assume that a and b are BigIntegers in Java or something that can handle arbitrarily large numbers. Most people would say this is an O(n) algorithm without flinching. The reasoning is that you have n iterations in the for loop and O(1) work inside the loop.
But Fibonacci numbers are large; the n-th Fibonacci number is exponential in n, so just storing it will take on the order of n bytes. Performing addition with big integers will take O(n) amount of work. So the total amount of work done in this procedure is
1 + 2 + 3 + ... + n = n(n + 1)/2 = O(n^2)
So this algorithm runs in quadratic time!
Familiarity with the algorithms/data structures I use and/or quick glance analysis of iteration nesting. The difficulty is when you call a library function, possibly multiple times - you can often be unsure of whether you are calling the function unnecessarily at times or what implementation they are using. Maybe library functions should have a complexity/efficiency measure, whether that be Big O or some other metric, that is available in documentation or even IntelliSense.
Break down the algorithm into pieces you know the big O notation for, and combine through big O operators. That's the only way I know of.
For more information, check the Wikipedia page on the subject.
As to "how do you calculate" Big O, this is part of Computational complexity theory. For some (many) special cases you may be able to come with some simple heuristics (like multiplying loop counts for nested loops), esp. when all you want is any upper bound estimation, and you do not mind if it is too pessimistic - which I guess is probably what your question is about.
If you really want to answer your question for any algorithm the best you can do is to apply the theory. Besides of simplistic "worst case" analysis I have found Amortized analysis very useful in practice.
For the 1st case, the inner loop is executed n - i times, so the total number of executions is the sum of n - i for i going from 0 to n - 1 (because of "lower than", not "lower than or equal"). You finally get n * (n + 1) / 2, so O(n²/2) = O(n²).
For the 2nd loop, i is between 0 and n included for the outer loop; then the inner loop is executed when j is strictly greater than n, which is then impossible.
I would like to explain Big-O from a slightly different aspect.
Big-O is just for comparing the complexity of programs, which means how fast they grow when the inputs increase, not the exact time that is spent to do the action.
IMHO, in Big-O formulas you had better not use more complex equations (you might just stick to the common ones, such as O(1), O(log n), O(n), O(n log n), O(n^2), O(2^n)). You can still use other, more precise formulas (like 3^n, n^3, ...), but more than that can sometimes be misleading! So better to keep it as simple as possible.
I would like to emphasize once again that here we don't want to get an exact formula for our algorithm. We only want to show how it grows when the inputs grow, and to compare it with other algorithms in that sense. Otherwise you would be better off using different methods, like benchmarking.
In addition to using the master method (or one of its specializations), I test my algorithms experimentally. This can't prove that any particular complexity class is achieved, but it can provide reassurance that the mathematical analysis is appropriate. To help with this reassurance, I use code coverage tools in conjunction with my experiments, to ensure that I'm exercising all the cases.
As a very simple example say you wanted to do a sanity check on the speed of the .NET framework's list sort. You could write something like the following, then analyze the results in Excel to make sure they did not exceed an n*log(n) curve.
In this example I measure the number of comparisons, but it's also prudent to examine the actual time required for each sample size. However then you must be even more careful that you are just measuring the algorithm and not including artifacts from your test infrastructure.
int nCmp = 0;
System.Random rnd = new System.Random();

// measure the number of comparisons required to sort a list of n integers
void DoTest(int n)
{
    List<int> lst = new List<int>(n);
    for( int i=0; i<n; i++ )
        lst.Add( rnd.Next(0,1000) );

    // as we sort, keep track of the number of comparisons performed!
    nCmp = 0;
    lst.Sort( delegate( int a, int b ) { nCmp++; return (a<b) ? -1 : ((a>b) ? 1 : 0); } );

    System.Console.WriteLine( "{0},{1}", n, nCmp );
}

// Perform measurement for a variety of sample sizes.
// It would be prudent to check multiple random samples of each size, but this is OK for a quick sanity check
for( int n = 0; n<1000; n++ )
    DoTest(n);
Don't forget to also allow for space complexity, which can also be a cause for concern if memory resources are limited. So, for example, you may hear someone wanting a constant-space algorithm, which is basically a way of saying that the amount of space taken by the algorithm doesn't depend on any factors inside the code.
Sometimes the complexity can come from how many times something is called, how often a loop is executed, how often memory is allocated, and so on; that is another part of answering this question.
Lastly, big O can be used for worst case, best case, and amortized cases, where generally it is the worst case that is used to describe how bad an algorithm may be.
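As a small illustration of the constant-space point (a sketch added here, not part of the answer above): reversing an array in place needs only O(1) extra memory no matter how large n is, whereas building a reversed copy would need O(n):

#include <stddef.h>

void reverse_in_place(int *a, size_t n) {
    size_t i = 0, j = (n > 0) ? n - 1 : 0;
    while (i < j) {
        int tmp = a[i];     /* a single temporary: constant extra space */
        a[i++] = a[j];
        a[j--] = tmp;
    }
}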
First of all, the accepted answer is trying to explain nice fancy stuff, but I think that intentionally complicating Big-Oh is not the solution that programmers (or at least people like me) are searching for.
Big Oh (in short)
function f(text) {
    var n = text.length;
    for (var i = 0; i < n; i++) {
        f(text.slice(0, n-1))
    }
    // ... other JS logic here, which we can ignore ...
}
Big Oh of the above is f(n) = O(n!), where n represents the number of items in the input set and f represents the operation done per item.
Big-Oh notation is the asymptotic upper-bound of the complexity of an algorithm.
In programming: the assumed worst-case time taken, or the assumed maximum repeat count of logic, for a given size of input.
Calculation
Keep in mind (from the above meaning) that we just need the worst-case time and/or the maximum repeat count affected by N (the size of the input).
Then take another look at (accepted answer's) example:
for (i = 0; i < 2*n; i += 2) { // line 123
    for (j=n; j > i; j--) {    // line 124
        foo();                 // line 125
    }
}
Begin with this search-pattern:
Find first line that N caused repeat behavior,
Or caused increase of logic executed,
But constant or not, ignore anything before that line.
Seems line 123 is what we are searching for ;-)
On first sight, the line seems to have 2*n max-looping.
But looking again, we see i += 2 (and that means half is skipped).
So, max repeat is simply n, write it down, like f(n) = O( n but don't close parenthesis yet.
Repeat search till method's end, and find next line matching our search-pattern, here that's line 124
Which is tricky, because of the strange condition and the reverse looping.
But remember that we just need to consider the maximum repeat count (or the worst-case time taken).
It's as easy as saying "the reverse loop j starts with j = n, am I right? yes, n seems to be the maximum possible repeat count", so:
Add n to previous write down's end,
but like "( n " instead of "+ n" (as this is inside previous loop),
and close parenthesis only if we find something outside of previous loop.
Search Done! why? because line 125 (or any other line after) does not match our search-pattern.
We can now close any parenthesis (left-open in our write down), resulting in below:
f(n) = O( n( n ) )
Try to further shorten "n( n )" part, like:
n( n ) = n * n
= n^2
Finally, just wrap it with Big Oh notation, like O(n²), or O(n^2) without formatting.
What often gets overlooked is the expected behavior of your algorithms. It doesn't change the Big-O of your algorithm, but it does relate to the statement "premature optimization. . .."
Expected behavior of your algorithm is -- very dumbed down -- how fast you can expect your algorithm to work on data you're most likely to see.
For instance, if you're searching for a value in a list, it's O(n), but if you know that most lists you see have your value up front, typical behavior of your algorithm is faster.
To really nail it down, you need to be able to describe the probability distribution of your "input space" (if you need to sort a list, how often is that list already going to be sorted? how often is it totally reversed? how often is it mostly sorted?) It's not always feasible that you know that, but sometimes you do.
great question!
Disclaimer: this answer contains false statements; see the comments below.
If you're using Big O, you're talking about the worst case (more on what that means later). Additionally, there is capital theta for average case and big omega for best case.
Check out this site for a lovely formal definition of Big O: https://xlinux.nist.gov/dads/HTML/bigOnotation.html
f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.
Ok, so now what do we mean by "best-case" and "worst-case" complexities?
This is probably most clearly illustrated through examples. For example if we are using linear search to find a number in a sorted array then the worst case is when we decide to search for the last element of the array as this would take as many steps as there are items in the array. The best case would be when we search for the first element since we would be done after the first check.
The point of all these adjective-case complexities is that we're looking for a way to graph the amount of time a hypothetical program runs to completion in terms of the size of particular variables. However, for many algorithms you can argue that there is not a single time for a particular size of input. Notice that this contradicts the fundamental requirement of a function: any input should have no more than one output. So we come up with multiple functions to describe an algorithm's complexity. Now, even though searching an array of size n may take varying amounts of time depending on what you're looking for in the array and proportionally to n, we can create an informative description of the algorithm using best-case, average-case, and worst-case classes.
Sorry this is so poorly written and lacks much technical information. But hopefully it'll make time complexity classes easier to think about. Once you become comfortable with these it becomes a simple matter of parsing through your program and looking for things like for-loops that depend on array sizes and reasoning based on your data structures what kind of input would result in trivial cases and what input would result in worst-cases.
I don't know how to solve this programmatically, but the first thing people do is sample the algorithm for certain patterns in the number of operations done, say 4n^2 + 2n + 1. We have 2 rules:
If we have a sum of terms, the term with the largest growth rate is kept, with other terms omitted.
If we have a product of several factors, constant factors are omitted.
If we simplify f(x), where f(x) is the formula for the number of operations done (4n^2 + 2n + 1, explained above), we obtain the big-O value, O(n^2) in this case. But this would have to account for Lagrange interpolation in the program, which may be hard to implement. And what if the real big-O value were O(2^n), or we might have something like O(x^n)? This algorithm probably wouldn't be programmable. But if someone proves me wrong, give me the code . . . .
For code A, the outer loop will execute n + 1 times; the extra '1' is the final check of whether i still meets the condition. The inner loop runs n times, then n - 2 times, and so on. Thus, 0 + 2 + .. + (n - 2) + n = (0 + n)(n + 1) / 2 = O(n²).
For code B, though the inner loop wouldn't step in and execute foo(), the inner loop test will still be executed n times, depending on the outer loop's execution count, which is O(n).

Interesting Powers Of 2 - algorithm/ Math (from Hackerrank ACM APC)

I came across a problem from a recent competition.
I was unable to figure out a solution, and no editorial for the question is yet available.
Question Link
I am quoting the problem statement here also in case the link doesn't work.
Find the number of integers n which are greater than or equal to A and less than or equal to B (A <= n <= B) such that the decimal representation of 2^n ends in n.
Ex: 2^36 = 68719476736 which ends in “36”.
INPUT
The first line contains an integer T i.e. number of test cases. T lines follow, each containing two integers A and B.
Constraints
1 <= T <= 10^5
A<=B
A,B <= 10^150
OUTPUT
Print T lines each containing the answer to the corresponding testcase.
Sample Input
2
36 36
100 500
Sample Output
1
0
As often happens in programming competitions, I have come up with a heuristic I have not proven, but it seems plausible. I have written a short program to find the numbers up to 1000000, and they are:
36
736
8736
48736
948736
Thus my theory is the following: each consecutive number has the previous one as a suffix and only adds one digit. Hope this will set you on the right track for the problem. Note that if my assumption is right, then you only need to find about 150 numbers, and finding each consecutive number requires checking the 9 digits that may be prepended.
A general piece of advice for similar problems: always try to find the first few numbers and think of some relation.
Also, it often happens in a competition that you come up with a theory like the one I propose above but have no time to prove it. You can't afford the time to prove it. Simply hope you are right and code.
EDIT: I believe I was able to prove my conjecture above (in fact I had missed some numbers - see the end of the post). First let me point out that, as v3ga states in a comment, the algorithm above works only up until 75353432948736, as at that point no single digit can be prepended to make the new number "interesting" as per the definition you give. However, I completely missed another option - you may prepend some number of 0s and then a non-zero digit.
I will now prove a lemma:
Lemma: if a1a2...an is an interesting number and n is more than 3, then a2a3...an is also interesting.
Proof:
2^(a1a2...an) = 2^(a1 * 10^(n-1)) * 2^(a2a3...an)
Now I will prove that 2^(a1 * 10^(n-1)) * 2^(a2a3...an) is congruent to 2^(a2a3...an) modulo 10^(n-1).
To do that, let's prove that 2^(a1 * 10^(n-1)) * 2^(a2a3...an) - 2^(a2a3...an) is divisible by 10^(n-1).
2^(a1 * 10^(n-1)) * 2^(a2a3...an) - 2^(a2a3...an) =
2^(a2a3...an) * (2^(a1 * 10^(n-1)) - 1)
a2a3...an is more than n-1 for the values we consider.
Thus all that's left to prove, in order to have 10^(n-1) dividing the difference, is that 5^(n-1) divides 2^(a1 * 10^(n-1)) - 1.
For this I will use Euler's theorem:
2^phi(5^(n-1)) = 1 (modulo 5^(n-1)).
Now phi(5^(n-1)) = 4 * 5^(n-2), and for n >= 3, 4 * 5^(n-2) will divide a1 * 10^(n-1) (actually even 10^(n-1) alone).
Thus 2^(a1 * 10^(n-1)) gives remainder 1 modulo 5^(n-1), and so 5^(n-1) divides 2^(a1 * 10^(n-1)) - 1.
Consequently 10^(n-1) divides 2^(a2a3...an) * (2^(a1 * 10^(n-1)) - 1), and so the last n-1 digits of 2^(a1a2...an) and 2^(a2a3...an) are the same.
Now, as a1a2...an is interesting, the last n digits of 2^(a1a2...an) are a1a2...an, and so the last n-1 digits of 2^(a2a3...an) are a2a3...an; consequently a2a3...an is also interesting. QED.
Use this lemma and you will be able to solve the problem. Please note that you may also prepend some zeros and then add a non-zero number.
In general, you can try solving these problems by finding some pattern in the output. Our team got this problem accepted at the contest. Our approach was to find a general pattern in the values that satisfy the criteria. If you print the first few such digits, then you will find the following pattern
36
736
8736
48736
948736
Thus the next number after 948736 should have 7 digits and can be any one of 1948736, 2948736, 3948736, 4948736, 5948736, 6948736, 7948736, 8948736, 9948736. Check which value is valid and you have the next number. Continuing in this fashion you can bootstrap your way to all the numbers up to 150 digits.
But there is a problem here. There will be some numbers that do not immediately follow from the previous number by prepending '1' to '9'. To counter this, you can then start prepending values from 10 to 99 and check whether there is a valid number. If there is still no valid number, then try prepending numbers from 100 to 999, and so on.
Now employing this hack, you will get all the 137 values that satisfy the criterion given in the question and easily answer all the queries. For example, working java code that implements this is shown here. It prints all the 137 values.
import java.io.*;
import java.math.*;
import java.util.*;

class Solution
{
    public static void main(String[] args) throws java.lang.Exception {
        new Solution().run();
    }

    void run() throws java.lang.Exception {
        BigInteger[] powers = new BigInteger[152];
        powers[0] = one;
        for(int i=1; i<=150; i++){
            powers[i] = powers[i-1].multiply(ten);
        }
        BigInteger[] answers = new BigInteger[152];
        answers[2] = BigInteger.valueOf(36);
        answers[3] = BigInteger.valueOf(736);
        int last = 3;
        for(int i=4; i<=150; i++){
            int dif = i-last;
            BigInteger start = ten.pow(dif-1);
            BigInteger end = start.multiply(ten);
            while(start.compareTo(end) < 0){
                // prepend "start" in front of the last answer found
                BigInteger newVal = powers[last].multiply(start);
                newVal = newVal.add(answers[last]);
                BigInteger modPow = pow(two, newVal, powers[i]);
                if(modPow.equals(newVal)){
                    answers[i] = newVal;
                    System.out.println(answers[i]);
                    last = i;
                    break;
                }
                start = start.add(one);
            }
        }
    }

    // modular exponentiation: b^e mod m, by repeated squaring
    BigInteger pow(BigInteger b, BigInteger e, BigInteger mod){
        if(e.equals(zero)){
            return one;
        }
        BigInteger x = pow(b, e.divide(two), mod);
        x = x.multiply(x).mod(mod);
        if(e.mod(two).equals(zero)){
            return x;
        }
        return x.multiply(b).mod(mod);  // odd exponent: multiply by the base b
    }

    BigInteger ten = BigInteger.valueOf(10);
    BigInteger zero = BigInteger.ZERO;
    BigInteger one = BigInteger.ONE;
    BigInteger two = BigInteger.valueOf(2);
}
This is a very interesting property. During the contest, checking with Python, I found that 36 was the only number under 500...
The property is: the last two digits of 2^36 are 36, and its last three digits are 736, so the next number is 736. 2^736 has 736 as its last three digits, and the next number is 8736...
And the series is: 36, 736, 8736, 48736, 948736...
And then we started with a BigInt class in C++.
But alas, there was no time, and the 4th problem wasn't solved. But after the contest, we did it in Python.
Here's the link: Ideone it!
def powm(i):
    j = 10
    a = 1
    while i:
        if i % 2:
            a = a * j
        i /= 2
        j *= j
    return a

def power(n, i):
    m = powm(i)
    y = 1
    x = 2
    while n:
        if n % 2 == 1:
            y = y * x % m
        x = x * x % m
        n /= 2
    return y

mylist = []
mylist.append(power(36, 2))
n = mylist[0]
print(n)
for i in range(3, 170):
    p = power(n, i)
    print p
    if p != n:
        mylist.append(p)
        n = p

t = input()
while t:
    x = raw_input().split(" ")
    a = int(x[0])
    b = int(x[1])
    i = 0
    #while i <= 150:
    #    print mylist[i]
    #    i += 1
    #print power(8719476736,14)
    while mylist[i] < a:
        i += 1
    ans = 0
    while mylist[i] <= b:
        i += 1
        ans += 1
    print ans
    t -= 1
The final digits of 2^n start to repeat after 20 increments of n. So, for example, for any n with the final digit 1, the final digit of 2^n will be 2 or 8 - never 1 - so most such values of n can be eliminated immediately.
2^1 = 2
2^21 = 2097152
2^101 = 2535301200456458802993406410752
2^2 = 4
2^22 = 4194304
2^42 = 4398046511104
In fact only two possibilities share a final digit:
2^14 = 16384
2^16 = 65536
2^34 = 17179869184
2^36 = 68719476736
If n is 14+20x or 16+20x, then it might work, so you'll need to check it. Otherwise, it cannot work.
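Since n can have up to 150 digits, it has to be handled as a string; a small sketch of the filter described above (might_work() is a hypothetical helper name, and only the last two digits of n are needed to compute n mod 20) could look like this:

#include <string.h>

/* returns 1 if n (a decimal string) is congruent to 14 or 16 mod 20,
   i.e. the only residues for which 2^n can possibly end in n */
int might_work(const char *n) {
    size_t len = strlen(n);
    int last_two = n[len - 1] - '0';
    if (len >= 2)
        last_two += 10 * (n[len - 2] - '0');
    int r = last_two % 20;              /* n mod 20 depends only on the last two digits */
    return r == 14 || r == 16;
}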
I am not very good with such problems. But modular exponentiation appears to be key in your case.
Repeat for all n in the range A to B:
1. Find k, the number of digits in n. This can be done in O(log n).
2. Find 2^n (mod 10^k) using modular exponentiation and check whether it is equal to n. With square-and-multiply this takes O(log n) multiplications.
EDIT
Actually, don't repeat the whole process for each n. Given 2^n (mod 10^k), we can find 2^(n+1) (mod 10^k) in constant time. Use this fact to speed it up further
EDIT - 2
This doesn't work for such a large range.

Efficient way to detect "rank of corner" in flattened multi-dimensional array

This is a small piece of very frequently-called code, and part of a convolution algorithm I am trying to optimise (technically it's my first-pass optimisation, and I have already improved speed by a factor of 2, but now I am stuck):
inline int corner_rank( int max_ranks, int *shape, int pos ) {
int i;
int corners = 0;
for ( i = 0; i < max_ranks; i++ ) {
if ( pos % shape[i] ) break;
pos /= shape[i];
corners++;
}
return corners;
}
The code is being used to calculate a property of a position pos within an N-dimensional array (that has been flattened to pointer, plus arithmetic). max_ranks is the dimensionality, and shape is the array of sizes in each dimension.
An example 3-dimensional array might have max_ranks = 3, and shape = { 3, 4, 5 }. The schematic layout of the first few elements might look like this:
0 1 2 3 4 5 6 7 8
[0,0,0] [1,0,0] [2,0,0] [0,1,0] [1,1,0] [2,1,0] [0,2,0] [1,2,0] [2,2,0]
Returned by function:
3 0 0 1 0 0 1 0 0
Where the first row 0..8 shows the index offset given by pos, and the numbers below give the multi-dimensional indices. Edit: Below that I have put the value returned by the function (the value of 2 is returned at positions 12, 24 and 36).
The function is effectively returning the number of "leading" zeros in the multi-dimensional index, and is designed as it is to avoid needing to make a full conversion to array indices on every increment.
Is there anything I can do with this function to make it inherently faster? Is there a clever way of avoiding %, or another way to calculate the "corner rank" - apologies by the way if it has a more formal name that I do not know . . .
The only time you should return max_ranks is if pos equals zero. Checking for this allows you to remove the conditional check from your for-loop. This should improve both the worst case completion time, and speed of the looping for large values of max_ranks.
Here is my addition, plus an alternative way of avoiding the division operation. I believe that this is as fast as a handwritten div like #twalberg was suggesting, unless there is some way to produce the remainder without a second multiplication.
I'm afraid that since the most common answer is 0 (which doesn't even get past the first mod call), you aren't going to see much improvement. My guess is that your average run time is very close to the run time of the modulus operation itself. You might try searching for a faster way to determine whether a number is a factor of pos. You don't actually need to calculate the remainder; you just need to know whether there is a remainder or not.
Sorry if I made things confusing by restructuring your code. I believe this will be slightly faster unless your compiler was already making these optimizations.
inline int corner_rank( int max_ranks, int *shape, int pos ) {
// Most calls will not get farther than this.
if (pos % shape[0] != 0) return 0;
// One check here, guarantees that while loop below always returns.
if (pos == 0) return max_ranks;
int divisor = shape[0] * shape[1];
int i = 1;
while (true) {
if (pos % divisor != 0) return i;
divisor *= shape[++i];
}
}
Also try declaring pos and divisor as the smallest types possible. If they will never be greater than 255 you can use an unsigned char. I know that some processors can perform a divide with smaller numbers faster than larger numbers, but you have to set your variable types appropriately.

An algorithm to calculate probability of a sum of the results happening

The algorithm I'm talking about would allow you to present it with x items, each having a range of a to b, and a target result y. It would output the probability (or the number of combinations) of that result happening.
For example, take two dice. Since I already know the answers there (the number of possible results is so low), it'd be able to tell you each of the possibilities.
The setup would be something like x=2, a=1, b=6. If you wanted to know the chance of it resulting in a 2, then it'd simply spit out 1/36 (or its float value). If you put in 7 as the total sum, it'd tell you 6.
So my question is: is there a simple way to implement such a thing via an algorithm that is already written, or does one have to go through every single iteration of each and every item to get the total number of combinations for each value?
The exact formula would also give you the combinations needed to make each of the values from 1 to 12.
So it'd give you a distribution array with the number of combinations at each index. If it covers 0-12, then index 0 would have 0, 1 would have 0, and 2 would have 1.
I feel like this is the type of problem that someone else has had and wanted to work with, and that the algorithm is already done. If anyone has an easy way to do this beyond simply looping through every possible value, that would be awesome.
I have no idea why I want to have this problem solved, but for some reason today I just had this feeling of wanting to solve it. And since I've been googling, using Wolfram Alpha, and trying it myself, I think it's time to concede defeat and ask the community.
I'd like the algorithm to be in C, or maybe PHP (even though I'd rather it not be, since it's a lot slower). The reason for C is simply that I want raw speed, and I don't want to have to deal with classes or objects.
Pseudo code or C is the best way to show your algorithm.
Edit:
Also, if I offended the person with a 'b' in his name with the thing about mathematics, I'm sorry. I didn't mean to offend; I just wanted to state that I didn't understand it. But the answer could have stayed there, since I'm sure there are people who might come to this question and understand the mathematics behind it.
Also, I cannot decide which way I want to code this up. I think I'll try both and then decide which one I like more to see/use inside of my little library.
The final thing I forgot to say is that calculus was about four, going on five, years ago. My understanding of probability, statistics, and randomness comes from my own learning via looking at code/reading Wikipedia/reading books.
If anyone is curious what sparked this question: I had a book that I was putting off reading called The Drunkard's Walk, and once I saw XKCD 904, I decided it was time to finally get around to reading it. Then two nights ago, while I was going to sleep, I pondered how to solve this question via a simple algorithm and was able to think of one.
My understanding of code comes from tinkering with other programs, seeing what happened when I broke something, and then trying my own things while looking over the documentation for the built-in functions. I do understand big O notation from reading over Wikipedia (as much as one can from that), and pseudo code came up because it's so similar to Python. I myself cannot write pseudo code (or so say my teachers in college); I kept getting notes like "make it less like real code, make it more like pseudo code." That hasn't changed.
Edit 2: In case anyone searching for this question just quickly wants the code, I've included it below. It is licensed under the LGPLv3, since I'm sure there exist closed-source equivalents of this code.
It should be fairly portable since it is written entirely in C. If one wanted to make it into an extension for any of the various languages that are written in C, it should take very little effort to do so. I chose to 'mark' the first answer that linked to "Ask Dr. Math" as the accepted one, since it was the implementation that I used for this question.
The first file's name is "sum_probability.c"
#include <math.h>
#include <stdlib.h>
#include <stdio.h>
#include <limits.h>
/*!
* file_name: sum_probability.c
*
* Set of functions to calculate the probability of n items adding up to s
* with sides x. The question that this program relates to can be found at the url of
* http://stackoverflow.com/questions/6394120/
*
* Copyright 2011-2019, Macarthur Inbody
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the Lesser GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the Lesser GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/lgpl-3.0.html>.
*
* 2011-06-20 06:03:57 PM -0400
*
* These functions work with any input that is provided. For a function demonstrating them,
* please look at the second source file in the post of the question on Stack Overflow.
* It also includes an answer for implementing it using recursion if that is your favored
* way of doing it. I personally do not feel comfortable working with recursion so that is
* why I went with the implementation that I have included.
*
*/
/*
* The following functions implement falling factorials so that we can
* do binomial coefficients more quickly.
* Via the following formula.
*
* K
* PROD (n-(k-i))/i
* i=1;
*
*/
//unsigned int return
unsigned int m_product_c( int k, int n){
    int i=1;
    unsigned int result=1;
    for(i=1;i<=k;++i){
        /* multiply before dividing so the running product stays an integer */
        result=(result*(n-(k-i)))/i;
    }
    return result;
}
//float return
float m_product_cf(float n, float k){
    int i=1;
    float result=1;
    for(i=1;i<=k;++i){
        result=((n-(k-i))/i)*result;
    }
    return result;
}
/*
* The following function calculates the probability of n items with x sides
* that add up to a value of s. The formula for this is included below.
*
* The formula comes from http://mathforum.org/library/drmath/view/52207.html
*
*s=sum
*n=number of items
*x=sides
*(s-n)/x
* SUM (-1)^k * C(n,k) * C(s-x*k-1,n-1)
* k=0
*
*/
float chance_calc_single(float min, float max, float amount, float desired_result){
    float range=(max-min)+1;
    /* shift the sum so the faces are effectively numbered 1..range, as the formula assumes */
    float shifted=desired_result-amount*(min-1);
    float series=floor((shifted-amount)/range);
    float i;
    float chances=0.0;
    for(i=0;i<=series;++i){
        chances=pow((-1),i)*m_product_cf(amount,i)*m_product_cf(shifted-(range*i)-1,amount-1)+chances;
    }
    /* "chances" is the count of ways; divide by pow(range,amount) for the probability */
    return chances;
}
And here is the file that shows the implementation in use, as mentioned in the previous file.
#include "sum_probability.c"
/*
*
* file_name:test.c
*
* Function showing off the algorithms working. User provides input via a cli
* And it will give you the final result.
*
*/
int main(void){
    int amount,min,max,desired_results;
    printf("%s","Please enter the amount of items.\n");
    scanf("%i",&amount);
    printf("%s","Please enter the minimum value allowed.\n");
    scanf("%i",&min);
    printf("%s","Please enter the maximum value allowed.\n");
    scanf("%i",&max);
    printf("%s","Please enter the value you wish to have them add up to.\n");
    scanf("%i",&desired_results);
    printf("The total chances for %i is %f.\n", desired_results, chance_calc_single(min, max, amount, desired_results));
    return 0;
}
First of all, you do not need to worry about the range being from a to b. You can just subtract a*x from y and pretend the range goes from 0 to b-a. (Because each item contributes at least a to the sum... So you can subtract off that a once for each of your x items.)
Second, note that what you are really trying to do is count the number of ways of achieving a particular sum. The probability is just that count divided by a simple exponential (b-a+1)^x.
This problem was covered by "Ask Dr. Math" around a decade ago:
http://mathforum.org/library/drmath/view/52207.html
His formulation is assuming dice numbered from 1 to X, so to use his answer, you probably want to shift your range by a-1 (rather than a) to convert it into that form.
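For instance, here is a minimal sketch of that normalization in action. It reuses chance_calc_single from the listing above (assuming, as in that listing, that it returns the count of ways for a given sum) and divides by (b-a+1)^x to turn the count into a probability:
#include "sum_probability.c" /* chance_calc_single() from the listing above */

int main(void) {
    int x = 2, a = 1, b = 6, y = 7; /* two ordinary dice */
    float count = chance_calc_single(a, b, x, y); /* ways of reaching the sum y */
    printf("P(sum = %d) = %f\n", y, count / pow(b - a + 1, x)); /* 6/36 */
    return 0;
}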
His derivation uses generating functions which I feel deserve a little explanation. The idea is to define a polynomial f(z) such that the coefficient on z^n is the number of ways of rolling n. For a single 6-sided die, for example, this is the generating function:
z + z^2 + z^3 + z^4 + z^5 + z^6
...because there is one way of rolling each number from 1 to 6, and zero ways of rolling anything else.
Now, if you have two generating functions g(z) and h(z) for two sets of dice, it turns out the generating function for the union of those sets is just the product of g and h. (Stare at the "multiply two polynomials" operation for a while to convince yourself this is true.) For example, for two dice, we can just square the above expression to get:
z^2 + 2z^3 + 3z^4 +4z^5 + 5z^6 + 6z^7 + 5z^8 + 4z^9 + 3z^10 + 2z^11 + z^12
Notice how we can read the number of combinations directly off of the coefficients: 1 way to get a 2 (1*z^2), 6 ways to get a 7 (6*z^7), etc.
The cube of the expression would give us the generating function for three dice; the fourth power, four dice; and so on.
The power of this formulation comes when you write the generating functions in closed form, multiply, and then expand them again using the Binomial Theorem. I defer to Dr. Math's explanation for the details.
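If you would rather watch the convolution happen numerically, here is a small self-contained sketch (my own illustration, not part of Dr. Math's derivation) that multiplies the single-die polynomial by itself the appropriate number of times and prints the coefficient table for two 6-sided dice, matching the expansion above:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Fills counts[s] with the number of ways n dice with faces 1..x sum to s.
   counts must have room for n*x + 1 entries; index s is the sum itself. */
void dice_counts(int n, int x, long *counts) {
    long *next = malloc((n * x + 1) * sizeof *next); /* error handling omitted */
    int die, s, face;
    memset(counts, 0, (n * x + 1) * sizeof *counts);
    counts[0] = 1; /* empty product: one way to "roll" a sum of 0 with no dice */
    for (die = 1; die <= n; ++die) { /* multiply by (z + z^2 + ... + z^x) */
        memset(next, 0, (n * x + 1) * sizeof *next);
        for (s = 0; s <= (die - 1) * x; ++s)
            for (face = 1; face <= x; ++face)
                next[s + face] += counts[s];
        memcpy(counts, next, (n * x + 1) * sizeof *counts);
    }
    free(next);
}

int main(void) {
    long counts[13]; /* two 6-sided dice: sums 0..12 */
    int s;
    dice_counts(2, 6, counts);
    for (s = 2; s <= 12; ++s)
        printf("%ld ways to roll %d\n", counts[s], s); /* 1, 2, ..., 6, ..., 2, 1 */
    return 0;
}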
Let's say that f(a, b, n, x) represents the number of ways you can select n numbers between a and b, which sum up to x.
Then notice that:
f(a, b, n, x) = f(0, b-a, n, x-n*a)
Indeed, just take one way to achieve the sum of x and from each of the n numbers subtract a, then the total sum will become x - n*a and each of them will be between 0 and b-a.
Thus it's enough to write code to find f(0, m, n, x).
Now note that, all the ways to achieve the goal, such that the last number is c is:
f(0, m, n-1, x-c)
Indeed, we have n-1 numbers left and want the total sum to be x-c.
Then we have a recursive formula:
f(0,m,n,x) = f(0,m,n-1,x) + f(0,m,n-1,x-1) + ... + f(0,m,n-1,x-m)
where the summands on the right correspond to the last number being equal to 0, 1, ..., m
Now you can implement that using recursion, but this will be too slow.
However, there is a trick called memoized recursion, i.e. you save the result of the function, so that you don't have to compute it again (for the same arguments).
The memoized recursion evaluates each distinct pair of arguments (n, x) only once, so there are about n * x results to compute and save, each of them combining at most m + 1 smaller results.
Once you have computed the count, you need to divide it by the total number of possibilities, which is (m+1)^n, to get the final probability.
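Here is a minimal sketch of that memoized recursion in C; the fixed table sizes, the cache layout and the test values are assumptions of mine:
#include <stdio.h>
#include <string.h>

#define MAX_N 20
#define MAX_X 200

/* memo[n][x] caches f(0, m, n, x); -1 means "not computed yet".
   m is fixed for a whole run, so it is not part of the key. */
static long long memo[MAX_N + 1][MAX_X + 1];

/* f(0, m, n, x): number of ways to pick n numbers in 0..m that sum to x. */
long long f(int m, int n, int x) {
    long long total;
    int c;
    if (x < 0)
        return 0;
    if (n == 0)
        return x == 0;
    if (memo[n][x] != -1)
        return memo[n][x];
    total = 0;
    for (c = 0; c <= m; ++c) /* the last number equals c */
        total += f(m, n - 1, x - c);
    return memo[n][x] = total;
}

int main(void) {
    /* Two dice with faces 1..6 summing to 7, shifted to 0..5 summing to 5. */
    memset(memo, -1, sizeof memo);
    printf("%lld\n", f(5, 2, 5)); /* prints 6 */
    return 0;
}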
Number theory, statistics and combinatorics lead you to believe that to arrive at a numerical value for the probability of an event, you have to know two things:
the number of possible outcomes
how many of those outcomes equal the outcome 'y' whose probability you seek.
In pseudocode:
numPossibleOutcomes = calcNumOutcomes(x, a, b);
numSpecificOutcomes = calcSpecificOutcome(y);
probabilityOfOutcome = numSpecificOutcomes / numPossibleOutcomes;
Then just code up the 2 functions above which should be easy.
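As a rough sketch of those two functions in C (the brute-force enumeration, the extra parameters and the example values are mine; the closed-form and memoized answers above are far faster, but this is the most literal reading of the pseudocode):
#include <math.h>
#include <stdio.h>

/* Total number of outcomes for x items, each taking a value in a..b. */
double calcNumOutcomes(int x, int a, int b) {
    return pow(b - a + 1, x);
}

/* Number of outcomes whose values sum to y, counted by brute force. */
double calcSpecificOutcome(int x, int a, int b, int y) {
    int v;
    double total = 0;
    if (x == 0)
        return y == 0;
    for (v = a; v <= b; ++v)
        total += calcSpecificOutcome(x - 1, a, b, y - v);
    return total;
}

int main(void) {
    int x = 3, a = 1, b = 4, y = 5; /* three 4-sided dice summing to 5 */
    double p = calcSpecificOutcome(x, a, b, y) / calcNumOutcomes(x, a, b);
    printf("P(sum = %d) = %f\n", y, p); /* 6/64 = 0.093750 */
    return 0;
}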
To get all possibilities, you could make a map of values:
for (i=a to b) {
for (j=a to b) {
map.put(i+j, 1+map.get(i+j))
}
}
For a more efficient way to count sums, you could use the pattern for two dice:
6 7's, 5 6's, 4 5's, 3 4's, 2 3's, 1 2.
The pattern holds for an n x n grid: there will be n ways to make n+1, with one fewer possibility for each sum that is one greater or one less.
This counts the possibilities; for example, Count(6, s) gives the number of ways two six-sided dice can sum to s.
import math

def Count(poss, sumto):
    return poss - math.fabs(sumto - (poss + 1))
Edit: In C this would be:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int count(int poss, int sumto)
{
    return poss - abs(sumto-(poss+1));
}

int main(int argc, char** argv) {
    printf("With two dice,\n");
    int i;
    for (i=1; i<= 13; i++)
    {
        printf("%d ways to sum to %d\n",count(6,i),i);
    }
    return (EXIT_SUCCESS);
}
gives:
With two dice,
0 ways to sum to 1
1 ways to sum to 2
2 ways to sum to 3
3 ways to sum to 4
4 ways to sum to 5
5 ways to sum to 6
6 ways to sum to 7
5 ways to sum to 8
4 ways to sum to 9
3 ways to sum to 10
2 ways to sum to 11
1 ways to sum to 12
0 ways to sum to 13

How can I find a number which occurs an odd number of times in a SORTED array in O(n) time?

I have a question and I have tried to think it over again and again, but got nothing, so I am posting it here. Maybe I could get some viewpoints from others to try and make it work.
The question is: we are given a SORTED array, which consists of a collection of values occurring an EVEN number of times, except one, which occurs an ODD number of times. We need to find that value in O(log n) time.
It is easy to find the solution in O(n) time, but it looks pretty tricky to do it in O(log n) time.
Theorem: Every deterministic algorithm for this problem probes Ω(log² n) memory locations in the worst case.
Proof (completely rewritten in a more formal style):
Let k > 0 be an odd integer and let n = k². We describe an adversary that forces (log₂(k + 1))² = Ω(log² n) probes.
We call the maximal subsequences of identical elements groups. The adversary's possible inputs consist of k length-k segments x1 x2 … xk. For each segment xj, there exists an integer bj ∈ [0, k] such that xj consists of bj copies of j - 1 followed by k - bj copies of j. Each group overlaps at most two segments, and each segment overlaps at most two groups.
Group boundaries
| | | | |
0 0 1 1 1 2 2 3 3
| | | |
Segment boundaries
Wherever there is an increase of two, we assume a double boundary by convention.
Group boundaries
| || | |
0 0 0 2 2 2 2 3 3
Claim: The location of the jth group boundary (1 ≤ j ≤ k) is uniquely determined by the segment xj.
Proof: It's just after the ((j - 1) k + bj)th memory location, and xj uniquely determines bj. //
We say that the algorithm has observed the jth group boundary in case the results of its probes of xj uniquely determine xj. By convention, the beginning and the end of the input are always observed. It is possible for the algorithm to uniquely determine the location of a group boundary without observing it.
Group boundaries
| X | | |
0 0 ? 1 2 2 3 3 3
| | | |
Segment boundaries
Given only 0 0 ?, the algorithm cannot tell for sure whether ? is a 0 or a 1. In context, however, ? must be a 1, as otherwise there would be three odd groups, and the group boundary at X can be inferred. These inferences could be problematic for the adversary, but it turns out that they can be made only after the group boundary in question is "irrelevant".
Claim: At any given point during the algorithm's execution, consider the set of group boundaries that it has observed. Exactly one consecutive pair is at odd distance, and the odd group lies between them.
Proof: Every other consecutive pair bounds only even groups. //
Define the odd-length subsequence bounded by the special consecutive pair to be the relevant subsequence.
Claim: No group boundary in the interior of the relevant subsequence is uniquely determined. If there is at least one such boundary, then the identity of the odd group is not uniquely determined.
Proof: Without loss of generality, assume that each memory location not in the relevant subsequence has been probed and that each segment contained in the relevant subsequence has exactly one location that has not been probed. Suppose that the jth group boundary (call it B) lies in the interior of the relevant subsequence. By hypothesis, the probes to xj determine B's location up to two consecutive possibilities. We call the one at odd distance from the left observed boundary odd-left and the other odd-right. For both possibilities, we work left to right and fix the location of every remaining interior group boundary so that the group to its left is even. (We can do this because they each have two consecutive possibilities as well.) If B is at odd-left, then the group to its left is the unique odd group. If B is at odd-right, then the last group in the relevant subsequence is the unique odd group. Both are valid inputs, so the algorithm has uniquely determined neither the location of B nor the odd group. //
Example:
Observed group boundaries; relevant subsequence marked by […]
[ ] |
0 0 Y 1 1 Z 2 3 3
| | | |
Segment boundaries
Possibility #1: Y=0, Z=2
Possibility #2: Y=1, Z=2
Possibility #3: Y=1, Z=1
As a consequence of this claim, the algorithm, regardless of how it works, must narrow the relevant subsequence to one group. By definition, it therefore must observe some group boundaries. The adversary now has the simple task of keeping open as many possibilities as it can.
At any given point during the algorithm's execution, the adversary is internally committed to one possibility for each memory location outside of the relevant subsequence. At the beginning, the relevant subsequence is the entire input, so there are no initial commitments. Whenever the algorithm probes an uncommitted location of xj, the adversary must commit to one of two values: j - 1, or j. If it can avoid letting the jth boundary be observed, it chooses a value that leaves at least half of the remaining possibilities (with respect to observation). Otherwise, it chooses so as to keep at least half of the groups in the relevant interval and commits values for the others.
In this way, the adversary forces the algorithm to observe at least log₂(k + 1) group boundaries, and in observing the jth group boundary, the algorithm is forced to make at least log₂(k + 1) probes.
Extensions:
This result extends straightforwardly to randomized algorithms by randomizing the input, replacing "at best halved" (from the algorithm's point of view) with "at best halved in expectation", and applying standard concentration inequalities.
It also extends to the case where no group can be larger than s copies; in this case the lower bound is Ω(log n log s).
A sorted array suggests a binary search. We have to redefine equality and comparison. Equality simply means an odd number of elements. We can do comparison by observing the index of the first or last element of the group. The first element will be at an even index (0-based) before the odd group, and at an odd index after the odd group. We can find the first and last elements of a group using binary search. The total cost is O((log N)²).
PROOF OF O((log N)²)
T(2) = 1 //to make the summation nice
T(N) = log(N) + T(N/2) //log(N) is finding the first/last elements
For some N=2^k,
T(2^k) = (log 2^k) + T(2^(k-1))
= (log 2^k) + (log 2^(k-1)) + T(2^(k-2))
= (log 2^k) + (log 2^(k-1)) + (log 2^(k-2)) + ... + (log 2^2) + 1
= k + (k-1) + (k-2) + ... + 1
= k(k+1)/2
= (k² + k)/2
= (log(N)² + log(N))/ 2
= O(log(N)²)
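Here is a hedged sketch of that idea in C (the helper names and the test data are my own): locate the group containing the middle element with two binary searches, return it if its length is odd, and otherwise use the parity of its first index to decide which side still holds the odd group.
#include <stdio.h>

/* First index in [lo, hi] holding key, assuming the array is sorted and key
   occurs somewhere in that range. */
int first_index(const int a[], int lo, int hi, int key) {
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;
}

/* Last index in [lo, hi] holding key, under the same assumptions. */
int last_index(const int a[], int lo, int hi, int key) {
    while (lo < hi) {
        int mid = lo + (hi - lo + 1) / 2;
        if (a[mid] > key)
            hi = mid - 1;
        else
            lo = mid;
    }
    return lo;
}

/* Returns the value that occurs an odd number of times; every other value is
   assumed to occur an even number of times. O(log N) rounds, each doing two
   O(log N) binary searches, i.e. O((log N)²) overall. */
int find_odd_one(const int a[], int n) {
    int lo = 0, hi = n - 1;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        int first = first_index(a, lo, mid, a[mid]);
        int last = last_index(a, mid, hi, a[mid]);
        if ((last - first + 1) % 2 == 1)
            return a[mid]; /* this group itself is the odd one */
        if (first % 2 == 0)
            lo = last + 1; /* even start index: odd group is to the right */
        else
            hi = first - 1; /* odd start index: odd group is to the left */
    }
    return a[lo];
}

int main(void) {
    int a[] = { 1, 1, 2, 2, 2, 2, 3, 3, 5, 5, 5, 7, 7 };
    printf("%d\n", find_odd_one(a, (int)(sizeof a / sizeof a[0]))); /* prints 5 */
    return 0;
}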
Look at the middle element of the array. With a couple of appropriate binary searches, you can find the first and its last appearance in the array. E.g., if the middle element is 'a', you need to find i and j as shown below:
[* * * * a a a a * * *]
^ ^
| |
| |
i j
Is j - i an even number? You are done! Otherwise (and this is the key here), the question to ask is: is i an even or an odd number? Do you see what this piece of knowledge implies? Then the rest is easy.
This answer is in support of the answer posted by "throwawayacct". He deserves the bounty. I spent some time on this question and I'm totally convinced that his proof is correct that you need Ω(log(n)^2) queries to find the number that occurs an odd number of times. I'm convinced because I ended up recreating the exact same argument after only skimming his solution.
In the solution, an adversary creates an input to make life hard for the algorithm, but also simple for a human analyzer. The input consists of k pages that each have k entries. The total number of entries is n = k^2, and it is important that O(log(k)) = O(log(n)) and Ω(log(k)) = Ω(log(n)). To make the input, the adversary makes a string of length k of the form 00...011...1, with the transition in an arbitrary position. Then each symbol in the string is expanded into a page of length k of the form aa...abb...b, where on the ith page, a=i and b=i+1. The transition on each page is also in an arbitrary position, except that the parity agrees with the symbol that the page was expanded from.
It is important to understand the "adversary method" of analyzing an algorithm's worst case. The adversary answers queries about the algorithm's input, without committing to future answers. The answers have to be consistent, and the game is over when the adversary has been pinned down enough for the algorithm to reach a conclusion.
With that background, here are some observations:
1) If you want to learn the parity of a transition in a page by making queries in that page, you have to learn the exact position of the transition and you need Ω(log(k)) queries. Any collection of queries restricts the transition point to an interval, and any interval of length more than 1 has both parities. The most efficient search for the transition in that page is a binary search.
2) The most subtle and most important point: There are two ways to determine the parity of a transition inside a specific page. You can either make enough queries in that page to find the transition, or you can infer the parity if you find the same parity in both an earlier and a later page. There is no escape from this either-or. Any set of queries restricts the transition point in each page to some interval. The only restriction on parities comes from intervals of length 1. Otherwise the transition points are free to wiggle to have any consistent parities.
3) In the adversary method, there are no lucky strikes. For instance, suppose that your first query in some page is toward one end instead of in the middle. Since the adversary hasn't committed to an answer, he's free to put the transition on the long side.
4) The end result is that you are forced to directly probe the parities in Ω(log(k)) pages, and the work for each of these subproblems is also Ω(log(k)).
5) Things are not much better with random choices than with adversarial choices. The math is more complicated, because now you can get partial statistical information, rather than a strict yes-or-no answer about whether you know a parity. But it makes little difference. For instance, you can give each page length k^2, so that with high probability, the first log(k) queries in each page tell you almost nothing about the parity in that page. The adversary can make random choices at the beginning and it still works.
Start at the middle of the array and walk backward until you get to a value that's different from the one at the center. Check whether the number above that boundary is at an odd or even index. If it's odd, then the number occurring an odd number of times is to the left, so repeat your search between the beginning and the boundary you found. If it's even, then the number occurring an odd number of times must be later in the array, so repeat the search in the right half.
As stated, this has both a logarithmic and a linear component. If you want to keep the whole thing logarithmic, instead of just walking backward through the array to a different value, you want to use a binary search instead. Unless you expect many repetitions of the same numbers, the binary search may not be worthwhile though.
I have an algorithm which works in log(N/C)*log(K), where K is the length of maximum same-value range, and C is the length of range being searched for.
The main difference of this algorithm from most posted before is that it takes advantage of the case where all same-value ranges are short. It finds boundaries not by binary-searching the entire array, but by first quickly finding a rough estimate by jumping back by 1, 2, 4, 8, ... (log(K) iterations) steps, and then binary-searching the resulting range (log(K) again).
The algorithm is as follows (written in C#):
// Finds the start of the range of equal numbers containing the index "index",
// which is assumed to be inside the array
//
// Complexity is O(log(K)) with K being the length of range
static int findRangeStart (int[] arr, int index)
{
int candidate = index;
int value = arr[index];
int step = 1;
// find the boundary for binary search:
while(candidate>=0 && arr[candidate] == value)
{
candidate -= step;
step *= 2;
}
// binary search:
int a = Math.Max(-1, candidate); // -1 acts as "one before the array", so a range that starts at index 0 is still found
int b = candidate+step/2;
while(a+1!=b)
{
int c = (a+b)/2;
if(arr[c] == value)
b = c;
else
a = c;
}
return b;
}
// Finds the index after the only "odd" range of equal numbers in the array.
// The result should be in the range (start; end]
// The "end" is considered to always be the end of some equal number range.
static int search(int[] arr, int start, int end)
{
if(arr[start] == arr[end-1])
return end;
int middle = (start+end)/2;
int rangeStart = findRangeStart(arr,middle);
if((rangeStart & 1) == 0)
return search(arr, middle, end);
return search(arr, start, rangeStart);
}
// Finds the index after the only "odd" range of equal numbers in the array
static int search(int[] arr)
{
return search(arr, 0, arr.Length);
}
Take the middle element e. Use binary search to find the first and last occurrence. O(log(n))
If it is odd return e.
Otherwise, recurse onto the side that has an odd number of elements [....]eeee[....]
Runtime will be log(n) + log(n/2) + log(n/4).... = O(log(n)^2).
Ahh, there is an answer.
Do a binary search and as you search, for each value, move backwards until you find the first entry with that same value. If its index is even, it is before the oddball, so move to the right.
If its array index is odd, it is after the oddball, so move to the left.
In pseudocode (this is the general idea, not tested...):
private static int FindOddBall(int[] ary)
{
int l = 0,
r = ary.Length - 1;
int n = (l+r)/2;
while (r > l+2)
{
n = (l + r) / 2;
while (ary[n] == ary[n-1])
n = FindBreakIndex(ary, l, n);
if (n % 2 == 0) // even index we are on or to the left of the oddball
l = n;
else // odd index we are to the right of the oddball
r = n-1;
}
return ary[l];
}
private static int FindBreakIndex(int[] ary, int l, int n)
{
var t = ary[n];
var r = n;
while(ary[n] != t || ary[n] == ary[n-1])
if(ary[n] == t)
{
r = n;
n = (l + r)/2;
}
else
{
l = n;
n = (l + r)/2;
}
return n;
}
You can use this algorithm:
int GetSpecialOne(int[] array, int length)
{
int specialOne = array[0];
for(int i=1; i < length; i++)
{
specialOne ^= array[i];
}
return specialOne;
}
Solved with the help of a similar question which can be found here on http://www.technicalinterviewquestions.net
We don't have any information about the distribution of the run lengths inside the array, or of the length of the array as a whole, right?
So the array length might be 1, 11, 101, 1001 or something like that, at least 1 with no upper bound, and it must contain at least 1 distinct value ('number') and up to (length-1)/2 + 1 of them; for total sizes of 1, 11, 101 that is 1, 1 to 6, and 1 to 51 distinct values, and so on.
Shall we assume every possible size to be of equal probability? That would lead to a mean sublist length of size/4, wouldn't it?
An array of size 5 could be divided into 1, 2 or 3 sublists.
What seems to be obvious is not that obvious, if we go into details.
An array of size 5 can be 'divided' into one sublist in just one way, with arguable right to call it 'dividing'. It's just a list of 5 elements (aaaaa). To avoid confusion let's assume the elements inside the list to be ordered characters, not numbers (a,b,c, ...).
Divided into two sublists, they might be (1, 4), (2, 3), (3, 2), (4, 1). (abbbb, aabbb, aaabb, aaaab).
Now let's look back at the claim made before: Shall the 'division' (5) be assumed the same probability as those 4 divisions into 2 sublists? Or shall we mix them together, and assume every partition as evenly probable, (1/5)?
Or can we calculate the solution without knowing the probability of the length of the sublists?
The clue is you're looking for log(n). That's less than n.
Stepping through the entire array, one at a time? That's n. That's not going to work.
We know the first two indexes in the array (0 and 1) should be the same number. Same with 50 and 51, if the odd number in the array is after them.
So find the middle element in the array, compare it to the element right after it. If the change in numbers happens on the wrong index, we know the odd number in the array is before it; otherwise, it's after. With one set of comparisons, we figure out which half of the array the target is in.
Keep going from there.
Use a hash table
For each element E in the input set
    if E is in the hash table
        increment its count
    else
        insert E into the hash table with a count of 1
For each key K in the hash table
    if K's count is odd
        return K
As this algorithm makes two passes (2n operations), it belongs to O(n)
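A minimal C sketch of that counting approach, using a small fixed-capacity open-addressing table; the table size, the hashing and the test data are assumptions of mine, and any real hash map would do:
#include <stdio.h>

#define TABLE_SIZE 1024 /* must exceed the number of distinct values */

struct slot { int key; int count; int used; };

/* Counts occurrences of each value with linear probing, then returns the
   value whose count is odd. Two passes, so O(n) overall. */
int odd_count_value(const int a[], int n) {
    static struct slot table[TABLE_SIZE]; /* zero-initialized; single use */
    int i, h;
    for (i = 0; i < n; ++i) {
        h = (unsigned)a[i] % TABLE_SIZE;
        while (table[h].used && table[h].key != a[i])
            h = (h + 1) % TABLE_SIZE; /* linear probing */
        table[h].used = 1;
        table[h].key = a[i];
        table[h].count++;
    }
    for (h = 0; h < TABLE_SIZE; ++h)
        if (table[h].used && table[h].count % 2 == 1)
            return table[h].key;
    return -1; /* no value with an odd count */
}

int main(void) {
    int a[] = { 1, 1, 2, 2, 2, 2, 3, 3, 5, 5, 5, 7, 7 };
    printf("%d\n", odd_count_value(a, (int)(sizeof a / sizeof a[0]))); /* prints 5 */
    return 0;
}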
Try this:
int getOddOccurrence(int ar[], int ar_size)
{
    int i;
    int xor = 0;
    for (i=0; i < ar_size; i++)
        xor = xor ^ ar[i];
    return xor;
}
XOR cancels out every time you XOR with the same number, so 1^1=0 but 1^1^1=1; every pair cancels out, leaving the number that occurs an odd number of times.
Assume indexing starts at 0. Binary search for the smallest even i such that x[i] != x[i+1]; your answer is x[i].
edit: due to public demand, here is the code
int f(int *x, int min, int max) {
int size = max;
min /= 2;
max /= 2;
while (min < max) {
int i = (min + max)/2;
if (i==0 || x[2*i-1] == x[2*i])
min = i+1;
else
max = i-1;
}
if (2*max == size || x[2*max] != x[2*max+1])
return x[2*max];
return x[2*min];
}
