Converting nested for loop into lower than O(n^3) - c

So, I have this code and I need to make it run in less than O(n^3) time. I've just started learning about complexity and I really have no idea what to do.
int n, i, j, k, x=0;
printf("Type n: \n");
scanf("%d",&n);
for(i=1; i<n; i++)
{
    for(j=1; j<i; j++)
    {
        for(k=1; k<j; k++)
        {
            x=x+1;
        }
    }
}
printf("%d\n",x);
I think I get why it's O(n^3), but I don't really know how to make it more efficient. I tried turning it into a recursive function; is that possible?

You're adding 1 to the result for each i, j, k with 0 < k < j < i < n. There are choose(n-1, 3) such values of i, j, k (one for each subset of size 3 of {1, 2, ..., n-1}). (Here "choose" is the binomial coefficient function.)
Thus, you can replace your loop-based computation with choose(n-1, 3), which is (n - 1)(n - 2)(n - 3) / 6 if n is positive.
int n;
printf("Type n: \n");
scanf("%d",&n);
printf("%d\n", n > 0 ? (n-1)*(n-2)*(n-3)/6 : 0);
This is O(1) to compute the result, and O(log N) to output it (since the result has O(log N) digits).
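As a quick sanity check (my addition, not part of the original answer), the closed form can be compared against the question's triple loop for small n; the two columns should agree:
#include <stdio.h>

int main(void)
{
    /* Compare the brute-force triple loop with choose(n-1, 3) for small n. */
    for (int n = 0; n <= 10; n++) {
        int x = 0;
        for (int i = 1; i < n; i++)
            for (int j = 1; j < i; j++)
                for (int k = 1; k < j; k++)
                    x = x + 1;
        int formula = n > 0 ? (n - 1) * (n - 2) * (n - 3) / 6 : 0;
        printf("n=%2d  loop=%3d  formula=%3d\n", n, x, formula);
    }
    return 0;
}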

Your current function is just a lousy O(n^3) way to calculate some mathematical function ...
 In   Out
  0     0
  1     0
  2     0
  3     0
  4     1
  5     4
  6    10
  7    20
  8    35
  9    56
 10    84
x will end up being equal to the number of iterations.
Your assignment is probably asking you to reinterpret that for loop as an equation.
We know that the outer loop will execute its block (n-1) times. The next inner loop will execute its block a total of 1+2+..+(n-2) times. That's (n-1)(n-2)/2 times. (At this point I get stuck myself, none of my extrapolations get (n-1)(n-2)(n-3)/6)
Another way: since we know that n = 1, 2 and 3 are all roots (the result is zero there), we also know that, as a cubic polynomial, the function must be of the form c(n - 1)(n - 2)(n - 3). Solve for n = 4 (result 1) and you get 1/6 as the constant factor c.

I refactored your loop as follows:
for(i=1; i<n-2; i++)
{
    x = x + ( ( i * ( i + 1 ) ) / 2 );
}
This works because ( ( i * ( i + 1 ) ) / 2 ) is the sum of all values in the series 1 through i.
Your innermost loop (using variable k) is the equivalent of adding the value of j to x. Your second loop (using variable j) is then the equivalent of calculating the sum of the series 1 through i.
So I've replaced your second and third loops with the sum of the series 1 through i. We keep your first loop, and at each iteration add the sum of the series 1 through i to the previous value of x.
Note that I've added a -2 to your outer loop to simulate the < sign in your two inner loops. If your requirement had been <= on each inner loop, that -2 would not be needed.
This is an O(n) solution, which is not as good as Paul Hankin's O(1) solution.
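For completeness, here is the refactored loop dropped back into the asker's original program; the declarations and I/O around it are my reconstruction from the question, not part of the answer:
#include <stdio.h>

int main(void)
{
    int n, i, x = 0;
    printf("Type n: \n");
    scanf("%d", &n);
    /* Replace the two inner loops with the triangular number i*(i+1)/2. */
    for (i = 1; i < n - 2; i++)
        x = x + (i * (i + 1)) / 2;
    printf("%d\n", x);
    return 0;
}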

Related

Magical array A of N integers with K length

Given an array A of N integers, an array is called magical if all of its elements have exactly 3 divisors. Now you have to convert the given array into a magical array of length K. You can perform the following operations, in any order and any number of times:
Increase the value of any element of the array by 1.
Decrease the value of any element of the array by 1.
Delete any element of the array.
Constraints:
1 <= N <= 1000000
1 <= K <= N
1 <= A[i] <= 1000000
Sample Input
5(size of the array) 3(K)
1 4 10 8 15
Output
4
A solution I tried:
I iterated over every element of the array, found the nearest prime square, and added the difference to a global operation count (a variable used to count the required operations). The time complexity of this is n^2.
Searching for a better solution.
Make an array with absolute values of differences with closest prime squares
Use QuickSelect algorithm to separate K smaller differences (average complexity tends to O(N), while the worst quadratic case is possible)
Calculate their sum (a C sketch of this approach follows below)
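Here is a rough C sketch of that approach (the function names and all details are my assumptions, not the answerer's code). Two simplifications to note: qsort stands in for the proposed QuickSelect step, which makes this O(N log N) rather than average O(N), and N - K is added to the sum because the sample only comes out to 4 if each deleted element is itself counted as one operation:
#include <stdio.h>
#include <stdlib.h>

/* Comparator for qsort on long long values. */
static int cmp_ll(const void *a, const void *b)
{
    long long x = *(const long long *)a, y = *(const long long *)b;
    return (x > y) - (x < y);
}

/* Minimum operations to keep K elements and move each to the nearest prime square
   (numbers with exactly 3 divisors are exactly the squares of primes). */
long long min_ops(const long long *A, int N, int K)
{
    enum { PMAX = 2000 };          /* primes up to 2000 give squares well past 10^6 */
    char composite[PMAX + 1] = {0};
    long long squares[400];        /* prime squares, ascending */
    int nsq = 0;

    for (int p = 2; p <= PMAX; p++) {
        if (composite[p]) continue;
        squares[nsq++] = (long long)p * p;
        for (int m = p * 2; m <= PMAX; m += p)
            composite[m] = 1;
    }

    long long *diff = malloc(N * sizeof *diff);
    for (int i = 0; i < N; i++) {
        /* Binary search for the first prime square >= A[i]. */
        int lo = 0, hi = nsq;
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (squares[mid] < A[i]) lo = mid + 1; else hi = mid;
        }
        long long best = squares[lo < nsq ? lo : nsq - 1] - A[i];
        if (best < 0) best = -best;
        if (lo > 0) {
            long long below = A[i] - squares[lo - 1];
            if (below < best) best = below;
        }
        diff[i] = best;
    }

    /* The answer above proposes QuickSelect here; qsort is used for brevity. */
    qsort(diff, N, sizeof *diff, cmp_ll);
    long long total = 0;
    for (int i = 0; i < K; i++) total += diff[i];
    free(diff);

    /* Assumption: each deletion is itself one operation; this is needed to
       reproduce the sample output of 4. */
    total += N - K;
    return total;
}

int main(void)
{
    long long A[] = {1, 4, 10, 8, 15};   /* sample input from the question */
    printf("%lld\n", min_ops(A, 5, 3));  /* expected output: 4 */
    return 0;
}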
You can try the method below to find numbers with 3 divisors:
// Sieve of Eratosthenes; a number has exactly 3 divisors exactly when it is the square of a prime.
// (Needs java.util.Arrays for Arrays.fill.)
void numbersWith3Divisors(int n)
{
    boolean[] isPrime = new boolean[n+1];
    Arrays.fill(isPrime, true);
    isPrime[0] = isPrime[1] = false;
    for (int p = 2; p*p <= n; p++)
    {
        if (isPrime[p] == true)
        {
            for (int i = p*2; i <= n; i += p)
                isPrime[i] = false;
        }
    }
    System.out.print("Numbers with 3 divisors :- ");
    for (int i = 0; i*i <= n; i++)
        if (isPrime[i])
            System.out.print(i*i + " ");
}
The same approach can be applied to the array.
Hope it will help.
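For example (my usage note, not part of the answer), calling numbersWith3Divisors(100) should print 4 9 25 49, the squares of the primes 2, 3, 5 and 7.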

what is the complexity/BIG O of this function "loops"

void mystery2 (int n)
{
    int i;
    for (i = 1; i <= n; i++) {
        double x = i;
        double delta = 1 / (double)i;
        while ( x > 0 )
            x -= delta;
    }
}
Why is the big O (the time complexity) of this function O(n^3) and not O(n^2)?
What I did: for i=1 the while loop does 1 iteration, for i=2 it does 2 iterations, for i=3 it does 3 iterations, ..., for i=n it does n iterations. If we sum all the iterations we get 1+2+3+4+...+n = n*(n+1)/2. So what am I missing here?
This is because the inner loop runs like this:
For i=1, the inner loop runs 1 time,
For i=2, the inner loop runs 4 times,
// because x=2 and delta=0.5, so for x to become 0 it has to iterate 4 times
For i=3, the inner loop runs 9 times
// because x=3 and delta=0.33, so for x to become 0 it has to iterate 9 times (at least)
and so on...
So the inner loop runs i^2 times and the total becomes 1^2+2^2+3^2+...+n^2 = n(n+1)(2n+1)/6, which gives O(n^3) complexity.
I think you are looking at it as a standard integer decrement loop, which I also did at first, but the numbers are doubles, and delta is not 1 but actually 1 / (double)i, so the number of inner-loop iterations it takes to fully decrement x does not increase linearly as i increases, but much more sharply, because delta gets smaller as i gets larger.
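A quick way to see this growth empirically (my instrumentation, not part of the question or answers) is to count the inner iterations for each i:
#include <stdio.h>

/* Same loops as mystery2, but counting the inner while iterations per i;
   the counts come out close to i*i, so the total work is roughly n^3. */
void mystery2_counted(int n)
{
    for (int i = 1; i <= n; i++) {
        double x = i;
        double delta = 1 / (double)i;
        long iterations = 0;
        while (x > 0) {
            x -= delta;
            iterations++;
        }
        printf("i=%d  inner iterations=%ld  (i*i=%d)\n", i, iterations, i * i);
    }
}

int main(void)
{
    mystery2_counted(10);
    return 0;
}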

How to make a program to list all possible three digit addition solutions to a user inputted number. (can not use zero)

I want to make a C program that will give me all the possible ways to add three numbers (without using zero) to equal whatever the user entered. For example, if the user entered 4, the solution would be 1+1+2. If the user entered 3, the only solution would be 1+1+1. From what I noticed, after 4, there is always another solution. So, if 5 was entered there would be 2 solutions: 1+1+3 and 2+2+1; and if 6 was entered, there would be three solutions and the solution number would always increase by 1. The thing I can't figure out is the logic on how to get all the answers every time. My current code is kind of brute force and just gives me the 1+1+(whatever number is left) solution and the other part only works for one solution and only if it is odd. My current code is as follows:
#include <stdio.h>
int main(void)
{
    int num;
    printf("Enter a number: ");
    scanf("%d",&num);
    if(num < 3)
        printf("No solution.\n");
    int p = num/2;
    int q = num%2;
    int v = num - (p+q);
    printf("%d+%d+%d\n",p,q,v); //prints a single solution but only works for odd numbers
    int k = num - 2;
    printf("1+1+%d\n",k);
    //prints a single solution but only 1+1+whatever is left
    return 0;
}
Any advice or different way to approach this would be very helpful. I was told I could do 3 nested for loops but I was looking for a different approach.
This solution uses 3 nested for loops (although I see that something else would be preferred). There is at least one other way:
Using recursion
There might be another one: solving the problem mathematically (and this would be the most elegant).
It is still brute force, but it doesn't try combinations that we know for sure won't yield a (good) result.
Since the order of the numbers is not important, meaning that, for example:
6 =
1 + 2 + 3
1 + 3 + 2
2 + 1 + 3
2 + 3 + 1
3 + 1 + 2
3 + 2 + 1
all 6 permutations of (1, 2, 3) constitute a single solution, only one variant of the 3-number permutations matters. We can use that to our advantage: we choose the 3-number sequence where No1 <= No2 <= No3 (with No1 + No2 + No3 = N, the number entered by the user).
That translates into fewer operations to compute: instead of having each of the 3 indexes (i, j, k), which correspond to (No1, No2, No3), sweep across the whole [1 .. n] interval:
The outer index (i) only iterates [1 .. n / 3] (it makes no point to go higher since the other 2 numbers are greater or equal to it, and if it did the sum would be greater than n). This alone reduces the number of operations to one third
The mid index (j) only iterates [i .. n / 2] (it doesn't go below the previous one since i <= j, and makes no sense to go higher than n / 2 since the other number will be greater or equal to it, and again the sum would be greater than n)
The inner index (k) only iterates [j .. n - 2] (it's obvious why).
Notes:
It might be possible (actually, I'm pretty sure) that the operations can be reduced even more
The last 3 variables declared (the ones with the bogus names) are for speeding things up: they are calculated once, at the beginning (although I might be reinventing the wheel here since I'm pretty sure that the compiler optimizes these kinds of situations). But, regardless of the optimizations, the algorithm is still O(n ** 3), which is highly inefficient. I feel that I'm missing something obvious, but I can't put my finger on it.
I checked (not very thoroughly though), and it doesn't seem to skip solutions
code.c:
#include <stdio.h>

void generate(int n) {
    int i, j, k, count = 0, n_div_3 = n / 3, n_div_2 = n / 2, n_minus_1 = n - 1;
    for (i = 1; i <= n_div_3; i++)
        for (j = i; j <= n_div_2; j++)
            for (k = j; k < n_minus_1; k++)
                if (i + j + k == n) {
                    printf("Solution %d: %d %d %d\n", count++, i, j, k);
                    break;
                }
    printf("\n%d solutions\n", count);
}

int main () {
    int num;
    printf("Enter a number: ");
    scanf("%d", &num);
    generate(num);
    return 0;
}
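As the answer notes, the operation count can probably be reduced further. One possibility (my sketch, not part of the answer above) is to drop the inner loop entirely: once i and j are fixed, the third number is forced to be k = n - i - j, and it is a valid solution exactly when k >= j, so only actual solutions are visited:
#include <stdio.h>

/* For each pair i <= j, the third number is forced: k = n - i - j.
   The bound j <= (n - i) / 2 guarantees k >= j, so no inner loop is needed. */
void generate_direct(int n)
{
    int count = 0;
    for (int i = 1; i <= n / 3; i++)
        for (int j = i; j <= (n - i) / 2; j++) {
            int k = n - i - j;
            printf("Solution %d: %d %d %d\n", ++count, i, j, k);
        }
    printf("\n%d solutions\n", count);
}

int main(void)
{
    int num;
    printf("Enter a number: ");
    scanf("%d", &num);
    generate_direct(num);
    return 0;
}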

What is the complexity of this piece of code

I had to determine the big O complexity of this piece of code.
I thought the answer was n*log(n) but apparently it's n. Can anyone help explain why that is so?
void funct(int n)
{
    for (int i = n; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            printf("%d\n", j%2);
}
That's a geometric progression.
The first time the inner loop is executed n times.
The second time it is executed n/2 times.
etc...
So we have the sequence:
n + n/2 + n/4 + ... + 1
so the final formula is:
n * (1 - (1/2)^(log n + 1)) / (1 - 1/2)
which is less than 2n, and therefore O(n)
These can be solved using double sums (sigmas). The cost is:
Σ (i = n down to 1, halving each time) Σ (j = 0 to i-1) 1
where 1 represents a unit cost.
The inner sum adds 1 exactly i times, so it equals i. The problem then becomes:
Σ (i = n down to 1, halving each time) i
which is the sum n + n/2 + n/4 + ... + 1 (it reaches 1 when n/2^x = 1, i.e., after about log(n) terms).
This is a convergent geometric progression, and its value is n * (1 - (1/2)^(log n + 1)) / (1 - 1/2) < 2n, i.e., O(n).
The outer loop, as I'm sure you can see, is executed about log(n) times. The inner loop is executed n times on the first pass, n/2 times on the second, n/4 on the third, and so on, so the printf statement is executed n + n/2 + n/4 + ... + 1 < 2n times in total. So the complexity of the code is O(n).
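If you want to convince yourself empirically (my sketch, not from any of the answers), count the calls to the inner body of funct instead of printing, and compare against 2n:
#include <stdio.h>

/* Same loop structure as funct, but counting iterations of the inner body.
   The count stays below 2n, matching the geometric-series bound. */
long count_funct(int n)
{
    long count = 0;
    for (int i = n; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            count++;
    return count;
}

int main(void)
{
    for (int n = 1; n <= 1000000; n *= 10)
        printf("n=%8d  inner iterations=%8ld  2n=%8d\n", n, count_funct(n), 2 * n);
    return 0;
}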

Total number of possible triangles from n numbers

If n numbers are given, how would I find the total number of possible triangles? Is there any method that does this in less than O(n^3) time?
I am considering the conditions a+b>c, b+c>a and a+c>b for being a triangle.
Assume there are no equal numbers among the given n, and that it's allowed to use one number more than once. For example, given the numbers {1,2,3}, we can create 7 triangles:
1 1 1
1 2 2
1 3 3
2 2 2
2 2 3
2 3 3
3 3 3
If any of those assumptions isn't true, it's easy to modify the algorithm.
Here I present an algorithm which takes O(n^2) time in the worst case:
Sort the numbers (ascending order).
We will take triples ai <= aj <= ak, such that i <= j <= k.
For each i, j you need to find the largest k that satisfies ak < ai + aj. Then all triples (ai, aj, al) with j <= l <= k are triangles (because ak >= aj >= ai, the only inequality that can be violated is al < ai + aj, and it holds for every l up to k).
Consider two pairs (i, j1) and (i, j2) with j1 <= j2. It's easy to see that k2 (found in the previous step for (i, j2)) >= k1 (found for (i, j1)). It means that as you iterate over j, you only need to check numbers starting from the previous k. So it gives you O(n) time for each particular i, which implies O(n^2) for the whole algorithm.
C++ source code:
#include <algorithm>  // std::sort

int Solve(int* a, int n)
{
    int answer = 0;
    std::sort(a, a + n);
    for (int i = 0; i < n; ++i)
    {
        int k = i;
        for (int j = i; j < n; ++j)
        {
            while (n > k && a[i] + a[j] > a[k])
                ++k;
            answer += k - j;
        }
    }
    return answer;
}
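As a quick check against the question's example (my note, not the answerer's): calling Solve with a = {1, 2, 3} and n = 3 should return 7, matching the seven triangles listed in the question.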
Update for downvoters:
This definitely is O(n^2)! Please read carefully the chapter about Amortized Analysis in "Introduction to Algorithms" by Thomas H. Cormen et al. (17.2 in the second edition).
Finding complexity by counting nested loops is sometimes completely wrong.
Here I try to explain it as simply as I can. Fix the variable i. For that i we iterate j from i to n (an O(n) operation), and the internal while loop moves k from i to n (also O(n) in total; note that the while loop does not restart from the beginning for each j). We do this for each i from 0 to n, which gives n * (O(n) + O(n)) = O(n^2).
There is a simple algorithm in O(n^2*logn).
Assume you want all triangles as triples (a, b, c) where a <= b <= c.
There are 3 triangle inequalities but only a + b > c suffices (others then hold trivially).
And now:
Sort the sequence in O(n * logn), e.g. by merge-sort.
For each pair (a, b) with a <= b, the remaining value c needs to be at least b and less than a + b.
So you need to count the number of items in the interval [b, a+b).
This can be done by binary-searching for a+b (O(logn)); the count of items in [b, a+b) is the difference between the positions found for a+b and for b.
Altogether O(n * logn + n^2 * logn), which is O(n^2 * logn). Hope this helps.
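A C sketch of this counting approach (my code, not the answerer's, under the same assumptions as above: sorted values, repetition of a value allowed). lower_bound_idx is a hand-rolled binary search standing in for the binary search the answer mentions:
#include <stdio.h>
#include <stdlib.h>

/* Comparator for qsort on ints. */
static int cmp_int(const void *x, const void *y)
{
    int a = *(const int *)x, b = *(const int *)y;
    return (a > b) - (a < b);
}

/* Index of the first element in sorted a[0..n-1] that is >= value (n if none). */
static int lower_bound_idx(const int *a, int n, int value)
{
    int lo = 0, hi = n;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < value) lo = mid + 1; else hi = mid;
    }
    return lo;
}

/* O(n^2 log n) triangle counting: for every pair (a[i], a[j]) with i <= j,
   binary-search how many candidates c satisfy b <= c < a + b. */
long count_triangles(int *a, int n)
{
    long count = 0;
    qsort(a, n, sizeof *a, cmp_int);
    for (int i = 0; i < n; i++)
        for (int j = i; j < n; j++) {
            int end = lower_bound_idx(a, n, a[i] + a[j]); /* first index with value >= a+b */
            if (end > j)
                count += end - j;  /* indices j..end-1 all give b <= c < a+b */
        }
    return count;
}

int main(void)
{
    int a[] = {1, 2, 3};                     /* example from the question */
    printf("%ld\n", count_triangles(a, 3));  /* expected: 7 */
    return 0;
}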
If you use a binary sort, that's O(n * log(n)), right? Keep your binary tree handy, and for each pair (a, b) with a <= b, count the values c such that c >= b and c < (a + b).
Let a, b and c be the three sides. The conditions below must hold for a triangle (the sum of any two sides must be greater than the third side):
i) a + b > c
ii) b + c > a
iii) a + c > b
Following are the steps to count triangles:
1. Sort the array in non-decreasing order.
2. Initialize two pointers 'i' and 'j' to the first and second elements respectively, and initialize the count of triangles as 0.
3. Fix 'i' and 'j' and find the rightmost index 'k' (or largest 'arr[k]') such that 'arr[i] + arr[j] > arr[k]'. The number of triangles that can be formed with 'arr[i]' and 'arr[j]' as two sides is 'k - j'; add 'k - j' to the count of triangles. Considering 'arr[i]' as 'a', 'arr[j]' as 'b' and all elements between 'arr[j+1]' and 'arr[k]' as 'c', conditions (ii) and (iii) are satisfied because 'arr[i] < arr[j] < arr[k]', and condition (i) is checked when we pick 'k'.
4. Increment 'j' to fix the second element again. Note that in step 3 we can reuse the previous value of 'k': if we know that 'arr[i] + arr[j-1]' is greater than 'arr[k]', then 'arr[i] + arr[j]' will also be greater than 'arr[k]', because the array is sorted in increasing order.
5. If 'j' has reached the end, then increment 'i', initialize 'j' as 'i + 1' and 'k' as 'i + 2', and repeat steps 3 and 4.
Time Complexity: O(n^2).
The time complexity looks higher because of the 3 nested loops, but if we take a closer look we observe that 'k' is initialized only once in the outermost loop. The innermost loop executes at most O(n) times in total for each iteration of the outermost loop, because 'k' starts from 'i+2' and only goes up to 'n' across all values of 'j'. Therefore, the time complexity is O(n^2).
I have worked out an algorithm that runs in O(n^2 lg n) time. I think it's correct...
The code is written in C++...
/* Returns the index of the element closest to n in array A[p..q] */
int Search_Closest(int A[], int p, int q, int n)
{
    if (p < q)
    {
        int r = (p + q) / 2;
        if (n == A[r])
            return r;
        if (p == r)
            return r;
        if (n < A[r])
            return Search_Closest(A, p, r, n);
        else
            return Search_Closest(A, r, q, n);
    }
    else
        return p;
}
/* Returns the number of triangles possible in A[p..q] */
int no_of_triangles(int A[], int p, int q)
{
    int sum = 0;
    Quicksort(A, p, q); // Sorts the array A[p..q] in O(n lg n) expected-case time
    for (int i = p; i <= q; i++)
        for (int j = i + 1; j <= q; j++)
        {
            int c = A[i] + A[j];
            int k = Search_Closest(A, j, q, c);
            /* The number of triangles formed with A[i] and A[j] as two sides is
               (k+1)-2 if A[k] is smaller than or equal to c, else it is (k+1)-3.
               As indexing starts from zero we need to add 1 to the value. */
            if (A[k] > c)
                sum += k - 2;
            else
                sum += k - 1;
        }
    return sum;
}
Hope it helps........
Possible answer: we can also use binary search to find the value of 'k', and hence improve the time complexity.
Given N0, N1, N2, ..., Nn-1:
Sort them as X0, X1, X2, ..., Xn-1 with X0 >= X1 >= X2 >= ... >= Xn-1.
Choose X0 (through Xn-3) as the largest side, then choose the remaining two items from the rest.
For a choice such as (X0, X1, X2), check X0 < X1 + X2.
If the check passes (OK), it is a triangle; record it and continue.
If it fails (NG), skip the rest of the choices for this largest side, since the remaining values are no larger and cannot make the sum any bigger.
It seems there is no algorithm better than O(n^3) if the triangles themselves have to be listed: in the worst case, the result set has O(n^3) elements.
For example, if n equal numbers are given, the algorithm has to return on the order of n^3 results.
