Complexity of a recursive function that calls another function (Big O notation) - C

I've seen many similar questions, but none quite like mine. I'm supposed to find the complexity of the code below. What makes it different from what I've seen here already is that the function whose complexity I have to find calls another function with a given complexity.
I think I can solve this, but I can't arrive at the correct answer. A detailed explanation would be very welcome, also to help me better understand the process of finding the complexity of these kinds of functions. The code is in C.
void f(int v[], int n, int i, int j){
    int a = v[i];
    int b = v[j];
    int m = (i+j)/2;
    g(v,n);              /* g is defined elsewhere; its complexity is given as O(n) */
    if(i<j){
        if(a<b) f(v,n,i,m);
        else f(v,n,m,j);
    }
    return;
}
The f function is called in main, where v is an array: f(v, n, 0, n-1).
The g function's complexity is O(n).
Now, I really can't decide between O(log n) and O(n log n). Since we halve the working range using m, I know the recursion is logarithmic, but does the cost of g add up and turn everything into O(n log n)?
Thank you.
PS: if a question like this has already been asked, I couldn't find it, and a redirection would be great in case anyone else stumbles on the same problem.

Your f function will execute about log(n) times (the range between i and j is halved on every call); each of those times, it calls g at an additional cost of O(n). Therefore the total complexity is O(n * log(n)), which is also the total number of times the inner loop* of g runs.
(* I am assuming that there is an inner loop in g for explanation purposes, because that is what you find in many, but certainly not all, O(n) functions).
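To make the accounting explicit, write the recurrence on the width d = j - i of the current range (this assumes, as stated in the question, that g(v, n) always costs O(n) no matter how small the range gets):

T(d) = T(d/2) + O(n)

The range halves roughly log2(n) times before i meets j, and every level pays the full O(n) for g:

T(n) = O(n) + O(n) + ... + O(n)    [about log2(n) terms]
     = O(n log n)

Note this is not the recurrence T(d) = T(d/2) + O(d), which would collapse to O(n); the difference is that g is always called on all n elements, not on the shrinking range.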

Related

Big Oh Runtime of a Recursive Sum

If I use a for loop to find the sum of the numbers from 0 to n, my runtime is O(n). But if I create a recursive function such as:
int sum(int n) {
    if (n == 0)
        return 0;
    return n + sum(n - 1);
}
Would my runtime still be O(n)?
Yes, your runtime will still be O(N). Your recursive function will "loop" N times until it hits the base case.
However, keep in mind that your space complexity is also O(N): your language has to save the pending n + ... before evaluating sum(n - 1), creating a stack of recursive calls that is N deep.
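For contrast, here is a minimal iterative sketch (my own illustration, not from the question) that keeps the O(N) runtime but drops the space to O(1), since there is no call stack:

int sum_iterative(int n) {
    int total = 0;
    for (int i = 1; i <= n; i++)   /* same N iterations, but no recursion */
        total += i;
    return total;
}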
@Primusa's answer addresses your recursive runtime question. While my answer won't address the runtime question, it should be noted that you don't need an algorithm for this at all: the closed formula for the sum is n*(n+1)/2.
thanks Carl Gauss!
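In C, that closed formula is a one-liner (a sketch; the name sum_closed is mine, and long long is used as a precaution against overflow for large n):

long long sum_closed(long long n) {
    return n * (n + 1) / 2;   /* Gauss's formula: O(1) time and space */
}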

Time complexity of N Queen using backtracking?

#include <stdio.h>
#include <stdlib.h>   /* abs() is declared in stdlib.h, not math.h */

void printboard(int n);
int place(int k, int i);
int x[100];

void NQueen(int k, int n)
{
    int i;
    for (i = 1; i <= n; i++)
    {
        if (place(k, i) == 1)
        {
            x[k] = i;
            if (k == n)
            {
                printf("Solution\n");
                printboard(n);
            }
            else
                NQueen(k + 1, n);
        }
    }
}

int place(int k, int i)
{
    int j;
    for (j = 1; j < k; j++)
    {
        if ((x[j] == i) || abs(x[j] - i) == abs(j - k))
            return 0;
    }
    return 1;
}

void printboard(int n)
{
    int i;
    for (i = 1; i <= n; i++)
        printf("%d ", x[i]);
}

int main(void)
{
    int n;
    printf("Enter Value of N:");
    scanf("%d", &n);
    NQueen(1, n);
    return 0;
}
I think it has time complexity O(n^n), since the NQueen function calls itself recursively, but is there a tighter bound possible for this program? What about the best-case and worst-case time complexity? I am also confused about the place() function, which is O(k) and is called from NQueen().
There are a lot of optimizations that can improve the time complexity of the algorithm.
There is more information in these links:
https://sites.google.com/site/nqueensolver/home/algorithm-results
https://sites.google.com/site/nqueensolver/home/algorithms/2backtracking-algorithm
For your function, the recurrence is T(n) = n*T(n-1) + O(n^2), which translates to approximately O(n!) time complexity.
The time complexity of the N-Queens problem is O(N!).
Explanation:
Add it all up and define the running time as T(N). Then T(N) = O(N^2) + N*T(N-1). If you draw a recursion tree using this recurrence, the final terms will be something like N^3 + N!*O(1). By the definition of Big O, this can be reduced to O(N!) running time.
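Unrolling that recurrence makes the factorial visible (a sketch, keeping only the dominant recursive term):

T(N) = N*T(N-1) + O(N^2)
     = N*(N-1)*T(N-2) + ...
     = N*(N-1)*(N-2)*...*1*O(1) + lower-order terms
     = O(N!)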
O(n^n) is definitely an upper bound on solving n-queens using backtracking.
I'm assuming that you are solving this by assigning a queen column-wise.
However, consider this: when you assign a location to the queen in the first column, you have n options; after that, you only have n-1 options, since you can't place the queen in the same row as the first queen; then n-2, and so on. Thus, the worst-case complexity is still upper bounded by O(n!).
Hope this answers your question even though I'm almost 4 years late!
Let us consider that our queen is a rook, meaning we need not take care of diagonal conflicts.
The time complexity in this case will be O(N!) in the worst case, supposing we are hunting to check whether any solution exists. Here is a simple explanation.
Let us take an example where N=4.
Suppose we want to fill a 2-D matrix, where X represents a vacant position and 1 represents a taken position.
Initially, the answer matrix (which we need to fill) looks like:
X X X X
X X X X
X X X X
X X X X
Let us fill this row-wise, meaning we will select one location in each row and then move on to the next row.
For the first row, since nothing has been filled in the matrix yet, we have 4 options.
For the second row, we have 3 options, as one column has already been taken.
Similarly, for the third row, we have 2 options, and for the final row, we are left with just 1 option.
Total options = 4*3*2*1 = 24 (4!)
Now, this was the case if our queen were a rook, but a queen brings extra diagonal constraints, so the actual number of operations should be less than O(N!).
The complexity is n^n, and here is the explanation.
Here n represents the number of queens and stays the same for every function call.
k is the row number, and the function keeps being called until k reaches n. So if n = 8, we have n rows and n queens.
T(n) = n*(n + T(k-1)) = n^(max value of k) = n^n, since the maximum value of k is n.
Note: the function has two parameters. n does not decrease in the loop; it stays the same for every call. What shrinks from call to call is the number of rows left to fill, which is what lets the recursion terminate.
The complexity is (n+1)!*n^n. Begin with T(i) = O(n*i*T(i+1)) and T(n) = n^3.
So T(1) = n*T(2) = 2*n^2*T(3) = ... = (n-1)!*n^(n-1)*T(n).

Recurrence For Running Time Of A Recursive Insertion Sort

I was assigned to write a recursive version of the insertion sort algorithm, and I did that. In fact, here it is:
void recursiveInsertionSort(int* inputArray, int p, int q)
{
    while (q > p)
    {
        recursiveInsertionSort(inputArray, p, q - 1);
        if (inputArray[q - 1] > inputArray[q])
        {
            /* swap the out-of-order neighbours and keep bubbling down */
            int temp = inputArray[q];
            inputArray[q] = inputArray[q - 1];
            inputArray[q - 1] = temp;
            q--;
        }
        else
            break;  /* prefix is sorted and inputArray[q] is in place */
    }
}
My problem is twofold. First, I'm not sure if the recurrence relation I came up with is right. I came up with
T(n) = T(n-1) + T(n^2)
as my recurrence relation. Is that right? I'm bouncing between that and just
T(n) = T(n^2)
Second, I am supposed to use algebra to prove that
f(n) = ((n+1)n / 2)
solves that recurrence relation, which I'm having a really tough time doing because (a) I'm not sure if my recurrence is right, and (b) I am sometimes awful at math in general.
Any help on any of the issues would be greatly appreciated.
Thanks.
Alright, I managed to figure it out with the help of a math professor :P I'm going to leave this up here so that others know how to do it. Someone should copy this as an answer :D
So the recurrence relation for this should be T(n) = T(n-1) + n, not what I originally had; that was the main problem. The T(n-1) term is the time for the recursive call on the first n-1 elements; you stop the recursion at one element because a single element is already sorted. The + n is the time it takes to do one insertion, one actual sorting step.
The reason the insertion costs n is that, at that point, you are checking one number against every number before it, which in the worst case is n comparisons.
Now how do you show that the function f(n) solves T(n)?
We want to verify that f(n) satisfies the recurrence, so you can do this:
We know that f(n) = (n(n+1))/2. If T(n) = T(n-1) + n, then substitute f(n-1) for T(n-1); you get f(n-1) by replacing every n in f(n) with n-1.
That gives ((n-1)n)/2 + n. Multiply the + n by 2/2 to put it over the common denominator, giving (n^2 - n + 2n)/2. That simplifies to (n^2 + n)/2, which, if you factor out an n, is (n(n+1))/2. Which is f(n).
Woo!
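Equivalently, you can check it by unrolling the recurrence directly (assuming the base case T(1) = 1):

T(n) = n + T(n-1)
     = n + (n-1) + T(n-2)
     = n + (n-1) + ... + 2 + 1
     = n(n+1)/2 = f(n)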

Time complexity of this code sample

i = n;
while (i >= i) {
    x = x + 1;
    i = i / 2;
}
What is the running time of this code?
A. O(N^2)
B. O(N^3)
C. O(N^4)
D. O(log N)
E. O(2^N)
I believe it is option D.
This is for revision. Not homework
This will never terminate as the while condition is
i>=i
However, assuming you wanted to type
i>=1
The answer will be log(n).
Your belief would be correct if you change the while condition to i>=1
As it stands the complexity is O(INFINITY)
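For the record, here is why the corrected loop is O(log n): i takes the values

n, n/2, n/4, ..., 2, 1

so the body executes floor(log2 n) + 1 times, which is O(log n).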

Sorting an almost sorted array (elements misplaced by no more than k)

I was asked this interview question recently:
You're given an array that is almost sorted, in that each of the N elements may be misplaced by no more than k positions from the correct sorted order. Find a space-and-time efficient algorithm to sort the array.
I have an O(N log k) solution as follows.
Let's use arr[0..N) to denote the elements of the array from index 0 (inclusive) to N (exclusive).
Sort arr[0..2k)
Now we know that arr[0..k) are in their final sorted positions...
...but arr[k..2k) may still be misplaced by up to k positions!
Sort arr[k..3k)
Now we know that arr[k..2k) are in their final sorted positions...
...but arr[2k..3k) may still be misplaced by up to k positions
Sort arr[2k..4k)
....
Until you sort arr[ik..N), then you're done!
This final step may be cheaper than the other steps when you have less than 2k elements left
In each step, you sort at most 2k elements in O(k log k), putting at least k elements in their final sorted positions at the end of each step. There are O(N/k) steps, so the overall complexity is O(N log k).
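For concreteness, here is a minimal C sketch of this windowed approach (my own illustration; it leans on the library qsort, which the C standard does not guarantee to be O(m log m), but that detail doesn't affect the idea):

#include <stdlib.h>

/* Comparator for qsort: ascending ints, written to avoid overflow. */
static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Sort an array whose elements are each within k positions of their
   sorted location, by sorting overlapping 2k-wide windows. */
void sort_almost_sorted(int *arr, size_t n, size_t k) {
    if (k == 0 || n < 2) return;
    for (size_t start = 0; ; start += k) {
        size_t len = (start + 2 * k <= n) ? 2 * k : n - start;
        qsort(arr + start, len, sizeof *arr, cmp_int);  /* sort arr[start .. start+len) */
        if (start + 2 * k >= n) break;  /* this window reached the end of the array */
    }
}

Each pass sorts at most 2k elements and finalizes at least k of them, matching the O(N/k) steps of O(k log k) each described above.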
My questions are:
Is O(N log k) optimal? Can this be improved upon?
Can you do this without (partially) re-sorting the same elements?
As Bob Sedgewick showed in his dissertation work (and follow-ons), insertion sort absolutely crushes the "almost-sorted array". In this case your asymptotics look good but if k < 12 I bet insertion sort wins every time. I don't know that there's a good explanation for why insertion sort does so well, but the place to look would be in one of Sedgewick's textbooks entitled Algorithms (he has done many editions for different languages).
I have no idea whether O(N log k) is optimal, but more to the point, I don't really care: if k is small, it's the constant factors that matter, and if k is large, you may as well just sort the array.
Insertion sort will nail this problem without re-sorting the same elements.
Big-O notation is all very well for algorithm class, but in the real world, constants matter. It's all too easy to lose sight of this. (And I say this as a professor who has taught Big-O notation!)
If using only the comparison model, O(n log k) is optimal. Consider the case when k = n.
To answer your other question, yes it is possible to do this without sorting, by using heaps.
Use a min-heap of 2k elements. Insert the first 2k elements, then repeatedly remove the min and insert the next element, and so on.
This guarantees O(n log k) time and O(k) space and heaps usually have small enough hidden constants.
Since k is apparently supposed to be pretty small, an insertion sort is probably the most obvious and generally accepted algorithm.
In an insertion sort on random elements, you have to scan through N elements, and you have to move each one an average of N/2 positions, giving ~N*N/2 total operations. The "/2" constant is ignored in a big-O (or similar) characterization, giving O(N^2) complexity.
In the case you're proposing, the expected number of operations is ~N*k/2; but since k is a constant, the whole k/2 term is ignored in a big-O characterization, so the overall complexity is O(N).
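For reference, the standard insertion sort being discussed (an ordinary textbook version, not code from the answer):

void insertion_sort(int *arr, int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i;
        /* Shift larger elements right; this runs at most k times per
           element when nothing is misplaced by more than k positions. */
        while (j > 0 && arr[j - 1] > key) {
            arr[j] = arr[j - 1];
            j--;
        }
        arr[j] = key;
    }
}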
Your solution is a good one if k is large enough. There is no better solution in terms of time complexity: each element might be out of place by k positions, which means you need to learn log2(k) bits of information to place it correctly, which means you need at least log2(k) comparisons per element, so the complexity has to be at least O(N log k).
However, as others have pointed out, if k is small, the constant terms are going to kill you. Use something that's very fast per operation, like insertion sort, in that case.
If you really wanted to be optimal, you'd implement both methods, and switch from one to the other depending on k.
It was already pointed out that one of the asymptotically optimal solutions uses a min-heap, and I just wanted to provide code in Java:
// requires java.util.PriorityQueue
public void sortNearlySorted(int[] nums, int k) {
    PriorityQueue<Integer> minHeap = new PriorityQueue<>();
    // Seed the heap with the first k elements.
    for (int i = 0; i < k; i++) {
        minHeap.add(nums[i]);
    }
    for (int i = 0; i < nums.length; i++) {
        // Keep the heap covering the window [i, i + k].
        if (i + k < nums.length) {
            minHeap.add(nums[i + k]);
        }
        nums[i] = minHeap.remove();
    }
}
