i = n;
while (i >= i) {
    x = x + 1;
    i = i / 2;
}
What is the running time of this code?
A O(N^2)
B O(N^3)
C O(N^4)
D O(log N)
E O(2^N)
I believe it is option D.
This is for revision, not homework.
This will never terminate, as the while condition is
i >= i
However, assuming you meant to type
i >= 1
the answer will be O(log n): i is halved on every iteration, so the loop runs about log2(n) times before i reaches 0.
Your belief would be correct if you change the while condition to i >= 1.
As it stands, the complexity is O(INFINITY).
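For a concrete check, here is a minimal, runnable version of the corrected loop (the iteration counter is an addition for illustration); for n = 1024 it runs 11 times, i.e. floor(log2 n) + 1:

#include <stdio.h>

int main(void) {
    int n = 1024;                /* example size; any n >= 1 works */
    int x = 0, i = n, steps = 0;
    while (i >= 1) {             /* the intended condition */
        x = x + 1;
        i = i / 2;               /* i is halved each iteration */
        steps++;
    }
    printf("n = %d, iterations = %d\n", n, steps);   /* prints 11 for n = 1024 */
    return 0;
}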
I've seen many similar questions, but not quite what I'm looking for. I'm supposed to find the complexity of the code below. What makes this code different from what I've seen here already is that the function whose complexity I have to find calls another function with a given complexity.
I think I can solve this but can't arrive at the correct answer. A detailed explanation would be very nice, also to help me better understand how to find the complexity of these kinds of functions. The code is in C.
void f(int v[], int n, int i, int j){
    int a = v[i];
    int b = v[j];
    int m = (i+j)/2;
    g(v,n);
    if(i<j){
        if(a<b) f(v,n,i,m);
        else f(v,n,m,j);
    }
    return;
}
The f function is called in the main where v is an array: f(v, n, 0, n-1).
The g function complexity is O(n).
Now, I really can't decide between O(log n) and O(n log n). Seeing that we're dividing the workspace in half using the int m, I know it's logarithmic, but does the g function add up and turn everything into O(n log n)?
Thank you.
PS: if a question like this has been asked already, I couldn't find it, and a redirect would be great in case anyone else stumbles on the same problem as mine.
Your f function will execute O(log n) times (the range between i and j is halved on every call); each of those times, it will execute g, at an additional cost of O(n). Therefore, the total complexity is O(n * log(n)), which is the total number of times the inner loop* of g is executed.
(* I am assuming that there is an inner loop in g for explanation purposes, because that is what you find in many, but certainly not all, O(n) functions).
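One quick way to see this: give g a hypothetical counting body (only its O(n) cost is given, so the body below is a stand-in) and run f on a sorted array, so the a < b branch is always taken and the recursion terminates cleanly:

#include <stdio.h>

static long work = 0;                 /* counts g's "inner loop" iterations */

void g(int v[], int n) {              /* stand-in body for the O(n) function g */
    for (int i = 0; i < n; i++)
        work++;
}

void f(int v[], int n, int i, int j) {
    int a = v[i];
    int b = v[j];
    int m = (i + j) / 2;
    g(v, n);
    if (i < j) {
        if (a < b) f(v, n, i, m);
        else f(v, n, m, j);
    }
}

int main(void) {
    enum { N = 1024 };
    int v[N];
    for (int i = 0; i < N; i++)
        v[i] = i;                     /* strictly increasing, so a < b always holds */
    f(v, N, 0, N - 1);
    printf("work = %ld, N log2 N = %d\n", work, N * 10);   /* 11264 vs 10240 */
    return 0;
}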
If I use a for loop to find the sum of the numbers from 0 to n, my runtime is O(n). But if I create a recursive function such as:
int sum(int n) {
    if (n == 0)
        return 0;
    return n + sum(n - 1);
}
Would my runtime still be O(n)?
Yes, your runtime will still be O(N). Your recursive function will "loop" N times until it hits the base case.
However, keep in mind that your space complexity is also O(N): the language has to save n + ... before evaluating sum(n - 1), creating a stack of recursive calls that is N deep.
@Primusa's answer addresses your recursive runtime question. While my answer won't address your runtime question, it should be noted that you don't need an algorithm for this: the closed formula for the sum is (n+1)*n / 2.
thanks Carl Gauss!
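For anyone else revising: a small side-by-side of the recursive version and Gauss's closed formula (the function names here are made up for the comparison):

#include <stdio.h>

int sum_rec(int n) {        /* O(n) time, O(n) stack */
    if (n == 0)
        return 0;
    return n + sum_rec(n - 1);
}

int sum_closed(int n) {     /* O(1) time, O(1) space */
    return n * (n + 1) / 2;
}

int main(void) {
    for (int n = 0; n <= 5; n++)
        printf("n=%d: recursive=%d closed=%d\n", n, sum_rec(n), sum_closed(n));
    return 0;
}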
void fn(int n){
    int p, q;
    for(int i = 0; i < n; i++){
        p = 0;
        for(int j = n; j > 1; j = j/2)
            ++p;
        for(int k = 1; k < p; k = k*2)
            ++q;
    }
}
I think its complexity is O(n log n).
My friend says it's O(n log(log n)).
Also, please tell me: do the inner loops depend on each other in this function?
It's actually of undefined complexity because you use q uninitialised.
Ignoring that small bug: the outer loop is obviously O(n). The first inner loop is O(log n). The second inner loop is O(log p), and since p is log n, that is O(log log n); but it doesn't matter, because it runs sequentially after the first inner loop, so the total for both inner loops is O(log n). (When you add two complexities, the overall complexity is the fastest-growing one.) So your overall complexity is O(n log n).
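You can also confirm the loop counts empirically; this is the same function with q initialised and two counters bolted on (the counters are additions for measurement only):

#include <stdio.h>

void fn(int n) {
    int p, q = 0;
    long first = 0, second = 0;       /* added iteration counters */
    for (int i = 0; i < n; i++) {
        p = 0;
        for (int j = n; j > 1; j = j / 2) {
            ++p;
            first++;
        }
        for (int k = 1; k < p; k = k * 2) {
            ++q;
            second++;
        }
    }
    /* For n = 1024: first = 10240 (about n log2 n), second = 4096 (about n log2 log2 n). */
    printf("n=%d first=%ld second=%ld\n", n, first, second);
}

int main(void) {
    fn(1024);
    return 0;
}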
I was recently assigned to write a recursive version of the insertion sort algorithm, and I did that. In fact, here it is:
void recursiveInsertionSort(int* inputArray, int p, int q)
{
    if (q <= p)
        return;
    /* sort the prefix inputArray[p..q-1] first */
    recursiveInsertionSort(inputArray, p, q - 1);
    /* then swap inputArray[q] down into its place in the sorted prefix */
    while (q > p && inputArray[q - 1] > inputArray[q])
    {
        int temp = inputArray[q];
        inputArray[q] = inputArray[q - 1];
        inputArray[q - 1] = temp;
        q--;
    }
}
My problem is twofold. First, I'm not sure if the recurrence relation I came up with is right. I came up with
T(n) = T(n-1) + T(n^2)
as my recurrence relation. Is that right? I'm bouncing between that and just
T(n) = T(n^2)
Second, I am supposed to use algebra to prove that
f(n) = ((n+1)n / 2)
solves that recurrence relation. I'm having a real tough time doing that because (a) I'm not sure if my recurrence is right, and (b) I am sometimes awful at math in general.
Any help on any of the issues would be greatly appreciated.
Thanks.
Alright, I managed to figure it out with the help of a math professor :P I'm going to leave this up here so that others know how to do it. Someone should copy this as an answer :D
So the recurrence relation for this should be T(n) = T(n-1) + n, and not what I originally had; that was the main problem. Why? Well, the T(n-1) is the recursive call, which works on n-1 elements (once you get down to a single element, it's already sorted), and the + n is the time it takes to do one insertion, one actual sort step.
The reason that that is n is because when you get down there, you are checking one number against every number before it, which happens up to n times.
Now how do you show that f(n) solves T(n)?
Well, if f(n) solves T(n), then substituting f(n-1) for T(n-1) in the recurrence must give back f(n). So that means you can do this:
We know that f(n) is equal to (n(n+1))/2. So since T(n) = T(n-1) + n, we take away 1 from every n in f(n) and plug that in for T(n-1).
That gives us T(n) = ((n-1+1)(n-1))/2 + n, which simplifies to (n(n-1))/2 + n. Take that + n that's out there and multiply it by 2/2 so it's all over a common denominator, giving you (n^2 - n + 2n)/2. That simplifies down to (n^2 + n)/2, which further simplifies, if you factor out an n, to (n(n+1))/2. Which is f(n).
Woo!
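If you want to double-check the closed form numerically, here is a counting harness (an addition, purely for checking) around the sort above: charging one unit per call and one unit per swap, a reverse-sorted input of N elements costs exactly N + N(N-1)/2 = N(N+1)/2 = f(N):

#include <stdio.h>

static long cost = 0;

void recursiveInsertionSort(int *a, int p, int q) {
    cost++;                           /* one unit for this call */
    if (q <= p)
        return;
    recursiveInsertionSort(a, p, q - 1);
    while (q > p && a[q - 1] > a[q]) {
        int temp = a[q];
        a[q] = a[q - 1];
        a[q - 1] = temp;
        q--;
        cost++;                       /* one unit per swap */
    }
}

int main(void) {
    enum { N = 10 };
    int a[N];
    for (int i = 0; i < N; i++)
        a[i] = N - i;                 /* reverse sorted: the worst case */
    recursiveInsertionSort(a, 0, N - 1);
    printf("cost = %ld, f(N) = %d\n", cost, N * (N + 1) / 2);   /* both 55 */
    return 0;
}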
I was asked this interview question recently:
You're given an array that is almost sorted, in that each of the N elements may be misplaced by no more than k positions from the correct sorted order. Find a space-and-time efficient algorithm to sort the array.
I have an O(N log k) solution as follows.
Let's denote by arr[0..N) the elements of the array from index 0 (inclusive) to N (exclusive).
Sort arr[0..2k)
Now we know that arr[0..k) are in their final sorted positions...
...but arr[k..2k) may still be misplaced by k!
Sort arr[k..3k)
Now we know that arr[k..2k) are in their final sorted positions...
...but arr[2k..3k) may still be misplaced by k
Sort arr[2k..4k)
....
Until you sort arr[ik..N); then you're done!
This final step may be cheaper than the other steps when you have fewer than 2k elements left.
In each step, you sort at most 2k elements in O(k log k), putting at least k elements in their final sorted positions at the end of each step. There are O(N/k) steps, so the overall complexity is O(N log k).
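In code, the scheme looks roughly like this (a sketch in C; qsort stands in for the O(k log k) window sorts, though the C standard does not guarantee qsort's complexity):

#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Each element is at most k positions out of place; sort overlapping
   2k-wide windows that advance k positions at a time. */
void sort_almost_sorted(int *arr, int n, int k) {
    if (k <= 0 || n <= 1)
        return;
    for (int i = 0; i < n; i += k) {
        int end = i + 2 * k;
        if (end > n)
            end = n;
        qsort(arr + i, (size_t)(end - i), sizeof(int), cmp_int);
        if (end == n)
            break;                    /* the final window reached the end */
    }
}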
My questions are:
Is O(N log k) optimal? Can this be improved upon?
Can you do this without (partially) re-sorting the same elements?
As Bob Sedgewick showed in his dissertation work (and follow-ons), insertion sort absolutely crushes the "almost-sorted array". In this case your asymptotics look good but if k < 12 I bet insertion sort wins every time. I don't know that there's a good explanation for why insertion sort does so well, but the place to look would be in one of Sedgewick's textbooks entitled Algorithms (he has done many editions for different languages).
I have no idea whether O(N log k) is optimal, but more to the point, I don't really care: if k is small, it's the constant factors that matter, and if k is large, you may as well just sort the array.
Insertion sort will nail this problem without re-sorting the same elements.
Big-O notation is all very well for algorithm class, but in the real world, constants matter. It's all too easy to lose sight of this. (And I say this as a professor who has taught Big-O notation!)
If using only the comparison model, O(n log k) is optimal. Consider the case when k = n.
To answer your other question, yes it is possible to do this without sorting, by using heaps.
Use a min-heap of 2k elements. Insert the first 2k elements, then repeatedly remove the min and insert the next element, and so on.
This guarantees O(n log k) time and O(k) space and heaps usually have small enough hidden constants.
Since k is apparently supposed to be pretty small, an insertion sort is probably the most obvious and generally accepted algorithm.
In an insertion sort on random elements, you have to scan through N elements, and you have to move each one an average of N/2 positions, giving ~N*N/2 total operations. The /2 constant is ignored in a big-O (or similar) characterization, giving O(N^2) complexity.
In the case you're proposing, the expected number of operations is ~N*k/2; but since k is a constant, the whole k/2 term is ignored in a big-O characterization, so the overall complexity is O(N).
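For reference, this is the textbook insertion sort being described; on such an input the inner while loop runs at most k times per element, which is where the ~N*k/2 count comes from:

void insertionSort(int *a, int n) {
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int j = i - 1;
        /* at most k shifts here when every element is at most k places off */
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}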
Your solution is a good one if k is large enough. There is no better solution in terms of time complexity: each element might be out of place by k positions, which means you need to learn log2(k) bits of information to place it correctly, which means you need to make at least log2(k) comparisons per element; so it's got to be a complexity of at least O(N log k).
However, as others have pointed out, if k is small, the constant terms are going to kill you. Use something that's very fast per operation, like insertion sort, in that case.
If you really wanted to be optimal, you'd implement both methods, and switch from one to the other depending on k.
It was already pointed out that one of the asymptotically optimal solutions uses a min-heap, and I just wanted to provide code in Java:
public void sortNearlySorted(int[] nums, int k) {
    PriorityQueue<Integer> minHeap = new PriorityQueue<>();
    // seed the heap with the first k elements
    for (int i = 0; i < k; i++) {
        minHeap.add(nums[i]);
    }
    for (int i = 0; i < nums.length; i++) {
        // top the heap up with the next element still within reach
        if (i + k < nums.length) {
            minHeap.add(nums[i + k]);
        }
        // the smallest of these k+1 candidates belongs at position i
        nums[i] = minHeap.remove();
    }
}