What is the time complexity of this algorithm in Big-O notation? - c

This is an algorithm for this question: Rotate an array of n elements left by i positions. For instance, with n = 8 and i = 3, the array abcdefgh is rotated to defghabc.
/* Alg 1: Rotate by reversal; x is the global array being rotated */
void reverse(int i, int j)
{
    int t;
    while (i < j) {
        t = x[i]; x[i] = x[j]; x[j] = t;
        i++;
        j--;
    }
}

void revrot(int rotdist, int n)
{
    reverse(0, rotdist-1);
    reverse(rotdist, n-1);
    reverse(0, n-1);
}
What is the time complexity of this method? And is there any better solution to this problem?
Thanks indeed.

It should be linear: O(n).

The loop in reverse runs no more than (j - i + 1)/2 times. Dropping the constant factor, that's O(j - i), which is O(n) across the three calls.

Big-O rules of thumb:
A loop over n elements is O(n), as it has to go through n iterations.
A fixed amount of work (an if statement, a constant number of operations) is O(1).

Agreed, it'd be O(n), since we're merely shifting.
As food for thought, another possible algorithm is to make a new array with the original appended to itself (i.e. abcd --> abcdabcd). Then shift two pointers right by the rotation distance i: one for the beginning of the window, one for the end. Remember to cut off the end with '\0'.
Same asymptotic running time, by the way.
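A minimal C sketch of that doubled-array idea, treating the input as a NUL-terminated string (the function name rotate_by_doubling is hypothetical, not from the original post):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Rotate the string s of length n left by i positions by appending
   s to itself (abcd -> abcdabcd) and copying the window [i, i+n). */
char *rotate_by_doubling(const char *s, int i)
{
    int n = strlen(s);
    char *doubled = malloc(2 * n + 1);
    char *out = malloc(n + 1);
    sprintf(doubled, "%s%s", s, s);
    memcpy(out, doubled + i, n);
    out[n] = '\0';                 /* cut off the end */
    free(doubled);
    return out;
}

int main(void)
{
    char *r = rotate_by_doubling("abcdefgh", 3);
    printf("%s\n", r);             /* prints defghabc */
    free(r);
    return 0;
}

Unlike the reversal trick, this uses O(n) extra space, but the running time is the same O(n).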

Related

Time complexity of N Queen using backtracking?

#include <stdio.h>
#include <stdlib.h>   /* for abs() */

void printboard(int n);
int place(int k, int i);

int x[100];

void NQueen(int k, int n)
{
    int i;
    for (i = 1; i <= n; i++)
    {
        if (place(k, i) == 1)
        {
            x[k] = i;
            if (k == n)
            {
                printf("Solution\n");
                printboard(n);
            }
            else
                NQueen(k + 1, n);
        }
    }
}

int place(int k, int i)
{
    int j;
    for (j = 1; j < k; j++)
    {
        /* same column, or same diagonal */
        if ((x[j] == i) || abs(x[j] - i) == abs(j - k))
            return 0;
    }
    return 1;
}

void printboard(int n)
{
    int i;
    for (i = 1; i <= n; i++)
        printf("%d ", x[i]);
}

int main(void)
{
    int n;
    printf("Enter Value of N:");
    scanf("%d", &n);
    NQueen(1, n);
    return 0;
}
I think it has time complexity O(n^n), as the NQueen function calls itself recursively, but is there a tighter bound possible for this program? What about the best-case and worst-case time complexity? I am also confused about the place() function, which is O(k) and is called from NQueen().
There are a lot of optimizations that can improve the time complexity of the algorithm.
There is more information in these links:
https://sites.google.com/site/nqueensolver/home/algorithm-results
https://sites.google.com/site/nqueensolver/home/algorithms/2backtracking-algorithm
For your function, T(n) = n*T(n-1) + O(n^2), which translates to approximately O(n!) time complexity.
The time complexity of the N-queen problem is
> O(N!)
Explanation:
If we add all this up and define the run time as T(N), then T(N) = O(N^2) + N*T(N-1). If you draw a recursion tree using this recurrence, the final term will be something like N^3 + N!*O(1). By the definition of Big-O, this reduces to O(N!) running time.
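As a quick sanity check, unrolling that recurrence (a sketch, assuming T(0) = O(1)):
T(N) = N*T(N-1) + O(N^2)
     = N!*T(0) + sum_{k=1..N} (N!/k!) * O(k^2)
     = O(N!)
since sum_{k>=1} k^2/k! converges to a constant, so the summation contributes only O(N!).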
O(n^n) is definitely an upper bound on solving n-queens using backtracking.
I'm assuming that you are solving this by assigning a queen column-wise.
However, consider this: when you assign a location to the queen in the first column, you have n options; after that, you only have n-1 options, as you can't place the queen in the same row as the first queen; then n-2, and so on. Thus, the worst-case complexity is still upper bounded by O(n!).
Hope this answers your question even though I'm almost 4 years late!
Let us consider that our queen is a rook, meaning we need not take care of diagonal conflicts.
The time complexity in this case will be O(N!) in the worst case, say if we were checking whether any solution exists at all. Here is a simple explanation.
Let us take an example where N=4.
Suppose we want to fill the 2-D matrix. X represents a vacant position while 1 represents a taken position.
In the starting, the answer matrix (which we need to fill) looks like,
X X X X
X X X X
X X X X
X X X X
Let us fill this row-wise, meaning we will select one location in each row and then move forward to the next row.
For the first row, since nothing is filled in the matrix yet, we have 4 options.
For the second row, we have 3 options, as one column has already been taken.
Similarly, for the third row we have 2 options, and for the final row we are left with just 1 option.
Total options = 4*3*2*1 = 24 (4!)
Now, this was the case if our queen were a rook; since a queen adds diagonal constraints, the actual number of operations should be less than O(N!).
The complexity is n^n, and here is the explanation.
Here n represents the number of queens and remains the same for every function call.
k is the row number, and the function is called recursively until k reaches n. So if n = 8, we have n rows and n queens.
T(n) = n*(n + T(max of k - 1)) = n^(max of k) = n^n, as the max of k is n.
Note: the function has two parameters. In the loop, n is not decreasing; it remains the same for every call. But k, the number of times the function has been called, increases toward n, so the recursion terminates.
The complexity is (n+1)!*n^n. Begin with T(i) = O(n*i*T(i+1)) and T(n) = n^3.
So T(1) = n*T(2) = 2n^2*T(3) = ... = (n-1)!*n^(n-1)*T(n) = (n-1)!*n^(n+2), which is on the order of (n+1)!*n^n.

Finding a specific sum of elements from two arrays

Could you please help me with this one? :
"Let A and B are an incrementally ordered arrays of natural numbers and K be some arbitrary natural number. Find an effective algorithm which determines all possible pairs of indexes (i,j) such that A[i]+B[j]=K. Prove algorithm's correctness and estimate its complexity."
Should I just iterate over the first array and do a binary search on the other one?
Thanks :)
No!
Both arrays are ordered, so you do the following:
Place an iterator itA on the beginning of A
Place an iterator itB on the end of B
Move iterators in opposite directions, testing *itA + *itB at each iteration. If the value is equal to K, return both indexes. If the value is smaller than K, increment itA. Else, decrement itB.
When you go through both arrays, you are done, in linear time.
Since, for every A[i] there can be only one B[j], you can find the solution with O(n+m) complexity. You can rely on the fact that if (A[i1], B[j1]) and (A[i2], B[j2]) are both correct pairs, and i1 is less than i2, then j1 must be greater than j2. Hope this helps.
I don't know if it helps, it's just an idea. Loop A linearly and binary search B, but do A backwards. This might give you a better best case, by being able to exclude some portion of B at each step in A.
If you know that A[i] needed say B[42] to solve for K, you'll know that A[i-1] will need at least B[43] or higher.
EDIT: I would also like to add that if B has fewer elements than A, turn it around and do B linearly instead.
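A minimal C sketch of that backward scan (the lower_bound helper is hand-rolled here, not a library call; names are just illustrative). As A[i] shrinks, the required value K - A[i] grows, so the binary-search window in B only ever moves right:

#include <stdio.h>

/* Lowest index in b[lo..hi) with b[idx] >= target. */
static int lower_bound(const int *b, int lo, int hi, int target)
{
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (b[mid] < target) lo = mid + 1;
        else hi = mid;
    }
    return lo;
}

int main(void)
{
    int A[] = {1, 2, 3, 6, 7, 8, 9};
    int B[] = {0, 2, 4, 5, 6, 7, 8, 12};
    int K = 9;
    int lenA = sizeof A / sizeof A[0];
    int lenB = sizeof B / sizeof B[0];
    int lo = 0;  /* positions of B below lo are already excluded */

    for (int i = lenA - 1; i >= 0 && lo < lenB; i--) {
        lo = lower_bound(B, lo, lenB, K - A[i]);
        if (lo < lenB && B[lo] == K - A[i])
            printf("(%d,%d)\n", i, lo);
    }
    return 0;
}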
Possible implementation in C++ can look as follows:
#include <iostream>
int main()
{
    int A[]={1,2,3,6,7,8,9};
    int B[]={0,2,4,5,6,7,8,12};
    int K=9;
    int sizeB=sizeof B/sizeof(int);
    int sizeA=sizeof A/sizeof(int);
    int i=0;
    int j=sizeB-1;
    while(i<sizeA && j>=0)
    {
        if ((A[i]+B[j])==K){
            std::cout << i<<","<<j<< std::endl;
            i++;
            j--;
        }
        else if((A[i]+B[j])<K){
            i++;
        }
        else{
            j--;
        }
    }
    return 0;
}

Given a sorted array with a few numbers in between reversed. How to sort it?

I am actually trying to solve a problem where I have an array which is sorted except that a few numbers are reversed. For example: 1 2 3 4 9 8 7 11 12 14 is the array.
Now, my first thought was applying a binary search to find a PEAK (a[i]>a[i+1] && a[i]>a[i-1]).
However, I feel it might not always give the correct result. Moreover, it might not be efficient, since the list is almost sorted.
Next impression: applying insertion sort, since the list is almost sorted and insertion sort gives its best performance in such cases, if I am not wrong.
So can anyone suggest better solutions, or say whether my solutions are correct or not? Efficient or inefficient?
P.S. - This is NOT homework!
UPDATE: Insertion sort (O(n) in this case), or a linear scan to find the subsequence and then reversing it (O(n) again). Is there any chance we could optimize this further, perhaps to O(log n)?
Search linearly for the first inversion (i.e. a[i+1] < a[i]), call its index inv1. Continue until inversions stop, call the last index inv2. Reverse the array between inv1 and inv2, inclusive.
In your example, inv1 is 4, and inv2 is 6; array elements are numbered from zero.
The algorithm is linear in the number of entries in the original.
If you're sure that the list is sorted except for an embedded subsequence that is reversed, I suggest that you do a simple scan, detect the start of the reversed subsequence (by finding the first counter-directional change), scan to the end of the subsequence (where the changes resume the correct direction) and reverse the subsequence. This should also work for multiple subsequences provided they do not overlap. The complexity should be O(n).
Note: there should be an extra decision whether to cut between {4,9}, or between {9,8}. (I just add one ;-)
#include <stdio.h>

int array[] = {1,2,3,4,9,8,7,11,12,14};

unsigned findrev(int *arr, unsigned len, unsigned *ppos);
void revrev(int *arr, unsigned len);

/* Find the reversed run: returns its length and stores its start in *ppos. */
unsigned findrev(int *arr, unsigned len, unsigned *ppos)
{
    unsigned zlen, pos;
    for (zlen = pos = 0; pos < len-1; pos++) {
        if (arr[pos+1] < arr[pos]) zlen++;
        else if (zlen) break;
    }
    if (zlen) *ppos = pos - zlen++;
    return zlen;
}

/* Reverse arr[0..len) in place. */
void revrev(int *arr, unsigned len)
{
    unsigned pos;
    for (pos = 0; pos < --len; pos++) {
        int tmp;
        tmp = arr[pos];
        arr[pos] = arr[len];
        arr[len] = tmp;
    }
}

int main(void)
{
    unsigned start, len;
    len = findrev(array, 10, &start);
    printf("Start=%u Len=%u\n", start, len);
    revrev(array+start, len);
    for (start = 0; start < 10; start++) {
        printf(" %d", array[start]);
    }
    printf("\n");
    return 0;
}
NOTE: the length of the reversed run could also be found by a binary-search for the first value larger (or equal) than the first element of the reversed sequence.
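A minimal sketch of that binary search (assuming a single reversed run whose first element is the maximum of the run, so the "greater than arr[start]" predicate is monotonic; the name revlen is just illustrative):

#include <stdio.h>

/* Given the start of the reversed run, find the first element greater
   than arr[start]; everything from start up to (but excluding) that
   position belongs to the run. */
unsigned revlen(const int *arr, unsigned start, unsigned len)
{
    unsigned lo = start + 1, hi = len;
    while (lo < hi) {
        unsigned mid = lo + (hi - lo) / 2;
        if (arr[mid] > arr[start]) hi = mid;  /* past the run */
        else lo = mid + 1;                    /* still inside it */
    }
    return lo - start;                        /* length of the run */
}

int main(void)
{
    int a[] = {1,2,3,4,9,8,7,11,12,14};
    printf("%u\n", revlen(a, 4, 10));         /* prints 3 */
    return 0;
}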
Timsort is quite good at sorting mostly-already-sorted arrays; on top of that, it does an in-place mergesort by using two different merge steps depending on which will work better. I'm told it's found in the Python and Java standard libraries, perhaps others. You still probably shouldn't use it inside a loop, though: inside a loop you're better off with a treap (for good average speed) or a red-black tree (for low variance in speed).
I think the linear solution [O(n)] is the best possible, since in a list of n numbers, if n/2 numbers are reverse sorted, as in the example below, we have to invert n/2 numbers, which gives complexity O(n).
Also, even in this case, for a similar sequence, I think insertion sort will be O(n^2) and not O(n) in the worst case.
Example: consider an array with the distribution below, and suppose we attempt to use insertion sort:
n/4 sorted numbers | n/2 reverse sorted numbers | n/4 sorted numbers
For the n/2 reverse sorted numbers, the sorting complexity will be O(n^2).

Finding kth smallest number from n sorted arrays

So, you have n sorted arrays (not necessarily of equal length), and you are to return the kth smallest element in the combined array (i.e. the combined array formed by merging all the n sorted arrays).
I have been trying it and its other variants for quite a while now, and till now I only feel comfortable in the case where there are two arrays of equal length, both sorted and one has to return the median of these two.
This has logarithmic time complexity.
After this I tried to generalize it to finding kth smallest among two sorted arrays. Here is the question on SO.
Even here, the solution given is not obvious to me. But even if I somehow manage to convince myself of this solution, I am still curious as to how to solve the absolute general case (which is my question).
Can somebody explain a step-by-step solution (which, again, in my opinion should take logarithmic time, i.e. O(log(n1) + log(n2) + ... + log(nN)), where n1, n2, ..., nN are the lengths of the n arrays), one which starts from the more specific cases and moves on to the more general one?
I know similar questions for more specific cases are there all over the internet, but I haven't found a convincing and clear answer.
Here is a link to a question (and its answer) on SO which deals with 5 sorted arrays and finding the median of the combined array. The answer just gets too complicated for me to able to generalize it.
Even clean approaches for the more specific cases (as I mentioned during the post) are welcome.
PS: Do you think this can be further generalized to the case of unsorted arrays?
PPS: It's not a homework problem, I am just preparing for interviews.
This doesn't generalize the links, but does solve the problem:
Go through all the arrays and if any have length > k, truncate to length k (this is silly, but we'll mess with k later, so do it anyway)
Identify the largest remaining array A. If more than one, pick one.
Pick the middle element M of the largest array A.
Use a binary search on the remaining arrays to find the same element (or the largest element <= M).
Based on the indexes of the various elements, calculate the total number of elements <= M and > M. This should give you two numbers: L, the number <= M and G, the number > M
If k < L, truncate all the arrays at the split points you've found and iterate on the smaller arrays (use the bottom halves).
If k > L, truncate all the arrays at the split points you've found and iterate on the smaller arrays (use the top halves, and search for element k-L).
When you get to the point where you only have one element per array (or 0), make a new array of size n with those data, sort, and pick the kth element.
Because you're always guaranteed to remove at least half of one array, in N iterations you'll get rid of half the elements. That means there are N log k iterations. Each iteration is of order N log k (due to the binary searches), so the whole thing is O(N^2 (log k)^2). That's all, of course, worst case, based on the assumption that you only get rid of half of the largest array, not of the other arrays. In practice, I imagine the typical performance would be quite a bit better than the worst case.
It cannot be done in less than O(n) time. Proof sketch: if it could, the algorithm would have to completely ignore at least one array, and one array can arbitrarily change the value of the kth element.
I have a relatively simple O(n*log(n)*log(m)) solution, where m is the length of the longest array. I'm sure it is possible to be slightly faster, but not a lot faster.
Consider the simple case where you have n arrays each of length 1. Obviously, this is isomorphic to finding the kth element in an unsorted list of length n. It is possible to find this in O(n), see Median of Medians algorithm, originally by Blum, Floyd, Pratt, Rivest and Tarjan, and no (asymptotically) faster algorithms are possible.
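For intuition, here is a small C sketch of plain quickselect, which performs the same selection in expected (not worst-case) linear time; the deterministic median-of-medians pivot rule mentioned above is what upgrades this to worst-case O(n). The function name and random pivot choice are just illustrative:

#include <stdio.h>
#include <stdlib.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Return the kth smallest (0-based) element of v[0..n). */
int quickselect(int *v, int n, int k)
{
    int lo = 0, hi = n - 1;
    while (lo < hi) {
        int pivot = v[lo + rand() % (hi - lo + 1)];
        int i = lo, j = hi;
        while (i <= j) {                 /* Hoare-style partition */
            while (v[i] < pivot) i++;
            while (v[j] > pivot) j--;
            if (i <= j) swap(&v[i++], &v[j--]);
        }
        if (k <= j) hi = j;              /* answer in the low part  */
        else if (k >= i) lo = i;         /* answer in the high part */
        else return v[k];                /* k landed on the pivot   */
    }
    return v[lo];
}

int main(void)
{
    int v[] = {7, 1, 5, 3, 9};
    printf("%d\n", quickselect(v, 5, 2));  /* prints 5, the 3rd smallest */
    return 0;
}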
Now the problem is how to expand this to longer sorted arrays. Here is the algorithm: find the median of each array. Sort the list of (median, length of array / 2) tuples by median. Walk through this list keeping a running sum of the lengths, until you reach a sum greater than k. You now have a pair of medians such that you know the kth element is between them. Now, for each median, we know whether the kth element is greater or less than it, so we can throw away half of each array. Repeat. Once the arrays are all one element long (or less), we use the selection algorithm.
Implementing this will reveal additional complexities and edge conditions, but nothing that increases the asymptotic complexity. Each step
Finds the medians of the arrays, O(1) each, so O(n) total
Sorts the medians O(n log n)
Walks through the sorted list O(n)
Slices the arrays O(1) each so, O(n) total
that is O(n) + O(n log n) + O(n) + O(n) = O(n log n). We must perform this until the longest array has length 1, which takes log m steps, for a total of O(n*log(n)*log(m)).
You ask if this can be generalized to the case of unsorted arrays. Sadly, the answer is no. Consider the case where we have only one array; then the best algorithm must compare at least once with each element, for a total of O(m). If there were a faster solution for n unsorted arrays, we could implement selection by splitting our single array into n parts. Since we just argued that selection cannot be done faster than O(m), we are stuck.
You could look at my recent answer on the related question here. The same idea can be generalized to multiple arrays instead of 2. In each iteration, you could reject the second half of the array with the largest middle element if k is less than the sum of the mid indexes of all arrays. Alternately, you could reject the first half of the array with the smallest middle element if k is greater than the sum of the mid indexes of all arrays, adjusting k accordingly. Keep doing this until all but one array are reduced to length 0. The answer is the kth element of the last array that wasn't stripped to 0 elements.
Run-time analysis:
You get rid of half of one array in each iteration, but to determine which array to reduce, you spend time linear in the number of arrays. Assuming each array has the same length, the run time is c*c*log(n) = c^2*log(n), where c is the number of arrays and n is the length of each array.
There exists a generalization that solves the problem in O(N log k) time; see the question here.
Old question, but none of the answers were good enough. So I am posting the solution using the sliding window technique and a heap:
import java.util.List;
import java.util.PriorityQueue;

class Node {
    int elementIndex;
    int arrayIndex;

    public Node(int elementIndex, int arrayIndex) {
        super();
        this.elementIndex = elementIndex;
        this.arrayIndex = arrayIndex;
    }
}

public class KthSmallestInMSortedArrays {

    public int findKthSmallest(List<Integer[]> lists, int k) {
        int ans = 0;
        // Min-heap ordered by the element each node currently points at.
        PriorityQueue<Node> pq = new PriorityQueue<>((a, b) -> {
            return lists.get(a.arrayIndex)[a.elementIndex]
                    - lists.get(b.arrayIndex)[b.elementIndex];
        });
        // Seed the heap with the head of every non-empty array.
        for (int i = 0; i < lists.size(); i++) {
            Integer[] arr = lists.get(i);
            if (arr != null && arr.length > 0) {
                pq.add(new Node(0, i));
            }
        }
        int count = 0;
        while (!pq.isEmpty()) {
            Node curr = pq.poll();
            ans = lists.get(curr.arrayIndex)[curr.elementIndex];
            if (++count == k) {
                break;
            }
            // Advance within the same array, if any elements remain.
            curr.elementIndex++;
            if (curr.elementIndex < lists.get(curr.arrayIndex).length) {
                pq.offer(curr);
            }
        }
        return ans;
    }
}
The maximum number of elements that we need to access here is O(K) and there are M arrays. So the effective time complexity will be O(K*log(M)).
This would be the code. O(k*log(m))
public int findKSmallest(int[][] A, int k) {
    // Min-heap of {row, column} pairs, ordered by the value they point at.
    PriorityQueue<int[]> queue =
            new PriorityQueue<>(Comparator.comparingInt(x -> A[x[0]][x[1]]));
    for (int i = 0; i < A.length; i++)
        queue.offer(new int[] { i, 0 });
    int ans = 0;
    while (!queue.isEmpty() && --k >= 0) {
        int[] el = queue.poll();
        ans = A[el[0]][el[1]];
        if (el[1] < A[el[0]].length - 1) {
            el[1]++;
            queue.offer(el);
        }
    }
    return ans;
}
If k is not that huge, we can maintain a min priority queue: seed it with the head of every sorted array, then repeatedly dequeue the smallest element and enqueue its successor. When we have extracted k elements, we have the first k smallest.
Maybe we can regard the n sorted arrays as buckets and then try the bucket sort method.
This could be considered the second half of a merge sort. We could simply merge all the sorted lists into a single list, but only keep k elements in the combined lists from merge to merge; see the sketch below. This has the advantage of only using O(k) space and of doing somewhat better than merge sort's O(n log n); that is, in practice it should operate slightly faster than a full merge sort. Choosing the kth smallest from the final combined list is O(1). This kind of complexity is not so bad.
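A C sketch of the capped merge step (the function name merge_capped is hypothetical); folding this over the n lists leaves the k smallest elements, so the kth smallest is the last element of the final buffer:

#include <stdio.h>

/* Merge two sorted arrays, keeping only the first k elements of the
   result. Returns the number of elements written (<= k). */
int merge_capped(const int *a, int na, const int *b, int nb,
                 int *out, int k)
{
    int i = 0, j = 0, o = 0;
    while (o < k && (i < na || j < nb)) {
        if (j >= nb || (i < na && a[i] <= b[j]))
            out[o++] = a[i++];
        else
            out[o++] = b[j++];
    }
    return o;
}

int main(void)
{
    int a[] = {1, 4, 9}, b[] = {2, 3, 10, 11}, out[4];
    int n = merge_capped(a, 3, b, 4, out, 4);
    for (int i = 0; i < n; i++)
        printf("%d ", out[i]);      /* prints 1 2 3 4 */
    printf("\n");
    return 0;
}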
It can be done by doing a binary search in each array while counting the number of smaller elements.
I used bisect_left and bisect_right to make it work for non-unique numbers as well:
from bisect import bisect_left
from bisect import bisect_right

def kthOfPiles(givenPiles, k, count):
    '''
    Perform binary search for the kth element in multiple sorted lists

    parameters
    ==========
    givenPiles is a list of sorted lists
    count is the total number of elements
    k is the target index in range [0..count-1]
    '''
    begins = [0 for pile in givenPiles]
    ends = [len(pile) for pile in givenPiles]
    for pileidx, pivotpile in enumerate(givenPiles):
        while begins[pileidx] < ends[pileidx]:
            mid = (begins[pileidx] + ends[pileidx]) >> 1
            midval = pivotpile[mid]
            # Count, across all piles, how many elements are strictly
            # smaller than midval and how many are smaller or equal.
            smaller_count = 0
            smaller_right_count = 0
            for pile in givenPiles:
                smaller_count += bisect_left(pile, midval)
                smaller_right_count += bisect_right(pile, midval)
            if smaller_count <= k and k < smaller_right_count:
                return midval
            elif smaller_count > k:
                ends[pileidx] = mid
            else:
                begins[pileidx] = mid + 1
    return -1
Please find below C# code to find the k-th smallest element in the union of two sorted arrays. Time complexity: O(log k).
// k is 0-based (k == 0 asks for the smallest element of the union);
// [start, end) bounds are half-open.
public int findKthElement(int k, int[] array1, int start1, int end1, int[] array2, int start2, int end2)
{
    // if (k >= (end1 - start1) + (end2 - start2)) throw ...
    if (start1 == end1)
    {
        return array2[start2 + k];
    }
    if (start2 == end2)
    {
        return array1[start1 + k];
    }
    if (k == 0)
    {
        return Math.Min(array1[start1], array2[start2]);
    }
    // Discard about k/2 elements from one array per call, but never
    // step past the end of either array.
    int mid = Math.Max(k / 2, 1);
    int sub1 = Math.Min(mid, end1 - start1);
    int sub2 = Math.Min(mid, end2 - start2);
    if (array1[start1 + sub1 - 1] < array2[start2 + sub2 - 1])
    {
        // The first sub1 elements of array1 all rank below the kth
        // element of the union, so they can be dropped.
        return findKthElement(k - sub1, array1, start1 + sub1, end1, array2, start2, end2);
    }
    else
    {
        return findKthElement(k - sub2, array1, start1, end1, array2, start2 + sub2, end2);
    }
}

Time complexity of this function

I am pretty sure about my answer, but today I had a discussion with my friend, who said I was wrong.
I think the complexity of this function is O(n^2) in the average and worst cases and O(n) in the best case. Right?
Now, what happens when k is not the length of the array? k is the number of elements you want to sort (rather than the whole array).
Is it O(nk) in average and worst case and O(n) in best case?
Here is my code:
#include <stdio.h>

void bubblesort(int *arr, int k)
{
    // k is the number of items of the array you want to sort
    // e.g. arr[] = { 4,15,7,1,19 } with k as 3 will give
    // { 4,7,15,1,19 }; only the first k elements are sorted
    int i = k, j = 0;
    char test = 1;
    while (i && test)
    {
        test = 0;
        --i;
        for (j = 0; j < i; ++j)
        {
            if ((arr[j]) > (arr[j+1]))
            {
                // swap
                int temp = arr[j];
                arr[j] = arr[j+1];
                arr[j+1] = temp;
                test = 1;
            }
        } // end for loop
    } // end while loop
}

int main()
{
    int i = 0;
    int arr[] = { 89,11,15,13,12,10,55 };
    int n = sizeof(arr)/sizeof(arr[0]);
    bubblesort(arr, n-3);
    for (i = 0; i < n; i++)
    {
        printf("%d ", arr[i]);
    }
    return 0;
}
P.S. This is not homework, it just looks like one. The function we were discussing is very similar to bubble sort. In any case, I have added the homework tag.
Please help me confirm if I was right or not. Thank you for your help.
Complexity is normally given as a function over n (or N), like O(n), O(n*n), ...
Regarding your code, the complexity is as you stated: O(n) in the best case and O(n*n) in the worst case.
What might have led to a misunderstanding in your case is that you have a variable n (length of the array) and a variable k (length of the part of the array to sort). The complexity of your sort does not depend on the length of the array, but on the length of the part that you want to sort. So with respect to your variables, the complexity is O(k) or O(k*k). But since complexity notation is normally given over n, you would say O(n) or O(n*n), where n is the length of the part to sort.
Is it O(nk) in average and worst case and O(n) in best case?
No, it's O(k^2) in the worst case and O(k) in the best case. Sorting the first k elements of an array of size n is exactly the same as sorting an array of k elements.
That's O(k^2) in the worst case (O(n^2) if you call the length of the sorted part n): the outer while goes from k down to 1 (possibly stopping earlier for specific data, but we're talking worst case here), and the inner for goes from 0 up to i (which in turn goes up to k), so multiplied they give k^2 in the worst case.
If you care about the best case, that's O(k), because the outer while loop executes only one pass (no swaps happen, so test stays 0) and then stops.
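To see that best case concretely, here is a small driver (a sketch reusing the bubblesort function from the question above): on already-sorted input, test stays 0 after the first pass, so the while loop stops after O(k) work.

#include <stdio.h>

/* bubblesort() as defined in the question above */
void bubblesort(int *arr, int k);

int main(void)
{
    int arr[] = { 1, 2, 3, 4, 5, 6, 7 };
    int n = sizeof(arr)/sizeof(arr[0]);
    bubblesort(arr, n);   /* one pass, no swaps, then it stops */
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    return 0;
}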
