Insertion sort theoretical analysis: total number of shifts

Given the following array:
[14 17 21 34 47 19 71 22 29 41 8]
and the following excerpt from the book Algorithms Unlocked by Thomas Cormen
(slightly edited, [START] and [STOP] flags are not part of the text):
Insertion sort is an excellent choice when the array starts out as
''almost sorted''. [START] Suppose that each array element starts out within
k positions of where it ends up in the sorted array. Then the total
number of times that a given element is shifted, over all iterations
of the inner loop, is at most k. Therefore, the total number of times
that all elements are shifted, over all inner-loop iterations, is at
most kn, which in turn tells us that the total number of inner-loop
iterations is at most kn (since each inner-loop iteration shifts
exactly one element by one position). [STOP] If k is a constant, then the
total running time of insertion sort would be only Θ(n), because the
Θ-notation subsumes the constant factor k. In fact we can even
tolerate some elements moving a long distance in the array, as long as
there are not too many such elements. In particular, if L elements can
move anywhere in the array (so that each of these elements can move by
up to n-1 positions), and the remaining n - L elements can move at
most k positions, then the total number of shifts is at most L * (n -
1) + (n - L) * k = (k + L) * n - (k + 1) * L, which is Θ(n) if both k
and L are constants.
The book is trying to explain how it arrives at the formula it presents at the bottom of the text. I would like some help to understand it better; most likely a specific example using the sample array above would help show what is going on with the k and n variables. Can you help me better understand the excerpt's analysis?
To be more specific, what is confusing me are the lines between the [START] and [STOP] flags, these lines:
Suppose that each array element..... which in turn tells us that the
total number of inner-loop iterations is at most kn (since each
inner-loop iteration shifts exactly one element by one position).
(Anything below these lines is totally understood, all the way to the end.)

Let us consider the insertion sort algorithm:
Algorithm: InsertionSort(A)
    i ← 1
    while i < length(A)
        j ← i
        while j > 0 and A[j-1] > A[j]
            swap A[j] and A[j-1]
            j ← j - 1
        end while
        i ← i + 1
    end while
The inner loop moves A[i] to the left, one position at a time, until it sits in its correct place among A[0..i-1].
Therefore, if a given element starts at most k positions away from its correct place, placing it takes at most k compares and swaps. For n elements that is at most k*n in total.
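To make this concrete, here is a small Python demo (my own sketch, not from the book) that runs the swap-based insertion sort above on your sample array, counts the inner-loop iterations, and checks them against the k*n bound. By my count the element 8 travels n-1 = 10 positions while every other element moves at most 5, so your array is really an instance of the book's second case, with L = 1 and k = 5: the bound L*(n-1) + (n-L)*k = 10 + 50 = 60 also covers the 21 actual shifts.

def insertion_sort_count_shifts(a):
    a = list(a)
    shifts = 0
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]  # one shift of one position
            shifts += 1
            j -= 1
    return a, shifts

arr = [14, 17, 21, 34, 47, 19, 71, 22, 29, 41, 8]
sorted_arr, shifts = insertion_sort_count_shifts(arr)

# k = the largest distance between an element's start and final position
final_pos = {v: p for p, v in enumerate(sorted_arr)}  # values are distinct here
k = max(abs(final_pos[v] - p) for p, v in enumerate(arr))
n = len(arr)
print(shifts, "<=", k * n)  # prints: 21 <= 110, the coarse k*n bound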
Hope it helps!

Related

can someone explain the while loop part of this pseudocode

I understand this is code to check whether the elements in the list are all different, but the while-loop aspect is confusing. Can someone explain this part?
// Input: list (or array) of n integers a[0], a[1], a[2], ..., a[n − 1]
// Output: Does there exist a repeated integer in the list?
repeat ← false
i ← 0 // set i to zero
while i <= n − 2 do
    j ← i + 1
    while j <= n − 1 do
        if (a[i] == a[j]) then
            repeat ← true
        else
            repeat ← false
        j ← j + 1
    i ← i + 1
if (repeat == true) then
    print "Some numbers repeated"
else
    print "All numbers are different"
As other users mentioned in the comments, the code contains a bug: you have to remove the "else" branch from the if-statement in the inner while-loop. If you do that, the code works according to the specification, checking all pairs of elements in the array for equality.

The first while-loop, with running index i, iterates over all elements of the array up to the second-to-last. In each iteration of the outer while-loop, the inner (nested) while-loop iterates from element j = i + 1 to the last element (i.e. j runs over all elements to the right of the i-th element) and checks each pair of elements (the i-th and the j-th) for equality, setting the repeat flag if two elements are equal. To better understand the pattern this algorithm follows, and to see why it actually compares all pairs of elements, it helps to execute the algorithm manually on a small example.

This algorithm is quite inefficient; its time complexity is O(n^2). You can use an efficient set data structure (such as a balanced binary tree or a hash set) to reduce the time complexity to O(n log n) or amortized O(n).
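For reference, here is a Python sketch (mine, not the original poster's) of the corrected pair-checking logic with the buggy else branch removed, together with the set-based variant mentioned above:

def has_repeat_quadratic(a):
    # Compare all pairs: O(n^2) time, O(1) extra space.
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:
                return True
    return False

def has_repeat_linear(a):
    # Hash-set variant: amortized O(n) time, O(n) space.
    seen = set()
    for x in a:
        if x in seen:
            return True
        seen.add(x)
    return False

print(has_repeat_quadratic([3, 1, 4, 1, 5]))  # True
print(has_repeat_linear([3, 1, 4, 5]))        # False

Returning as soon as a duplicate is found also removes the flag-overwriting problem entirely, since there is no flag at all.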

Does "Find all triplets whose sum is less than some number" have any solution better than O(n3) runtime? [duplicate]

This question already has answers here: Find all triplets in array with sum less than or equal to given sum (5 answers). Closed 8 years ago.
I got asked this in an interview.
Given an array of ints, find all triplets whose sum is less than some number
After some scrambling I told the interviewer that the best solution would still lead to a worst-case runtime of O(n^3), and might inherently require O(n^3) work.
The interviewer flatly disagreed with me and told me "you need to go back to your algorithms...".
Am I missing something?
A possible optimization would be:
Remove all elements in the array that are bigger than the sum;
Sort the array;
Run an O(N^2) loop to pick each pair a[i] + a[j], then binary search for sum - a[i] - a[j] in the range [j + 1, N]; the index found gives the number of possible candidates, but you should subtract j since those have already been covered.
The complexity will be O(N^2 log N), slightly better.
You can solve this in O(n^2) time:
First, sort the array.
Then, loop over the array with the first pointer i.
Now, use a second pointer j to loop up from there and a third pointer k to simultaneously loop down from the end.
Whenever you're in a situation where A[i]+A[j]+A[k] < X, you know that the same holds for every k' with j < k' <= k, so increment your count by k-j and increment j. I keep the hidden invariant that A[i]+A[j]+A[k+1] >= X, so incrementing j only makes that statement stronger.
Otherwise, decrement k. When j and k meet, increment i.
You will only increment j and decrement k, so they need O(n) amortized time to meet.
In pseudocode:
count = 0
for i = 0; i < N; i++
    j = i+1
    k = N-1
    while j < k
        if A[i] + A[j] + A[k] < X
            count += k-j
            j++
        else
            k--
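Here is a runnable Python sketch of the same idea (the function name is mine); note that it counts the triplets rather than listing them, which is what keeps it O(n^2) even when there are Θ(n^3) of them:

def count_triplets_below(a, x):
    a = sorted(a)
    n = len(a)
    count = 0
    for i in range(n):
        j, k = i + 1, n - 1
        while j < k:
            if a[i] + a[j] + a[k] < x:
                count += k - j  # triples (i, j, k') for every j < k' <= k
                j += 1
            else:
                k -= 1
    return count

print(count_triplets_below([5, 1, 3, 4, 7], 12))  # 4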
I see that you ask for all triplets. It is quite obvious that there can be Θ(n^3) triplets, so if you want them all you will need that much time, worst case.
This is an example of a problem where the output size matters. For example, if the array contains just 1, 2, 3, 4, 5, ..., n and the maximum value is set at 3n, then every single triplet is an answer, and you have to do Ω(n^3) work just to list them all. On the other hand, if the maximum value had been 0, it would be nice to finish in O(n) time after confirming all the items are too large.
Basically, we want an output-sensitive algorithm with a running time that's something like O(f(n) + t), where t is the output size and n is the input size.
An O(n^2 + t) algorithm would work by essentially tracking the transition points where triplets transition from being over the limit to under the limit, then yielding everything under that surface. The space is three-dimensional, so the surface is two-dimensional, and you can track along it from point to point in aggregate constant time.
Here's some python code (untested!):
def findTripletsBelow(items, limit):
    surfaceCoords = []
    s = sorted(items)
    for i in range(len(s)):
        k = len(s) - 1
        for j in range(i, len(s)):
            while k >= 0 and s[i] + s[j] + s[k] > limit:
                k -= 1
            if k < 0:
                break
            surfaceCoords.append((i, j, k))

    results = []
    for (i, j, k) in surfaceCoords:
        for k2 in range(k + 1):
            results.append((s[i], s[j], s[k2]))
    return results
An O(n^2) algorithm.
Sort the list.
For every element a[i], this is how you calculate the number of combinations:
Binary search and find the maximum a[j] such that j < i and a[i] + a[j] <= total.
Binary search and find the maximum a[k] such that k < j and a[i] + a[j] + a[k] <= total.
For this particular combination (a[i], a[j]), k is the number of sums that are less than or equal to total.
Now decrement j and increment k as much as possible (keeping a[i] + a[j] + a[k] <= total).
The total number of increments and decrements is less than i. So for a particular i the complexity is O(i), and therefore the overall complexity is O(n^2).
I am leaving out many corner conditions, but this should give you an idea.
Edit:
In the worst case there are Θ(n^3) solutions, so outputting them explicitly would certainly require O(n^3) time; there is no way around that.
But if you want to return an implicit list (i.e. a compressed list of combinations) this would still work. An example of compressed output would be (a[i], a[j], a[k]) for k in 1..p.

Is there an O(n) algorithm to generate a prefix-less array for a positive integer array?

For the array [4,3,5,1,2]:
the prefix of 4 is NULL, so the prefix-less of 4 is 0;
the prefix of 3 is [4], and the prefix-less of 3 is 0, because nothing in the prefix is less than 3;
the prefix of 5 is [4,3], and the prefix-less of 5 is 2, because 4 and 3 are both less than 5;
the prefix of 1 is [4,3,5], and the prefix-less of 1 is 0, because nothing in the prefix is less than 1;
the prefix of 2 is [4,3,5,1], and the prefix-less of 2 is 1, because only 1 is less than 2.
So for the array [4, 3, 5, 1, 2], we get the prefix-less array [0, 0, 2, 0, 1].
Can we get an O(n) algorithm to compute the prefix-less array?
It can't be done in O(n), for the same reason a comparison sort requires Ω(n log n) comparisons: the number of possible prefix-less arrays is n!, so you need at least log2(n!) bits of information to identify the correct one, and log2(n!) is Θ(n log n) by Stirling's approximation.
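For completeness, here is that bound worked out (my arithmetic, not part of the original answer): Stirling's approximation gives log2(n!) = n log2(n) - n log2(e) + O(log n), and each comparison yields at most one bit of information, so distinguishing all n! possible outputs takes Ω(n log n) comparisons.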
Assuming that the input elements are always fixed-width integers, you can use a technique based on radix sort to achieve linear time:
L is the input array
X is the list of indexes of L in focus for the current pass
n is the bit we are currently working on
Count is the number of elements with a 0 at bit n to the left of the current location
Y is the list of indexes of a subsequence of L for recursion
P is a zero-initialized array that is the output (the prefix-less array)
In pseudo-code...
Def PrefixLess(L, X, n)
    if (n == 0)
        return
    // accumulate the prefix-less counts contributed by bit n
    Count = 0
    For I in 1 to |X|
        If (L(X(I))[n] == 1)
            P(X(I)) += Count   // every earlier 0-bit element is smaller
        Else
            Count++
    // recurse on the subsequence where bit n is 1, deciding ties by bit n-1
    Y = []
    For I in 1 to |X|
        If (L(X(I))[n] == 1)
            Y.append(X(I))
    PrefixLess(L, Y, n-1)
    // recurse on the subsequence where bit n is 0, deciding ties by bit n-1
    Y = []
    For I in 1 to |X|
        If (L(X(I))[n] == 0)
            Y.append(X(I))
    PrefixLess(L, Y, n-1)
    return P
and then execute:
PrefixLess(L, 1..|L|, 32)
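Here is a runnable Python translation of that recursion (my own sketch, assuming 32-bit non-negative integers and numbering bits 31..0 rather than 32..1); it reproduces the example from the question:

def prefix_less(L, bits=32):
    P = [0] * len(L)

    def rec(X, n):  # X holds indexes of L, in original left-to-right order
        if n < 0 or len(X) < 2:
            return
        count = 0            # 0-bit elements seen so far at bit n
        ones, zeros = [], []
        for idx in X:
            if (L[idx] >> n) & 1:
                P[idx] += count  # every earlier 0-bit element is smaller
                ones.append(idx)
            else:
                count += 1
                zeros.append(idx)
        rec(ones, n - 1)   # ties among 1-bit elements: decided by lower bits
        rec(zeros, n - 1)  # ties among 0-bit elements likewise

    rec(list(range(len(L))), bits - 1)
    return P

print(prefix_less([4, 3, 5, 1, 2]))  # [0, 0, 2, 0, 1]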
I think this should work, but double check the details. Let's call an element in the original array a[i] and one in the prefix array as p[i] where i is the ith element of the respective arrays.
So, say we are at a[i] and we have already computed the value of p[i]. There are three possible cases. If a[i] == a[i+1], then p[i] == p[i+1]. If a[i] < a[i+1], then p[i+1] >= p[i] + 1. This leaves us with the case where a[i] > a[i+1]. In this situation we know that p[i+1] >= p[i].
In the naïve case, we go back through the prefix and start counting items less than a[i]. However, we can do better than that. First, recognize that the minimum value for p[i] is 0 and the maximum is i. Next look at the case of an index j, where i > j. If a[i] >= a[j], then p[i] >= p[j]. If a[i] < a[j], then p[i] <= p[j] + j. So we can start going backwards through p, updating the values for p[i]_min and p[i]_max. If p[i]_min equals p[i]_max, then we have our solution.
Doing a back-of-the-envelope analysis of the algorithm, it has O(n) best-case performance, which occurs when the list is already sorted. The worst case is when it is reverse sorted; then the performance is O(n^2). The average performance is going to be O(k*n), where k is how far one needs to backtrack. My guess is that for randomly distributed integers, k will be small.
I am also pretty sure there would be ways to optimize this algorithm for cases of partially sorted data. I would look at Timsort for some inspiration on how to do this. It uses run detection to detect partially sorted data. So the basic idea for the algorithm would be to go through the list once and look for runs of data. For ascending runs of data you are going to have the case where p[i+1] = p[i]+1. For descending runs, p[i] = p_run[0] where p_run is the first element in the run.

Need idea for solving this algorithm puzzle

I've come across some problems similar to this one in the past, and I still don't have a good idea how to solve it. The problem goes like this:
You are given a positive integer array of size n <= 1000 and a number k <= n, which is the number of contiguous subarrays that you have to split your array into. You have to output the minimum m, where m = max{s[1],..., s[k]} and s[i] is the sum of the i-th subarray. All integers in the array are between 1 and 100. Example:
Input:       Output:
5 3          3
2 1 1 2 3
Here n = 5 and k = 3; splitting the array into 2+1 | 1+2 | 3 minimizes m.
My brute force idea was to make the first subarray end at position i (for all possible i) and then try to split the rest of the array into k-1 subarrays in the best way possible. However, this solution is exponential and will never work.
So I'm looking for good ideas to solve it. If you have one please tell me.
Thanks for your help.
You can use dynamic programming to solve this problem, but you can actually solve it with a greedy algorithm plus a binary search on the answer. This algorithm's complexity is O(n log d), where d is the output answer (an upper bound is the sum of all the elements in the array); equivalently, it is O(n*b), where b is the number of bits of the output.
The idea is to binary search on what your m would be, and then greedily move forward through the array, adding the current element to the current partition unless adding it pushes the partition over the candidate m, in which case you start a new partition. A candidate m is a success (so you lower your upper bound) if the number of partitions used is less than or equal to the given input k; otherwise you used too many partitions, and you raise your lower bound on m.
Some pseudocode:
// binary search
binary_search ( array, N, k ) {
    lower = max( array ), upper = sum( array )
    while lower < upper {
        mid = ( lower + upper ) / 2
        // if the greedy succeeds with cap mid
        if partitions( array, mid ) <= k
            upper = mid
        else
            lower = mid + 1
    }
    return lower
}

partitions( array, m ) {
    count = 0
    running_sum = 0
    for x in array {
        if running_sum + x > m
            running_sum = 0
            count++
        running_sum += x
    }
    if running_sum > 0
        count++
    return count
}
This should be easier to come up with conceptually. Also note that because of the monotonic nature of the partitions function, you can actually skip the binary search and do a linear search, if you are sure that the output d is not too big:
for i = 0 to infinity
    if partitions( array, i ) <= k
        return i
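For concreteness, here is a runnable Python sketch of the binary-search version (the function names are mine):

def partitions(a, m):
    # Greedy: how many subarrays are needed if none may sum above m.
    count, running = 0, 0
    for x in a:
        if running + x > m:
            count += 1
            running = 0
        running += x
    return count + (1 if running > 0 else 0)

def min_max_sum(a, k):
    lower, upper = max(a), sum(a)
    while lower < upper:
        mid = (lower + upper) // 2
        if partitions(a, mid) <= k:
            upper = mid       # mid is feasible, try a smaller cap
        else:
            lower = mid + 1   # too many partitions, raise the cap
    return lower

print(min_max_sum([2, 1, 1, 2, 3], 3))  # 3, matching the example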
Dynamic programming. Make an array
int best[k+1][n+1];
where best[i][j] is the best you can achieve splitting the first j elements of the array into i subarrays. best[1][j] is simply the sum of the first j array elements. Having row i, you calculate row i+1 as follows:
for(j = i+1; j <= n; ++j){
    temp = max(best[i][i], arraysum[i+1 .. j]);
    for(h = i+1; h < j; ++h){
        if (max(best[i][h], arraysum[h+1 .. j]) < temp){
            temp = max(best[i][h], arraysum[h+1 .. j]);
        }
    }
    best[i+1][j] = temp;
}
best[k][n] will contain the solution. The algorithm is O(n^2*k); probably something better is possible.
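A runnable Python sketch of this DP (my translation), with prefix sums standing in for arraysum[.. ..]:

def min_max_partition(a, k):
    n = len(a)
    prefix = [0] * (n + 1)
    for idx, x in enumerate(a):
        prefix[idx + 1] = prefix[idx] + x
    seg = lambda lo, hi: prefix[hi] - prefix[lo]  # sum of a[lo..hi-1]

    INF = float("inf")
    # best[i][j]: minimal max subarray sum splitting the first j elements into i parts
    best = [[INF] * (n + 1) for _ in range(k + 1)]
    for j in range(1, n + 1):
        best[1][j] = seg(0, j)
    for i in range(1, k):
        for j in range(i + 1, n + 1):
            best[i + 1][j] = min(max(best[i][h], seg(h, j)) for h in range(i, j))
    return best[k][n]

print(min_max_partition([2, 1, 1, 2, 3], 3))  # 3, matching the example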
Edit: a combination of the ideas of ChingPing, toto2, Coffee on Mars and rds (in the order they appear as I currently see this page).
Set A = ceiling(sum/k). This is a lower bound for the minimum. To find a good upper bound for the minimum, create a good partition by any of the mentioned methods, moving borders until you don't find any simple move that still decreases the maximum subsum. That gives you an upper bound B, not much larger than the lower bound (if it were much larger, you'd find an easy improvement by moving a border, I think).
Now proceed with ChingPing's algorithm, with the known upper bound reducing the number of possible branches. This last phase is O((B-A)*n) (the cost of finding B is unknown), but I guess better than O(n^2).
I have a sucky branch-and-bound algorithm (please don't downvote me).
First take the sum of the array and divide it by k, which gives you the best-case bound for your answer, i.e. the average A. We will also keep the best solution seen so far across all branches, GO (the global optimum). Consider placing a (logical) divider as a partition boundary after some array element; we have to place k-1 of them. We place the dividers greedily this way:
traverse the array elements, summing them up, until you see that at the next position you would exceed A. Now make two branches, one where you put the divider at this position and another where you put it at the next position. Do this recursively and set GO = min(GO, answer for a branch).
If at any point in any branch we have a partition greater than GO, or the number of positions left is less than the number of partitions still to be placed, we bound (prune that branch). In the end you should have GO as your answer.
EDIT:
As suggested by Daniel, we could modify the divider-placing strategy a little: keep extending the current partition until its sum reaches A, or until the remaining positions are no more than the remaining dividers.
This is just a sketch of an idea... I'm not sure that it works, but it's very easy (and probably fast too).
You start, say, by distributing the separators evenly (it does not actually matter how you start).
Make the sum of each subarray.
Find the subarray with the largest sum.
Look at the right and left neighbor subarrays, and move the left separator by one position if the subarray on the left has a lower sum than the one on the right (and vice versa).
Redo this for the subarray with the current largest sum.
You'll reach a situation where you keep bouncing a separator between the same two positions, which will probably mean that you have the solution.
EDIT: see the comment by @rds. You'll have to think harder about bouncing solutions and the end condition.
My idea, which unfortunately does not work:
Split the array into N subarrays.
Locate the two contiguous subarrays whose combined sum is the least.
Merge the subarrays found in step 2 to form a new contiguous subarray.
If the total number of subarrays is greater than k, iterate from step 2; otherwise finish.
If your array has random numbers, you can hope that a partition where each subarray has n/k elements is a good starting point.
From there:
Evaluate this candidate solution by computing the sums.
Store this candidate solution, for instance with:
an array of the indexes of the sub-arrays,
the corresponding maximum sum over the sub-arrays.
Reduce the size of the max sub-array: create two new candidates, one with that sub-array starting at index+1, and one with that sub-array ending at index-1.
Evaluate the new candidates.
If their maximum is higher, discard them.
If their maximum is lower, iterate on step 2, except if a candidate was already evaluated, in which case it is the solution.

Total number of possible triangles from n numbers

If n numbers are given, how would I find the total number of possible triangles? Is there any method that does this in less than O(n^3) time?
I am considering the conditions a+b>c, b+c>a and a+c>b for being a triangle.
Assume there are no equal numbers among the given n, and that it is allowed to use one number more than once. For example, given the numbers {1,2,3}, we can create 7 triangles:
1 1 1
1 2 2
1 3 3
2 2 2
2 2 3
2 3 3
3 3 3
If any of those assumptions doesn't hold, it's easy to modify the algorithm.
Here I present an algorithm which takes O(n^2) time in the worst case:
Sort the numbers (in ascending order).
We will take triples a[i] <= a[j] <= a[k], such that i <= j <= k.
For each pair i, j you need to find the largest k that satisfies a[k] < a[i] + a[j]. Then every triple (a[i], a[j], a[l]) with j <= l <= k is a triangle (because a[k] >= a[j] >= a[i], the only triangle inequality that can fail is a[k] < a[i] + a[j]).
Consider two pairs (i, j1) and (i, j2) with j1 <= j2. It's easy to see that k2 (found above for (i, j2)) >= k1 (found above for (i, j1)). It means that when you iterate over j, you only need to check numbers starting from the previous k. So each particular i costs O(n), which implies O(n^2) for the whole algorithm.
C++ source code:
#include <algorithm>

int Solve(int* a, int n)
{
    int answer = 0;
    std::sort(a, a + n);
    for (int i = 0; i < n; ++i)
    {
        int k = i;  // k never moves backwards for a fixed i
        for (int j = i; j < n; ++j)
        {
            // advance k while a[k] can still serve as the third side
            while (n > k && a[i] + a[j] > a[k])
                ++k;
            answer += k - j;  // valid third sides are a[j..k-1]
        }
    }
    return answer;
}
Update for downvoters:
This definitely is O(n^2)! Please read carefully the chapter on amortized analysis in "Introduction to Algorithms" by Thomas H. Cormen et al. (Section 17.2 in the second edition).
Finding complexity by counting nested loops is sometimes completely wrong.
Here I try to explain it as simply as I can. Fix the variable i. Then for that i we must iterate j from i to n (an O(n) operation), and the internal while loop iterates k from i to n in total (also an O(n) operation); note that I don't restart the while loop from the beginning for each j. We also need to do this for each i from 0 to n. So it gives us n * (O(n) + O(n)) = O(n^2).
There is a simple algorithm in O(n^2 log n).
Assume you want all triangles as triples (a, b, c) where a <= b <= c.
There are 3 triangle inequalities, but only a + b > c suffices (the others then hold trivially).
And now:
Sort the sequence in O(n log n), e.g. by merge-sort.
For each pair (a, b) with a <= b, the remaining value c needs to be at least b and less than a + b.
So you need to count the number of items in the interval [b, a+b).
This can be done by binary-searching for a+b (O(log n)); the count follows from the index found.
All together O(n log n + n^2 log n), which is O(n^2 log n). Hope this helps.
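A runnable Python sketch of this counting, using the bisect module for the binary search; following the question's convention, a value may be reused, so {1,2,3} yields 7:

import bisect

def count_triangles(nums):
    a = sorted(nums)
    n = len(a)
    count = 0
    for i in range(n):
        for j in range(i, n):  # j == i allowed: a side may be reused
            # third side c must satisfy a[j] <= c < a[i] + a[j];
            # hi is the first index holding a value >= a[i] + a[j]
            hi = bisect.bisect_left(a, a[i] + a[j])
            count += hi - j
    return count

print(count_triangles([1, 2, 3]))  # 7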
If you use a binary sort, that's O(n log n), right? Keep your binary tree handy, and for each pair (a, b) with a <= b, count the values c with c >= b and c < (a+b).
Let a, b and c be the three sides. The conditions below must hold for a triangle (the sum of any two sides is greater than the third side):
i) a + b > c
ii) b + c > a
iii) a + c > b
Following are the steps to count triangles.
1. Sort the array in non-decreasing order.
2. Initialize two pointers 'i' and 'j' to the first and second elements respectively, and initialize the count of triangles as 0.
3. Fix 'i' and 'j' and find the rightmost index 'k' (or largest 'arr[k]') such that 'arr[i] + arr[j] > arr[k]'. The number of triangles that can be formed with 'arr[i]' and 'arr[j]' as two sides is 'k - j'. Add 'k - j' to the count of triangles. Let us consider 'arr[i]' as 'a', 'arr[j]' as 'b' and all elements between 'arr[j+1]' and 'arr[k]' as 'c'. Conditions (ii) and (iii) above are satisfied because 'arr[i] < arr[j] < arr[k]', and we check condition (i) when we pick 'k'.
4. Increment 'j' to fix the second element again. Note that in step 3 we can use the previous value of 'k'. The reason is simple: if we know that 'arr[i] + arr[j-1]' is greater than 'arr[k]', then 'arr[i] + arr[j]' will also be greater than 'arr[k]', because the array is sorted in increasing order.
5. If 'j' has reached the end, then increment 'i', initialize 'j' as 'i + 1' and 'k' as 'i + 2', and repeat steps 3 and 4.
Time Complexity: O(n^2).
The time complexity looks higher because of the 3 nested loops. If we take a closer look at the algorithm, we observe that 'k' is initialized only once, in the outermost loop. The innermost loop executes at most O(n) times in total for each iteration of the outermost loop, because 'k' starts from 'i+2' and only goes up to 'n' over all values of 'j'. Therefore, the time complexity is O(n^2).
I have worked out an algorithm that runs in O(n^2 lg n) time. I think it's correct...
The code is written in C++...
#include <algorithm>

// Returns the index of the element closest to n in array A[p..q]
int Search_Closest(int A[], int p, int q, int n)
{
    if (p < q)
    {
        int r = (p + q) / 2;
        if (n == A[r])
            return r;
        if (p == r)
            return r;
        if (n < A[r])
            return Search_Closest(A, p, r, n);
        else
            return Search_Closest(A, r, q, n);
    }
    else
        return p;
}

// Returns the number of triangles possible in A[p..q]
int no_of_triangles(int A[], int p, int q)
{
    int sum = 0;
    std::sort(A + p, A + q + 1); // O(n lg n); the original used Quicksort here
    for (int i = p; i <= q; i++)
        for (int j = i + 1; j <= q; j++)
        {
            int c = A[i] + A[j];
            int k = Search_Closest(A, j, q, c);
            /* The third side must lie at an index l with j < l <= k and
               A[l] < c; exclude A[k] itself when it is not a valid side. */
            if (A[k] >= c)
                sum += k - j - 1;
            else
                sum += k - j;
        }
    return sum;
}
Hope it helps!
A possible improvement: we can use binary search to find the value of 'k', and hence improve the time complexity!
Sort the numbers N0, N1, N2, ..., Nn-1 into descending order, giving X0 >= X1 >= X2 >= ... >= Xn-1.
Choose X0 (and then X1, and so on up to Xn-3) as the longest side, and choose the remaining two sides from the items after it, e.g. the case (X0, X1, X2):
check whether X0 < X1 + X2.
If it holds, we have found a triangle, so count it and continue;
if not, we can skip the remaining choices for this longest side.
It seems there is no algorithm better than O(n^3) if the triangles themselves must be produced: in the worst case, the result set itself has Θ(n^3) elements.
For example, if n equal numbers are given, the algorithm has to return on the order of n*(n-1)*(n-2) results.
