Resizing an array by a non-constant, growing amount

I’d like to perform amortized analysis of a dynamic array:
When we perform a sequence of n insertions, whenever an array of size k fills up, we reallocate an array of size k+sqrt(k), and copy the existing k values into the new array.
I’m new to amortized analysis and this is a problem I have yet to encounter, as we resize the array each time by a different non-constant value. (newSize=prevSize+sqrt(prevSize))
The total cost should be Θ(n*sqrt(n)), thus Θ(sqrt(n)) per operation.
I realize that whenever k >= c^2 for some constant c, the array grows by at least c.
Let’s start off with an array of size k=1 (and assume n is large enough, for the sake of this example). After n insertions, we get the following sum of the total cost of the insertions + copies:
1+1(=k)+1+2(=k)+1+3(=k)+1+4(=k)+2+6(=k)+2+8(=k)+2+10(=k)+3+13(=k)+3+16(=k)+4+20(=k)+4+24+4+28+5+33+5+38+6+44+6+50+7…+n
I am seeing the pattern, but I can’t seem to be able to compute the bounds.
I’m trying to use all kinds of amortized analysis methods to bound this aggregated sum.
Let's consider the accounting method, for example: I thought I needed round((k+sqrt(k))/sqrt(k)), or simply round(sqrt(k)+1), coins per insertion, but it doesn't add up.
I'd love your help in properly establishing the sqrt(n) upper and lower bounds.
Thank you very much! :)

The easiest way is to lump together each resize operation with the inserts that follow it before the next resize.
The cost of each lump is O(k + sqrt(k)), and each lump consists of O(sqrt(k)) operations, so the cost per operation is O((k + k^0.5)/k^0.5) = O(k^0.5 + 1) = O(k^0.5).
Of course you want an answer in terms of n, but since k(n) is in Θ(n), O(k^0.5) = O(n^0.5).
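For what it's worth, here is a quick empirical check of that aggregate bound (my own sketch, not part of the answer above): simulate n insertions into an array that grows from k to k + floor(sqrt(k)) whenever it fills, count the total work, and compare it with n*sqrt(n).

```python
import math

def total_cost(n):
    capacity = 1      # current array size k
    size = 0          # number of stored elements
    cost = 0
    for _ in range(n):
        if size == capacity:
            cost += capacity                   # copy all k existing elements
            capacity += math.isqrt(capacity)   # k -> k + floor(sqrt(k))
        cost += 1                              # the insertion itself
        size += 1
    return cost

for n in (10**3, 10**4, 10**5):
    print(n, total_cost(n) / (n * math.sqrt(n)))   # ratio stays bounded, so total cost is Θ(n*sqrt(n))
```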

This can be easily shown using the Accounting Method. Consider an array of size k+sqrt(k) such that the first k entries are occupied and the remaining sqrt(k) are empty. Let each Insert-Last operation be charged sqrt(k)+2 coins: one pays for the insertion itself, while the remaining sqrt(k)+1 coins are deposited as credit. Now execute Insert-Last sqrt(k) times. We will then have k+sqrt(k) credit coins: in total we charged sqrt(k)·(sqrt(k)+2) = k+2sqrt(k) coins, sqrt(k) of which paid for the insertions themselves. Hence we won't need to pay anything extra for resizing the array: as soon as it gets full, we can use our k+sqrt(k) credit coins to pay for the resize, i.e. for copying all k+sqrt(k) elements into the new array. Since k = Θ(n), each Insert-Last operation is charged sqrt(k)+2 = O(sqrt(k)) = O(sqrt(n)) coins and thus takes O(sqrt(n)) amortized time.
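Here is a small sanity check of that accounting argument (my own sketch; I charge floor(sqrt(k))+3 coins per insertion, one more than in the argument above, to absorb integer rounding and the tiny initial capacities): the bank of deposited coins should never go negative, even though every resize is paid for entirely out of it.

```python
import math

def check(n):
    capacity, size, bank = 1, 0, 0
    for _ in range(n):
        if size == capacity:                   # array is full: pay for copying it
            bank -= capacity
            assert bank >= 0, "bank went negative"
            capacity += math.isqrt(capacity)   # grow by floor(sqrt(k))
        charge = math.isqrt(capacity) + 3      # coins charged to this insertion
        bank += charge - 1                     # 1 coin pays for the insertion itself
        size += 1
    return bank

print(check(10**5))   # runs without the assertion ever firing
```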

Related

What does worst case Big-Omega(n) mean?

If Big-Omega is the lower bound, then what does it mean to have a worst-case time complexity of Big-Omega(n)?
From the book "data structures and algorithms with python" by Michael T. Goodrich:
consider a dynamic array that doubles it size when the element reaches its capacity.
This is from the book:
"we fully explored the append method. In the worst case, it requires
Ω(n) time because the underlying array is resized, but it uses O(1) time in the amortized sense"
The parameterized version, pop(k), removes the element that is at index k < n
of a list, shifting all subsequent elements leftward to fill the gap that results from
the removal. The efficiency of this operation is O(n−k), as the amount of shifting
depends upon the choice of index k. Note well that this
implies that pop(0) is the most expensive call, using Ω(n) time.
how is "Ω(n)" describes the most expensive time?
The number inside the parentheses is the number of operations you must do to actually carry out the operation, always expressed as a function of the number of items you are dealing with. You never worry about just how hard those operations are, only the total number of them.
If the array is full and has to be resized you need to copy all the elements into the new array. One operation per item in the array, thus an O(n) runtime. However, most of the time you just do one operation for an O(1) runtime.
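To make that concrete, here is a rough sketch (my own code, not from the book) of a doubling dynamic array that counts element writes and copies: most appends cost one operation, an occasional append costs n, and the average per append stays a small constant.

```python
class DynArray:
    def __init__(self):
        self._capacity = 1
        self._data = [None]
        self._n = 0
        self.ops = 0                            # counts element writes and copies

    def append(self, value):
        if self._n == self._capacity:           # full: resize by doubling
            new = [None] * (2 * self._capacity)
            for i in range(self._n):            # copy every element: the Ω(n) case
                new[i] = self._data[i]
                self.ops += 1
            self._data, self._capacity = new, 2 * self._capacity
        self._data[self._n] = value             # the usual O(1) write
        self._n += 1
        self.ops += 1

a = DynArray()
for i in range(1 << 16):
    a.append(i)
print(a.ops / a._n)   # average cost per append is about 2, i.e. O(1) amortized
```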
Common values are:
O(1): One operation only, such as appending an item to the list when the list isn't full.
O(log n): This typically occurs when you have a binary search or the like to find your target. Note that the base of the log isn't specified as the difference is just a constant and you always ignore constants.
O(n): One operation per item in your dataset. For example, unsorted search.
O(n log n): Commonly seen in good sort routines where you have to process every item but can divide and conquer as you go.
O(n^2): Usually encountered when you must consider every interaction of two items in your dataset and have no way to organize it. For example, a routine I wrote long ago to find near-duplicate pictures. (Exact duplicates would be handled by making a dictionary of hashes and testing whether the hash existed, and thus would be O(n)--the two passes are a constant factor and discarded; you wouldn't say O(2n).)
O(n^3): By the time you're getting this high you consider it very carefully. Now you're looking at three-way interactions of items in your dataset.
Higher orders can exist, but you need to consider carefully what it's going to do. I have shipped production code that was O(n^8) but with very heavy pruning of paths, and even then it took 12 hours to run. Had the nature of the data not been conducive to such pruning I wouldn't have written it at all--the code would still be running.
You will occasionally encounter even nastier stuff which needs careful consideration of whether it's going to be tolerable or not. For large datasets they're impossible:
O(2^n): Real-world example: attempting to prune paths so as to retain a minimum spanning tree--I computed all possible trees and kept the cheapest. Several experiments showed n never going above 10, so I thought I was ok--until a different seed produced n = 22. I rewrote the routine for a not-always-perfect answer that was O(n^2) instead.
O(n!): I don't know any examples. It blows up horribly fast.

Which of the following methods is more efficient

Problem Statement:- Given an array of integers and an integer k, print all the pairs in the array whose sum is k
Method 1:-
Sort the array and maintain two pointers low and high, start iterating...
Time Complexity - O(nlogn)
Space Complexity - O(1)
Method 2:-
Keep all the elements in the dictionary and do the process
Time Complexity - O(n)
Space Complexity - O(n)
Now, out of the above two approaches, which one is more efficient, and on what basis should I compare efficiency (time or space), given that the two approaches differ on both?
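For concreteness, here is a minimal sketch of both methods (my own code; it assumes each pair only needs to be printed once, and duplicate values would need extra care in production):

```python
def pairs_by_sorting(arr, k):          # Method 1: O(n log n) time, O(1) extra space
    arr = sorted(arr)                  # (an O(n) copy if the original order must be kept)
    lo, hi = 0, len(arr) - 1
    while lo < hi:
        s = arr[lo] + arr[hi]
        if s == k:
            print(arr[lo], arr[hi])
            lo, hi = lo + 1, hi - 1
        elif s < k:
            lo += 1
        else:
            hi -= 1

def pairs_by_hashing(arr, k):          # Method 2: O(n) time, O(n) space
    seen = set()
    for x in arr:
        if k - x in seen:
            print(k - x, x)
        seen.add(x)

pairs_by_sorting([1, 5, 7, -1], 6)
pairs_by_hashing([1, 5, 7, -1], 6)
```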
I've left my comment above for reference.
It was hasty. You do allow O(nlogn) time for the Method 1 sort (I now think I understand?) and that's fair (apologies;-).
What happens next? If the input array must be used again, then you need a sorted copy (the sort would not be in-place) which adds an O(n) space requirement.
The "iterating" part of Method 1 also costs ~O(n) time.
But loading up the dictionary in Method 2 is also ~O(n) time (presumably a throw-away data structure?) and dictionary access - although ~O(1) - is slower (than array indexing).
Bottom line: O-notation is helpful if it can identify an "overpowering cost" (rendering others negligible by comparison), but without a hint at use-cases (typical and boundary, details like data quantities and available system resources etc), questions like this (seeking a "generalised ideal" answer) can't benefit from it.
Often some simple proof-of-concept code and performance tests on representative data can make "the right choice obvious" (more easily and often more correctly than speculative theorising).
Finally, in the absence of a clear performance winner, there is always "code readability" to help decide;-)

K nearest elements

This is an interview question.
There are billions and billions of stars in the universe. Which data structure would you use to answer the query
"Give me the k stars nearest to Earth".
I thought of heaps, as we can do heapify in O(n) and get the n_smallest in O(log n). Is there a better data structure suited for this purpose?
Assuming the input could not be all stored in memory at the same time (that would be a challenge!), but would be a stream of the stars in the universe -- like you would get an iterator or something -- you could benefit from using a Max Heap (instead of a Min Heap which might come to mind first).
At the start you would just push the stars in the heap, keyed by their distance to earth, until your heap has k entries.
From then on, you ignore any new star when it has a greater distance than the root of your heap. When it is closer than the root-star, substitute the root with that new star and sift it down to restore the heap property.
Your heap will not grow greater than k elements, and at all times it will consist of the k closest stars among those you have processed.
Some remarks:
Since it is a Max Heap, you don't know which is the closest star (in constant time). When you would stop the algorithm and then pull out the root node one after the other, you would get the k closest stars in order of descending distance.
As the observable(!) Universe has an estimated 10^21 stars, you would need one of the best supercomputers (1 exaFLOPS) to hope to process all those stars in a reasonable time. But at least this algorithm only needs to keep k stars in memory.
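Here is a sketch of that streaming approach (my own code, using Python's heapq, which is a min-heap, with negated distances to emulate the max-heap; the star distances below are just illustrative values in light-years):

```python
import heapq

def k_nearest(star_stream, k):
    heap = []                                  # holds (-distance, star_name)
    for dist, star in star_stream:
        if len(heap) < k:
            heapq.heappush(heap, (-dist, star))
        elif dist < -heap[0][0]:               # closer than the farthest star we kept
            heapq.heapreplace(heap, (-dist, star))
    result = []
    while heap:                                # popping yields descending distance
        d, s = heapq.heappop(heap)
        result.append((-d, s))
    return result                              # the k nearest stars, farthest first

stars = [(4.25, "Proxima Centauri"), (0.0000158, "Sol"),
         (8.6, "Sirius"), (4.37, "Alpha Centauri A")]
print(k_nearest(iter(stars), 2))
```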
The first problem you're going to run into is scale. There are somewhere between 100 billion and 400 billion stars in the Milky Way galaxy alone. There are an estimated 10 billion galaxies. If we assume an average of 100 billion stars per galaxy, that's 10^21 stars in the universe. It's unlikely you'll have the memory for that. And even if you did have enough memory, you probably don't have the time. Assuming your heapify operation could do a billion iterations per second, it would take a trillion seconds (31,700 years). And then you have to add the time it would take to remove the k smallest from the heap.
It's unlikely that you could get a significant improvement by using multiple threads or processes to build the heap.
The key here will be to pre-process the data and store it in a form that lets you quickly eliminate the majority of possibilities. The easiest way would be to have a sorted list of stars, ordered by their distance from Earth. So Sol would be at the top of the list, Proxima Centauri would be next, etc. Then, getting the nearest k stars would be an O(k) operation: just read the top k items from the list.
A sorted list would be pretty hard to update, though. A better alternative would be a k-d tree. It's easier to update, and getting the k nearest neighbors is still reasonably quick.
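As a rough illustration of the pre-processing idea (my own sketch, assuming SciPy is available and that the star coordinates fit in memory; the positions below are made up, with Earth at the origin):

```python
import numpy as np
from scipy.spatial import cKDTree

positions = np.random.uniform(-1e6, 1e6, size=(100_000, 3))   # fake 3-D star positions
tree = cKDTree(positions)                                      # build once, query many times
distances, indices = tree.query([0.0, 0.0, 0.0], k=5)          # 5 nearest stars to Earth
print(indices, distances)
```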

Certainty of data distribution profile when performing a sort operation

Let's assume we have some data structure like an array of n entries, and for argument's sake let's assume that the data has bounded numerical values.
Is there a way to determine the profile of the data, say monotonically ascending, descending etc., to a reasonable degree, perhaps with a certainty value of z having checked k entries within the data structure?
Assuming we have an array of size N, this means that we have N-1 comparisons between each adjacent elements in the array. Let M=N-1. M represents the number of relations. The probability of the array not being in the correct order is
1/M
If you select a subset of K relations to determine monotonically ascending or descending, the theoretical probability of certainty is
K / M
Since these are two linear equations, it is easy to see that if you want to be .9 sure, then you will need to check about 90% of the entries.
This only takes into account the assumptions in your question. If you are aware of the probability distribution, then, using statistics, you could randomly check a small percentage of the array.
If you only care about the array being in relative order (for example, on the interval [0,10], most 1s would be close to the beginning), this is another question altogether. An algorithm that does this, as opposed to just sorting, would have to have a high cost for swapping elements and a cheap cost for comparisons. Otherwise, there would be no performance payoff from writing a complex algorithm to handle the check.
It is important to note that this is theoretically speaking. I am assuming no distribution in the array.
The easier problem is to check the probability of encountering such orderly behavior from random data.
E.g. if numbers are arranged randomly, there is probability p = 0.5 that the first number is lower than the second number (we will come to the case of repetitions later). Now, if you sample k pairs and in every case the first number is lower than the second, the probability of observing this is 2^(-k).
Coming back to repetitions, keep track of observed repetitions and factor them in. E.g. if the probability of a repetition is q, the probability of not observing repetitions is (1-q), and the probability of observing increasing or equal is q + (1-q)/2, so exponentiate with k to get the probability.
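A small experiment along those lines (my own sketch, ignoring repetitions since the values are continuous): sample k distinct adjacent pairs from a genuinely random array and see how often all of them happen to be ascending; the rate comes out close to 2^(-k).

```python
import random

def looks_ascending(arr, k):
    picks = random.sample(range(len(arr) - 1), k)       # k distinct adjacent pairs
    return all(arr[i] <= arr[i + 1] for i in picks)

k, trials, hits = 6, 100_000, 0
for _ in range(trials):
    arr = [random.random() for _ in range(100)]         # random data, no real ordering
    if looks_ascending(arr, k):
        hits += 1
print(hits / trials, 2 ** -k)                           # the two values should be close
```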

Cache oblivious lookahead array

I am trying to understand the simplified cache-oblivious lookahead array which is described here, and on page 35 of this presentation:
Analysis of Insertion into Simplified Fractal Tree:
1. Cost to merge 2 arrays of size X is O(X/B) block I/Os. Merge is very I/O efficient.
2. Cost per element to merge is O(1/B) since O(X) elements were merged.
3. Max # of times each element is merged is O(log N).
4. Average insert cost is O((log N)/B).
I can understand #1, #2 and #3, but I can't understand #4. From the paper, a merge can be considered as a carry in binary addition; for example, 31 in binary can be represented as:
11111
When inserting a new item (adding 1), there should be 5 = log(32) merges (5 carries). But in this situation, we have to merge 32 elements! In addition, if we add 1 each time, how many carries will be performed going from 0 to 2^k? The answer should be 2^k - 1. In other words, one merge per insertion!
So how is #4 computed?
While you are right on both that the number of merged elements (and so transfers) is N in worst case and that the number of total merges is also of the same order, the average insertion cost is still logarithmic. It comes from two facts: merges vary in cost, and the number of low-cost merges is much higher than the number of high-cost ones.
It might be easier to see by example.
Let's set B=1 (i.e. 1 element per block, worst case of each merge having a cost) and N=32 (e.g. we insert 32 elements into an initially empty array).
Half of the insertions (16) put an element into the empty subarray of size 1, and so do not cause a merge. Of the remaining insertions, one (the last) needs to merge (move) 32 elements, one (the 16th) moves 16, two (the 8th and 24th) move 8 elements, four move 4 elements, and eight move 2 elements. Thus, the overall number of element moves is 96, giving an average of 3 moves per insertion.
Hope that helps.
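A small simulation of this binary-counter view (my own sketch, with B = 1 as in the example above, counting an element as "moved" every time a finished merge is written to a higher level) reproduces the 96 total moves for N = 32 and shows the average growing only logarithmically:

```python
def total_moves(n):
    levels = []          # levels[i] = number of elements sitting at level i (0 or 2**i)
    moves = 0
    for _ in range(n):
        carry = 1        # the freshly inserted element, as a size-1 array
        i = 0
        while True:
            if i == len(levels):
                levels.append(0)
            if levels[i] == 0:               # empty slot: the cascade stops here
                levels[i] = carry
                if carry > 1:
                    moves += carry           # all merged elements are written to this level
                break
            carry += levels[i]               # occupied: absorb this level and carry on
            levels[i] = 0
            i += 1
    return moves

for n in (32, 1024, 1 << 16):
    print(n, total_moves(n) / n)   # 3.0 for n = 32; grows like O(log n), not O(n)
```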
The first log B levels fit in (a single page of) memory, and so any stuff that happens in those levels does not incur an I/O. (This also fixes the problem with rrenaud's analysis that there's O(1) merges per insertion, since you only start paying for them after the first log B merges.)
Once you are merging at least B elements, then Fact 2 kicks in.
Consider the work from an element's point of view. It gets merged O(log N) times. It gets charged O(1/B) each time that happens. Its total cost of insertion is O((log N)/B) (need the extra parens to differentiate from O(log N/B), which would be quite bad insertion performance -- even worse than a B-tree).
The "average" cost is really the amortized cost -- it's the amount you charge to that element for its insertion. A little more formally it's the total work for inserting N elements, then divide by N. An amortized cost of O((log N)/B) really means that inserting N elements is O((N log N)/B) I/Os -- for the whole sequence. This compares quite favorable with B-trees, which for N insertions do a total of O((N log N)/log B) I/Os. Dividing by B is obviously a whole lot better than dividing by log B.
You may complain that the work is lumpy, that you sometimes do an insertion that causes a big cascade of merges. That's ok. You don't charge all the merges to the last insertion. Every element pays its own small amount for each merge it participates in. Since (log N)/B will typically be much less than 1, each element is charged way less than a single I/O over the course of all of the merges it participates in.
What happens if you don't like amortized analysis, and you say that even though the insertion throughput goes up by a couple of orders of magnitude, you don't like it when a single insertion can cause a huge amount of work? Aha! There are standard ways to deamortize such a data structure, where you do a bit of preemptive merging during each insertion. You get the same I/O complexity (you'll have to take my word for it), but it's pretty standard stuff for people who care about amortized analysis and deamortizing the result.
Full disclosure: I'm one of the authors of the COLA paper. Also, rrenaud was in my algorithms class. Also, I'm a founder of Tokutek.
In general, the amortized number of changed bits per increment is 2 = O(1).
Here is a proof by logic/reasoning. http://www.cs.princeton.edu/courses/archive/spr11/cos423/Lectures/Binary%20Counting.pdf
Here is a "proof" by experimentation. http://codepad.org/0gWKC3rW
