Determine if two arrays are identical up to permutation? [duplicate] - arrays

Possible Duplicate:
Check if array B is a permutation of A
Is there a way to tell if two arrays of numbers (which can contain positives, negatives or repeats) are permutations of each other in O(n) time complexity and O(1) space complexity? I could not solve it because of tight space constraints.

If the numbers are integers, in-place radix sort can give you O(n log k) time, where k is the range of the numbers and n is the number of elements.
Note that the algorithm requires O(log k) space for the recursion stack.
If you can bound k by a constant (2^64, for example), you get O(n) time with O(1) space.
After sorting, you can simply iterate over both arrays and check that they are identical.
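For illustration, here is a minimal sketch of that approach, assuming 64-bit signed integers (the names radixSort and permutationBySorting are just placeholders). It sorts both arrays in place with an MSD binary radix sort, whose recursion depth is bounded by the bit width (the O(log k) stack mentioned above), and then compares them element by element. Note that it reorders its inputs.

    #include <cstddef>
    #include <cstdint>
    #include <utility>
    #include <vector>

    // In-place MSD binary radix sort on the bit range [0, bit].
    static void radixSort(std::vector<std::int64_t>& a, std::size_t lo, std::size_t hi, int bit) {
        if (hi - lo < 2 || bit < 0) return;
        std::size_t i = lo, j = hi;
        while (i < j) {
            // Flipping the sign bit makes the unsigned bit order match the signed order.
            std::uint64_t key = static_cast<std::uint64_t>(a[i]) ^ (1ULL << 63);
            if (((key >> bit) & 1) == 0) ++i;      // current bit is 0: keep on the left
            else std::swap(a[i], a[--j]);          // current bit is 1: move to the right
        }
        radixSort(a, lo, i, bit - 1);
        radixSort(a, i, hi, bit - 1);
    }

    bool permutationBySorting(std::vector<std::int64_t>& a, std::vector<std::int64_t>& b) {
        if (a.size() != b.size()) return false;
        radixSort(a, 0, a.size(), 63);
        radixSort(b, 0, b.size(), 63);
        return a == b;                             // equal after sorting <=> permutations
    }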

It can be done if you have a hard limit on the range of the numbers themselves.
Say, for example, you know that you have two arrays A and B and that the numbers are bounded between -128 and +127 (8-bit signed). You simply have an array of 256 locations. Each number n would map to location n + 128.
You iterate over both arrays, for array A you would increment the corresponding location, for array B you decrement. Then you check if all locations are 0 or not. If they are, the arrays are permutations, if not, they aren't.
The time complexity is O(n+k). The space complexity is O(k), where k is the range of the numbers. Since k is independent of n, that's O(n) time and O(1) space as far as n is concerned, as long as you have a bound on k.
Note also that the time complexity can be further reduced from O(n+k) to simply O(n). You simply keep a running total of the numbers that have non-zero counts. Every time an increment/decrement would push a count from zero to something else, you increment the running total. Every time it would push a count to zero, you decrement the total. At the end, if the total is 0, then all counts are 0.
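As a rough illustration of the counting idea (assuming 8-bit signed values; permutationByCounting is a hypothetical name), the whole check, including the running total of non-zero counts, might look like this:

    #include <array>
    #include <cstddef>

    bool permutationByCounting(const signed char* A, const signed char* B, std::size_t n) {
        std::array<int, 256> count{};          // one slot per possible value, all zero
        std::size_t nonZero = 0;               // running total of non-zero slots
        auto bump = [&](int slot, int delta) {
            if (count[slot] != 0) --nonZero;   // slot leaves the non-zero set...
            count[slot] += delta;
            if (count[slot] != 0) ++nonZero;   // ...or re-enters it after the update
        };
        for (std::size_t i = 0; i < n; ++i) {
            bump(A[i] + 128, +1);              // increment for A
            bump(B[i] + 128, -1);              // decrement for B
        }
        return nonZero == 0;                   // all counts zero <=> B is a permutation of A
    }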
Edit: Amit's answer probably has a better space complexity though :)
PS: However, this algorithm can be applied even if the arrays of numbers are streamed in, so they never actually have to be kept in memory in full. So it might have a smaller space complexity than outright sorting if the conditions are right.

Related

Is O(cn) at least as fast as O(n) in a non-asymptotic way?

So first of all let me talk about the motivation for this question. Let's suppose you have to find the minimum and the maximum values in an array. In this case, you have two ways of doing so.
The first one consists in iterating over the array and finding the maximum value, then doing the same thing to find the minimum value. This solution is O(2n).
The second one consists in iterating over the array just one time and finding both the minimum and maximum value at the same time. This solution is O(n).
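For reference, a minimal C++ sketch of the single-pass version (minMax is just a placeholder name):

    #include <limits>
    #include <utility>
    #include <vector>

    // One pass, two comparisons per element.
    std::pair<int, int> minMax(const std::vector<int>& a) {
        int mn = std::numeric_limits<int>::max();
        int mx = std::numeric_limits<int>::min();
        for (int x : a) {
            if (x < mn) mn = x;   // track minimum
            if (x > mx) mx = x;   // track maximum
        }
        return {mn, mx};          // note: meaningless for an empty array
    }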
Even though the time complexity has been halved, for each iteration of the O(n) solution you now have twice as many instructions (ignoring how the compiler may optimize these instructions), so I believe they should take the same amount of time to execute.
Let me give you a second example. Now you need to reverse an array. Again, you have two ways of doing so.
The first one is to create a new array and fill it by iterating over the data array in reverse order. This solution is O(n).
The second one is to iterate over the data array, swapping the 0th and (n-1)th elements, then the 1st and (n-2)th elements and so on (using this strategy) until you reach the middle of the array. This solution is O((1/2)n).
Again, even though the time complexity has been cut in half, you have three times more instructions per iteration. You're iterating over (1/2)n elements, but for each iteration you have to perform three XOR instructions. If you were not to use XOR but an auxiliary variable, you would still need 2 more instructions to perform the swap, so now I believe that O((1/2)n) should actually be worse than O(n).
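And a minimal sketch of the swap-based reversal (using std::swap rather than XOR; reverseInPlace is a placeholder name):

    #include <cstddef>
    #include <utility>
    #include <vector>

    // n/2 iterations, one swap each: pairs the 0th with the (n-1)th element,
    // the 1st with the (n-2)th, and so on up to the middle.
    void reverseInPlace(std::vector<int>& a) {
        for (std::size_t i = 0, j = a.size(); i + 1 < j; ++i, --j) {
            std::swap(a[i], a[j - 1]);
        }
    }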
Having said these things, my question is the following:
Ignoring space complexity, garbage collection, and possible compiler optimizations: given an O(c1*n) algorithm and an O(c2*n) algorithm with c1 > c2, can I be sure that the algorithm that gives me O(c1*n) is as fast as or faster than the one that gives me O(c2*n)?
This question is cool because it can make a difference in how I write code from here on. If the "more complex" (c1) way is as fast as the "less complex" (c2) one but more readable, I'm sticking with the "more complex" one.
c1 > c2, can I be sure that the algorithm that gives me O(c1n) is as fast as or faster than the one that gives me O(c2n)?
The whole issue lies within the words "fast" or "faster". Computational complexity doesn't strictly measure what we intuitively understand as "fast". Without going into mathematical details (although it's a good idea: https://en.wikipedia.org/wiki/Big_O_notation), it answers the question "how much slower does it get as my input grows". So if you have O(n^2) complexity, you can roughly expect that doubling the size of the input will make your algorithm take 4 times as long, whereas for linear complexity an input twice as big only doubles the time. As you can see, it's relative, so any constants cancel out.
To sum up: from the way you ask your question, it doesn't seem the big-O notation is the correct tool here.
By definition, if c1 and c2 are constants, O(c1*n) === O(c2*n) === O(n). That is, the number of operations per element of your array of length n is completely irrelevant in this kind of complexity analysis.
All that it will tell you is that "it's linear". That is, if you have 1 bazillion operations for an array of length n, then you'll have 2 bazillion operations for an array of length 2*n (plus or minus something that grows slower than linear).
given O(c1n) and O(c2n) algorithms with c1 > c2, can I be sure that the algorithm that gives me O(c1n) is as fast as or faster than the one that gives me O(c2n)?
Nope, not at all.
First, because the constants there are meaningless in that analysis. There's no other way to put it: whatever restrictions you put on c1 and c2 are absolutely irrelevant for big-O analysis. The whole idea is that it discards those restrictions.
Second, because they don't tell you anything that would enable you to compare the two algorithms runtime for a specific value of n.
Such complexity analysis only enables you to compare the asymptotic behavior of algorithms. Real-world problems in general don't care about where the asymptotes are.
Assume that A1(n) is the number of operations Algorithm 1 needs for an input of length n, and A2(n) is the same for Algorithm 2. You could have:
A1(n) = 10n + 900
A2(n) = 100n
The complexity of both is O(A1) = O(A2) = O(n). For small inputs, A2 is faster. For large inputs, A1 is faster. The point where they cross over is n == 10.
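A throwaway snippet to check the crossover numerically (purely illustrative):

    #include <iostream>

    // A1(n) = 10n + 900, A2(n) = 100n
    int main() {
        for (long n : {1L, 10L, 100L}) {
            std::cout << "n=" << n
                      << "  A1=" << 10 * n + 900
                      << "  A2=" << 100 * n << '\n';
        }
        // n=1:   A1=910   A2=100    (A2 faster)
        // n=10:  A1=1000  A2=1000   (crossover)
        // n=100: A1=1900  A2=10000  (A1 faster)
        return 0;
    }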
This question is cool because it can make a difference in how I write code from here on. If the "more complex" (c1) way is as fast as the "less complex" (c2) one but more readable, I'm sticking with the "more complex" one.
Not only that, but also there's the fact that when you have 2 different algorithms that are really of different complexity classes (e.g., linear vs quadratic), it might still make sense to use the one of higher complexity as it may still be faster.
For example:
A3(n) = n^2
A4(n) = n + 10^20.
That is, Algorithm 3 is quadratic, while Algorithm 4 is linear but has a huge constant initialization time.
For inputs of size up to around n == 10^10, it will be faster to use the quadratic algorithm.
It may very well be the case that all relevant inputs for your specific problem fall within that range, meaning that the quadratic algorithm would be the better, faster choice.
The bottom line is: for analyzing the actual time it will take to run an algorithm on a given input (or a given bounded range of inputs, as nearly all real-world problems are) and compare it with another algorithm, big-O analysis is meaningless.
Another way to put it: you're asking a practical "engineering" question (i.e., which option is better / faster) but trying to answer the question with a tool that's only useful for "theoretical" analysis. That tool is important, yes. But it has no chance of giving you the answer you're looking for, by design.
By definition, time complexity ignores constants. So O((1/2)n) == O(n) == O(2n) == O(cn).
Your example of O((1/2)n) shows why this is the case: the constants can measure units of anything, so comparing them is meaningless.
You can never tell which algorithm is faster based only on the time complexity. But you can tell which one would be faster as n approaches infinity. Since constants are removed from the time complexity, they would be considered equal, and therefore with O(c1n) and O(c2n) you still would not be able to tell which one is faster even as n approaches infinity.
(my theoretical computer science courses were a couple of decades ago)
O(cn) is O(n).
It's still a linear search over the array.

If 1D and 2D array always have equivalent content will time complexity differ?

Say, for example, I have a set of numbers and I populate a copy of it in a 1D array and a copy in a 2D array, so that I have, and will always have, the same number of elements in each array. In this case does the time complexity actually differ, bearing in mind that the number of elements will always be the same?
No, the time complexity of the same algorithm operating on both types of inputs will be the same. Intuitively, the time complexity of an algorithm will not change just because the input data is arranged in a different way.
That being said, the notion of input size apparently depends a bit on the context, which can be puzzling. When discussing sorting algorithms, the input consists of n elements, which means that a time complexity of e.g. O(n) (which, however, is impossible for comparison-based sorting) would be termed linear. In contrast, when discussing algorithms for matrix multiplication, the input is usually imagined as an n*n matrix, which has not n but n^2 elements. In this case, an algorithm of complexity O(n*n) (which, again, is unlikely) would also be termed linear, although the expression describing it is actually a square term.
To put it all in a nutshell, the time complexity refers to the actual input size, not some technical parameter which might be different from it.

dynamic array's time complexity of putting an element

In a written examination, I met a question like this:
When a dynamic array is full, it will grow to double the space, e.g. from 2 to 4, or from 16 to 32, etc. But what's the time complexity of putting an element into the array?
I think that extending space should not be considered, so I wrote O(n), but I am not sure.
what's the answer?
It depends on the question that was asked.
If the question asked for the time required for one insertion, then the answer is O(n) because big-O implies "worst case." In the worst case, you need to grow the array. Growing an array requires allocating a bigger memory block (as you say often 2 times as big, but other factors bigger than 1 may be used) and then copying the entire contents, which is the n existing elements. In some languages like Java, the extra space must also be initialized.
If the question asked for amortized time, then the answer is O(1). Another way of saying this is that the cost of n adds is O(n).
How can this be? Each addition is O(n) in the worst case, yet n of them together also cost only O(n). This is the beauty of amortization. For simplicity, say the array starts with size 1 and grows by a factor of 2 every time it fills, so we're always copying a power of 2 elements. This means the cost of growing is 1 the first time, 2 the second time, etc. In general, the total cost of growing to n elements is TC = 1 + 2 + 4 + ... + n. Well, it's not hard to see that TC = 2n - 1. E.g. if n = 8, then TC = 1 + 2 + 4 + 8 = 15 = 2*8 - 1. So TC is proportional to n, or O(n).
This analysis works no matter the initial array size or the factor of growth, so long as the factor is greater than 1.
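For concreteness, here is a quick counting sketch (hypothetical: capacity starts at 1 and doubles) that tallies how many element copies n appends trigger:

    #include <cstddef>
    #include <iostream>

    int main() {
        const std::size_t n = 1 << 20;      // number of appends to simulate
        std::size_t capacity = 1, size = 0, copies = 0;
        for (std::size_t i = 0; i < n; ++i) {
            if (size == capacity) {         // full: "reallocate" and copy everything over
                copies += size;
                capacity *= 2;
            }
            ++size;                         // the append itself is O(1)
        }
        std::cout << "appends: " << n << ", copies: " << copies << '\n';
        // With n a power of two, copies works out to n - 1: proportional to n,
        // so the amortized cost per append is O(1).
        return 0;
    }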
If your teacher is good, he or she asked this question in an ambiguous manner to see if you could discuss both answers.
In order to grow the array you cannot simply "add more to the end", because you will most likely get a "segmentation fault" type of error. So even though an insertion takes Θ(1) steps on average (because you usually have enough space), in terms of O notation it is O(n), because you have to copy the old array into a new, bigger array (for which you allocated memory), and that should generally take n steps. On the other hand, copying arrays is fast in practice because it's just a memory copy from a contiguous region, which in the best scenario could be close to one step, i.e. where a page (OS) can hold the whole array. In the end, mathematically, even considering that we are making n / (4096 * 2^10) (4 KB) steps, it still means O(n) complexity.

How to find the kth smallest element of a list without sorting the list?

I need to find the median of an array without sorting or copying the array.
The array is stored in the shared memory of a cuda program. Copying it to global memory would slow the program down and there is not enough space in shared memory to make an additional copy of it there.
I could use two 'for' loops, iterate over every possible value and count how many values are smaller than it, but this would be O(n^2). Not ideal.
Does anybody know of an O(n) or O(n log n) algorithm which solves my problem?
Thanks.
If your input are integers with absolute value smaller than C, there's a simple O(n log C) algorithm that needs only constant additional memory: Just binary search for the answer, i.e. find the smallest number x such that x is larger than or equal to at least k elements in the array. It's easily parallelizable too via a parallel prefix scan to do the counting.
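A minimal sketch of that binary search (kthSmallest is a placeholder name; C is the assumed bound on absolute values, and the counting loop is the part you would replace with a parallel reduction):

    #include <cstddef>
    #include <cstdint>

    // Returns the k-th smallest element (1-based), assuming |a[i]| < C for all i.
    // O(n log C) time, O(1) extra space.
    std::int64_t kthSmallest(const std::int64_t* a, std::size_t n, std::size_t k, std::int64_t C) {
        std::int64_t lo = -C, hi = C;            // the answer lies in [-C, C]
        while (lo < hi) {
            std::int64_t mid = lo + (hi - lo) / 2;
            std::size_t countLE = 0;             // how many elements are <= mid
            for (std::size_t i = 0; i < n; ++i)
                if (a[i] <= mid) ++countLE;
            if (countLE >= k) hi = mid;          // at least k elements <= mid: answer <= mid
            else lo = mid + 1;                   // too few: the answer must be larger
        }
        return lo;                               // smallest x with at least k elements <= x
    }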
Your time and especially memory constraints make this problem difficult. It becomes easy, however, if you're able to use an approximate median.
Say an element y is an ε-approximate median if
m/2 − εm < rank(y) < m/2 + εm,
where m is the size of the array.
Then all you need to do is sample
t = 7ε⁻² log(2δ⁻¹)
elements, and find their median any way you want.
Note that the number of samples you need is independent of your array's size - it is just a function of ε and δ.
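A rough sketch of the sampling step (approximateMedian is a placeholder; it assumes t has already been computed from ε and δ as above, and that the array is non-empty):

    #include <algorithm>
    #include <cstddef>
    #include <random>
    #include <vector>

    int approximateMedian(const std::vector<int>& a, std::size_t t) {
        std::mt19937 gen(std::random_device{}());
        std::uniform_int_distribution<std::size_t> pick(0, a.size() - 1);
        std::vector<int> sample(t);
        for (std::size_t i = 0; i < t; ++i)
            sample[i] = a[pick(gen)];                  // sample with replacement
        std::nth_element(sample.begin(), sample.begin() + t / 2, sample.end());
        return sample[t / 2];                          // median of the sample
    }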

A Memory-Adaptive Merge Algorithm?

Many algorithms work by using the merge algorithm to merge two different sorted arrays into a single sorted array. For example, given as input the arrays
1 3 4 5 8
and
2 6 7 9
The merge of these arrays would be the array
1 2 3 4 5 6 7 8 9
Traditionally, there seem to be two different approaches to merging sorted arrays (note that the case for merging linked lists is quite different). First, there are out-of-place merge algorithms that work by allocating a temporary buffer for storage, then storing the result of the merge in the temporary buffer. Second, if the two arrays happen to be part of the same input array, there are in-place merge algorithms that use only O(1) auxiliary storage space and rearrange the two contiguous sequences into one sorted sequence. These two classes of algorithms both run in O(n) time, but the out-of-place merge algorithm tends to have a much lower constant factor because it does not have such stringent memory requirements.
My question is whether there is a known merging algorithm that can "interpolate" between these two approaches. That is, the algorithm would use somewhere between O(1) and O(n) memory, but the more memory it has available to it, the faster it runs. For example, if we were to measure the absolute number of array reads/writes performed by the algorithm, it might have a runtime of the form n g(s) + f(s), where s is the amount of space available to it and g(s) and f(s) are functions derivable from that amount of space available. The advantage of this function is that it could try to merge together two arrays in the most efficient way possible given memory constraints - the more memory available on the system, the more memory it would use and (ideally) the better the performance it would have.
More formally, the algorithm should work as follows. Given as input an array A consisting of two adjacent, sorted ranges, rearrange the elements in the array so that the elements are completely in sorted order. The algorithm is allowed to use external space, and its performance should be worst-case O(n) in all cases, but should run progressively more quickly given a greater amount of auxiliary space to use.
Is anyone familiar with an algorithm of this sort (or know where to look to find a description of one?)
At least according to the documentation, the in-place merge function in the SGI STL is adaptive and "its run-time complexity depends on how much memory is available". The source code is available, so you could at least check this one.
EDIT: STL has inplace_merge, which will adapt to the size of the temporary buffer available. If the temporary buffer is at least as big as one of the sub-arrays, it's O(N). Otherwise, it splits the merge into two sub-merges and recurses. The split takes O(log N) to find the right part of the other sub-array to rotate in (binary search).
So it goes from O(N) to O(N log N) depending on how much memory you have available.
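A small usage sketch of std::inplace_merge on the example arrays from the question above:

    #include <algorithm>
    #include <vector>

    int main() {
        // Two sorted runs stored back to back in one array.
        std::vector<int> a = {1, 3, 4, 5, 8, 2, 6, 7, 9};
        std::inplace_merge(a.begin(), a.begin() + 5, a.end());
        // a is now {1, 2, 3, 4, 5, 6, 7, 8, 9}; the algorithm adapts to whatever
        // temporary buffer the implementation can obtain.
        return 0;
    }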
