I have a problem where I have to find the missing numbers within an array and add them to a set.
The question goes like so:
Array of size (n-m) with numbers from 1..n with m of them missing.
Find all of the missing numbers in O(log n). The array is sorted.
Example:
n = 8
arr = [1,2,4,5,6,8]
m=2
Result has to be a set {3, 7}.
This is my solution so far, and I wanted to know how to calculate the Big O of a solution. Also, most solutions I have seen use a divide-and-conquer approach. How do I calculate the Big O of my algorithm below?
PS: If I don't meet the requirement, is there any way I can do this without doing it recursively? I am really not a fan of recursion, I simply can't get my head around it! :(
var arr = [1,2,4,5,6,8];
var mySet = [];
findMissingNumbers(arr);

function findMissingNumbers(arr){
    var temp = 0;
    for (number in arr){ //O(n)
        temp = parseInt(number)+1;
        if(arr[temp] - arr[number] > 1){
            addToSet(arr[number], arr[temp]);
        }
    }
}

function addToSet(min, max){
    while (min != max-1){
        mySet.push(++min);
    }
}
There are two things you want to look at, one of which you have pointed out: how many times do you iterate the loop "for (number in arr)"? If your array contains n-m elements, then this loop is iterated n-m times. Then look at each operation you do inside the loop and try to figure out a worst-case (or typical) cost for each. The temp=... line is a constant cost (say 1 unit per loop), the conditional is a constant cost (say 1 unit per loop), and then there is addToSet. addToSet is more difficult to analyze because it isn't called every time, and it may vary in how expensive each call is. So perhaps what you want to think is that for each of the m missing elements, addToSet is going to perform 1 operation: a total of m operations (you don't know when they will occur, but all m must occur at some point). Then add up all of your costs.
You have n-m loop iterations, and in each one you do 2 operations, for a total of 2(n-m); then add in the m operations done by addToSet, for a total of roughly 2n-m ~ 2n (assuming that m is small compared to n). This could be written O(n-m) or O(n) (if it is O(n-m) it is also O(n), since n >= n-m). Hope this helps.
In your code you have a time complexity of O(n), because you check every index of your array. A faster way to do it is something like this:
Go to the middle of your array.
Check whether the number there is at the right place (if it is, the ones before it are too, because the array is sorted).
If it is the expected number: recurse into the second half.
If not: add this number to the set and recurse into the first half.
Stop when the number you are looking at is at index size-1.
Note that you can add some optimizations; for example, you can directly check whether the array has the correct size and return an empty set. It depends on your problem.
In the worst case my algorithm is also O(n), because Big O always takes the worst set of data; in my case that would be a single value missing at the end of the array. Technically it would be O(n-1), but constants are negligible compared to n (assumed to be very large). That's why you have to keep the average complexity in mind too.
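For illustration, here is a rough Python sketch of this divide-and-conquer idea (this is not the answerer's code; the function names and the use of arr[i] - (i + 1) as a count of how many values are missing before index i are my own):

def find_missing(arr, n):
    # arr is sorted and drawn from 1..n; returns the set of missing values.
    # arr[i] - (i + 1) tells how many values are missing strictly before arr[i];
    # we only descend into halves where that count changes.
    result = set()

    def recurse(lo, hi):
        if arr[lo] - (lo + 1) == arr[hi] - (hi + 1):
            return                                      # nothing missing in this range
        if hi - lo == 1:
            result.update(range(arr[lo] + 1, arr[hi]))  # everything in between is missing
            return
        mid = (lo + hi) // 2
        recurse(lo, mid)
        recurse(mid, hi)

    if not arr:
        return set(range(1, n + 1))
    result.update(range(1, arr[0]))           # missing before the first element
    result.update(range(arr[-1] + 1, n + 1))  # missing after the last element
    recurse(0, len(arr) - 1)
    return result

print(find_missing([1, 2, 4, 5, 6, 8], 8))    # {3, 7}

When only a few values are missing, this touches only O(log n) positions per gap instead of scanning the whole array.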
For what it's worth, here is a more succinct implementation of the algorithm (JavaScript):
var N = 10;
var arr = [2,9];
var mySet = [];
var index = 0;
for(var i=1;i<=N;i++){
    if(i!=arr[index]){
        mySet.push(i);
    }else{
        index++;
    }
}
Here the Big O is trivial, as there is only a single loop which runs exactly N times with constant-cost operations in each iteration.
Big O describes the complexity of the algorithm. It is a function giving the number of steps it takes your program to come up with a solution.
This gives a pretty good explanation of how it works:
Big O, how do you calculate/approximate it?
I was wondering what would be the time complexity of this piece of code?
last = 0
ans = 0
array = [1,2,3,3,3,4,5,6]
for number in array:
    if number != last: ans += 1
    last = number
return ans
I'm thinking O(n^2), as we look at all the array elements twice: once when executing the for loop, and another time when comparing the two subsequent values. But I am not sure if my guess is correct.
While processing each array element, you just make one comparison, based on which you update ans and last. The complexity of the algorithm stands at O(n), and not O(n^2).
The answer is actually O(1) for this case, and I will explain why after explaining why a similar algorithm would be O(n) and not O(n^2).
Take a look at the following example:
def do_something(array):
    last = 0
    ans = 0
    for number in array:
        if number != last: ans += 1
        last = number
    return ans
We go through each item in the array once, and do two operations with it.
The rule for time complexity is that you take the largest component and drop the constant factors.
If we actually wanted to calculate the exact number of operations, we might try something like:
for number in array:
    if number != last: ans += 1   # n times
    last = number                 # n times
return ans                        # 1 time
# total number of instructions = 2 * n + 1
Now, Python is a high level language so some of these operations are actually multiple operations put together, so that instruction count is not accurate. Instead, when discussing complexity we just take the largest contributing term (2 * n) and remove the coefficient to get (n). big-O is used when discussing worst case, so we call this O(n).
I think you're confused because the algorithm you provided looks at two numbers at a time. The distinction you need to understand is that your code only looks at 2 numbers at a time, once for each item in the array. It does not look at every possible pair of numbers in the array. Even if your code looked at half of the possible pairs, it would still be O(n^2), because the 1/2 coefficient would be dropped.
Consider code that does look at every pair; here is an example of an O(n^2) algorithm:
for n1 in array:
    for n2 in array:
        print(n1 + n2)
In this example, we are looking at every pair of numbers. How many pairs are there? There are n^2 pairs of numbers. Contrast this with your question: we look at each number individually and compare it with last. How many (number, last) pairs are there? At worst 2 * n, which we call O(n).
I hope this clears up why this would be O(n) and not O(n^2). However, as I said at the beginning of my answer, this is actually O(1). This is because the length of the array is specifically 8, and not some arbitrary length n. Every time you execute this code it will take the same amount of time; it doesn't vary with anything, so there is no n. In my example n was the length of the array, but no such length term is provided in your example.
I have 3 arrays a[1...n], b[1...n], c[1...n] which contain integers.
It is not mentioned if the arrays are sorted or if each array has or has not duplicates.
The task is to check if there is any common number in the given arrays and return true or false.
For example, the arrays a=[3,1,5,10], b=[4,2,6,1], c=[5,3,1,7] have one common number: 1.
I need to write an algorithm with time complexity O(n^2).
Let the index currently traversed in a[] be x, in b[] be y, and in c[] be z. Inside the loops I have the following case: if a[x], b[y] and c[z] are all the same, I can simply return true and stop the program. Something like:
for(x=1;x<=n;x++)
    for(y=1;y<=n;y++)
        for(z=1;z<=n;z++)
            if(a[x]==b[y] && b[y]==c[z])
                return true
But this algorithm has time complexity O(n^3), and I need O(n^2). Any suggestions?
There is a pretty simple and efficient solution for this.
Sort a and b. Complexity = O(NlogN)
For each element in c, use binary search to check if it exists in both a and b. Complexity = O(NlogN).
That'll give you a total complexity of O(NlogN), better than O(N^2).
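A minimal Python sketch of that approach (the function names are illustrative, not taken from the answer):

from bisect import bisect_left

def has_common(a, b, c):
    # Sort a and b, then binary-search every element of c in both.
    a, b = sorted(a), sorted(b)

    def contains(sorted_arr, x):
        i = bisect_left(sorted_arr, x)
        return i < len(sorted_arr) and sorted_arr[i] == x

    return any(contains(a, x) and contains(b, x) for x in c)

print(has_common([3, 1, 5, 10], [4, 2, 6, 1], [5, 3, 1, 7]))  # True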
Create a new array and save the elements common to a and b in it. Then find the elements common to this new array and c.
python solution
def find_d(a, b, c):
    for i in a:
        for j in b:
            if i==j:
                d.append(i)

def findAllCommon(c, d):
    for i in c:
        for j in d:
            if i==j:
                e.append(i)
                break

a = [3,1,5,10]
b = [4,2,6,1]
c = [5,3,1,7]
d = []
e = []
find_d(a, b, c)
findAllCommon(c, d)
if len(e)>0:
    print("True")
else:
    print("False")
Since I haven't seen a solution based on sets, I suggest looking up how sets are implemented in your language of choice and doing the equivalent of this:
set(a).intersection(b).intersection(c) != set([])
This evaluates to True if there is a common element, False otherwise. It runs in O(n) time.
All solutions so far either require O(n) additional space (creating a new array/set) or change the order of the arrays (sorting).
If you want to solve the problem in O(1) additional space and without changing the original arrays, you indeed can't do better than O(n^2) time:
foreach(var x in a) {                                    // n iterations
    if (b.Contains(x) && c.Contains(x)) return true;     // max. 2n
}                                                        // O(n^2)
return false;
A suggestion:
Combine the arrays into a single array z, whose size is the total number of entries in the three arrays. Keep track of how many entries came from Array 1, Array 2 and Array 3.
For each entry Z, traverse the combined array to see how many duplicates of it there are and where they are. For any value with 2 or more duplicates (i.e. 3 or more copies of the same number), check that the positions of those duplicates correspond to different original arrays (ruling out duplicates within a single original array). If a number Z has 2 or more duplicates and they all came from different arrays (checked via their position in the combined array), store Z in the result array.
Report the result array.
You will traverse the entire combined array once, and then traverse almost all of it again for each Z (no need to check whether Z is a duplicate of itself), so n^2 complexity.
It is also worth noting that the time complexity is now a function of the total number of entries, not of the number of arrays (your nested loops would become n^4 with 4 arrays; this stays at n^2).
You could make it more efficient by checking the already-found duplicates before processing a new Z: if the new Z was already found as a duplicate of an earlier Z, you need not traverse the array for that number again. This helps more the more duplicates there are; with few duplicates, the reduction in traversals is probably not worth the extra complexity.
Of course you could also do this without actually combining the values into a single array; you would just need to make sure that your traversing routine looks through the arrays and keeps track of what it finds in the right order.
Edit
Thinking about it, the above is doing way more than you want. It would allow you to report on doubles, quads etc. as well.
If you just want triples, then it is much easier/quicker. Since a triple needs to be in all 3 arrays, you can start by finding the numbers which appear in 2 of the arrays (if they are of different lengths, compare the 2 shortest arrays first) and then check any doublets found against the third array. Not sure what that brings the complexity down to, but it will be less than n^2...
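A rough Python sketch of that revised idea (illustrative only; it uses sets for the doublet step, which the original suggestion does not require):

def common_to_all_three(a, b, c):
    # Compare the two shortest arrays first, then check the candidate
    # "doublets" against the remaining, longest array.
    shortest, middle, longest = sorted([a, b, c], key=len)
    doublets = set(shortest) & set(middle)
    return any(x in doublets for x in longest)

print(common_to_all_three([3, 1, 5, 10], [4, 2, 6, 1], [5, 3, 1, 7]))  # True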
There are many ways to solve this. Here are a few selected ones, sorted by complexity (descending), assuming n is the average size of your individual arrays:
Brute force O(n^3)
It is basically the same as what you do: test every triplet combination with 3 nested for loops.
for(x=1;x<=n;x++)
 for(y=1;y<=n;y++)
  for(z=1;z<=n;z++)
   if ((a[x]==b[y])&&(b[y]==c[z]))
    return true;
return false;
slightly optimized brute force O(n^2)
Simply check if each element from a is in b, and if yes, then check if it is also in c. This is O(n*(n+n)) = O(n^2), as the b and c loops are no longer nested:
for(x=1;x<=n;x++)
 {
 for(ret=false,y=1;y<=n;y++)
  if(a[x]==b[y])
   { ret=true; break; }
 if (!ret) continue;
 for(ret=false,z=1;z<=n;z++)
  if(a[x]==c[z])
   { ret=true; break; }
 if (ret) return true;
 }
return false;
exploit sorting O(n.log(n))
Simply sort all arrays, O(n.log(n)), and then traverse all 3 arrays together to test whether an element is present in all of them (a single for loop, always incrementing the array whose current element is smallest). This can also be done with binary search like one of the other answers suggests, but that is slower (still not exceeding n.log(n)). Here is the O(n) traversal:
for(x=1,y=1,z=1;(x<=n)&&(y<=n)&&(z<=n);)
 {
 if ((a[x]==b[y])&&(b[y]==c[z])) return true;
 if (a[x]<b[y]) x++;      // a[x] can no longer match anything left in b
 else if (b[y]<c[z]) y++; // b[y] can no longer match anything left in c
 else z++;                // here c[z] is the smallest, so advance it
 }
return false;
However, this needs to change the contents of the arrays, or needs additional index arrays for sorting instead (so O(n) space).
histogram based O(n+m)
This can be used only if the range of elements in your arrays is not too big. Let's say the arrays can hold numbers 1..m; then you build a (modified) histogram holding a set bit for each array in which a value is present, and simply check whether a value is present in all 3:
int h[m+1];                   // histogram (1-based indexing, hence m+1)
for(i=1;i<=m;i++) h[i]=0;     // clear histogram
for(x=1;x<=n;x++) h[a[x]]|=1;
for(y=1;y<=n;y++) h[b[y]]|=2;
for(z=1;z<=n;z++) h[c[z]]|=4;
for(i=1;i<=m;i++) if (h[i]==7) return true;
return false;
This needs O(m) space ...
So you clearly want option #2
Beware: all the code is just your code copy-pasted and modified directly in the answer editor, so there might be typos or syntax errors I do not see right now...
You are given an unsorted array of n integers, and you would like to find if there are any duplicates in the array (i.e. any integer appearing more than once).
Describe an algorithm (implemented with two nested loops) to do this.
The question that I am stuck at is:
How can you limit the input data to achieve a better Big O complexity? Describe an algorithm for handling this limited data to find if there are any duplicates. What is the Big O complexity?
Your help will be greatly appreciated. This is not related to my coursework, assignments and such; it's from a previous year's exam paper and I am doing some self-study, but I seem to be stuck on this question. The only possible solution that I could come up with is:
If we limit the data and use nested loops to find whether there are duplicates, the complexity would be O(n), simply because the amount of time the operations take is proportional to the data size.
If my answer makes no sense, then please ignore it and, if you could, please suggest possible solutions/working-out for this answer.
If someone could help me solve this, I would be grateful, as I have attempted countless possible solutions, all of which seem not to be correct.
Edited again. Another possible solution (if effective!):
We could implement a loop to sort the array (from lowest integer to highest), so that the duplicates end up right next to each other, making them easier and faster to identify.
The big O complexity would still be O(n^2).
Since this is linear, it would simply use the first loop and iterate n-1 times, getting the index in the array (in the first iteration it could be, for instance, 1) and storing it in a variable named 'current'.
The loop updates the current variable by +1 on each iteration. Within that loop, we write another loop to compare the current number to the next number; if it equals the next number, we can print it using a printf statement, else we move back to the outer loop to update the current variable by +1 (the next value in the array) and update the next variable to hold the value of the number after the one in current.
You can do it in linear time (O(n)) for any input if you use hash tables (which have constant look-up time).
However, this is not what you are being asked about.
By limiting the possible values in the array, you can achieve linear performance.
E.g., if your integers have range 1..L, you can allocate a bit array of length L, initialize it to 0, and iterate over your input array, checking and flipping the appropriate bit for each input.
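A minimal Python sketch of that bit-array idea (the function name is mine; a single Python integer stands in for the bit array):

def has_duplicate(arr):
    # Values are assumed to lie in a limited range 1..L, so this integer
    # needs at most L bits; bit x is set once value x has been seen.
    bits = 0
    for x in arr:
        mask = 1 << x
        if bits & mask:
            return True       # bit already set: x appears more than once
        bits |= mask
    return False

print(has_duplicate([3, 1, 4, 1, 5]))  # True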
A variant of Bucket Sort will do. This will give you a complexity of O(n), where 'n' is the number of input elements.
But there is one restriction: the max value. You should know the maximum value your integer array can take. Let's call it m.
The idea is to create a bool array of size m (all initialized to false). Then iterate over your array. As you visit an element, set bucket[element] to true. If it is already true, then you've encountered a duplicate.
Some Java code:
// alternatively, you can iterate over the array to find the maxVal, which again is O(n).
public boolean findDup(int[] arr, int maxVal)
{
    // Java by default assigns false to all the values.
    // maxVal + 1 slots so that the value maxVal itself is a valid index.
    boolean bucket[] = new boolean[maxVal + 1];
    for (int elem : arr)
    {
        if (bucket[elem])
        {
            return true; // a duplicate found
        }
        bucket[elem] = true;
    }
    return false;
}
But the constraint here is the space. You need O(maxVal) space.
Nested loops get you O(N*M) or O(N*log(M)); for O(N) you cannot use nested loops!!!
I would do it with a histogram instead:
DWORD in[N]={ ... };  // input data ... values are from < 0 , M )
DWORD his[M]={ ... }; // histogram of in[]
int i,j;
// compute histogram O(N)
for (i=0;i<M;i++) his[i]=0;     // this can also be done with memset ...
for (i=0;i<N;i++) his[in[i]]++; // if the range of values does not start at 0 then shift it ...
// remove duplicates O(N)
for (i=0,j=0;i<N;i++)
 {
 his[in[i]]--;           // count down the remaining occurrences of this value
 in[j]=in[i];            // copy item
 if (his[in[i]]<=0) j++; // keep it only if this was the last occurrence, so duplicates are dropped
 }
// now j holds the new in[] array size
[Notes]
If the value range is too big, with sparse areas, then you need to convert his[] to a dynamic list with two values per item: one is the value from in[] and the second is its occurrence count.
But then you need a nested loop -> O(N*M), or binary search -> O(N*log(M)).
Can we find the mode of an array in O(n) time, without using additional O(n) space or a hash? Moreover, the data is not sorted.
The problem is not easier than the element distinctness problem (1), so basically, without the additional space, the problem's complexity is Theta(nlogn) at best (and since it can be done in Theta(nlogn), that is indeed the case).
So basically, if you cannot use extra space for a hash table, the best you can do is sort and iterate, which is Theta(nlogn).
(1) Given an algorithm A that runs in O(f(n)) for this problem, it is easy to see that one can run A and then verify that the resulting element repeats more than once with an extra iteration, solving the element distinctness problem in O(f(n) + n).
Under the right circumstances, yes. Just for example, if your data is amenable to a radix sort, then you can sort with only constant extra space in linear time, followed by a linear scan through the sorted data to find the mode.
If your data requires comparison-based sorting, then I'm pretty sure O(N log N) is about as well as you can do in the general case.
Just count the frequencies. This is not O(n) space, it is O(k), with k being the number of distinct values in the range. This is actually constant space.
Time is clearly linear, O(n):
// init
counts = array[k]
for i = 0 to k
    counts[i] = 0
maxCnt = 0
maxVal = vals[0]

for val in vals
    counts[val]++
    if (counts[val] > maxCnt)
        maxCnt = counts[val]
        maxVal = val
The main problem here is that while k may be a constant, it may also be very, very large. However, k could also be small. Regardless, this does properly answer your question, even if it isn't practical.
As a homework question, the following task had been given:
You are given an array with integers between 1 and 1,000,000. One integer is in the array twice. How can you determine which one? Can you think of a way to do it using little extra memory.
My solutions so far:
Solution 1
Have a hash table.
Iterate through the array and store its elements in the hash table.
As soon as you find an element which is already in the hash table, it is the dup element.
Pros
It runs in O(n) time and with only 1 pass
Cons
It uses O(n) extra memory
Solution 2
Sort the array using merge sort (O(nlogn) time)
Parse it again, and if you see an element twice, you've got the dup.
Pros
It doesn't use extra memory
Cons
Running time is greater than O(n)
Can you guys think of any better solution?
The question is a little ambiguous; when the request is "which one," does it mean return the value that is duplicated, or the position in the sequence of the duplicated one? If the former, any of the following three solutions will work; if it is the latter, the first is the only one that will help.
Solution #1: assumes array is immutable
Build a bitmap; set the nth bit as you iterate through the array. If the bit is already set, you've found a duplicate. It runs in linear time and will work for an array of any size.
The bitmap would be created with as many bits as there are possible values in the array. As you iterate through the array, you check the nth bit in the bitmap. If it is set, you've found your duplicate. If it isn't, then set it. (Logic for doing this can be seen in the pseudo-code in this Wikipedia entry on Bit arrays, or use the System.Collections.BitArray class.)
Solution #2: assumes array is mutable
Sort the array, and then do a linear search until the current value equals the previous value. Uses the least memory of all. Bonus points for altering the sort algorithm to detect the duplicate during a comparison operation and terminating early.
Solution #3: (assumes array length = 1,000,001)
Sum all of the integers in the array.
From that, subtract the sum of the integers 1 through 1,000,000 inclusive.
What's left will be your duplicated value.
This takes almost no extra memory and can be done in one pass if you calculate the sums at the same time.
The disadvantage is that you need to do the entire loop to find the answer.
The advantages are simplicity, and a high probability it will in fact run faster than the other solutions.
Assuming all the numbers from 1 to 1,000,000 are in the array, the sum of all numbers from 1 to 1,000,000 is (1,000,000)*(1,000,000 + 1)/2 = 500,000 * 1,000,001 = 500,000,500,000.
So just add up all the numbers in the array, subtract 500,000,500,000, and you'll be left with the number that occured twice.
O(n) time, and O(1) memory.
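A short Python sketch of that summation trick (assuming the array holds every value from 1 to n exactly once plus one extra copy of the duplicate; the function name is illustrative):

def find_duplicate_by_sum(arr, n):
    expected = n * (n + 1) // 2   # sum of 1..n
    return sum(arr) - expected    # the surplus is the duplicated value

print(find_duplicate_by_sum([1, 2, 3, 4, 5, 3], n=5))  # 3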
If the assumption isn't true, you could try using a Bloom filter: they can be stored much more compactly than a hash table (since they only store the fact of presence), but they do run the risk of false positives. This risk can be bounded, though, by our choice of how much memory to spend on the Bloom filter.
We can then use the bloom filter to detect potential duplicates in O(n) time and check each candidate in O(n) time.
This Python code is a modification of QuickSort:
def findDuplicate(arr):
    orig_len = len(arr)
    if orig_len <= 1:
        return None
    pivot = arr.pop(0)
    greater = [i for i in arr if i > pivot]
    lesser = [i for i in arr if i < pivot]
    if len(greater) + len(lesser) != orig_len - 1:
        return pivot
    else:
        return findDuplicate(lesser) or findDuplicate(greater)
It finds a duplicate in O(n log n), I think. It uses extra memory on the stack, but I believe it can be rewritten to use only one copy of the original data:
def findDuplicate(arr):
    orig_len = len(arr)
    if orig_len <= 1:
        return None
    pivot = arr.pop(0)
    greater = [arr.pop(i) for i in reversed(range(len(arr))) if arr[i] > pivot]
    lesser = [arr.pop(i) for i in reversed(range(len(arr))) if arr[i] < pivot]
    if len(arr):
        return pivot
    else:
        return findDuplicate(lesser) or findDuplicate(greater)
The list comprehensions that produce greater and lesser destroy the original with calls to pop(). If arr is not empty after removing greater and lesser from it, then there must be a duplicate and it must be pivot.
The code suffers from the usual stack overflow problems on sorted data, so either a random pivot or an iterative solution which queues the data is necessary:
def findDuplicate(full):
    import copy
    q = [full]
    while len(q):
        arr = copy.copy(q.pop(0))
        orig_len = len(arr)
        if orig_len > 1:
            pivot = arr.pop(0)
            greater = [arr.pop(i) for i in reversed(range(len(arr))) if arr[i] > pivot]
            lesser = [arr.pop(i) for i in reversed(range(len(arr))) if arr[i] < pivot]
            if len(arr):
                return pivot
            else:
                q.append(greater)
                q.append(lesser)
    return None
However, now the code needs to take a deep copy of the data at the top of the loop, changing the memory requirements.
So much for computer science. The naive algorithm clobbers my code in Python, probably because of Python's sorting algorithm:
def findDuplicate(arr):
    arr = sorted(arr)
    prev = arr.pop(0)
    for element in arr:
        if element == prev:
            return prev
        else:
            prev = element
    return None
Rather than sorting the array and then checking, I would suggest writing an implementation of a comparison sort function that exits as soon as the dup is found, leading to no extra memory requirement (depending on the algorithm you choose, obviously) and a worst case O(nlogn) time (again, depending on the algorithm), rather than a best (and average, depending...) case O(nlogn) time.
E.g. An implementation of in-place merge sort.
http://en.wikipedia.org/wiki/Merge_sort
Hint: Use the property that A XOR A == 0, and 0 XOR A == A.
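A small Python sketch of how that hint applies here (assuming the array holds 1..n plus one extra copy of the duplicate; the function name is mine):

from functools import reduce
from operator import xor

def find_duplicate_xor(arr, n):
    # XOR of the array with XOR of 1..n cancels every value that appears
    # exactly once (A ^ A == 0 and 0 ^ A == A), leaving only the duplicate.
    return reduce(xor, arr, 0) ^ reduce(xor, range(1, n + 1), 0)

print(find_duplicate_xor([1, 2, 3, 4, 5, 3], n=5))  # 3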
As a variant of your solution (2), you can use radix sort. No extra memory, and it will run in linear time. You can argue that time is also affected by the size of the number representation, but you have already given bounds for that: radix sort runs in time O(k n), where k is the number of digits, sorting on one digit per pass. That makes the whole algorithm O(7n) for sorting plus O(n) for checking the duplicated number, which is O(8n) = O(n).
Pros:
No extra memory
O(n)
Cons:
Need eight O(n) passes.
And how about the problem of finding ALL duplicates? Can this be done in less than O(n ln n) time? (Sort & scan.) (If you want to restore the original array, carry along the original index and reorder after the end, which can be done in O(n) time.)
from functools import reduce  # needed in Python 3
def singleton(array):
    return reduce(lambda x, y: x ^ y, array)
Sort the integers by putting each one in the place where it should be. If you get a "collision", then you have found the duplicated number.
Space complexity: O(1) (the same space is just overwritten).
Time complexity: less than O(n) on average, because statistically you will find the collision before reaching the end.
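For illustration, a rough Python sketch of that in-place idea (assuming values 1..n with exactly one duplicate; value v belongs at index v - 1, and a collision while swapping reveals the duplicate):

def find_duplicate_in_place(arr):
    # Modifies arr in place; O(1) extra space.
    i = 0
    while i < len(arr):
        v = arr[i]
        if v == i + 1:
            i += 1                                      # already in its home slot
        elif arr[v - 1] == v:
            return v                                    # home slot already holds v: duplicate found
        else:
            arr[i], arr[v - 1] = arr[v - 1], arr[i]     # send v to its home slot
    return None

print(find_duplicate_in_place([3, 1, 3, 4, 2]))  # 3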