Algorithm for intersection of n arrays in C

I need to write a function which returns the intersection (AND condition) of all the arrays generated in each iteration over an array of queries.
If my query is given by query[] = {"num>20", "avg==5", "deviation != 0.5"}, then n runs from 0 to the length of query. Each query is passed to a function (get_sample_ids) which compares the condition against a list of samples holding certain information. The numbers in the array returned by get_sample_ids are the indices of the matching samples.
query[] = {"num>20", "avg==5", "deviation != 0.5"}
int *intersected_array;
for n = 0 : query.length-1
    int *arr = get_sample_ids(query[n]);
    // n=0: [1, 7, 4, 2, 6]
    // n=1: [3, 6, 2]
    // n=2: [6, 2]
end;
Expected output: intersected_array = [6, 2]
I've coded an implementation which uses 2 arrays (arr, temp). Every array returned in an iteration is first stored in the temp array, and the intersection of arr and temp is stored back in arr. Is this an optimal solution, or what is the best approach?

This should be quite efficient, but it could be tiresome to implement (I haven't tried it).
Determine the shortest array. A benefit of using C is that if you don't know the arrays' lengths, you can use pointer arithmetic to determine them, provided the arrays are laid out sequentially in memory.
Make an <entry, boolean> hash map for the entries of the shortest array. We know its size, and if anything it only shrinks in the following steps.
Iterate through the next array. Start by initializing the whole map to false. For each entry, check whether it is in the map.
Iterate through the map, deleting every entry that wasn't checked. Then set all values back to false.
If there are any arrays left, go back to step 3 with the next array.
The result is the set of keys in the final map.
It looks like a lot, but we never had to resort to anything of high complexity. The key to good performance is the hash map, with its constant access time.
Alternatively:
Make the map <entry, int>. That way you can count up all the occurrences and don't have to reset the map at every iteration, which adds to the complexity.
At the end, just compare the number of arrays to the values in the map. The entries that match are your solution.
Complexity:
Seems like O(n) in the total number of elements.
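The <entry, int> variant is only a few lines. Here is a hedged sketch in Python (the question is about C, but the shape is the same; it assumes each per-query ID array contains no duplicates, so a total count equal to the number of arrays means the ID appeared in every one):

```python
def intersect_all(arrays):
    # Count how many arrays each value appears in.
    # Assumes no single array contains the same value twice.
    counts = {}
    for arr in arrays:
        for v in arr:
            counts[v] = counts.get(v, 0) + 1
    # Keep only the values seen in every array.
    return [v for v, c in counts.items() if c == len(arrays)]
```

With the sample queries above, intersect_all([[1, 7, 4, 2, 6], [3, 6, 2], [6, 2]]) yields [2, 6], i.e. the expected {6, 2}.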

First I would sort the arrays in ascending order, which makes the tasks easier to perform.
You could also zero-pad the arrays so that they are all the same size:
[1, 2, 0, 4, 0, 0, 6, 7]
[0, 2, 3, 4, 0, 0, 6, 7]
[0, 2, 0, 0, 0, 0, 6, 0]
Like a matrix, so you can easily find the intersection by comparing columns.
All of this will take a lot of run time, though.
Enjoy.
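For what it's worth, once the arrays are sorted you don't need any padding: a two-pointer merge walks two sorted arrays in linear time. A sketch (my own illustration, not the matrix scheme above):

```python
def intersect_sorted(a, b):
    # a and b must be sorted ascending; advance whichever
    # pointer currently sits on the smaller value.
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out
```

Folding this pairwise over all n arrays gives the full intersection.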

Here is a jQuery implementation of @ZbyszekKr's solution.
I have $indexes as an array of arrays for all characters in the English alphabet, storing which character is present in which rows. $chars is the array of characters I am trying to filter in my HTML table rows. The method below is part of a larger scheme for filtering rows as the user types, when there are more than, say, 5000 rows in your table.
PS - There are some obvious redundancies, but those are necessary for the plugin I am making.
function intersection($indexes, $chars) {
    var map = {};
    var $minLength = Number.MAX_SAFE_INTEGER, $minIdx = 0;
    // get shortest array
    $.each($chars, function(key, c) {
        var $index = getDiffInNum(c, $initialChar);
        var $len = $indexes[$index].rows.length;
        if ($len < $minLength) {
            $minLength = $len;
            $minIdx = $index;
        }
    });
    // put that array's values in the map
    var $minCount = 1;
    $.each($indexes[$minIdx].rows, function(key, val) {
        map[val] = $minCount;
    });
    // iterate through the other arrays to count occurrences of each element
    $.each($chars, function(key, c) {
        var $index = getDiffInNum(c, $initialChar);
        if ($index != $minIdx) {
            var $array = $indexes[$index].rows;
            $.each($array, function(key, val) {
                if (val in map) {
                    map[val] = map[val] + 1;
                }
            });
            // drop entries that were not seen in this array
            $.each(map, function(key, val) {
                if (val == $minCount) {
                    delete map[key];
                }
            });
            $minCount++;
        }
    });
    // get the elements which belong in the intersection
    var $intersect = [];
    $.each(map, function(key, val) {
        if (val == $chars.length) {
            $intersect.push(parseInt(key));
        }
    });
    return $intersect;
}


Permuting an array with a given order without making a copy of the Array or a change to the order

I already found almost the same question asked here. But I need to do it a bit more complicated.
So here is the problem. You have an array with elements and another array with the specific order the elements of the first array should be in. Here is an example:
int[] a = {5, 35, 7, 2, 7};
int[] order = {3, 0, 2, 4, 1};
After the algorithm a should look like this:
a = {2, 5, 7, 7, 35};
The array named order must not be changed in any way, and all copies of an array are forbidden. Only constant extra variables, like a normal integer, are allowed.
Note that this problem is not tied to a specific language. The answer should be in a pseudocode-like language, just understandable.
So does anyone here have an idea? I have been sitting in front of this problem for 3 days now and hope to get some help, because I think I am really stuck.
Thank you in advance.
Given the ranges of numbers shown, you could:
Add a large multiple of the corresponding order value to each item of a (the multiplier must exceed every item; the code below uses max(a) + 1).
Sort a in descending order.
Replace every item of a by the item modulo the multiplier.
Some Python:
a = [5, 35, 7, 2, 7]
order = [3, 0, 2, 4, 1]
mult = max(a) + 1
a = [a_item + order_item * mult
for a_item, order_item in zip(a, order)]
a.sort(reverse=True)
a = [a_item % mult for a_item in a]
print(a) # [2, 5, 7, 7, 35]
I should emphasize that it works for the numbers shown; negatives and overflow considerations may limit more general applicability.
The permutation defined by order consists of one or more cycles. It is straightforward to apply one cycle to array a, but the challenge is to somehow know which array elements belong to a cycle that you already processed in that way. If there is a way to mark visited elements, like with an extra bit, then that problem is solved. But using an extra bit is a cover-up for an array with additional data. So that must be ruled out.
When no possibility exists to perform such marking, there is still a way out: only perform the cycle operation on array a when you are at the left-most index of that cycle (or the right-most). The downside is that at every index you need to walk the cycle that index belongs to, to check whether you are indeed at its left-most position or not. This means you'll traverse the same cycle several times.
Here is how that looks in JavaScript:
function isLeftOfCycle(order, i) {
let j = order[i];
while (j > i) {
j = order[j];
}
return (j === i); // a boolean
}
function applyCycle(arr, order, i) {
let temp = arr[i];
let k = i;
let j = order[i];
while (j > i) {
arr[k] = arr[j];
k = j;
j = order[j];
}
arr[k] = temp;
}
function sort(a, order) {
for (let i = 0; i < order.length; i++) {
if (isLeftOfCycle(order, i)) {
applyCycle(a, order, i);
}
}
}
// Example run:
let a = [5, 35, 7, 2, 7];
let order = [3, 0, 2, 4, 1];
sort(a, order);
console.log(a);
Obviously, this comes at a price: the time complexity is no longer O(n), but O(n²).

How do I generate random numbers from an array without repetition?

I know similar questions have been asked before, but bear with me.
I have an array:
int [] arr = {1,2,3,4,5,6,7,8,9};
I want numbers to be generated randomly 10 times. Something like this:
4,6,8,2,4,9,3,8,7
Although some numbers are repeated, no number is generated more than once in a row. So not like this:
7,3,1,8,8,2,4,9,5,6
As you can see, the number 8 was repeated immediately after it was generated. This is not the desired effect.
So basically, I'm ok with a number being repeated as long as it doesn't appear more than once in a row.
Generate a random number.
Compare it to the last number you generated
If it is the same; discard it
If it is different, add it to the array
Return to step 1 until you have enough numbers
generate a random index into the array.
repeat until it's different from the last index used.
pull the value corresponding to that index out of the array.
repeat from beginning until you have as many numbers as you need.
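Both recipes above amount to rejection sampling; a sketch in Python (assuming the array's values are distinct, as in the question):

```python
import random

def generate_no_repeats(arr, count):
    # Draw values, discarding any draw equal to the previous output.
    out = []
    last = None
    while len(out) < count:
        v = arr[random.randrange(len(arr))]
        if v == last:
            continue  # same as the last number: discard and retry
        out.append(v)
        last = v
    return out
```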
While the answers posted are not bad and would work well, someone might not be pleased with them, as it is possible (though incredibly unlikely) for them to loop for a long time if you keep drawing the same number.
An algorithm that deals with this "problem" while preserving the distribution of numbers would be:
Pick a random number from the original array; let's call it n, and output it.
Make an array of all elements but n.
Generate a random index into the shorter array. Swap the element at that index with n. Output n.
Repeat the last step until enough numbers have been output.
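A sketch of this swap-based scheme in Python (my own rendering, assuming distinct values; rest always holds every value except the one just output, so each draw is uniform over the other n-1 values and can never repeat the previous output):

```python
import random

def generate_swapping(arr, count):
    rest = list(arr)
    i = random.randrange(len(rest))
    current = rest.pop(i)      # first output; rest = everything else
    out = [current]
    while len(out) < count:
        j = random.randrange(len(rest))
        rest[j], current = current, rest[j]  # old output rejoins the pool
        out.append(current)
    return out
```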
int[] arr = {1, 2, 3, 4, 5, 6, 7, 8, 9};
int[] result = new int[10];
int previousChoice = -1;
int i = 0;
while (i < 10) {
    int randomIndex = (int) (Math.random() * arr.length);
    if (arr[randomIndex] != previousChoice) {
        result[i] = arr[randomIndex];
        previousChoice = arr[randomIndex]; // remember it, or repeats slip through
        i++;
    }
}
The solutions given so far all involve non-constant work per generated number; if you repeatedly generate indices and test for repetition, you could conceivably generate the same index many times before finally getting a new one. (An exception is Kiraa's answer, but that one involves high constant overhead to make copies of partial arrays.)
The best solution here (assuming you want unique indices, not unique values, and/or that the source array has unique values) is to cycle the indices so you always generate a new index in (low) constant time.
Basically, you'd have a loop like this (using Python mostly for brevity):
# randrange(x, y) generates an int in range x to y-1 inclusive
from random import randrange

arr = [1, 2, 3, 4, 5, 6, 7, 8, 9]
result = []
selectidx = 0
randstart = 0
for _ in range(10):  # Runs loop body 10 times
    # Generate offset from last selected index (randstart is initially 0,
    # allowing any index to be selected; on subsequent loops it's 1,
    # preventing repeated selection of the last index)
    offset = randrange(randstart, len(arr))
    randstart = 1
    # Add offset to last selected index and wrap so we cycle around the array
    selectidx = (selectidx + offset) % len(arr)
    # Append element at newly selected index
    result.append(arr[selectidx])
This way, each generation step is guaranteed to require no more than one new random number, with the only constant additional work being a single addition and remainder operation.

Finding count of distinct elements in every k subarray

How to solve this question efficiently?
Given an array of size n and an integer k we need to return the sum of count of all distinct numbers in a window of size k. The window slides forward.
e.g. arr[] = {1,2,1,3,4,2,3};
Let k = 4.
The first window is {1,2,1,3}; the count of distinct numbers is 2 (1 is repeated, so it is not counted)
The second window is {2,1,3,4} count of distinct numbers is 4
The third window is {1,3,4,2} count of distinct numbers is 4
The fourth window is {3,4,2,3} count of distinct numbers is 2
You should keep track of
a map that counts frequencies of elements in your window
a current sum.
The map with frequencies can also be an array if the possible elements are from a limited set.
Then when your window slides to the right...
increase the frequency of the new number by 1.
if that frequency is now 1, add it to the current sum.
decrease the frequency of the old number by 1.
if that frequency is now 0, subtract it from the current sum.
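The steps above can be sketched as follows. Note that this computes the standard per-window distinct count, which for the first example window {1,2,1,3} is 3; the comment below points out that the question itself counts only values occurring exactly once.

```python
def distinct_per_window(arr, k):
    freq = {}    # value -> frequency within the current window
    current = 0  # number of distinct values in the window
    counts = []
    for v in arr[:k]:                 # build the first window
        freq[v] = freq.get(v, 0) + 1
        if freq[v] == 1:
            current += 1
    counts.append(current)
    for i in range(k, len(arr)):      # slide the window one step right
        new, old = arr[i], arr[i - k]
        freq[new] = freq.get(new, 0) + 1
        if freq[new] == 1:            # first occurrence entered
            current += 1
        freq[old] -= 1
        if freq[old] == 0:            # last occurrence left
            current -= 1
        counts.append(current)
    return counts
```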
Actually, I am the asker of the question. I am not answering it; I just wanted to comment on the answers, but I can't, since I have too little reputation.
I think that for {1, 2, 1, 3} and k = 4, the given algorithms produce count = 3, but according to the question, the count should be 2 (since 1 is repeated, only 2 and 3 are counted).
You can use a hash table H to keep track of the window as you iterate over the array. You also keep an additional field for each entry in the hash table that tracks how many times that element occurs in your window.
You start by adding the first k elements of arr to H. Then you iterate through the rest of arr and you decrease the counter field of the element that just leaves the windows and increase the counter field of the element that enters the window.
At any point (including the initial insertion into H), if a counter field turns to 1, you increase the number of distinct elements in your window. This can happen when the last-but-one occurrence of an element leaves the window or when a first occurrence enters it. Conversely, whenever a counter field leaves the value 1 (dropping to 0 or rising to 2), you decrease that count.
This is a linear solution in the number of elements in arr. Hashing integers can be done like this, but depending on the language you use to implement your solution you might not really need to hash them yourself. In case the range in which the elements of arr reside in is small enough, you can use a simple array instead of the hash table, as the other contributors suggested.
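Under the question's counting rule (only values occurring exactly once in the window are counted), the counter updates described above look like this; a sketch:

```python
def exactly_once_per_window(arr, k):
    freq = {}
    once = 0  # values occurring exactly once in the current window

    def bump(v, d):
        # Adjust frequency of v by d and maintain the exactly-once count.
        nonlocal once
        before = freq.get(v, 0)
        freq[v] = before + d
        if freq[v] == 1:
            once += 1      # v just became unique in the window
        elif before == 1:
            once -= 1      # v just stopped being unique

    counts = []
    for v in arr[:k]:
        bump(v, +1)
    counts.append(once)
    for i in range(k, len(arr)):
        bump(arr[i], +1)       # element entering the window
        bump(arr[i - k], -1)   # element leaving the window
        counts.append(once)
    return counts
```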
This is how I solved the problem
private static int[] getSolve(int[] A, int B) {
    Map<Integer, Integer> map = new HashMap<>();
    for (int i = 0; i < B; i++) {
        map.put(A[i], map.getOrDefault(A[i], 0) + 1);
    }
    List<Integer> res = new ArrayList<>();
    res.add(map.size());
    // 4, 1, 3, 1, 5, 2, 5, 6, 7
    // 3, 1, 5, 2, 5, 6  count = 5
    for (int i = B; i < A.length; i++) {
        if (map.containsKey(A[i - B]) && map.get(A[i - B]) == 1) {
            map.remove(A[i - B]);
        } else if (map.containsKey(A[i - B])) {
            map.put(A[i - B], map.get(A[i - B]) - 1);
        }
        map.put(A[i], map.getOrDefault(A[i], 0) + 1);
        System.out.println(map.toString());
        res.add(map.size());
    }
    return res.stream().mapToInt(i -> i).toArray();
}

Find if one integer array is a permutation of other

Given two integer arrays of size N, design an algorithm to determine whether one is a permutation of the other. That is, do they contain exactly the same entries but, possibly, in a different order.
I can think of two ways:
Sort them and compare: O(N log N + N).
Check if the arrays have the same number of integers and the sums of the integers are the same; then XOR both arrays and see if the result is 0. This is O(N). I am not sure if this method eliminates false positives completely. Thoughts? Better algorithms?
Check if the array have same number of integers and the sum of these integers is same, then XOR both the arrays and see if the result is 0.
This doesn't work. Example:
a = [1,6] length(a) = 2, sum(a) = 7, xor(a) = 7
b = [3,4] length(b) = 2, sum(b) = 7, xor(b) = 7
Others have already suggested HashMap for an O(n) solution.
Here's an O(n) solution in C# using a Dictionary<T, int>:
bool IsPermutation<T>(IList<T> values1, IList<T> values2)
{
    if (values1.Count != values2.Count)
    {
        return false;
    }
    Dictionary<T, int> counts = new Dictionary<T, int>();
    foreach (T t in values1)
    {
        int count;
        counts.TryGetValue(t, out count);
        counts[t] = count + 1;
    }
    foreach (T t in values2)
    {
        int count;
        if (!counts.TryGetValue(t, out count) || count == 0)
        {
            return false;
        }
        counts[t] = count - 1;
    }
    return true;
}
In Python you could use the Counter class:
>>> a = [1, 4, 9, 4, 6]
>>> b = [4, 6, 1, 4, 9]
>>> c = [4, 1, 9, 1, 6]
>>> d = [1, 4, 6, 9, 4]
>>> from collections import Counter
>>> Counter(a) == Counter(b)
True
>>> Counter(c) == Counter(d)
False
The best solution is probably a counting one using a map whose keys are the values in your two arrays.
Go through one array creating/incrementing the appropriate map location and go through the other one creating/decrementing the appropriate map location.
If the resulting map consists entirely of zeros, your arrays are equal.
This is O(N), and I don't think you can do better.
I suspect this is approximately what Mark Byers was going for in his answer.
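A sketch of that counting map in Python (a hedged illustration; the Counter comparison above does effectively the same thing):

```python
def is_permutation(a, b):
    # Increment counts for one array, decrement for the other;
    # all-zero counts mean the multisets are equal.
    if len(a) != len(b):
        return False
    counts = {}
    for x in a:
        counts[x] = counts.get(x, 0) + 1
    for y in b:
        counts[y] = counts.get(y, 0) - 1
    return all(c == 0 for c in counts.values())
```

This also rejects the XOR counterexample above, since [1, 6] and [3, 4] produce non-zero counts.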
If a space complexity of O(n) is not a problem, you can do it in O(n), by first storing in a hash map the number of occurrences for each value in the first array, and then running a second pass on the second array and check that every element exists in the map, decrementing the number the occurrences for each element.
Sort the contents of both arrays numerically, and then compare each nth item.
You could also take each item in array1 and check if it is present in array2, keeping a count of how many matches you find; at the end, the number of matches should equal the length of the arrays. (Note that this simple check is only reliable when the arrays contain no duplicate values.)

Remove duplicates from Array without using Hash Table

I have an array which might contain duplicate elements (possibly more than two duplicates of an element). I wonder if it's possible to find and remove the duplicates in the array:
without using a hash table (strict requirement)
without using a temporary secondary array. No restrictions on complexity.
P.S.: This is not a homework question; it was asked of my friend in a Yahoo technical interview.
Sort the source array. Find consecutive elements that are equal. (I.e. what std::unique does in C++ land). Total complexity is N lg N, or merely N if the input is already sorted.
To remove duplicates, you can copy elements from later in the array over elements earlier in the array also in linear time. Simply keep a pointer to the new logical end of the container, and copy the next distinct element to that new logical end at each step. (Again, exactly like std::unique does (In fact, why not just download an implementation of std::unique and do exactly what it does? :P))
O(N log N): sort, then replace runs of consecutive equal elements with a single copy.
O(N²): run a nested loop to compare each element with the remaining elements of the array; when a duplicate is found, swap it with the element at the end of the array and decrease the array size by 1.
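The O(N log N) route can be sketched in Python; the compaction loop mirrors what std::unique does (in-place overwrite, no secondary array):

```python
def dedupe_in_place(arr):
    # Sort, then compact: copy each first-of-a-run value forward.
    arr.sort()
    write = 0
    for read in range(len(arr)):
        if write == 0 or arr[read] != arr[write - 1]:
            arr[write] = arr[read]
            write += 1
    del arr[write:]  # drop the leftover tail
    return arr
```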
No restrictions on complexity.
So this is a piece of cake.
// A[1], A[2], A[3], ... A[i], ... A[n]
// O(n^2)
for(i=2; i<=n; i++)
{
    duplicate = false;
    for(j=1; j<i; j++)
        if(A[i] == A[j])
        { duplicate = true; break; }
    if(duplicate)
    {
        // "remove" A[i] by shifting every element to its right one slot left
        for(j=i; j<n; j++)
            A[j] = A[j+1];
        n--;
        i--; // re-examine position i, which now holds a new element
    }
}
In-place duplicate removal that preserves the existing order of the list, in quadratic time:
for (var i = 0; i < list.length; i++) {
    for (var j = i + 1; j < list.length;) {
        if (list[i] == list[j]) {
            list.splice(j, 1);
        } else {
            j++;
        }
    }
}
The trick is to start the inner loop on i + 1 and not increment the inner counter when you remove an element.
The code is JavaScript, splice(x, 1) removes the element at x.
If order preservation isn't an issue, then you can do it quicker:
list.sort();
for (var i = 1; i < list.length;) {
    if (list[i] == list[i - 1]) {
        list.splice(i, 1);
    } else {
        i++;
    }
}
Which is linear, unless you count the sort, which you should, so it's of the order of the sort -- in most cases n × log(n).
In functional languages you can combine sorting and unicification (is that a real word?) in one pass.
Let's take the standard quick sort algorithm:
- Take the first element of the input (x) and the remaining elements (xs)
- Make two new lists
- left: all elements in xs smaller than or equal to x
- right: all elements in xs larger than x
- apply quick sort on the left and right lists
- return the concatenation of the left list, x, and the right list
- P.S. quick sort on an empty list is an empty list (don't forget base case!)
If you want only unique entries, replace
left: all elements in xs smaller than or equal to x
with
left: all elements in xs smaller than x
This is a one-pass algorithm, O(n log n) on average (like quicksort itself, the worst case is quadratic).
Example implementation in F#:
let rec qsort = function
    | [] -> []
    | x::xs -> let left, right = List.partition (fun el -> el <= x) xs
               qsort left @ [x] @ qsort right

let rec qsortu = function
    | [] -> []
    | x::xs -> let left = List.filter (fun el -> el < x) xs
               let right = List.filter (fun el -> el > x) xs
               qsortu left @ [x] @ qsortu right
And a test in interactive mode:
> qsortu [42;42;42;42;42];;
val it : int list = [42]
> qsortu [5;4;4;3;3;3;2;2;2;2;1];;
val it : int list = [1; 2; 3; 4; 5]
> qsortu [3;1;4;1;5;9;2;6;5;3;5;8;9];;
val it : int list = [1; 2; 3; 4; 5; 6; 8; 9]
Since it's an interview question, it is usually expected that you ask the interviewer for clarifications of the problem.
With no alternative storage allowed (that is, O(1) storage, in that you'll probably use some counters / pointers), it seems obvious that a destructive operation is expected; it might be worth pointing that out to the interviewer.
Now the real question is: do you want to preserve the relative order of the elements? i.e., is this operation supposed to be stable?
Stability hugely impacts the available algorithms (and thus the complexity).
The most obvious choice is to list sorting algorithms; after all, once the data is sorted, it's pretty easy to get unique elements.
But if you want stability, you cannot actually sort the data (since you could not get the "right" order back), and thus I wonder if it is solvable in less than O(N**2) when stability is required.
This doesn't use a hash table per se, but I know that behind the scenes it's an implementation of one. Nevertheless, I thought I might post it in case it can help. This is in JavaScript and uses an associative array to record duplicates to pass over.
function removeDuplicates(arr) {
    var results = [], dups = [];
    for (var i = 0; i < arr.length; i++) {
        // check if not a duplicate
        if (dups[arr[i]] === undefined) {
            // save for next check to indicate duplicate
            dups[arr[i]] = 1;
            // is unique. append to output array
            results.push(arr[i]);
        }
    }
    return results;
}
Let me do this in Python.
array1 = [1,2,2,3,3,3,4,5,6,4,4,5,5,5,5,10,10,8,7,7,9,10]
array1.sort()
print(array1)
current = None
count = 0
# overwrite the first occurrence of each value onto the front of the array
for item in array1:
    if item != current:
        array1[count] = item
        count += 1
        current = item

print(array1)          # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 5, 5, 5, 5, 6, 7, 7, 8, 9, 10, 10, 10]
print(array1[:count])  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
The most efficient method is:
array1 = [1,2,2,3,3,3,4,5,6,4,4,5,5,5,5,10,10,8,7,7,9,10]
array1.sort()
print(array1)
print([*dict.fromkeys(array1)])  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# or:
aa = list(dict.fromkeys(array1))
print(aa)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
