Finding maximum value [duplicate] - c

This question already has answers here:
Finding Maximum Value in an array
(3 answers)
Closed 7 years ago.
Ten million elements are entered into an array (no memory constraints). As we know, while entering the elements we can keep the maximum of the entered values up to date with a single check per insertion.
But imagine the maximum value sits somewhere around position 9 million.
If I then remove the 2 million elements at positions 8 million to 10 million,
we should have the next maximum value without doing any more comparisons.
Does that mean that while entering the data we should have a plan to organize it in some way, so we can get the max of the remaining data?
Deleting and inserting will keep on happening, but the new/residual maximum value should be updated in less time, with a smaller number of steps. (Using multiple stacks might help.)

You can also do this by inserting the values into the array in sorted order, i.e., on each insertion finding the right place for the new element so that the array stays in sorted form.
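A minimal sketch of that idea, assuming a std::vector<int> as the backing store (the container choice and names are mine, not the answerer's):

#include <algorithm>
#include <cstdio>
#include <vector>
// Keep `data` sorted ascending: O(log n) to find the slot, O(n) to shift.
void sorted_insert(std::vector<int>& data, int v) {
    data.insert(std::upper_bound(data.begin(), data.end(), v), v);
}
int main() {
    std::vector<int> data;
    for (int v : {5, 1, 9, 3}) sorted_insert(data, v);
    // The maximum is always the last element; deleting any block of other
    // elements leaves the new maximum at the back, with no extra comparisons.
    std::printf("max = %d\n", data.back());
}

Note the trade-off: the O(n) shift on each insertion is the price paid for the O(1) maximum.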

If block deletions are a common operation, you could maintain a hierarchy of maxima of spans of the list. For each edit you then have to update that data, but that's something like O(log n) rather than O(m log n) if you simply iterated through the list of m deletions removing them one-by-one from the heap.
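The answer doesn't pin down a concrete structure; a segment tree of span maxima is one standard realization. A sketch, under my assumption that deleting a position can be modeled as overwriting it with INT_MIN:

#include <algorithm>
#include <climits>
#include <vector>
// Iterative segment tree of maxima; the root tree[1] is the global max.
struct MaxTree {
    int n;
    std::vector<int> tree;
    explicit MaxTree(const std::vector<int>& a)
        : n((int)a.size()), tree(2 * a.size(), INT_MIN) {
        for (int i = 0; i < n; ++i) tree[n + i] = a[i];
        for (int i = n - 1; i > 0; --i)
            tree[i] = std::max(tree[2 * i], tree[2 * i + 1]);
    }
    void erase(int i) {  // "delete" position i and repair ancestors, O(log n)
        tree[i += n] = INT_MIN;
        for (i /= 2; i >= 1; i /= 2)
            tree[i] = std::max(tree[2 * i], tree[2 * i + 1]);
    }
    int max() const { return tree[1]; }
};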

Related

Find a fixed length path of adjacent 2D array elements

I have an m x m two-dimensional array and I want to randomly select a sequence of n elements. The elements have to be adjacent (not diagonally). What is a good approach here? I thought about a depth-first search from a random starting point, but that seemed a little bit like overkill for such a simple problem.
If I get this right, you are looking for a sequence of consecutive numbers?
To simplify, take this grid:
9 4 3
0 7 2
5 6 1
So when 1 is selected, you'd like to have the path from 1 to 4, right? I personally think that depth-first search would be the best choice. It's not that hard; it's actually pretty simple. Imagine you select number 2. You remember the position of number 2, and then you can follow lower numbers for as long as there are any. When you are done with that part, you just do the same for higher numbers.
You have two stacks, one for possible ways and another one for the final path.
When going through the array, you just pop from the possibilities and push the right ones onto the final stack.
The best approach would be to first find the lowest reachable number without saving anything, and then just look for higher numbers and store them, so at the end you get a stack from the highest number down to the lowest (see the sketch below).
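A sketch of that walk under my reading of the answer (the grid type, bounds checks, and helper names are my own, and I return the values on the path rather than a stack):

#include <utility>
#include <vector>
using Grid = std::vector<std::vector<int>>;
// Return the neighbor of (r, c) holding the value `want`, or {-1, -1}.
static std::pair<int, int> neighbor_with(const Grid& g, int r, int c, int want) {
    const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    for (int k = 0; k < 4; ++k) {
        int nr = r + dr[k], nc = c + dc[k];
        if (nr >= 0 && nr < (int)g.size() && nc >= 0 && nc < (int)g[0].size() &&
            g[nr][nc] == want)
            return {nr, nc};
    }
    return {-1, -1};
}
// Collect the run of consecutive values through (r, c), lowest value first.
std::vector<int> consecutive_path(const Grid& g, int r, int c) {
    while (true) {                      // phase 1: descend to the lowest value
        auto [nr, nc] = neighbor_with(g, r, c, g[r][c] - 1);
        if (nr < 0) break;
        r = nr; c = nc;
    }
    std::vector<int> path{g[r][c]};
    while (true) {                      // phase 2: climb and record the path
        auto [nr, nc] = neighbor_with(g, r, c, g[r][c] + 1);
        if (nr < 0) break;
        r = nr; c = nc;
        path.push_back(g[r][c]);
    }
    return path;
}

On the 3x3 grid above, starting at the cell holding 1 this yields 1 2 3 4.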
If I got that wrong and you just mean selecting elements that are "touching", like 9 0 7 6 from my table, so the content doesn't matter, then you can do it simply: pick one element, store all possibilities (every element around it), and then pick a random number from 0 to the count of those stored values. When you select one, you remove it from the stored values but keep the rest. Then you run this on the new element, adding its neighbours to the stored elements, so the random pick always chooses from the surroundings of the already selected elements.
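A sketch of that frontier idea, assuming an m x m grid addressed by coordinates and that n connected cells are reachable (function and type names are mine):

#include <cstdlib>
#include <set>
#include <utility>
#include <vector>
using Cell = std::pair<int, int>;
// Randomly grow a connected set of n cells in an m x m grid.
std::vector<Cell> random_connected_cells(int m, int n) {
    std::set<Cell> chosen;
    std::vector<Cell> frontier{{std::rand() % m, std::rand() % m}};
    std::vector<Cell> result;
    const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    while ((int)result.size() < n && !frontier.empty()) {
        int i = std::rand() % (int)frontier.size();  // random stored candidate
        Cell cur = frontier[i];
        frontier.erase(frontier.begin() + i);        // remove it, keep the rest
        if (!chosen.insert(cur).second) continue;    // skip if already selected
        result.push_back(cur);
        for (int k = 0; k < 4; ++k) {                // store its surroundings
            int r = cur.first + dr[k], c = cur.second + dc[k];
            if (r >= 0 && r < m && c >= 0 && c < m && !chosen.count({r, c}))
                frontier.push_back({r, c});
        }
    }
    return result;
}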

Sorted rotated integer array, search algorithm [duplicate]

This question already has answers here:
Searching a number in a rotated sorted Array
(20 answers)
Closed 7 years ago.
An integer sorted array is rotated to the left an unknown number of times; write an efficient algorithm to search for an element.
Example: 4 5 6 7 8 9 1 2 3 4
I am thinking that every time I find the mid in the binary search, I compare that element with the element at the extreme end and decide which half to pick to repeat the process on. Is that wrong? Or is there a more efficient algorithm?
Your example array contains duplicates. When there are duplicates there is no efficient algorithm - you must always do O(n) work in the worst case.
To prove this, consider arrays of this form:
000000000000000010000000
It is a rotation of a sorted array, but in the worst case you must iterate over every element to see where the 1 is.
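For reference, when all elements are distinct, the mid-versus-end comparison the asker describes does give O(log n); a minimal sketch (mine, not part of the original answer):

#include <vector>
// Search a rotated sorted array of distinct ints; returns index or -1.
int search_rotated(const std::vector<int>& a, int target) {
    int lo = 0, hi = (int)a.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == target) return mid;
        if (a[lo] <= a[mid]) {                           // left half is sorted
            if (a[lo] <= target && target < a[mid]) hi = mid - 1;
            else lo = mid + 1;
        } else {                                         // right half is sorted
            if (a[mid] < target && target <= a[hi]) lo = mid + 1;
            else hi = mid - 1;
        }
    }
    return -1;
}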

Algorithm to detect duplication of integers in an unsorted array of n integers. (implemented with 2 nested loops) [duplicate]

This question already has answers here:
Limit input data to achieve a better Big O complexity
(3 answers)
Closed 8 years ago.
You are given an unsorted array of n integers, and you would like to find if there are any duplicates in the array (i.e. any integer appearing more than once).
Describe an algorithm (implemented with two nested loops) to do this.
My description of the algorithm:
In step 1, we write a while loop to check whether the array is empty/null; if the array is not null, we proceed with an inner loop.
In step 2, we write a for loop that runs n-1 iterations. In that loop we assign to a variable, current, the first index in the array (in the first iteration) and update current by index + 1 on each pass, which means that the first time current will hold the first index in the array, the second time it will hold the second index, and so on until the loop ends.
In step 3, we write a loop within the for loop from step 2 to compare the current number to all the other integers in the array. If an integer equals the next number, we print it using a printf statement; otherwise we update next to hold the next index in the array, compare that to the current variable, and repeat until current has been compared to all the integers in the array. Once this has been done, current is updated to store the next index of the array, and that number is in turn compared to all the integers in the array.
Would this algorithm be correct (according to the question)? Your suggestions would be appreciated. And no, it's not a homework question or such. Thank you for your time.
The number of comparisons is on the order of N * ((N + 1) / 2), so the complexity is O(N^2) in its simplified form.
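A sketch of the two-nested-loops algorithm as described (my rendering of the steps above, not a reference answer):

#include <cstdio>
#include <vector>
// Print any duplicated values using two nested loops; O(n^2) comparisons.
void report_duplicates(const std::vector<int>& a) {
    if (a.empty()) return;                               // step 1: empty check
    for (size_t cur = 0; cur + 1 < a.size(); ++cur)      // step 2: n-1 passes
        for (size_t next = cur + 1; next < a.size(); ++next)  // step 3: compare
            if (a[cur] == a[next])
                std::printf("duplicate: %d\n", a[cur]);
}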
Edit:
I have added a description of a more efficient algorithm (in the question below). But going back to the question above: would it be suitable as an answer for an exam question? (It has shown up in previous papers, so I would really appreciate your help.)
If we limit the input data in order to achieve some best case scenario, how can you limit the input data to achieve a better Big O complexity? Describe an algorithm for handling this limited data to find if there are any duplicates. What is the Big O complexity?
If we limit the data by requiring the array to be sorted, we can reduce the complexity to O(N). Take an array size of 5 (n = 5) as an example: if the array given to us is by default (or luckily) already sorted from lowest to highest value, all we need is a single loop that compares each element to the next element in the array, and this will find whether duplicates exist. The reduction from O(N^2) to O(N) comes from no longer needing the inner loop of comparisons: we implement a single loop that compares each integer to its successor, and if a duplicate is encountered we could, for instance, use a printf statement to print it; the loop iterates n-1 times (which here would be 4), and the program ends once that is done. The best case of this algorithm is O(N) simply because the running time grows linearly, in direct proportion to the size of the input: for a sorted array of size 50 (50 integers in the array), the loop iterates n-1 = 49 times, where n = 50 is the length of the array. In short, for a sorted array the time the operations take is directly dependent on the input size.
P.S. Sure, there are other algorithms that are more efficient and faster, but from my knowledge, what the question asks for is a better Big O complexity than the first algorithm, and I believe this one achieves that. (Correct me if I'm wrong.) Thanks :)
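A sketch of that single pass, assuming the sortedness precondition stated above:

#include <cstdio>
#include <vector>
// Precondition: `a` is sorted ascending. One pass, n-1 comparisons, O(n).
bool has_duplicates_sorted(const std::vector<int>& a) {
    for (size_t i = 0; i + 1 < a.size(); ++i)
        if (a[i] == a[i + 1]) {
            std::printf("duplicate: %d\n", a[i]);
            return true;
        }
    return false;
}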
You describe three loops, but the first is actually just a condition (if the array is null or empty, abort).
The remaining algorithm sounds good, except that instead of "current will hold the first index in the array" (which nitpickers would insist is always 0 in C) I'd say "current will hold the value of the first element in the array" or such.
As an aside (although I understand it's a practice assignment), it's terribly inefficient (I think n^2 is correct). I'd urge you to just have one loop over the array, copying the checked numbers into a sorted structure of some kind and doing binary searches in it. (As a teacher I'd have my students describe a balanced tree first so that they can use it here, like a virtual library ;-) )
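A sketch of that one-loop approach, using std::set as the sorted structure (the container is my choice; a hand-written balanced tree would serve the assignment better):

#include <set>
#include <vector>
// One loop; each lookup/insert in the balanced tree is O(log n) => O(n log n).
bool has_duplicates(const std::vector<int>& a) {
    std::set<int> seen;
    for (int v : a)
        if (!seen.insert(v).second)   // insertion fails: v was seen before
            return true;
    return false;
}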

Sort by Frequency

I want to sort the elements in an array by their frequency.
Input: 2 5 2 8 5 6 8 8
Output: 8 8 8 2 2 5 5 6
Now one solution to this would be:
Sort the elements using Quick sort or Merge sort. O(nlogn)
Construct a 2D array of element and count by scanning the sorted array. O(n)
Sort the constructed 2D array according to count. O(nlogn)
Among the other possible methods that I have read about, one uses a binary search tree and another uses hashing.
Could anyone suggest a better algorithm? I know the complexity can't be reduced, but I want to avoid so many traversals.
You can perform one pass over the array without sorting it, counting in a separate structure how many times you find each element. This could be done in a separate array, if you know the range of the elements you'll find, or in a hash table, if you don't. In either case this process is O(n). Then you can sort the second structure you generated (the one holding the counts), using as the sort key the count associated with each element. This second process is, as you said, O(nlogn) if you choose a proper algorithm.
For this second phase I would recommend heap sort, by means of a priority queue. You can tell the queue to order the elements by the count attribute (the one calculated in step one), and then just add the elements one by one. When you finish adding, the queue will already be sorted, and the algorithm has the desired complexity. To retrieve your elements in order you just have to start popping.
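A sketch of that pipeline with a hash table for the counting pass and a priority queue for the ordering pass (I'm assuming ties between equal counts may come out in arbitrary order):

#include <cstdio>
#include <queue>
#include <unordered_map>
#include <utility>
#include <vector>
// Print elements by decreasing frequency: O(n) count + O(k log k) heap,
// where k is the number of distinct values.
void print_by_frequency(const std::vector<int>& a) {
    std::unordered_map<int, int> count;              // pass 1: count occurrences
    for (int v : a) ++count[v];
    std::priority_queue<std::pair<int, int>> pq;     // pass 2: order by count
    for (const auto& [value, c] : count) pq.push({c, value});
    while (!pq.empty()) {                            // pop: highest count first
        auto [c, value] = pq.top(); pq.pop();
        while (c--) std::printf("%d ", value);
    }
    std::printf("\n");
}

For the sample input 2 5 2 8 5 6 8 8 this prints the three 8s first and the 6 last; whether 2 2 or 5 5 comes in between depends on the tie-breaking.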

Determine if more than half the keys in an array of size n are the same key in O(n) time? [duplicate]

This question already has answers here:
Linear Time Voting Algorithm. I don't get it
(5 answers)
Closed 10 years ago.
You have an array or list of keys of known size n. It is unknown how many unique keys there are in this list, could be as little as 0 and up to and including n. The keys are in no particular order and they really can't be, as these keys have no concept of greater than or less than, only equality or inequality. Now before you say hash map, here's one more condition that I think throws a wrench in that idea: The value of each key is private. The only information you can get about the key is whether or not it is equal to another key. So basically:
class key{
private:
T data;
...
public:
...
bool operator==(const key &k) const {return data==k.data;}
bool operator!=(const key &k) const {return data!=k.data;}
};
key array[n];
Now, is there an algorithm that can determine if more than half of the keys in the array are the same key in linear time? If not, what about O(n*log(n))? So for example say the array only has 3 unique keys. 60% of the array is populated with keys where key.data==foo, 30% key.data==bar and 10% key.data==derp. The algorithm only needs to determine that more than 50% of the keys are of the same kind (keys with data==foo) and also return one of those keys.
According to my professor it can be done in O(n) time but he says we only have to find one that can do it in O(n*log(n)) time.
If you can extract and hold any key for further comparisons, then the Boyer-Moore Majority Vote Algorithm is your friend.
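A sketch of the vote under the stated constraints, tracking the candidate by index so that only operator== is ever used; the second pass is needed because the first pass only produces a candidate (the template and names are mine):

#include <vector>
// Two O(n) passes, O(1) extra space; returns index of a majority key, or -1.
template <typename Key>   // Key needs only operator==
int majority_index(const std::vector<Key>& a) {
    int candidate = -1, count = 0;
    for (int i = 0; i < (int)a.size(); ++i) {        // pass 1: find a candidate
        if (count == 0) { candidate = i; count = 1; }
        else if (a[i] == a[candidate]) ++count;
        else --count;
    }
    if (candidate < 0) return -1;
    count = 0;                                       // pass 2: verify > 50%
    for (const Key& k : a)
        if (k == a[candidate]) ++count;
    return 2 * count > (int)a.size() ? candidate : -1;
}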
If you don't want to use the BM algorithm, you could use the two following algorithms, based on the same idea.
Algorithm a. Maintain a set S of M pairs (element, count), where M is a small part of N, for example 10. While going through the array, for each element:
1.1. If element E is in the set, increase the count of the corresponding pair: (E, count) -> (E, count+1).
1.2. If not, drop the pair with the minimal count (once S holds M pairs) and insert the new pair (E, 1).
If an element has frequency F > 0.5, it will be in the set at the end of this procedure with probability (very roughly; actually much higher) 1 - (1-F)^M. In a second run, calculate the actual frequencies of the elements in the set S.
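A sketch of algorithm (a) as I read it (the linear scan over S is fine because M is a small constant; names are mine):

#include <utility>
#include <vector>
// First pass: keep up to M (element, count) pairs; keys need only operator==.
template <typename Key>
std::vector<std::pair<Key, int>> candidate_set(const std::vector<Key>& a, int M) {
    std::vector<std::pair<Key, int>> S;
    for (const Key& e : a) {
        bool found = false;
        for (auto& p : S)                      // 1.1: bump the matching pair
            if (p.first == e) { ++p.second; found = true; break; }
        if (found) continue;
        if ((int)S.size() < M) { S.push_back({e, 1}); continue; }
        int min_i = 0;                         // 1.2: evict the minimal count
        for (int i = 1; i < (int)S.size(); ++i)
            if (S[i].second < S[min_i].second) min_i = i;
        S[min_i] = {e, 1};
    }
    return S;  // second pass (not shown): count true frequencies of survivors
}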
Algorithm b. Take N series, each of M randomly picked elements of the array. In each series select the most frequent element by any method, calculate its frequency, and average those frequencies over the series. The maximal error of this frequency estimate is about F / sqrt(N), where F is the real frequency. So if the estimated frequency F_e satisfies F_e * (1 - 1.0 / sqrt(N)) > 0.5, you have found the most frequent element; if F_e * (1 + 1.0 / sqrt(N)) < 0.5, this element doesn't exist.
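A sketch of the sampling pass under my reading of algorithm (b) (the within-series count is a brute-force O(M^2) scan, which needs only equality; applying the 1/sqrt(N) margin is left to the caller):

#include <cstdlib>
#include <vector>
// Estimate the highest frequency from N random series of length M each.
template <typename Key>
double estimated_top_frequency(const std::vector<Key>& a, int N, int M) {
    double sum = 0.0;
    for (int s = 0; s < N; ++s) {
        std::vector<const Key*> series;
        for (int i = 0; i < M; ++i)
            series.push_back(&a[std::rand() % a.size()]);
        int best = 0;                  // most frequent element in this series
        for (int i = 0; i < M; ++i) {
            int c = 0;
            for (int j = 0; j < M; ++j)
                if (*series[j] == *series[i]) ++c;
            if (c > best) best = c;
        }
        sum += (double)best / M;
    }
    return sum / N;                    // compare against 0.5 with the margin
}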
One solution that comes to my mind: pick the first element from the array, traverse the list, and put all the matching elements into a separate array list. Then pick the second element from the original list and compare it with the first; if they are equal, leave this element and pick the next one. This could be a possible solution.
