Find the majority element in array - arrays
The majority element is the element that occurs more than ⌊n / 2⌋ times, where n is the size of the array.
How to find the majority element in an array in O(n)?
Example input:
{2,1,2,3,4,2,1,2,2}
Expected output:
2
// returns -1 if there is no element that is the majority element, otherwise that element
// funda :: if there is a majority element in an array, say x, then it is okay to discard
// a part of that array that has no majority element, the remaining array will still have
// x as the majority element
// worst case complexity : O(n)
int findMajorityElement(int* arr, int size) {
    int count = 0, i, majorityElement;
    for (i = 0; i < size; i++) {
        if (count == 0)
            majorityElement = arr[i];
        if (arr[i] == majorityElement)
            count++;
        else
            count--;
    }
    count = 0;
    for (i = 0; i < size; i++)
        if (arr[i] == majorityElement)
            count++;
    if (count > size/2)
        return majorityElement;
    return -1;
}
It is sad to see that in 5 years no one has written a proper explanation for this problem.
This is a standard problem in streaming algorithms (where you have a huge (potentially infinite) stream of data) and you have to calculate some statistics from this stream, passing through this stream once.
Clearly you can approach it with hashing or sorting, but with a potentially infinite stream you can clearly run out of memory. So you have to do something smart here.
The majority element is the element that occurs more than half of the size of the array. This means that the majority element occurs more than all the other elements combined. That is, if you count the number of times the majority element appears, and subtract the number of occurrences of all the other elements, you will get a positive number.
So if you count the occurrences of some element, subtract the number of occurrences of all other elements, and get 0, then your original element can't be a majority element. This is the basis for a correct algorithm:
Declare two variables, counter and possible_element. Iterate over the stream: if the counter is 0, you overwrite the possible element and initialize the counter; if the number is the same as the possible element, increase the counter; otherwise decrease it. Python code:
def majority_element(arr):
    counter, possible_element = 0, None
    for i in arr:
        if counter == 0:
            possible_element, counter = i, 1
        elif i == possible_element:
            counter += 1
        else:
            counter -= 1
    return possible_element
It is easy to see that the algorithm is O(n) with a very small constant (something like 3). It also looks like the space complexity is O(1), because only three variables are initialized. The catch is that one of these variables is a counter which can potentially grow up to n (when the array consists of the same number repeated), and storing the number n takes O(log(n)) space. So from a theoretical point of view it is O(n) time and O(log(n)) space. From a practical point of view, a fixed-width integer can count far more elements than any real array will ever hold, so the counter's size is a non-issue.
Also note that the algorithm only works if a majority element exists. If no such element exists, it will still return some number, which will surely be wrong. (It is easy to modify the algorithm to also tell whether the majority element exists; see the sketch below.)
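Here is a minimal sketch of that modification in Python (this assumes we may take a second pass over the data, so it no longer fits the single-pass streaming setting; the name majority_element_checked is made up for illustration):

def majority_element_checked(arr):
    # First pass: Boyer-Moore voting, exactly as above.
    counter, possible_element = 0, None
    for i in arr:
        if counter == 0:
            possible_element, counter = i, 1
        elif i == possible_element:
            counter += 1
        else:
            counter -= 1
    # Second pass: verify that the candidate really is a majority.
    if possible_element is not None and arr.count(possible_element) > len(arr) // 2:
        return possible_element
    return None  # no majority element exists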
History channel: this algorithm was invented around 1982 by Boyer and Moore and is called the Boyer–Moore majority vote algorithm.
The majority element (if it exists) will also be the median. We can find the median in O(n) and then check that it is indeed a valid majority element in O(n).
More details for implementation link
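A rough sketch of this median-based approach in Python (hedged: quickselect and majority_via_median are made-up names; this simple out-of-place quickselect runs in expected O(n) time but uses O(n) extra space, whereas an in-place selection algorithm would avoid that):

import random

def quickselect(a, k):
    # k-th smallest element (0-indexed) of a non-empty list, expected O(n) time.
    pivot = random.choice(a)
    lt = [x for x in a if x < pivot]
    eq = [x for x in a if x == pivot]
    gt = [x for x in a if x > pivot]
    if k < len(lt):
        return quickselect(lt, k)
    if k < len(lt) + len(eq):
        return pivot
    return quickselect(gt, k - len(lt) - len(eq))

def majority_via_median(arr):
    if not arr:
        return -1
    candidate = quickselect(arr, len(arr) // 2)  # only the median can be a majority element
    return candidate if arr.count(candidate) > len(arr) // 2 else -1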
Majority Element:
A majority element in an array A[] of size n is an element that appears more than n/2 times (and hence there is at most one such element).
Finding a Candidate:
The algorithm for the first phase, which works in O(n), is known as Moore's Voting Algorithm. The basic idea of the algorithm is that if we cancel out each occurrence of an element e against an occurrence of a different element, then e will survive till the end if it is a majority element.
findCandidate(a[], size)
1.  Initialize index and count of majority element
        maj_index = 0, count = 1
2.  Loop for i = 1 to size - 1
    (a) If a[maj_index] == a[i]
            count++
    (b) Else
            count--
    (c) If count == 0
            maj_index = i
            count = 1
3.  Return a[maj_index]
The above algorithm loops through each element and maintains a count of a[maj_index]: if the next element is the same it increments the count, if the next element is not the same it decrements the count, and if the count reaches 0 it changes maj_index to the current element and sets count to 1.
First Phase algorithm gives us a candidate element. In second phase we need to check if the candidate is really a majority element.
Second phase is simple and can be easily done in O(n). We just need to check if count of the candidate element is greater than n/2.
Read geeksforgeeks for more details
Time: O(n)
Space: O(n)
Walk the array and count the occurrences of the elements in a hash table.
Time: O(n lg n) or O(n*m) (depends on the sort used)
Space: O(1)
Sort the array, then count occurrences of the elements.
The interview correct answer: Moore’s Voting Algorithm
Time: O(n)
Space:O(1)
Walk the list, comparing the current number with the current best-guess number. If the number is equal to the current best guess, increment a counter; otherwise decrement the counter, and if the counter hits zero, replace the current best guess with the current number and set the counter to 1. When you get to the end, the current best guess is the candidate; walk the list again, just counting instances of the candidate. If the final count is greater than n/2 then it is the majority number, otherwise there isn't one.
How about a random sampling approach? You could sample, say sqrt(n) elements and for each element that occurred more than sqrt(n) / 4 times (can be accomplished naively in O(n) time and O(sqrt(n)) space), you could check whether it was a majority element in O(n) time.
This method finds the majority with high probability because the majority element would be sampled at least sqrt(n) / 2 times in expectation, with a standard deviation of at most n^{1/4} / 2.
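As a rough sanity check of those numbers (assuming the sqrt(n) samples are drawn independently and the majority element has frequency p > 1/2): the number of times it appears among s = sqrt(n) samples is Binomial(s, p), so its mean is s*p > sqrt(n)/2 and its standard deviation is sqrt(s*p*(1-p)) <= sqrt(s)/2 = n^{1/4}/2, which matches the figures above.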
Another sampling approach that is similar to an approach I saw in one of the duplicate links is to draw two samples, and if they are equal verify that you have found the majority element in O(n) time. The additional verification step is necessary because the other elements besides the majority may not be distinct.
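A small Python sketch of that two-sample idea (hedged: majority_by_sampling and the trials parameter are made up for illustration; this is a Monte Carlo procedure, so it can report "no majority" even when one exists, with probability shrinking geometrically in the number of trials):

import random

def majority_by_sampling(arr, trials=32):
    # Pick two random positions; if they hold the same value, verify that
    # value with a full O(n) count.  If a majority element exists, both
    # picks hit it with probability > 1/4 per trial.
    n = len(arr)
    if n == 0:
        return -1
    for _ in range(trials):
        x = arr[random.randrange(n)]
        if x == arr[random.randrange(n)] and arr.count(x) > n // 2:
            return x
    return -1  # (very likely) no majority element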
A Monte Carlo algorithm:
Majority(a, n)
// a[] - array of 'n' natural numbers
{
    j = random(0, 1, ..., n-1)
    /* select an index at random, ranging from 0 to n-1 */
    b = a[j]; c = 0;
    for k from 0 to n-1 do
    {
        if a[k] = b then
            c = c + 1;
    }
    return (c > n/2)
}
public class MajorityElement
{
    public static void main(String[] args)
    {
        int arr[] = {3, 4, 3, 5, 3, 90, 3, 3};
        for (int i = 0; i < arr.length; i++)
        {
            int count = 0;
            for (int j = 0; j < arr.length; j++)
            {
                if (arr[i] == arr[j])
                    count++;
            }
            if (count > arr.length / 2)
            {
                System.out.println("majority element: " + arr[i]);
                break;
            }
        }
    }
}
To find the majority element in an array you can use Moore's Majority Vote Algorithm, which is one of the best algorithms for this problem.
Time Complexity: O(n) or linear time
Space Complexity: O(1) or constant space
Read more at Moore's Majority Vote Algorithm and GeeksforGeeks
If you are allowed to create a hash table and assume hash-entry lookup is constant, you just hash_map each entry against its number of occurrences.
You could do a second pass through the table to get the entry with the highest count, but if you know in advance the number of elements in the array, you will know on the first pass that you have a majority element as soon as some entry hits the required count.
You cannot guarantee, of course, that there is even a sequence of 2 consecutive occurrences of the element, e.g. 1010101010101010101 has no consecutive 1s but 1 is a majority element.
We are not told anything about whether there is any kind of ordering on the element type although obviously we must be able to compare two for equality.
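For illustration, a minimal sketch of this hash-table counting in Python (the name majority_by_counting is made up; a dict plays the role of the hash table, and the early return uses the fact that at most one element can ever exceed n/2 occurrences):

def majority_by_counting(arr):
    # One pass with a dictionary of counts: O(n) time, O(n) extra space.
    counts = {}
    threshold = len(arr) // 2
    for x in arr:
        counts[x] = counts.get(x, 0) + 1
        if counts[x] > threshold:
            return x      # majority found; we can stop immediately
    return None           # no majority element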
int majorityElement(int[] num) {
    int major = num[0], count = 1;
    for (int i = 1; i < num.length; i++) {
        if (count == 0) {
            count++;
            major = num[i];
        } else if (major == num[i]) {
            count++;
        } else {
            count--;
        }
    }
    return major;
}
Time Complexity O(n)
A modified version of Boyer's Algorithm,
3 passes where,
In the first pass, we do a forward iteration of the array
In the second pass, we do a reverse iteration of the array.
In third pass, get counts for both the majority elements obtained in first and second passes.
Technically a linear complexity algorithm (O(3n)).
I believe this should work for an array with a majority element that occurs at least n/2 times.
#include <iostream>
#include <vector>

template <typename DataType>
DataType FindMajorityElement(std::vector<DataType> arr) {
    // Modified BOYER'S ALGORITHM with forward and reverse passes.
    // Count will stay positive if there is a majority element.
    auto GetMajority = [](auto seq_begin, auto seq_end) -> DataType {
        int count = 1;
        DataType majority = *(seq_begin);
        for (auto itr = seq_begin + 1; itr != seq_end; ++itr) {
            count += (*itr == majority) ? 1 : -1;
            if (count <= 0) {  // Flip the majority and reset the count whenever it drops to zero or below
                majority = *(itr);
                count = 0;
            }
        }
        return majority;
    };
    DataType majority1 = GetMajority(arr.begin(), arr.end());
    DataType majority2 = GetMajority(arr.rbegin(), arr.rend());
    size_t maj1_count = 0, maj2_count = 0;
    // Check whether either of the majority candidates is really the majority
    for (const auto& itr : arr) {
        maj1_count += majority1 == itr ? 1 : 0;
        maj2_count += majority2 == itr ? 1 : 0;
    }
    if (maj1_count >= arr.size() / 2)
        return majority1;
    if (maj2_count >= arr.size() / 2)
        return majority2;
    // else return -1
    return -1;
}
Code tested here
Thanks for the previous answers, which inspired me to read up on Bob Boyer's algo. :)
Java generic version: a modified version of Boyer's Algorithm.
Note: an array of a primitive type could use the corresponding wrapper class.
/**
 * Created by yesimroy on 11/6/16.
 */
public class FindTheMajority {

    /**
     * @param array the input array
     * @return the value of the majority element, or null if there is none
     */
    public static <E> E findTheMajority(E[] array) {
        E majority = null;
        int count = 0;
        for (int i = 0; i < array.length; i++) {
            if (count == 0) {
                majority = array[i];
            }
            if (array[i].equals(majority)) {
                count++;
            } else {
                count--;
            }
        }
        count = 0;
        for (int i = 0; i < array.length; i++) {
            if (array[i].equals(majority)) {
                count++;
            }
        }
        if (count > (array.length / 2)) {
            return majority;
        } else {
            return null;
        }
    }

    public static void main(String[] args) {
        String[] test_case1 = {"Roy", "Roy", "Roy", "Ane", "Dan", "Dan", "Ane", "Ane", "Ane", "Ane", "Ane"};
        Integer[] test_case2 = {1, 3, 2, 3, 3, 3, 3, 4, 5};

        System.out.println("test_case1_result: " + findTheMajority(test_case1));
        System.out.println("test case1: the count of the majority element should be greater than " + test_case1.length / 2);
        System.out.println();

        System.out.println("test_case2_result: " + findTheMajority(test_case2));
        System.out.println("test case2: the count of the majority element should be greater than " + test_case2.length / 2);
        System.out.println();
    }
}
//Suppose we are given an array A.
//If all the elements in the given array are less than K, then we can create an additional array B of length K+1.
//Initialize the value at each index of the array with 0.
//Then iterate through the given array A; for each value A[i], increment by 1 the value at the corresponding index A[i] in the created array B.
//After iterating through array A, iterate through array B looking for a count greater than n/2. If you find one, return that particular index.
//Time complexity will be O(n+K); if K <= n this is equivalent to O(n).
//We have a constraint here that all elements of the array are at most K.
//Assuming that each element is less than or equal to 100, in this case K is 100.
public class MajorityElement {

    private static int maxElement = 100;
    // Will have all zero values initially
    private static int arrB[] = new int[maxElement + 1];

    static int findMajorityElement(int[] arrA) {
        int i;
        int n = arrA.length;
        for (i = 0; i < n; i++) {
            arrB[arrA[i]] += 1;
        }
        for (i = 0; i < arrB.length; i++) {
            if (arrB[i] > n / 2) {
                return i;
            }
        }
        return -1; // no majority element
    }

    public static void main(String[] args) {
        int arr[] = {2, 6, 3, 2, 2, 3, 2, 2};
        System.out.println(findMajorityElement(arr));
    }
}
This will help you; if two elements repeat the same number of times it will return -1 (none).
int findCandidate(int a[], int size)
{
    int count, temp = 0, i, j, maj = -1;
    for (i = 0; i < size; i++) {
        count = 0;
        for (j = i; j < size; j++)
        {
            if (a[j] == a[i])
                count++;
        }
        if (count > temp)
        {
            temp = count;
            maj = i;       /* index of the most frequent element so far */
        }
        else if (count == temp)
        {
            maj = -1;      /* tie: no single most frequent element */
        }
    }
    return maj;
}
This is how I do it in C++ using a vector and a multimap (like a JSON object with repeated keys).
#include <iostream>
#include <vector>
#include <algorithm>
#include <map>
#include <iterator>
using namespace std;
vector<int> majorityTwoElement(vector<int> nums) {
    // declare variables
    multimap<int, int> nums_map;
    vector<int> ret_vec, nums_unique(nums);
    int count = 0;
    bool debug = false;

    try {
        // get vector of unique numbers and sort them
        sort(nums_unique.begin(), nums_unique.end());
        nums_unique.erase(unique(nums_unique.begin(), nums_unique.end()), nums_unique.end());

        // create map of numbers and their count
        for (size_t i = 0; i < nums_unique.size(); i++) {
            // get number
            int num = nums_unique.at(i);
            if (debug) {
                cout << "num = " << num << endl;
            }

            // get count of number
            count = 0; // reset count
            for (size_t j = 0; j < nums.size(); j++) {
                if (num == nums.at(j)) {
                    count++;
                }
            }

            // insert number and its count into map (sorted in ascending order by default)
            if (debug) {
                cout << "num = " << num << "; count = " << count << endl;
            }
            nums_map.insert(pair<int, int>(count, num));
        }

        // print map
        if (debug) {
            for (const auto &p : nums_map) {
                cout << "nums_map[" << p.first << "] = " << p.second << endl;
            }
        }

        // create return vector
        if (!nums_map.empty()) {
            // get data
            auto it = prev(nums_map.end(), 1);
            auto it1 = prev(nums_map.end(), 2);
            int last_key = it->first;
            int second_last_key = it1->first;

            // handle data
            if (last_key == second_last_key) { // tie for repeat count
                ret_vec.push_back(it->second);
                ret_vec.push_back(it1->second);
            } else { // no tie
                ret_vec.push_back(it->second);
            }
        }
    } catch (const std::exception& e) {
        cerr << "e.what() = " << e.what() << endl;
        throw -1;
    }

    return ret_vec;
}

int main() {
    vector<int> nums = {2, 1, 2, 3, 4, 2, 1, 2, 2};

    try {
        // get vector
        vector<int> result = majorityTwoElement(nums);

        // print vector
        for (size_t i = 0; i < result.size(); i++) {
            cout << "result.at(" << i << ") = " << result.at(i) << endl;
        }
    } catch (int error) {
        cerr << "error = " << error << endl;
        return -1;
    }

    return 0;
}

// g++ test.cpp
// ./a.out
Use divide and conquer to find the majority element. If we divide the array into two halves, the majority element must be a majority in at least one of the halves. When we combine the sub-arrays we can check whether that majority element is also the majority of the combined array. This has O(n log n) complexity.
Here is a C++ implementation:
#include <algorithm>
#include <iostream>
#include <vector>
using std::vector;
// return the count of elem in the array
int count(vector<int> &a, int elem, int low, int high)
{
    if (elem == -1) {
        return -1;
    }
    int num = 0;
    for (int i = low; i <= high; i++) {
        if (a[i] == elem) {
            num++;
        }
    }
    return num;
}

// return the majority element of the combined sub-array. If no majority, return -1
int combined(vector<int> &a, int maj1, int maj2, int low, int mid, int high)
{
    // if both sub-arrays have the same majority elem then we can safely say
    // the entire range has the same majority elem.
    // NOTE: "no majority", i.e. -1, is taken care of too
    if (maj1 == maj2) {
        return maj1;
    }
    // Conflicting majorities
    if (maj1 != maj2) {
        // Find the count of maj1 and maj2 in the complete range
        int num_maj1 = count(a, maj1, low, high);
        int num_maj2 = count(a, maj2, low, high);
        if (num_maj1 == num_maj2) {
            return -1;
        }
        int half = (high - low + 1) / 2;
        if (num_maj1 > half) {
            return maj1;
        } else if (num_maj2 > half) {
            return maj2;
        }
    }
    return -1;
}

// Divide the array into 2 sub-arrays. If we have a majority element, then it
// should be a majority in at least one of the halves. In the combine step we
// check if this majority element is the majority of the combination of sub-arrays.
// a is the array, low is the lower index and high is the higher index
int get_majority_elem(vector<int> &a, int low, int high)
{
    if (low > high) return -1;
    if (low == high) return a[low];
    int mid = (low + high) / 2;
    int h1 = get_majority_elem(a, low, mid);
    int h2 = get_majority_elem(a, mid + 1, high);
    // calculate the majority from the combined sub-arrays
    int me = combined(a, h1, h2, low, mid, high);
    return me;
}
import java.util.Scanner;

public class MajorityElement {

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int testCases = sc.nextInt();
        while (testCases-- > 0) {
            int n = sc.nextInt();
            int a[] = new int[n];
            int maxCount = 0;
            int index = -1;
            for (int i = 0; i < n; i++) {
                a[i] = sc.nextInt();
            }
            for (int i = 0; i < n; i++) {
                int count = 0;
                for (int j = 0; j < n; j++) {
                    if (a[i] == a[j])
                        count++;
                }
                if (count > maxCount) {
                    maxCount = count;
                    index = i;
                }
            }
            if (maxCount > n / 2)
                System.out.println(a[index]);
            else
                System.out.println(-1);
        }
        sc.close();
    }
}
Find majority element in O(n*logn) time
A majority element occurs at least (n/2 + 1) times (say L equals this value), where n is the size of the array. Hence, it's obvious that 2*L > n.
Consequently, there cannot be 2 sequences of different values of length L in the array.
Now, sort the array - which will take O(n*logn) time, if you are able to do it best.
Here comes the tricky part:
In the sorted list, try underlining each sequence of length L from the beginning till the end of the array.
i.e. 1st sequence will be from 0 to L, 2nd will be 1 to (L+1),...and so on.
Observe that the element at the L-th position is common to all such sequences. In other words, if a majority element exists, it will always be the median.
So, we can conclude that the element at the L-th position is the only possible candidate.
Now, count the number of occurrences of the element at the L-th position. If it's greater than n/2, then voilà, you have your answer; otherwise, I'm sorry, the array doesn't possess any majority element.
The majority element is the element that appears more than ⌊n / 2⌋ times.
Input: nums = [3,2,2,2,3]
Output: 2
Python Code:
def majorityElement(nums):
    nums.sort()  # or: nums = sorted(nums)
    return nums[len(nums) // 2]

nums = [int(x) for x in input().split()]
print(majorityElement(nums))
I have changed and added more to this question: a majority element in an array of size n is an element that appears more than n/2 times.
ex:
{1,2,1,3,1} — here n/2 is 2, and 1 appears more than 2 times, so the output is 1
{1,2,1,2,3} — here n/2 is also 2, but no element appears more than 2 times, so the output is "No element"
class Demo {

    public static void main(String args[]) {
        int[] arr = {2, 2, 2, 3, 1};
        int res = majorityNo(arr);
        int count = 0;
        for (int i = 0; i < arr.length; i++) {
            if (res == arr[i]) {
                count++;
            }
        }
        if (count > (arr.length / 2)) {
            System.out.println(res);
        } else {
            System.out.println("No element");
        }
    }

    public static int majorityNo(int[] arr) {
        int temp = 1;
        int index = 0;
        for (int i = 1; i < arr.length; i++) {
            if (arr[index] == arr[i]) {
                temp++;
            } else {
                temp--;
                if (temp == 0) {
                    index = i;
                    temp = 1;
                }
            }
        }
        return arr[index];
    }
}
To find the majority element, the Boyer–Moore Majority Vote Algorithm can be used. Here's a C# implementation.
public int MajorityItem(int[] array)
{
    if (array == null || array.Length == 0)
    {
        return -1;
    }
    if (array.Length == 1)
    {
        return array[0];
    }
    int count = 1;
    int result = array[0];
    for (int i = 1; i < array.Length; i++)
    {
        if (array[i] == result)
        {
            count++;
        }
        else
        {
            count--;
            if (count == 0)
            {
                result = array[i];
                count = 1;
            }
        }
    }
    return result;
}
For finding the majority element: I was able to find it using a sorting approach. I sorted the elements first, then used the definition of the majority element, namely that it always occurs more than n/2 times. I chose the middle element and counted it, because if a majority element exists the middle element must be it. After counting I compared the count with n/2 and got the result. Correct me if I am wrong.
Majority element can be found in O(n) time complexity using Moore's Voting Algorithm. Below are the two steps
Step 1: Find the candidate which is majority in the array.
Step 2: Check for validating the candidate found in step 1 for its majority.
Below is the code
def majority_element(A, N):
    # A: array and N: size of the array
    res = 0
    count = 1
    for i in range(1, N):
        if A[res] == A[i]:
            count += 1
        else:
            count -= 1
        if count == 0:
            count = 1
            res = i
    count = 0
    for i in range(N):
        if A[res] == A[i]:
            count += 1
    if count <= N // 2:
        return -1
    else:
        return A[res]
// requires java.util.HashMap and java.util.Map
static int majorityElement(int[] a) {
    int size = a.length;
    int result = -1;
    int times = size / 2;
    HashMap<Integer, Integer> counterMap = new HashMap<>();
    int count = 0;
    for (int i = 0; i < size; i++) {
        if (counterMap.containsKey(a[i])) {
            count = counterMap.get(a[i]);
        } else {
            count = 0;
        }
        counterMap.put(a[i], count + 1);
    }
    for (Map.Entry<Integer, Integer> pair : counterMap.entrySet()) {
        if (pair.getValue() > times) {
            result = pair.getKey();
        }
    }
    return result;
}
Sort the given array : O(nlogn).
If the array size is 7, then the majority element occurs atleast ceiling(7/2) = 4 times in the array.
After the array is sorted, if the majority element is first found at position i, it is also found at position i + floor(7/2), since it has at least 4 occurrences.
Example - Given array A - {7,3,2,3,3,6,3}
Sort the array - {2,3,3,3,3,6,7}
The element 3 is found at position 1 (array index starting from 0.) If the position 1 + 3 = 4th element is also 3, then it means 3 is the majority element.
if we loop through the array from beginning..
compare position 0 with position 3, different elements 2 and 3.
compare position 1 with position 4, same element. We found our majority match!
Complexity of the scan - O(n)
Total time complexity - O(n log n), dominated by the sort.
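A small Python sketch of this sort-and-offset check (the name majority_by_sorting is made up for illustration; it uses n // 2 as the offset, which matches floor(n/2) above, and returns -1 when there is no majority):

def majority_by_sorting(arr):
    # O(n log n) for the sort, O(n) for the scan.
    a = sorted(arr)
    n = len(a)
    for i in range(n - n // 2):
        # In a sorted array only a majority element can span n // 2 + 1 positions.
        if a[i] == a[i + n // 2]:
            return a[i]
    return -1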