What kind of drawbacks are there performance-wise, if I sort an array by using hashing?

I wrote a simple function to sort an array int a[] using hashing.
For that I store the frequency of every element in a new array hash1[] and then write the elements back into the original array in linear time.
#include <bits/stdc++.h>
using namespace std;

int hash1[10000];

void sah(int a[], int n)
{
    int maxo = -1;
    for (int i = 0; i < n; i++)
    {
        hash1[a[i]]++;
        if (maxo < a[i]) { maxo = a[i]; }
    }
    int i = 0, freq = 0, idx = 0;
    while (i < maxo + 1)
    {
        freq = hash1[i];
        if (freq > 0)
        {
            while (freq > 0)
            {
                a[idx++] = i;
                freq--;
            }
        }
        i++;
    }
}

int main()
{
    int a[] = {6, 8, 9, 22, 33, 59, 12, 5, 99, 12, 57, 7};
    int n = sizeof(a) / sizeof(a[0]);
    sah(a, n);
    for (int i = 0; i < n; i++)
    {
        printf("%d ", a[i]);
    }
}
This algorithm runs in O(max_element). What disadvantages am I facing here, considering only performance (time and space)?

The algorithm you've implemented is called counting sort. Its runtime is O(n + U), where n is the total number of elements and U is the maximum value in the array (assuming the numbers go from 0 to U), and its space usage is Θ(U). Your particular implementation assumes that U < 10,000. Although you've described your approach as "hashing," it really isn't so much a hash (computing some function of the elements and using it to put them into buckets) as a distribution (spreading elements out according to their values).
If U is a fixed constant - as it is in your case - then the runtime is O(n) and the space usage is O(1), though remember that big-O talks about long-term growth rates and that if U is large the runtime can be pretty high. This makes it attractive if you're sorting very large arrays with a restricted range of values. However, if the range of values can be large, this is not a particularly good approach. Interestingly, you can think of radix sort as an algorithm that repeatedly runs counting sort with U = 10 (if using the base-10 digits of the numbers) or U = 2 (if going in binary) and has a runtime of O(n log U), which is strongly preferable for large values of U.
You can clean up this code in a number of ways. For example, you have an if statement and a while loop with the same condition, which can be combined into a single while loop. You also might want to put in some assert checks to make sure all the values are in the range from 0 to 9,999, inclusive, since otherwise you'll have a bounds error. Additionally, you could consider making the global array either a local variable (though watch your stack usage) or a static local variable (to avoid polluting the global namespace). You could alternatively have the user pass in a parameter specifying the maximum value, or compute it yourself.
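For concreteness, here is one way the cleaned-up version might look; this is only a sketch (the MAX_VALUE name is mine, and the bound of 10,000 is carried over from your hash1 array):
#include <assert.h>
#include <string.h>

#define MAX_VALUE 10000               /* same bound as the original hash1 array */

void sah(int a[], int n)
{
    static int counts[MAX_VALUE];     /* static local instead of a global */
    memset(counts, 0, sizeof counts); /* reset between calls */

    int maxo = -1;
    for (int i = 0; i < n; i++) {
        assert(a[i] >= 0 && a[i] < MAX_VALUE);  /* catch out-of-range values */
        counts[a[i]]++;
        if (maxo < a[i])
            maxo = a[i];
    }

    int idx = 0;
    for (int v = 0; v <= maxo; v++) {
        int freq = counts[v];
        while (freq-- > 0)            /* the if/while pair collapsed into one loop */
            a[idx++] = v;
    }
}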

Issues you may consider:
Input validation: what happens if the user enters -10 or a very large value?
If the maximum element is large, you will at some point take a performance hit when the L1 cache is exhausted. The hash1 array will compete for memory bandwidth with the a array. When I implemented radix sorting in the past, I found that 8 bits per iteration was fastest.
The time complexity is actually O(max_element + number_of_elements). For example, sorting 2 million ones or zeros is not as fast as sorting just 2 of them, even though the maximum element is the same.

Related

Sorting 4 numbers with minimum x<y comparisons

This is an interview question. Say you have an array of four ints named A, and also this function:
int check(int x, int y) {
    if (x <= y) return 1;
    return 0;
}
Now, you want to create a function that will sort A, and you can use only the function check for comparisons. How many calls to check do you need?
(It is ok to return a new array for result).
I found that I can do this in 5 calls. Is it possible to do it with fewer calls (in the worst case)?
This is how I thought of doing it (pseudo code):
int[4] B = new int[4];
/*
The idea: put the minimum of each input pair in the first two cells and the
maximum of each pair in the last two cells, using check.
Then swap (if needed) between the two minimums and also between the two maximums.
And finally, swap the second element (max of the minimums)
and the third element (min of the maximums) if needed.
*/
if (check(A[0], A[1]) == 1) {   // A[0] <= A[1]
    B[0] = A[0];
    B[2] = A[1];
}
else {
    B[0] = A[1];
    B[2] = A[0];
}
if (check(A[2], A[3]) == 1) {   // A[2] <= A[3]
    B[1] = A[2];
    B[3] = A[3];
}
else {
    B[1] = A[3];
    B[3] = A[2];
}
if (check(B[0], B[1]) == 0) {   // B[0] > B[1]
    swap(B[0], B[1]);
}
if (check(B[2], B[3]) == 0) {   // B[2] > B[3]
    swap(B[2], B[3]);
}
if (check(B[1], B[2]) == 0) {   // B[1] > B[2]
    swap(B[1], B[2]);
}
There are 24 (4 factorial) possible orderings of a 4-element list. If you make only 4 comparisons, you get only 4 bits of information, which is enough to distinguish between 16 different cases, and that is not enough to cover all 24 possible orderings. Therefore, 5 comparisons is optimal in the worst case.
In The Art of Computer Programming, p. 183 (Section 3.5.1), Donald Knuth gives a table of lower and upper bounds on the minimum number of comparisons needed to sort n elements.
The ceil(lg(n!)) is the "information-theoretic" lower bound, whereas B(n) is the maximum number of comparisons made by binary insertion sort. Since the lower and upper bounds are equal for n = 4, exactly 5 comparisons are needed.
The information-theoretic bound is derived by recognizing that there are n! possible orderings of n unique items. We distinguish between these cases by asking S yes-no questions of the form "is X < Y?". These questions form a tree with at most 2^S leaves. We need n! <= 2^S; solving for S gives ceil(lg(n!)).
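Concretely, for n = 4: ceil(lg(4!)) = ceil(lg 24) = ceil(4.585...) = 5, and indeed 2^4 = 16 < 24 <= 32 = 2^5, so four questions cannot distinguish all 24 orderings but five can.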
Incidentally, you can use Stirling's approximation to show that lg(n!) = Θ(n log n), so this bound implies that comparison-based sorting requires Ω(n log n) comparisons.
The rest of the section goes on to describe a number of approaches to establishing these bounds and studying this question, though work is ongoing (see, for instance, Peczarski (2011)).

Finding no. of shifts in Insertion sort for large inputs in C

I'm trying to write a program that counts the number of swaps made by insertion sort. My program works on small inputs, but produces the wrong answer on large inputs. I'm also not sure how to use the long int type.
This problem came up in a setting described at https://drive.google.com/file/d/0BxOMrMV58jtmNF9EcUNQNGpreDQ/edit?usp=sharing
Input is given as
The first line contains the number of test cases T. T test cases follow.
The first line for each case contains N, the number of elements to be sorted.
The next line contains N integers a[1],a[2]...,a[N].
Code I used is
#include <stdio.h>
#include <stdlib.h>

int insertionSort(int ar_size, int *ar)
{
    int i, j, temp, count;
    count = 0;
    int n = ar_size;
    for (i = 0; i < n - 1; i++)
    {
        j = i;
        while (j >= 0 && ar[j + 1] < ar[j])   /* j >= 0 guards against reading ar[-1] */
        {
            temp = ar[j + 1];
            ar[j + 1] = ar[j];
            ar[j] = temp;
            j--;
            count++;
        }
    }
    return count;
}

int main()
{
    int _ar_size, tc, i, _ar_i;
    scanf("%d", &tc);
    int sum = 0;
    for (i = 0; i < tc; i++)
    {
        scanf("%d", &_ar_size);
        int *_ar = malloc(sizeof(int) * _ar_size);
        for (_ar_i = 0; _ar_i < _ar_size; _ar_i++)
        {
            scanf("%d", &_ar[_ar_i]);
        }
        sum = insertionSort(_ar_size, _ar);
        printf("%d\n", sum);
        free(_ar);
    }
    return 0;
}
There are two issues that I currently see with the solution you have.
First, there's an issue brought up in the comments about integer overflow. On most systems, the int type can hold numbers only up through 2^31 - 1. In insertion sort, the number of swaps made in the worst case on an array of length n is n(n - 1) / 2 (details later), so for an array of size 2^17 you may already be unable to store the number of swaps inside an int. To address this, consider using a larger integer type. For example, the uint64_t type can store numbers well beyond 10^18, which should be good enough to store the answer for arrays up to length around 10^9. You mentioned that you're not sure how to use it, but the good news is that it's not that hard. Just add the line
#include <stdint.h>
(for C) or
#include <cstdint>
(for C++) to the top of your program. After that, you should be able to use uint64_t in place of int with essentially no other modifications; just remember to adjust the printf format specifier when you print the count, and everything should work out just fine.
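As a minimal sketch of that one wrinkle, printing a uint64_t takes the PRIu64 macro from <inttypes.h> (the loop below is just a stand-in for accumulating a swap count):
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t count = 0;                 /* was: int count */
    for (int i = 0; i < 100000; i++)
        count += i;                     /* stand-in for counting swaps */
    printf("%" PRIu64 "\n", count);     /* %d would be wrong for a 64-bit value */
    return 0;
}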
Next, there's an issue of efficiency. The code you've posted essentially runs insertion sort and therefore takes time O(n^2) in the worst case. For large inputs - say, inputs around size 10^8 - this is prohibitively expensive. Amazingly, though, you can actually determine how many swaps insertion sort will make without actually running insertion sort.
In insertion sort, the number of swaps made is equal to the number of inversions that exist in the input array (an inversion is a pair of elements that are out of order). There's a beautiful divide-and-conquer algorithm for counting inversions that runs in time O(n log n), which likely will scale up to work on much larger inputs than just running insertion sort. I think that the "best" answer to this question would be to use this algorithm, while taking care to use the uint64_t type or some other type like it, since it will make your algorithm work correctly on much larger inputs.
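For reference, here is a rough sketch of that divide-and-conquer approach (a merge sort that counts inversions); the function names are mine, and the running count uses uint64_t as discussed above:
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Merge two sorted halves a[lo..mid) and a[mid..hi), counting inversions. */
static uint64_t merge_count(int *a, int *tmp, size_t lo, size_t mid, size_t hi)
{
    size_t i = lo, j = mid, k = lo;
    uint64_t inv = 0;
    while (i < mid && j < hi) {
        if (a[i] <= a[j])
            tmp[k++] = a[i++];
        else {
            tmp[k++] = a[j++];
            inv += mid - i;            /* a[i..mid) are all greater than a[j] */
        }
    }
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (hi - lo) * sizeof *a);
    return inv;
}

static uint64_t sort_count(int *a, int *tmp, size_t lo, size_t hi)
{
    if (hi - lo < 2)
        return 0;
    size_t mid = lo + (hi - lo) / 2;
    return sort_count(a, tmp, lo, mid)
         + sort_count(a, tmp, mid, hi)
         + merge_count(a, tmp, lo, mid, hi);
}

/* Returns the number of inversions in a[0..n); this equals the number of
   swaps insertion sort would perform. Sorts a as a side effect. */
uint64_t count_inversions(int *a, size_t n)
{
    int *tmp = malloc(n * sizeof *tmp);
    uint64_t inv = tmp ? sort_count(a, tmp, 0, n) : 0;
    free(tmp);
    return inv;
}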

Optimising C for performance vs memory optimisation using multidimensional arrays

I am struggling to decide between two optimisations for building a numerical solver for the Poisson equation.
Essentially, I have a two-dimensional array of which I require n doubles in the first row, n/2 in the second, n/4 in the third, and so on...
Now my difficulty is deciding whether or not to use a contiguous 2d array grid[m][n], which for a large n would have many unused zeroes but would probably reduce the chance of a cache miss. The other, and more memory efficient method, would be to dynamically allocate an array of pointers to arrays of decreasing size. This is considerably more efficient in terms of memory storage but would it potentially hinder performance?
I don't think I clearly understand the trade-offs in this situation. Could anybody help?
For reference, I made a nice plot of the memory requirements in each case.
There is no hard and fast answer to this one. If your algorithm needs more memory than you expect to be given then you need to find one which is possibly slower but fits within your constraints.
Beyond that, the only option is to implement both and then compare their performance. If saving memory results in a 10% slowdown, is that acceptable for your use? If the version using more memory is 50% faster but only runs on the biggest computers, will it be used? These are the questions we have to grapple with in computer science, but you can only answer them once you have numbers. Otherwise you are just guessing, and a fair amount of the time our intuition about optimizations is not correct.
Build a custom array that will follow the rules you have set.
The implementation will use a simple 1D contiguous array. You will need a function that returns the start of a row within the array, given the row index. Something like this:
int *Get(int *array, int n, int row)   // might contain logical errors
{
    int pos = 0;
    while (row--)
    {
        pos += n;
        n /= 2;
    }
    return array + pos;
}
Here n is the same n you described; the integer division rounds it down on every iteration.
You will have to call this function only once per row.
It will never take more than O(log n) time, but if you want you can replace it with a single expression: http://en.wikipedia.org/wiki/Geometric_series#Formula
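For example, if n is a power of two (an assumption; with other values the integer divisions do not telescope exactly), the geometric series collapses to a closed form along these lines:
#include <stddef.h>

/* Hypothetical closed-form variant: the start offset of a given row, assuming
   n is a power of two and row <= log2(n) + 1. Row 0 holds n values, row 1
   holds n/2, and so on, so row r starts at n + n/2 + ... + n/2^(r-1),
   which equals 2n - n/2^(r-1). */
size_t row_start(size_t n, unsigned row)
{
    if (row == 0)
        return 0;
    return 2 * n - (n >> (row - 1));
}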
You could use a single array and just calculate your offset yourself
size_t get_offset(int n, int row, int column) {
    size_t offset = column;
    while (row--) {
        offset += n;
        n /= 2;    /* each row holds half as many values as the previous one */
    }
    return offset;
}
double *array = calloc(get_offset(n, 64, 0), sizeof(double)); /* room for up to 64 rows */
access via
array[get_offset(n, row, column)]

Homework: Creating O(n) algorithm for sorting

I am taking the cs50 course on edx and am doing the hacker edition of pset3 (in essence it is the advanced version).
Basically the program takes a value to be searched for as the command-line argument, and then asks for a bunch of numbers to be used in an array.
Then it sorts and searches that array for the value entered at the command-line.
The way the program is implemented, it uses a pseudo-random number generator to feed the numbers for the array.
The task is to write the search and sorting functions.
I already have the searching function, but the sorting function is supposed to be O(n).
In the regular version you were supposed to use an O(n^2) algorithm, which wasn't a problem to implement, and using an O(n log n) algorithm wouldn't be an issue either.
But the problem set specifically asks for an O(n) algorithm.
It gives a hint by saying that no number in the array is going to be negative, and none greater than LIMIT (the numbers output by the generator are reduced modulo a constant so they are not greater than 65000). But how does that help in getting the algorithm to be O(n)?
But the counting sort algorithm, which purports to be an acceptable solution, returns a new sorted array rather than actually sort the original one, and that contradicts with the pset specification that reads 'As this return type of void implies, this function must not return a sorted array; it must instead "destructively" sort the actual array that it’s passed by moving around the values therein.'
Also, if we decide to copy the sorted array back onto the original one using another loop, then with so many consecutive loops I'm not sure whether the sorting function can still be considered to run in O(n). Here is the actual pset; the question is about the sorting part.
Any ideas on how to implement such an algorithm would be greatly appreciated. It's not necessary to provide actual code, just the logic of how you can create an O(n) algorithm under the conditions provided.
It gives a hint in saying that no number in the array is going to be negative, and none greater than LIMIT (the numbers output by the generator are reduced modulo a constant so they are not higher than 65000). But how does that help in getting the algorithm to be O(n)?
That hint directly seems to point towards counting sort.
You create 65000 buckets and use them to count the number of occurrences of each number.
Then, you just revisit the buckets and you have the sorted result.
It takes advantage of the fact that:
They are integers.
They have a limited range.
Its complexity is O(n) (more precisely, O(n + LIMIT), but LIMIT is a constant here), and since this is not a comparison-based sort, the Ω(n log n) lower bound on comparison sorting does not apply.
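As a rough sketch of how this can still sort "destructively" as the pset requires (the void sort(int values[], int n) signature and the LIMIT of 65536 are assumptions based on the spec and the hint), note that the counting array is just scratch space and the back-to-back loops are still O(n + LIMIT), which is O(n) for a fixed LIMIT:
#include <assert.h>
#include <string.h>

#define LIMIT 65536                       /* assumption based on the pset hint */

void sort(int values[], int n)
{
    static int counts[LIMIT];
    memset(counts, 0, sizeof counts);     /* reset between calls */

    for (int i = 0; i < n; i++) {
        assert(values[i] >= 0 && values[i] < LIMIT);
        counts[values[i]]++;              /* first pass: tally each value, O(n) */
    }

    int idx = 0;
    for (int v = 0; v < LIMIT; v++) {     /* second pass: O(LIMIT + n) */
        int freq = counts[v];
        while (freq-- > 0)
            values[idx++] = v;            /* overwrite the original array in place */
    }
}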
As #DarkCthulhu said, counting sort is clearly what they were urging you to use. But you could also use a radix sort.
Here is a particularly concise radix-2 sort that exploits a nice connection to Gray codes. In your application it would require 16 passes over the input, one per data bit. For big inputs, the counting sort is likely to be faster. For small ones, the radix sort ought to be faster because you avoid initializing 256K bytes or more of counters.
void sort(unsigned short *a, int len)
{
    unsigned short bit, *s = a, *d = safe_malloc(len * sizeof *d), *t;
    unsigned is, id0, id1;

    /* One pass per bit, least significant first; s is the source buffer
       and d the destination, swapped after every pass. */
    for (bit = 1; bit; bit <<= 1, t = s, s = d, d = t)
        for (is = id0 = 0, id1 = len; is < (unsigned)len; ++is)
            if (((s[is] >> 1) ^ s[is]) & bit)   /* Gray-code bit is 1: fill from the back */
                d[--id1] = s[is];
            else                                /* Gray-code bit is 0: fill from the front */
                d[id0++] = s[is];

    /* After the 16 passes (an even number) the sorted data is back in a,
       and d points at the scratch buffer. */
    free(d);
}
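For completeness, a possible driver, with a simple stand-in for the safe_malloc the routine above assumes (in a real file this stand-in, or at least a prototype for it, would have to appear before the sort function):
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for safe_malloc: malloc that exits on failure. */
void *safe_malloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        perror("malloc");
        exit(EXIT_FAILURE);
    }
    return p;
}

int main(void)
{
    unsigned short a[] = {6, 8, 9, 22, 33, 59, 12, 5, 99, 12, 57, 7};
    int n = (int)(sizeof a / sizeof a[0]);
    sort(a, n);                            /* the Gray-code radix sort above */
    for (int i = 0; i < n; i++)
        printf("%hu ", a[i]);
    printf("\n");
    return 0;
}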

Counting unique element in large array

One of my colleagues was asked this question in an interview.
Given a huge array which stores unsigned ints; the length of the array is 100,000,000. Find an efficient way to count the occurrences of each distinct element present in the array.
E.g. arr = {2,34,5,6,7,2,2,5,1,34,5}
Output: count of 2 is 3, count of 34 is 2, and so on.
What are effective algorithms to do this? I thought at first a dictionary/hash would be one option, but since the array is very large that seemed inefficient. Is there a better way to do this?
Heap sort is O(n log n) and in-place, and being in-place matters when dealing with large data sets. Once the array is sorted, you can make one pass through it tallying the occurrences of each value; because the array is sorted, once a value changes you know you've seen all occurrences of the previous value.
Many other posters have suggested sorting the data and then counting adjacent equal values, but no one has mentioned radix sort yet, which brings the runtime down to O(n lg U) (where U is the maximum value in the array) instead of O(n lg n). If the integers fit in one machine word, lg U is bounded by the word size, so this approach is asymptotically faster than heapsort.
Non-comparison sorts are always fun in interviews. :-)
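If you go the radix route, one possible sketch (not necessarily what the answer above had in mind) is a least-significant-byte radix sort over 32-bit values, four passes of 8 bits each; after sorting, count adjacent equal values as described in the other answers:
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* LSD radix sort on 32-bit unsigned ints, one byte per pass (4 passes). */
void radix_sort_u32(uint32_t *a, size_t n)
{
    uint32_t *tmp = malloc(n * sizeof *tmp);
    if (tmp == NULL)
        return;
    for (int shift = 0; shift < 32; shift += 8) {
        size_t count[256] = {0};
        for (size_t i = 0; i < n; i++)
            count[(a[i] >> shift) & 0xFF]++;       /* histogram of this byte */
        size_t pos = 0;
        for (int b = 0; b < 256; b++) {            /* prefix sums -> start positions */
            size_t c = count[b];
            count[b] = pos;
            pos += c;
        }
        for (size_t i = 0; i < n; i++)             /* stable distribution pass */
            tmp[count[(a[i] >> shift) & 0xFF]++] = a[i];
        memcpy(a, tmp, n * sizeof *a);
    }
    free(tmp);
}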
Sort it, then scan it from the beginning to determine the counts for each item.
This approach requires no additional storage, and can be done in O(n log n) time (for the sort).
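A minimal sketch of that approach in C (qsort plus one scan; the function names are mine):
#include <stdio.h>
#include <stdlib.h>

/* Comparator for qsort on unsigned int. */
static int cmp_uint(const void *p, const void *q)
{
    unsigned int a = *(const unsigned int *)p;
    unsigned int b = *(const unsigned int *)q;
    return (a > b) - (a < b);
}

void print_counts(unsigned int *arr, size_t n)
{
    qsort(arr, n, sizeof *arr, cmp_uint);          /* O(n log n) */
    for (size_t i = 0; i < n; ) {
        size_t j = i;
        while (j < n && arr[j] == arr[i])          /* scan the run of equal values */
            j++;
        printf("Count of %u is %zu\n", arr[i], j - i);
        i = j;
    }
}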
If the range of the int values is limited, then you may allocate an array, which serves to count the occurrences for each possible value. Then you just iterate through your huge array and increment the counters.
for (size_t i = 0; i < n; i++) {
    counter[huge_array[i]]++;
}
Thus you find the solution in linear time (O(n)), but at the expense of memory consumption. That is, if your ints span the whole range allowed by 32-bit ints, you would need to allocate an array of 4G ints, which is impractical...
How about using a Bloom filter implementation, such as http://code.google.com/p/java-bloomfilter/ ?
First do bloom.contains(element); if true, continue; if false, do bloom.add(element).
At the end, count the number of elements added. A Bloom filter needs roughly 125 MB of memory to store 100,000,000 elements at 10 bits per element.
Problem is that false positives are possible in BloomFilters and can only be minimized by increasing the number of bits per element. This could be addressed by two BloomFilters with different hashing that need to agree.
Hashing in this case is not inefficient. The cost will be approximately O(N): O(N) for iterating over the array and roughly O(N) for iterating over the hashtable. Since you need O(N) just to look at each element anyway, the complexity is fine.
Sorting is a good idea; however, the type of sort depends on the range of possible values. For a small range, counting sort would be good. When dealing with such a big array it would also be worth using multiple cores; radix sort might be a good fit.
Here is a variation that might help you find the number of distinct elements.
#include <bits/stdc++.h>
using namespace std;
#define ll long long int
#define ump unordered_map

void file_i_o()
{
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
#ifndef ONLINE_JUDGE
    freopen("input.txt", "r", stdin);
    freopen("output.txt", "w", stdout);
#endif
}

int main() {
    file_i_o();
    ll t;
    cin >> t;
    while (t--)
    {
        int n, q;
        cin >> n >> q;
        ump<int, int> num;          // value -> number of occurrences
        int x;
        int arr[n + 1];
        int a, b;
        for (int i = 1; i <= n; i++)
        {
            cin >> x;
            arr[i] = x;
            num[x]++;
        }
        // Each query replaces arr[a] with b and reports the number of
        // distinct values currently in the array (num.size()).
        for (int i = 0; i < q; i++)
        {
            cin >> a >> b;
            num[arr[a]]--;
            if (num[arr[a]] == 0)
                num.erase(arr[a]);
            arr[a] = b;
            num[b]++;
            cout << num.size() << "\n";
        }
    }
    return 0;
}

Resources