Looping through a big array efficiently in C

I have a master list integer array with around 500 numbers, and a set of 100 randomized numbers picked from the master list. I need to check this randomized list against the master list to find the missing numbers. What would be the best approach in C to go through it without hanging the program? If I use a simple 'for' loop over the 500 elements, it will hang because it needs to go through the entire list. Could someone direct me on this?
Thanks.

First, you should profile it. We're talking about at most 500*100 = 50,000 operations. An average modern computer is capable of finishing that in well under one-tenth of a second, unless you code it very inefficiently.
Assuming that you would like to optimize it anyway, you should sort the master array, and run a binary search on it for each element of the randomized array. This reduces the number of operations from 50,000 to at most 900, because a binary search of 500 numbers requires at most 9 comparisons.
Here is an implementation that uses built-in sorting and binary search functions (qsort and bsearch) of the standard C library:
#include <stdio.h>
#include <stdlib.h>

/* Comparator for qsort()/bsearch(). Plain subtraction is safe here because
 * all values are small and non-negative; for arbitrary ints, compare with
 * < and > instead, to avoid signed overflow. */
int less_int(const void* left, const void* right) {
    return *((const int*)left) - *((const int*)right);
}

int main(void) {
    size_t num_elements = 500;
    int* a = malloc(num_elements * sizeof(int));
    for (size_t i = 0; i < num_elements; i++) {
        a[i] = rand() % num_elements;   /* build the master list */
    }
    qsort(a, num_elements, sizeof(int), less_int);
    size_t num_rand = 100;
    int* r = malloc(num_rand * sizeof(int));
    for (size_t i = 0; i < num_rand; i++) {
        r[i] = rand() % num_elements;   /* draw from the same value range */
    }
    for (size_t i = 0; i != num_rand; i++) {
        int *p = bsearch(&r[i], a, num_elements, sizeof(int), less_int);
        if (p) {
            printf("%d is in the array.\n", *p);
        } else {
            printf("%d is not in the array.\n", r[i]);
        }
    }
    free(a);
    free(r);
    return 0;
}

n - length of the randomised array.
m - length of the master list.
Sort the randomised array: n*log(n).
Binary search the sorted array for every element of the master list; this finds every missing element: m*log(n).
=> (m+n)*log(n) for the whole operation. With n=100 and m=500 we have
600 * log2(100) ≈ 3986 iterations, compared to 50,000 iterations with the naive nested loops.
PS: If both arrays are sorted, a single merge-style pass of O(m+n) comparisons suffices, as sketched below.
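For illustration, here is a minimal sketch of that merge-style pass (my own, assuming both arrays are sorted ascending); it prints the master-list values that are missing from the randomized list:

#include <stdio.h>

/* Prints every value in master[0..m-1] that is absent from sub[0..n-1].
 * Both arrays must be sorted ascending; O(m + n) comparisons total. */
void print_missing(const int *master, size_t m, const int *sub, size_t n) {
    size_t j = 0;
    for (size_t i = 0; i < m; i++) {
        while (j < n && sub[j] < master[i])
            j++;                          /* skip values smaller than master[i] */
        if (j >= n || sub[j] != master[i])
            printf("%d is missing.\n", master[i]);
    }
}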

Shuffle an array while making each index have the same probability to be in any index

I want to shuffle an array such that each index has the same probability to end up at any other index (excluding itself).
I have this solution, only I find that the last 2 indexes will always be swapped with each other:
void Shuffle(int arr[], size_t n)
{
    int newIndx = 0;
    int i = 0;
    for (; i < n - 2; ++i)
    {
        newIndx = rand() % (n - 1);
        if (newIndx >= i)
        {
            ++newIndx;
        }
        swap(i, newIndx, arr);
    }
}
but in the end it might be that some indexes end up back in their original place again.
Any thoughts?
C lang.
A permutation (shuffle) where no element is in its original place is called a derangement.
Generating random derangements is harder than generating random permutations, but it can still be done in linear time and space. (Generating a random permutation can be done in linear time and constant space.) Here are two possible algorithms.
The simplest solution to understand is a rejection strategy: do a Fisher-Yates shuffle, but if the shuffle attempts to put an element at its original spot, restart the shuffle. [Note 1]
Since the probability that a random shuffle is a derangement is approximately 1/e, the expected number of shuffles performed is about e (that is, 2.71828…). But since unsuccessful shuffles are restarted as soon as the first fixed point is encountered, the total number of shuffle steps is less than e times the array size. For a detailed analysis, see this paper, which proves that the expected number of random numbers needed by the algorithm is around (e−1) times the number of elements.
In order to be able to do the check and restart, you need to keep an array of indices. The following little function produces a derangement of the indices from 0 to n-1; it is necessary to then apply the permutation to the original array.
/* n must be at least 2 for this to produce meaningful results */
void derange(size_t n, int ind[]) {
    for (size_t i = 0; i < n; ++i) ind[i] = (int)i;
    swap(ind, 0, randint(1, n));
    for (size_t i = 1; i < n; ++i) {
        int r = randint(i, n);
        swap(ind, i, r);
        if (ind[i] == (int)i) i = 0;  /* fixed point found: restart the shuffle */
    }
}
Here are the two functions used by that code:
void swap(int arr[], size_t i, size_t j) {
    int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
}

/* This is not the best possible implementation */
int randint(int low, int lim) {
    return low + rand() % (lim - low);
}
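That simple randint is biased whenever (lim - low) does not evenly divide RAND_MAX + 1. If the bias matters, rejection sampling is a common fix; a sketch (the function name is mine, not part of the original answer):

#include <stdlib.h>

/* Returns a uniform integer in [low, lim), requires low < lim.
 * Rejects the top sliver of rand()'s range that would bias the modulo. */
int randint_unbiased(int low, int lim) {
    unsigned long span = (unsigned long)(lim - low);
    unsigned long limit = ((unsigned long)RAND_MAX + 1) / span * span;
    unsigned long r;
    do {
        r = (unsigned long)rand();
    } while (r >= limit);               /* resample until inside the fair zone */
    return low + (int)(r % span);
}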
The following function is based on the 2008 paper "Generating Random Derangements" by Conrado Martínez, Alois Panholzer and Helmut Prodinger, although I use a different mechanism to track cycles. Their algorithm uses a bit vector of size N but uses a rejection strategy in order to find an element which has not been marked. My algorithm uses an explicit vector of indices not yet operated on. The vector is also of size N, which is still O(N) space [Note 2]; since in practical applications, N will not be large, the difference is not IMHO significant. The benefit is that selecting the next element to use can be done with a single call to the random number generator. Again, this is not particularly significant since the expected number of rejections in the MP&P algorithm is very small. But it seems tidier to me.
The basis of the algorithms (both MP&P and mine) is the recursive procedure to produce a derangement. It is important to note that a derangement is necessarily the composition of some number of cycles where each cycle is of size greater than 1. (A cycle of size 1 is a fixed point.) Thus, a derangement of size N can be constructed from a smaller derangement using one of two mechanisms:
Produce a derangement of the N-1 elements other than element N, and insert N into some cycle at any point. To do so, randomly select any element j among those N-1 elements and place N immediately after j in j's cycle. This alternative covers all possibilities where N ends up in a cycle of size greater than 2.
Produce a derangement of N-2 of the N-1 elements other than N, and add a cycle of size 2 consisting of N and the element not selected from the smaller derangement. This alternative covers all possibilities where N is in a cycle of size 2.
If D(n) is the number of derangements of size n, it is easy to see from the above recursion that:
D(n) = (n−1) · (D(n−1) + D(n−2))
The multiplier is n−1 in both cases: in the first alternative, it refers to the number of possible places N can be added, and in the second alternative to the number of possible ways to select the n−2 elements of the recursive derangement. (With D(0) = 1 and D(1) = 0, this gives D(2) = 1, D(3) = 2, D(4) = 9.)
Therefore, if we were to recursively produce a random derangement of size N, we would randomly select one of the N-1 previous elements, and then make a random boolean decision on whether to produce alternative 1 or alternative 2, weighted by the number of possible derangements in each case.
One advantage to this algorithm is that it can derange an arbitrary vector; there is no need to apply the permuted indices to the original vector as with the rejection algorithm.
As MP&P note, the recursive algorithm can just as easily be performed iteratively. This is quite clear in the case of alternative 2, since the new 2-cycle can be generated either before or after the recursion, so it might as well be done first, and then the recursion is just a loop. But the same is true for alternative 1: we can make element N the successor in a cycle of a randomly-selected element j even before we know which cycle j will eventually be in. Looked at this way, the difference between the two alternatives reduces to whether or not element j is removed from future consideration.
As shown by the recursion, alternative 2 should be chosen with probability (n−1) · D(n−2) / D(n), which is how MP&P write their algorithm. I used the equivalent formula D(n−2) / (D(n−1) + D(n−2)), mostly because my prototype used Python (for its built-in bignum support).
Without bignums, the number of derangements and hence the probabilities need to be approximated as double, which will create a slight bias and limit the size of the array to be deranged to about 170 elements. (long double would allow slightly more.) If that is too much of a limitation, you could implement the algorithm using some bignum library. For ease of implementation, I used the Posix drand48 function to produce random doubles in the range [0.0, 1.0). That's not a great random number function, but it's probably adequate to the purpose and is available in most standard C libraries.
Since no attempt is made to verify the uniqueness of the elements in the vector to be deranged, a vector with repeated elements may produce a derangement where one or more of these elements appear to be in the original place. (It's actually a different element with the same value.)
The code:
#include <stdbool.h>
#include <stdlib.h>   /* drand48 is POSIX */

/* Deranges the vector `arr` (of length `n`) in place, to produce
 * a permutation of the original vector where every element has
 * been moved to a new position. Returns `true` unless the derangement
 * failed because `n` was 1.
 */
bool derange(int arr[], size_t n) {
    if (n < 2) return n != 1;
    /* Compute derangement counts ("subfactorials") */
    double subfact[n];
    subfact[0] = 1;
    subfact[1] = 0;
    for (size_t i = 2; i < n; ++i)
        subfact[i] = (i - 1) * (subfact[i - 2] + subfact[i - 1]);
    /* The vector `todo` is the stack of elements which have not yet
     * been (fully) deranged; `u` is the count of elements in the stack
     */
    size_t todo[n];
    for (size_t i = 0; i < n; ++i) todo[i] = i;
    size_t u = n;
    /* While the stack is not empty, derange the element at the
     * top of the stack with some element lower down in the stack
     */
    while (u) {
        size_t i = todo[--u];     /* Pop the stack */
        size_t j = u * drand48(); /* Get a random stack index */
        swap(arr, i, todo[j]);    /* i will follow j in its cycle */
        /* If we're generating a 2-cycle, remove the element at j */
        if (drand48() * (subfact[u - 1] + subfact[u]) < subfact[u - 1])
            todo[j] = todo[--u];
    }
    return true;
}
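A minimal usage sketch (mine, not part of the original answer), assuming the derange() and swap() above are in scope:

#include <stdio.h>
#include <stdlib.h>   /* srand48 (POSIX) */

int main(void) {
    srand48(12345);                    /* seed drand48 */
    int arr[] = {10, 20, 30, 40, 50};
    size_t n = sizeof arr / sizeof arr[0];
    if (derange(arr, n)) {
        for (size_t i = 0; i < n; i++)
            printf("%d ", arr[i]);     /* no element in its original slot */
        printf("\n");
    }
    return 0;
}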
Notes
Many people get this wrong, particularly in social occasions such as "secret friend" selection (I believe this is sometimes called "the Santa game" in other parts of the world.) The incorrect algorithm is to just choose a different swap if the random shuffle produces a fixed point, unless the fixed point is at the very end in which case the shuffle is restarted. This will produce a random derangement but the selection is biased, particularly for small vectors. See this answer for an analysis of the bias.
Even if you don't use the RAM model where all integers are considered fixed size, the space used is still linear in the size of the input in bits, since N distinct input values must have at least N log N bits. Neither this algorithm nor MP&P makes any attempt to derange lists with repeated elements, which is a much harder problem.
Your algorithm is only almost correct (which in algorithmics means: expect unexpected results). Because of a few little errors scattered along the way, it will not produce the expected results.
First, rand() % N is not guaranteed to produce a uniform distribution, unless N is a divisor of the number of possible values. In any other case, you will get a slight bias. Anyway, my man page for rand describes it as a bad random number generator, so you should try to use random or, if available, arc4random_uniform.
And preventing an index from coming back to its original place is both an uncommon requirement and rather hard to achieve. The only way I can imagine is to keep an array of the numbers [0, n) and swap it in the same way as the real array, so as to be able to know the original index of each number.
The code could become:
void Shuffle(int arr[], size_t n)
{
    int i, newIndx;
    int *indexes = malloc(n * sizeof(int));
    for (i = 0; i < n; i++) indexes[i] = i;
    for (i = 0; i < n - 1; ++i) // beware of the inequality!
    {
        int i1;
        // search if index i is in the [i, n) part of the current array:
        for (i1 = i; i1 < n; ++i1) {
            if (indexes[i1] == i) { // move it to position i
                if (i1 != i) { // nothing to do if already at i
                    swap(i, i1, arr);
                    swap(i, i1, indexes);
                }
                break;
            }
        }
        i1 = (i1 == n) ? i : i + 1; // we will start the search at i1
                                    // to guarantee that no element keeps its place
        newIndx = i1 + arc4random_uniform(n - i1);
        /* if arc4random is not available:
        newIndx = i1 + (random() % (n - i1));
        */
        swap(i, newIndx, arr);
        swap(i, newIndx, indexes);
    }
    /* special case: a permutation of [0, n-1) may have left the last element
     * in place; we then exchange the last element with a random other one
     */
    if (indexes[n - 1] == n - 1) {
        newIndx = arc4random_uniform(n - 1);
        swap(n - 1, newIndx, arr);
        swap(n - 1, newIndx, indexes);
    }
    free(indexes); // don't forget to free what we have malloc'ed...
}
Beware: the algorithm should be correct, but the code has not been tested and can contain typos...

What is the best way to find N consecutive elements of a sorted version of an unordered array?

For instance: I have an unsorted list A of 10 elements. I need the sublist of k consecutive elements from i through i+k-1 of the sorted version of A.
Example:
Input: A { 1, 6, 13, 2, 8, 0, 100, 3, -4, 10 }
k = 3
i = 4
Output: sublist B { 2, 3, 6 }
If i and k are specified, you can use a specialized version of quicksort where you stop the recursion on parts of the array that fall outside the i .. i+k range. If the array can be modified, perform this partial sort in place; if the array cannot be modified, you will need to make a copy.
Here is an example:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

// Partial Quick Sort using Hoare's original partition scheme
void partial_quick_sort(int *a, int lo, int hi, int c, int d) {
    if (lo < d && hi > c && hi - lo > 1) {
        int x, pivot = a[lo];
        int i = lo - 1;
        int j = hi;
        for (;;) {
            while (a[++i] < pivot)
                continue;
            while (a[--j] > pivot)
                continue;
            if (i >= j)
                break;
            x = a[i];
            a[i] = a[j];
            a[j] = x;
        }
        partial_quick_sort(a, lo, j + 1, c, d);
        partial_quick_sort(a, j + 1, hi, c, d);
    }
}

void print_array(const char *msg, int a[], int count) {
    printf("%s: ", msg);
    for (int i = 0; i < count; i++) {
        printf("%d%c", a[i], " \n"[i == count - 1]);
    }
}

int int_cmp(const void *p1, const void *p2) {
    int i1 = *(const int *)p1;
    int i2 = *(const int *)p2;
    return (i1 > i2) - (i1 < i2);
}

#define MAX 1000000

int main(void) {
    int *a = malloc(MAX * sizeof(*a));
    clock_t t;
    int i, k;
    srand((unsigned int)time(NULL));
    for (i = 0; i < MAX; i++) {
        a[i] = rand();
    }
    i = 20;
    k = 10;
    printf("extracting %d elements at %d from %d total elements\n",
           k, i, MAX);
    t = clock();
    partial_quick_sort(a, 0, MAX, i, i + k);
    t = clock() - t;
    print_array("partial qsort", a + i, k);
    printf("elapsed time: %.3fms\n", t * 1000.0 / CLOCKS_PER_SEC);
    t = clock();
    qsort(a, MAX, sizeof *a, int_cmp);
    t = clock() - t;
    print_array("complete qsort", a + i, k);
    printf("elapsed time: %.3fms\n", t * 1000.0 / CLOCKS_PER_SEC);
    return 0;
}
Running this program with an array of 1 million random integers, extracting the 10 entries of the sorted array starting at offset 20 gives this output:
extracting 10 elements at 20 from 1000000 total elements
partial qsort: 33269 38347 39390 45413 49479 50180 54389 55880 55927 62158
elapsed time: 3.408ms
complete qsort: 33269 38347 39390 45413 49479 50180 54389 55880 55927 62158
elapsed time: 149.101ms
It is indeed much faster (20x to 50x) than sorting the whole array, even with a simplistic choice of pivot. Try multiple runs and see how the timings change.
An idea could be to scan the array and collect into another list/container every number that is greater than or equal to the i-th smallest value and less than or equal to the (i+k-1)-th smallest value.
This takes O(n) and gives you an unordered list of the numbers you need. Then you sort that list, O(k log k), and you are done.
For really big arrays the advantage of this method is that you sort a much smaller list of numbers (given that k is relatively small).
You can use Quickselect, or a heap selection algorithm to get the i+k smallest items. Quickselect works in-place, but it modifies the original array. It also won't work if the list of items is larger than will fit in memory. Quickselect is O(n), but with a fairly high constant. When the number of items you are selecting is a very small fraction of the total number of items, the heap selection algorithm is faster.
The idea behind the heap selection algorithm is that you initialize a max-heap with the first i+k items. Then, iterate through the rest of the items. If an item is smaller than the largest item on the max-heap, remove the largest item from the max-heap and replace it with the new, smaller item. When you're done, you have the first i+k items on the heap, with the largest k items at the top.
The code is pretty simple:
heap = new max_heap();
Add first `i+k` items from a[] to heap
for all remaining items in a[]
    if item < heap.peek()
        heap.pop()
        heap.push(item)
    end-if
end-for
// at this point the smallest i+k items are on the heap
This requires O(i+k) extra memory, and worst case running time is O(n log(i+k)). When (i+k) is less than about 2% of n, it will usually outperform Quickselect.
For much more information about this, see my blog post When theory meets practice.
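For illustration, here is a hedged C sketch of that selection loop (helper names are mine; the max-heap lives in a plain array rather than behind a heap API):

#include <stdio.h>
#include <stdlib.h>

/* Sift element i down a max-heap stored in heap[0..size-1]. */
static void sift_down(int *heap, size_t size, size_t i) {
    for (;;) {
        size_t largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < size && heap[l] > heap[largest]) largest = l;
        if (r < size && heap[r] > heap[largest]) largest = r;
        if (largest == i) return;
        int t = heap[i]; heap[i] = heap[largest]; heap[largest] = t;
        i = largest;
    }
}

/* Copies the m smallest items of a[0..n-1] into out[0..m-1] (unordered). */
static void heap_select(const int *a, size_t n, size_t m, int *out) {
    for (size_t i = 0; i < m; i++) out[i] = a[i];           /* seed with first m */
    for (size_t i = m / 2; i-- > 0; ) sift_down(out, m, i); /* O(m) heapify */
    for (size_t i = m; i < n; i++)
        if (a[i] < out[0]) {          /* smaller than current max: replace it */
            out[0] = a[i];
            sift_down(out, m, 0);
        }
}

To obtain the k items at positions i..i+k-1 of the sorted order, call heap_select with m = i + k, sort the m survivors, and keep the last k of them.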
By the way, you can optimize your memory usage somewhat based on i. That is, if there are a billion items in the array and you want items 999,999,000 through 999,999,910, the standard method above would require a huge heap. But you can re-cast that problem to one in which you need to select the smallest of the last 1,000 items. Your heap then becomes a min-heap of 1,000 items. It just takes a little math to determine which way will require the smallest heap.
That doesn't help much, of course, if you want items 600,000,000 through 600,000,010, because your heap still has 400 million items in it.
It occurs to me, though, that if time isn't a huge issue, you can just build the heap in the array in-place using Floyd's algorithm, pop the first i items like you would with heap sort, and the next k items are what you're looking for. This would require constant extra space and O(n + (i+k)*log(n)) time.
Come to think of it, you could implement the heap selection logic with a heap of (i+k) items (as described above) in-place, as well. It would be a little tricky to implement, but it wouldn't require any extra space and would have the same running time O(n*log(i+k)).
Note that both would modify the original array.
One thing you could do is modify heapsort such that you first create the heap, but then pop the first i elements. The next k elements you pop from the heap will be your result. Discarding the remaining n - i - k elements lets the algorithm terminate early.
The result is O(n + (i + k) log n), which is within O(n log n) but significantly faster for relatively low values of i and k.
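A minimal sketch of that idea (mine, not the answerer's code), with a local min-heap sift helper:

#include <stdio.h>

/* Min-heap sift-down over a[0..n-1]. */
static void sift(int *a, size_t n, size_t i) {
    for (;;) {
        size_t m = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && a[l] < a[m]) m = l;
        if (r < n && a[r] < a[m]) m = r;
        if (m == i) return;
        int t = a[i]; a[i] = a[m]; a[m] = t;
        i = m;
    }
}

/* Prints elements i..i+k-1 of the sorted order of a[0..n-1]; destroys a. */
static void print_slice(int *a, size_t n, size_t i, size_t k) {
    for (size_t j = n / 2; j-- > 0; ) sift(a, n, j);  /* O(n) heap build */
    for (size_t done = 0; done < i + k && n > 0; done++) {
        int top = a[0];
        a[0] = a[--n];                                /* pop the minimum */
        sift(a, n, 0);
        if (done >= i) printf("%d ", top);            /* discard the first i pops */
    }
    printf("\n");
}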

How to sort an int array in linear time?

I was given homework: write a program that sorts an array in ascending order. I did this:
#include <stdio.h>

int main()
{
    int a[100], i, n, j, temp;
    printf("Enter the number of elements: ");
    scanf("%d", &n);
    for (i = 0; i < n; ++i)
    {
        printf("%d. Enter element: ", i + 1);
        scanf("%d", &a[i]);
    }
    for (j = 0; j < n; ++j)
        for (i = j + 1; i < n; ++i)
        {
            if (a[j] > a[i])
            {
                temp = a[j];
                a[j] = a[i];
                a[i] = temp;
            }
        }
    printf("Ascending order: ");
    for (i = 0; i < n; ++i)
        printf("%d ", a[i]);
    return 0;
}
The input will not be more than 10 numbers. Can this be done in less code than I have here? I want the code to be as short as possible. Any help will be appreciated. Thanks!
If you know the range of the array elements, one way is to use another array to store the frequency of each of the array elements ( all elements should be int :) ) and print the sorted array. I am posting it for a large number of elements (10^6). You can reduce it according to your need:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void){
    int t, num, *freq = malloc(sizeof(int) * 1000001);
    memset(freq, 0, sizeof(int) * 1000001); // Set all elements of freq to 0
    scanf("%d", &t); // Ask for the number of elements to be scanned (upper limit is 1000000)
    for (int i = 0; i < t; i++){
        scanf("%d", &num);
        freq[num]++;
    }
    for (int i = 0; i < 1000001; i++){
        if (freq[i]){
            while (freq[i]--){
                printf("%d\n", i);
            }
        }
    }
}
This algorithm can be modified further. The modified version is known as Counting sort and it sorts the array in Θ(n) time.
Counting sort¹:
Counting sort assumes that each of the n input elements is an integer in the range
0 to k, for some integer k. When k = O(n), the sort runs in Θ(n) time.
Counting sort determines, for each input element x, the number of elements less
than x. It uses this information to place element x directly into its position in the
output array. For example, if 17 elements are less than x, then x belongs in output
position 18. We must modify this scheme slightly to handle the situation in which
several elements have the same value, since we do not want to put them all in the
same position.
In the code for counting sort, we assume that the input is an array A[1...n] and
thus A.length = n. We require two other arrays: the array B[1....n] holds the
sorted output, and the array C[0....k] provides temporary working storage.
The pseudocode for this algorithm:
for i ← 0 to k do
    c[i] ← 0
for j ← 1 to n do
    c[A[j]] ← c[A[j]] + 1
// c[i] now contains the number of elements equal to i
for i ← 1 to k do
    c[i] ← c[i] + c[i-1]
// c[i] now contains the number of elements ≤ i
for j ← n downto 1 do
    B[c[A[j]]] ← A[j]
    c[A[j]] ← c[A[j]] - 1
1. Content has been taken from Introduction to Algorithms by
Thomas H. Cormen and others.
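For reference, here is a minimal C rendering of the pseudocode above (my translation, 0-indexed, assuming all values lie in [0, k]):

#include <stdlib.h>

/* Counting sort: stable-sorts a[0..n-1] into b[0..n-1], with 0 <= a[i] <= k. */
void counting_sort(const int *a, int *b, size_t n, int k) {
    int *c = calloc(k + 1, sizeof *c);
    if (c == NULL) return;                        /* allocation failed */
    for (size_t j = 0; j < n; j++) c[a[j]]++;     /* histogram */
    for (int i = 1; i <= k; i++) c[i] += c[i-1];  /* prefix sums: # of elements <= i */
    for (size_t j = n; j-- > 0; )                 /* place from the back: stable */
        b[--c[a[j]]] = a[j];
    free(c);
}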
You have 10 lines doing the sorting. If you're allowed to use someone else's work (subsequent notes indicate that you can't do this), you can reduce that by writing a comparator function and calling the standard C library qsort() function:
static int compare_int(void const *v1, void const *v2)
{
    int i1 = *(int *)v1;
    int i2 = *(int *)v2;
    if (i1 < i2)
        return -1;
    else if (i1 > i2)
        return +1;
    else
        return 0;
}
And then the call is:
qsort(a, n, sizeof(a[0]), compare_int);
Now, I wrote the function the way I did for a reason. In particular, it avoids arithmetic overflow which writing this does not:
static int compare_int(void const *v1, void const *v2)
{
    return *(int *)v1 - *(int *)v2;
}
Also, the original pattern generalizes to comparing structures, etc. You compare the first field for inequality returning the appropriate result; if the first fields are unequal, then you compare the second fields; then the third, then the Nth, only returning 0 if every comparison shows the values are equal.
Obviously, if you're supposed to write the sort algorithm, then you'll have to do a little more work than calling qsort(). Your algorithm is a Bubble Sort. It is one of the most inefficient sorting techniques — it is O(N^2). You can look up Insertion Sort (also O(N^2), but more efficient than Bubble Sort), or Selection Sort (also quadratic), or Shell Sort (very roughly O(N^(3/2))), or Heap Sort (O(N lg N)), or Quick Sort (O(N lg N) on average, but O(N^2) in the worst case), or Intro Sort. The only ones that might be shorter than what you wrote are Insertion and Selection sorts; the others will be longer but faster for large amounts of data. For small sets like 10 or 100 numbers, efficiency is immaterial — all sorts will do. But as you get towards 1,000 or 1,000,000 entries, then the sorting algorithms really matter. You can find a lot of questions on Stack Overflow about different sorting algorithms, and you can easily find information in Wikipedia for any and all of the algorithms mentioned.
Incidentally, if the input won't be more than 10 numbers, you don't need an array of size 100.

Effective algorithms for selecting the top k (in percent) items from a data stream

I have to repeatedly sort an array containing 300 random elements. But I have to do a special kind of sort: I need the 5% smallest values from a subset of the array; then some value is calculated and the subset is increased. Then the value is calculated again and the subset increased again, and so on, until the subset contains the whole array.
The subset starts with the first 10 elements and is increased by 10 elements after each step.
i.e.:
subset size    k = ceil(5% * subset size)
10             1   (so just the smallest element)
20             1   (so also just the smallest)
30             2   (smallest and second smallest)
...
The calculated value is basically the sum of all elements smaller than the k-th smallest, plus the specially weighted k-th smallest element.
In code:
k = ceil(0.05 * subset) - 1; // -1 because array indexing starts at 0...
temp = 0.0;
for (int i = 0; i < k; i++)
    temp += smallestElements[i];
temp += b * smallestElements[k];
I have implemented a selection-sort based algorithm myself (code at the end of this post). I use MAX(k) pointers to keep track of the k smallest elements, so I unnecessarily keep everything below the k-th smallest element fully sorted :/
Furthermore, I know selection sort is bad for performance, which is unfortunately crucial in my case.
I tried figuring out how I could use some quicksort- or heapsort-based algorithm. I know that quickselect and heapselect are perfect for finding the k smallest elements if k and the subset are fixed.
But because my subset is more like an input stream of data, I think quicksort-based algorithms drop out.
I know that heapselect would be perfect for a data stream if k were fixed. But I haven't managed to adapt heapselect for dynamic k without big performance drops, which would make it less effective than my selection-sort based version :( Can anyone help me modify heapselect for dynamic k?
If there is no better algorithm, maybe you can find a different/faster approach for my selection sort implementation. Here is a minimal example of my implementation; the calculated variable isn't used in this example, so don't worry about it. (In my real program I have some loops unrolled manually for better performance.)
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define ARRAY_SIZE 300
#define STEP_SIZE 10

float sortStream(float* array, float** pointerToSmallest, int k_max) {
    int i, j, k, last = k_max - 1;
    float temp = 0.0;
    // init first two pointers
    if (array[0] < array[1]) {
        pointerToSmallest[0] = &array[0];
        pointerToSmallest[1] = &array[1];
    } else {
        pointerToSmallest[0] = &array[1];
        pointerToSmallest[1] = &array[0];
    }
    // init remaining pointers until i = k_max
    for (i = 2; i < k_max; ++i) {
        if (*pointerToSmallest[i-1] < array[i]) {
            pointerToSmallest[i] = &array[i];
        } else {
            pointerToSmallest[i] = pointerToSmallest[i-1];
            for (j = 0; j < i-1 && *pointerToSmallest[i-2-j] > array[i]; ++j)
                pointerToSmallest[i-1-j] = pointerToSmallest[i-2-j];
            pointerToSmallest[i-1-j] = &array[i];
        }
        if ((i+1) % STEP_SIZE == 0) {
            k = ceil(0.05 * i) - 1;
            for (j = 0; j < k; j++)
                temp += *pointerToSmallest[j];
            temp += 2 * (*pointerToSmallest[k]);
        }
    }
    // selection-sort the remaining elements
    for ( ; i < ARRAY_SIZE; ++i) {
        if (*pointerToSmallest[last] > array[i]) {
            for (j = 0; j != last && *pointerToSmallest[last-1-j] > array[i]; ++j)
                pointerToSmallest[last-j] = pointerToSmallest[last-1-j];
            pointerToSmallest[last-j] = &array[i];
        }
        if ((i+1) % STEP_SIZE == 0) {
            k = ceil(0.05 * i) - 1;
            for (j = 0; j < k; j++)
                temp += *pointerToSmallest[j];
            temp += 2 * (*pointerToSmallest[k]);
        }
    }
    return temp;
}

int main(void) {
    int i, k_max = ceil(0.05 * ARRAY_SIZE);
    float* array = (float*)malloc(ARRAY_SIZE * sizeof(float));
    float** pointerToSmallest = (float**)malloc(k_max * sizeof(float*));
    for (i = 0; i < ARRAY_SIZE; i++)
        array[i] = rand() / (float)RAND_MAX * 100 - 50;
    // just return a, so that the compiler doesn't drop the function call
    float a = sortStream(array, pointerToSmallest, k_max);
    return (int)a;
}
Thank you very much
By using two heaps to store all items from the stream, you can:
- find the top p% elements in O(1)
- update the data structure (the two heaps) in O(log N)
Assume we currently have N elements and k = p% * N:
- a min-heap (LargerPartHeap) stores the top k items
- a max-heap (SmallerPartHeap) stores the other (N - k) items
All items in SmallerPartHeap are less than or equal to the minimum item of LargerPartHeap (its top).
1. For the query "what are the top p% elements?", simply return LargerPartHeap.
2. For the update "new element x arrives from the stream":
2.a compute the new k' = (N + 1) * p%; if k' = k + 1, move the top of SmallerPartHeap to LargerPartHeap - O(log N)
2.b if x is larger than the top (minimum) element of LargerPartHeap, insert x into LargerPartHeap and move the top of LargerPartHeap to SmallerPartHeap; otherwise, insert x into SmallerPartHeap - O(log N)
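A compact C sketch of this scheme (all helper names are mine, not from the answer; mirrored so the max-heap holds the k smallest items this question actually needs, with the k-th smallest readable in O(1) at its top):

#include <stdlib.h>

typedef struct { float *v; size_t n; int min; } Heap; /* min != 0: min-heap */

static int before(const Heap *h, float a, float b) {
    return h->min ? a < b : a > b;    /* "a belongs above b" in this heap */
}
static void hswap(float *a, float *b) { float t = *a; *a = *b; *b = t; }

static void hpush(Heap *h, float x) {             /* O(log n); v must have room */
    size_t i = h->n++;
    h->v[i] = x;
    while (i && before(h, h->v[i], h->v[(i - 1) / 2])) {
        hswap(&h->v[i], &h->v[(i - 1) / 2]);
        i = (i - 1) / 2;
    }
}
static float hpop(Heap *h) {                      /* O(log n); requires n > 0 */
    float top = h->v[0];
    h->v[0] = h->v[--h->n];
    for (size_t i = 0;;) {
        size_t b = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < h->n && before(h, h->v[l], h->v[b])) b = l;
        if (r < h->n && before(h, h->v[r], h->v[b])) b = r;
        if (b == i) break;
        hswap(&h->v[i], &h->v[b]);
        i = b;
    }
    return top;
}

/* Add x, then rebalance so `small` (max-heap) holds exactly the k smallest
 * values seen so far and `large` (min-heap) holds everything else. */
static void add_item(Heap *small, Heap *large, float x, size_t k) {
    if (small->n && x < small->v[0]) hpush(small, x);
    else                             hpush(large, x);
    while (small->n > k)                 hpush(large, hpop(small));
    while (small->n < k && large->n > 0) hpush(small, hpop(large));
}

After each batch of 10 items, recompute k = ceil(0.05 * N) and call add_item with the new k; the contents of `small` are then exactly the k smallest values, so the questioner's sum is one scan of small->v, with small->v[0] as the k-th smallest.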
I believe heapsort is far too complicated for this particular problem, even though that or other priority-queue algorithms are well suited to getting the N minimum or maximum items from a stream.
The first thing to notice is the constraint: 0.05 * 300 = 15. That is the maximum amount of data that has to be kept sorted at any moment. Also, during each iteration one has to add 10 elements. The overall operation, in place, could be:
for (i = 0; i < 30; i++)
{
    if (i != 1)
        qsort(input + i*10, 10, sizeof(input[0]), cmpfunc);
    else
        qsort(input, 20, sizeof(input[0]), cmpfunc);
    if (i > 1)
        merge_sort15(input, 15, input + i*10, 10, cmpfunc);
}
When i == 1, one could also merge sort input and input+10 to produce a completely sorted array of 20 in place, since that has lower complexity than a generic sort. Here the "optimizing" is also about minimizing the primitives of the algorithm.
Merge_sort15 would only consider the first 15 elements of the first array and the first 10 elements of the next one.
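Here is one possible reading of merge_sort15 as a sketch (my interpretation, not the answerer's code; the comparator parameter is fixed to <= for brevity):

/* Merges the sorted run a[0..na-1] with the sorted run b[0..nb-1],
 * keeping only the na smallest results, written back over a[]. */
static void merge_sort15(float *a, size_t na, const float *b, size_t nb) {
    float tmp[32];                     /* enough for na <= 32 in this sketch */
    size_t i = 0, j = 0, o = 0;
    while (o < na && (i < na || j < nb)) {
        if (i < na && (j >= nb || a[i] <= b[j])) tmp[o++] = a[i++];
        else                                     tmp[o++] = b[j++];
    }
    while (o-- > 0) a[o] = tmp[o];     /* copy the kept prefix back */
}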
EDIT: The parameters of the problem have a considerable effect on choosing the right algorithm. Here, selecting 'sort 10 items' as the basic unit allows one half of the problem to be parallelized, namely sorting 30 individual blocks of 10 items each -- a problem which can be solved efficiently with a fixed-pipeline algorithm using sorting networks. With a different parametrization such an approach may not be feasible.

Algorithm - Find pure numbers

Description:
A positive integer m is said to be a pure number if and only if m can be
expressed as the q-th power of a prime p (q >= 1). Here your job is easy:
for a given positive integer k, find the k-th pure number.
Input:
The input consists of multiple test cases. For each test case, it
contains a positive integer k (k<5,000,000). Process to end of file.
Output:
For each test case, output the k-th pure number in a single line. If
the answer is larger than 5,000,000, just output -1.
Sample input:
1
100
400000
Sample output:
2
419
-1
Original page: http://acm.whu.edu.cn/learn/problem/detail?problem_id=1092
Can anyone give me some suggestion on the solution to this?
You've already figured out all the pure numbers, which is the tricky part. Sort the ones less than 5 million and then look up each input in turn in the resulting array.
To optimize, you need to efficiently find all primes up to 5 million (note q >= 1 in the problem description: every prime is itself a pure number), for which you will want to use some kind of sieve (the sieve of Eratosthenes will do; look it up).
You could probably adapt the sieve to leave in powers of primes, but I expect it would not take long to sieve normally and then put the powers back in. You only have to compute powers of primes p where p <= the square root of 5 million, which is about 2236, so this shouldn't take long compared with finding the primes.
Having found the numbers with a sieve, you no longer need to sort them, just copy the marked values from the sieve to a new array.
Having now looked at your actual code: your QuickSort routine is suspect. It performs badly for already-sorted data and your array will have runs of sorted numbers in it. Try qsort instead, or if you're supposed to do everything yourself then you need to read up on pivot choice for quicksort.
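As a rough C sketch of that plan (mine, not the answerer's code), assuming the 5,000,000 limit from the problem statement:

#include <stdio.h>
#include <stdlib.h>
#define LIMIT 5000000

int main(void) {
    char *composite = calloc(LIMIT + 1, 1);
    char *pure = calloc(LIMIT + 1, 1);
    for (long long i = 2; i <= LIMIT; i++) {
        if (composite[i]) continue;                 /* i is prime here */
        for (long long q = i; q <= LIMIT; q *= i)
            pure[q] = 1;                            /* mark i, i^2, i^3, ... */
        for (long long j = i * i; j <= LIMIT; j += i)
            composite[j] = 1;                       /* ordinary sieve step */
    }
    /* Collect marked values in order; kth[1] = 2, kth[2] = 3, ... */
    int *kth = malloc(400000 * sizeof *kth);        /* ~349k pure numbers < 5e6 */
    int count = 0, k;
    for (int i = 2; i <= LIMIT; i++)
        if (pure[i]) kth[++count] = i;
    while (scanf("%d", &k) == 1)                    /* process to end of file */
        printf("%d\n", k <= count ? kth[k] : -1);
    free(composite); free(pure); free(kth);
    return 0;
}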
Try the following approach (the code is C#, but the idea carries over directly):
static void Main(string[] args)
{
    int max = 5000000;
    int[] dp = new int[max];
    for (int i = 2; i < max; i++)
    {
        if (dp[i] == 0)
        {
            long t = i;
            while (t < max)
            {
                dp[t] = 1;
                t *= i;
            }
            int end = max / i;
            for (int j = 2; j < end; j++)
                if (dp[i * j] == 0)
                    dp[i * j] = 2;
        }
    }
    int[] result = new int[348978];
    int pointer = 1;
    for (int i = 2; i < max; i++)
    {
        if (dp[i] == 1)
            result[pointer++] = i;
    }
}
Pure numbers are marked with "1" in the dp array; non-pure composite numbers are marked with "2".
For each input k, check the bounds: if k is within the result array, output result[k]; otherwise output -1.
