There was a party, and there was a log register in which the entry and exit times of all the guests were logged. You have to tell the time at which the maximum number of guests were present at the party.
The input will be the entry and exit times of all n guests: [1,4] [2,5] [9,12] [5,9] [5,12]
The output will be t=5, as a maximum of 3 guests were present then, namely guests (counting from 1) 2, 4 and 5.
What I tried so far is:
#include <stdio.h>

int main()
{
    int ret;
    int a[5] = {1, 2, 9, 5, 5};
    int b[5] = {4, 6, 12, 9, 12};
    int i, j;
    int runs = 5;
    int cur = 0, p1 = 0, p2 = 0;

    printf("input is ");
    for (i = 0; i < 5; i++)
    {
        printf("(");
        printf("%d,%d", a[i], b[i]);
        printf(")");
    }
    while (runs--)
    {
        while (p1 < 5 && p2 < 5)
        {
            if (a[p1] <= b[p2])
            {
                cur++;
                p1++;
            }
            else {
                cur--;
                p2++;
            }
            ret = cur;
        }
    }
    printf("\n the output is %d", ret);
}
I am getting 3 as output, which is completely wrong! Where am I making an error?
Several things are problematic with your code. Here are a few pointers on where to improve it:
Your algorithm itself is doubtful. Assume that your first guest is the party host and stays from 1 until the end time of the party. With your current code, p2 will never change and you will ignore all other guests' leave times.
Even if your algorithm worked, it would assume that your input is sorted. By iterating p1/p2 you implicitly assume growing times in your array, which is already wrong for your sample input. So you ought to sort the input first.
You are assigning the result ret at each iteration of your main loop, regardless of whether the current state (cur) is the maximum number of guests or not. Hint: if you are supposed to compute a maximum of something and there is no maximum computation anywhere in your code, something is probably missing.
Here's a different idea: Assuming you can spare an array of size maxtime, create an array filled with 0s. Process your input by incrementing the array entry at the time a guest arrives and decrementing it at the time a guest leaves. For example, the first 5 minutes would then look like [1, 1, 0, -1, 1, ...]. Then it's much simpler to walk linearly through the array and compute the maximum prefix sum. It's also much easier this way to compute the full time interval for which this maximum number of guests was present.
(If you want to go more fancy and have a much larger total time interval to cover, instead rely on a map with times as keys. Initialize like the array, then process the keys in sorted order.)
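To make the idea concrete, here is a rough sketch in C of the array variant. The variable names and the array size are my own choices, and I decrement at exit time + 1 so that a guest leaving at time t still counts at time t, which is what your expected output t=5 implies; the example above decrements at the exit time itself, so pick whichever boundary convention your exercise uses.

#include <stdio.h>

int main(void)
{
    int a[5] = {1, 2, 9, 5, 5};    /* entry times from the question */
    int b[5] = {4, 5, 12, 9, 12};  /* exit times from the question  */
    int delta[14] = {0};           /* assumes all times are in 1..12 */
    int i, cur = 0, best = 0, best_time = 0;

    for (i = 0; i < 5; i++) {
        delta[a[i]]++;        /* one more guest from time a[i] on        */
        delta[b[i] + 1]--;    /* one guest fewer from time b[i]+1 on,    */
    }                         /* i.e. exit times are treated inclusively */

    /* walk linearly and track the maximum prefix sum */
    for (i = 1; i <= 13; i++) {
        cur += delta[i];
        if (cur > best) {
            best = cur;
            best_time = i;
        }
    }

    printf("t=%d with %d guests\n", best_time, best);
    return 0;
}

With the sample intervals this prints t=5 with 3 guests.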
You are printing an index instead of the actual time. Try printing a[ret] instead.
I have been attempting to solve the following problem:
You are given an array of n+1 integers where all the elements lie in [1,n]. You are also given that one of the elements is duplicated a certain number of times, whilst the others are distinct. Develop an algorithm to find both the duplicated number and the number of times it is duplicated.
Here is my solution where I let k = number of duplications:
#include <vector>

struct LatticePoint { // to hold duplicate and k
    int a;
    int b;
    LatticePoint(int a_, int b_) : a(a_), b(b_) {}
};

LatticePoint findDuplicateAndK(const std::vector<int>& A) {
    int n = A.size() - 1;
    std::vector<int> Numbers(n);
    for (int i = 0; i < n + 1; ++i) {
        ++Numbers[A[i] - 1]; // A[i] is in range [1,n], so no out-of-bounds access
    }
    int i = 0;
    while (i < n) {
        if (Numbers[i] > 1) {
            int duplicate = i + 1;
            int k = Numbers[i] - 1;
            LatticePoint result{duplicate, k};
            return result;
        }
        ++i;
    }
    return LatticePoint{-1, -1}; // not reached: the input always contains a duplicate
}
So, the basic idea is this: we go along the array and each time we see the number A[i] we increment the value of Numbers[A[i] - 1]. Since only the duplicate appears more than once, the index of the entry of Numbers with a value greater than 1 identifies the duplicate number, and that value minus 1 is the number of duplications. This algorithm is O(n) in time and O(n) in space.
I was wondering if someone had a solution that is better in time and/or space? (or indeed if there are any errors in my solution...)
You can reduce the scratch space to n bits instead of n ints, provided you either have or are willing to write a bitset with run-time specified size (see boost::dynamic_bitset).
You don't need to collect duplicate counts until you know which element is duplicated, and then you only need to keep that count. So all you need to track is whether you have previously seen the value (hence, n bits). Once you find the duplicated value, set count to 2 and run through the rest of the vector, incrementing count each time you hit an instance of the value. (You initialise count to 2, since by the time you get there, you will have seen exactly two of them.)
That's still O(n) space, but the constant factor is a lot smaller.
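As an illustration, here is a rough sketch in C of that idea, using a hand-rolled packed bit array in place of boost::dynamic_bitset. The function name, the layout and the example input are my own; count here is the total number of occurrences, so the question's k would be count - 1.

#include <stdio.h>
#include <stdlib.h>

/* A has n+1 elements with values in [1, n]. Returns the duplicated value
 * through *dup and its total number of occurrences through *count, using
 * roughly n bits of scratch space. */
static void find_duplicate(const int *A, int n, int *dup, int *count)
{
    unsigned char *seen = calloc((n + 7) / 8, 1);  /* n bits, zeroed */
    int i;

    *dup = -1;
    for (i = 0; i < n + 1; i++) {
        int v = A[i] - 1;                      /* value 1..n -> bit 0..n-1 */
        if (seen[v / 8] & (1u << (v % 8))) {   /* seen before: duplicate   */
            *dup = A[i];
            break;
        }
        seen[v / 8] |= 1u << (v % 8);
    }

    *count = 2;                 /* we have now seen the duplicate twice */
    for (++i; i < n + 1; i++)
        if (A[i] == *dup)
            ++*count;

    free(seen);
}

int main(void)
{
    int A[] = {3, 1, 4, 3, 2, 3};   /* n = 5, the value 3 appears 3 times */
    int dup, count;
    find_duplicate(A, 5, &dup, &count);
    printf("duplicate=%d occurrences=%d\n", dup, count);
    return 0;
}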
The idea of your code works.
But, thanks to the n+1 elements, we can achieve other tradeoffs of time and space.
If we have some number of buckets we're dividing numbers between, putting n+1 numbers in means that some bucket has to wind up with more than expected. This is a variant on the well-known pigeonhole principle.
So we use 2 buckets, one for the range 1..floor(n/2) and one for floor(n/2)+1..n. After one pass through the array, we know which half the answer is in. We then divide that half into halves, make another pass, and so on. This leads to a binary search which will get the answer with O(1) data, and with ceil(log_2(n)) passes, each taking time O(n). Therefore we get the answer in time O(n log(n)).
Now we don't need to use 2 buckets. If we used 3, we'd take ceil(log_3(n)) passes. So as we increased the fixed number of buckets, we take more space and save time. Are there other tradeoffs?
Well, you showed how to do it in 1 pass with n buckets. How many buckets do you need to do it in 2 passes? The answer turns out to be at least sqrt(n) buckets. And 3 passes is possible with the cube root. And so on.
So you get a whole family of tradeoffs where the more buckets you have, the more space you need, but the fewer passes. And your solution is merely at the extreme end, taking the most space and the least time.
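To make the two-bucket binary search concrete, here is a sketch in C; the function name and the final O(n) counting pass are my additions, not part of the answer above.

#include <stdio.h>

/* A has n+1 elements with values in [1, n]. Repeatedly count how many
 * elements fall into the lower half of the candidate range; by the
 * pigeonhole argument above, the overfull half contains the duplicate. */
static void find_duplicate_binsearch(const int *A, int n, int *dup, int *occurrences)
{
    int lo = 1, hi = n, i;

    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        int in_lower = 0;
        for (i = 0; i < n + 1; i++)
            if (A[i] >= lo && A[i] <= mid)
                ++in_lower;
        if (in_lower > mid - lo + 1)   /* more elements than distinct slots */
            hi = mid;
        else
            lo = mid + 1;
    }
    *dup = lo;

    /* one final O(n) pass to count how often the duplicate occurs */
    *occurrences = 0;
    for (i = 0; i < n + 1; i++)
        if (A[i] == *dup)
            ++*occurrences;
}

int main(void)
{
    int A[] = {3, 1, 4, 3, 2, 3};
    int dup, occurrences;
    find_duplicate_binsearch(A, 5, &dup, &occurrences);
    printf("duplicate=%d occurrences=%d\n", dup, occurrences);
    return 0;
}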
Here's a cheekier algorithm, which requires only constant space but rearranges the input vector. (It only reorders; all the original elements are still present at the end.)
It's still O(n) time, although that might not be completely obvious.
The idea is to try to rearrange the array so that A[i] is i, until we find the duplicate. The duplicate will show up when we try to put an element at the right index and it turns out that that index already holds that element. With that, we've found the duplicate; we have a value we want to move to A[j] but the same value is already at A[j]. We then scan through the rest of the array, incrementing the count every time we find another instance.
#include <utility>
#include <vector>

std::pair<int, int> count_dup(std::vector<int> A) {
    /* Try to put each element in its "home" position (that is,
     * where the value is the same as the index). Since the
     * values start at 1, A[0] isn't home to anyone, so we start
     * the loop at 1.
     */
    int n = A.size();
    for (int i = 1; i < n; ++i) {
        while (A[i] != i) {
            int j = A[i];
            if (A[j] == j) {
                /* j is the duplicate. Now we need to count them.
                 * We have one at i. There's one at j, too, but we only
                 * need to add it if we're not going to run into it in
                 * the scan. And there might be one at position 0. After that,
                 * we just scan through the rest of the array.
                 */
                int count = 1;
                if (A[0] == j) ++count;
                if (j < i) ++count;
                for (++i; i < n; ++i) {
                    if (A[i] == j) ++count;
                }
                return std::make_pair(j, count);
            }
            /* This swap can only happen once per element. */
            std::swap(A[i], A[j]);
        }
    }
    /* If we get here, every element from 1 to n is at home.
     * So the duplicate must be A[0], and the duplicate count
     * must be 2.
     */
    return std::make_pair(A[0], 2);
}
A parallel solution with O(1) complexity is possible.
Introduce an array of atomic booleans and two atomic integers called duplicate and count. First set count to 1. Then access the array in parallel at the index positions of the numbers and perform a test-and-set operation on the boolean. If a boolean is set already, assign the number to duplicate and increment count.
This solution may not always perform better than the suggested sequential alternatives. Certainly not if all numbers are duplicates. Still, it has constant complexity in theory. Or maybe linear complexity in the number of duplicates. I am not quite sure. However, it should perform well when using many cores and especially if the test-and-set and increment operations are lock-free.
My development environment: Visual Studio.
Now, I have to create an input file and print random numbers from 1 to 500000, without duplicates, into the file. First, I considered that if I use a big size of local array, problems related to heap may happen. So, I tried to declare it as a static array. Then, in the main function, I put random numbers without overlapping into the array and wrote the numbers to the input file by accessing the array elements. However, runtime errors(the continuous blinking of the cursor in the console window) continue to occur.
The source code is as follows.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SIZE 500000

int sort[500000];

int main()
{
    FILE* input = NULL;
    input = fopen("input.txt", "w");
    if (sort != NULL)
    {
        srand((unsigned)time(NULL));
        for (int i = 0; i < SIZE; i++)
        {
            sort[i] = (rand() % SIZE) + 1;
            for (int j = 0; j < i; j++)
            {
                if (sort[i] == sort[j])
                {
                    i--;
                    break;
                }
            }
        }
        for (int i = 0; i < SIZE; i++)
        {
            fprintf(input, "%d ", sort[i]);
        }
        fclose(input);
    }
    return 0;
}
When I tried to reduce the array size from 1 to 5000, it has been implemented. So, Carefully, I think it's a memory out phenomenon. Finally, I'd appreciate it if you could comment on how to solve this problem.
“First, I considered that if I use a big size of local array, problems related to heap may happen.”
That does not make any sense. Automatic local objects generally come from the stack, not the heap. (Also, “heap” is the wrong word; a heap is a particular kind of data structure, but the malloc family of routines may use other data structures for managing memory. This can be referred to simply as dynamically allocated memory or allocated memory.)
However, runtime errors(the continuous blinking of the cursor in the console window)…
Continuous blinking of the cursor is normal operation, not a run-time error. Perhaps you are trying to say your program continues executing without ever stopping.
#define SIZE 500000
...
sort[i] = (rand() % SIZE) + 1;
The C standard only requires rand to generate numbers from 0 to 32767. Some implementations may provide more. However, if your implementation does not generate numbers up to 499,999, then it will never generate the numbers required to fill the array using this method.
Also, using % to reduce the rand result skews the distribution. For example, if we were reducing modulo 30,000, and rand generated numbers from 0 to 44,999, then rand() % 30000 would generate the numbers from 0 to 14,999 each two times out of every 45,000 and the numbers from 15,000 to 29,999 each one time out of every 45,000.
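If you do need an unbiased value in a range from rand, a common workaround is to reject the draws that would cause the skew and draw again. A minimal sketch, assuming 0 < n <= RAND_MAX and that rand itself is uniform over 0..RAND_MAX:

#include <stdlib.h>

/* Returns a uniformly distributed value in [0, n-1].
 * Draws at or above the largest multiple of n that fits in the
 * range of rand are rejected, so every residue mod n is equally likely. */
int rand_below(int n)
{
    int limit = RAND_MAX - (RAND_MAX % n);  /* a multiple of n */
    int r;
    do {
        r = rand();
    } while (r >= limit);
    return r % n;
}

(For this question that would be rand_below(SIZE) + 1, although the shuffle described below avoids repeated range draws altogether.)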
for (int j = 0; j < i; j++)
So this algorithm attempts to find new numbers by rejecting those that duplicate previous numbers. When working on the last of n numbers, the average number of tries is n, if the selection of random numbers is uniform. When working on the second-to-last number, the average is n/2. When working on the third-to-last, the average is n/3. So the average number of tries for all the numbers is n + n/2 + n/3 + n/4 + n/5 + … + 1.
For 5000 elements, this sum is around 45,472.5. For 500,000 elements, it is around 6,849,790. So your program will average around 150 times as many tries with 500,000 elements as with 5,000. However, each try also takes longer: for the first try, you check against zero prior elements for duplicates. For the second, you check against one prior element. For try n, you check against n−1 elements. So, for the last of 500,000 elements, you check against 499,999 elements, and, on average, you have to repeat this 500,000 times. So the last selection alone takes around 500,000•499,999 = 249,999,500,000 units of work.
Refining this estimate, for each selection i, a successful attempt that gets completely through the loop of checking requires checking against all i−1 prior numbers. An unsuccessful attempt will average going halfway through the prior numbers. So, for selection i, there is one successful check of i−1 numbers and, on average, n/(n+1−i) unsuccessful checks of an average of (i−1)/2 numbers.
For 5,000 numbers, the average number of checks will be around 107,455,347. For 500,000 numbers, the average will be around 1,649,951,055,183. Thus, your program with 500,000 numbers takes more than 15,000 times as long as with 5,000 numbers.
When I tried to reduce the array size from 1 to 5000, it has been implemented.
I think you mean that with an array size of 5,000, the program completes execution in a short amount of time?
So, Carefully, I think it's a memory out phenomenon.
No, there is no memory issue here. Modern general-purpose computer systems easily handle static arrays of 500,000 int.
Finally, I'd appreciate it if you could comment on how to solve this problem.
Use a Fisher-Yates shuffle: Fill the array A with the integers from 1 to SIZE. Keep a counter, say d, of the number of selections completed so far, initially zero. Then pick a random index r from d to SIZE-1 (that is, d plus a random offset in [0, SIZE-1-d]). Move the number at that position to the front of the remaining portion by swapping A[r] with A[d]. Then increment d. Repeat until d reaches SIZE-1.
This will swap a random element of the initial array into A[0], then a random element from those remaining into A[1], then a random element from those remaining into A[2], and so on. (We stop when d reaches SIZE-1 rather than when it reaches SIZE because, once d reaches SIZE-1, there is only one more selection to make, but there is also only one number left, and it is already in the last position in the array.)
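A rough sketch in C of that shuffle for this task (my own illustration, not part of the answer above; since rand may only produce values up to 32,767 as noted earlier, two calls are combined to cover indices up to 499,999, which is adequate here though not perfectly uniform):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SIZE 500000

int sort[SIZE];

/* rand() is only guaranteed to produce 0..32767, so combine two calls to
 * cover indices up to SIZE-1. Not perfectly uniform, but fine for a sketch. */
static unsigned big_rand(void)
{
    return ((unsigned)rand() << 15) | (unsigned)rand();
}

int main(void)
{
    FILE *input = fopen("input.txt", "w");
    if (input == NULL)
        return 1;

    for (int d = 0; d < SIZE; d++)      /* fill with 1..SIZE */
        sort[d] = d + 1;

    srand((unsigned)time(NULL));
    for (int d = 0; d < SIZE - 1; d++) {
        int r = d + (int)(big_rand() % (unsigned)(SIZE - d)); /* index in [d, SIZE-1] */
        int tmp = sort[d];
        sort[d] = sort[r];
        sort[r] = tmp;
    }

    for (int d = 0; d < SIZE; d++)
        fprintf(input, "%d ", sort[d]);
    fclose(input);
    return 0;
}

Each of the numbers 1..500000 appears exactly once in the output, so no duplicate checking is needed at all.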
Purpose: To store the numbers 1 to 1000 in a random order.
My Code:
#include<stdlib.h>
#include<time.h>

int main(){
    int arr[1000]={0}, store[1000];
    for(int i=0;i<1000;i++){
        int no;
        while(1){
            srand(time(0));
            no=rand();
            no%=1001;
            if(no==0)
                continue;
            //This ensures the loop will continue until a unique random number is generated
            if(arr[no-1]!=no){
                arr[no-1]=no;
                break;
            }
        }
        store[i]=no;
    }
    return 0;
}
For me the code works perfectly fine; however, it took 58 minutes to execute. Is there a way to speed up the program?
Practical Purpose: I have around 4000 employees and I want to give each one of them a unique random number for an upcoming project.
I tried running the code with 1000 numbers to check the efficiency.
Create an array containing 1 to n. Iterate through the list and swap that entry with one that is randomly selected. You will then have a random list containing 1 to n.
From your first sentence, the numbers do not have to be random but only need to be in random order.
Therefore you can try a simple approach (a code sketch follows after these steps):
Create an array arr of n elements and initialize with values 1..n
run a loop (counter i) over range 0..n-1
Pick a random number x in range 0..n-i-1
Swap element at index i with index i+x
With this algorithm you don't need to worry about collisions of random numbers.
You swap the numbers and afterwards you decrease the range of candidates.
A number picked once is not available to pick in later steps.
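Here is a short sketch in C of these steps, sized for the 1000-number test from the question (the names are my own; for the 4000 employees you would just change N):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    enum { N = 1000 };           /* for the employees, set N to 4000 */
    int store[N];

    for (int i = 0; i < N; i++)  /* initialize with the values 1..N  */
        store[i] = i + 1;

    srand((unsigned)time(NULL));
    for (int i = 0; i < N - 1; i++) {  /* the last index needs no swap */
        int x = rand() % (N - i);      /* random offset in 0..N-i-1    */
        int tmp = store[i];            /* swap index i with index i+x  */
        store[i] = store[i + x];
        store[i + x] = tmp;
    }

    for (int i = 0; i < N; i++)
        printf("%d\n", store[i]);
    return 0;
}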
This solution is similar to William's answer. I don't really know if the result has better "randomness" or not.
Try to avoid branches on random numbers; the code will most likely run faster on modern processors.
This is because the processor is not able to predict which way a branch on a random number will go.
For example
while(1){
srand(time(0));
no=rand();
no%=1001;
if(no==0)
continue;
// ...
}
could be changed to
srand(time(0)); // better outside the loop
while(1) {
no = rand() % 1000 + 1;
// ...
}
Although it is not clearly stated in my exercise, I am supposed to implement radix sort recursively. I've been working on the task for days, but so far I have only managed to produce garbage, unfortunately. We are required to work with two methods. The sort method receives a certain array with numbers ranging from 0 to 999 and the digit we are looking at. We are supposed to generate a two-dimensional matrix here in order to distribute the numbers inside the array. So, for example, 523 is positioned at the fifth row and 27 is positioned at the 0th row, since it is interpreted as 027.
I tried to do this with the help of a switch-case construct, dividing the numbers inside the array by 100, checking the result, and then positioning the number with respect to it. Then, I somehow tried to build buckets that include only the numbers with the same digit, so for example, 237 and 247 would be thrown into the same bucket in the first "round". I tried to do this by taking the whole row of the "fields" matrix where we put in the values before.
In the putInBucket method, I am required to extend the bucket (which I managed to do right, I guess) and then return it.
I am sorry, I know that the code is total garbage, but maybe there's someone out there who understands what I am up to and can help me a little bit.
I simply don't see how I need to work with the buckets here, I don't even understand why I have to extend them, and I don't see any way of returning them back to the sort method (which, I think, I am required to do).
Further description:
The whole thing is meant to work as follows: We take an array with integers ranging from 0 to 999. Every number is then sorted by its first digit, as mentioned above. Imagine you have buckets denoted with the numbers ranging from 0 to 9. You start the sorting by putting 523 in bucket 5, 672 in bucket 6 and so on. This is easy when there is only one number (or no number at all) in one of the buckets. But it gets harder (and that's where recursion might come in handy) when you want to put more than one number in one bucket. The mechanism now goes as follows: We put two numbers with the same first digit in one bucket, for example 237 and 245. Now, we want to sort these numbers again by the same algorithm, meaning we call the sort method (somehow) again with an array that only contains these two numbers, but now we do it by looking at the second digit, so we would compare 3 and 4. We sort every number inside the array like this, and at the end, in order to get a sorted array, we start at the end, meaning at bucket 9, and then just put everything together. If we are at bucket 2, the algorithm would look into the recursive step, already receive the sorted array [237, 245], and deliver it in order to complete the whole thing.
My own problems:
I don't understand why we need to extend a bucket and I can't figure it out from the description. It is simply stated that we are supposed to do so. I'd imagine that we would do it to copy another element into it, because if we have the buckets from 0 to 9, putting two numbers into the same bucket would just mean that we would overwrite the first value. This might be the reason why we need to return the new, extended bucket, but I am not sure about that. Plus, I don't know how to go further from there. Even if I have an extended bucket now, it's not like I can simply stick it to the old matrix and copy another element into it again.
public static int[] sort(int[] array, int digit) {
    if (array.length == 0)
        return array;
    int[][] fields = new int[10][array.length];
    int[] bucket = new int[array.length];
    int i = 0;
    for (int j = 0; j < array.length; j++) {
        switch (array[j] / 100) {
            case 0: i = 0; break;
            case 1: i = 1; break;
            ...
        }
        fields[i][j] = array[j];
        bucket[i] = fields[i][j];
    }
    return bucket;
}

private static int[] putInBucket(int[] bucket, int number) {
    int[] bucket_new = new int[bucket.length + 1];
    for (int i = 1; i < bucket_new.length; i++) {
        bucket_new[i] = bucket[i - 1];
    }
    return bucket_new;
}

public static void main(String[] argv) {
    int[] array = readInts("Please type in the numbers: ");
    int digit = 0;
    int[] bucket = sort(array, digit);
}
You don't use digit in sort; that's quite suspicious.
The switch/case looks like a quite convoluted way to write i = array[j] / 100.
I'd recommend reading the Wikipedia description of radix sort.
The expression to extract a digit from a base 10 number is (number / Math.pow(10, digit)) % 10.
Note that you can count digits from left to right or right to left, make sure you get this right.
I suppose you first want to sort for digit 0, then for digit 1, then for digit 2. So there should be a recursive call at the end of sort that does this.
Your buckets array needs to be 2-dimensional. You'll need to call it this way: buckets[i] = putInBucket(buckets[i], array[j]). If you handle null in putInBucket, you don't need to initialize it.
The reason why you need a 2d bucket array and putInBucket (instead of your fixed-size fields matrix) is that you don't know how many numbers will end up in each bucket.
The second phase (reading back from the buckets to the array) is missing before the recursive call
Make sure to stop the recursion after 3 digits.
Good luck
Currently my program allows the user to enter 5 integers, which are used to create an average. This is set to five, as the loop is broken after the fifth number is entered.
I am trying to implement a method which will let the user add as many numbers as they like to an array, which I can then use to create an average, without a limit on how many numbers can be entered.
I have come across a few problems. Firstly, I cannot create an array which is dynamic, as I have no idea how many numbers the user may wish to enter, which means I can't give it a definite size.
Secondly, the way my program currently creates the average is by looping through the elements in the array and adding them consecutively to an integer, from which the average is made. I cannot specify the limit for the loop to keep running if I don't know the size of the array.
Hopefully my example explains this better.
#include <stdio.h>
#include <string.h>

int main()
{
    int i = 0;
    int arrayNum[5];
    int temp = 1;
    int anotherTemp = 0;
    int answer = 0;

    printf("Enter as many numbers as you like, when finished enter a negative number\n");
    for(i = 0; i < 5; i++)
    {
        scanf("%d", &temp);
        arrayNum[i] = temp;
        anotherTemp = anotherTemp + arrayNum[i];
    }
    answer = anotherTemp / 5;
    printf("Average of %d,%d,%d,%d,%d = %d",arrayNum[0],arrayNum[1],arrayNum[2],arrayNum[3],arrayNum[4],answer);
    return 0;
}
Although this may not be the best way to implement it, it does work when the amount of numbers is specified beforehand.
What would be the best way to get around this and allow the user to enter as many numbers as necessary?
Edit: Although I needed to use an array, I have decided that it is not necessary, as the solution is much simpler without being restricted to it.
In terms of code simplicity, you might want to check out the realloc() function; you can allocate an initial array of some size, and if the user enters too many numbers call realloc() to get yourself a bigger array and continue from there.
You don't, however, actually need to keep the numbers as you go along at all, at least if you only care about the average:
int input;
int sum = 0;
int count = 0;
int average;

while (1) {
    scanf("%d", &input);
    if (input < 0) {
        break;
    }
    sum += input;
    count++;
}
average = sum / count;
If you're trying to compute an average, then you don't need to save the numbers. Save yourself the work of worrying about the array. Simply accumulate (add) each number to a single total, count each number, then divide when you're done. Two variables are all that you need.
With this method, you aren't in any risk of overflowing your array, so you can use a while loop... while (temp != -1)
Basically you start with a dynamically allocated array with a fixed size, and then allocate a new array that is bigger (say, twice as big as the initial size) and copy the contents of the old array to the new one whenever you run out of space.
For the second part of the problem, keep a counter of the number of items the user entered and use it when averaging.
Something like this.
Use a dynamic array data structure, like Vector in Java (java.util.Vector).
You can implement such a dynamic array yourself easily (a code sketch follows after these steps):
allocate array of size N
as soon as you need more elements than N, allocate a new bigger array (e.g. with size N+10), copy the content of the old array into the new array and set your working reference to the new array and your array size variable N to the new size (e.g. N+10). Free the old array.
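For completeness, here is a rough sketch in C of such a growable array applied to the averaging program, using realloc as suggested earlier and doubling the capacity instead of growing by a fixed amount (both work; doubling keeps the total amount of copying linear). The variable names and the starting capacity are my own choices:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int capacity = 4;                 /* start small, grow as needed */
    int count = 0;
    int *numbers = malloc(capacity * sizeof *numbers);
    int temp, sum = 0;

    if (numbers == NULL)
        return 1;

    printf("Enter as many numbers as you like, a negative number ends input\n");
    while (scanf("%d", &temp) == 1 && temp >= 0) {
        if (count == capacity) {      /* out of space: grow the array */
            int *bigger = realloc(numbers, 2 * capacity * sizeof *numbers);
            if (bigger == NULL) {
                free(numbers);
                return 1;
            }
            numbers = bigger;
            capacity *= 2;
        }
        numbers[count++] = temp;
        sum += temp;
    }

    if (count > 0)
        printf("Average of %d numbers = %d\n", count, sum / count);

    free(numbers);
    return 0;
}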