I'm currently thinking about a code question about the C language. It's a game called Blackjack, and here is the original question:
In practice, one needs to play the game a large number of times to get an accurate expected value. Thus, each row of the table should be the result of at least 100,000 experiments. For example, for a particular target, say 10 points, two cards are drawn first. If the sum of these two cards exceeds 10 points, the experiment is a failure. If the sum is exactly 10 points, it is a success. If it is less than 10 points, another card is drawn. In case of neither a failure (more than 10 points) nor a success (exactly 10 points), cards are drawn continuously until a conclusive result is obtained. After 100,000 experiments, the probability of getting 10 points should be printed together with the average number of cards needed to get 10 points (the third column of the table).
Below is my current code:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int r1, r2, count, sum, cardsadd, k;
    int aftersum = sum + k;
    unsigned int total, cardsum;
    float percent, cards;

    printf("Points Probability #Cards\n");
    for (int points = 4; points <= 21; points++) {
        count = 0;
        total = 0;
        cardsum = 0;
        do {
            r1 = rand() % 13 + 1;
            r2 = rand() % 13 + 1;
            if (r1 > 10) r1 = 10;
            if (r2 > 10) r2 = 10;
            sum = r1 + r2;
            if (r1 == 1 && r2 == 1) sum = 12;
            else if ((r1 == 1 || r2 == 1) && r1 != r2) sum += 10;
            count++;
            cardsadd = 0;
            if (sum == points) {
                total++;
                cardsum += 2;
            }
            else if (sum < points) {
                while (sum < points) {
                    do {
                        cardsadd += 1;
                        k = rand() % 13 + 1;
                        if (k > 10) k = 10;
                        else if (k == 1) {
                            if (sum <= 10) k = 11;
                        }
                    } while (aftersum > points);
                    sum += k;
                }
                total += 1;
                cardsum += aftersum;
            }
        } while (count < 100000);
        percent = (float)total / 1000;
        cards = (float)cardsum / 100000;
        printf(" %2d %5.2lf%% ", points, percent);
        printf("%.2lf\n", cards);
    }
    return 0;
}
In my code, the variable count is the number of experiments executed for each target (4 to 21), and total is the number of times the sum of the cards successfully equals the points we want in the beginning (the for loop). cardsum is the total number of cards needed over the 100,000 tests, and cardsadd is used when the first two cards drawn are less than the target points; we then keep drawing until the sum equals the target points.
I don't have the correct answer yet, but I know my code is surely wrong, as I can clearly see that the average number of cards needed to get 4 points is not 2.00.
I hope someone can tell me how I should correct my code to get the right answer. If anything is not clearly explained, I will give a more complete explanation of the parts. Thanks for helping.
With an ace you have two possible scores (the soft and the hard).
You cannot compare points with only one score when you have an ace, because, for example, with an ace and a 5 you can have 6 or 16.
You need to modify your program to take both of these scores into consideration (in case of an ace).
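For what it's worth, here is a minimal sketch of one way to carry both scores while drawing. It is not a fix of the exact code above; the function name run_experiment and its interface are invented for the example, and it assumes rand() has been seeded elsewhere.

#include <stdlib.h>

/* Returns 1 on success (either the hard or the soft total hits `points`),
   0 on failure, and stores the number of cards drawn in *ncards.
   A sketch only, not the poster's exact fix. */
int run_experiment(int points, int *ncards)
{
    int hard = 0;      /* total with every ace counted as 1 */
    int has_ace = 0;   /* did we draw at least one ace?     */
    int cards = 0;

    for (;;) {
        int c = rand() % 13 + 1;
        if (c > 10) c = 10;          /* J, Q, K count as 10 */
        if (c == 1) has_ace = 1;
        hard += c;
        cards++;

        /* soft total: one ace counted as 11, if that does not bust */
        int soft = (has_ace && hard + 10 <= 21) ? hard + 10 : hard;

        if (cards >= 2) {
            if (hard == points || soft == points) { *ncards = cards; return 1; }
            if (hard > points)                    { *ncards = cards; return 0; }
        }
    }
}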
My development environment: Visual Studio.
Now, I have to create an input file and print random numbers from 1 to 500000 to the file without duplicates. First, I considered that if I use a big size of local array, problems related to heap may happen. So, I tried to declare it as a static array. Then, in the main function, I put random numbers into the array without overlapping and wrote the numbers to the input file by accessing the array elements. However, runtime errors(the continuous blinking of the cursor in the console window) continue to occur.
The source code is as follows.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SIZE 500000
int sort[500000];
int main()
{
FILE* input = NULL;
input = fopen("input.txt", "w");
if (sort != NULL)
{
srand((unsigned)time(NULL));
for (int i = 0; i < SIZE; i++)
{
sort[i] = (rand() % SIZE) + 1;
for (int j = 0; j < i; j++)
{
if (sort[i] == sort[j])
{
i--;
break;
}
}
}
for (int i = 0; i < SIZE; i++)
{
fprintf(input, "%d ", sort[i]);
}
fclose(input);
}
return 0;
}
When I tried to reduce the array size from 1 to 5000, it has been implemented. So, Carefully, I think it's a memory out phenomenon. Finally, I'd appreciate it if you could comment on how to solve this problem.
“First, I considered that if I use a big size of local array, problems related to heap may happen.”
That does not make any sense. Automatic local objects generally come from the stack, not the heap. (Also, “heap” is the wrong word; a heap is a particular kind of data structure, but the malloc family of routines may use other data structures for managing memory. This can be referred to simply as dynamically allocated memory or allocated memory.)
However, runtime errors(the continuous blinking of the cursor in the console window)…
Continuous blinking of the cursor is normal operation, not a run-time error. Perhaps you are trying to say your program continues executing without ever stopping.
#define SIZE 500000
...
sort[i] = (rand() % SIZE) + 1;
The C standard only requires rand to generate numbers from 0 to 32767. Some implementations may provide more. However, if your implementation does not generate numbers up to 499,999, then it will never generate the numbers required to fill the array using this method.
Also, using % to reduce the rand result skews the distribution. For example, if we were reducing modulo 30,000, and rand generated numbers from 0 to 44,999, then rand() % 30000 would generate the numbers from 0 to 14,999 each two times out of every 45,000 and the numbers from 15,000 to 29,999 each one time out of every 45,000.
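As an aside, a common way to remove that skew is to reject the rand() values that fall into the incomplete final bucket. A small sketch (the helper name uniform_below is invented here):

#include <stdlib.h>

/* Returns a uniformly distributed value in 0 .. n-1 (assumes 0 < n <= RAND_MAX).
   rand() results falling into the incomplete final "bucket" are rejected, so
   no residue class is over-represented.  Just a sketch of the idea above. */
int uniform_below(int n)
{
    unsigned long range = (unsigned long)RAND_MAX + 1;      /* number of rand() outcomes */
    unsigned long limit = range - range % (unsigned long)n; /* largest multiple of n     */
    unsigned long r;
    do {
        r = (unsigned long)rand();
    } while (r >= limit);
    return (int)(r % (unsigned long)n);
}

Note that this only removes the skew; it does not help when RAND_MAX itself is smaller than the range you need, which is the separate problem described above.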
for (int j = 0; j < i; j++)
So this algorithm attempts to find new numbers by rejecting those that duplicate previous numbers. When working on the last of n numbers, the average number of tries is n, if the selection of random numbers is uniform. When working on the second-to-last number, the average is n/2. When working on the third-to-last, the average is n/3. So the average number of tries for all the numbers is n + n/2 + n/3 + n/4 + n/5 + … + 1.
For 5000 elements, this sum is around 45,472.5. For 500,000 elements, it is around 6,849,790. So your program will average around 150 times as many tries with 500,000 elements as with 5,000. However, each try also takes longer: for the first try, you check against zero prior elements for duplicates. For the second, you check against one prior element. For try n, you check against n−1 elements. So, for the last of 500,000 elements, you check against 499,999 elements, and, on average, you have to repeat this 500,000 times. So the last try takes around 500,000 × 499,999 = 249,999,500,000 units of work.
Refining this estimate, for each selection i, a successful attempt that gets completely through the loop of checking requires checking against all i−1 prior numbers. An unsuccessful attempt will average going halfway through the prior numbers. So, for selection i, there is one successful check of i−1 numbers and, on average, n/(n+1−i) unsuccessful checks of an average of (i−1)/2 numbers.
For 5,000 numbers, the average number of checks will be around 107,455,347. For 500,000 numbers, the average will be around 1,649,951,055,183. Thus, your program with 500,000 numbers takes more than 15,000 times as long as with 5,000 numbers.
When I tried to reduce the array size from 1 to 5000, it has been implemented.
I think you mean that with an array size of 5,000, the program completes execution in a short amount of time?
So, Carefully, I think it's a memory out phenomenon.
No, there is no memory issue here. Modern general-purpose computer systems easily handle static arrays of 500,000 int.
Finally, I'd appreciate it if you could comment on how to solve this problem.
Use a Fisher-Yates shuffle: fill the array A with the integers from 1 to SIZE. Keep a counter, say d, of the number of selections completed so far, initially zero. Then pick a random index r from d to SIZE-1 and swap A[r] with A[d]. Then increment d. Repeat until d reaches SIZE-1.
This will swap a random element of the initial array into A[0], then a random element from those remaining into A[1], then a random element from those remaining into A[2], and so on. (We stop when d reaches SIZE-1 rather than when it reaches SIZE because, once d reaches SIZE-1, there is only one more selection to make, but there is also only one number left, and it is already in the last position in the array.)
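For illustration, here is a hedged sketch of the program rewritten around that shuffle. rand_below is an invented helper that combines two rand() calls, since, as noted above, rand() alone may not reach 499,999 on your implementation (the modulo step still carries a tiny skew, which is acceptable for a sketch).

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SIZE 500000

int sort[SIZE];

/* rand() may only give 0..32767, so combine two calls to cover 0..SIZE-1. */
static int rand_below(int n)
{
    unsigned long r = ((unsigned long)(rand() & 0x7FFF) << 15)
                    |  (unsigned long)(rand() & 0x7FFF);
    return (int)(r % (unsigned long)n);
}

int main(void)
{
    FILE *input = fopen("input.txt", "w");
    if (input == NULL)
        return 1;

    srand((unsigned)time(NULL));

    for (int i = 0; i < SIZE; i++)       /* fill with 1..SIZE */
        sort[i] = i + 1;

    /* Fisher-Yates: swap a random remaining element into position d */
    for (int d = 0; d < SIZE - 1; d++) {
        int r = d + rand_below(SIZE - d);
        int tmp = sort[d];
        sort[d] = sort[r];
        sort[r] = tmp;
    }

    for (int i = 0; i < SIZE; i++)
        fprintf(input, "%d ", sort[i]);
    fclose(input);
    return 0;
}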
Imagine 10 cars randomly, uniformly distributed on a round track of length 1. If the positions are represented by a C double in the range [0,1), then they can be sorted, and the gap between two cars is the position of the car in front minus the position of the car behind. The last gap needs 1 added to it to account for the discontinuity.
In the program output, the last column has very different statistics and distribution from the others. The rows correctly add to 1. What's going on?
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
int compare (const void * a, const void * b)
{
if (*(double*)a > *(double*)b) return 1;
else if (*(double*)a < *(double*)b) return -1;
else return 0;
}
double grand_f_0_1(){
static FILE * fp = NULL;
uint64_t bits;
if(fp == NULL) fp = fopen("/dev/urandom", "r");
fread(&bits, sizeof(bits), 1, fp);
return (double)bits * 5.421010862427522170037264004349e-020; // https://stackoverflow.com/a/26867455
}
int main()
{
const int n = 10;
double values[n];
double diffs[n];
int i, j;
for(j=0; j<10000; j++) {
for(i=0; i<n; i++) values[i] = grand_f_0_1();
qsort(values, n, sizeof(double), compare);
for(i=0; i<(n-1); i++) diffs[i] = values[i+1] - values[i];
diffs[n-1] = 1. + values[0] - values[n-1];
for(i=0; i<n; i++) printf("%.5f%s", diffs[i], i<(n-1)?"\t":"\n");
}
return(0);
}
Here is a sample of the output. The first column represents the gap between the first and second car. The last column represents the gap between the 10th car and the first car, across the start/finish line. Large numbers like .33 and .51 are much more common in the last column, and very small numbers are relatively rare.
0.13906 0.14241 0.24139 0.29450 0.01387 0.07906 0.02905 0.03160 0.00945 0.01962
0.01826 0.36875 0.04377 0.05016 0.05939 0.02388 0.10363 0.04640 0.03538 0.25037
0.04496 0.05036 0.00536 0.03645 0.13741 0.00538 0.24632 0.04452 0.07750 0.35176
0.00271 0.15540 0.03399 0.05654 0.00815 0.01700 0.24275 0.25494 0.00206 0.22647
0.34420 0.03226 0.01573 0.08597 0.05616 0.00450 0.05940 0.09492 0.05545 0.25141
0.18968 0.34749 0.07375 0.01481 0.01027 0.00669 0.04306 0.00279 0.08349 0.22796
0.16135 0.02824 0.07965 0.11255 0.05570 0.05550 0.05575 0.05586 0.07156 0.32385
0.12799 0.18870 0.04153 0.16590 0.02079 0.06612 0.08455 0.14696 0.13088 0.02659
0.00810 0.06335 0.13014 0.06803 0.01878 0.10119 0.00199 0.06656 0.20922 0.33263
0.00715 0.03261 0.05779 0.47221 0.13998 0.11044 0.06397 0.00238 0.04157 0.07190
0.33703 0.02945 0.06164 0.01555 0.03444 0.14547 0.02342 0.03804 0.16088 0.15407
0.10912 0.14419 0.04340 0.09204 0.23033 0.09240 0.14530 0.00960 0.03412 0.09950
0.20165 0.09222 0.04268 0.17820 0.19159 0.02074 0.05634 0.00237 0.09559 0.11863
0.09296 0.01148 0.20442 0.07070 0.05221 0.04591 0.08455 0.25799 0.01417 0.16561
0.08846 0.07075 0.03732 0.11721 0.03095 0.24329 0.06630 0.06655 0.08060 0.19857
0.06225 0.10971 0.10978 0.01369 0.13479 0.17539 0.17540 0.02690 0.00464 0.18744
0.09431 0.10851 0.05079 0.07846 0.00162 0.00463 0.06533 0.18752 0.30896 0.09986
0.23214 0.11937 0.10215 0.04040 0.02876 0.00979 0.02443 0.21859 0.15627 0.06811
0.04522 0.07920 0.02432 0.01949 0.03837 0.10967 0.11123 0.01490 0.03846 0.51915
0.13486 0.02961 0.00818 0.11947 0.17204 0.08967 0.09767 0.03349 0.08077 0.23426
Your code is OK. The mean value of the last difference is two times larger than the others.
The paradox comes from the fact that, rather than selecting 10 points on a unit interval, one actually tries to divide it into 11 sub-intervals with 10 cuts. Therefore the expected length of each sub-interval is 1/11.
The expected difference between consecutive points is 1/11, except for the last pair, because that difference contains both the last sub-interval (between the last point and 1) and the first one (between 0 and the first point).
Thus the mean of the last difference is 2/11.
"There is no special points on the circle"
The thing is that, on a circle, every car looks the same, and so there is no need to relate positions to zero: you can just relate them to the first car. This means that you can fix the first car at zero and treat the random positions of the other cars as measured from it.
And so the convenient solution is to fix the first car at zero and think of the 9 numbers you still generate as positions relative to the first one.
Hope it's a satisfying answer :-)
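To make that concrete, here is a small sketch of the inner loop with the first car pinned at zero; it reuses compare() and grand_f_0_1() from the code above, so it is a drop-in fragment rather than a complete program.

/* Pin the first car at 0 and generate only the other nine positions. */
double pos[10], gaps[10];
pos[0] = 0.0;                              /* the reference car                */
for (int i = 1; i < 10; i++)
    pos[i] = grand_f_0_1();                /* the other cars, measured from it */
qsort(pos, 10, sizeof(double), compare);   /* pos[0] stays 0 after sorting     */
for (int i = 0; i < 9; i++)
    gaps[i] = pos[i + 1] - pos[i];
gaps[9] = 1.0 - pos[9];                    /* wrap-around gap back to the reference car */
/* Now all ten gaps, including gaps[9], share one distribution with mean 1/10. */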
IDENTITY (or Which diff is first?)
If 10 cars with labels ("1", "2" and so on) are placed randomly on a circle, the difference from car "1" to the next car will average 1/10.
While sorting, the first diff "loses its identity": what it refers to changes. It is similar to how, if you chose the first diff to be the longest one, it would average more. Choosing it based on the relation of the cars to zero skews (or, in nicer terms, changes) things in a similar manner.
The first difference (and the 2nd, 3rd, etc.) just becomes something different. Defining it as the difference from a given car is more intuitive and gives you the option of using that car as a reference (playing nicely with the circle's symmetry); the distribution of the rest of the cars with respect to it is uniform. Dealing with the smallest of the random points is not that simple.
Summary: define what you're calculating, know your definitions, and remember that probability is non-intuitive.
After 3 months of puzzling over this, I have an explanation that is intuitive, at least to me. It adds to the answers provided by @wojand and @tstanisl.
My original code is correct: it uniformly distributes points on the interval, and the forward differences of all points have the same statistical distribution. The paradox is that the forward difference of the highest-value point, the one that crosses the 0-1 discontinuity, is on average twice the others, and its distribution has a different shape.
The reason this forward difference has a different distribution is that it contains the value 0. Larger forward differences (gaps) are more likely to contain any fixed value, simply because they are larger.
We could search for the gap that contains 1/pi, for example, and it too would have the same atypical distribution.
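A quick way to check that claim empirically, as a standalone sketch (using plain rand() instead of /dev/urandom, which is enough for this purpose):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N       10
#define TRIALS  100000

/* comparator equivalent to the one in the question */
static int cmp(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    const double ref = 0.3183098861837907;   /* 1/pi, an arbitrary fixed value */
    double sum = 0.0;

    srand((unsigned)time(NULL));
    for (int t = 0; t < TRIALS; t++) {
        double v[N];
        for (int i = 0; i < N; i++)
            v[i] = (double)rand() / ((double)RAND_MAX + 1.0);
        qsort(v, N, sizeof(double), cmp);

        /* find the gap containing ref; default to the wrap-around gap */
        double gap = 1.0 + v[0] - v[N - 1];
        for (int i = 0; i < N - 1; i++)
            if (v[i] <= ref && ref < v[i + 1])
                gap = v[i + 1] - v[i];
        sum += gap;
    }
    /* prints roughly 0.18 (about 2/11), versus 1/10 for a "typical" gap */
    printf("average gap containing 1/pi: %.4f\n", sum / TRIALS);
    return 0;
}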
I started to learn recursion and I am stuck on this simple problem. I believe there are more optimized ways to do this, but first I'm trying to learn the brute-force approach.
I have bag A and bag B, and n items, each with some time (a float with two decimal places). The idea is to distribute the items between the two bags and obtain the minimum difference between the two bags, by trying all possible distributions.
I considered only one bag (let's say bag A), since the other bag will contain all the items that are not in bag A, and therefore the difference will be the absolute value of the total time minus 2 * the sum of the times of the items in bag A.
I'm calling my recursive function like this:
min = total_time;
recursive(0, items_number - 1, 0);
And the code for the function is this:
void recursive(int index, int step, float sum) {
sum += items_time[index];
float difference = fabs(total_time - 2 * sum);
if (min > difference) {
min = difference;
}
if (!(min == 0.00 || step == 1 || sum > middle_time)) {
int i;
for (i = 0; i < items_number; i++) {
if (i != index) {
recursive(i, step - 1, sum);
}
}
}
}
Imagine I have 4 items with the times 1.23, 2.17 , 2.95 , 2.31
I'm getting the result 0.30. I believe this is the correct result, but I'm almost certain that if it is, it's pure chance, because if I try bigger cases the program stops after a while, probably because the recursion tree gets too big.
Can someone point me in some direction?
Okay, after the clarification, let me (hopefully) point you in a direction:
Let's assume that you know what n is, as mentioned in "n items". In your example, 2n was 4, making n = 2. Let's pick another n, let it be 3 this time, and our times shall be:
1.00
2.00
3.00
4.00
5.00
6.00
Now, we can already tell what the answer is; what you said is all correct: optimally, each of the bags will have its n = 3 times sum up to middle_time, which is 21 / 2 = 10.5 in this case. Since integers can never sum to a number with decimal points, 10.5 : 10.5 can never be achieved in this example, but 10 : 11 can, and you can get 10 through 6.00 + 3.00 + 1.00 (3 elements), so... yeah, the answer is simply 1.
How would you let a computer calculate it? Well; recall what I said at the beginning:
Let us assume that you know what n is.
In that case a naive programmer would probably simply put all of this inside 2 or 3 nested for loops: 2 if he/she knew that the other half is determined as soon as you pick one half (by simply fixing the very first element in our group, since that element has to be in one of the groups), as you also know; 3 if he/she didn't know that. Let's do it with 2:
...
float difference, pair_sum, sum;
int i;
for ( i = 1; i < items_number; i++ ) {
    pair_sum = items_time[0] + items_time[i];
    int j;
    for ( j = i + 1; j < items_number; j++ ) {
        sum = pair_sum + items_time[j];              /* exactly three items: 0, i and j */
        difference = fabs( total_time - 2 * sum );
        if ( min > difference ) {
            min = difference;
        }
    }
}
...
Let me comment on the code a little for faster understanding: on the first cycle, it will add up the 0th time, the 1st time and then the 2nd time, as you can see; then it will do the same check you made (calculate the difference and compare it with min). Let us call this the 012 group. The next group that will be checked is 013, then 014, then 015; then 023, and so on... Every possible combination that splits the 6 into two groups of 3 will be checked.
This operation shouldn't be at all tiresome for the computer. Even with this simple approach, the maximum number of tries will be the number of combinations of 3 you could pick from 6 unique elements, divided by 2. In maths, people denote this as C(6, 3), which evaluates to (6 * 5 * 4) / (3 * 2 * 1) = 20; divided by 2, that's 10.
My guess is that the computer wouldn't have a problem even if n were 10, making the number of combinations as high as C(20, 10) / 2 = 92 378. It would, however, be a problem for you to write 9 nested for loops by hand...
Anyway, the good thing is, you can nest these loops recursively. Here I will end my guidance. Since you are apparently studying recursion already, it wouldn't be good for me to offer a solution at this point. I can assure you that it is do-able.
Also, the version I made on my end can do it within a second for up to items_number = 22, without any optimizations, simply with brute force. That makes 352 716 combinations, and my machine is just a simple Windows tablet...
Your problem is called the Partition Problem. It is NP-hard and after some point, it will take a very long time to complete: the tree gets exponentially bigger as the number of cases to test grows.
The partition problem is well known and well documented on the internet. There exist some optimized solutions.
Your approach is not the naive brute-force approach, which would just walk through the list of items and put each one into bag A or bag B recursively, choosing the case with the minimum difference, for example:
double recurse(double arr[], int n, double l, double r)
{
double ll, rr;
if (n == 0) return fabs(l - r);
ll = recurse(arr + 1, n - 1, l + *arr, r);
rr = recurse(arr + 1, n - 1, l, r + *arr);
if (ll > rr) return rr;
return ll;
}
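For reference, a small (hypothetical) driver for the function above, using the values from the question; it assumes recurse() is pasted into the same file, and fabs needs <math.h>:

#include <math.h>
#include <stdio.h>

double recurse(double arr[], int n, double l, double r);  /* defined above */

int main(void)
{
    double items[] = { 1.23, 2.17, 2.95, 2.31 };      /* the asker's example */
    printf("%.2f\n", recurse(items, 4, 0.0, 0.0));    /* prints 0.30 */
    return 0;
}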
(This code is very naive: it doesn't quit early on clearly non-optimal cases, and it also wastes time by calculating every case twice, with bags A and B swapped. It is brute force, however.)
Your maximum recursion depth is the number of items n, and you call the recursive function 2^n - 1 times.
In your code, you can put the same item into a bag over and over:
for (i = 0; i < items_number; i++) {
if (i != index) {
recursive(i, step - 1, sum);
}
}
This loop prevents you from treating the current item again, but it will happily treat items that were put into the bag in earlier recursion levels a second (or third) time. If you want to use that approach, you must keep a state of which item is in which bag.
Also, I don't understand your step. You start with step - 1 and stop the recursion when step == 1. That means you are considering n - 2 items. I understand that the other items are in the other bag, but that's a weird condition that won't let you find the solution to, say, {8.0, 2.4, 2.4, 2.8}.
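To illustrate the "keep a state" remark, here is a minimal sketch (not the asker's code). It mirrors the question's globals (items_time, items_number, total_time, min); the in_bag_a array, the bound MAX_ITEMS, and the function name recursive_tracked are invented for the example.

#include <math.h>

#define MAX_ITEMS 64          /* hypothetical bound, just for the sketch */

float items_time[MAX_ITEMS];  /* mirrors the question's globals */
int   items_number;
float total_time, min;
int   in_bag_a[MAX_ITEMS];    /* 1 if the item is currently in bag A */

void recursive_tracked(float sum)
{
    float difference = fabs(total_time - 2 * sum);
    if (min > difference)
        min = difference;

    for (int i = 0; i < items_number; i++) {
        if (!in_bag_a[i]) {
            in_bag_a[i] = 1;                        /* put item i into bag A   */
            recursive_tracked(sum + items_time[i]); /* explore further choices */
            in_bag_a[i] = 0;                        /* backtrack               */
        }
    }
}
/* Note: this still visits each subset once per ordering of its items; starting
   the loop at the index after the last item added would remove that waste. */

Setting min = total_time and calling recursive_tracked(0) then explores every assignment.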
I need to write a C program that reads a number and outputs its factorial in the following form:
4!=(2^3)*(3^1)
5!=(2^3)*(3^1)*(5^1)
I am able to find the prime numbers 2, 3 and 5, but how do I figure out how many times each of them occurs (^3, ^1, ^1)?
Code:
#include <stdio.h>

int main() {
int num,i,count,n;
printf("Enter to find prime numbers: ");
scanf("%d",&n);
for(num = 2; num<=n;num++) {
count = 0;
for(i=2;i<=num/2;i++) {
if(num%i==0)
{
count++;
break;
}
}
if(count==0 && num!= 1)
printf("%d ",num);
}
return 0;
}
Without going into any code, I'll explain the problem with the way you are doing things...
Let us say you want to find the prime factors of the factorial of 5. So you do:
5! = 2 x 3 x 4 x 5 (this is your outer loop, for (num = ...))
Let us say that for a particular iteration, num = 4. Then you have another iteration over i that checks whether each number up to num/2 is a factor. Now for a small value like 5! this is not a problem. Consider a bigger number like 25!. In this case, your outer loop will be:
25! = 1 x 2 x 3 x ... 22 x 23 x 24 x 25
Now your outer iteration num goes much further. Consider the number 24. 24/2 = 12. Your program is going to print all factors of 24 up to 12, which happen to be 2, 3, 4, 6, 8, and 12. I am sure that is not what you want.
First, do not attempt to compute the factorial of large numbers directly. You will run into overflow issues. Next, I'll give you some pointers and hope you can solve the problem on your own. It's a very cool problem, so I really hope you are able to solve it:
1. Study the prime sieve algorithm (http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes). You will not be using this directly, only the ideas mentioned there.
2. Create two arrays. The first one will contain the prime factors, while the second one will contain the total count of each factor occurring in the factorial.
3. For a particular num, you need to iterate not with the i that you have used, but over the values in your prime array.
3.1. Use the method explained by Barmar to find the number of times this num is divisible by each of those factors, and update the corresponding counts in the count array.
4. Print out the factors and counts that you have obtained (a rough sketch of these steps follows this list).
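Here is the promised sketch of those steps, with the caveats that the array bound of 200 primes is arbitrary (enough for n up to about 1200), that it reuses the question's trial-division primality test rather than an actual sieve, and that it is only one way to do it:

#include <stdio.h>

int main(void)
{
    int n;
    printf("Enter n: ");
    if (scanf("%d", &n) != 1 || n < 2) return 1;

    int primes[200], counts[200], nprimes = 0;   /* enough primes for small n */

    /* collect the primes up to n (trial division, as in the question) */
    for (int num = 2; num <= n; num++) {
        int is_prime = 1;
        for (int i = 2; i <= num / 2; i++)
            if (num % i == 0) { is_prime = 0; break; }
        if (is_prime) { primes[nprimes] = num; counts[nprimes] = 0; nprimes++; }
    }

    /* factor every term of n! = 2 * 3 * ... * n and add up the exponents,
       so n! itself is never computed and nothing overflows */
    for (int num = 2; num <= n; num++) {
        int m = num;
        for (int p = 0; p < nprimes && m > 1; p++)
            while (m % primes[p] == 0) { m /= primes[p]; counts[p]++; }
    }

    printf("%d!=", n);
    for (int p = 0; p < nprimes; p++)
        printf("(%d^%d)%s", primes[p], counts[p], p < nprimes - 1 ? "*" : "\n");
    return 0;
}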
Finally, I think it's a pretty good question. It teaches you how to avoid overflow errors and still solve the problem on a computer. It can teach you dynamic memory allocation and memory management skills, if that is the way you want to go. It also helps you to think critically about a problem. You don't deserve a -1. I have increased your rating.
Have fun programming, and keep thinking critically about each step in your program.
Cheers!
/* Repeatedly divide n by each candidate factor and count how many times it divides. */
for (num = 2; num <= n; ++num) {
    count = 0;
    while (n % num == 0) {
        n /= num;
        ++count;
    }
    if (count != 0)
        printf("(%d^%d)", num, count);
}
I'm learning basic C on my own and trying to create a poker client. I have an array with cards (called kortArray) and a player's hand (called kortHand). My implementation does not shuffle the deck; instead it adds all 52 cards in sequence and then randomly selects 5 cards from the deck. I've added a flag (called draget) which tells whether a card has been picked up before or not.
Now, when I run the algorithm below, it usually generates five random numbers which make up the player's or computer's hand. But sometimes it generates fewer than five numbers, even though I've specifically stated that it should generate five accepted values.
I have two loops: one that runs five times, and a nested one that runs until it finds a card which hasn't yet been picked. The printf in the middle tells me that this algorithm doesn't always generate five accepted numbers, and when that happens the player's hand contains cards with nonsense values.
srand((unsigned)(time(0)));
for (i = 0; i < 5; i++) {
    int x = rand() % 52 + 1;
    while (kortArray[x].draget != 1) {
        x = rand() % 52 + 1;
        if (kortArray[x].draget != 1) {
            printf("%i\n", x);
            kortArray[x].draget = 1;
            kortHand[i] = kortArray[x];
        }
    }
}
The problem still lies in the +1 for the random numbers.
Also, you first check, with the first assignment to x, whether the card is already picked, and then you assign x to another card anyway.
Why not use something like:
int nr_cards_picked = 0; /* Number of uniquely picked cards in hand */
/* Continue picking cards until 5 unique cards are picked. */
while (nr_cards_picked < 5) {
    x = rand() % 52; /* Take a random card */
    if (kortArray[x].draget == 0) {
        /* Pick this card. */
        kortArray[x].draget = 1;                  /* Card is picked */
        kortHand[nr_cards_picked] = kortArray[x]; /* Add picked card to hand */
        nr_cards_picked++;
    }
}
Forgive compiler errors; I don't have a compiler near here.
This way you only have the random number call in one place.
Theoretically it might never end, but this is not likely.
You have:
int x = rand()%52+1;
while (kortArray[x].draget!=1){
x = rand()%52;
Arrays in C are indexed starting at 0. Your first call to rand() will generate a value starting at 1. Assuming that you declared kortArray[] to hold 52 values, there is about a 2% chance that you will overrun the array.
Your first call to rand() generates values in the range 1..52. Your second call generates 0..51. ONE OF THEM IS WRONG.
A few things of note.
First of all, random number generators are not guaranteed to be uniform if you apply a mod operation to them. It is far better to divide the range into 52 segments and choose that way.
Secondly, you would be far better off moving your call to generate a random number inside the while loop to the end, or simply not generating one at the beginning of the for loop.
Where your problem comes into play is that you sometimes skip the while loop entirely, because you generate a random number before you enter the loop.
Given all of this, I would do code somewhat as follows:
srand((unsigned)(time(0)));
for (i = 0; i < 5; i++) {
    do {
        /* scale rand() into 0..51 without %, as suggested above */
        x = (int)((double)rand() / ((double)RAND_MAX + 1.0) * 52);
    } while (kortArray[x].draget == 1);   /* re-roll while that card is already taken */
    printf("%i\n", x);
    kortArray[x].draget = 1;
    kortHand[i] = kortArray[x];
}
The nested loop is only entered when kortArray[x].draget is not 1. So every time it is 1, nothing is done and no card is assigned. First make sure you have a unique x, and then update kortHand[i] in all cases.
I'd suggest a different algorithm:
1. Create a random number between 0 and the number of cards left in the deck.
2. Assign the card at that position to the appropriate hand.
3. Swap the last card of the deck into that position.
4. Decrease the number of cards in the deck by 1.
5. Continue with step 1 until the necessary number of cards have been dealt.
This way, you get rid of the flag and you can guarantee linear performance. No need to check whether a card has already been dealt or not.
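A hedged sketch of that dealing loop, reusing the question's kortArray (the 52-card deck) and kortHand names; the cards_left counter is added for the example:

int cards_left = 52;                              /* cards still in the deck */
for (int i = 0; i < 5; i++) {
    int x = rand() % cards_left;                  /* 1. random position among the remaining cards */
    kortHand[i] = kortArray[x];                   /* 2. deal that card to the hand                */
    kortArray[x] = kortArray[cards_left - 1];     /* 3. move the last card into the freed slot    */
    cards_left--;                                 /* 4. the deck is one card smaller              */
}                                                 /* 5. repeat until five cards are dealt         */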