Hi there, I am currently trying to create a simple game in C based around a map with different AI players.
I've come to the point of programming the calculations that figure out the amount of health lost, and although I am sure the math behind it is correct, the implementation in C doesn't seem to be working! I might have missed a simple point, so I'll put up the extract of code which isn't working as expected.
random1 = Numbergen(50);
cal1 = (100 - random1)/100;
random2 = Numbergen(50);
cal2 = (100 - random2) / 100;
PtrPlayer[attacker].health = (double)(PtrPlayer[attacker].health * cal1);
PtrPlayer[ncount].health = (double)(PtrPlayer[ncount].health * cal2);
printf("There was a battle but both players drawed and retreated.\n");
return 1;
Numbergen is a function that calculates a random number based on a time seed, 50 is the maximum number I want it to return with.
cal1 and cal2 should store a decimal number, for example 0.75 to take off 25% health, and that's what the cal calculations should be doing. However, when debugging they show a value of zero no matter what the random number is.
This should work by taking the random number, let's say 25, away from 100 to leave 75; it then divides by 100 to get a decimal multiplier which can be used to reduce the health by 25%. Health starts at 100, so for example it should result in 100 * 0.75, which should leave me with 75 health. Instead cal1 stores 0 and as a result the health goes down to zero.
To be clear, cal1 and cal2 are both floats to allow for decimal places.
If anyone can point out where I might have gone wrong I will be so grateful!
If I've missed out something important then please let me know and I'll try and explain.
Also please note I am only a beginner in programming so please don't hit me with super complex code!
As requested: random1 and random2 are both ints
PtrPlayer[].health is an int
cal1 and 2 are set as floats
cal1 = (100 - random1)/100;
Since random1 is an int, and 100 is an int, (100 - random1) is also an int. When you use the / operator on two integer operands, a truncating integer division is performed.
If you want floating-point division, convert at least one operand of / to a floating-point type:
cal1 = (100 - random1) / 100.0f;
// float literal ^^^^^^
... or...
cal1 = (float)(100 - random1) / 100;
Simply converting the result of the division would have no effect, as C expressions' types are determined strictly from the inside out.
Just try changing either of the two 100s to 100.0:
100.0 - random1
100.0 - random2
Or
(100 - random1)/100.0
(100 - random2)/100.0
This is the easiest option.
I'm stuck (again) and looking for the smart human beings of planet Earth to help me out.
Background
I have an application which distributes the amounts to some users in a given percentage. Say I have $35000 and it will distribute the amounts to 3 users (A, B and C) in some ratio. So the amount distributed will be
A - 5691.05459265518
B - 14654.473815207
C - 14654.4715921378
which totals up to $35000
The Problem
I have to present the results on the front end with 2 decimal places instead of the full float value. So I use SQL Server's ROUND function with a precision of 2 to round these to 2 decimal places. But the issue is that when I total these rounded values, the sum comes out to be $34999.9999 instead of $35000.
My Findings
I searched a bit and found
If the expression that you are rounding ends with a 5, the Round()
function will round the expression so that the last digit is an even
number. Here are some examples:
Round(34.55, 1) - Result: 34.6 (rounds up)
Round(34.65, 1) - Result: 34.6 (rounds down)
So technically the answer is correct, but I am looking for a function or a way to round the value to exactly what it should have been. I found that it works if I start rounding (if the digit is less than 5 then leave the previous digit unchanged, else increment the previous digit by 1) from the last digit after the decimal, and keep backtracking until I am left with only 2 decimal places.
Please advise.
I have a 32.768 kHz oscillator which produces a 1-Hz pulse. I'm measuring this pulse with a 40MHz clock. I am trying to measure the actual frequency of the oscillator by comparing the expected results with the obtained one.
I cannot change the period that I'm measuring, it has to be 1-Hz (1s).
I cannot use float, double or types larger than uint32.
I also need the first digit after the integer part (like 32768.1 Hz).
Without the constraints the solution would be like:
computedFreq = (uint32) ( (float32)32768u / ( ( (float32)measuredPeriod / (float32)40000000 ) / (float32)10u ) );
or
computedFreq = ( 32768 * 10 * 40000000 ) / measuredPeriod;
So for a measured period of 40,008,312 the result will be 327611.
How can this be achieved while satisfying the constraints?
Your problem is that the measured period goes up to 40,010,000 counts, while you are only interested in a 20,000-count range. So the solution is to limit the range.
You can do this with linear interpolation using 2 points.
Select 2 input values. Min/max points might do:
inMin = 40,000,000 - 10,000
inMax = 40,000,000 + 10,000
Precalculate respective output values using your formula (values here are rough values, calculate your own): You can store these as constants in your code.
outMin = ( 32768*10*40,000,000 ) / inMin = 327761
outMax = ( 32768*10*40,000,000 ) / inMax = 327598
Notice how max is smaller than min, due the nature of your formula. This is important when selecting types below.
Use linear interpolation to calculate result using both previous points:
out = (in - inMin) * (outMax - outMin) / (inMax - inMin) + outMin
You have to use signed integers for this, because (outMax - outMin) is negative.
Upper limit of the multiplication is (inMax - inMin) * (outMax - outMin), which should fit into int32.
You can pick any 2 points, but you should pick ones that will not produce too big values to overflow on multiplication. Also, if points are too near each other, answer may be less accurate.
If you have extra precision to spare, you could use a bigger multiplier than 10 on outMin/outMax and round afterwards to keep more precision.
Note that 32768 * 10 * 40000000 = 3200000000 * 2^12.
You can use this fact in order to approximate the frequency as follows:
computedFreq = (3200000000/measuredPeriod) << 12;
UPDATE:
I understand that 39990000 <= measuredPeriod <= 40010000.
A better way to approximate the frequency under this restriction would be:
computedFreq = 327761-(measuredPeriod-39990000)/122;
I'm facing the problem of computing values of a clothoid in C in real-time.
First I tried using the Matlab Coder to obtain auto-generated C code for the quadgk integrator for the Fresnel formulas. This essentially works great in my test scenarios. The only issue is that it runs incredibly slowly (in Matlab as well as in the auto-generated code).
Another option was interpolating a data-table of the unit clothoid connecting the sample points via straight lines (linear interpolation). I gave up after I found out that for only small changes in curvature (tiny steps along the clothoid) the results were obviously degrading to lines. What a surprise...
I know that circles may be plotted using a different formula but low changes in curvature are often encountered in real-world-scenarios and 30k sampling points in between the headings 0° and 360° didn't provide enough angular resolution for my problems.
Then I tried a Taylor approximation around the R = inf point, hoping that there would be significant curvature everywhere I wanted it to be. I soon realized I couldn't use more than 4 terms (power of 15), as the polynomial otherwise quickly becomes unstable (probably due to numerical inaccuracies in double-precision fp computation). Thus accuracy quickly degrades for large t values, and by "large t values" I'm talking about every point on the clothoid that represents a curve of more than 90° w.r.t. the zero-curvature point.
For instance when evaluating a road that goes from R=150m to R=125m while making a 90° turn I'm way outside the region of valid approximation. Instead I'm in the range of 204.5° - 294.5° whereas my Taylor limit would be at around 90° of the unit clothoid.
I'm kinda done randomly trying out things now. I mean I could just try to spend time on the dozens of papers one finds on that topic. Or I could try to improve or combine some of the methods described above. Maybe there even exists an integrate function in Matlab that is compatible with the Coder and fast enough.
This problem is so fundamental it feels to me I shouldn't have that much trouble solving it. Any suggestions?
About the 4 terms in the Taylor series: you should be able to use many more. A total theta of 2*pi is certainly doable with doubles.
You're probably calculating each term in isolation, according to the full formula, computing full factorial and power values. That is the reason for losing precision extremely fast.
Instead, calculate the terms progressively, each one from the previous one: find the formula for the ratio of the next term over the previous one in the series, and use it.
For increased precision, do not calculate in theta but rather in the distance s (so as not to lose precision on scaling).
Your example is an extremely flat clothoid. If I made no mistake, it goes from (25/22)*pi =~ 204.545° to (36/22)*pi =~ 294.545° (why not include these details in your question?). Nevertheless it should be OK. Even 2*pi = 360°, the full circle (and twice that), should pose no problem.
Given: r = 150 -> 125, 90-degree turn:
r s = A^2:  150 s = 125 (s + x)
=> 1 + x/s = 150/125 = 1 + 25/125  =>  x/s = 1/5
theta  = s^2 / (2 A^2) = s^2 / (300 s) = s / 300         ( = (pi/2) * (25/11) = 204.545° )
theta2 = (s + x)^2 / (300 s) = (6/5)^2 * s / 300         ( = (pi/2) * (36/11) = 294.545° )
theta2 - theta = (36/25 - 1) * s / 300 = pi/2
=> s = 300 * (pi/2) * (25/11) = 1070.99749554,  x = s/5 = 214.1994991
A^2 = 150 s = 150 * 300 * (pi/2) * (25/11)
a = sqrt(2 A^2) = 300 * sqrt( (pi/2) * (25/11) ) = 566.83264608
The reference point is at r = Infinity, where theta = 0.
We have x = a * INT[u = 0 .. s/a] cos(u^2) du, where a = sqrt(2 r s) and theta = (s/a)^2. Write out the Taylor series for cos, and integrate it term by term, to get your Taylor approximation for x as a function of the distance s along the curve from the 0-point. That's all.
Next you have to decide with what density to calculate your points along the clothoid. You can find it from a desired tolerance value above the chord, for your minimal radius of 125. These points will then define the approximation of the curve by line segments drawn between consecutive points.
I am doing my thesis in the same area right now.
My approach is the following.
At each point on your clothoid, calculate (change in heading) / (distance traveled along the clothoid); this simple equation gives you the curvature at each point.
Then plot each curvature value: the x-axis is the distance along the clothoid, the y-axis is the curvature. By plotting this and applying a very simple segmentation/regression algorithm (search for a Peuker algorithm implementation in your language of choice),
you can easily identify where the curve sections have a value of zero (a line has no curvature), are linearly increasing or decreasing (Euler spiral CCW/CW), or have a constant value != 0 (an arc has constant curvature across all its points).
I hope this will help you a little bit.
You can find my code on github. I implemented some algorithms for such problems like Peuker Algorithm.
I'm looking at someone else's code and trying to figure out the logic behind what they wrote. What would one use the following random number calculation for?
return ( ((rand() % 10000)+1) <= Rate * 100);
Rate here is being used for a user-specified value representing an overall percentage of when a certain event occurs.
The left part of the expression returns a random number between 1 and 10,000 (due to the + 1; otherwise it would be between 0 and 9,999).
Rate can then be used to determine the effective chance of the expression being true. The higher Rate is, the higher the chance of returning true.
Since Rate is multiplied with 100, you're able to determine the Rate using 0 to 100 (essentially percentages) with Rate = 0 never and Rate = 100 always returning true.
To generate a random integer between 1 and 10000 inclusive. As for the comparison with Rate * 100, who can say, as you do not specify what Rate means.
I recently read this question regarding information gain and entropy. I think I have a semi-decent grasp on the main idea, but I'm curious as what to do with situations such as follows:
If we have a bag of 7 coins, 1 of which is heavier than the others, and 1 of which is lighter than the others, and we know the heavier coin + the lighter coin is the same as 2 normal coins, what is the information gain associated with picking two random coins and weighing them against each other?
Our goal here is to identify the two odd coins. I've been thinking this problem over for a while, and can't frame it correctly in a decision tree, or any other way for that matter. Any help?
EDIT: I understand the formula for entropy and the formula for information gain. What I don't understand is how to frame this problem in a decision tree format.
EDIT 2: Here is where I'm at so far:
Assuming we pick two coins and they both end up weighing the same, we can assume our new chances of picking H+L come out to 1/5 * 1/4 = 1/20 , easy enough.
Assuming we pick two coins and the left side is heavier. There are three different cases where this can occur:
HM: Which gives us 1/2 chance of picking H and a 1/4 chance of picking L: 1/8
HL: 1/2 chance of picking high, 1/1 chance of picking low: 1/1
ML: 1/2 chance of picking low, 1/4 chance of picking high: 1/8
However, the odds of us picking HM are 1/7 * 5/6 which is 5/42
The odds of us picking HL are 1/7 * 1/6 which is 1/42
And the odds of us picking ML are 1/7 * 5/6 which is 5/42
If we weight the overall probabilities with these odds, we are given:
(1/8) * (5/42) + (1/1) * (1/42) + (1/8) * (5/42) = 3/56.
The same holds true for option B.
option A = 3/56
option B = 3/56
option C = 1/20
However, option C should be weighted heavier because there is a 5/7 * 4/6 chance to pick two mediums. So I'm assuming from here I weight THOSE odds.
I am pretty sure I've messed up somewhere along the way, but I think I'm on the right path!
EDIT 3: More stuff.
Assuming the scale is unbalanced, the odds are (10/11) that only one of the coins is the H or L coin, and (1/11) that both coins are H/L
Therefore we can conclude:
(10 / 11) * (1/2 * 1/5) and
(1 / 11) * (1/2)
EDIT 4: Going to go ahead and say that it is a total 4/42 increase.
You can construct a decision tree from information-gain considerations, but that's not the question you posted, which is only to compute the information gain (presumably the expected information gain;-) from one "information extraction move" -- picking two random coins and weighing them against each other. To construct the decision tree, you need to know what moves are available from the initial state (presumably the general rule is: you can pick two sets of N coins, N < 4, and weigh them against each other -- and that's the only kind of move, parametric over N), and the expected information gain from each; that gives you the first leg of the decision tree (the move with the highest expected information gain). Then you do the same process for each of the possible results of that move, and so on down.
So do you need help to compute that expected information gain for each of the three allowable values of N, only for N == 1, or can you try doing it yourself? If the third possibility obtains, that would maximize the amount of learning you get from the exercise -- which after all IS the key purpose of homework. So why don't you try, edit your question to show how you proceeded and what you got, and we'll be happy to confirm you got it right, or try and help correct any misunderstanding your procedure might reveal!
Edit: trying to give some hints rather than serving the OP the ready-cooked solution on a platter;-). Call the coins H (for heavy), L (for light), and M (for medium -- five of those). When you pick 2 coins at random you can get (out of 7 * 6 == 42 possibilities including order) HL, LH (one each), HM, MH, LM, ML (5 each), and MM (5 * 4 == 20 cases) -- 2 plus 20 plus 20 is 42, check. In the weighing you get 3 possible results, call them A (left heavier), B (right heavier), C (equal weight). HL, HM, and ML, 11 cases, will be A; LH, MH, and LM, 11 cases, will be B; MM, 20 cases, will be C. So A and B aren't really distinguishable (which one is left and which one is right is basically arbitrary!), so we have 22 cases where the weights will differ and 20 where they will be equal -- it's a good sign that the cases giving each result come in pretty close numbers!
So now consider how many (equiprobable) possibilities existed a priori, how many a posteriori, for each of the experiment's results. You're tasked to pick the H and L choice. If you did it at random before the experiment, what would be you chances? 1 in 7 for the random pick of the H; given that succeeds 1 in 6 for the pick of the L -- overall 1 in 42.
After the experiment, how are you doing? If C, you can rule out those two coins and you're left with a mystery H, a mystery L, and three Ms -- so if you picked at random you'd have 1 in 5 to pick H, and if successful, 1 in 4 to pick L, overall 1 in 20 -- your success chances have slightly more than doubled. It's trickier to see "what next" for the A (and equivalently B) cases because they're several, as listed above (and, less obviously, not equiprobable...), but obviously you won't pick the known-lighter coin for H (and vice versa), and if you pick one of the 5 unweighed coins for H (or L), only one of the weighed coins is a candidate for the other role (L or H respectively). Ignoring for simplicity the "non-equiprobable" issue (which is really kind of tricky), can you compute what your chances of guessing (with a random pick not inconsistent with the experiment's result) would be...?