For a class project I need to split some audio clips into smaller sections, given a minimum and a maximum section length. To figure out whether a valid split is possible, I compute:
a = length/max
b = length/min
Mathematically I figured that [a, b] contains at least one integer if ⌊b⌋ >= ⌈a⌉, but I can't use math.h for floor() and ceil(). Since a and b are always positive I can use type casting for floor(), but I am at a loss as to how to do ceil(). I thought about using ((int)x)+1, but that would round exact integers up, which would break the formula.
I would like either a way to do ceil() which would solve my problem, or another way to check whether an interval contains at least one integer.
You don't need math.h to perform floor. Please look at the following code:
int length = 5, min = 2, max = 3; // only an example of inputs
int a = length / max;
int b = length / min;
if (a != b) {
    // there is at least one integer in the interval
} else {
    if (length % min == 0 || length % max == 0) {
        // there is at least one integer in the interval
    } else {
        // there is no integer in the interval
    }
}
The result for the above example will be that there is an integer in the interval.
You can also perform ceil without using math.h, as follows:
int a;
if (length % max == 0) {
    a = length / max;
} else {
    a = (length / max) + 1;
}
If I understood your question right, you can do ceil(a) in this case, and then check whether the result is at most b. Thus, for example, for the interval [1.3, 3.5], ceil(1.3) returns 2, which fits into this interval.
UPD
Also, you could compute (b - a). If it's > 1, there is for sure at least one integer between them.
There is a general trick in programming that will come in handy if you ever find yourself programming Apple Basic, or any other language where floating point math is supported.
You can "round" a number by addition, then truncation, as follows:
x = some floating value
rounded_x = int(x + roundoff_amount)
where roundoff_amount is the difference between 1 and the lowest fraction that should round up.
So, to round at .5, your roundoff_amount would be 1 - .5 = .5, and you would do int(x + .5). If x is .5 or .51, the result becomes 1.0 or 1.01, and int() takes that to 1. Obviously, if x is higher you still get 1, until x reaches 1.5, when rounding takes it to 2. To round upwards starting at .6, your roundoff_amount would be 1 - .6 = .4, and you would do int(x + .4), and so on.
You can do a similar thing to get ceil behavior. Set your roundoff_amount to 0.99999... and do the round. You can choose the value to provide a "nearby" window, since floats have some inherent inaccuracy that might prevent getting a perfectly integer value after adding fractions.
I'm writing a short program to approximate the definite integral of the Gaussian function f(x) = exp(-x^2/2), and my code is as follows:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

double gaussian(double x) {
    return exp((-pow(x,2))/2);
}

int main(void) {
    srand(0);
    double valIntegral, yReal = 0, xRand, yRand, yBound;
    int xMin, xMax, numTrials, countY = 0;
    do {
        printf("Please enter the number of trials (n): ");
        scanf("%d", &numTrials);
        if (numTrials < 1) {
            printf("Exiting.\n");
            return 0;
        }
        printf("Enter the interval of integration (a b): ");
        scanf("%d %d", &xMin, &xMax);
        while (xMin > xMax) { // keeps looping until a valid interval is entered
            printf("Invalid interval!\n");
            printf("Enter the interval of integration (a b): ");
            scanf("%d %d", &xMin, &xMax);
        }
        // check real y upper bound
        if (gaussian((double)xMax) > gaussian((double)xMin))
            yBound = gaussian((double)xMax);
        else
            yBound = gaussian((double)xMin);
        for (int i = 0; i < numTrials; i++) {
            xRand = (rand() % ((xMax-xMin)*1000 + 1))/1000.00 + xMin; // random x between xMin and xMax, to 3 decimal places
            yRand = (rand() % (int)(yBound*1000 + 1))/1000.00;        // random y between 0 and yBound, to 3 decimal places
            yReal = gaussian(xRand);
            if (yRand < yReal)
                countY++;
        }
        valIntegral = (xMax-xMin)*((double)countY/numTrials);
        printf("Integral of exp(-x^2/2) on [%.3lf, %.3lf] with n = %d trials is: %.3lf\n\n", (double)xMin, (double)xMax, numTrials, valIntegral);
        countY = 0; // reset countY to 0 for the next run
    } while (numTrials >= 1);
    return 0;
}
However, the outputs from my code don't match the solutions. I tried to debug by printing out all xRand, yRand and yReal values for 100 trials (and checked the yReal values for particular xRand values with Matlab, in case I had any typos), and those values didn't seem to be out of range in any way. I don't know where my mistake is.
The correct output for n = 100 trials on [0, 1] is 0.810, and mine is 0.880; the correct output for n = 50 trials on [-1, 0] is 0.900, and mine was 0.940. Can anyone find where I went wrong? Thanks a lot.
Another question: I can't find a reference for the use of the following code:
double randomNumber = rand() / (double) RAND_MAX;
but it was provided by the instructor, and he said it would generate a random number from 0 to 1. Why did he use '/' instead of '%' after rand()?
There are a few logical errors / discussion points in your code, both mathematics-wise and programming-wise.
First of all, just to get it out of the way: we're talking about the standard Gaussian here, i.e.

f(x) = exp(-x^2 / 2) / sqrt(2*pi)

except that your gaussian() function omits the 1/sqrt(2*pi) normalising term. Given the outputs you seem to expect, this seems to have been done on purpose. Fair enough. But if you wanted to calculate the actual integral, such that a practically infinite range (e.g. [-1000, 1000]) would sum up to 1, then you would need that term.
Is my code logically correct?
No. Your code has two logical errors: one in the if statement that chooses yBound, and one in the calculation of valIntegral, which is a direct consequence of the first.
For the first error, picture the Gaussian's bell curve over your integration interval to see why.
Your Monte Carlo process effectively considers a bounded box over a certain range, and then says "I will randomly place points inside this box, and then count the proportion of the total number of points that randomly fell under the curve; the integral estimate is then the area of the bounded box itself, times this proportion".
Now, if both xMin and xMax are to the left of the mean (i.e. 0), then your if statement correctly sets the box's upper bound (i.e. yBound) to gaussian(xMax), such that the topmost bound of the box contains the highest part of that curve. So, e.g., to estimate the integral for the range [-2, -1], you set the upper bound to gaussian(-1).
Similarly, if both xMin and xMax are to the right of the mean, then you correctly set yBound to gaussian(xMin).
However, if xMin <= 0 <= xMax, you should be setting yBound to neither gaussian(xMin) nor gaussian(xMax), since the peak at 0 is higher than both! So in this case, your yBound should simply be at the peak of the Gaussian, i.e. gaussian(0) (which, in your case of an unnormalised Gaussian, takes the value 1).
Therefore, the correct if statement is as follows:
if (xMax < 0.0)
{ yBound = gaussian((double)xMax); }
else if (xMin > 0.0)
{ yBound = gaussian((double)xMin); }
else
{ yBound = gaussian(0.0); }
As for the second logical error: we already mentioned that the value of the integral is the "area of the bounding box" times the "proportion of successes". However, you seem to ignore the height of the box in your calculation. It is true that in the special case where xMin <= 0 <= xMax, the height of your unnormalised Gaussian defaults to 1, so this term can be omitted; I suspect that this is why it was missed. However, in the other two cases, the height of the bounding box is necessarily less than 1, and therefore needs to be included in the calculation. So the correct calculation of valIntegral should be:
valIntegral = yBound * (xMax-xMin) * (((double)countY)/numTrials);
Why am I not getting the correct output?
Even despite the above logical errors, as we've discussed above, your output should have been correct for the specific intervals [0,1] and [-1,0] (since they include the mean and therefore the correct yBound of 1). So why are you still getting a 'wrong' output?
The answer is, you are not. Your output is "correct". Except, a Monte Carlo process involves randomness, and 100 trials is not a big enough number to lead to consistent results. If you run the same range for 100 trials again and again, you'll see you'll get very different results each time (though, overall, they'll be distributed around the right value). Run with 1000000 trials, and you'll see that the result becomes a lot more precise.
What's up with that randomNumber code?
The rand() function returns an integer in the range [0, RAND_MAX], where RAND_MAX is system-specific (have a look at man 3 rand).
The modulo approach (i.e. %) works as follows: consider the range [-0.1, 0.3]. This range spans 0.4 units. 0.4 * 1000 + 1 = 401. For a random number from 0 to RAND_MAX, doing rand() modulo 401 will always result in a random number in the range [0,400]. If you then divide this back by 1000, you get a random number in the range [0, 0.4]. Add this to your xmin offset (here: -0.1) and you get a random number in the range [-0.1, 0.3].
In theory, this makes sense. However, unfortunately, as already pointed out in the other answer here, as a method it is susceptible to modulo bias, because RAND_MAX isn't necessarily exactly divisible by 401, therefore the top part of that range leading up to RAND_MAX overrepresents some numbers compared to others.
By contrast, the approach given to you by your teacher is simply saying: divide the result of the rand() function with RAND_MAX. This effectively normalises the returned random number into the range [0,1]. This is a much more straightforward thing to do, and it avoids modulo bias.
Therefore, the way I would implement this would be to make it into a function:
double randomNumber(void) {
return rand() / (double) RAND_MAX;
}
which then simplifies your computations as follows too:
xRand = randomNumber() * (xMax-xMin) + xMin;
yRand = randomNumber() * yBound;
You can see that this is a much more accurate thing to do, if you use a normalised gaussian, i.e.
double gaussian(double x) {
return exp((-pow(x,2.0))/2.0) / sqrt(2.0 * M_PI);
}
and then compare the two methods. You will see that the randomNumber() method for an "effectively infinite" range (e.g. [-1000,1000]) gives the correct result of 1, whereas the modulo approach tends to give numbers that are larger than 1.
Your code has no obvious bug (there is a bug in the upper bound calculation, as @TasosPapastylianou points out, but it isn't the issue in your test cases). On 100 trials, your answer of 0.880 is closer to the actual value of the integral (0.855624...) than 0.810, and neither number is so far from the true value as to suggest an outright bug in the code. This seems to be within sampling error (though see below). A histogram of 1000 runs of this Monte Carlo integration (done in R, but with the same algorithm) of e^(-x^2/2) on [0,1] with 100 trials shows the estimates spread widely around the true value. [histogram omitted]
Unless your instructor specified the algorithm and the seed in precise detail, you shouldn't expect the exact same answer.
As for your second question about rand() / (double) RAND_MAX: it is an attempt to avoid modulo bias. It is possible that such a bias is affecting your code (especially given the way you round to 3 decimal places), since your code does seem to overestimate the integral (based on running it a dozen times or so). Perhaps you could use that in your code and see if you get better results.
I need to check that two numbers are the same up to the 3rd decimal place: 1.2345 and 1.2348 is correct, but 1.2345 and 1.2388 is not. And I need to let the user specify how many places the program should check.
I was thinking about something like that:
do {
    x = f(i++);      // computes some value with i iterations
    x_next = f(i++); // computes some value with i+1 iterations
} while (fabs(x - x_next) > accuracy); // more iterations = more accuracy; stop once successive values agree
But I don't know how I should convert the number 3 into 0.001.
Can you suggest something, please?
Divide 1.0 by 10 three times to get 0.001.
To convert 3 to .001, use the pow() function: .001 = pow(10, -3) (it returns base raised to the power of exponent, in this case 10^-3). You would need to include math.h to use pow().
A word of caution: fabs(x - y) < .001 does not guarantee that x and y agree on 3 decimal places. For example, 1.00000 and 0.99999 don't agree on any decimal place, but fabs(1.00000 - 0.99999) = .00001 < .001.
If you need to check whether two numbers are the same up to the 3rd decimal place, you can simply multiply both values by 1000 and compare them as integers.
You get the picture: you have to multiply by 10^decimal_places.
EDIT:
If rounding is required, then simply add 5/10^(decimal_places+1), i.e. 0.0005 for three places, before multiplying.
Well, there are two ways to approach this:
If you want to check whether the difference between the two numbers is less than 10 to the power of minus your number of places (in your example, 0.001), you can use the solutions already provided. However, that approach says 1.3458 is equal to 1.3462, which doesn't seem to be what you wanted.
You can convert the numbers to integers beforehand. In your example (3 decimal places), you can multiply your number by 1000 (10 to the power of 3) and take its integer part (with an (int) cast), as in:
int multiplier = pow(10, decimalPlaces);
int number1 = (int)(numberOriginal1 * multiplier); // parenthesise so the cast applies to the product
int number2 = (int)(numberOriginal2 * multiplier);
if (number1 == number2)
    printf("Success\n");
else
    printf("Fail\n");
Hope that helps.
If you just want to print it, simply use printf("%.3f", number). If you want to do it another way, have a look at this question:
Rounding Number to 2 Decimal Places in C
So I've been attempting to solve the 3SUM problem, and the following is my algorithm:
def findThree(seq, goal):
for x in range(len(seq)-1):
left = x+1
right = len(seq)-1
while (left < right):
tmp = seq[left] + seq[right] + seq[x]
if tmp > goal:
right -= 1
elif tmp < goal:
left += 1
else:
return [seq[left],seq[right],seq[x]]
As you can see, it's a really generic algorithm that solves it in O(n^2).
The problem I've been experiencing is that this algorithm doesn't seem to like working with floating point numbers.
To test whether my theory is right, I gave it the following two arrays:
FloatingArr = [89.95, 120.0, 140.0, 179.95, 199.95, 259.95, 259.95, 259.95, 320.0, 320.0]
IntArr = [89, 120, 140, 179, 199, 259, 259, 259, 320, 320]
findThree(FloatingArr, 779.85) # I want it to return [259.95, 259.95, 259.95]
>> None # Fail
findThree(IntArr, 777) # I want it to return [259, 259, 259]
>> [259, 259, 259] # Success
The algorithm does work, but it doesn't seem to work well with floating point numbers. What can I do to fix this?
For additional information, my first array originally comes from a list of price strings, so in order to do math with them I had to strip off the "$" sign. My approach was this:
for x in range(len(prices)):
    prices[x] = float(prices[x][1:])  # strip the "$" sign and cast each price to float
If there is a much better way, please let me know. I feel as if this problem is not really about findThree(), but rather about how I modified the original prices array.
Edit: Seeing that it is indeed a floating point problem, I guess my next question is: what is the best way to convert the strings to integers after I strip off the "$"?
It doesn't work because numbers like 89.95 typically cannot be stored exactly (the base-two representation of 0.95 is a repeating fraction, like 1/3 in decimal).
In general, with dealing with floating-point numbers, instead of comparing for exact equality via ==, you want to check if the numbers are "close enough" to be considered equal; typically done with abs(a - b) < SOME_THRESHOLD. The exact value of SOME_THRESHOLD depends on how accurate you want to be, and typically requires trial and error to get a good value.
In your specific case, because you're working with dollars and cents, you can simply convert to cents by multiplying by 100 and rounding to an integer (via round, because int would truncate e.g. 7.999999 down to 7). Then your set of numbers will just be integers, solving the rounding problem.
You can convert your prices from strings to integers instead of converting them to floats. Let's assume that all prices have at most k digits after the decimal point (in the initial string representation). Then 10^k * price is always a whole number, so you can get rid of floating-point computations completely.
Example: if there are at most two digits after the decimal point, $2.10 becomes 210 and $2.2 becomes 220. There is no need to use float even in intermediate computations, because you can shift the decimal point two positions to the right (appending zeros if necessary) and then convert the string directly to an integer.
Here is an example of such a convert function:
def convert(price, max_digits):
    """price - a string representation of the price
    max_digits - maximum number of digits after a decimal point
                 among all prices
    """
    parts = price[1:].split('.')
    if len(parts) == 2 and len(parts[1]) > 0:
        return int(parts[0]) * 10 ** max_digits + \
               int(parts[1]) * 10 ** (max_digits - len(parts[1]))
    else:
        return int(parts[0]) * 10 ** max_digits
I have a loop like this:
for(uint64_t i=0; i*i<n; i++) {
This requires doing a multiplication every iteration. If I could calculate the sqrt before the loop then I could avoid this.
unsigned cut = sqrt(n);
for(uint64_t i=0; i<cut; i++) {
In my case it's okay if the sqrt function rounds up to the next integer but it's not okay if it rounds down.
My question is: is the sqrt function accurate enough to do this for all cases?
Edit: Let me list some cases. If n is a perfect square, so that n = y^2, my question is: is cut = sqrt(n) >= y for all n? If cut = y-1 then there is a problem. E.g. if n = 120 and cut = 10 it's okay, but if n = 121 (11^2) and cut is still 10 then it won't work.
My first concern was that the fraction field of a float has only 23 bits, and that of a double 52, so they can't store all the digits of some 32-bit or 64-bit integers. However, I don't think this is a problem. Let's assume we want the sqrt of some number y, but we can't store all its digits. If we let the part of y we can store be x, we can write y = x + dx; then we want to make sure that whatever dx we get does not move us to the next integer:
sqrt(x + dx) < sqrt(x) + 1  // solve for dx
dx < 2*sqrt(x) + 1
// e.g. for x = 100, dx < 21:
// sqrt(100 + 20) < sqrt(100) + 1
A float can store 23 bits, so we let y = 2^23 + 2^9. This is more than sufficient, since 2^9 < 2*sqrt(2^23) + 1. It's easy to show the same for double with 64-bit integers. So although floats and doubles can't store all the digits, as long as the sqrt of the part they can store is accurate, the sqrt of the stored fraction should be sufficient. Now let's look at what happens for integers close to UINT_MAX and their sqrt:
unsigned xi = -1-1;
printf("%u %u\n", xi, (unsigned)(float)xi); //4294967294 4294967295
printf("%u %u\n", (unsigned)sqrt(xi), (unsigned)sqrtf(xi)); //65535 65536
Since float can't store all the digits of 2^32-2 and double can, they get different results for the sqrt. But the float version of the sqrt is one integer larger, which is what I want. For 64-bit integers, as long as the sqrt of the double always rounds up, it's okay.
First, integer multiplication is really quite cheap. So long as you have more than a few cycles of work per loop iteration and one spare execution slot, it should be entirely hidden by reordering on most non-tiny processors.
If you did have a processor with dramatically slow integer multiply, a truly clever compiler might transform your loop to:
for (uint64_t i = 0, j = 0; j < n; j += 2*i+1, i++)
replacing the multiply with an lea or a shift and two adds.
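To make the transformation concrete, here is a compilable sketch of both loop forms (the loop bodies are reduced to counters purely for illustration):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t n = 1000000, count1 = 0, count2 = 0;

    // original form: one multiply per iteration
    for (uint64_t i = 0; i*i < n; i++)
        count1++;

    // strength-reduced form: j tracks i*i via (i+1)^2 = i^2 + 2i + 1
    for (uint64_t i = 0, j = 0; j < n; j += 2*i+1, i++)
        count2++;

    // both print 1000, since 999*999 < 1000000 <= 1000*1000
    printf("%llu %llu\n", (unsigned long long)count1, (unsigned long long)count2);
    return 0;
}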
Those notes aside, let’s look at your question as stated. No, you can’t just use i < sqrt(n). Counter-example: n = 0x20000000000000. Assuming adherence to IEEE-754, you will have cut = 0x5a82799, and cut*cut is 0x1ffffff8eff971.
However, a basic floating-point error analysis shows that the error in computing sqrt(n) (before conversion to integer) is bounded by 3/4 of an ULP. So you can safely use:
uint32_t cut = sqrt(n) + 1;
and you’ll perform at most one extra loop iteration, which is probably acceptable. If you want to be totally precise, instead use:
uint32_t cut = sqrt(n);
cut += (uint64_t)cut*cut < n;
Edit: z boson clarifies that for his purposes, this only matters when n is an exact square (otherwise, getting a value of cut that is "too small by one" is acceptable). In that case, there is no need for the adjustment and one can safely just use:
uint32_t cut = sqrt(n);
Why is this true? It’s pretty simple to see, actually. Converting n to double introduces a perturbation:
double_n = n*(1 + e)
which satisfies |e| < 2^-53. The mathematical square root of this value can be expanded as follows:
square_root(double_n) = square_root(n)*square_root(1+e)
Now, since n is assumed to be a perfect square with at most 64 bits, square_root(n) is an exact integer with at most 32 bits, and is the mathematically precise value that we hope to compute. To analyze the square_root(1+e) term, use a Taylor series about 1:
square_root(1+e) = 1 + e/2 + O(e^2)
= 1 + d with |d| <~ 2^-54
Thus, the mathematically exact value square_root(double_n) is less than half an ULP away from[1] the desired exact answer, and necessarily rounds to that value.
[1] I’m being fast and loose here in my abuse of relative error estimates, where the relative size of an ULP actually varies across a binade — I’m trying to give a bit of the flavor of the proof without getting too bogged down in details. This can all be made perfectly rigorous, it just gets to be a bit wordy for Stack Overflow.
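As a sanity check on the adjusted version, here is a quick sketch that exercises it on perfect squares near the top of the 64-bit range (it assumes IEEE-754 doubles and a correctly rounded sqrt, per the analysis above):

#include <stdint.h>
#include <stdio.h>
#include <math.h>

int main(void) {
    for (uint64_t y = 4294967295u; y > 4294967295u - 1000u; y--) {
        uint64_t n = y * y;             // a perfect square near 2^64
        uint32_t cut = (uint32_t)sqrt((double)n);
        cut += (uint64_t)cut * cut < n; // the adjustment from above
        if (cut != (uint32_t)y)
            printf("mismatch at y = %llu\n", (unsigned long long)y);
    }
    printf("done\n");
    return 0;
}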
All of my answer is moot if you have access to IEEE 754 double precision floating point, since Stephen Canon has demonstrated both:
a simple way to avoid imul in loop
a simple way to compute the ceiling sqrt
Otherwise, if for some reason you have a non-IEEE-754-compliant platform, or only single precision, you can get the integer part of the square root with a simple Newton-Raphson loop. For example, in Squeak Smalltalk we have this method on Integer:
sqrtFloor
    "Return the integer part of the square root of self"
    | guess delta |
    guess := 1 bitShift: (self highBit + 1) // 2.
    [
        delta := (guess squared - self) // (guess + guess).
        delta = 0 ] whileFalse: [
            guess := guess - delta ].
    ^guess - 1
Here // is the integer-division operator (floored quotient). A final guard guess*guess <= self ifTrue: [^guess]. can be avoided as long as the initial guess is fed in excess of the exact solution, as is the case here. (Initializing with an approximate float sqrt was not an option in that method, because the integers are arbitrarily large and might overflow the float range.)
But here, you could seed the initial guess with the floating point sqrt approximation, and my bet is that the exact solution will be found in very few loops. In C that would be:
uint32_t sqrtFloor(uint64_t n)
{
int64_t diff;
int64_t delta;
uint64_t guess=sqrt(n); /* implicit conversions here... */
while( (delta = (diff=guess*guess-n) / (guess+guess)) != 0 )
guess -= delta;
return guess-(diff>0);
}
That's a few integer multiplications and divisions, but they all happen outside your main loop.
What you are looking for is a way to calculate a rational upper bound on the square root of a natural number. Continued fractions are what you need; see Wikipedia.
For x > 0, we have

sqrt(x) = 1 + (x - 1) / (1 + sqrt(x))

To make the notation more compact, substitute the right-hand side into itself repeatedly, rewriting the above formula as the continued fraction

sqrt(x) = 1 + (x - 1) / (2 + (x - 1) / (2 + (x - 1) / (2 + ...)))

Truncating the continued fraction, i.e. dropping the tail term (x - 1) / (2 + ...) at each recursion depth, yields a sequence of approximations of sqrt(x):

1 + (x - 1) / 2
1 + (x - 1) / (2 + (x - 1) / 2)
1 + (x - 1) / (2 + (x - 1) / (2 + (x - 1) / 2))
...

The odd-depth truncations are upper bounds and the even-depth ones are lower bounds, and they get tighter as the depth grows. When the distance between an upper bound and its neighbouring lower bound is less than 1, that upper bound is the approximation you need. Using that value as the value of cut (cut must be a fractional value here, not an integer) solves the problem.
For very large numbers, rational arithmetic should be used, so that no precision is lost in conversions between integer and floating point.
I'm working with a microchip that doesn't have floating point support, but I need to account for fractional values in some equations. So far I've had good luck using the old *100 -> /100 method, like so:
increment = (short int)(((value1 - value2)*100 / totalSteps));
// later in the code I loop through totalSteps,
// adding back the increment to arrive at the total I want
// at the precise time I need it:
newValue = oldValue + (increment / 100);
This works great for values from 0-255 divided by a totalSteps of up to 300. Beyond 300, the fractional values to the right of the decimal place become important, because they add up over time.
I'm curious whether anyone has a better way to preserve decimal accuracy within an integer paradigm. I tried using *1000 -> /1000, but that didn't work at all.
Thank you in advance.
Fractions with integers are called fixed-point math.
Try Googling "fixed point".
Fixed-point tips and tricks are beyond the scope of an SO answer, but here is an example.
Example: 5 tap FIR filter
// C is the filter coefficients using 2.8 fixed precision.
// 2 MSB (of 10) is for integer part and 8 LSB (of 10) is the fraction part.
// Actual fraction precision here is 1/256.
int FIR_5(int* in,     // input samples
          int inPrec,  // sample fraction precision
          int* c,      // filter coefficients
          int cPrec)   // coefficients fraction precision
{
    const int coefHalf = (cPrec > 0) ? 1 << (cPrec - 1) : 0; // value of 0.5 using cPrec
    int sum = 0;
    for (int i = 0; i < 5; ++i)
    {
        sum += in[i] * c[i];
    }
    // sum's precision is X.N, where N = inPrec + cPrec
    // return to original precision (inPrec)
    sum = (sum + coefHalf) >> cPrec; // adding coefHalf for rounding
    return sum;
}

int main()
{
    const int filterPrec = 8;
    int C[5] = { 8, 16, 208, 16, 8 };   // 1.0 == 256 in 2.8 fixed point; filter values are 8/256, 16/256, 208/256, etc.
    int W[5] = { 10, 203, 40, 50, 72 }; // a sampling window (example)
    int res = FIR_5(W, 0, C, filterPrec);
    return 0;
}
Notes:
In the above example:
the samples are integers (no fraction)
the coefficients have 8-bit fractions
an 8-bit fraction means that each increment of 1 is treated as 1/256; 1 << 8 == 256
A useful notation is Y.Xu or Y.Xs, where Y is how many bits are allocated for the integer part and X for the fraction; u/s denotes unsigned/signed.
When multiplying two fixed-point numbers, their precisions (the sizes of their fraction parts) add. Example: A is 0.8u, B is 0.2u; C = A*B is 0.10u.
When dividing, use a shift operation to lower the result precision; the amount of shifting is up to you. Before lowering the precision, it's better to add a half to reduce the error.
Example: A = 129 in 0.8u, which is a little over 0.5 (129/256). We want the integer part, so we right-shift by 8. Before that we add a half, which is 128 (1 << 7). So A = (A + 128) >> 8 --> 1.
Without adding a half you'll get a larger error in the final result.
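Applied to the original question, a minimal sketch of the same idea with binary fixed point (the helper name is mine; 8 fraction bits play the role of the decimal *100 scale, and non-negative values are assumed so the shifts stay well-defined):

#include <stdint.h>

#define FRAC_BITS 8 // 1.0 == 256, resolution 1/256

// increment per step in 8.8 fixed point, rounded to nearest
int16_t fixed_increment(int16_t v1, int16_t v2, int16_t totalSteps)
{
    int32_t delta = (int32_t)(v1 - v2) * (1 << FRAC_BITS); // widen before scaling
    return (int16_t)((delta + totalSteps / 2) / totalSteps);
}

// usage sketch:
// int32_t acc = (int32_t)oldValue << FRAC_BITS;            // position accumulator
// acc += fixed_increment(value1, value2, totalSteps);      // once per step
// int16_t newValue = (int16_t)((acc + 128) >> FRAC_BITS);  // rounded read-out

The key difference from the *100 -> /100 version in the question is that the fractional part stays in the accumulator between steps instead of being truncated away.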
Don't use this approach. New paradigm: do not accumulate using FP math or fixed-point math. Do your accumulation and other equations with integer math. Any time you need to read off some scaled value, divide by your scale factor (100), but do the "add up" part with the raw, unscaled values.
Here's a quick attempt at a precise rational (Bresenham-esque) version of the interpolation, if you truly cannot afford to directly interpolate at each step.
div_t frac_step = div(target - source, num_steps);
if (frac_step.rem < 0) {
    // Annoying special case: div() rounds towards zero, so convert to a
    // floored quotient with a non-negative remainder.
    // (Alternatively, keep the remainder negative and check for the error
    // term slipping below -num_steps as well.)
    frac_step.rem += num_steps;
    --frac_step.quot;
}

const unsigned int denom = num_steps; // the fraction base must stay constant while num_steps counts down
unsigned int error = 0;
do {
    // Add the integer term plus an accumulated fraction
    error += frac_step.rem;
    if (error >= denom) {
        // Time to carry
        error -= denom;
        ++source;
    }
    source += frac_step.quot;
} while (--num_steps);
A major drawback compared to the fixed-point solution is that the fractional term gets rounded off between iterations if you are using the function to continually walk towards a moving target at differing step lengths.
Oh, and for the record, your original code does not seem to be properly accumulating the fractions when stepping: e.g. a 1/100 increment will always be truncated to 0 in the addition, no matter how many times the step is taken. Instead, you really want to add the increment to a higher-precision fixed-point accumulator and then divide it by 100 (or preferably right-shift it, to divide by a power of two) on each iteration in order to compute the integer "position".
Do take care with the different integer types and ranges required in your calculations. A multiplication by 1000 will overflow a 16-bit integer unless one term is a long. Go through your calculations and keep track of the input ranges and the headroom at each step, then select your integer types to match.
Maybe you can simulate floating point behaviour by storing values the way the IEEE 754 specification does: save the mantissa, exponent, and sign as unsigned int values, and implement the arithmetic on them yourself with integer and bitwise operations (for multiplication and division, the exponents add or subtract while the mantissas are multiplied or divided). I think it is a lot of programming work to emulate that, but it should work.
Your choice of type is the problem: short int is likely to be 16 bits wide. That's why large multipliers don't work - you're limited to +/-32767. Use a 32 bit long int, assuming that your compiler supports it. What chip is it, by the way, and what compiler?