Converting radians to degrees in a reference-circle manner - c

Context: I'm creating a map replanner for a robot in C. It should add obstacles to the map depending on which IR sensors are getting high readings.
For this I need to check how the robot is positioned in the real world vs. in the internal map (which points north). But for some reason, I sometimes get negative values from the conversion. Any ideas what I'm doing wrong?
There should never be any negative values; the allowed values should lie between 0 and 360. As of now, I sometimes get negative values.
#define PI 3.14159265358979323846264338327950
#define DEG(x) (x*57.2957795130823208767981548141)
Code:
float currentRot = now.th; // current rotation-reading in RAD from function

if (currentRot > PI) { // normalize values
    currentRot = currentRot - 2 * PI;
}
if (currentRot < -PI) {
    currentRot = currentRot + 2 * PI;
}

currentRot = fmod(currentRot, PI * 2); // convert into RAD-modular
currentRot = DEG(currentRot);          // convert to degrees
Any ideas of what i'm doing wrong?

π cannot be exactly represented as a float, so any mod-ding with fmod(currentRot, PI*2); or comparing with if (currentRot > PI) may fail to provide the expected results for edge cases.
Note that currentRot = currentRot + 2 * PI; is a (float) + (double) sum followed by a conversion back to float. At the edges, this conversion can still leave slightly negative results. Best to avoid mixing float and double math for this sensitive conversion.
Even with the good comment of @M Oehm, the inexactness of PI and the mixed float/double math can result in a negative currentRot.
Instead, convert radians to degrees first and then mod.
float RadianToDegree(float r) {
    float d = (float)(180.0 / PI) * r;
    d = fmodf(d, 360.0f);
    if (d < 0.0f) d += 360.0f;
    return d;
}
The result will be in the range [0.0, 360.0] inclusive.
Mod-ding by 360 can be expected to incur no round-off error.
Use fmodf() with float and fmod() with double to avoid unnecessary conversions.
On a higher-level note, the goal of "the allowed values should lie between 0-360" sacrifices precision. Consider a primary range of [-180, +180] instead.
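For instance, a sketch of that centered variant (the name RadianToDegreeCentered is mine; same assumptions as RadianToDegree above):

float RadianToDegreeCentered(float r) {
    float d = (float)(180.0 / PI) * r;
    d = fmodf(d, 360.0f); // now in (-360.0, 360.0)
    if (d < -180.0f) d += 360.0f;
    else if (d >= 180.0f) d -= 360.0f;
    return d; // in [-180.0, 180.0)
}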

Here:
if (currentRot > PI) { // normalize values
    currentRot = currentRot - 2 * PI;
}
if (currentRot < -PI) {
    currentRot = currentRot + 2 * PI;
}
you try to normalize to -π < currentRot < π, and then you do:
currentRot = fmod(currentRot,PI*2);
which "Returns the floating-point remainder of numer/denom" (ref), meaning that the sign of the result, will be the same.
This does not ensure that negative values are prohibited. I mean your range doesn't start from 0.
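A quick demonstration of that sign behavior (minimal sketch):

#include <math.h>
#include <stdio.h>

int main(void) {
    // fmod keeps the sign of its first argument:
    printf("%f\n", fmod(-1.0, 2.0 * M_PI)); // prints -1.000000, not 5.283185
    return 0;
}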

Related

How can I round a number to the fourth decimal place without using round functions?

This is what I've tried so far but I cannot figure out how to round to the fourth decimal place:
162.3582 = (int)(162.3582 + 0.005);
printf("%.002f\n", 162.3582);
I'm trying to get 162.3600 but I'm getting 162.00 instead.
Maybe look at it like this (I'm just unpacking Steve Summit's comment):
double d = 162.3582; // original value
d *= 100; // shift decimal point right, giving 16235.82
// Note this is equivalent to Steve Summit's divide by .01
d += .5; // add one half, giving 16236.32
d = (int)d; // discard fraction, giving 16236.0
// Or use floor(d) as Steve Summit suggested (he's a genuine C expert)
d /= 100.0; // shift decimal point back, giving 162.36
// Equivalent to Summit's * 0.01
How can I round a number to the fourth decimal place without using round functions?
The trick is usually to scale the value by 10^4, add 0.5, cast to some integer type, and then divide by 10^4.
// Weak code
double pow10 = 10000.0;
double r1 = x * pow10;
r1 += 0.5;
int i = (int) r1;
double r2 = i / pow10;
Problems:
x * pow10 may overflow.
x * pow10 may form an inexact product, tainting further calculations.
r1 += 0.5; may form an inexact sum, tainting further calculations.
r1 += 0.5; should be -0.5 for negative numbers.
int i = (int) r1; is undefined behavior when r1 is much outside int range.
i / pow10; may form an inexact quotient. Some values like 0.0001, may not be exactly representable as a double, yet the division should get closest.
Some improvements:
// a little better code
long double pow10 = 10000.0L;
long double r1 = x * pow10;
r1 = r1 + (signbit(x) ? -0.5 : 0.5);
long double i = truncl(r1); // not floor
double r2 = (double) (i / pow10);
The above code is just as weak when long double has the same range/precision as double.
If saddled with the dubious "without using round functions" requirement, consider printing:
sprintf(buf_of_sufficient_size, "%.*f", 4, x);
double r2 = atof(buf_of_sufficient_size);
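For example, a self-contained sketch of that round-trip (the buffer size of 64 is my assumption, comfortably large for this value):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    double x = 162.3582;
    char buf[64];               // assumed sufficient for this demo
    sprintf(buf, "%.*f", 4, x); // round to 4 decimals via text
    double r2 = atof(buf);      // parse back the rounded value
    printf("%.4f\n", r2);       // 162.3582
    return 0;
}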

Round 37.1-28.75 float calculation correctly to 8.4 instead of 8.3

I have a problem with floating-point rounding. I want to calculate floating-point numbers and round them to a given number N of decimals; in this example I want to round to 1 decimal place.
The calculation 37.1 - 28.75 mathematically equals 8.35, but due to floating-point imprecision the result is 8.349998, which printf then rounds to 8.3 instead of 8.4 at 1 decimal place.
Minimum reproducible example:
float a = 37.10;
float b = 28.75;
//a-b = 8.35 = 8.4
printf("%.1f\n", a - b); //outputs 8.3 instead of 8.4
Is it valid to add following to the result:
float result = a - b;
if (result > 0.0f)
{
    result += powf(10, -nr_of_decimals - 1) / 2;
}
else
{
    result -= powf(10, -nr_of_decimals - 1) / 2;
}
EDIT: corrected that I want 1 decimal place rounded output, not 2 decimal places
EDIT2: negative results are needed as well (28.75-37.1 = -8.4)
On my system I do actually get 8.35. It's possible that you have to set the rounding direction to "nearest" first; try this (compile with e.g. gcc ... -lm):
#include <fenv.h>
#include <stdio.h>

int main()
{
    float a = 37.10;
    float b = 28.75;
    float res = a - b;
    fesetround(FE_TONEAREST);
    printf("%.2f\n", res);
}
Binary floating point is, after all, binary, and if you do care about the correct decimal rounding this much, then your choices would be:
decimal floating point, or
fixed point.
I'd say the solution is to use fixed point, especially if you're on embedded, and forget about everything else.
With
int32_t a = 3710;
int32_t b = 2875;
the result of
a - b
will exactly be
835
every time; and then you just need to have a simple fixed point printing routine for the desired precision, and check the following digit after the last digit to see if it needs to be rounded up.
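For illustration, a minimal sketch of such a routine (the helper name and the choice to round half away from zero are mine):

#include <inttypes.h>
#include <stdio.h>

// Value is stored in hundredths (835 == 8.35); print rounded to 1 decimal
// by checking the digit after the last one kept.
static void print_hundredths_1dp(int32_t v) {
    const char *sign = v < 0 ? "-" : "";
    uint32_t u = v < 0 ? -(uint32_t)v : (uint32_t)v; // magnitude, no overflow
    uint32_t tenths = (u + 5) / 10;                  // round half away from zero
    printf("%s%" PRIu32 ".%" PRIu32 "\n", sign, tenths / 10, tenths % 10);
}

int main(void) {
    print_hundredths_1dp(3710 - 2875); // 8.4
    print_hundredths_1dp(2875 - 3710); // -8.4
    return 0;
}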
If you want to round to 2 decimals, you can add 0.005 to the result and then offset it with floorf:
float f = 37.10f - 28.75f;
float r = floorf((f + 0.005f) * 100.f) / 100.f;
printf("%f\n", r);
The output is 8.350000
Why are you using floats instead of doubles?
Regarding your question:
Is it valid to add following to the result:
float result = a - b;
if (result > 0.0f)
{
    result += powf(10, -nr_of_decimals - 1) / 2;
}
else
{
    result -= powf(10, -nr_of_decimals - 1) / 2;
}
It doesn't seem so; on my computer I get 8.350498 instead of 8.350000.
After your edit:
Calculation 37.1-28.75 will result into floating point 8.349998, which will result printf rounding to 8.3 instead of 8.4.
Then
float r = roundf((f + (f < 0.f ? -0.05f : +0.05f)) * 10.f) / 10.f;
is what you are looking for.

Comparing the ratio of two values to 1

I'm working through a basic 'Programming in C' book.
I have written the following code based off of it in order to calculate the square root of a number:
#include <stdio.h>

float absoluteValue(float x)
{
    if (x < 0)
        x = -x;
    return (x);
}

float squareRoot(float x, float epsilon)
{
    float guess = 1.0;
    while (absoluteValue(guess * guess - x) >= epsilon)
    {
        guess = (x/guess + guess) / 2.0;
    }
    return guess;
}

int main(void)
{
    printf("SquareRoot(2.0) = %f\n", squareRoot(2.0, .00001));
    printf("SquareRoot(144.0) = %f\n", squareRoot(144.0, .00001));
    printf("SquareRoot(17.5) = %f\n", squareRoot(17.5, .00001));
    return 0;
}
An exercise in the book has said that the current criteria used for termination of the loop in squareRoot() is not suitable for use when computing the square root of a very large or a very small number.
Instead of comparing the difference between the value of x and the value of guess^2, the program should compare the ratio of the two values to 1. The closer this ratio gets to 1, the more accurate the approximation of the square root.
If the ratio is just guess^2/x, shouldn't my code inside of the while loop:
guess = (x/guess + guess) / 2.0;
be replaced by:
guess = ((guess * guess) / x ) / 1 ; ?
This compiles but nothing is printed out into the terminal. Surely I'm doing exactly what the exercise is asking?
To calculate the ratio just do (guess * guess / x) that could be either higher or lower than 1 depending on your implementation. Similarly, your margin of error (in percent) would be absoluteValue((guess * guess / x) - 1) * 100
All they want you to check is how close the square root is. By squaring the number you get and dividing it by the number you took the square root of you are just checking how close you were to the original number.
Example:
sqrt(4) = 2
2 * 2 / 4 = 1 (this is exact, so we get 1: 2 * 2 = 4, and 4 / 4 = 1)
margin of error = (1 - 1) * 100 = 0% margin of error
Another example:
sqrt(4) = 1.999 (let's just say you got this)
1.999 * 1.999 = 3.996
3.996/4 = .999 (so we are close but not exact)
To check margin of error:
.999 - 1 = -.001
absoluteValue(-.001) = .001
.001 * 100 = .1% margin of error
How about applying a little algebra? Your current criterion is:
|guess^2 - x| >= epsilon
You are elsewhere assuming that guess is nonzero, so it is algebraically safe to convert that to
|1 - x / guess^2| >= epsilon / guess^2
epsilon is just a parameter governing how close the match needs to be, and the above reformulation shows that it must be expressed in terms of the floating-point spacing near guess^2 to yield equivalent precision for all evaluations. But of course that's not possible because epsilon is a constant. This is, in fact, exactly why the original criterion gets less effective as x diverges from 1.
Let us instead write the alternative expression
|1 - x / guess^2| >= delta
Here, delta expresses the desired precision in terms of the spacing of floating point values in the vicinity of 1, which is related to a fixed quantity sometimes called the "machine epsilon". You can directly select the required precision via your choice of delta, and you will get the same precision for all x, provided that no arithmetic operations overflow.
Now just convert that back into code.
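For instance, a sketch of how that could look (the name squareRootRel is mine; delta plays the role described above):

#include <math.h>

// Terminate on the relative criterion |1 - x/guess^2| < delta, where
// delta is chosen relative to 1 (e.g. a few machine epsilons).
// Assumes x > 0: for x == 0 the ratio never approaches 1.
float squareRootRel(float x, float delta) {
    float guess = 1.0f;
    while (fabsf(1.0f - x / (guess * guess)) >= delta) {
        guess = (x / guess + guess) / 2.0f;
    }
    return guess;
}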
Suggest a different point of view.
With this method, guess_next = (x/guess + guess) / 2.0;, once the initial approximation is in the neighborhood, the number of correct bits doubles with each iteration. For example, log2(FLT_EPSILON) is about -23, so 6 iterations are needed (think 23, 12, 6, 3, 2, 1).
The trouble with using guess * guess in the termination test is that it may underflow to 0.0 or overflow to infinity for a non-zero x.
To form a quality initial guess:
assert(x > 0.0f);
int expo;
float signif = frexpf(x, &expo);
float guess = ldexpf(signif, expo/2);
Now iterate N times (e.g. 6), with N based on FLT_EPSILON, FLT_DECIMAL_DIG or FLT_DIG:
for (int i = 0; i < N; i++) {
    guess = (x/guess + guess) / 2.0f;
}
The cost of perhaps an extra iteration is saved by avoiding an expensive termination condition calculation.
If code wants to test whether a/b is near 1.0f, simply allow some small multiple of FLT_EPSILON, like 1 or 2:
float a = guess;
float b = x / guess;
assert(b);
float q = a / b;
#define FACTOR (1.0f /* some value, 1.0f to maybe 2, 3 or 4 */)
if (q >= 1.0f - FLT_EPSILON * FACTOR && q <= 1.0f + FLT_EPSILON * FACTOR) {
    close_enough();
}
First lesson in numerical analysis: for floating point numbers x+y has the potential for large relative errors, especially when the sum is near zero, but x*y has very limited relative errors.
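A two-line illustration of that lesson (values chosen so the subtraction cancels nearly everything):

#include <stdio.h>

int main(void) {
    float x = 1.0000001f, y = -1.0f;
    // Each input is accurate to about 7 digits, yet the sum near zero
    // keeps almost none of them:
    printf("%.9g\n", x + y); // ~1.19209e-07 rather than the exact 1e-07
    return 0;
}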

How to compute sine wave with accuracy over the time

The use case is to generate a sine wave for digital synthesis, so we need to compute all values of sin(d t) where:
t is an integer, representing the sample number. This is variable; the range is from 0 to 158,760,000 for one hour of CD-quality sound.
d is a double, representing the delta of the angle. This is constant; the range is greater than 0 and less than π.
Goal is to achieve high accuracy with traditional int and double data types. Performance is not important.
Naive implementation is:
double next()
{
    t++;
    return sin(((double) t) * (d));
}
But the problem is that as t increases, accuracy gets reduced, because big numbers are provided to the sin function.
An improved version is the following:
double next()
{
    d_sum += d;
    if (d_sum >= (M_PI*2)) d_sum -= (M_PI*2);
    return sin(d_sum);
}
Here, I make sure to provide numbers in the range from 0 to 2*pi to the sin function.
But now the problem is that when d is small, there are many small additions, each of which loses a little accuracy.
The question here is how to improve the accuracy.
Appendix 1
"accuracy gets reduced because big numbers provided to "sin" function":
#include <stdio.h>
#include <math.h>

#define TEST     (300000006.7846112)
#define TEST_MOD (0.0463259891528704262050786960234519968548937998410258872449766)
#define SIN_TEST (0.0463094209176730795999323058165987662490610492247070175523420)

int main()
{
    double a = sin(TEST);
    double b = sin(TEST_MOD);
    printf("a=%0.20f \n", a);
    printf("diff=%0.20f \n", a - SIN_TEST);
    printf("b=%0.20f \n", b);
    printf("diff=%0.20f \n", b - SIN_TEST);
    return 0;
}
Output:
a=0.04630944601888796475
diff=0.00000002510121488442
b=0.04630942091767308033
diff=0.00000000000000000000
You can try an approach that is used in some implementations of the fast Fourier transform: values of the trigonometric functions are calculated based on previous values and the delta.
Sin(A + d) = Sin(A) * Cos(d) + Cos(A) * Sin(d)
Here we have to store and update cosine value too and store constant (for given delta) factors Cos(d) and Sin(d).
Now about precision: cos(d) for small d is very close to 1, so there is a risk of precision loss (there are only a few significant digits in numbers like 0.99999987). To overcome this issue, we can store the constant factors as
dc = Cos(d) - 1 = - 2 * Sin(d/2)^2
ds = Sin(d)
and use other formulas to update the current value
(here sa = Sin(A) and ca = Cos(A) are the current values):
ts = sa //remember last values
tc = ca
sa = sa * dc + ca * ds
ca = ca * dc - ts * ds
sa = sa + ts
ca = ca + tc
P.S. Some FFT implementations periodically (every K steps) renew sa and ca values through trig. functions to avoid error accumulation.
Example result (calculations in doubles): d = 0.000125, 800,000,000 iterations, finish angle 100,000 radians.

                      cos                  sin
described method     -0.99936080743598     0.03574879796994
Cos, Sin(100000)     -0.99936080743821     0.03574879797202
Windows Calc         -0.9993608074382124518911354141448
                      0.03574879797201650931647050069581
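A minimal C rendering of the recurrence described above (variable names follow the pseudocode; the periodic renewal from the P.S. is omitted):

#include <math.h>
#include <stdio.h>

int main(void) {
    double d = 0.000125;                        // angle delta
    double dc = -2.0 * sin(d / 2) * sin(d / 2); // Cos(d) - 1, no cancellation
    double ds = sin(d);
    double sa = 0.0, ca = 1.0;                  // Sin(0), Cos(0)
    for (long i = 0; i < 800000000L; i++) {
        double ts = sa, tc = ca;                // remember last values
        sa = sa * dc + ca * ds + ts;
        ca = ca * dc - ts * ds + tc;
    }
    printf("cos %.14f sin %.14f\n", ca, sa);    // compare with the table above
    return 0;
}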
sin(x) = sin(x + 2N∙π), so the problem can be boiled down to accurately finding a small number which is equal to a large number x modulo 2π.
For example, -1.61059759 ≡ 256 (mod 2π), and you can calculate sin(-1.61059759) with more precision than sin(256).
So let's choose some integer number to work with, 256. First find small numbers which are equal to powers of 256, modulo 2π:
// to be calculated once for a given frequency
// approximate hard-coded numbers for d = 1 below:
double modB = -1.61059759; // = 256 mod (2π / d)
double modC = 2.37724612; // = 256² mod (2π / d)
double modD = -0.89396887; // = 256³ mod (2π / d)
and then split your index as a number in base 256:
// split into a base-256 representation
// (bytes renamed i0..i3 so they don't shadow the angle delta d)
int i0 = i & 0xff;
int i1 = (i >> 8) & 0xff;
int i2 = (i >> 16) & 0xff;
int i3 = (i >> 24) & 0xff;
You can now find a much smaller number x which is equal to i modulo 2π/d:
// use our smaller constants instead of the powers of 256
double x = i0 + modB * i1 + modC * i2 + modD * i3;
double the_answer = sin(d * x); // d is the angle delta, not a byte
For different values of d you'll have to calculate different values modB, modC and modD, which are equal to those powers of 256, but modulo (2π / d). You could use a high precision library for these couple of calculations.
Scale up the period to 2^64, and do the multiplication using integer arithmetic:
// constants:
double uint64Max = pow(2.0, 64.0);
double sinFactor = 2 * M_PI / (uint64Max);
// scale the period of the waveform up to 2^64
uint64_t multiplier = (uint64_t) floor(0.5 + uint64Max * d / (2.0 * M_PI));
// multiplication with index (implicitly modulo 2^64)
uint64_t x = i * multiplier;
// scale 2^64 down to 2π
double value = sin((double)x * sinFactor);
As long as your period is not billions of samples, the precision of multiplier will be good enough.
The following code keeps the input to the sin() function within a small range, while somewhat reducing the number of small additions or subtractions due to a potentially very tiny phase increment.
double next() {
    t0 += 1.0;
    d_sum = t0 * d;
    if (d_sum > 2.0 * M_PI) {
        t0 -= ((2.0 * M_PI) / d);
    }
    return (sin(d_sum));
}
For hyper accuracy, OP has 2 problems:
multiplying d by n and maintaining more precision than double. That is answered in the first part below.
Performing a mod of the period. The simple solution is to use degrees and then mod by 360, which is easy to do exactly. Doing mod 2*π of large angles is tricky, as it needs a value of 2*π with about 27 more bits of accuracy than (double) 2.0 * M_PI.
Use 2 doubles to represent d.
Let us assume 32-bit int and binary64 double. So double has 53-bits of accuracy.
0 <= n <= 158,760,000, which is about 2^27.2. Since double can handle 53-bit unsigned integers continuously and exactly, and 53 - 28 --> 25, any double with only 25 significant bits can be multiplied by n and still be exact.
Segment d into 2 doubles dmsb, dlsb: the 25 most-significant bits and the 28 least-significant bits.
int exp;
double dmsb = frexp(d, &exp); // exact result
dmsb = floor(dmsb * POW2_25); // exact result
dmsb /= POW2_25; // exact result
dmsb *= pow(2, exp); // exact result
double dlsb = d - dmsb; // exact result
Then each multiplication (or successive addition) of dmsb*n will be exact (this is the important part). dlsb*n will only err in its least few bits.
double next()
{
    d_sum_msb += dmsb; // exact
    d_sum_lsb += dlsb;
    double angle = fmod(d_sum_msb, M_PI*2); // exact
    angle += fmod(d_sum_lsb, M_PI*2);
    return sin(angle);
}
Note: fmod(x,y) results are expected to be exact given exact x, y.
#include <stdio.h>
#include <math.h>

#define AS_n 158760000
double AS_d = 300000006.7846112 / AS_n;
double AS_d_sum_msb = 0.0;
double AS_d_sum_lsb = 0.0;
double AS_dmsb = 0.0;
double AS_dlsb = 0.0;

double next() {
    AS_d_sum_msb += AS_dmsb; // exact
    AS_d_sum_lsb += AS_dlsb;
    double angle = fmod(AS_d_sum_msb, M_PI * 2); // exact
    angle += fmod(AS_d_sum_lsb, M_PI * 2);
    return sin(angle);
}

#define POW2_25 (1U << 25)

int main(void) {
    int exp;
    AS_dmsb = frexp(AS_d, &exp);        // exact result
    AS_dmsb = floor(AS_dmsb * POW2_25); // exact result
    AS_dmsb /= POW2_25;                 // exact result
    AS_dmsb *= pow(2, exp);             // exact result
    AS_dlsb = AS_d - AS_dmsb;           // exact result
    double y;
    for (long i = 0; i < AS_n; i++)
        y = next();
    printf("%.20f\n", y);
}
Output
0.04630942695385031893
Use degrees
Recommend using degrees, as 360 degrees is the exact period, while M_PI*2 radians is only an approximation: C cannot represent π exactly.
If OP still wants to use radians, for further insight on performing the mod of π, see Good to the Last Bit

Calculate maclaurin series for sin using C

I wrote code that calculates sin using its Maclaurin series, and it works, but when I try to calculate it for large x values and compensate by giving a large order N (the length of the sum), it eventually overflows and doesn't give correct results. Is there a way to optimize it so it works for large x values too? It already works great for small x values and really big N values.
Here is the code:
long double calcMaclaurinPolynom(double x, int N){
    long double result = 0;
    long double atzeretCounter = 2;
    int sign = 1;
    long double fraction = x;
    for (int i = 0; i <= N; i++)
    {
        result += sign*fraction;
        sign = sign*(-1);
        fraction = fraction*((x*x) / ((atzeretCounter)*(atzeretCounter + 1)));
        atzeretCounter += 2;
    }
    return result;
}
The major issue is using the series outside the range where it converges well.
Since OP said "converted x to radX = (x*PI)/180", indicating that the OP is starting with degrees rather than radians, the OP is in luck. The first step in computing my_sin(x) is range reduction, and when starting with degrees the reduction is exact. So reduce the range before converting to radians.
long double calcMaclaurinPolynom(double x /* degrees */, int N){
    // Reduce to range -360 to 360
    // This reduction is exact, no round-off error
    x = fmod(x, 360);

    // Reduce to range -180 to 180
    if (x >= 180) {
        x -= 180;
        x = -x;
    } else if (x <= -180) {
        x += 180;
        x = -x;
    }

    // Reduce to range -90 to 90
    if (x >= 90) {
        x = 180 - x;
    } else if (x <= -90) {
        x = -180 - x;
    }

    // now convert to radians
    x = x*PI/180;

    // continue with regular code
Alternative, if using C11, use remquo(). Search SO for sample code.
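A hedged sketch of that alternative (my formulation; remquo() is in <math.h>):

#include <math.h>

// sin(x) = (-1)^k * sin(x - k*180) in degrees; remquo() returns the
// reduced angle (here in [-90, +90]) and the low bits of the quotient k.
static double reduce_degrees(double x, int *negate) {
    int k;
    double r = remquo(x, 180.0, &k); // exact for exact inputs
    *negate = k & 1;                 // an odd multiple of 180 flips the sign
    return r;
}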
As #user3386109 commented above, no need to "convert back to degrees".
[Edit]
With typical summation series, summing the least significant terms first improves the precision of the answer. With OP's code this can be done with
for (int i = N; i >= 0; i--)
Alternatively, rather than iterating a fixed number of times, loop until the term has no significance to the sum. The following uses recursion to sum the least significant terms first. With range reduction in the -90 to 90 range, the number of iterations is not excessive.
static double sin_d_helper(double term, double xx, unsigned i) {
    if (1.0 + term == 1.0)
        return term;
    return term - sin_d_helper(term * xx / ((i + 1) * (i + 2)), xx, i + 2);
}
#include <math.h>

double sin_d(double x_degrees) {
    // range reduction and d --> r conversion from above
    double x_radians = ...
    return x_radians * sin_d_helper(1.0, x_radians * x_radians, 1);
}
You can avoid the sign variable by incorporating it into the fraction update as in (-x*x).
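Sketched against OP's code (hypothetical variant name):

// Same series, with the sign folded into the term update: each term is
// the previous one times -(x*x) / ((2k)*(2k+1)).
long double calcMaclaurinPolynom2(double x, int N) {
    long double result = 0;
    long double term = x;
    long double atzeretCounter = 2;
    for (int i = 0; i <= N; i++) {
        result += term;
        term *= (-(x * x)) / (atzeretCounter * (atzeretCounter + 1));
        atzeretCounter += 2;
    }
    return result;
}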
With your algorithm you do not have problems with integer overflow in the factorials.
As soon as x*x < (2*k)*(2*k+1), the error (assuming exact evaluation) is bounded by abs(fraction), i.e., the size of the next term in the series.
For large x, the biggest source of error is truncation resp. floating-point error, magnified via cancellation between the terms of the alternating series. The terms around the k-th term, for k about x/2, have the biggest size and have to be offset by other big terms.
Halving-and-Squaring
One easy method to deal with large x without using the value of pi is to employ the trigonometric theorems where
sin(2*x)=2*sin(x)*cos(x)
cos(2*x)=2*cos(x)^2-1=cos(x)^2-sin(x)^2
and first reduce x by halving, simultaneously evaluating the Maclaurin series for sin(x/2^n) and cos(x/2^n), and then employ trigonometric squaring (literal squaring as complex numbers, cos(x)+i*sin(x)) to recover the values for the original argument.
cos(x/2^(n-1)) = cos(x/2^n)^2-sin(x/2^n)^2
sin(x/2^(n-1)) = 2*sin(x/2^n)*cos(x/2^n)
then
cos(x/2^(n-2)) = cos(x/2^(n-1))^2-sin(x/2^(n-1))^2
sin(x/2^(n-2)) = 2*sin(x/2^(n-1))*cos(x/2^(n-1))
etc.
See https://stackoverflow.com/a/22791396/3088138 for the simultaneous computation of sin and cos values, then encapsulate it with
def CosSinForLargerX(x, n):
    k = 0
    while abs(x) > 1:
        k += 1; x /= 2
    c, s = getCosSin(x, n)
    r2 = 0
    for i in range(k):
        s2 = s*s; c2 = c*c; r2 = s2 + c2
        s = 2*c*s
        c = c2 - s2
    return c/r2, s/r2
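Translated into C under the same assumptions (getCosSin is replaced by a hypothetical direct-series helper; r2 starts at 1 so the k == 0 case is safe):

#include <math.h>
#include <stdio.h>

// Hypothetical helper: n terms of the Maclaurin series for cos and sin,
// intended for |x| <= 1 where the series converges quickly.
static void maclaurin_cos_sin(double x, int n, double *c, double *s) {
    double tc = 1.0, ts = x;
    *c = 0.0;
    *s = 0.0;
    for (int k = 0; k < n; k++) {
        *c += tc;
        *s += ts;
        tc *= -(x * x) / ((2.0 * k + 1) * (2.0 * k + 2));
        ts *= -(x * x) / ((2.0 * k + 2) * (2.0 * k + 3));
    }
}

// Halve x until |x| <= 1, evaluate the series there, then square back up,
// renormalizing by r2 = s*s + c*c as in the sketch above.
static void cos_sin_large(double x, int n, double *c, double *s) {
    int k = 0;
    while (fabs(x) > 1) {
        k++;
        x /= 2;
    }
    maclaurin_cos_sin(x, n, c, s);
    double r2 = 1.0;
    for (int i = 0; i < k; i++) {
        double s2 = *s * *s, c2 = *c * *c;
        r2 = s2 + c2;
        *s = 2 * *c * *s; // sin doubling, uses the old cos
        *c = c2 - s2;     // cos doubling
    }
    *c /= r2;
    *s /= r2;
}

int main(void) {
    double c, s;
    cos_sin_large(100.0, 12, &c, &s);
    printf("cos(100) ~ %.12f (libm %.12f)\n", c, cos(100.0));
    printf("sin(100) ~ %.12f (libm %.12f)\n", s, sin(100.0));
    return 0;
}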
