I have doubles that represent latitudes and longitudes.
I can easily limit longitudes to (-180.0, 180.0] with the following function.
double limitLon(double lon)
{
    return fmod(lon - 180.0, 360.0) + 180.0;
}
This works because one end is exclusive and the other is inclusive. fmod includes 0 but not -360.0.
Can anyone think of an elegant method for latitude?
The required interval is [-90.0, 90.0]. A closed form solution would be best, i.e. no loop. I think fmod() is probably a non-starter because both ends are inclusive now.
Edit: As was pointed out, one can't go to 91 degrees latitude anyway. Technically 91 should map to 89.0. Oh boy, that changes things.
There is a much, much more efficient way to do this than using sin and arcsin. The most expensive operation is a single division. The observation that the required interval is closed is key.
Divide by 360 and take the remainder. This yields a number in the interval [0, 360), which is half-open, as observed.
Fold the interval in half. If the remainder is >=180, subtract it from 360. This maps the interval [180, 360) to the interval (0, 180]. The union of this interval with the bottom half is the closed interval [0, 180].
Subtract 90 from the result. This interval is [-90, 90], as desired.
This is, indeed, the exact same function as arcsin(sin(x)), but without the expense or any issue with numeric stability.
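In C, one reading of these steps looks like the sketch below (an assumption on my part: the fold is applied to lat + 90 so that it matches arcsin(sin(x)); fmod() can return a negative remainder, so it is shifted into [0, 360) first):
#include <math.h>

double limit_latitude(double lat)
{
    double r = fmod(lat + 90.0, 360.0); /* remainder; may be negative */
    if (r < 0.0)
        r += 360.0;                     /* shift into [0, 360) */
    if (r >= 180.0)
        r = 360.0 - r;                  /* fold [180, 360) onto (0, 180] */
    return r - 90.0;                    /* [0, 180] -> [-90, 90] */
}
With this, limit_latitude(91.0) yields 89.0, matching arcsin(sin(91°)).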
Using trig functions sin()/cos() is expensive in time and introduces loss of precision. Much better to use fmod(): the result has the same sign as x and a magnitude less than the magnitude of y.
OP was on the right track! The solution below is easy to adjust for the edge values of -180.0 and +180.0.
#include <math.h>

// Reduce to (-180.0, 180.0]
double Limit_Longitude(double longitude_degrees) {
    // A good implementation of `fmod()` will introduce _no_ loss of precision.
    // -360.0 < longitude_reduced < 360.0
    double longitude_reduced = fmod(longitude_degrees, 360.0);
    if (longitude_reduced > 180.0) {
        longitude_reduced -= 360.0;
    } else if (longitude_reduced <= -180.0) {
        longitude_reduced += 360.0;
    }
    return longitude_reduced;
}
Limiting latitude to [-90, +90] is trickier: a latitude of +91 degrees means going over the North Pole, which flips the longitude by 180 degrees. To preserve longitude precision, adjust by 180 toward 0 degrees.
void Limit_Latitude_Longitude(double *latitude_degrees, double *longitude_degrees) {
    // Reduce latitude to (-180.0, 180.0] first.
    *latitude_degrees = Limit_Longitude(*latitude_degrees);
    int flip = 0;
    if (*latitude_degrees > 90.0) {
        *latitude_degrees = 180.0 - *latitude_degrees;
        flip = 1;
    } else if (*latitude_degrees < -90.0) {
        *latitude_degrees = -180.0 - *latitude_degrees;
        flip = 1;
    }
    if (flip) {
        // Crossed a pole: shift longitude by 180, toward 0 to preserve precision.
        *longitude_degrees += *longitude_degrees > 0 ? -180.0 : 180.0;
    }
    *longitude_degrees = Limit_Longitude(*longitude_degrees);
}
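As a quick usage sketch (a hypothetical driver around the functions above), a point just past the North Pole comes back down on the opposite meridian:
#include <stdio.h>

int main(void) {
    double lat = 91.0, lon = 10.0;
    Limit_Latitude_Longitude(&lat, &lon);
    printf("%.1f %.1f\n", lat, lon); /* prints 89.0 -170.0 */
    return 0;
}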
Minor: Although the goal is "limit longitudes to (-180.0, 180.0]", I'd expect ranges of [-180.0, 180.0), [-180.0, 180.0] to be more commonly needed.
How about using the sin and inverse functions?
asin(sin((lat/180.0)*3.14159265)) * (180.0/3.14159265);
Neither answer provided (D Stanley, eh9) works ... though for eh9's I might be misinterpreting something. Try them with multiple values.
The proper answers are unfortunately expensive. See the following from Microsoft Research: https://web.archive.org/web/20150109080324/http://research.microsoft.com/en-us/projects/wraplatitudelongitude/.
From there, the answers are:
latitude_new = atan(sin(latitude)/fabs(cos(latitude))) -- note the absolute value around cos(latitude)
longitude_new = atan2(sin(longitude),cos(longitude))
Note that in C you may want atan2f if you are working with float rather than double. Also, all the trig functions take radians.
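For reference, those formulas translate to C along these lines (my own sketch, with made-up names; degrees are converted to radians and back):
#include <math.h>

double wrap_latitude(double lat_deg) {
    double lat = lat_deg * M_PI / 180.0;
    return atan(sin(lat) / fabs(cos(lat))) * 180.0 / M_PI;
}

double wrap_longitude(double lon_deg) {
    double lon = lon_deg * M_PI / 180.0;
    return atan2(sin(lon), cos(lon)) * 180.0 / M_PI;
}
For example, wrap_latitude(91.0) gives 89.0 and wrap_longitude(190.0) gives -170.0.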
Related
I want to develop a simple geo-fencing algorithm in C, that works without using sin, cos and tan. I am working with a small microcontroller, hence the restriction. I have no space left for <math.h>. The radius will be around 20..100m. I am not expecting super accurate results this way.
My current solution takes two coordinate sets (decimal degrees, .00001 accuracy, but passed as values x10^5 in order to eliminate the decimal places) and a radius (in m). Scaling the coordinates by 10/9 (one unit of 10^-5 degrees is roughly 1.11 m) lets them be used, approximately, in a Pythagorean check of whether one coordinate lies within the radius of another:
static int32_t
geo_convert_coordinate(int32_t coordinate)
{
    return (coordinate * 10) / 9;
}

bool
geo_check(int32_t lat_fixed,
          int32_t lon_fixed,
          int32_t lat_var,
          int32_t lon_var,
          uint16_t radius)
{
    lat_fixed = geo_convert_coordinate(lat_fixed);
    lon_fixed = geo_convert_coordinate(lon_fixed);
    lat_var = geo_convert_coordinate(lat_var);
    lon_var = geo_convert_coordinate(lon_var);

    if (((lat_var - lat_fixed) * (lat_var - lat_fixed) + (lon_var - lon_fixed) * (lon_var - lon_fixed))
        <= (radius * radius))
    {
        return true;
    }
    return false;
}
This solution works quite well for the equator, but when changing the latitude, this becomes increasingly inaccurate, at 70°N the deviation is around 50%. I could change the factor depending on the latitude, but I am not happy with this solution.
Is there a better way to do this calculation? Any help is very much appreciated. Best regards!
UPDATE
I used the input I got and managed to implement a decent solution. I used only signed ints, no floats.
The haversine formula could be simplified: due to the relevant radii (50-500 m), the deltas of latitude and longitude are very small (<0.02°). This means the sine can be simplified to sin(x) ≈ x, and likewise the arcsine to asin(x) ≈ x. This approximation is very accurate for angles <10°, and even better for the small angles used here. That leaves the cosine, which I implemented according to #meaning-matters's suggestion: it takes an angle and returns the actual result multiplied by 100, in order to be able to use ints. The square root was implemented with an iterative loop (I cannot find the SO post anymore). The haversine calculation was done with the inputs multiplied by powers of 10 to achieve accuracy, and the result divided by the necessary power of 10 afterwards.
For my 8bit system, this caused a memory usage of around 2000-2500 Bytes.
Implement the Haversine function using your own trigonometric functions that use lookup tables and do interpolation.
Because you don't want very accurate results, small lookup tables, of perhaps twenty points, would be sufficient. And, simple linear interpolation would also be fine.
In case you don't have much memory space: Bear in mind that to implement sine and cosine, you only need one lookup table for 90 degrees of either function. All values can then be determined by mirroring and offsetting.
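As a rough sketch of that idea (my own illustration, not taken from OP's code): a 16-entry quarter-wave table in 6° steps, values scaled by 100 as in the update above, with mirroring for the other quadrants and linear interpolation in between:
#include <stdint.h>

/* sin(i * 6 degrees) * 100, for i = 0..15 (0..90 degrees);
   the 17th entry is a guard so interpolation at 90 stays in bounds. */
static const uint8_t sin_lut[17] = {
    0, 10, 21, 31, 41, 50, 59, 67, 74, 81, 87, 91, 95, 98, 99, 100, 100
};

/* Returns sin(deg) * 100 for integer degrees, no floating point. */
int16_t sin100(int16_t deg) {
    int16_t sign = 1;
    deg %= 360;
    if (deg < 0) deg += 360;                   /* 0..359 */
    if (deg >= 180) { deg -= 180; sign = -1; } /* sin(x+180) = -sin(x) */
    if (deg > 90) deg = 180 - deg;             /* mirror onto 0..90 */
    int16_t i = deg / 6, r = deg % 6;
    return sign * (sin_lut[i] + (sin_lut[i + 1] - sin_lut[i]) * r / 6);
}

/* cosine via phase shift, matching the "cosine * 100" of the update */
int16_t cos100(int16_t deg) { return sin100(deg + 90); }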
I am working on a project which incorporates computing a sine wave as input for a control loop.
The sine wave has a frequency of 280 Hz, and the control loop runs every 30 µs and everything is written in C for an Arm Cortex-M7.
At the moment we are simply doing:
double time;

void control_loop() {
    time += 30e-6;
    double sine = sin(2 * M_PI * 280 * time);
    ...
}
Two problems/questions arise:
When running for a long time, time becomes bigger. Suddenly there is a point where the computation time for the sine function increases drastically (see image). Why is this? How are these functions usually implemented? Is there a way to circumvent this (without noticeable precision loss) as speed is a huge factor for us? We are using sin from math.h (Arm GCC).
How can I deal with time in general? When running for a long time, the variable time will inevitably reach the limits of double precision. Even using a counter time = counter++ * 30e-6; only improves this, but it does not solve it. As I am certainly not the first person who wants to generate a sine wave for a long time, there must be some ideas/papers/... on how to implement this fast and precise.
Instead of calculating sine as a function of time, maintain a sine/cosine pair and advance it through complex number multiplication. This doesn't require any trigonometric functions or lookup tables; only four multiplies and an occasional re-normalization:
#include <math.h>

static double dx, dy;        // rotation per step: cos(a), sin(a)
static double x = 1, y = 0;  // complex x + iy
static int counter = 0;

void init_generator(void) {  // call once at startup; C requires
    double a = 2 * M_PI * 280 * 30e-6;  // these computed at run time
    dx = cos(a);
    dy = sin(a);
}

void control_loop() {
    double xx = dx*x - dy*y;
    double yy = dx*y + dy*x;
    x = xx, y = yy;
    // renormalize once in a while, based on
    // https://www.gamedev.net/forums/topic.asp?topic_id=278849
    if((counter++ & 0xff) == 0) {
        double d = 1 - (x*x + y*y - 1)/2;
        x *= d, y *= d;
    }
    double sine = y; // this is your sine
}
The frequency can be adjusted, if needed, by recomputing dx, dy.
Additionally, all the operations here can be done, rather easily, in fixed point.
Rationality
As #user3386109 points out below (+1), 280 * 30e-6 = 21/2500 is a rational number, so the sine should loop around after exactly 2500 samples. We can combine this method with theirs by resetting our generator (x=1, y=0) every 2500 iterations (or 5000, or 10000, etc.). This would eliminate the need for renormalization, as well as get rid of any long-term phase inaccuracies.
(Technically any floating point number is a dyadic rational. However, 280 * 30e-6 doesn't have an exact representation in binary. Yet, by resetting the generator as suggested, we'll get an exactly periodic sine as intended.)
Explanation
Some requested an explanation down in the comments of why this works. The simplest explanation is to use the angle sum trigonometric identities:
xx = cos((n+1)*a) = cos(n*a)*cos(a) - sin(n*a)*sin(a) = x*dx - y*dy
yy = sin((n+1)*a) = sin(n*a)*cos(a) + cos(n*a)*sin(a) = y*dx + x*dy
and the correctness follows by induction.
This is essentially De Moivre's formula, if we view those sine/cosine pairs as complex numbers, in accordance with Euler's formula.
A more insightful way might be to look at it geometrically. Complex multiplication by exp(ia) is equivalent to rotation by a radians. Therefore, by repeatedly multiplying by dx + idy = exp(ia), we incrementally rotate our starting point 1 + 0i along the unit circle. The y coordinate, according to Euler's formula again, is the sine of the current phase.
Normalization
While the phase continues to advance with each iteration, the magnitude (aka norm) of x + iy drifts away from 1 due to round-off errors. Since we're interested in generating a sine of amplitude 1, we need to normalize x + iy to compensate for the numeric drift. The straightforward way is, of course, to divide it by its own norm:
double d = 1/sqrt(x*x + y*y);
x *= d, y *= d;
This requires a calculation of a reciprocal square root. Even though we normalize only once every X iterations, it'd still be cool to avoid it. Fortunately |x + iy| is already close to 1, thus we only need a slight correction to keep it at bay. Expanding the expression for d around 1 (first order Taylor approximation), we get the formula that's in the code:
d = 1 - (x*x + y*y - 1)/2
TODO: to fully understand the validity of this approximation one needs to prove that it compensates for round-off errors faster than they accumulate -- and thus get a bound on how often it needs to be applied.
The function can be rewritten as
double n;

void control_loop() {
    n += 1;
    double sine = sin(2 * M_PI * 280 * 30e-6 * n);
    ...
}
That does exactly the same thing as the code in the question, with exactly the same problems. But it can now be simplified:
280 * 30e-6 = 280 * 30 / 1000000 = 21 / 2500 = 8.4e-3
Which means that when n reaches 2500, you've output exactly 21 cycles of the sine wave. Which means that you can set n back to 0.
The resulting code is:
int n;

void control_loop() {
    n += 1;
    if (n == 2500)
        n = 0;
    double sine = sin(2 * M_PI * 8.4e-3 * n);
    ...
}
As long as your code can run for 21 cycles without problems, it'll run forever without problems.
I'm rather shocked at the existing answers. The first problem you detect is easily solved, and the next problem magically disappears when you solve the first problem.
You need a basic understanding of math to see how it works. Recall, sin(x+2pi) is just sin(x), mathematically. The large increase in time you see happens when your sin(float) implementation switches to another algorithm, and you really want to avoid that.
Remember that float has only 6 significant digits. 100000.0f*M_PI+x uses those 6 digits for 100000.0f*M_PI, so there's nothing left for x.
So, the easiest solution is to keep track of x yourself. At t=0 you initialize x to 0.0f. Every 30 µs, you increment x += 2 * M_PI * 280 * 30e-6; — the time does not appear in this formula! Finally, if x > 2*M_PI, you decrement x -= 2*M_PI; (since sin(x) == sin(x - 2*pi)).
You now have an x that stays nicely in the range 0 to 6.2834, where sin is fast and the 6 digits of precision are all useful.
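A sketch of that accumulator (my own rendering of the answer's recipe, with the thread's 280 Hz / 30 µs numbers):
#include <math.h>

static float x = 0.0f; /* phase in radians, kept in 0..2*pi */

void control_loop() {
    float sine = sinf(x);
    x += (float)(2 * M_PI * 280 * 30e-6); /* phase step per 30 us tick */
    if (x > (float)(2 * M_PI))
        x -= (float)(2 * M_PI);           /* sin(x) == sin(x - 2*pi) */
    /* ... */
}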
How to generate a lovely sine.
The DAC is 12 bits, so you have only 4096 levels. It makes no sense to send more than 4096 samples per period; in real life you will need far fewer samples to generate a good quality waveform.
Create a C file with the lookup table (using your PC). Redirect the program's output to a file (https://helpdeskgeek.com/how-to/redirect-output-from-command-line-to-text-file/).
#include <stdio.h>
#include <math.h>

#define STEP ((2*M_PI) / 4096.0)

int main(void)
{
    double alpha = 0;
    printf("#include <stdint.h>\nconst uint16_t sine[4096] = {\n");
    for(int x = 0; x < 4096 / 16; x++)
    {
        for(int y = 0; y < 16; y++)
        {
            printf("%d, ", (int)(4095 * (sin(alpha) + 1.0) / 2.0));
            alpha += STEP;
        }
        printf("\n");
    }
    printf("};\n");
    return 0;
}
https://godbolt.org/z/e899d98oW
Configure the timer to trigger the overflow 4096 * 280 = 1,146,880 times per second. Set the timer to generate the DAC trigger event. For a 180 MHz timer clock this will not be exact and the frequency will be 279.906449045 Hz. If you need better precision, change the number of samples to match your timer frequency and/or change the timer clock frequency (H7 timers can run at up to 480 MHz).
Configure the DAC to use DMA, transferring values from the lookup table created in step 1 to the DAC on the trigger event.
Enjoy a beautiful sine wave on your oscilloscope. Note that your microcontroller core will not be loaded at all; you will have it free for other tasks. If you want to change the period, simply reconfigure the timer; you can do it as many times per second as you wish. To reconfigure the timer, use the timer DMA burst mode, which reloads the PSC & ARR registers automatically on the update event without disturbing the generated waveform.
I know it is advanced STM32 programming and it will require register level programming. I use it to generate complex waveforms in our devices.
It is the correct way of doing it. No control loops, no calculations, no core load.
I'd like to address the embedded programming issues in your code directly - #0___________'s answer is the correct way to do this on a microcontroller and I won't retread the same ground.
Variables representing time should never be floating point. If your increment is not a power of two, errors will always accumulate. Even if it is, eventually your increment will be smaller than the smallest increment and the timer will stop. Always use integers for time. You can pick an integer size big enough to ignore roll over - an unsigned 32 bit integer representing milliseconds will take 50 days to roll over, while an unsigned 64 bit integer will take over 500 million years.
Generating any periodic signal where you do not care about the signal's phase does not require a time variable. Instead, you can keep an internal counter which resets to 0 at the end of a period. (When you use DMA with a look-up table, that's exactly what you're doing - the counter is the DMA controller's next-read pointer.)
Whenever you use a transcendental function such as sine in a microcontroller, your first thought should be "can I use a look-up table for this?" You don't have access to the luxury of a modern operating system optimally shuffling your load around on a 4 GHz+ multi-core processor. You're often dealing with a single thread that will stall waiting for your 200 MHz microcontroller to bring the FPU out of standby and perform the approximation algorithm. There is a significant cost to transcendental functions. There's a cost to LUTs too, but if you're hitting the function constantly, there's a good chance you'll like the tradeoffs of the LUT a lot better.
As noted in some of the comments, the time value is continually growing with time. This poses two problems:
The sin function likely has to perform argument reduction (a modulus) internally to bring the value into a supported range.
The resolution of time will become worse and worse as the value increases, due to adding on higher digits.
Making the following changes should improve the performance:
double time;

void control_loop() {
    time += 30.0e-6;
    if((1.0/280.0) < time)
    {
        time -= 1.0/280.0;
    }
    double sine = sin(2 * M_PI * 280 * time);
    ...
}
Note that once this change is made, the time variable no longer represents total elapsed time, only the phase within the current cycle.
Use a look-up table. Your comment in the discussion with Eugene Sh.:
A small deviation from the sine frequency (like 280.1Hz) would be ok.
In that case, with a control interval of 30 µs, if you have a table of 119 samples that you repeat over and over, you will get a sine wave of 280.112 Hz. Since you have a 12-bit DAC, you only need 119 * 2 = 238 bytes to store this if you would output it directly to the DAC. If you use it as input for further calculations like you mention in the comments, you can store it as float or double as desired. On an MCU with embedded static RAM, it only takes a few cycles at most to load from memory.
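A sketch of that table (names are my own; the table holds one cycle of 119 samples, replayed every control period):
#include <math.h>

#define N 119

static double table[N];
static int idx = 0;

void init_table(void) { /* call once at startup */
    for (int i = 0; i < N; i++)
        table[i] = sin(2 * M_PI * i / N);
}

double next_sample(void) { /* call every 30 us: ~280.112 Hz sine */
    double s = table[idx];
    if (++idx == N) idx = 0;
    return s;
}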
If you have a few kilobytes of memory available, you can eliminate this problem completely with a lookup table.
With a sampling period of 30 µs, 2500 samples will have a total duration of 75 ms. This is exactly equal to the duration of 21 cycles at 280 Hz.
I haven't tested or compiled the following code, but it should at least demonstrate the approach:
#include <stdlib.h>
#include <math.h>

double sin2500() {
    static double *table = NULL;
    static int n = 2499;
    if (!table) {
        table = malloc(2500 * sizeof(double));
        for (int i = 0; i < 2500; i++)
            table[i] = sin(2 * M_PI * 280 * i * 30e-06);
    }
    n = (n + 1) % 2500;
    return table[n];
}
How about a variant of others' modulo-based concept:
int t = 0;
int divisor = 1000000;

void control_loop() {
    t += 30 * 280;
    if (t > divisor) t -= divisor;
    double sine = sin(2 * M_PI * t / (double)divisor);
    ...
}
Because the modulo is calculated in integer arithmetic, it causes no roundoff errors.
There is an alternative approach to calculating a series of values of sine (and cosine) for angles that increase by some very small amount. It essentially devolves down to calculating the X and Y coordinates of a circle, and then dividing the Y value by some constant to produce the sine, and dividing the X value by the same constant to produce the cosine.
If you are content to generate a "very round ellipse", you can use the following hack, which is attributed to Marvin Minsky in the 1960s. It's much faster than calculating sines and cosines, although it introduces a very small error into the series. Here is an extract from the HAKMEM document, Item 149, where the Minsky circle algorithm is outlined.
ITEM 149 (Minsky): CIRCLE ALGORITHM
Here is an elegant way to draw almost circles on a point-plotting display:
NEW X = OLD X - epsilon * OLD Y
NEW Y = OLD Y + epsilon * NEW(!) X
This makes a very round ellipse centered at the origin with its size determined by the initial point. epsilon determines the angular velocity of the circulating point, and slightly affects the eccentricity. If epsilon is a power of 2, then we don't even need multiplication, let alone square roots, sines, and cosines! The "circle" will be perfectly stable because the points soon become periodic.
The circle algorithm was invented by mistake when I tried to save one register in a display hack! Ben Gurley had an amazing display hack using only about six or seven instructions, and it was a great wonder. But it was basically line-oriented. It occurred to me that it would be exciting to have curves, and I was trying to get a curve display hack with minimal instructions.
Here is a link to the hakmem: http://inwap.com/pdp10/hbaker/hakmem/hacks.html
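Translated to C, the recurrence is two lines per sample (a sketch using the thread's numbers, so epsilon = 2π·280·30e-6; starting at radius 1, y then approximates the sine of the running phase):
#include <math.h>

static double cx = 1.0, cy = 0.0; /* start on the unit circle */
static const double epsilon = 2 * M_PI * 280 * 30e-6;

double minsky_step(void) {
    cx = cx - epsilon * cy;
    cy = cy + epsilon * cx; /* note: uses the NEW cx, as in the item */
    return cy;              /* approximately sin of the running phase */
}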
I think it would be possible to use a modulo because sin() is periodic.
Then you don’t have to worry about the problems.
double time = 0;
long unsigned int timesteps = 0;
double sine;

void control_loop()
{
    timesteps++;
    time += 30e-6;
    if( time > 1 )
    {
        time -= 1;
    }
    sine = sin( 2 * M_PI * 280 * time );
    ...
}
Fascinating thread. Minsky's algorithm mentioned in Walter Mitty's answer reminded me of a method for drawing circles that was published in Electronics & Wireless World and that I kept. (Credit: https://www.electronicsworld.co.uk/magazines/). I'm attaching it here for interest.
However, for my own similar projects (for audio synthesis) I use a lookup table, with enough points that linear interpolation is accurate enough (do the math(s)!)
I have written the following function for the Taylor series to calculate cosine.
double cosine(int x) {
    x %= 360; // make it less than 360
    double rad = x * (PI / 180);
    double cos = 0;
    int n;
    for(n = 0; n < TERMS; n++) {
        cos += pow(-1, n) * pow(rad, 2 * n) / fact(2 * n);
    }
    return cos;
}
My issue is that when I input 90 I get the answer -0.000000. (Why am I getting -0.000 instead of 0.000?)
Can anybody explain why, and how I can solve this issue?
I think it's due to the precision of double.
Here is the main() :
int main(void){
    int y;
    //scanf("%d",&y);
    y = 90;
    printf("sine(%d)= %lf\n", y, sine(y));
    printf("cosine(%d)= %lf\n", y, cosine(y));
    return 0;
}
It's totally expected that you will not be able to get exact zero outputs for cosine of anything with floating point, regardless of how good your approach to computing it is. This is fundamental to how floating point works.
The mathematical zeros of cosine are odd multiples of pi/2. Because pi is irrational, it's not exactly representable as a double (or any floating point form), and the difference between the nearest neighboring values that are representable is going to be at least pi/2 times DBL_EPSILON, roughly 3e-16 (or corresponding values for other floating point types). For some odd multiples of pi/2, you might "get lucky" and find that it's really close to one of the two neighbors, but on average you're going to find it's about 1e-16 away. So your input is already wrong by 1e-16 or so.
Now, cosine has slope +1 or -1 at its zeros, so the error in the output will be roughly proportional to the error in the input. But to get an exact zero, you'd need error smaller than the smallest representable nonzero double, which is around 2e-308. That's nearly 300 orders of magnitude smaller than the error in the input.
While you could in theory "get lucky" and have some multiple of pi/2 that's really, really close to the nearest representable double, the likelihood of this, just modeling it as random, is astronomically small. I believe there are even proofs that there is no double x for which the correctly-rounded value of cos(x) is an exact zero. For single-precision (float) this can be determined easily by brute force; for double it's probably also doable, but a big computation.
As to why printf is printing -0.000000, it's just that the default for %f is 6 places after the decimal point, which is nowhere near enough to see the first significant digit. Using %e or %g, optionally with a large precision modifier, would show you an approximation of the result you got that actually retains some significance and give you an idea whether your result is good.
My issue is that when I input 90 I get the answer -0.000000. (Why am I getting -0.000 instead of 0.000?)
cosine(90) is not precise enough to result in a value of 0.0. Use printf("cosine(%d)= %le\n", y, cosine(y)); (note the e) to see a more informative view of the result. As it is, cosine(90) generates a small negative result in the range [-0.0005 ... -0.0], and that is rounded to "-0.000" for printing.
Can anybody explain why and how I can solve this issue?
OP's cosine() lacks sufficient range reduction, which for degrees can be exact.
x %= 360; was a good first step, yet perform a better range reduction to a 90° width like [-45°...45°], [45°...135°], etc.
Also recommended: use a Taylor series with sufficient terms (e.g. 10) and a good machine PI¹. Form the terms more carefully than pow(rad, 2 * n) / fact(2 * n), which injects excessive error.
Example1, example2.
Other improvements possible, yet something to get OP started.
¹ #define PI 3.1415926535897932384626433832795
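Putting these suggestions together, a sketch might look like the following (my own illustration: exact range reduction in degrees to within ±45° of the nearest multiple of 90, then a short Taylor series built term-by-term; note that cosine_deg(90) reduces to sin(0) and returns exactly 0.0):
#define PI 3.1415926535897932384626433832795
#define TERMS 10

/* Taylor series about 0; each term derived from the previous one. */
static double cos_taylor(double rad) {
    double term = 1.0, sum = 1.0;
    for (int n = 1; n < TERMS; n++) {
        term *= -rad * rad / ((2*n - 1) * (2*n));
        sum += term;
    }
    return sum;
}

static double sin_taylor(double rad) {
    double term = rad, sum = rad;
    for (int n = 1; n < TERMS; n++) {
        term *= -rad * rad / ((2*n) * (2*n + 1));
        sum += term;
    }
    return sum;
}

/* Exact range reduction in degrees to [-45, +45] about the nearest
   multiple of 90, then pick the matching series. */
double cosine_deg(int x) {
    x %= 360;
    if (x < 0) x += 360;                 /* 0..359 */
    int q = (x + 45) / 90;               /* nearest multiple of 90 */
    double rad = (x - 90 * q) * (PI / 180.0);
    switch (q % 4) {
        case 0:  return cos_taylor(rad);        /* cos(0   + r) */
        case 1:  return 0.0 - sin_taylor(rad);  /* cos(90  + r) = -sin(r) */
        case 2:  return 0.0 - cos_taylor(rad);  /* cos(180 + r) = -cos(r) */
        default: return sin_taylor(rad);        /* cos(270 + r) =  sin(r) */
    }
}
With |rad| ≤ π/4, ten terms put the truncation error far below double precision.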
For a class project I need to split some audio clips into smaller sections, for which we are provided a min length and a max length. To figure out whether this is possible, I do the following:
a = length/max
b = length/min
Mathematically, I figured that [a,b] contains at least one integer if ⌊b⌋ >= ⌈a⌉, but I can't use math.h for floor() and ceil(). Since a and b are always positive, I can use type casting for floor(), but I am at a loss as to how to do ceil(). I thought about using ((int)x)+1, but that would round integers up, which would break the formula.
I would like either a way to do ceil() that would solve my problem, or another way to check whether an interval contains at least one integer.
You don't need math.h to perform floor. Please look at the following code:
int length = 5, min = 2, max = 3; // only an example of inputs.
int a = length / max;
int b = length / min;
if(a != b){
    // there is at least one integer in the interval.
}else{
    if(length % min == 0 || length % max == 0){
        // there is at least one integer in the interval.
    }else{
        // there is no integer in the interval.
    }
}
The result for the above example will be that there is an integer in the interval.
You can also perform ceil without using math.h, as follows:
int a;
if(length % max == 0){
    a = length / max;
}else{
    a = (length / max) + 1;
}
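For positive operands, the same ceiling can also be computed without a branch, using the usual integer idiom (a sketch; assumes length + max doesn't overflow int):
int a = (length + max - 1) / max; /* ceil of length/max for positive ints */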
If I understood your question right, I guess you can do ceil(a) in this case, and then check whether the result is less than b. Thus, for example, for the interval [1.3, 3.5], ceil(1.3) will return 2, which fits into this interval.
UPD
Also, you could look at (b - a): if it's > 1, there is for sure at least one integer between them.
There is a general trick in programming that will come in handy if you ever find yourself programming in Apple Basic, or any other language where floating point math is supported.
You can "round" a number by addition, then truncation, as follows:
x = some floating value
rounded_x = int(x + roundoff_amount)
Where roundoff_amount is the difference between the lowest fraction to round up, and 1.
So, to round at .5, your roundoff_amount would be 1 - .5 = .5, and you would do int(x + .5). If x is .5 or .51, then the result becomes 1.0 or 1.01, and int() takes that to 1. Obviously, if x is higher, you still get rounded to 1, until x becomes 1.5, when rounding takes it to 2. To round upwards starting at .6, your roundoff_amount would be 1 - .6 = .4, and you would do int(x + .4), etc.
You can do a similar thing to get ceil behavior. Set your roundoff_amount to be 0.99999... and do the round. You can choose your value to provide a "nearby" window, since floats have some inaccuracy inherent that might prevent getting a perfectly integer value after adding fractions.
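In C, the same trick looks like this (illustrative values of my own choosing):
double x = 2.3;
int rounded = (int)(x + 0.5);      /* 2: rounds to nearest at .5 */
int ceiled  = (int)(x + 0.999999); /* 3: approximate ceil */
/* near-integers such as 2.0000001 still truncate to 2 -- the "nearby
   window" behavior described above */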
I'm trying to make a program to calculate the cos(x) function using a Taylor series. So far I've got this:
int factorial(int a){
    if(a < 0)
        return 0;
    else if(a == 0 || a == 1)
        return 1;
    else
        return a * factorial(a - 1);
}

double Tserie(float angle, int repetitions){
    double series = 0.0;
    float i;
    for(i = 0.0; i < repetitions; i++){
        series += (pow(-1, i) * pow(angle, 2*i)) / factorial(2*i);
        printf("%f\n", (pow(-1, i) * pow(angle, 2*i)) / factorial(2*i));
    }
    return series;
}
For my example I'm using angle = 90 and repetitions = 20 to calculate cos(90), but it's useless; I just keep getting values close to infinity. Any help will be greatly appreciated.
For one thing, the angle is in radians, so for a 90 degree angle, you'd need to pass M_PI/2.
Also, you should avoid recursive functions for something as trivial as factorials, it would take 1/4 the effort to write it iteratively and it would perform a lot better. You don't actually even need it, you can keep the factorial in a temporary variable and just multiply it by 2*i*(2*i-1) at each step. Keep in mind that you'll hit a representability/precision wall really quickly at this step.
You also don't need to call pow for -1 to the power of i; a simple i % 2 ? -1 : 1 would suffice (1 for even i, -1 for odd). This way it's faster, and it won't lose precision as you increase i.
Oh, and don't make i a float; it's an integer, so make it an integer. You're leaking precision a lot as it is, why make it worse?
And to top it all off, you're approximating cos around 0, but are calling it for pi/2. You'll get really high errors doing that.
The Taylor series is for the mathematical cosine function, whose arguments is in radians. So 90 probably doesn't mean what you thought it meant here.
Furthermore, the series requires more terms the farther the argument is from 0. Generally, the number of terms needs to be comparable to the size of the argument before you even begin to see the successive terms becoming smaller, and many more than that to get convergence. 20 is pitifully few terms to use for x = 90.
Another problem is then that you compute the factorial as an int. The factorial function grows very fast -- already for 13! an ordinary C int (on a 32-bit machine) will overflow, so your terms beyond the sixth will be completely wrong anyway.
In fact the factorials and the powers of 90 quickly become too large to be represented even as doubles. If you want any chance of seeing the series converge, you must not compute each term from scratch but derive it from the previous one using a formula such as
nextTerm = - prevTerm * x * x / (2*i-1) / (2*i);
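As a sketch of that incremental scheme (my own wrapper code; note the argument is the radian value pi/2, not 90):
#include <math.h>
#include <stdio.h>

double cosine_series(double x, int terms) {
    double term = 1.0, sum = 1.0;                  /* the i = 0 term */
    for (int i = 1; i < terms; i++) {
        term = -term * x * x / (2*i - 1) / (2*i);  /* derive from previous */
        sum += term;
    }
    return sum;
}

int main(void) {
    printf("%e\n", cosine_series(M_PI / 2, 20)); /* ~1e-16, cos(90 deg) */
    return 0;
}
Because each term is derived from its predecessor, neither the factorial nor the power is ever formed explicitly, so nothing overflows.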