I'm making a C program in which I simulate a Patriot missile system. In this simulation my Patriot missile has to catch an incoming enemy target missile.
The information about the Patriot missile and the enemy target is stored in a structure like this:
typedef struct _stat {
float32_t x;
float32_t y;
float32_t v; // speed magnitude
float32_t v_theta; // speed angle in radians
float32_t a; // acceleration magnitude
float32_t a_theta; // acceleration angle in radians
} stat;
And I'm storing the information in two global variables like these:
stat t_stat; // target stats
stat p_stat; // patriot stats
Now, to simplify the problem, the target is launched with an initial speed and is affected only by gravity, so we can consider:
t_stat.x = TARGET_X0;
t_stat.y = TARGET_Y0;
t_stat.v = TARGET_V0;
t_stat.v_theta = TARGET_V_THETA0;
t_stat.a = G; // gravity acceleration
t_stat.a_theta = -(PI / 2);
Again, to simplify, I only compute the collision point once the Patriot has reached its top speed, so its own acceleration is used only to cancel gravity. In particular we have:
p_stat.x = PATRIOT_X0;
p_stat.y = PATRIOT_Y0;
p_stat.v = 1701.45; // Mach 5 speed in m/s
p_stat.v_theta = ???? // that's what I need to find
p_stat.a = G; // gravity acceleration
p_stat.a_theta = PI / 2;
In this way we can consider the Patriot as moving at constant speed, because the sum of the accelerations by which it is affected is equal to 0:
float32_t patr_ax = p_stat.a * cos(p_stat.a_theta); // = 0
float32_t patr_ay = p_stat.a * sin(p_stat.a_theta) - G; // = 0
Now, here comes the problem. I want to write a function which computes the right p_stat.v_theta in order to hit the target (if a collision is possible).
For example the function that I need could have a prototype like this:
uint8_t computeIntercept(stat t, stat p, float32_t *theta);
And it can be used in this way:
if(computeIntercept(t_stat, p_stat, &p_stat.v_theta)) {
printf("Target Hit with an angle of %.2f\n", p_stat.v_theta);
} else {
printf("Target Missed!\n");
}
To make it even clearer, here is an image of what I want:
Your target projectile is moving with constant acceleration, hence its velocity can be described as

v(t) = a*t + v0

Integrating this equation gives the equation of the position:

r(t) = (1/2)*a*t^2 + v0*t + C

Knowing the initial position, we can determine that the constant vector C is the initial position, so the position of the target projectile is finally

r(t) = (1/2)*a*t^2 + v0*t + r0

These are two equations (one for the x and one for the y coordinate). The equation for y is quadratic in t and the equation for x is linear, since the (gravitational) acceleration is purely vertical.
For the Patriot, which moves at constant speed, you have

x_p(t) = x_p0 + v_p * cos(theta) * t
y_p(t) = y_p0 + v_p * sin(theta) * t

In general you should do something like this: require both objects to be at the same place at the same time,

x_t(t) = x_p(t)
y_t(t) = y_p(t)

which gives two equations in the two unknowns t and theta. Eliminate t and you are left with a single equation in theta. You can use https://en.wikipedia.org/wiki/Newton%27s_method in order to solve that last equation for theta.
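Rather than eliminating t symbolically and applying Newton's method, you can do the equivalent numerically: look for a time at which the Patriot, flying in a straight line at its constant speed, can be exactly where the target is (a sign change of a distance-difference function, refined by bisection), then read the launch angle off that point. The sketch below does that; it reuses the stat struct and field names from the question, assumes float32_t is plain float, treats the search window and step as arbitrary choices, and is meant as an illustration rather than tested code:

#include <math.h>
#include <stdint.h>

/* f(t) = |target(t) - patriot_launch_point|^2 - (v_p * t)^2
 * A sign change of f marks a time at which the Patriot, flying straight at
 * constant speed v_p, can be exactly where the target is. */
static float32_t miss(const stat *tg, const stat *p, float32_t t, float32_t g)
{
    float32_t tx = tg->x + tg->v * cosf(tg->v_theta) * t;
    float32_t ty = tg->y + tg->v * sinf(tg->v_theta) * t - 0.5f * g * t * t;
    float32_t dx = tx - p->x, dy = ty - p->y;
    return dx * dx + dy * dy - (p->v * t) * (p->v * t);
}

uint8_t computeIntercept(stat t_stat, stat p_stat, float32_t *theta)
{
    const float32_t g = 9.81f;      /* the question's G */
    const float32_t t_max = 120.0f; /* assumed search window in seconds */
    const float32_t dt = 0.01f;     /* coarse scan step */
    float32_t prev = miss(&t_stat, &p_stat, 0.0f, g);
    float32_t t_prev = 0.0f;

    for (float32_t tc = dt; tc <= t_max; tc += dt) {
        float32_t cur = miss(&t_stat, &p_stat, tc, g);
        if ((prev > 0.0f) != (cur > 0.0f)) {   /* sign change: root bracketed */
            float32_t lo = t_prev, hi = tc;
            for (int i = 0; i < 60; ++i) {     /* bisection refinement */
                float32_t mid = 0.5f * (lo + hi);
                if ((miss(&t_stat, &p_stat, mid, g) > 0.0f) == (prev > 0.0f))
                    lo = mid;
                else
                    hi = mid;
            }
            float32_t th = 0.5f * (lo + hi);
            float32_t tx = t_stat.x + t_stat.v * cosf(t_stat.v_theta) * th;
            float32_t ty = t_stat.y + t_stat.v * sinf(t_stat.v_theta) * th - 0.5f * g * th * th;
            *theta = atan2f(ty - p_stat.y, tx - p_stat.x); /* aim straight at that point */
            return 1;
        }
        prev = cur;
        t_prev = tc;
    }
    return 0; /* no reachable intercept found (ground impact is not modelled) */
}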
In order to have a collision you need the coordinates of both objects to be identical at the same instant of time.
You could decompose the problem into two simpler ones, considering each axis, x and y, separately:
You need to calculate the equations of motion for both objects, once for their horizontal components and once for their vertical components.
Check whether the solutions of both objects contain equal coordinates and if this happens at the same instant of time.
Target coordinates: T (xt, yt)
Patriot coordinates: P (xp, yp)
You could solve this numerically by varying the time, t, and observing whether: T == P.
In your case, one of the equations should contain a parameter accounting for the angle theta.
Simulate the event and find the time the 2 objects are closest.
The missile can only fly so long after launch (negating it going into orbit), let that be tf.
Using struct _stat for each object, write a function that report the x,y for a given t and object.
Simulate, at reasonable intervals (1s?), 0.0 to tf, the square of the distance between the two objects.
From the time t of the closest approach, use its two neighbors t-dt and t+dt to do the simulation over again. You could use the range from t-dt to t+dt with a 10x smaller dt, or other methods.
Repeat the above until the distance is close enough or dt is sufficiently small.
If this distance is sufficiently small, evaluate struct _stat for the needed data now that the time is determined.
Note: the details of the complexity of pt2 compute_position(struct _stat st, time t) only need consideration for the initial dt estimate.
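A sketch of that helper, using the stat struct from the question and assuming float32_t is plain float (pt2 is the answer's name for a 2-D point, defined here only for illustration):

#include <math.h>

typedef struct { float32_t x, y; } pt2;

/* Position at time t of an object described by the question's stat struct,
 * assuming constant acceleration. */
pt2 compute_position(stat st, float32_t t)
{
    pt2 p;
    p.x = st.x + st.v * cosf(st.v_theta) * t + 0.5f * st.a * cosf(st.a_theta) * t * t;
    p.y = st.y + st.v * sinf(st.v_theta) * t + 0.5f * st.a * sinf(st.a_theta) * t * t;
    return p;
}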
I want to develop a simple geo-fencing algorithm in C, that works without using sin, cos and tan. I am working with a small microcontroller, hence the restriction. I have no space left for <math.h>. The radius will be around 20..100m. I am not expecting super accurate results this way.
My current solution takes two coordinate sets (decimal, .00001 accuracy, but passed as a value x10^5, in order to eliminate the decimal places) and a radius (in m). By scaling the coordinates by 10/9 (roughly 1.11 m per 10^-5 degree), they can approximately be used in a Pythagorean equation which checks whether one coordinate lies within the radius of another:
static int32_t
geo_convert_distance(int32_t coordinate)
{
return (coordinate * 10) / 9;
}
bool
geo_check(int32_t lat_fixed,
int32_t lon_fixed,
int32_t lat_var,
int32_t lon_var,
uint16_t radius)
{
lat_fixed = geo_convert_distance(lat_fixed);
lon_fixed = geo_convert_distance(lon_fixed);
lat_var = geo_convert_distance(lat_var);
lon_var = geo_convert_distance(lon_var);
if (((lat_var - lat_fixed) * (lat_var - lat_fixed) + (lon_var - lon_fixed) * (lon_var - lon_fixed))
<= (radius * radius))
{
return true;
}
return false;
}
This solution works quite well at the equator, but when the latitude changes it becomes increasingly inaccurate; at 70°N the deviation is around 50%. I could change the factor depending on the latitude, but I am not happy with this solution.
Is there a better way to do this calculation? Any help is very much appreciated. Best regards!
UPDATE
I used the input I got and managed to implement a decent solution. I used only signed ints, no floats.
The haversine formula could be simplified: due to the relevant radii (50-500m), the deltas of the latitude and longitude are very small (<0.02°). This means that the sine can be simplified to sin(x) = x and the arcsine to asin(x) = x. This approximation is very accurate for angles <10° and even better for the small angles used here. This leaves the cosine, which I implemented according to @meaning-matters's suggestion. The cosine takes an angle and returns the actual result multiplied by 100, in order to be able to use ints. The square root was implemented with an iterative loop (I cannot find the SO post anymore). The haversine calculation was done with the inputs multiplied by powers of 10 in order to achieve accuracy, and afterwards divided by the necessary power of 10.
For my 8bit system, this caused a memory usage of around 2000-2500 Bytes.
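A sketch of roughly what that boils down to (not the actual implementation; cos100 and isqrt32 are assumed helpers, the 10/9 factor is the same ~1.11 m per 10^-5 degree scaling as above, and overflow for large deltas is ignored):

#include <stdint.h>
#include <stdbool.h>

int32_t cos100(int32_t deg);     /* assumed: cos(deg) * 100, e.g. from a lookup table */
uint32_t isqrt32(uint32_t x);    /* assumed: integer square root */

/* Coordinates passed as degrees * 1e5, radius in metres. */
bool geo_check_approx(int32_t lat_fixed, int32_t lon_fixed,
                      int32_t lat_var,   int32_t lon_var,
                      uint16_t radius)
{
    int32_t dlat_m = ((lat_var - lat_fixed) * 10) / 9;    /* ~metres north-south */
    int32_t dlon_m = ((lon_var - lon_fixed) * 10) / 9;    /* metres at the equator */
    dlon_m = (dlon_m * cos100(lat_fixed / 100000)) / 100; /* shrink east-west span by cos(lat) */
    uint32_t d2 = (uint32_t)(dlat_m * dlat_m) + (uint32_t)(dlon_m * dlon_m);
    return isqrt32(d2) <= radius; /* or compare d2 <= radius*radius and skip the sqrt */
}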
Implement the Haversine function using your own trigonometric functions that use lookup tables and do interpolation.
Because you don't want very accurate results, small lookup tables, of perhaps twenty points, would be sufficient. And, simple linear interpolation would also be fine.
In case you don't have much memory space: Bear in mind that to implement sine and cosine, you only need one lookup table for 90 degrees of either function. All values can then be determined by mirroring and offsetting.
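For example, a cosine that returns cos(x)*100 from a ten-entry quarter-wave table with linear interpolation might look like this (a sketch; the table values are rounded):

#include <stdint.h>

/* cos(0..90 deg) * 100, one entry every 10 degrees */
static const int8_t cos_table[10] = { 100, 98, 94, 87, 77, 64, 50, 34, 17, 0 };

int32_t cos100(int32_t deg)
{
    deg = ((deg % 360) + 360) % 360;                 /* wrap into 0..359 */
    int32_t sign = 1;
    if (deg > 180) deg = 360 - deg;                  /* cos is symmetric around 180 */
    if (deg > 90)  { deg = 180 - deg; sign = -1; }   /* mirror into 0..90, flip sign */
    int32_t idx  = deg / 10;
    int32_t frac = deg % 10;
    int32_t lo = cos_table[idx];
    int32_t hi = (idx < 9) ? cos_table[idx + 1] : 0;
    return sign * (lo + (hi - lo) * frac / 10);      /* linear interpolation */
}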
I am working on a project which incorporates computing a sine wave as input for a control loop.
The sine wave has a frequency of 280 Hz, and the control loop runs every 30 µs and everything is written in C for an Arm Cortex-M7.
At the moment we are simply doing:
double time;
void control_loop() {
time += 30e-6;
double sine = sin(2 * M_PI * 280 * time);
...
}
Two problems/questions arise:
When running for a long time, time becomes bigger. Suddenly there is a point where the computation time for the sine function increases drastically (see image). Why is this? How are these functions usually implemented? Is there a way to circumvent this (without noticeable precision loss) as speed is a huge factor for us? We are using sin from math.h (Arm GCC).
How can I deal with time in general? When running for a long time, the variable time will inevitably reach the limits of double precision. Even using a counter time = counter++ * 30e-6; only improves this, but it does not solve it. As I am certainly not the first person who wants to generate a sine wave for a long time, there must be some ideas/papers/... on how to implement this fast and precise.
Instead of calculating sine as a function of time, maintain a sine/cosine pair and advance it through complex number multiplication. This doesn't require any trigonometric functions or lookup tables; only four multiplies and an occasional re-normalization:
#include <math.h>

static const double a = 2 * M_PI * 280 * 30e-6;
static double dx, dy; // dx = cos(a), dy = sin(a); C doesn't allow cos()/sin() in a static initializer, so set them once at startup:
void init_rotator(void) { dx = cos(a); dy = sin(a); }
double x = 1, y = 0; // complex x + iy
int counter = 0;
void control_loop() {
double xx = dx*x - dy*y;
double yy = dx*y + dy*x;
x = xx, y = yy;
// renormalize once in a while, based on
// https://www.gamedev.net/forums/topic.asp?topic_id=278849
if((counter++ & 0xff) == 0) {
double d = 1 - (x*x + y*y - 1)/2;
x *= d, y *= d;
}
double sine = y; // this is your sine
}
The frequency can be adjusted, if needed, by recomputing dx, dy.
Additionally, all the operations here can be done, rather easily, in fixed point.
Rationality
As @user3386109 points out below (+1), 280 * 30e-6 = 21 / 2500 is a rational number, thus the sine should loop around after exactly 2500 samples. We can combine this method with theirs by resetting our generator (x=1, y=0) every 2500 iterations (or 5000, or 10000, etc...). This would eliminate the need for renormalization, as well as get rid of any long-term phase inaccuracies.
(Technically any floating point number is a dyadic rational. However 280 * 30e-6 doesn't have an exact representation in binary. Yet, by resetting the generator as suggested, we'll get an exactly periodic sine as intended.)
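Concretely, the renormalization block in the code above could then be replaced by a periodic reset (reusing x, y and counter from that snippet):

if (++counter == 2500) {  // 2500 steps of 30 us = 75 ms = exactly 21 periods of 280 Hz
    counter = 0;
    x = 1;
    y = 0;                // restart the generator; long-term drift cannot accumulate
}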
Explanation
Some requested an explanation down in the comments of why this works. The simplest explanation is to use the angle sum trigonometric identities:
xx = cos((n+1)*a) = cos(n*a)*cos(a) - sin(n*a)*sin(a) = x*dx - y*dy
yy = sin((n+1)*a) = sin(n*a)*cos(a) + cos(n*a)*sin(a) = y*dx + x*dy
and the correctness follows by induction.
This is essentially De Moivre's formula if we view those sine/cosine pairs as complex numbers, in accordance with Euler's formula.
A more insightful way might be to look at it geometrically. Complex multiplication by exp(ia) is equivalent to rotation by a radians. Therefore, by repeatedly multiplying by dx + idy = exp(ia), we incrementally rotate our starting point 1 + 0i along the unit circle. The y coordinate, according to Euler's formula again, is the sine of the current phase.
Normalization
While the phase continues to advance with each iteration, the magnitude (aka norm) of x + iy drifts away from 1 due to round-off errors. However we're interested in generating a sine of amplitude 1, thus we need to normalize x + iy to compensate for numeric drift. The straightforward way is, of course, to divide it by its own norm:
double d = 1/sqrt(x*x + y*y);
x *= d, y *= d;
This requires a calculation of a reciprocal square root. Even though we normalize only once every X iterations, it'd still be cool to avoid it. Fortunately |x + iy| is already close to 1, thus we only need a slight correction to keep it at bay. Expanding the expression for d around 1 (first order Taylor approximation), we get the formula that's in the code:
d = 1 - (x*x + y*y - 1)/2
TODO: to fully understand the validity of this approximation one needs to prove that it compensates for round-off errors faster than they accumulate -- and thus get a bound on how often it needs to be applied.
The function can be rewritten as
double n;
void control_loop() {
n += 1;
double sine = sin(2 * M_PI * 280 * 30e-6 * n);
...
}
That does exactly the same thing as the code in the question, with exactly the same problems. But it can now be simplified:
280 * 30e-6 = 280 * 30 / 1000000 = 21 / 2500 = 8.4e-3
Which means that when n reaches 2500, you've output exactly 21 cycles of the sine wave. Which means that you can set n back to 0.
The resulting code is:
int n;
void control_loop() {
n += 1;
if (n == 2500)
n = 0;
double sine = sin(2 * M_PI * 8.4e-3 * n);
...
}
As long as your code can run for 21 cycles without problems, it'll run forever without problems.
I'm rather shocked at the existing answers. The first problem you detect is easily solved, and the next problem magically disappears when you solve the first problem.
You need a basic understanding of math to see how it works. Recall, sin(x+2pi) is just sin(x), mathematically. The large increase in time you see happens when your sin(float) implementation switches to another algorithm, and you really want to avoid that.
Remember that float has only 6 significant digits. 100000.0f*M_PI+x uses those 6 digits for 100000.0f*M_PI, so there's nothing left for x.
So, the easiest solution is to keep track of x yourself. At t=0 you initialize x to 0.0f. Every 30 µs, you increment x += 2 * M_PI * 280 * 30e-6;. The time does not appear in this formula! Finally, if x > 2*M_PI, you decrement x -= 2*M_PI; (since sin(x) == sin(x - 2*pi)).
You now have an x that stays nicely in the range 0 to 6.2832, where sin is fast and the 6 digits of precision are all useful.
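A minimal sketch of that accumulator, in single precision as suggested (the constant folding is left to the compiler):

#include <math.h>

static float x = 0.0f;                        /* current phase in radians, 0 .. 2*pi */

void control_loop(void) {
    x += (float)(2.0 * M_PI * 280 * 30e-6);   /* phase advance per 30 us tick */
    if (x > (float)(2.0 * M_PI))
        x -= (float)(2.0 * M_PI);             /* wrap: sin(x) == sin(x - 2*pi) */
    float sine = sinf(x);
    /* ... */
}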
How to generate a lovely sine.
The DAC is 12 bits, so you have only 4096 levels. It makes no sense to send more than 4096 samples per period. In real life you will need far fewer samples to generate a good quality waveform.
Create a C file with the lookup table (using your PC). Redirect the output to a file (https://helpdeskgeek.com/how-to/redirect-output-from-command-line-to-text-file/).
#include <stdio.h>
#include <math.h>

#define STEP ((2*M_PI) / 4096.0)

int main(void)
{
double alpha = 0;
printf("#include <stdint.h>\nconst uint16_t sine[4096] = {\n");
for(int x = 0; x < 4096 / 16; x++)
{
for(int y = 0; y < 16; y++)
{
printf("%d, ", (int)(4095 * (sin(alpha) + 1.0) / 2.0));
alpha += STEP;
}
printf("\n");
}
printf("};\n");
}
https://godbolt.org/z/e899d98oW
Configure the timer to trigger the overflow 4096*280 = 1146880 times per second. Set the timer to generate the DAC trigger event. For a 180 MHz timer clock this will not be exact and the frequency will be 279.906449045 Hz. If you need better precision, change the number of samples to match your timer frequency and/or change the timer clock frequency (H7 timers can run at up to 480 MHz).
Configure the DAC to use DMA and transfer the values from the lookup table created in step 1 to the DAC on the trigger event.
Enjoy a beautiful sine wave on your oscilloscope. Note that your microcontroller core will not be loaded at all; you will have it free for other tasks. If you want to change the period, simply reconfigure the timer. You can do it as many times per second as you wish. To reconfigure the timer, use timer DMA burst mode, which reloads the PSC & ARR registers automatically on the update event without disturbing the generated waveform.
I know it is advanced STM32 programming and it will require register level programming. I use it to generate complex waveforms in our devices.
It is the correct way of doing it. No control loops, no calculations, no core load.
I'd like to address the embedded programming issues in your code directly - @0___________'s answer is the correct way to do this on a microcontroller and I won't retread the same ground.
Variables representing time should never be floating point. If your increment is not a power of two, errors will always accumulate. Even if it is, eventually your increment will become smaller than the smallest representable step at that magnitude and the timer will stop advancing. Always use integers for time. You can pick an integer size big enough to ignore rollover - an unsigned 32-bit integer representing milliseconds will take about 50 days to roll over, while an unsigned 64-bit integer will take over 500 million years.
Generating any periodic signal where you do not care about the signal's phase does not require a time variable. Instead, you can keep an internal counter which resets to 0 at the end of a period. (When you use DMA with a look-up table, that's exactly what you're doing - the counter is the DMA controller's next-read pointer.)
Whenever you use a transcendental function such as sine in a microcontroller, your first thought should be "can I use a look-up table for this?" You don't have access to the luxury of a modern operating system optimally shuffling your load around on a 4 GHz+ multi-core processor. You're often dealing with a single thread that will stall waiting for your 200 MHz microcontroller to bring the FPU out of standby and perform the approximation algorithm. There is a significant cost to transcendental functions. There's a cost to LUTs too, but if you're hitting the function constantly, there's a good chance you'll like the tradeoffs of the LUT a lot better.
As noted in some of the comments, the time value is continually growing with time. This poses two problems:
The sin function likely has to perform a modulus internally to get the internal value into a supported range.
The resolution of time will become worse and worse as the value increases, because ever more of the floating-point precision is spent on the high-order digits.
Making the following changes should improve the performance:
double time;
void control_loop() {
time += 30.0e-6;
if((1.0/280.0) < time)
{
time -= 1.0/280.0;
}
double sine = sin(2 * M_PI * 280 * time);
...
}
Note that once this change is made, time no longer represents absolute time; it is only the position within the current 280 Hz period.
Use a look-up table. Your comment in the discussion with Eugene Sh.:
A small deviation from the sine frequency (like 280.1Hz) would be ok.
In that case, with a control interval of 30 µs, if you have a table of 119 samples that you repeat over and over, you will get a sine wave of 280.112 Hz. Since you have a 12-bit DAC, you only need 119 * 2 = 238 bytes to store this if you would output it directly to the DAC. If you use it as input for further calculations like you mention in the comments, you can store it as float or double as desired. On an MCU with embedded static RAM, it only takes a few cycles at most to load from memory.
If you have a few kilobytes of memory available, you can eliminate this problem completely with a lookup table.
With a sampling period of 30 µs, 2500 samples will have a total duration of 75 ms. This is exactly equal to the duration of 21 cycles at 280 Hz.
I haven't tested or compiled the following code, but it should at least demonstrate the approach:
#include <stdlib.h>
#include <math.h>

double sin2500() {
static double *table = NULL;
static int n = 2499;
if (!table) {
table = malloc(2500 * sizeof(double));
for (int i=0; i<2500; i++) table[i] = sin(2 * M_PI * 280 * i * 30e-06);
}
n = (n+1) % 2500;
return table[n];
}
How about a variant of others' modulo-based concept:
int t = 0;
int divisor = 1000000;
void control_loop() {
t += 30 * 280;
if (t > divisor) t -= divisor;
double sine = sin(2 * M_PI * t / (double)divisor);
...
}
Because the modulo is calculated in integer arithmetic, it causes no roundoff errors.
There is an alternative approach to calculating a series of values of sine (and cosine) for angles that increase by some very small amount. It essentially comes down to calculating the X and Y coordinates of a circle, and then dividing the Y value by some constant to produce the sine, and dividing the X value by the same constant to produce the cosine.
If you are content to generate a "very round ellipse", you can use the following hack, which is attributed to Marvin Minsky in the 1960s. It's much faster than calculating sines and cosines, although it introduces a very small error into the series. Here is an extract from the HAKMEM document, Item 149, where the Minsky circle algorithm is outlined.
ITEM 149 (Minsky): CIRCLE ALGORITHM
Here is an elegant way to draw almost circles on a point-plotting display:
NEW X = OLD X - epsilon * OLD Y
NEW Y = OLD Y + epsilon * NEW(!) X
This makes a very round ellipse centered at the origin with its size determined by the initial point. epsilon determines the angular velocity of the circulating point, and slightly affects the eccentricity. If epsilon is a power of 2, then we don't even need multiplication, let alone square roots, sines, and cosines! The "circle" will be perfectly stable because the points soon become periodic.
The circle algorithm was invented by mistake when I tried to save one register in a display hack! Ben Gurley had an amazing display hack using only about six or seven instructions, and it was a great wonder. But it was basically line-oriented. It occurred to me that it would be exciting to have curves, and I was trying to get a curve display hack with minimal instructions.
Here is a link to the hakmem: http://inwap.com/pdp10/hbaker/hakmem/hacks.html
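Applied to the 280 Hz / 30 µs loop from the question, the recurrence might look like this (a sketch; epsilon is just the per-sample phase step, and the output is only approximately a unit-amplitude sine):

static const double eps = 0.052779;  /* ~ 2*pi*280*30e-6; a power of two would avoid the multiplies */
static double cx = 1.0, cy = 0.0;    /* starting point on the "circle" */

void control_loop(void) {
    cx -= eps * cy;    /* NEW X = OLD X - epsilon * OLD Y */
    cy += eps * cx;    /* NEW Y = OLD Y + epsilon * NEW X (note: uses the new x) */
    double sine = cy;  /* approximately the sine of the accumulated phase */
    /* ... */
}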
I think it would be possible to use a modulo because sin() is periodic.
Then you don't have to worry about the precision problems.
double time = 0;
long unsigned int timesteps = 0;
double sine;
void control_loop()
{
timesteps++;
time += 30e-6;
if( time > 1 )
{
time -= 1;
}
sine = sin( 2 * M_PI * 280 * time );
...
}
Fascinating thread. Minsky's algorithm mentioned in Walter Mitty's answer reminded me of a method for drawing circles that was published in Electronics & Wireless World and that I kept. (Credit: https://www.electronicsworld.co.uk/magazines/). I'm attaching it here for interest.
However, for my own similar projects (for audio synthesis) I use a lookup table, with enough points that linear interpolation is accurate enough (do the math(s)!)
I'm currently building a basic raytracing algorithm and need to figure out which system of handling the intersections would be best performance-wise.
In the method where I check for an intersection of the ray and the object, I return a struct with the distance the ray traveled to the hit, the position vector of the hit and the normal vector, or -1 for the distance if there is no intersection.
For the next step I have to find the shortest distance of all intersections and exclude the ones with a negative distance.
I even thought about having 2 structs, one with only negative distances and one full struct to reduce the amount of space needed, but thought this wouldn't really make a difference.
My options so far:
First go over the array of intersections and exclude the ones with negative distances, then find the shortest distance among the remaining ones via a sorting algorithm (probably insertion sort, since it is quick to implement).
Or combine both in one algorithm and test in each sort step whether the distance is negative.
typedef float Point3f[3];
typedef struct {
float distance;
Point3f point;
Point3f normal;
} Intersection;
Intersection intersectObject (Ray-params, object) {
Intersection intersection;
//...
if (hit) {
intersection.distance = distance;
memcpy(intersection.point, point, sizeof(Point3f)); /* needs <string.h>; arrays can't be assigned directly */
memcpy(intersection.normal, normal, sizeof(Point3f));
} else {
intersection.distance = -1.0f;
}
return intersection;
}
//loop over screen pixel
Intersection* intersections;
int amountIntersections;
//loop over all objects
//here I would handle the intersections
if (amountIntersections) {
//cast additional rays
}
I can't really figure out what would be the best way to handle this, since it will be called a lot of times. The intersection array will probably either be a dynamic array with amountIntersections as its length, or a fixed array sized for the largest expected number of intersections, with the unused entries marked by negative distances.
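For what it's worth, the second option does not even need a sort: finding the closest hit can be a single pass over the candidates. A rough sketch using the Intersection struct above:

/* Keep only the nearest hit with a non-negative distance.
 * Returns 1 and fills *nearest if any candidate hit, 0 otherwise. */
int closestIntersection(const Intersection *candidates, int count, Intersection *nearest)
{
    int found = 0;
    for (int i = 0; i < count; ++i) {
        if (candidates[i].distance < 0.0f)
            continue;  /* miss: skip */
        if (!found || candidates[i].distance < nearest->distance) {
            *nearest = candidates[i];
            found = 1;
        }
    }
    return found;
}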
Here is the approach I've successfully used for a huge number of objects. (Especially for ball-and-stick atomic models; see my Wikipedia user page for the equations I used for those.)
First, transform the objects to a coordinate system where the eye is at origin, and the projected plane is parallel to the xy plane, with center on the positive z axis. This simplifies the equations needed a lot, as you can see from the above linked page.
As an example, if you have a unit ray n (so n·n = 1) and a sphere of radius r centered at c, the ray intersects the sphere if and only if h ≥ 0,
h = (n·c)² + r² - (c·c)
and if so, at distance d,
d = n·c ± sqrt(h)
If you work out the necessary code, and use sensible temporary variables, you'll see that you can reject non-intersecting spheres using eight multiplications and six additions or subtractions, and that this vectorizes across objects easily using SSE2/AVX intrinsics (#include <x86intrin.h>). (That is, do not try to use an XMM/YMM vector register for n or c, and instead use each register component for a different object, calculating h for 2/4/8 objects at a time.)
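A sketch of that test in plain (non-vectorized) C, with the eye at the origin as described above (ray_sphere is a made-up name for this illustration):

#include <math.h>

/* n: unit ray direction, c: sphere centre, r: radius; ray starts at the eye (origin).
 * Returns the nearest positive hit distance, or -1.0 if the ray misses. */
double ray_sphere(const double n[3], const double c[3], double r)
{
    double nc = n[0]*c[0] + n[1]*c[1] + n[2]*c[2];  /* n·c */
    double cc = c[0]*c[0] + c[1]*c[1] + c[2]*c[2];  /* c·c */
    double h  = nc*nc + r*r - cc;                   /* reject test: 8 multiplies, 6 adds/subs */
    if (h < 0.0)
        return -1.0;                                /* no intersection */
    double s = sqrt(h);
    double d = nc - s;                              /* nearer root first */
    if (d < 0.0)
        d = nc + s;                                 /* eye inside the sphere: take the far root */
    return (d < 0.0) ? -1.0 : d;
}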
For each ray, sort/choose the objects to be tested according to their known minimum z coordinate (say, cz - r for spheres). This way, when you find an intersection at distance d, you can ignore all objects with minimum z coordinate larger than d, because the intersection point would necessarily be further out, behind the already known intersection.
Similarly, you should ignore all intersections where the distance is smaller than the distance to the projection plane (which is zd / nz, if the plane is at z = zd, and only needs to be computed once per ray), because those intersections are between the eye and the projection plane. (Technically, you've "crashed into" something then, if you think of the projection plane as a camera.)
I have two objects in a 3D world and want to make one object face the other object. I already calculated all the angles and stuff (pitch angle and yaw angle).
The problem is I have no functions to set the yaw or pitch individually, which means that I have to do it with a quaternion, as the only function I have is: SetEnetyQuaternion(float x, float y, float z, float w). This is the pseudocode I have so far:
float px, py, pz;
float tx, ty, tz;
float distance;
GetEnetyCoordinates(ObjectMe, &px, &py, &pz);
GetEnetyCoordinates(TargetObject, &tx, &ty, &tz);
float yaw, pitch;
float deltaX, deltaY, deltaZ;
deltaX = tx - px;
deltaY = ty - py;
deltaZ = tz - pz;
float hyp = SQRT((deltaX*deltaX) + (deltaY*deltaY) + (deltaZ*deltaZ));
yaw = (ATAN2(deltaY, deltaX));
if(yaw < 0) { yaw += 360; }
pitch = ATAN2(-deltaZ, hyp);
if (pitch < 0) { pitch += 360; }
//here is the part where i need to do a calculation to convert the angles
SetEnetyQuaternion(ObjectMe, pitch, 0, yaw, 0);
What I tried so far was calculating the sine of those angles divided by 2, but this didn't work - I think that is for Euler angles or something like that, but it didn't help me. The roll (y axis) and the w argument can be left out, I think, as I don't want my object to have any roll. That's why I put 0 in.
If anyone has any idea I would really appreciate the help.
Thank you in advance :)
Let's suppose that the quaternion you want describes the attitude of the player relative to some reference attitude. It is then essential to know what the reference attitude is.
Moreover, you need to understand that an object's attitude comprises more than just its facing -- it also comprises the object's orientation around that facing. For example, imagine the player facing directly in the positive x direction of the position coordinate system. This affords many different attitudes, from the one where the player is standing straight up to ones where he is horizontal on either his left or right side, to one where he is standing on his head, and all those in between.
Let's suppose that the appropriate reference attitude is the one facing parallel to the positive x direction, and with "up" parallel to the positive z direction (we'll call this "vertical"). Let's also suppose that among the attitudes in which the player is facing the target, you want the one having "up" most nearly vertical. We can imagine the wanted attitude change being performed in two steps: a rotation about the coordinate y axis followed by a rotation about the coordinate z axis. We can write a unit quaternion for each of these, and the desired quaternion for the overall rotation is the Hamilton product of these quaternions.
The quaternion for a rotation of angle θ around the unit vector described by coordinates (x, y, z) is (cos θ/2, x sin θ/2, y sin θ/2, z sin θ/2). Consider then, the first quaternion you want, corresponding to the pitch. You have
double semiRadius = sqrt(deltaX * deltaX + deltaY * deltaY);
double cosPitch = semiRadius / hyp;
double sinPitch = deltaZ / hyp; // but note that we don't actually need this
But you need the sine and cosine of half that angle. The half-angle formulae come in handy here:
double sinHalfPitch = sqrt((1 - cosPitch) / 2) * ((deltaZ < 0) ? -1 : 1);
double cosHalfPitch = sqrt((1 + cosPitch) / 2);
The cosine will always be nonnegative because the pitch angle must be in the first or fourth quadrant; the sine will be positive if the object is above the player, or negative if it is below. With all that being done, the first quaternion is
(cosHalfPitch, 0, sinHalfPitch, 0)
Similar analysis applies to the second quaternion. The cosine and sine of the full rotation angle are
double cosYaw = deltaX / semiRadius;
double sinYaw = deltaY / semiRadius; // again, we don't actually need this
We can again apply the half-angle formulae, but now we need to account for the full angle to be in any quadrant. The half angle, however, can be only in quadrant 1 or 2, so its sine is necessarily non-negative:
double sinHalfYaw = sqrt((1 - cosYaw) / 2);
double cosHalfYaw = sqrt((1 + cosYaw) / 2) * ((deltaY < 0) ? -1 : 1);
That gives us an overall second quaternion of
(cosHalfYaw, 0, 0, sinHalfYaw)
The quaternion you want is the Hamilton product of these two, and you must take care to compute it with the correct operand order (qYaw * qPitch), because the Hamilton product is not commutative. All the zeroes in the two factors make the overall expression much simpler than it otherwise would be, however:
(cosHalfYaw * cosHalfPitch,
-sinHalfYaw * sinHalfPitch,
cosHalfYaw * sinHalfPitch,
sinHalfYaw * cosHalfPitch)
At this point I remind you that we started with an assumption about the reference attitude for the quaternion system, and this result depends on that choice. I also remind you that I made an assumption about the wanted attitude, and that also affects this result.
Finally, I observe that this approach breaks down where the target object is very nearly directly above or directly below the player (corresponding to semiRadius taking a value very near zero) and where the player is very nearly on top of the target (corresponding to hyp taking a value very near zero). There is a non-zero chance of causing a division by zero if you use these formulae exactly as given, so you'll want to think about how to deal with that.
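Putting those pieces together in one sketch (same variable names as above; faceTarget is a made-up helper, the quaternion is returned as (w, x, y, z), and the degenerate cases just mentioned are not handled):

#include <math.h>

void faceTarget(double deltaX, double deltaY, double deltaZ, double quat[4])
{
    double hyp        = sqrt(deltaX*deltaX + deltaY*deltaY + deltaZ*deltaZ);
    double semiRadius = sqrt(deltaX*deltaX + deltaY*deltaY);

    /* pitch half-angle */
    double cosPitch     = semiRadius / hyp;
    double sinHalfPitch = sqrt((1 - cosPitch) / 2) * ((deltaZ < 0) ? -1 : 1);
    double cosHalfPitch = sqrt((1 + cosPitch) / 2);

    /* yaw half-angle */
    double cosYaw     = deltaX / semiRadius;
    double sinHalfYaw = sqrt((1 - cosYaw) / 2);
    double cosHalfYaw = sqrt((1 + cosYaw) / 2) * ((deltaY < 0) ? -1 : 1);

    /* Hamilton product qYaw * qPitch, written out with the zeros removed */
    quat[0] =  cosHalfYaw * cosHalfPitch;  /* w */
    quat[1] = -sinHalfYaw * sinHalfPitch;  /* x */
    quat[2] =  cosHalfYaw * sinHalfPitch;  /* y */
    quat[3] =  sinHalfYaw * cosHalfPitch;  /* z */
}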
I have a signal made up of the sum of a number of sine waves. These are spaced at 100Hz, with the lowest component frequency at 200Hz (200Hz, 300Hz...etc.) All component sine waves begin at the same point with phase = 0. In my DSP software, where I am going to multiply this signal by several other signals, I need to find a point at which all of the original signal's component signals are all again at phase = 0.
If I were only using one sine wave, I could simply look for a change in sign from negative to positive. However, if the signal has, say, components at 200Hz and 300Hz, there are three zero-crossings where the sign changes from negative to positive, but only one that represents the beginning of the period, and this number increases with more component waves. I do have control over the amplitudes of each component frequency during an initial startup sequence. If these waves were strictly harmonic (200Hz, 400Hz, 800Hz, etc.), I could simply remove all but the lowest frequency, find the beginning of its period, and use this as my zero-sample. However, I don't have this bandwidth. Can anyone provide an alternative approach?
Edit 2:
This graphic should demonstrate the issue. The frequencies of the two components here are n and 3n/2. Without filtering out all but the lowest frequency, or taking an FFT as proposed by @hotpaw, an algorithm that only looks for zero-crossings where the sign changes from negative to positive will land on one of three points, and I must find the first of those three (the one point at which each component signal is at phase = 0). I realise that taking an FFT will work, but I'm dealing with very limited processing power and wondering if there's a simpler approach.
Look at the derivative of the signal!
Your signal is a sum of sines (sorry, I'm not sure how to format formulas properly)
S = sum(a_n * sin(k_n * t)) ... over all n
a_n is the positive amplitude and k_n the positive frequency. The derivative (that you can compute easily numerically) of the signal is
dS/dt = sum(a_n * k_n * cos(k_n * t)) ... over all n
At t=0 (what you're looking for), the derivative has its maximum since all cosine terms are one at the same time.
Some addition:
For the practical implementation you need to consider that the derivative may be noisy, so some kind of simple first-order filtering could be necessary.
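As a rough sketch of that idea on a sampled signal (find_phase_zero is a made-up name; s[] holds the samples and dt the sample period, and any pre-filtering is left out):

/* Returns the index where the numerical derivative of s[] is largest,
 * i.e. where all cosine terms line up and the phase is ~0. */
int find_phase_zero(const float *s, int n, float dt)
{
    int best = 1;
    float best_d = -1e30f;
    for (int i = 1; i < n; ++i) {
        float d = (s[i] - s[i-1]) / dt;  /* first-order numerical derivative */
        if (d > best_d) { best_d = d; best = i; }
    }
    return best;
}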
I assume that all the sine waves are exact harmonics of some fundamental frequency, all have a phase of zero with respect to the same reference point at some point in time, and that this is the point in time you wish to find.
You can use an FFT with an aperture length that is an exact multiple of the period of your fundamental frequency (100 Hz). If there is zero noise, you can use 1 period. Estimate the phase with respect to some reference point (FFT aperture start or center) of all the sinusoids using the FFT. Then use the phase of the lowest frequency sinusoid that shows up as significant in the FFT to calculate all its zero crossings in your target time range. Compare with the nearest zero crossing of all the other sinusoids (using the FFT phase to estimate their phases), and find the low frequency zero crossing with the total least squared error of offsets from all the nearest zero crossings of all the other frequencies.
You can go back to the time domain to confirm the least squares estimated crossing as an actual zero crossing and/or to remove some of the numerical noise.
I would go for a first or second order lowpass filter to remove the higher component frequencies. The difference between 100 Hz and the "noise" makes quite a wide gap. Start with a low cutoff frequency that cancels all the noise and increase it until you are satisfied with the signal.
After that you have your signal and can watch for the sign change.
Second order implementation:
#include <math.h>

static float a1 = 0;
static float a2 = 0;
static float b1 = 0;
static float b2 = 0;
static float y = 0;
static float y_old = 0;
static float u_old = 0;
void
init_lp_filter(float cutoff_freq, float sample_time)
{
float wc = cutoff_freq;
float h = sample_time;
float epsilon = 1.0f/sqrt(2.0f);
float omega = wc * sqrt(0.5f);
float alpha = exp(-epsilon*wc*h);
float beta = cos(omega*h);
float gamma = sin(omega*h);
b1 = 1.0f - alpha * (beta + epsilon * wc * gamma / omega);
b2 = alpha * alpha + alpha * (epsilon * wc * gamma / omega - beta);
a1 = -2.0f * alpha * beta;
a2 = alpha * alpha;
}
float
getOutput() {
return y;
}
void
update_filter(float input)
{
float tmp = y;
y = b1 * input + b2 * u_old - a1 * y - a2 * y_old;
y_old = tmp;
u_old = input;
}
As the filtered output depends only on old values, it can be used directly at the beginning of a cycle. The filter can then be updated at the end of the periodic cycle with a new measurement sample. Do note that if you have any output that may affect the signal (e.g. actuators on a physical process), you must sample the signal before producing any output.
Good luck!