I'm trying to implement an atan2-like function to map two input sinusoidal signals of arbitrary relative phase shift to a single output signal that linearly goes from 0 to 2π. atan2 normally assumes two signals with a 90 deg phase shift.
Given y0(x) = sin(x) and y1 = sin(x + phase), where phase is a fixed non-zero value, how can I implement a way to return x modulo 2π?
atan2 returns the angle of a 2D vector. Your two samples are not the components of such a vector, so feeding them to atan2 directly does not handle their scaling properly. But no worries: it is actually very easy to reduce your problem to an atan2 call that handles everything nicely.
Notice that calculating sin(x) and sin(x + phase) is the same as projecting a point (cos(x), sin(x)) onto the axes (0, 1) and (sin(phase), cos(phase)). This is the same as taking dot products with those axes, or transforming the coordinate system from the standard orthogonal basis into the skewed one. This suggests a simple solution: inverse the transformation to get the coordinates in the orthogonal basis and then use atan2.
Here's a code that does that:
double super_atan2(double x0, double x1, double a0, double a1) {
    double det = sin(a0 - a1);
    double u = (x1*sin(a0) - x0*sin(a1))/det;
    double v = (x0*cos(a1) - x1*cos(a0))/det;
    return atan2(v, u);
}

double duper_atan2(double y0, double y1, double phase) {
    const double tau = 6.28318530717958647692; // https://tauday.com/
    return super_atan2(y0, y1, tau/4, tau/4 - phase);
}
super_atan2 gets the angles of the two projection axes, duper_atan2 solves the problem exactly as you stated.
Also notice that the calculation of det is not strictly necessary. It is possible to replace it by fmod and copysign (we still need the correct sign of u and v).
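One possible reading of that remark (my interpretation, not the author's code): atan2 only cares about the direction of (u, v), so dividing by det can be replaced by multiplying by det's sign, and that sign can be recovered with fmod and copysign without evaluating sin(a0 - a1):

#include <math.h>

// Sketch only: same result as super_atan2 above, but det is reduced to its sign.
double super_atan2_signonly(double x0, double x1, double a0, double a1) {
    const double tau = 6.28318530717958647692;
    double d = fmod(a0 - a1, tau);            // reduce a0 - a1 into (-tau, tau)
    if (d < 0) d += tau;                      // ...and then into [0, tau)
    double s = copysign(1.0, tau/2 - d);      // sign of sin(a0 - a1); degenerate at 0 and pi
    double u = s * (x1*sin(a0) - x0*sin(a1));
    double v = s * (x0*cos(a1) - x1*cos(a0));
    return atan2(v, u);                       // atan2 ignores the positive scale factor |det|
}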
Derivation: y1 = sin(x + phase) = sin(x) cos(phase) + cos(x) sin(phase), so cos(x) = (y1 - y0 cos(phase)) / sin(phase), while sin(x) = y0 directly; feed those two into atan2 and fold the result into [0, 2π).
In code:
// assume phase != k * pi, for any integer k
double f (double y0, double y1, double phase)
{
    double u = (- y0 * cos(phase) + y1) / sin(phase);
    double v = y0;
    double x = atan2 (v, u);
    return (x < 0) ? (x + 2 * M_PI) : x;
}
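A quick usage sketch (my own test harness, not part of the answer), assuming the f() above is in scope:

#include <math.h>
#include <stdio.h>

int main(void) {
    double phase = 0.7;                      // any fixed phase != k * pi
    for (double x = 0.5; x < 6.2; x += 1.0) {
        double recovered = f(sin(x), sin(x + phase), phase);
        printf("x = %f  recovered = %f\n", x, recovered);  // should agree modulo 2*pi
    }
    return 0;
}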
I am replacing my project's use of glRotatef because I need to be able to transform double matrices. glRotated is not an option because OpenGL does not guarantee the stored matrices or any operations performed to be double precision. However, my new implementation only rotates around the global axes, and does not give the same result as glRotatef.
I have looked at some implementations of glRotatef (like OpenGl rotate custom implementation) and don't see how they account for the initial transformation matrix's local axes when calculating the rotation matrix.
I have a generic rotate function, taken (with some changes) from https://community.khronos.org/t/implementing-rotation-function-like-glrotate/68603:
typedef double double_matrix_t[16];
void rotate_double_matrix(const double_matrix_t in, double angle, double x, double y, double z,
                          double_matrix_t out)
{
    double sinAngle, cosAngle;
    double mag = sqrt(x * x + y * y + z * z);

    sinAngle = sin(angle * M_PI / 180.0);
    cosAngle = cos(angle * M_PI / 180.0);

    if (mag > 0.0)
    {
        double xx, yy, zz, xy, yz, zx, xs, ys, zs;
        double oneMinusCos;
        double_matrix_t rotMat;

        x /= mag;
        y /= mag;
        z /= mag;

        xx = x * x;
        yy = y * y;
        zz = z * z;
        xy = x * y;
        yz = y * z;
        zx = z * x;
        xs = x * sinAngle;
        ys = y * sinAngle;
        zs = z * sinAngle;
        oneMinusCos = 1.0 - cosAngle;

        rotMat[0] = (oneMinusCos * xx) + cosAngle;
        rotMat[4] = (oneMinusCos * xy) - zs;
        rotMat[8] = (oneMinusCos * zx) + ys;
        rotMat[12] = 0.0;
        rotMat[1] = (oneMinusCos * xy) + zs;
        rotMat[5] = (oneMinusCos * yy) + cosAngle;
        rotMat[9] = (oneMinusCos * yz) - xs;
        rotMat[13] = 0.0;
        rotMat[2] = (oneMinusCos * zx) - ys;
        rotMat[6] = (oneMinusCos * yz) + xs;
        rotMat[10] = (oneMinusCos * zz) + cosAngle;
        rotMat[14] = 0.0;
        rotMat[3] = 0.0;
        rotMat[7] = 0.0;
        rotMat[11] = 0.0;
        rotMat[15] = 1.0;

        multiply_double_matrices(in, rotMat, out); // Generic matrix multiplication function.
    }
}
I call this function with the same rotations I used to call glRotatef with and in the same order, but the result is different. All rotations are done around the global axes, while glRotatef would rotate around the local axis of in.
For example, I have a plane:
and I pitch up 90 degrees (this gives the expected result with both glRotatef and my rotation function) and persist the transformation:
If I bank 90 degrees with glRotatef (glRotatef(90, 0.0f, 0.0f, 1.0f)), the plane rotates around the transformation's local Z axis pointing out of the plane's nose, which is what I want:
But if I bank 90 degrees with my code (rotate_double_matrix(in, 90.0f, 0.0, 0.0, 1.0, out)), I get this:
The plane is still rotating around the global Z axis.
Similar issues happen if I change the order of rotations - the first rotation gives the expected result, but subsequent rotations still happen around the global axes.
How does glRotatef rotate around a matrix's local axes? What do I need to change in my code to get the same result? I assume rotate_double_matrix needs to modify the x, y, z values passed in based on the in matrix somehow, but I'm not sure.
You're probably multiplying the matrices in the wrong order. Try changing
multiply_double_matrices(in, rotMat, out);
to
multiply_double_matrices(rotMat, in, out);
I can never remember which way is right, and there's a reasonable chance multiply_double_matrices is backwards anyway (at least if I'd written it :)
The order you multiply matrices in matters. Since rotMat holds your rotation, and in holds the combination of all other matrices applied so far, i.e. "everything else", multiplying in the wrong order means that rotMat gets applied after everything else instead of before everything else. (And I didn't get that part backwards! If you want rotMat to be the "top of stack" transformation, that means you actually want it to be the first when your vertex coordinates are processed)
Another possibility is that you mixed up rows with columns. OpenGL matrices go down, then across, i.e.
matrix[0] matrix[4] matrix[8] matrix[12]
matrix[1] matrix[5] matrix[9] matrix[13]
matrix[2] matrix[6] matrix[10] matrix[14]
matrix[3] matrix[7] matrix[11] matrix[15]
even though 2D arrays are traditionally stored across, then down:
matrix[0] matrix[1] matrix[2] matrix[3]
matrix[4] matrix[5] matrix[6] matrix[7]
matrix[8] matrix[9] matrix[10] matrix[11]
matrix[12] matrix[13] matrix[14] matrix[15]
Getting this wrong can cause similar-looking, but mathematically different, issues.
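For what it's worth, here is a sketch of how a column-major multiply_double_matrices could look (an assumption about your helper, not a quote of it), with indexing matching the layout above. Which argument order then reproduces glRotatef depends entirely on this definition, which is exactly the ambiguity described above.

typedef double double_matrix_t[16];  // same typedef as in the question

// out = a * b, both column-major as laid out above; out must not alias a or b.
void multiply_double_matrices(const double_matrix_t a, const double_matrix_t b,
                              double_matrix_t out)
{
    for (int col = 0; col < 4; col++) {
        for (int row = 0; row < 4; row++) {
            double sum = 0.0;
            for (int k = 0; k < 4; k++)
                sum += a[k * 4 + row] * b[col * 4 + k];  // a(row,k) * b(k,col)
            out[col * 4 + row] = sum;
        }
    }
}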
I am in the process of writing a function to test for the intersection of a rectangle with a superellipse.
The rectangle will always be axis-aligned whereas the superellipse may be oriented with an angle of rotation alpha.
In the case of an axis-aligned rectangle intersecting an axis-aligned superellipse I have written these two short functions that work beautifully.
The code is concise, clear and efficient. If possible, I would like to keep a similar structure for the new more general function.
Here is what I have for detecting if an axis-aligned rectangle intersects an axis-aligned superellipse:
double fclamp(double x, double min, double max)
{
    if (x <= min) return min;
    if (x >= max) return max;
    return x;
}

bool rect_intersects_superellipse(const t_rect *rect, double cx, double cy, double rx, double ry, double exponent)
{
    t_pt closest;
    closest.x = fclamp(cx, rect->x, rect->x + rect->width);
    closest.y = fclamp(cy, rect->y, rect->y + rect->height);
    return point_inside_superellipse(&closest, cx, cy, rx, ry, exponent);
}

bool point_inside_superellipse(const t_pt *pt, double cx, double cy, double rx, double ry, double exponent)
{
    double dx = fabs(pt->x - cx);
    double dy = fabs(pt->y - cy);
    double dxp = pow(dx, exponent);
    double dyp = pow(dy, exponent);
    double rxp = pow(rx, exponent);
    double ryp = pow(ry, exponent);
    return (dxp * ryp + dyp * rxp) <= (rxp * ryp);
}
This works correctly but - as I said - only for an axis-aligned superellipse.
Now I would like to generalize it to an oriented superellipse, keeping the algorithm structure as close to the above as possible.
The obvious expansion of the previous two functions would then become something like:
bool rect_intersects_oriented_superellipse(const t_rect *rect, double cx, double cy, double rx, double ry, double exponent, double radians)
{
    t_pt closest;
    closest.x = fclamp(cx, rect->x, rect->x + rect->width);
    closest.y = fclamp(cy, rect->y, rect->y + rect->height);
    return point_inside_oriented_superellipse(&closest, cx, cy, rx, ry, exponent, radians);
}

bool point_inside_oriented_superellipse(const t_pt *pt, double cx, double cy, double rx, double ry, double exponent, double radians)
{
    double dx = pt->x - cx;
    double dy = pt->y - cy;
    if (radians) {
        double c = cos(radians);
        double s = sin(radians);
        double new_x = dx * c - dy * s;
        double new_y = dx * s + dy * c;
        dx = new_x;
        dy = new_y;
    }
    double dxp = pow(fabs(dx), exponent);
    double dyp = pow(fabs(dy), exponent);
    double rxp = pow(rx, exponent);
    double ryp = pow(ry, exponent);
    return (dxp * ryp + dyp * rxp) < (rxp * ryp);
}
For an oriented superellipse, the above doesn't work correctly, even though point_inside_oriented_superellipse() by itself works as expected. I cannot use the above functions to test for an intersection with an axis-aligned rectangle. I have been researching online for about a week and have found some solutions that require an inverse matrix transform to equalize the superellipse axes and bring its origin to (0, 0). The trade-off is that my rectangle would then no longer be a rectangle, and certainly not axis-aligned. I would like to avoid going down that route.
My question is: how can I make the above algorithm work while keeping its structure more or less unaltered? If that is not possible, please show the simplest, most efficient algorithm to test for the intersection between an axis-aligned rectangle and an oriented superellipse. I only need to know whether the intersection occurred (a boolean result).
The range of the exponent parameter can vary from 0.25 to 100.0.
Thanks for any assistance.
Take a look at point 2 in this source. In simple terms, you will need to do the following tests:
1. Are there any rectangle vertexes in the ellipse?
2. Is a rectangle edge intersecting the ellipse?
3. Is the center of the ellipse inside the rectangle?
The ellipse and the rectangle intersect each other if any of the questions above can be answered with a yes, so your function should return something like this:
return areVertexesInsideEllipse(/*params*/) || areRectangleEdgesIntersectingEllipse(/*params*/) || isEllipseCenterInsideRectangle(/*params*/);
The doc even has an example of implementation, which is reasonably close to yours.
To check whether any vertex is inside the ellipse, test its coordinates against the inequality of the ellipse. To check whether an edge overlaps the ellipse, first check whether its line goes through or touches the ellipse; if so, check whether the chord where that happens intersects the segment defined by the edge. To check whether the center of the ellipse is inside the rectangle, test the center against the inequalities of the rectangle.
Note that these are very general terms; they do not even assume that your rectangle is axis-aligned, let alone your ellipse.
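For instance, the outer structure could look like the sketch below. It reuses point_inside_oriented_superellipse() from the question; edge_intersects_superellipse() is a hypothetical helper you would still need to write along the lines described above, and I'm assuming t_pt is simply { double x, y; }.

#include <stdbool.h>

bool point_inside_oriented_superellipse(const t_pt *pt, double cx, double cy,
                                        double rx, double ry, double exponent, double radians);
bool edge_intersects_superellipse(const t_pt *p0, const t_pt *p1, double cx, double cy,
                                  double rx, double ry, double exponent, double radians);

bool rect_intersects_oriented_superellipse(const t_rect *rect, double cx, double cy,
                                           double rx, double ry, double exponent,
                                           double radians)
{
    // Rectangle corners in perimeter order
    t_pt corners[4] = {
        { rect->x,               rect->y                },
        { rect->x + rect->width, rect->y                },
        { rect->x + rect->width, rect->y + rect->height },
        { rect->x,               rect->y + rect->height }
    };

    // 1. Any rectangle vertex inside the superellipse?
    for (int i = 0; i < 4; i++)
        if (point_inside_oriented_superellipse(&corners[i], cx, cy, rx, ry, exponent, radians))
            return true;

    // 2. Superellipse center inside the rectangle?
    if (cx >= rect->x && cx <= rect->x + rect->width &&
        cy >= rect->y && cy <= rect->y + rect->height)
        return true;

    // 3. Any rectangle edge crossing the superellipse boundary? (hypothetical helper)
    for (int i = 0; i < 4; i++)
        if (edge_intersects_superellipse(&corners[i], &corners[(i + 1) % 4],
                                         cx, cy, rx, ry, exponent, radians))
            return true;

    return false;
}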
First you should rule out the obvious non-intersecting cases using the separating axis theorem (SAT). The superellipse has possibly two bounding boxes, depending on whether the exponent n > 1 or n <= 1.
In the SAT, all vertices of bounding box ABCD are compared against all (directed) edges of the superellipse's bounding box abcd, and then vice versa. If the signed distances to a separating axis are all positive (i.e. outside), the objects don't collide.
     z
    / \
   B   \
  / \   \
 w   \   y
  \   \ /
   \   A
    \ /
     x
The exponent n == 1 divides the cases further: n <= 1 makes the superellipse concave, in which case ABCD intersects abcd only if one or more of its points are inside the superellipse.
When n > 1, one must solve for the intersection point of a line segment of the AABB and the superellipse, which may have to be approximated by splines, or another proxy must be found. After all, the actual intersection point is not of interest; but putting the equations into Wolfram Alpha failed to produce any results in standard execution time.
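To make the bounding-box pruning concrete, here is a minimal SAT sketch (my own sketch, not the answer's code) for an axis-aligned box against the superellipse's oriented bounding box. For any exponent the superellipse lies inside [-rx, rx] x [-ry, ry] in its local frame, so if this test fails there is no intersection; if it passes, the finer tests above are still needed.

#include <math.h>
#include <stdbool.h>

// Does the AABB [bx0,bx1] x [by0,by1] overlap the box [-rx,rx] x [-ry,ry]
// rotated by 'radians' about (cx, cy)?  Conservative: true means "maybe".
static bool aabb_overlaps_obb(double bx0, double by0, double bx1, double by1,
                              double cx, double cy, double rx, double ry,
                              double radians)
{
    double c = cos(radians), s = sin(radians);
    double ax = (bx1 - bx0) * 0.5, ay = (by1 - by0) * 0.5;   // AABB half-extents
    double dx = cx - (bx0 + bx1) * 0.5;                      // center-to-center vector
    double dy = cy - (by0 + by1) * 0.5;

    // Separating axis candidates: the two world axes...
    if (fabs(dx) > ax + fabs(rx * c) + fabs(ry * s)) return false;
    if (fabs(dy) > ay + fabs(rx * s) + fabs(ry * c)) return false;
    // ...and the two local axes of the oriented box.
    if (fabs(dx * c + dy * s)  > rx + fabs(ax * c) + fabs(ay * s)) return false;
    if (fabs(-dx * s + dy * c) > ry + fabs(ax * s) + fabs(ay * c)) return false;

    return true;  // no separating axis found among the candidates
}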
I would like to compute the norm (length) of three- and four-dimensional vectors. I'm using double-precision floating point numbers and want to be careful to avoid unnecessary overflow or underflow.
The C math library provides hypot(x,y) for computing the norm of two-dimensional vectors, being careful to avoid underflow/overflow in intermediate calculations.
My question: Is it safe to use hypot(x, hypot(y, z)) and hypot(hypot(w, x), hypot(y, z)) to compute the lengths of three- and four-dimensional vectors, respectively?
It's safe, but it's wasteful: you only need to compute sqrt() once, but when you cascade hypot(), you will call sqrt() for every call to hypot(). Ordinarily I might not be concerned about the performance, but this may also degrade the precision of the result. You could write your own:
double hypot3(double x, double y, double z) {
    return sqrt(x*x + y*y + z*z);
}
etc. This will be faster and more accurate. I don't think anyone would be confused when they see hypot3() in your code.
The standard library hypot() may have tricks to avoid overflow, but you may not be concerned about it. Ordinarily, hypot() is more accurate than sqrt(x*x + y*y). See e_hypot.c in the GLibC source code.
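The basic trick is to factor out the larger magnitude before squaring, something along these lines (a simplified sketch of the idea, not glibc's actual algorithm):

#include <math.h>

double hypot2_scaled(double x, double y) {
    x = fabs(x);
    y = fabs(y);
    if (x < y) { double t = x; x = y; y = t; }  // ensure x >= y
    if (x == 0.0) return 0.0;
    double r = y / x;                           // r <= 1, so r*r cannot overflow
    return x * sqrt(1.0 + r * r);
}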
It is safe (almost) to use hypot(x, hypot(y, z)) and hypot(hypot(w, x), hypot(y, z)) to compute the lengths of three- and four-dimensional vectors.
C does not strongly specify that hypot() must work for a double x, y that have a finite double answer. It has weasel words of "without undue overflow or underflow".
Yet given that hypot(x, y) works, a reasonable hypot() implementation will handle hypot(hypot(w, x), hypot(y, z)) as needed. Only 1 increment (at the low end) / decrement (at the high end) of binary exponent range is lost with 4-D versus 2-D.
Concerning speed, precision, and range, profile your code against sqrtl((long double) w*w + (long double) x*x + (long double) y*y + (long double) z*z) as an alternative, but that seems to be needed only for select coding goals.
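For reference, that long-double alternative would look something like this (a sketch; it only helps where long double is actually wider than double on your target):

#include <math.h>

double norm4_long_double(double w, double x, double y, double z) {
    long double s = (long double)w * w + (long double)x * x
                  + (long double)y * y + (long double)z * z;
    return (double)sqrtl(s);
}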
I've done some experiments with this sort of thing. In particular I looked at a plain implementation, an implementation using hypots and (a C translation of the reference version of) the BLAS function DNRM2.
I found that, as regards overflow and underflow, the BLAS and hypot implementations were the same (in my tests) and far superior to the plain implementation. As regards time, for high-dimensional vectors (hundreds of elements), the BLAS was about 6 times slower than the plain implementation, while the hypot version was 3 times slower than BLAS. The time differences were a bit smaller for smaller dimensions.
Should code be unable to use hypot() or wider precision types, a slow method examines the exponents using frexp() and scales the arguments (per @greggo).
#include <float.h>
#include <math.h>

double nibot_norm(double w, double x, double y, double z) {
    // Sort the values by magnitude (recursively), so that |w| <= |x| <= |y| <= |z|
    if (fabs(x) < fabs(w)) return nibot_norm(x, w, y, z);
    if (fabs(y) < fabs(x)) return nibot_norm(w, y, x, z);
    if (fabs(z) < fabs(y)) return nibot_norm(w, x, z, y);
    if (z == 0.0) return 0.0; // all zero case

    // Scale z to an exponent half-way between 1.0 and MAX_DOUBLE/4,
    // and scale w, x, y by the same amount
    int maxi;
    frexp(DBL_MAX, &maxi);
    int zi;
    frexp(z, &zi);
    int pow2scale = (maxi / 2 - 2) - zi;

    // NO precision loss expected so far,
    // except w, x, y may become 0.0 if _far_ less than z
    w = ldexp(w, pow2scale);
    x = ldexp(x, pow2scale);
    y = ldexp(y, pow2scale);
    z = ldexp(z, pow2scale);

    // All finite values are in range of squaring, except for values
    // greatly insignificant to z (e.g. |z| > |x|*1e300)
    double norm = sqrt(((w * w + x * x) + y * y) + z * z);

    // Restore scale
    return ldexp(norm, -pow2scale);
}
Test Code
#include <float.h>
#include <stdio.h>

#ifndef DBL_TRUE_MIN
#define DBL_TRUE_MIN DBL_MIN*DBL_EPSILON
#endif

void nibot_norm_test(double w, double x, double y, double z, double expect) {
    static int dig = DBL_DECIMAL_DIG - 1;
    printf(" w:%.*e x:%.*e y:%.*e z:%.*e\n", dig, w, dig, x, dig, y, dig, z);
    double norm = nibot_norm(w, x, y, z);
    printf("expect:%.*e\n", dig, expect);
    printf("actual:%.*e\n", dig, norm);
    if (expect != norm) puts("Different");
}

int main(void) {
    nibot_norm_test(0, 0, 0, 0, 0);
    nibot_norm_test(10 / 7., 4 / 7., 2 / 7., 1 / 7., 11 / 7.);
    nibot_norm_test(DBL_MAX, 0, 0, 0, DBL_MAX);
    nibot_norm_test(DBL_MAX / 2, DBL_MAX / 2, DBL_MAX / 2, DBL_MAX / 2, DBL_MAX);
    nibot_norm_test(DBL_TRUE_MIN, 0, 0, 0, DBL_TRUE_MIN);
    nibot_norm_test(DBL_TRUE_MIN, DBL_TRUE_MIN, DBL_TRUE_MIN,
                    DBL_TRUE_MIN, DBL_TRUE_MIN * 2);
    return 0;
}
Results
w:0.00000000000000000e+00 x:0.00000000000000000e+00 y:0.00000000000000000e+00 z:0.00000000000000000e+00
expect:0.00000000000000000e+00
actual:0.00000000000000000e+00
w:1.42857142857142860e+00 x:5.71428571428571397e-01 y:2.85714285714285698e-01 z:1.42857142857142849e-01
expect:1.57142857142857140e+00
actual:1.57142857142857140e+00
w:1.79769313486231571e+308 x:0.00000000000000000e+00 y:0.00000000000000000e+00 z:0.00000000000000000e+00
expect:1.79769313486231571e+308
actual:1.79769313486231571e+308
w:8.98846567431157854e+307 x:8.98846567431157854e+307 y:8.98846567431157854e+307 z:8.98846567431157854e+307
expect:1.79769313486231571e+308
actual:1.79769313486231571e+308
w:4.94065645841246544e-324 x:0.00000000000000000e+00 y:0.00000000000000000e+00 z:0.00000000000000000e+00
expect:4.94065645841246544e-324
actual:4.94065645841246544e-324
w:4.94065645841246544e-324 x:4.94065645841246544e-324 y:4.94065645841246544e-324 z:4.94065645841246544e-324
expect:9.88131291682493088e-324
actual:9.88131291682493088e-324
I have 2 points A and B in a plane. What I need to find is the points w, x, y and z so that I can have a uniform bounding box.
The conditions are that the lines formed by wx and yz are parallel to AB. Similarly, wBz and xAy must be parallel. Also note that angles zwx and wxy are right angles; basically, wxyz has to be a square.
     z
    / \
   B   \
  / \   \
 w   \   y
  \   \ /
   \   A
    \ /
     x
Basically, finding w, x, y and z is easy if line AB is parallel to the x-axis or to the y-axis. I'm having trouble determining the points w, x, y and z when line AB is at an angle with the x-axis (the slope of line AB could be positive or negative).
Any comments/suggestions are highly appreciated. Thanks!
Treat A and B as vectors in your plane, (xa, ya) and (xb, yb). Take the vector difference to generate a vector C that points from B to A.
C = A - B = (xa - xb, ya - yb) = (xc, yc)
Rotate this vector 90 degrees in each direction, and scale by a half, to get D = (xd, yd) and E = (xe, ye).
D = (-yc/2, +xc/2)
E = -D = (+yc/2, -xc/2)
Use vector arithmetic to get the four points of the square.
w = B + D
x = A + D
y = A + E
z = B + E
EDIT: Fat fingers.
EDIT2: Forgot the factor of a half.
EDIT3: Vector rotation reference, as requested.
To figure out the vector rotation, one can, in general, perform multiplication with a rotation matrix. In this case, for angles of +/- pi/2, the cos factor is 0 and the sin factor is +/- 1, so the rotation just swaps the components and flips one sign.
If matrix multiplication isn't your thing, draw on paper (or just imagine) a sample vector in any quadrant. Now rotate the paper 90 deg in either direction and see how the x and y components get swapped around and negated.
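Concretely, with the usual rotation matrix the +/- 90 degree cases reduce to a swap and a sign flip, which is where D and E above come from (after the factor of a half):

R(θ) = | cos θ   -sin θ |      R(+90°)(xc, yc) = (-yc, +xc)  =>  D = (-yc/2, +xc/2)
       | sin θ    cos θ |      R(-90°)(xc, yc) = (+yc, -xc)  =>  E = (+yc/2, -xc/2)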
neirbowj's answer translated to a more optimized solution, if optimization floats your boat.
Vars you know (Ax, Ay, Bx, By);
Vars you solve for (Wx, Wy, Xx, Xy, Yx, Yy, Zx, Zy);
float dx = (By - Ay) / 2;  // half the AB vector, components used crosswise for the 90-degree rotation
float dy = (Bx - Ax) / 2;
float Wx = Ax - dx;
float Wy = Ay + dy;
float Zx = Ax + dx;
float Zy = Ay - dy;
float Xx = Bx - dx;
float Xy = By + dy;
float Yx = Bx + dx;
float Yy = By - dy;
I have created a C++ implementation of the Hough transform for detecting lines in images. Found lines are represented using rho, theta, as described on Wikipedia:
"The parameter r represents the distance between the line and the origin, while θ is the angle of the vector from the origin to this closest point "
How can I find the intersection point in x, y space for two lines described using r, θ?
For reference, here are my current functions for converting in and out of Hough space:

// Get 'r' (the perpendicular distance from the pole (the corner, 0,0) to a line
// through point x,y at the given angle), given the point and the angle (in radians).
inline float point2Hough(int x, int y, float theta) {
    return((((float)x)*cosf(theta))+((float)y)*sinf(theta));
}

// Get the y coordinate at a given x for a line at angle theta with distance r from the pole.
// (Bad explanation! >_<)
inline float hough2Point(int x, int r, float theta) {
    float y;
    if(theta!=0) {
        y=(-cosf(theta)/sinf(theta))*x+((float)r/sinf(theta));
    } else {
        y=(float)r; //wth theta may == 0?!
    }
    return(y);
}
Sorry in advance if this is something obvious.
Looking at the Wikipedia page, I see that the equation of a straight line corresponding to a given r, θ pair is
r = x cosθ + y sinθ
Thus, if I understand correctly, given two pairs r1, θ1 and r2, θ2, to find the intersection you must solve the following 2x2 linear system for the unknowns x, y:
x cos θ1 + y sin θ1 = r1
x cos θ2 + y sin θ2 = r2
that is, A X = b, where

A = | cos θ1   sin θ1 |      X = | x |      b = | r1 |
    | cos θ2   sin θ2 |          | y |          | r2 |
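Solving that 2x2 system explicitly (Cramer's rule), assuming the lines are not parallel, i.e. sin(θ2 - θ1) ≠ 0:

x = (r1 sin θ2 - r2 sin θ1) / (cos θ1 sin θ2 - sin θ1 cos θ2)
y = (r2 cos θ1 - r1 cos θ2) / (cos θ1 sin θ2 - sin θ1 cos θ2)

The common denominator is sin(θ2 - θ1), which is exactly the d computed in the code below.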
I had never encountered matrix maths before, so it took a bit of research and experimentation to work out the procedure for Fredrico's answer. Thanks, I needed to learn about matrices anyway. ^^
Function to find where two parameterized lines intersect:
//Find point (x,y) where two parameterized lines intersect :p Returns 0 if lines are parallel
int parametricIntersect(float r1, float t1, float r2, float t2, int *x, int *y) {
    float ct1=cosf(t1);     //matrix element a
    float st1=sinf(t1);     //b
    float ct2=cosf(t2);     //c
    float st2=sinf(t2);     //d
    float d=ct1*st2-st1*ct2;        //determinant (rearranged matrix for inverse)
    if(d!=0.0f) {
        *x=(int)((st2*r1-st1*r2)/d);
        *y=(int)((-ct2*r1+ct1*r2)/d);
        return(1);
    } else { //lines are parallel and will NEVER intersect!
        return(0);
    }
}
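A quick sanity check of the function (my own example): a vertical line x = 3 is (r = 3, θ = 0) and a horizontal line y = 5 is (r = 5, θ = π/2), so they should meet at (3, 5):

#include <math.h>
#include <stdio.h>

int main(void) {
    int x, y;
    if (parametricIntersect(3.0f, 0.0f, 5.0f, (float)(M_PI / 2), &x, &y))
        printf("intersection at (%d, %d)\n", x, y);  // expect (3, 5); note the int truncation
    else
        printf("lines are parallel\n");
    return 0;
}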