Drawing an orbit trail for a planet in OpenGL

I'm currently building a model of the solar system. I have a planet that moves around the sun. This is the planet:
glPushMatrix();
glColor3f(1.0, 0.0, 0.0);
glRotatef(theta * 10, 0.0, 1.0, 0.0);
glTranslatef(P1[0], P1[1], P1[2]);
gluSphere(quad, 0.05, 100, 20);
glPopMatrix();
Now, I want to draw a trail around the sun exactly where the planet moves. How do I do this? I'm supposed to use GL_LINES to draw it. So far I've got this, but I'm not getting the desired result: the circular path doesn't match the planet's actual orbit.
glBegin(GL_LINES);
for (float i = 0; i < 2 * PI; i += 0.01)
{
    float x = P1[0] * cos(i) + 0.0;
    float y = 0.0;
    float z = P1[2] * sin(i) + 0.0;
    glVertex3f(x, y, z);
}
glEnd();
Given the information about the planet, how do I draw its orbit trail?

If the planet follows a circular orbit, you need to know the center of the circle, the radius, and the axis of rotation. In your case the axis of rotation is the y-axis, so the points along the orbit can be computed with the trigonometric functions sin and cos. Define a center point (float CPT[3]) and a radius (float radius), and use the line primitive type GL_LINE_LOOP to draw the circular orbit:
glBegin(GL_LINE_LOOP);
for (float angle = 0; angle < 2 * PI; angle += 0.01f)
{
    float x = CPT[0] + cos(angle) * radius;
    float y = CPT[1];
    float z = CPT[2] + sin(angle) * radius;
    glVertex3f(x, y, z);
}
glEnd();
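For the planet in the question, the sun presumably sits at the origin (the posted transform rotates the translation P1 about the y-axis), so under that assumption the center and radius fall out of P1 directly. A minimal sketch:

#include <math.h>

/* Assumption: the orbit is centered on the y-axis at height P1[1],
 * so the radius is the distance of P1 from that axis. */
float CPT[3] = { 0.0f, P1[1], 0.0f };
float radius = sqrtf(P1[0] * P1[0] + P1[2] * P1[2]);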

Related

How can I transform 3D coordinates into 2D coordinates using isometric projection?

Programming Language: C
I'm currently in the process of implementing a 3D wireframe model represented through isometric projection.
My current understanding of the project is to:
Parse a text map containing the x,y,z coordinates of the wireframe model
Transform the 3D coordinates to 2D using isometric projection
Draw the lines using the Bresenham line algorithm and a few functions out of my graphics library of choice.
I'm done with step 1, but I've been stuck on step 2 for the last few days.
I understand that isometric projection is the process of projecting onto a 2D plane at an angle so that it looks like it's 3D, even though we are only working with x,y when drawing the lines. That is definitely not the best way of describing it, so if I'm incorrect please correct me.
Example of a text map:
0 0 0
0 5 0
0 0 0
My data structure of choice (implemented as array of structs)
typedef struct point
{
    float x;
    float y;
    float z;
    bool is_last;
    int color; // Implemented after mandatory part
} t_point;
I pretty much just read the rows, columns, and values out of the text map and store them in the x, y, z fields respectively.
Now that I have to transform them I've tried the following formulas:
const double angle = 30 * M_PI / 180.0;

void isometric(t_dot *dot, double angle)
{
    dot->x = (dot->x - dot->y) * cos(angle);
    dot->y = (dot->x + dot->y) * sin(angle) - dot->z;
}
static void iso(int x, int y, int z)
{
    int previous_x;
    int previous_y;

    previous_x = x;
    previous_y = y;
    x = (previous_x - previous_y) * cos(0.523599);
    y = -z + (previous_x + previous_y) * sin(0.523599);
}
t_point *calc_isometric(t_point *pts, int max_pts)
{
    float x;
    float y;
    float z;
    const double angle = 30 * M_PI / 180.0;
    int num_pts;

    num_pts = 0;
    while (num_pts < max_pts)
    {
        x = pts[num_pts].x;
        y = pts[num_pts].y;
        z = pts[num_pts].z;
        printf("x: %f y: %f z: %f\n", x, y, z);
        pts[num_pts].x = (x - y) * cos(angle);
        pts[num_pts].y = (x + y) * sin(angle) - z;
        printf("x_iso %f\ty_iso %f\n\n", pts[num_pts].x, pts[num_pts].y);
        num_pts++;
    }
    return (pts);
}
It spits out various values which make no sense to me. I could just go on, try to implement the line algorithm from here, and hope for the best, but I would like to understand what I'm actually doing here.
Next to that, I learned through my research that I need to set up my camera in a certain way to create the projection.
All in all I'm just very lost, and my question boils down to this:
Please help me understand the concept of isometric projection.
How do I transform 3D coordinates (x,y,z) into 2D coordinates using isometric projection?
I see it like this:
// constants:
float deg = M_PI/180.0;
float ax = 30*deg;
float ay =150*deg;
vec2 X = vec2(cos(ax),-sin(ax)); // x axis
vec2 Y = vec2(cos(ay),-sin(ay)); // y axis
vec2 Z = vec2( 0.0,- 1.0); // z axis
vec2 O = vec2(0,0); // position of point (0,0,0) on screen
// conversion:
vec3 p=vec3(?,?,?); // input point
vec2 q=O+(p.x*X)+(p.y*Y)+(p.z*Z); // output point
The coordinate-wise version:
float Xx = cos(ax);
float Xy = -sin(ax);
float Yx = cos(ay);
float Yy = -sin(ay);
float Zx = 0.0;
float Zy = - 1.0;
float Ox = 0;
float Oy = 0;
// conversion:
float px=?,py=?,pz=?; // input point
float qx=Ox+(px*Xx)+(py*Yx)+(pz*Zx); // output point
float qy=Oy+(px*Xy)+(py*Yy)+(pz*Zy); // output point
Assuming the x axis goes to the right and the y axis goes down ... the O is usually set to the center of the screen instead of (0,0), unless you add pan capabilities to your isometric world.
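Applied to the asker's t_point struct, this might look like the following C sketch (the function name and the ox/oy screen-center parameters are mine; note the inputs are copied before being overwritten, which the question's isometric function doesn't do):

#include <math.h>

void project_iso(t_point *p, float ox, float oy)
{
    const float deg = M_PI / 180.0;
    const float ax = 30 * deg;
    const float ay = 150 * deg;
    // Copy the inputs first so computing p->y doesn't read the
    // already-overwritten p->x.
    float x = p->x;
    float y = p->y;
    float z = p->z;

    p->x = ox + x * cosf(ax) + y * cosf(ay);                  // Zx is 0
    p->y = oy + x * -sinf(ax) + y * -sinf(ay) + z * -1.0f;
}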
In case you want to add arbitrary rotations within the "3D" XY plane, see this:
How can I warp a shader matrix to match isometric perspective in a 3d scene?
You just compute the X,Y vectors on the ellipse (beware, they will not be unit length anymore!). So if I see it right, it would be:
float ax=?,ay=ax+90*deg;
float Xx = cos(ax) ;
float Xy = -sin(ax)*0.5;
float Yx = cos(ay) ;
float Yy = -sin(ay)*0.5;
where ax is the rotation angle...

How does glRotatef rotate around local axes?

I am replacing my project's use of glRotatef because I need to be able to transform double matrices. glRotated is not an option because OpenGL does not guarantee the stored matrices or any operations performed to be double precision. However, my new implementation only rotates around the global axes, and does not give the same result as glRotatef.
I have looked at some implementations of glRotatef (like OpenGl rotate custom implementation) and don't see how they account for the initial transformation matrix's local axes when calculating the rotation matrix.
I have a generic rotate function, taken (with some changes) from https://community.khronos.org/t/implementing-rotation-function-like-glrotate/68603:
typedef double double_matrix_t[16];

void rotate_double_matrix(const double_matrix_t in, double angle,
                          double x, double y, double z,
                          double_matrix_t out)
{
    double sinAngle, cosAngle;
    double mag = sqrt(x * x + y * y + z * z);

    sinAngle = sin(angle * M_PI / 180.0);
    cosAngle = cos(angle * M_PI / 180.0);
    if (mag > 0.0)
    {
        double xx, yy, zz, xy, yz, zx, xs, ys, zs;
        double oneMinusCos;
        double_matrix_t rotMat;

        x /= mag;
        y /= mag;
        z /= mag;
        xx = x * x;
        yy = y * y;
        zz = z * z;
        xy = x * y;
        yz = y * z;
        zx = z * x;
        xs = x * sinAngle;
        ys = y * sinAngle;
        zs = z * sinAngle;
        oneMinusCos = 1.0 - cosAngle;

        rotMat[0]  = (oneMinusCos * xx) + cosAngle;
        rotMat[4]  = (oneMinusCos * xy) - zs;
        rotMat[8]  = (oneMinusCos * zx) + ys;
        rotMat[12] = 0.0;
        rotMat[1]  = (oneMinusCos * xy) + zs;
        rotMat[5]  = (oneMinusCos * yy) + cosAngle;
        rotMat[9]  = (oneMinusCos * yz) - xs;
        rotMat[13] = 0.0;
        rotMat[2]  = (oneMinusCos * zx) - ys;
        rotMat[6]  = (oneMinusCos * yz) + xs;
        rotMat[10] = (oneMinusCos * zz) + cosAngle;
        rotMat[14] = 0.0;
        rotMat[3]  = 0.0;
        rotMat[7]  = 0.0;
        rotMat[11] = 0.0;
        rotMat[15] = 1.0;

        multiply_double_matrices(in, rotMat, out); // Generic matrix multiplication function.
    }
}
I call this function with the same rotations I used to call glRotatef with and in the same order, but the result is different. All rotations are done around the global axes, while glRotatef would rotate around the local axis of in.
For example, I have a plane. I pitch up 90 degrees (this gives the expected result with both glRotatef and my rotation function) and persist the transformation. If I then bank 90 degrees with glRotatef (glRotatef(90, 0.0f, 0.0f, 1.0f)), the plane rotates around the transformation's local Z axis pointing out of the plane's nose, which is what I want. But if I bank 90 degrees with my code (rotate_double_matrix(in, 90.0f, 0.0, 0.0, 1.0, out)), the plane still rotates around the global Z axis.
Similar issues happen if I change the order of rotations - the first rotation gives the expected result, but subsequent rotations still happen around the global axes.
How does glRotatef rotate around a matrix's local axes? What do I need to change in my code to get the same result? I assume rotate_double_matrix needs to modify the x, y, z values passed in based on the in matrix somehow, but I'm not sure.
You're probably multiplying the matrices in the wrong order. Try changing
multiply_double_matrices(in, rotMat, out);
to
multiply_double_matrices(rotMat, in, out);
I can never remember which way is right, and there's a reasonable chance multiply_double_matrices is backwards anyway (at least if I'd written it :)
The order you multiply matrices in matters. Since rotMat holds your rotation, and in holds the combination of all other matrices applied so far, i.e. "everything else", multiplying in the wrong order means that rotMat gets applied after everything else instead of before everything else. (And I didn't get that part backwards! If you want rotMat to be the "top of stack" transformation, that means you actually want it to be the first when your vertex coordinates are processed)
Another possibility is that you mixed up rows with columns. OpenGL matrices go down, then across, i.e.
matrix[0] matrix[4] matrix[8] matrix[12]
matrix[1] matrix[5] matrix[9] matrix[13]
matrix[2] matrix[6] matrix[10] matrix[14]
matrix[3] matrix[7] matrix[11] matrix[15]
even though 2D arrays are traditionally stored across, then down:
matrix[0] matrix[1] matrix[2] matrix[3]
matrix[4] matrix[5] matrix[6] matrix[7]
matrix[8] matrix[9] matrix[10] matrix[11]
matrix[12] matrix[13] matrix[14] matrix[15]
Getting this wrong can cause similar-looking, but mathematically different, issues.
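For concreteness, here is one way multiply_double_matrices could look for the column-major layout above (a sketch reusing the question's double_matrix_t typedef; I'm assuming out doesn't alias either input):

/* out = a * b, with all matrices in OpenGL's column-major layout,
 * i.e. element (row r, column c) lives at index c * 4 + r. */
void multiply_double_matrices(const double_matrix_t a,
                              const double_matrix_t b,
                              double_matrix_t out)
{
    for (int c = 0; c < 4; c++)
    {
        for (int r = 0; r < 4; r++)
        {
            double sum = 0.0;
            for (int k = 0; k < 4; k++)
                sum += a[k * 4 + r] * b[c * 4 + k];
            out[c * 4 + r] = sum;
        }
    }
}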

Angle to Quaternion - Making an object face another object

I have two objects in a 3D world and want to make one object face the other. I already calculated all the angles and such (pitch angle and yaw angle).
The problem is that I have no functions to set the yaw or pitch individually, which means I have to do it with a quaternion. The only function I have is SetEnetyQuaternion(float x, float y, float z, float w). This is the pseudocode I have so far:
float px, py, pz;
float tx, ty, tz;
float distance;
GetEnetyCoordinates(ObjectMe, &px, &py, &pz);
GetEnetyCoordinates(TargetObject, &tx, &ty, &tz);
float yaw, pitch;
float deltaX, deltaY, deltaZ;
deltaX = tx - px;
deltaY = ty - py;
deltaZ = tz - pz;
float hyp = SQRT((deltaX*deltaX) + (deltaY*deltaY) + (deltaZ*deltaZ));
yaw = (ATAN2(deltaY, deltaX));
if(yaw < 0) { yaw += 360; }
pitch = ATAN2(-deltaZ, hyp);
if (pitch < 0) { pitch += 360; }
//here is the part where i need to do a calculation to convert the angles
SetEnetyQuaternion(ObjectMe, pitch, 0, yaw, 0);
What I tried so far was taking the sine of those angles divided by 2, but that didn't work - I think that's for Euler angles or something like that, and it didn't help me. The roll (y axis) and the w argument can be left out, I think, as I don't want my object to have any roll. That's why I put 0 in.
If anyone has any idea, I would really appreciate the help.
Thank you in advance :)
Let's suppose that the quaternion you want describes the attitude of the player relative to some reference attitude. It is then essential to know what the reference attitude is.
Moreover, you need to understand that an object's attitude comprises more than just its facing -- it also comprises the object's orientation around that facing. For example, imagine the player facing directly in the positive x direction of the position coordinate system. This affords many different attitudes, from the one where the player is standing straight up to ones where he is horizontal on either his left or right side, to one where he is standing on his head, and all those in between.
Let's suppose that the appropriate reference attitude is the one facing parallel to the positive x direction, and with "up" parallel to the positive z direction (we'll call this "vertical"). Let's also suppose that among the attitudes in which the player is facing the target, you want the one having "up" most nearly vertical. We can imagine the wanted attitude change being performed in two steps: a rotation about the coordinate y axis followed by a rotation about the coordinate z axis. We can write a unit quaternion for each of these, and the desired quaternion for the overall rotation is the Hamilton product of these quaternions.
The quaternion for a rotation of angle θ around the unit vector described by coordinates (x, y, z) is (cos θ/2, x sin θ/2, y sin θ/2, z sin θ/2). Consider then, the first quaternion you want, corresponding to the pitch. You have
double semiRadius = sqrt(deltaX * deltaX + deltaY * deltaY);
double cosPitch = semiRadius / hyp;
double sinPitch = deltaZ / hyp; // but note that we don't actually need this
But you need the sine and cosine of half that angle. The half-angle formulae come in handy here:
double sinHalfPitch = sqrt((1 - cosPitch) / 2) * ((deltaZ < 0) ? -1 : 1);
double cosHalfPitch = sqrt((1 + cosPitch) / 2);
The cosine will always be nonnegative because the pitch angle must be in the first or fourth quadrant; the sine will be positive if the object is above the player, or negative if it is below. With all that being done, the first quaternion is
(cosHalfPitch, 0, sinHalfPitch, 0)
Similar analysis applies to the second quaternion. The cosine and sine of the full rotation angle are
double cosYaw = deltaX / semiRadius;
double sinYaw = deltaY / semiRadius; // again, we don't actually need this
We can again apply the half-angle formulae, but now we need to account for the full angle to be in any quadrant. The half angle, however, can be only in quadrant 1 or 2, so its sine is necessarily non-negative:
double sinHalfYaw = sqrt((1 - cosYaw) / 2);
double cosHalfYaw = sqrt((1 + cosYaw) / 2) * ((deltaY < 0) ? -1 : 1);
That gives us an overall second quaternion of
(cosHalfYaw, 0, 0, sinHalfYaw)
The quaternion you want is the Hamilton product of these two, and you must take care to compute it with the correct operand order (qYaw * qPitch), because the Hamilton product is not commutative. All the zeroes in the two factors make the overall expression much simpler than it otherwise would be, however:
(cosHalfYaw * cosHalfPitch,
-sinHalfYaw * sinHalfPitch,
cosHalfYaw * sinHalfPitch,
sinHalfYaw * cosHalfPitch)
At this point I remind you that we started with an assumption about the reference attitude for the quaternion system, and this result depends on that choice. I also remind you that I made an assumption about the wanted attitude, and that also affects this result.
Finally, I observe that this approach breaks down when the target object is very nearly directly above or below the player (corresponding to semiRadius taking a value very near zero) and when the player is very nearly on top of the target (corresponding to hyp taking a value very near zero). There is a non-zero chance of causing a division by zero if you use these formulae exactly as given, so you'll want to think about how to deal with that.
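Putting the pieces together, a minimal C sketch of the whole computation (the function name and out-parameter style are mine; map the (w, x, y, z) components onto SetEnetyQuaternion's argument order as appropriate, and note the degenerate cases just mentioned are not handled):

#include <math.h>

/* Computes the attitude quaternion derived above: a rotation about
 * the y axis (pitch) followed by one about the z axis (yaw), under
 * the answer's reference attitude (facing +x, "up" along +z). */
void facing_quaternion(double px, double py, double pz,  /* player */
                       double tx, double ty, double tz,  /* target */
                       double *w, double *x, double *y, double *z)
{
    double deltaX = tx - px;
    double deltaY = ty - py;
    double deltaZ = tz - pz;
    double semiRadius = sqrt(deltaX * deltaX + deltaY * deltaY);
    double hyp = sqrt(semiRadius * semiRadius + deltaZ * deltaZ);

    /* Half-angle sine/cosine of the pitch rotation. */
    double cosPitch = semiRadius / hyp;
    double sinHalfPitch = sqrt((1 - cosPitch) / 2) * ((deltaZ < 0) ? -1 : 1);
    double cosHalfPitch = sqrt((1 + cosPitch) / 2);

    /* Half-angle sine/cosine of the yaw rotation. */
    double cosYaw = deltaX / semiRadius;
    double sinHalfYaw = sqrt((1 - cosYaw) / 2);
    double cosHalfYaw = sqrt((1 + cosYaw) / 2) * ((deltaY < 0) ? -1 : 1);

    /* Hamilton product qYaw * qPitch, expanded as in the answer. */
    *w = cosHalfYaw * cosHalfPitch;
    *x = -sinHalfYaw * sinHalfPitch;
    *y = cosHalfYaw * sinHalfPitch;
    *z = sinHalfYaw * cosHalfPitch;
}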

How to convert X and Y screen coordinates to -1.0, 1.0 float? (in C)

I'm trying to convert X and Y screen coordinates to a float range of -1.0, to 1.0.
(-1,-1 being 0,0 and if the resolution was 640x480, 1,1 would be 640,480. 0,0 would be 320,240... the center.)
How would I approach this? I made several futile attempts, and I'm not exactly mathematically inclined.
Here is some C code
void convert(int X, int Y)
{
    float newx = 2 * (X - 320.0f) / 640.0f;
    float newy = 2 * (Y - 240.0f) / 480.0f;
    printf("New x = %f, New y = %f", newx, newy);
}
EDIT: Added the f suffix to ensure we do not do integer math!
In the X direction:
Screen coordinate 0 corresponds to -1.0.
Screen coordinate 640 corresponds to 1.0.
You can convert that to an equation:
Given fX in floating point coordinates, the screen coordinate sX is:
sX = 640*(fX + 1.0)/2.0
or
sX = 320*(fX + 1.0)
Similarly, given fY in floating point coordinates, the screen coordinate sY is:
sY = 480*(fY + 1.0)/2.0
or
sY = 240*(fY + 1.0)
The inverse of that:
Given sX in screen coordinates, fX in real coordinates is:
fX = (sX/320 - 1.0)
Given sY in screen coordinates, fY in real coordinates is:
fY = (sY/240 - 1.0)
When you convert that to code, make sure the last two equations have a 1.0. Otherwise, you'll lose accuracy due to integer division.
fX = (1.0*sX/320 - 1.0)
fY = (1.0*sY/240 - 1.0)
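In code, both directions generalize to any resolution. A small sketch (the function names are mine):

/* Screen -> [-1.0, 1.0]; width/height are the resolution (e.g. 640x480). */
void screen_to_float(int sX, int sY, int width, int height,
                     float *fX, float *fY)
{
    *fX = 2.0f * sX / width - 1.0f;
    *fY = 2.0f * sY / height - 1.0f;
}

/* [-1.0, 1.0] -> screen. */
void float_to_screen(float fX, float fY, int width, int height,
                     int *sX, int *sY)
{
    *sX = (int)((fX + 1.0f) * width / 2.0f);
    *sY = (int)((fY + 1.0f) * height / 2.0f);
}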

Better control over Tessellation in OpenGL?

I spent the day working on an OpenGL application that will tessellate a mesh and apply a lens distortion. The goal is to be able to render wide-angle shots for a variety of different lenses. So far I've got the shaders properly applying the distortion, but I've been having issues controlling the tessellation the way I want to. Right now my Tessellation Control Shader just breaks a single triangle into a set number of smaller triangles, then I apply the lens distortion in the Tessellation Evaluation Shader.
The problem I'm having with this approach is that when I have really large triangles in the scene, they tend to need more warping. This means they need to be tessellated more in order to ensure good looking results. Unfortunately, I can't compute the size of a triangle (in screen space) in the Vertex Shader or the Tessellation Control Shader, but I need to define the tessellation amount in the Tessellation Control shader.
My question is then, is there some way to get a hold of the entire primitive in OpenGL's programmable pipeline, compute some metrics about it, then use that information to control tessellation?
Here's some example images of the problem for clarity...
Figure 1 (Above): Each red or green square was originally 2 triangles; this example looks good because the triangles were small.
Figure 2 (Above): Each red or green region was originally 2 triangles; this example looks bad because the triangles were large.
Figure 3 (Above): Another example with small triangles but with a much, much larger grid. Notice how much things curve on the edges. It still looks good with a tessellation level of 4.
Figure 4 (Above): Another example with large triangles, only showing the center 4 columns because the image is unintelligible if more columns are present. This shows how very large triangles don't get tessellated well. If I set the tessellation really, really high then this comes out nice, but then I'm performing a crazy amount of tessellation on smaller triangles too.
In a Tessellation Control Shader (TCS) you have read access to every vertex in the input patch primitive. While that sounds nice on paper, if you are trying to compute the maximum edge length of a patch, it would actually mean iterating over every vertex in the patch on every TCS invocation and that's not particularly efficient.
Instead, it may be more practical to pre-compute the patch's center in object-space and determine the radius of a sphere that tightly bounds the patch. Store this bounding information as an extra vec4 attribute per-vertex, packed as shown below.
Pseudo-code for a TCS that computes the longest length of the patch in NDC-space
#version 420

uniform mat4 model_view_proj;

in vec4 bounding_sphere []; // xyz = center (object-space), w = radius

void main (void)
{
    vec4  center = vec4 (bounding_sphere [0].xyz, 1.0f);
    float radius = bounding_sphere [0].w;

    // Transform object-space X extremes into clip-space
    vec4 min_0 = model_view_proj * (center - vec4 (radius, 0.0f, 0.0f, 0.0f));
    vec4 max_0 = model_view_proj * (center + vec4 (radius, 0.0f, 0.0f, 0.0f));

    // Transform object-space Y extremes into clip-space
    vec4 min_1 = model_view_proj * (center - vec4 (0.0f, radius, 0.0f, 0.0f));
    vec4 max_1 = model_view_proj * (center + vec4 (0.0f, radius, 0.0f, 0.0f));

    // Transform object-space Z extremes into clip-space
    vec4 min_2 = model_view_proj * (center - vec4 (0.0f, 0.0f, radius, 0.0f));
    vec4 max_2 = model_view_proj * (center + vec4 (0.0f, 0.0f, radius, 0.0f));

    // Transform from clip-space to NDC
    min_0 /= min_0.w; max_0 /= max_0.w;
    min_1 /= min_1.w; max_1 /= max_1.w;
    min_2 /= min_2.w; max_2 /= max_2.w;

    // Calculate the distance (ignore depth) covered by all three pairs of extremes
    float dist_0 = distance (min_0.xy, max_0.xy);
    float dist_1 = distance (min_1.xy, max_1.xy);
    float dist_2 = distance (min_2.xy, max_2.xy);

    // A max_dist >= 2.0 indicates the patch spans the entire screen in one direction
    float max_dist = max (dist_0, max (dist_1, dist_2));

    // ...
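    // An assumed continuation (not part of the original answer): one
    // way to feed max_dist into the tessellation levels is to remap
    // its [0.0, 2.0] NDC range onto [1.0, 64.0] and clamp.
    float level = clamp (max_dist * 32.0, 1.0, 64.0);

    gl_TessLevelOuter [0] = level;
    gl_TessLevelOuter [1] = level;
    gl_TessLevelOuter [2] = level;
    gl_TessLevelInner [0] = level;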
}
If you run your 4th diagram through this TCS, you should come up with a value for max_dist very nearly 2.0, which means you need as much subdivision as possible. Meanwhile, many of the patches on the periphery of the sphere in the 3rd diagram will be close to 0.0; they don't need much subdivision.
This does not properly deal with situations where part of the patch is offscreen. You would need to clamp the NDC extremes to [-1.0, 1.0] to handle those situations properly, but that seemed like more trouble than it was worth.
