Rotate Camera in the Direction behind object in OpenGL - c

I'm making a game in OpenGL, using freeglut.
I have a car which I can move back and forward using keys, and the camera follows it. Now, when I turn the car (glRotate in the xz plane), I want to change the camera position (using gluLookAt) so it always points at the back of the car.
Any suggestions on how to do that?

For camera follow I use the object's transform matrix:
Get the object's transform matrix.
camera = object
Use glGetFloatv(GL_MODELVIEW_MATRIX, ...) for that, or keep your own copy of the matrix.
Shift/rotate the matrix so its Z axis points where you want to look.
I keep my objects aligned with forward along the Z axis, but not all mesh models are like this, so rotate by (+/-)90 deg around x, y or z to match this:
Z-axis is forward (or backward, depending on your projection matrix and depth function)
X-axis is right
Y-axis is up
with respect to your camera/screen coordinate system (projection matrix). Then translate to the "behind the car" position.
Apply POV rotation (optional).
If the camera view can be rotated slightly away from the forward direction (mouse look), do it at this step:
camera *= rotation_POV
Convert the matrix to a camera.
A camera matrix is usually the inverse of the coordinate-system matrix it represents, so:
camera = Inverse(camera)
For more info look at understanding transform matrices; the OpenGL inverse-matrix computation in C++ is included there.
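The same "camera behind the object" idea can also be expressed directly with gluLookAt, which the question mentions. A minimal sketch in C, assuming you track the car's position and heading yourself (carX, carZ, carAngle are placeholder names, and the sign convention assumes heading 0 means the car faces +Z):

#include <GL/glu.h>
#include <math.h>

/* Place the eye a fixed distance behind and above the car, looking at it. */
void followCamera(float carX, float carZ, float carAngle /* degrees, about Y */) {
    float rad    = carAngle * 3.14159265f / 180.0f;
    float dist   = 10.0f;                      /* how far behind the car */
    float height = 3.0f;                       /* how far above the car  */
    float eyeX   = carX - dist * sinf(rad);    /* step back along the car's heading      */
    float eyeZ   = carZ - dist * cosf(rad);    /* flip signs if your convention differs  */

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(eyeX, height, eyeZ,              /* eye: behind and above the car */
              carX, 1.0f,  carZ,               /* center: look at the car       */
              0.0f, 1.0f,  0.0f);              /* up vector                     */
}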

Related

Barycentric coordinates. Harmonic warping of a point relative to a concave polygon in C

I'm trying to get an array of weights that represent the influence a polygon's vertices have on an arbitrary position inside it, so that I can interpolate the vertices of a deformed version of the polygon and get the corresponding deformed position.
Mean Value and Harmonic warping:
It seems that harmonic coordinates would do this? My mesh goal:
I don't have an easy time reading math papers. I found this Meshlab article, but I'm still not grasping how to process each sampled position relative to the polygon's vertices:
Meshlab article
Thanks!
You could try to create a Delaunay triangulation of the polygon and then use Barycentric coordinates within each triangle. This mapping is well defined and continuous, but in most cases probably not smooth (i.e. the derivative is not continuous).
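A minimal sketch of that suggestion in C for a single triangle of the triangulation (Vec2 and warp_point are just illustrative names, and the Delaunay triangulation itself is assumed to be done elsewhere): compute the barycentric weights of the query point in the original triangle, then apply the same weights to the matching triangle of the deformed polygon.

typedef struct { double x, y; } Vec2;

/* Barycentric warp: weights of p in triangle (a, b, c), applied to (a2, b2, c2). */
static Vec2 warp_point(Vec2 p, Vec2 a, Vec2 b, Vec2 c,
                       Vec2 a2, Vec2 b2, Vec2 c2)
{
    double det = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    double w0  = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / det;
    double w1  = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / det;
    double w2  = 1.0 - w0 - w1;                 /* the three weights sum to 1 */

    Vec2 out = { w0 * a2.x + w1 * b2.x + w2 * c2.x,
                 w0 * a2.y + w1 * b2.y + w2 * c2.y };
    return out;
}

The weights (w0, w1, w2) are exactly the per-vertex influences the question asks for, restricted to the triangle containing the point.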

OpenGL/GLUT: glTranslatef and glRotatef before drawing cube, or after?

I'm currently working with OpenGL and GLUT frameworks to play with particles.
However, I can't seem to get my logic for the rotations/translations working correctly.
Pseudo code of my current situation:
void display() {
    drawEnvironment();
    for each particle 'part' in the array {
        glPushMatrix();              // pushes the matrix for the current transformations? i.e. this particle?
        glRotatef(part.angles);      // rotate this matrix based on its own angles (constantly changing)
        drawParticle(part);          // draws at the origin
        glTranslatef(part.position); // translate to the position
        glPopMatrix();
    }
}
What I think I'm doing here is as follows:
Pushing a copy of the current transformation matrix onto the stack (for the current particle)
Rotating that matrix (glRotatef rotates around the origin (0,0,0))
Drawing the particle at the origin so it rotates on the spot
Translating the particle to its position now that it's been rotated
Popping the matrix back off the stack
I've also tried translating -> rotating -> drawing, and a few other mixes.
Hard to explain what's going wrong without a quick video:
https://www.youtube.com/watch?v=G0ouhCKKcIM
It looks like it's rotating after being translated, so since it rotates around the origin it follows that larger circle rather than spinning on its own axis.
The transformations should be applied in this order:
Push the matrix onto the stack
Translate to the position
Rotate it
Draw it
Pop the matrix
This is because OpenGL post-multiplies each new transformation onto the current matrix, so the transformation you specify last is the one applied to the vertices first.
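A corrected version of the loop as a sketch (the Particle fields x, y, z and angle, and the globals numParticles/parts, are assumptions about what the actual code contains):

void display(void) {
    drawEnvironment();
    for (int i = 0; i < numParticles; ++i) {
        Particle *part = &parts[i];
        glPushMatrix();                            /* save the current modelview matrix          */
        glTranslatef(part->x, part->y, part->z);   /* specified first, applied to vertices last  */
        glRotatef(part->angle, 0.0f, 1.0f, 0.0f);  /* specified last, applied first: spins in place */
        drawParticle(part);                        /* geometry is modeled around the origin      */
        glPopMatrix();                             /* restore for the next particle              */
    }
    glutSwapBuffers();
}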

OpenGL Rotate an Object around It's Local Axes

Imagine a 3D rectangle at the origin. It is first rotated about the Y-axis. So far so good. Now it is rotated around the X-axis. However, OpenGL (API: glRotatef) interprets the X-axis to be the global X-axis. How can I ensure that the "axes move with the object"?
This is very much like an airplane. For example, if yaw (Y rotation) is applied first, and then pitch (X rotation), a correct pitch would be a rotation about the plane's local X axis.
EDIT: I have seen this called the gimbal lock problem, but I don't think that's what it is.
You cannot consistently describe an aeroplane's orientation as one x rotation and one y rotation. Not even if you also store a z rotation. That's exactly the gimbal lock problem.
The crux of it is that you have to apply the rotations in some order. Say it's x then y then z for the sake of argument. Then what happens if the x rotation is by 90 degrees? That folds the y axis onto where the z axis was. Then say the y rotation is also by 90 degrees. That's now bent the z axis onto where the x axis was. So now what effect does any z rotation have?
That's just an easy to grasp example. It's not a special case. You can't wave your hands out of it by saying "oh, I'll detect when to do z rotations first" or "I'll do 90 degree rotations with a special pathway" or any other little hack. Trying to store and update orientations as three independent scalars doesn't work.
In classic OpenGL, a call to glRotatef means "... and then rotate the current matrix like this". It's not relative to world coordinates or to model coordinates or to any other space that you're thinking in.
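One common way around this in classic OpenGL is to stop storing three angles and instead accumulate the orientation as a matrix. A sketch (pitch_local is a hypothetical helper; it only borrows the modelview stack as scratch space):

static GLfloat orientation[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };

/* Rotate about the object's *local* X axis. glRotatef post-multiplies the
   current matrix, so loading the orientation first gives orientation * Rx,
   which is a rotation in the object's own frame. Pre-multiplying
   (Rx * orientation) would instead rotate about the global X axis. */
void pitch_local(GLfloat degrees) {
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadMatrixf(orientation);
    glRotatef(degrees, 1.0f, 0.0f, 0.0f);
    glGetFloatv(GL_MODELVIEW_MATRIX, orientation);   /* read the result back */
    glPopMatrix();
}

/* When drawing the object, apply the stored orientation with glMultMatrixf(orientation). */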

how do I do "reverse" texture mapping from texture image x,y to 3d space?

I am using WPF 3D, but I think this question should apply to any 3d texture mapping.
Suppose I have a model of a cow, and I want to draw a circular spot on the cow (and I want to do this dynamically -- suppose I don't know the location of the spot until run-time). I could do this by coloring the vertexes (vertexes are assigned a color based on their distance from the center of the spot), but if the model is fairly low-poly, that will give a pretty jagged-edged circle.
I could do it using a pixel shader, where the shader colors each pixel based on its distance from the center of the spot. But suppose I don't have access to pixel shaders (since I don't in WPF).
So, it seems that what I want to do is dynamically create a texture with the circle pattern on it, and texture the cow with it.
The question is: As I'm drawing that texture, how can I know what 3d coordinate in model space a given xy coordinate on the texture image corresponds to?
That is, suppose I have already textured my model with a plain white texture -- I've set up texture coordinates, done texture mapping, but don't have the texture image yet. So I have this 1000x1000 (or whatever) pixel image that gets draped nicely over the cow according to some nice texture coordinates that have been set up on the model beforehand. I understand that when the 3D hardware goes to draw a given triangle, it uses the texture coordinates of the vertexes of the triangle to find the corresponding triangular region of the image, and then interpolates across the surface of the triangle to fill displayed model pixels with colors from that triangular region of the image.
How do I go the other way? How do I say, for this given xy point on my texture image, and given the texture coordinates that have already been set up on the model, what's the 3d coordinate in model space that this image pixel is going to correspond to once texture mapping happens?
If I had such a function, I could color my texture map image such that all the points (in 3d space) within a certain distance of the circle center point on the cow would get one color, and all points outside that distance would get another color, and I'd end up with a nice, crisp circular spot on the cow, even with a relatively low-poly model. Does that sound right?
I do understand that given the texture coordinates for the vertexes of each triangle, I can step through the triangles in my model, find the corresponding triangle on the texture image, and do my own interpolation, across the texture pixels in that triangle, by interpolating across the 3d plane determined by the vertex points. And that doesn't sound too hard. But I'm just trying to understand if there is some standard 3d concept/function where I can just call a ready-made function to give me the model space coordinates for a given texture xy.
I did end up getting this working. I walk every point on the texture (1024 x 1024 points). Using the model's texture coordinates, I determine which polygon face, if any, the given u,v point is inside of. If it's inside of a face, I get the model coordinates for each point on that face. I then do a barycentric interpolation as described here: http://paulbourke.net/texture_colour/interpolation/
That is, for each u,v point on the texture, I use an inside-polygon check to determine which quad it's in on the 2D texture sheet and then I use an interpolation on that same 2D geometry as described in the link above, but instead of interpolating colors or normals I'm interpolating 3D coordinates.
I can then use the 3D coordinate to color the point on the texture (e.g., to color a circular spot on the cow based on how far in model space the given texture point is from the spot center point). And then I can apply the texture to the model, and it works.
Again, it seems like this must be a standard procedure with a name...
One issue is that the result is very sensitive to the quality of the texturing as set up by the modeler. For instance, if a relatively large quad on the cow corresponds to a small quad on the texture image, there just aren't enough pixels to work with to get a smooth curve within that model quad once the texture is applied. You can of course use a higher-res texture, such as 2048x2048, but then your loop time is 4x.
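A sketch of that texel loop in C for a single triangle of the mesh (split quads into two triangles first; the function names, the RGB texture layout and the hard black/white spot colouring are all illustrative): UVs are scaled to pixel units, each texel is tested against the triangle, and the 3D position is interpolated with the same weights used for the inside test.

static double edge(double ax, double ay, double bx, double by, double px, double py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);    /* 2x signed area */
}

void bake_triangle(const double uv0[2], const double uv1[2], const double uv2[2],
                   const double p0[3], const double p1[3], const double p2[3],
                   int W, int H, unsigned char *tex,         /* W x H RGB image */
                   const double spot[3], double radius)
{
    double area = edge(uv0[0], uv0[1], uv1[0], uv1[1], uv2[0], uv2[1]);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            double px = x + 0.5, py = y + 0.5;               /* texel centre in pixels */
            double w0 = edge(uv1[0], uv1[1], uv2[0], uv2[1], px, py) / area;
            double w1 = edge(uv2[0], uv2[1], uv0[0], uv0[1], px, py) / area;
            double w2 = 1.0 - w0 - w1;
            if (w0 < 0.0 || w1 < 0.0 || w2 < 0.0) continue;  /* texel lies outside this triangle */

            /* interpolated model-space position of this texel */
            double mx = w0 * p0[0] + w1 * p1[0] + w2 * p2[0];
            double my = w0 * p0[1] + w1 * p1[1] + w2 * p2[1];
            double mz = w0 * p0[2] + w1 * p1[2] + w2 * p2[2];

            double dx = mx - spot[0], dy = my - spot[1], dz = mz - spot[2];
            unsigned char c = (dx*dx + dy*dy + dz*dz < radius*radius) ? 0 : 255;
            tex[(y * W + x) * 3 + 0] = c;                    /* black spot on white */
            tex[(y * W + x) * 3 + 1] = c;
            tex[(y * W + x) * 3 + 2] = c;
        }
}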
It's actually a rasterization process, if I didn't misunderstand your question. In lightmapping, one also needs to find the corresponding positions and normals in world space for each texel in lightmap space and then bake irradiance, which seems similar to your goal.
You can use the standard graphics API to do this task instead of writing your own implementation. Let:
Size of texture -> Size of G-buffers
UVs of each mesh triangle -> Vertex positions vec3(u, v, 0) of the input stage
Indices of each mesh triangle -> Indices of the input stage
Positions (and normals, etc.) of each mesh triangle -> Attributes of the input stage
After the rasterizer stage of the graphics pipeline, all fragments that lie within the UV triangle are generated, and the attributes that have been supplied are interpolated automatically. You can do whatever you want now in pixel shader!
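A minimal sketch of that idea as a GLSL vertex/fragment shader pair, embedded as C strings (the attribute and uniform names are assumptions): the mesh's UVs are used as output positions so rasterization happens in texture space, while the real model-space position is carried along as an interpolated attribute and tested against the spot.

static const char *uv_raster_vs =
    "#version 120\n"
    "attribute vec2 a_uv;          /* the vertex's texture coordinate   */\n"
    "attribute vec3 a_modelPos;    /* the vertex's model-space position */\n"
    "varying vec3 v_modelPos;\n"
    "void main() {\n"
    "    v_modelPos  = a_modelPos;\n"
    "    gl_Position = vec4(a_uv * 2.0 - 1.0, 0.0, 1.0);   /* [0,1] UV -> clip space */\n"
    "}\n";

static const char *uv_raster_fs =
    "#version 120\n"
    "varying vec3 v_modelPos;\n"
    "uniform vec3  u_spotCenter;   /* spot centre in model space */\n"
    "uniform float u_spotRadius;\n"
    "void main() {\n"
    "    float d = distance(v_modelPos, u_spotCenter);\n"
    "    gl_FragColor = (d < u_spotRadius) ? vec4(0.0, 0.0, 0.0, 1.0)\n"
    "                                      : vec4(1.0, 1.0, 1.0, 1.0);\n"
    "}\n";

Render the mesh once with these shaders into a framebuffer object the size of the texture, and the result is the baked spot texture.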

does glRotate in OpenGL rotate the camera or rotate the world axis or rotate the model object?

I want to know whether glRotate rotates the camera, the world axis, or the object. Explain how they are different with examples.
"the camera"
There is no camera in OpenGL.
"the world axis"
There is no world in OpenGL.
"or the object"
There are no objects in OpenGL.
Confused?
OpenGL is a drawing system, that operates with points, lines and triangles. There is no concept of a scene or a world in OpenGL. All there is are vertices of which each has a set of attributes and there is the state of OpenGL which determines how vertices are turned into pixels.
The very first stage of this process is getting the vertex positions within the viewport. In the fixed-function pipeline (i.e. without shaders), each vertex position is first multiplied with the so-called "modelview" matrix; the intermediate result is used for lighting calculations and then multiplied with the "projection" matrix. After that, clipping and then normalization into viewport coordinates are applied.
Those two matrices I mentioned serve two purposes. The first one, "modelview", is used to apply some transformation to the incoming vertices so that they end up in the desired spot relative to the origin. There is no difference between first moving geometry to some place in the world and then moving the viewpoint within the world, or keeping the viewpoint at the origin and moving the whole world in the opposite direction. All of this can be described by the modelview matrix.
The second one, "projection", works together with the normalization process to behave like a kind of "lens", so to speak. With this you set the field of view (and a few other parameters, like shift, which you need for certain applications – don't worry about it).
The interesting thing about matrices is that they're non-commutative, i.e. for two given matrices M, N:
M * N ≠ N * M (for most M, N)
This ultimately means that you can compose a series of transformations A, B, C, D, ... into one single compound transformation matrix T by multiplying the primitive transformations onto each other in the right order.
The OpenGL matrix manipulation functions (they're obsolete, BTW) do just that. You have a matrix selected for manipulation (the matrix mode), for example the modelview matrix M. Then glRotate effectively does this:
M *= R(angle, axis)
i.e. the active matrix gets multiplied by a rotation matrix constructed from the given parameters. Similarly for scale and translate.
Whether this appears to behave like moving a camera or like placing an object depends entirely on how, and in which order, those manipulations are combined.
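A small sketch of that point: the very same glRotatef call reads as "rotating the camera" or "rotating the object" purely depending on where it sits in the chain (drawScene/drawObject are placeholders for your own drawing code):

GLfloat yaw = 30.0f;

/* Feels like turning the viewpoint: the rotation is specified before the
   translation that pushes the geometry away, so the whole scene swings
   around the viewer. */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(yaw, 0.0f, 1.0f, 0.0f);
glTranslatef(0.0f, 0.0f, -10.0f);
drawScene();

/* Feels like spinning the object in place: the rotation is specified last,
   right before the geometry, so it acts in the object's own frame. */
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -10.0f);
glRotatef(yaw, 0.0f, 1.0f, 0.0f);
drawObject();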
But for OpenGL there are just numbers/vectors (vertex attributes), which somehow translate into 2-dimensional viewport coordinates that get drawn as points, or filled in between as lines or triangles.
glRotate works on the current matrix, so it depends on whether that matrix is the camera one or a world-transformation one. To learn more about the current matrix, have a look at glMatrixMode().
Finding examples is just a matter of googling: I found this one, which I think should help you figure out what's happening.
