does glRotate in OpenGL rotate the camera or rotate the world axis or rotate the model object? - c

I want to know whether glRotate rotates the camera, the world axis, or the object. Explain how they are different with examples.

the camera
There is no camera in OpenGL.
the world axis
There is no world in OpenGL.
or the object.
There are no objects in OpenGL.
Confused?
OpenGL is a drawing system that operates on points, lines and triangles. There is no concept of a scene or a world in OpenGL. All there is are vertices, each of which has a set of attributes, and the state of OpenGL, which determines how vertices are turned into pixels.
The very first stage of this process is getting the vertex positions within the viewport. In the fixed-function pipeline (i.e. without shaders), each vertex position is first multiplied with the so-called "modelview" matrix; the intermediate result is used for lighting calculations and then multiplied with the "projection" matrix. After that, clipping and then normalization into viewport coordinates are applied.
Those two matrices I mentioned serve two purposes. The first one, "modelview", applies some transformation to the incoming vertices so that they end up in the desired spot relative to the origin. There is no difference between first moving geometry to some place in the world and then moving the viewpoint within the world, or keeping the viewpoint at the origin and moving the whole world in the opposite direction. All of this can be described by the modelview matrix.
The second one "projection" works together with the normalization process to behave like a kind of "lens", so to speak. With this you set the field of view (and a few other parameters, like shift, which you need for certain applications – don't worry about it).
The interesting thing about matrices is that they're non-commutative, i.e. for two given matrices M, N
M * N ≠ N * M   (for most M, N)
This ultimately means that you can compose a series of transformations A, B, C, D, ... into a single compound transformation matrix T by multiplying the primitive transformations onto each other in the right order.
The OpenGL matrix manipulation functions (which are obsolete, BTW) do just that. You have a matrix selected for manipulation (the matrix mode), for example the modelview matrix M. Then glRotate effectively does this:
M *= R(angle,axis)
i.e. the active matrix gets multiplied by a rotation matrix constructed from the given parameters. The same goes for scale and translate.
Whether this appears to behave like moving a camera or like placing an object depends entirely on how, and in which order, those manipulations are combined.
But for OpenGL there are just numbers/vectors (vertex attributes), which somehow translate into 2-dimensional viewport coordinates that get drawn as points, or filled in between as lines or triangles.
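To make the order dependence concrete, here is a minimal sketch (angle, the distance of 5 units and drawGeometry() are made-up placeholders): the same two calls, in different order, read like "object spinning in place" versus "camera turning in place".

/* Variant A: translate, then rotate - the geometry spins about its own
 * centre, 5 units in front of the viewpoint ("rotate the object"). */
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -5.0f);
glRotatef(angle, 0.0f, 1.0f, 0.0f);
drawGeometry();   /* hypothetical helper that emits the vertices */

/* Variant B: rotate, then translate - the geometry sweeps around the
 * viewpoint, as if the camera itself were turning ("rotate the camera"). */
glLoadIdentity();
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glTranslatef(0.0f, 0.0f, -5.0f);
drawGeometry();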

glRotate works on the current matrix. So it depends on whether the matrix is the camera one or a world transformation one. To learn more about the current matrix, have a look at glMatrixMode().
Finding examples is just a matter of googling: I found this one, which should help you figure out what's happening.

Related

Maya export uvset from one model to another

Basically I want a way to export the UVs from one model to another as part of our pipeline, where rig and texture/lookdev models (created simultaneously) need to be merged.
I would like a solution other than importing the model into the scene and copying the UV set.
Something like an XML export.
Is there any way?
Thanks in advance.
You can't really do this precisely unless the meshes are topologically identical. You can do a decent, but not perfect, job with something like this:
For each triangle in the source model, derive a tangent-space matrix. That's the matrix which converts the world-space points of that triangle into the UV points of that face.
For each triangle in the target model, see if it matches one of the triangles in the source. "Match" will mean "has the same three corners" regardless of order (assuming you can fix up your normals in a separate step).
Where source triangles have exact matches among the target triangles, apply a planar projection that matches the tangent-space matrix.
Where a target triangle doesn't have an exact match you'll have to guess; you can try finding the N closest matches and doing a weighted blend between all of their matrices, or something like that – but it will be hacky.
This should be visually pretty close to the source, but all of the UV triangles will be disconnected; you'll need to merge coincident UVs to prevent seams.
It's a pretty non-trivial project, unfortunately.
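One way to sketch the per-triangle mapping described above without building an explicit tangent-space matrix is barycentric interpolation: express a target point in terms of the matched source triangle's corners and blend the source UVs with the same weights. A rough C sketch under that assumption (all types and helper names are made up):

/* Sketch: transfer UVs from a matched source triangle to a point on the
 * target mesh via barycentric interpolation. Assumes the point lies in
 * (or very near) the plane of the source triangle. */
typedef struct { float x, y, z; } Vec3;
typedef struct { float u, v; } Vec2;

static Vec3 v3sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static float v3dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* barycentric coordinates of p with respect to triangle (a, b, c) */
static void barycentric(Vec3 p, Vec3 a, Vec3 b, Vec3 c, float *w0, float *w1, float *w2)
{
    Vec3 v0 = v3sub(b, a), v1 = v3sub(c, a), v2 = v3sub(p, a);
    float d00 = v3dot(v0, v0), d01 = v3dot(v0, v1), d11 = v3dot(v1, v1);
    float d20 = v3dot(v2, v0), d21 = v3dot(v2, v1);
    float denom = d00 * d11 - d01 * d01;
    *w1 = (d11 * d20 - d01 * d21) / denom;   /* weight of corner b */
    *w2 = (d00 * d21 - d01 * d20) / denom;   /* weight of corner c */
    *w0 = 1.0f - *w1 - *w2;                  /* weight of corner a */
}

/* blend the source UVs with the same weights */
static Vec2 transfer_uv(Vec3 p, Vec3 a, Vec3 b, Vec3 c, Vec2 uvA, Vec2 uvB, Vec2 uvC)
{
    float w0, w1, w2;
    Vec2 uv;
    barycentric(p, a, b, c, &w0, &w1, &w2);
    uv.u = w0 * uvA.u + w1 * uvB.u + w2 * uvC.u;
    uv.v = w0 * uvA.v + w1 * uvB.v + w2 * uvC.v;
    return uv;
}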

Using glLoadMatrixd on specific vertices?

I am trying to do skeletal animation in legacy OpenGL and thought I could use matrices on individual vertices. When I programmed it and it didn't work, I did some Googling to find this: https://www.talisman.org/opengl-1.1/Reference/glLoadMatrix.html
GL_INVALID_OPERATION is generated if glLoadMatrix is executed between the execution of glBegin and the corresponding execution of glEnd.
So now I'm stumped. Here is a diagram:
Bones are labeled in red. I'm trying to do skeletal animation, so there are two rectangles. One uses Bone 0 and the second uses Bone 1. Only specific vertices of the triangles that make up the second rectangle use the rotation matrix of Bone 1, and the ones that don't use the rotation matrix of Bone 0, kind of making a snake, if that makes sense.
Since I cannot use glLoadMatrix for individual vertices in a triangle, what other way can I displace a vertex based on a stored matrix? Perhaps multiply some of the matrix values to the vertex? Not sure how to go about doing that. Any input is appreciated, thanks!
You mention the two rectangles Bone0 and Bone1. You need to draw them separately, since they need separate transformation matrices. Two points of the two rectangles are coincident, and your transformation matrix for drawing Bone1 must ensure they stay that way:
glTranslatef(...);
glRotatef(...);     /* position rectangle Bone0 */
glBegin(GL_QUADS);  /* draw rectangle Bone0 */
glVertex3f(...);    /* draw it */
...
glEnd();
glPushMatrix();     /* save the transformation matrix */
glMultMatrixf(...); /*
                     * as per your drawing, this is not just a
                     * simple translate/rotate operation, but
                     * a translate/shear -
                     * you need to build that matrix manually
                     */
glBegin(GL_QUADS);  /*
                     * draw rectangle Bone1. Two of its vertices are
                     * coincident with two of rectangle Bone0; your
                     * shear matrix must ensure they stay that way
                     */
glVertex3f(...);
glEnd();
glPopMatrix();      /* restore the Bone0 transformation */
What you're trying to do is called skinning! Unfortunately it will involve a bit more effort than your approach. It is possible to do it within a single glBegin/glEnd pair, which is generally preferable.
The easiest way is not to use OpenGL to transform your vertices. Use your favourite matrix math library to multiply the vertices with your bone matrices before they get passed to OpenGL. If the number of vertices is not too large, it won't slow you down much.
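A minimal sketch of that easy way, assuming column-major 4x4 bone matrices (OpenGL's convention) and a made-up vertex format that stores a bone index per vertex:

/* Sketch: CPU-side skinning. Each vertex is multiplied by its bone's
 * matrix before being handed to OpenGL, all within one glBegin/glEnd. */
typedef struct { float x, y, z; int bone; } SkinVertex;

static void transform_point(const float m[16], float x, float y, float z, float out[3])
{
    /* column-major 4x4 matrix times (x, y, z, 1) */
    out[0] = m[0]*x + m[4]*y + m[8]*z  + m[12];
    out[1] = m[1]*x + m[5]*y + m[9]*z  + m[13];
    out[2] = m[2]*x + m[6]*y + m[10]*z + m[14];
}

/* boneMatrices[b] holds the current transform for bone b */
static void draw_skinned(const SkinVertex *verts, int count, const float boneMatrices[][16])
{
    int i;
    glBegin(GL_TRIANGLES);
    for (i = 0; i < count; ++i) {
        float p[3];
        transform_point(boneMatrices[verts[i].bone], verts[i].x, verts[i].y, verts[i].z, p);
        glVertex3fv(p);
    }
    glEnd();
}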
The harder way is to implement a skinning shader. This book chapter provides a good introduction on how that is done. The principle is to upload multiple matrices to OpenGL, and give each vertex an index which says which matrix to multiply with. This will be much faster than the easy approach.
GPUs are fast because they are optimized for doing the same operation on a large set of data - the tradeoff for this is that you can't modify the state (such as changing the matrix) while a draw call is in progress.
Heh, I have been doing a lot of math stuff since I posted my question and checked back just now to post an answer for anyone with the same question, and I noticed I already have some answers, so thanks for your input!
Since I have figured out a solution, though, I thought I would post an answer along with these other two.
Basically, what I am doing with the rendering is per-vertex anyway: it reads the vertices of each triangle from a data buffer and so on, so it wasn't too much trouble to write a custom function that multiplies a matrix with a vertex. A copy of the vertex is loaded from the buffer, the matrix for the bone that vertex is mapped to is multiplied onto it, and the result is used for rendering that particular vertex of the triangle.
Funny that I had already implemented what @Hannesh suggested and I just had to write the multiplier function. Very cool!
Thanks again! I'll up-vote whenever I have the reputation to do so!

Simple Flat Plane Tessellation Shader

Part 1:
So I want to create a basic tessellation program that takes a plane of quads and transforms it into a more, well, detailed/tessellated plane of quads. Such as the picture below. How much it gets tessellated would depend on user controls, passed in by a uniform (initially). However I am so new to tessellation shaders that I can't even figure out how to do this.
How is this typically done? Surely you shouldn't actually draw the plane of quads prior to the shader program, since from my understanding quads won't get tessellated this way; instead they get tessellated in a way like the picture below:
I believe the answer could be to draw a plane of points, these points are then tessellated into more points, and those points are transformed into quads of the appropriate size in the geometry shader, I think? Alternatively, instead of converting points into quads, could I just draw quads between each set of four closest points (that would be much better)? Examples very much appreciated!
NOTE: Using GLSL > 4.0 & C only (No C++/Python)
Part 2:
After I get part 1 working, how would I make it so that certain quads are more tessellated than others, such as this?:
I want the parts closer to the camera to be more tessellated.
Part 3:
If I were able to get that far, the next part would be to alter the z coordinate of the points to make the plane into an interesting environment. This would be done by reading in a sampler2D; I know how to do that and all. However, if I am correct in Part 1 about using a plane of points, then I need to do more than just alter the points that are converted into quads, because quads essentially need to share vertices in order for there to be no gaps between them. How would that be done? Alternatively, if we draw quads between points, with each point at the appropriate height, then this wouldn't be an issue.
Part 1
Yes, you're correct: generate a 'patch' as a simple grid of points, specify the tessellation levels as uniforms into the TCS (tessellation control shader) and generate the vertex data in the TES (tessellation evaluation shader).
Sounds complicated? Here's a nice tutorial I based my work on: http://antongerdelan.net/opengl/tessellation.html
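For reference, the C side of such a setup is mostly about submitting GL_PATCHES instead of triangles; the tessellation levels are then read from a uniform in the TCS. A rough sketch (the uniform name "tessLevel", the VAO and the vertex count are assumptions, not anything from the tutorial):

/* Sketch: drawing a grid of 4-vertex patches for the TCS/TES to subdivide.
 * "program" is assumed to link vertex, TCS, TES and fragment shaders. */
glUseProgram(program);
glUniform1f(glGetUniformLocation(program, "tessLevel"), 8.0f);  /* user-controlled detail */
glPatchParameteri(GL_PATCH_VERTICES, 4);   /* each patch is one coarse quad */
glBindVertexArray(gridVAO);                /* the coarse grid of control points */
glDrawArrays(GL_PATCHES, 0, numPatchVertices);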
Part 2
What you are talking about here is LOD (level of detail). You would need to tessellate and render the higher-polygon-count bottom-left corner of your mesh as a separate object.
Your suggested approach is correct: break the overall scene into 'chunks' and determine the LOD (i.e. the tessellation parameters) for each chunk separately, usually by some distance-to-camera algorithm.
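A sketch of that per-chunk decision; the falloff formula and the constant 64 are arbitrary placeholders, the idea is simply higher levels for chunks closer to the camera:

#include <math.h>

/* Sketch: pick a tessellation level per chunk from its distance to the camera. */
static float chunk_tess_level(const float chunkCenter[3], const float cameraPos[3])
{
    float dx = chunkCenter[0] - cameraPos[0];
    float dy = chunkCenter[1] - cameraPos[1];
    float dz = chunkCenter[2] - cameraPos[2];
    float dist = sqrtf(dx*dx + dy*dy + dz*dz);

    /* arbitrary falloff: high detail up close, clamped to at least 1 */
    float level = 64.0f / (1.0f + dist);
    return level < 1.0f ? 1.0f : level;
}

/* per frame, per chunk: upload the level before drawing that chunk's patches */
/* glUniform1f(glGetUniformLocation(program, "tessLevel"), chunk_tess_level(center, eye)); */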
Part 3
Another excellent tutorial which does exactly what you are after I believe: http://codeflow.org/entries/2010/nov/07/opengl-4-tessellation/
I used this approach to get very highly detailed terrain that is still efficient in both memory use and frame rate.
Hope this helps.

Control points in nurbs surfaces

So I've read a lot about NURBS recently and completely understand NURBS curves (I even wrote a small library for them). But I'm having some problems with surfaces. I can see that I need two sets of control points. My problem is: what is the difference between the points in these two sets?
Can anybody briefly explain it or give me some link that does?
I think my favorite way of understanding NURBS surfaces (if you already understand NURBS curves) is beads on a wire.
So, let's look at the much simpler example of a Bezier surface (I assume if you understand NURBS curves you understand Bezier curves).
A cubic Bezier curve has 4 control points. Imagine a Bezier curve which is just a smooth horizontal curve. You can compute any point on that curve given a parameter value (usually called t): just plug t into the parametric equation of the curve, and a point is produced.
Now imagine you have 4 horizontal Bezier curves, each one above the other. If you plug the same parameter value into all 4 curves, you get 4 points, one for each curve. Those are the beads on the wires. Let's call the parameter value for the horizontal curves 's'.
Take those 4 "bead" points and treat them as the control points of a vertical curve. Evaluate that curve at another parameter value (this one we'll call 't', like usual) and it will give you a point. That point is on the surface. Specifically, that's the point P(s,t).
So, given a 4x4 grid of control points, you can use beads on a wire to compute points on the surface. As s changes, the beads sweep out different curves along the wires; the set of all those curves is the surface.
You can do the exact same thing with NURBS curves: you just need a knot vector for s, another knot vector for t, and a grid of control points.
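Written out for the simpler bicubic Bezier case, the beads-on-a-wire evaluation looks roughly like this (a NURBS version would replace the Bernstein weights with basis functions derived from the two knot vectors):

/* Sketch: evaluate a bicubic Bezier patch P(s, t) from a 4x4 control grid.
 * First evaluate the four "horizontal" curves at s (the beads), then treat
 * those beads as control points of a "vertical" curve evaluated at t. */
typedef struct { float x, y, z; } Pt;

static Pt bezier3(Pt p0, Pt p1, Pt p2, Pt p3, float u)
{
    /* cubic Bernstein weights */
    float a = (1-u)*(1-u)*(1-u), b = 3*u*(1-u)*(1-u), c = 3*u*u*(1-u), d = u*u*u;
    Pt r = { a*p0.x + b*p1.x + c*p2.x + d*p3.x,
             a*p0.y + b*p1.y + c*p2.y + d*p3.y,
             a*p0.z + b*p1.z + c*p2.z + d*p3.z };
    return r;
}

static Pt patch_point(Pt cp[4][4], float s, float t)
{
    Pt beads[4];
    int i;
    for (i = 0; i < 4; ++i)                       /* one bead per row curve */
        beads[i] = bezier3(cp[i][0], cp[i][1], cp[i][2], cp[i][3], s);
    return bezier3(beads[0], beads[1], beads[2], beads[3], t);  /* curve through the beads */
}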
For a NURBS surface, you don't need two sets of control points; you need a 2-dimensional grid or mesh of control points. This mesh will have n rows and m columns, and each point in the mesh will have an x, y and z co-ordinate as well as a w value, the NURBS weight for that point.

How to get the new coordinates of the object after transformation?

How can I learn the new coordinates of the object after some transformation?
ex :
a,x,y,z any float numbers
glTranslatef(x, y, z);
glRotatef(a, 0.0f, 1.0f, 0.0f);
sketchSomething();
I want to know the coordinates of the object after this transformation.
OpenGL does not "think" in objects. When you draw an object, OpenGL treats each primitive (point, line, triangle) of the geometry on its own, draws it and then forgets about it. Transformations merely influence where on the screens will show up.
But of course you can assume that the geometry forms an object in some model space, which is transformed into world space, then eye space, then clip space and finally NDC space.
Regarding your question: glTranslate, glRotate and some other functions don't manipulate objects. They multiply a transformation matrix, in place, onto the matrix on top of the currently active stack. An arbitrary number of transformations may have been applied previously. So what you can do is retrieve the current matrix from OpenGL and do the transformation yourself. This gives you the object geometry in the transformed space. And of course you can just multiply a center position vector to obtain the transformed center position of the object.
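Retrieving the matrix and applying it yourself could look roughly like this (column-major layout, w assumed to be 1, the point values are placeholders):

GLfloat m[16];
float obj[3] = { 1.0f, 2.0f, 3.0f };   /* a point of your object, in model space */
float eye[3];                          /* the same point after the modelview transform */

glGetFloatv(GL_MODELVIEW_MATRIX, m);   /* column-major 4x4 */
eye[0] = m[0]*obj[0] + m[4]*obj[1] + m[8]*obj[2]  + m[12];
eye[1] = m[1]*obj[0] + m[5]*obj[1] + m[9]*obj[2]  + m[13];
eye[2] = m[2]*obj[0] + m[6]*obj[1] + m[10]*obj[2] + m[14];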
Also, instead of relying on OpenGL's matrix routines, which are cumbersome to work with, I strongly suggest you make use of a dedicated matrix math library (GLM, Eigen, linmath.h), do all the transformation matrix operations using that one, and load the prepared matrices into OpenGL using glLoadMatrix or glUniformMatrix.
It kind of depends on what co-ordinate you're looking for.
If you want a 2D screen-coordinate, you can use gluProject() to apply the current matrices to a single point.
If you want to just apply the current modelview matrix to a point, you're probably better off just reading back that matrix and applying the transform yourself.
Note that if you simply want the position of your object's origin within the world space, you just need to read the translation portion of the matrix and not actually perform a full transform.
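For the gluProject() route mentioned above, a small sketch (the object-space point is an arbitrary example):

/* Sketch: map an object-space point to window coordinates with gluProject. */
GLdouble model[16], proj[16];
GLint view[4];
GLdouble winX, winY, winZ;

glGetDoublev(GL_MODELVIEW_MATRIX, model);
glGetDoublev(GL_PROJECTION_MATRIX, proj);
glGetIntegerv(GL_VIEWPORT, view);

if (gluProject(1.0, 2.0, 3.0, model, proj, view, &winX, &winY, &winZ) == GL_TRUE) {
    /* (winX, winY) is the window position; winZ is the depth value */
}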
