Should I translate first or rotate first?

I'm trying to create a simple scene that I can walk around in, with the requirement that I can both pan the view and move around with the keys. However, in my draw-scene function, when I translate my scene and then rotate, panning doesn't work properly: the entire scene just rotates around me, causing objects to pass through me. When I rotate and then translate my scene, I can pan around properly, but I can only move in a fixed direction, so if I pan 90 degrees to my right, I'll move left instead of going forward. Is there any way I can combine these two effects?
This is the code that I use to draw my view:
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glPushMatrix();
glTranslated(xposition, 0, zposition); // This is where I translate my view...
glRotated(yrot, 0, 1, 0); // ...and where I rotate it
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER_ARB, quadVBO);
glNormalPointer(GL_FLOAT, 0, (void*)sizeof(sideArray));
glColorPointer(3, GL_FLOAT, 0, (void*)(sizeof(sideArray)+sizeof(normals)));
glVertexPointer(3, GL_FLOAT, 0, 0);
glDrawArrays(GL_QUADS, 0, sizeof(sideArray)/sizeof(GLfloat)/3);
glPopMatrix();
glFlush();
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
Here are some pictures that illustrate my problem right now:
Rotate then translate:
Pic1
http://dl.dropbox.com/u/2747708/Screen%20Shot%202012-04-03%20at%2010.17.39%20PM.PNG
Pic2
I can simulate turning the camera.
http://dl.dropbox.com/u/2747708/Screen%20Shot%202012-04-03%20at%2010.17.48%20PM.PNG
Pic3
However, when I walk forward, I only move in one fixed direction, not the direction I'm looking at.
http://dl.dropbox.com/u/2747708/Screen%20Shot%202012-04-03%20at%2010.18.30%20PM.PNG
http://dl.dropbox.com/u/2747708/Screen%20Shot%202012-04-03%20at%2010.18.39%20PM.PNG
Translate then Rotate:
Pic1
http://dl.dropbox.com/u/2747708/Screen%20Shot%202012-04-03%20at%2010.19.44%20PM.PNG
Pic2
I can move around freely, walking straight in whatever direction I'm looking.
http://dl.dropbox.com/u/2747708/Screen%20Shot%202012-04-03%20at%2010.19.52%20PM.PNG
Pic3
However, when I rotate, the entire scene rotates around me, which causes objects to clip through me, and the view no longer "pans" the way it does when I rotate and then translate.
http://dl.dropbox.com/u/2747708/Screen%20Shot%202012-04-03%20at%2010.20.01%20PM.PNG

Your problem isn't with the order of transformations; you should be rotating and then translating.
The problem is that you aren't taking the rotation into account when moving. The formula for movement is:
movementX = sin(direction);
movementY = cos(direction);
where direction is the angle in radians turned clockwise from north, positive X is east, and positive Y is north.
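In C, applying this each frame might look like the following minimal sketch; direction, speed, and the position variables are assumed names, and posY here is the north/south axis of the formula, not "up":
#include <math.h>

/* Advance (posX, posY) one step in the facing direction.
   direction is in radians, measured clockwise from north;
   +X is east and +Y is north, matching the formula above. */
static void stepForward(double direction, double speed,
                        double *posX, double *posY)
{
    *posX += sin(direction) * speed;
    *posY += cos(direction) * speed;
}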

I'm not sure how to comment on or add to Kendall's answer so I'll just add this as a new answer.
I think you may be using Y for up, so you would use this if direction is in radians:
movementX = sin(direction);
movementZ = cos(direction);
if direction is in degrees you will have to convert it to radians:
radians = degrees*(PI/180);
Just multiply the movement by 1 or -1 based on whether you are moving forwards or backwards:
movementX = sin(direction)*forwardsBackwards;
movementZ = cos(direction)*forwardsBackwards;
If you need strafing as well you can do:
movementX = sin(direction)*forwardsBackwards+sin(direction+1.5707)*sideToSide;
movementZ = cos(direction)*forwardsBackwards+cos(direction+1.5707)*sideToSide;
Where sideToSide is 1 or -1 based on whether you are strafing left or right. The 1.5707 is 90 degrees in radians (PI/2), which means that no matter what direction you are facing, it takes the angle 90 degrees to the right of it. You can also add 90 degrees to the rotation in degrees before converting, if you prefer.
Multiply the entire thing by your desired movement speed:
movementX = (sin(direction)*forwardsBackwards + sin(direction+1.5707)*sideToSide)*speed;
movementZ = (cos(direction)*forwardsBackwards + cos(direction+1.5707)*sideToSide)*speed;
This will, however, create a strafe-running effect where you move faster when moving in two directions at once. If you want to prevent that, add this before calculating the movement:
if (forwardsBackwards && sideToSide)
{
    forwardsBackwards *= 0.7071;
    sideToSide *= 0.7071;
}
You could also replace 0.7071 with cos(PI/4) (the cosine of 45 degrees) if you need it to be more accurate.
Alternatively you could:
float diagonalMod = 1;
if (forwardsBackwards && sideToSide)
    diagonalMod = 0.7071; // or cos(PI/4)
movementX = (sin(direction)*forwardsBackwards + sin(direction+1.5707)*sideToSide)*speed*diagonalMod;
movementZ = (cos(direction)*forwardsBackwards + cos(direction+1.5707)*sideToSide)*speed*diagonalMod;
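Putting the pieces together, a minimal C sketch of a per-frame movement update might look like this; yawDegrees, forwardsBackwards, sideToSide (each -1, 0, or 1 from the key state), and speed are assumed inputs, and the signs may need flipping depending on how xposition/zposition enter your modelview transform:
#include <math.h>

#define DEG2RAD(d) ((d) * 3.14159265358979323846 / 180.0)

/* Hypothetical per-frame movement update combining forward/backward
   movement, strafing, and the diagonal correction described above. */
void updatePosition(double yawDegrees, int forwardsBackwards, int sideToSide,
                    double speed, double *xposition, double *zposition)
{
    double dir = DEG2RAD(yawDegrees);
    double diagonalMod = (forwardsBackwards && sideToSide) ? 0.7071 : 1.0;

    *xposition += (sin(dir) * forwardsBackwards
                 + sin(dir + 1.5707) * sideToSide) * speed * diagonalMod;
    *zposition += (cos(dir) * forwardsBackwards
                 + cos(dir + 1.5707) * sideToSide) * speed * diagonalMod;
}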
You will want to rotate first and then translate. The other way around, if I'm thinking about it right, is basically like moving the pivot point the camera rotates around, which is why you clip through objects when rotating.
Also, you should perform the "camera" rotation and translation immediately after glLoadIdentity(), since an OpenGL "camera" works by moving and rotating everything else in the scene. This is how it would be set up:
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glRotated(yrot, 0, 1, 0);
glTranslated(xposition, 0, zposition);
glPushMatrix();
And for future reference in case you expand on it:
glRotatef(Pitch, 1, 0, 0); // Up and down look
glRotatef(Yaw, 0, 1, 0); // Left and right look
glRotatef(Roll, 0, 0, 1); // Like a barrel roll in a jet
glTranslatef(X,Y,Z);

Related

Texturing a sphere in OpenGL with glTexGen

I want to get an earth texture on a sphere. My sphere is an icosphere built from many triangles (100+), and I found it confusing to set the UV coordinates for the whole sphere. I tried to use glTexGen and the effect is quite close, but I got my texture repeated 8 times (see image). I cannot find a way to make it wrap around the whole object just once. Here is the code where the sphere and textures are created.
glEnable(GL_TEXTURE_2D);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glBegin(GL_TRIANGLES);
for (int i = 0; i < new_sphere->NumOfTrians; i++)
{
Triangle *draw_Trian = new_sphere->Trians+i;
glVertex3f(draw_Trian->pnts[0].coords[0], draw_Trian->pnts[0].coords[1], draw_Trian->pnts[0].coords[2]);
glVertex3f(draw_Trian->pnts[1].coords[0], draw_Trian->pnts[1].coords[1], draw_Trian->pnts[1].coords[2]);
glVertex3f(draw_Trian->pnts[2].coords[0], draw_Trian->pnts[2].coords[1], draw_Trian->pnts[2].coords[2]);
}
glDisable(GL_TEXTURE_2D);
free(new_sphere->Trians);
free(new_sphere);
glEnd();
You need to define how your texture is supposed to map to your triangles. This depends on the texture you're using. There are a multitude of ways to map the surface of a sphere with a texture (since no one mapping is free of singularities). It looks like you have a cylindrical projection texture there. So we will emit cylindrical UV coordinates.
I've tried to give you some code here, but it's assuming that
Your mesh is a unit sphere (i.e., centered at 0 and has radius 1)
pnts.coords is an array of floats
You want to use the second coordinate (coord[1]) as the 'up' direction (or the height in a cylindrical mapping)
Your code would look something like this. I've defined a new function for emitting cylindrical UVs, so you can put that wherever you like.
/* Map [(-1, -1, -1), (1, 1, 1)] into [(0, 0), (1, 1)] cylindrically */
inline void uvCylinder(float* coord) {
float angle = 0.5f * atan2(coord[2], coord[0]) / 3.14159f + 0.5f;
float height = 0.5f * coord[1] + 0.5f;
glTexCoord2f(angle, height);
}
glEnable(GL_TEXTURE_2D);
glBegin(GL_TRIANGLES);
for (int i = 0; i < new_sphere->NumOfTrians; i++) {
Triangle *t = new_sphere->Trians+i;
uvCylinder(t->pnts[0].coords);
glVertex3f(t->pnts[0].coords[0], t->pnts[0].coords[1], t->pnts[0].coords[2]);
uvCylinder(t->pnts[1].coords);
glVertex3f(t->pnts[1].coords[0], t->pnts[1].coords[1], t->pnts[1].coords[2]);
uvCylinder(t->pnts[2].coords);
glVertex3f(t->pnts[2].coords[0], t->pnts[2].coords[1], t->pnts[2].coords[2]);
}
glEnd();
glDisable(GL_TEXTURE_2D);
free(new_sphere->Trians);
free(new_sphere);
Note on Projections
The reason it's confusing to build UV coordinates for the whole sphere is that there isn't one 'correct' way to do it. Mathematically speaking, there's no such thing as a perfect 2D mapping of a sphere, which is why we have so many different types of projections. When you have a 2D image that is a texture for a spherical object, you need to know what type of projection that image was built for, so that you can emit the correct UV coordinates for that texture.

Why doesn't this code draw a triangle?

I'm very new to OpenGL, and I just wrote a section of code using SDL 2 that, to my knowledge, should draw a triangle, but it doesn't work, so I'm clearly not done learning. I've already written all the initialization code the SDL 2 documentation says I need, and the function pointers returned by dynamic loading ARE callable. When I execute this code, instead of a triangle I get a black (but cleared) window. Why doesn't this code draw the triangle I want, and why is the window cleared to black? I mainly want the technical details behind the first question so I can rely on them later.
(*main_context.glViewport)(0, 0, 100, 100);
(*main_context.glBegin)(GL_TRIANGLES);
(*main_context.glColor4d)(255, 255, 255, 255);
(*main_context.glVertex3d)(1, 1, -50);
(*main_context.glVertex3d)(1, 30, 1);
(*main_context.glVertex3d)(30, 1, 1);
(*main_context.glEnd)();
(*main_context.glFinish)();
(*main_context.glFlush)();
SDL_GL_SwapWindow(window);
Update:
I've revised my code to include different coordinates and I got the triangle to draw, but I cannot get it to draw when farther away. Why is that?
(*main_context.glVertex3d)(2, -1, 1); /* Works. */
(*main_context.glVertex3d)(2, -1, 3); /* Doesn't work. */
Unless you are setting up a projection and/or modelview matrix elsewhere in your code, it's using the default (identity matrix) transform, which is an orthographic projection with (-1, -1) at the bottom left and (1, 1) at the top right. glViewport only changes the portion of the default framebuffer being rendered to; it has no bearing on the projection whatsoever.
With an orthographic projection, the Z coordinate does not affect the screen-space position of a point, except that points outside the Z clipping planes will not be rendered. In this case, that's everything outside of -1 <= z <= 1. Given that one of your points is (1, 1, -50), this seems to be your problem.
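If you want geometry farther away in Z to show up, you can set up your own projection before drawing; here is a minimal sketch with a wider orthographic depth range, written with plain GL calls for brevity (in your setup they would go through the pointers you loaded into main_context):
/* Keep the default [-1, 1] view volume in X and Y, but extend the Z clip
   range so vertices such as z = -50 or z = 3 are no longer clipped away. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-1.0, 1.0, -1.0, 1.0, -100.0, 100.0); /* left, right, bottom, top, near, far */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();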

Orthographic Projection with OpenGL and how to implement camera or object movement in space

I have made a cube as a display list using GL_POLYGON. I have initialised it at the origin of the coordinate system, that is, at (0,0,0). In my display function, which is registered with glutDisplayFunc, I use this code:
glLoadIdentity();
glOrtho(0,0,0,0,1,1);
glMatrixMode(GL_MODELVIEW);
I want to use an orthographic projection via glOrtho. My question is: is it normal that I can still see my cube, considering that my window size is 600x600?
What's more, I would like some guidelines on how to move my cube or my camera with the relevant OpenGL functions. Let's say I would like to move my camera back (along the +z axis) or my cube to the front (along the -z axis). How can I do that?
First off, you also need to set glMatrixMode() to GL_PROJECTION before you call glOrtho(), so it would look like this instead:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(...); // Replace ... with your values
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
To move the scene you can simply call one or more of the following functions:
glTranslate*()
glRotate*()
glScale*()
You can click the above links to read how and what each function does. But basically:
glTranslate*() translates/moves the currently selected matrix.
glRotate*() rotates the currently selected matrix.
glScale*() scales the currently selected matrix.
You can also use glPushMatrix() and glPopMatrix() to push and pop the current matrix stack.
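For instance, here is a minimal sketch of the two cases from the question; drawCube() stands in for your display-list call, and the distance of 5 units is arbitrary:
// Move the cube towards -z (into the screen), or equivalently "move the
// camera back" along +z: since the camera never really moves, both are
// expressed as translating the geometry by -5 on the z axis.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -5.0f);
drawCube(); // hypothetical call that renders your GL_POLYGON display list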
Extra
Also be aware that you're using old, deprecated functions. You shouldn't use them; in modern OpenGL you're supposed to calculate and manage your own matrix stack.
Edit
Camera & Objects
Basically you do that by combining the above functions. It might sound harder than it actually is.
Here is an example with one camera and two objects, just to give you the idea of how it works.
void render()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
// The Camera Rotations & Translation
glRotatef(camera_pitch, -1.0f, 0.0f, 0.0f);
glRotatef(camera_yaw, 0.0f, 1.0f, 0.0f);
glTranslatef(-camera_x, -camera_y, -camera_z);
// Object 1
glPushMatrix();
glRotatef(...);
glTranslatef(...);
// Render Object 1
glPopMatrix();
// Object 2
glPushMatrix();
glRotatef(...);
glTranslatef(...);
// Render Object 2
glPopMatrix();
}
Again replace the ... with your own values.
The reason we need to translate by the negated camera coordinates is that we aren't really moving a camera; we are actually "pushing" (translating, etc.) everything away from the camera/center (so the camera is at the center at all times).
Important: the order in which you rotate and translate matters. For the camera transformations you always need to rotate and then translate.
Edit
gluLookAt ?
gluLookAt does the same thing as my example.
Example:
// The Camera Rotations & Translation
glRotatef(camera_pitch, -1.0f, 0.0f, 0.0f);
glRotatef(camera_yaw, 0.0f, 1.0f, 0.0f);
glTranslatef(-camera_x, -camera_y, -camera_z);
This is my own function, which does the same thing as gluLookAt. How do I know? Because I looked at the original gluLookAt implementation and then wrote the following function.
void lookAt(float eyex, float eyey, float eyez, float centerx, float centery, float centerz)
{
float dx = eyex - centerx;
float dy = eyey - centery;
float dz = eyez - centerz;
float pitch = (float) Math.atan2(dy, Math.sqrt(dx * dx + dz * dz));
float yaw = (float) Math.atan2(dz, dx);
pitch = -pitch;
yaw = yaw - 1.57079633f;
// Here you could call glLoadIdentity() if you want to reset the matrix
// glLoadIdentity();
glRotatef(Math.toDegrees(pitch), -1f, 0f, 0f);
glRotatef(Math.toDegrees(yaw), 0f, 1f, 0f);
glTranslatef(-eyex, -eyey, -eyez);
}
You might need to change the Math.* calls, since the above code isn't written in C.
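For reference, a minimal C translation of the same idea might look like the sketch below; it assumes <math.h> and the usual GL headers, passes degrees to glRotatef, and, like the function above, ignores the up vector (no roll). It is a sketch of the rotate-then-translate approach, not the actual gluLookAt source:
#include <math.h>
#include <GL/gl.h>

#define RAD2DEG(r) ((r) * 180.0f / 3.14159265f)

void lookAt(float eyex, float eyey, float eyez,
            float centerx, float centery, float centerz)
{
    float dx = eyex - centerx;
    float dy = eyey - centery;
    float dz = eyez - centerz;

    float pitch = -atan2f(dy, sqrtf(dx * dx + dz * dz));
    float yaw   = atan2f(dz, dx) - 1.57079633f; /* minus 90 degrees */

    glRotatef(RAD2DEG(pitch), -1.0f, 0.0f, 0.0f);
    glRotatef(RAD2DEG(yaw),    0.0f, 1.0f, 0.0f);
    glTranslatef(-eyex, -eyey, -eyez);
}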

opengl failing to draw mesh

SOLVED: I'm not really sure how though... thanks for all your help guys.
I tried glDisable(GL_CULL_FACE); but the mesh is still not visible.
Basically I'm trying to draw a mesh (made from verts, normals, and texture coords) in OpenGL, using a display list. The mesh is in .obj format (exported from 3ds Max 2013).
The problem is that the mesh is not visible.
To draw the display list I'm just using glCallLists (list, 1);
I have verified that I can draw things to the screen by drawing a point in the center of the screen and that works fine.
Could it be possible that the camera is positioned inside the mesh? If so, is there an OpenGL state I could enable that would let me see the inside of a set of verts?
I know that the data I have is valid; I verified it by printing each vert, normal, and texture coord to a file before adding it to the display list, and it looks correct.
I haven't done any glTranslatef or anything like that; my projection matrix is set up like this:
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
gluPerspective (45.0, (float)1024/(float)768, -9999, 9999);
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();
If you want to have a look at the .obj file, here it is: http://pastebin.com/PpG3vG5e
This is how I create the display list:
list = glGenLists (1);
glNewList (list, GL_COMPILE);
glBegin (GL_TRIANGLES);
for (i = 0; i < data.face_count; i++)
{
// first vert
normal[0][0] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[0]]->e[0];
normal[0][1] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[0]]->e[1];
normal[0][2] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[0]]->e[2];
tex[0][0] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[0]]->e[0];
tex[0][1] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[0]]->e[1];
tex[0][2] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[0]]->e[2];
vert[0][0] = (float)data.vertex_list[data.face_list[i]->vertex_index[0]]->e[0];
vert[0][1] = (float)data.vertex_list[data.face_list[i]->vertex_index[0]]->e[1];
vert[0][2] = (float)data.vertex_list[data.face_list[i]->vertex_index[0]]->e[2];
// second vert
normal[1][0] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[1]]->e[0];
normal[1][1] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[1]]->e[1];
normal[1][2] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[1]]->e[2];
tex[1][0] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[1]]->e[0];
tex[1][1] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[1]]->e[1];
tex[1][2] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[1]]->e[2];
vert[1][0] = (float)data.vertex_list[data.face_list[i]->vertex_index[1]]->e[0];
vert[1][1] = (float)data.vertex_list[data.face_list[i]->vertex_index[1]]->e[1];
vert[1][2] = (float)data.vertex_list[data.face_list[i]->vertex_index[1]]->e[2];
// third vert
normal[2][0] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[2]]->e[0];
normal[2][1] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[2]]->e[1];
normal[2][2] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[2]]->e[2];
tex[2][0] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[2]]->e[0];
tex[2][1] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[2]]->e[1];
tex[2][2] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[2]]->e[2];
vert[2][0] = (float)data.vertex_list[data.face_list[i]->vertex_index[2]]->e[0];
vert[2][1] = (float)data.vertex_list[data.face_list[i]->vertex_index[2]]->e[1];
vert[2][2] = (float)data.vertex_list[data.face_list[i]->vertex_index[2]]->e[2];
for (j = 0; j < 3; j++)
{
glNormal3f (normal[j][0], normal[j][1], normal[j][2]);
glTexCoord3f (tex[j][0], tex[j][1], tex[j][2]);
glVertex3f (vert[j][0], vert[j][1], vert[j][2]);
}
}
glEnd ();
glEndList ();
EDIT:
I've tried things like:
glTranslatef (0, 0, 5);
glCallList (mesh);
glTranslatef (0, 0, 0);
but they don't work either :(
EDIT:
#datenwolf
Here is the code I use to draw it:
Draw_Begin ();
Mdl_Draw (list, 0.0f, 0.0f, 0.0f);
Draw_End ();
This
gluPerspective (45.0, (float)1024/(float)768, -9999, 9999);
is wrong. In a perspective projection, both the near and the far plane distances must have the same sign, i.e. both positive or both negative. Also, the absolute value of the near plane distance must be smaller than the absolute value of the far plane distance, and the near plane distance must be nonzero. In mathematical notation:
sgn(near) = sgn(far)  ∧  0 < |near| < |far|
Usually both near and far are chosen positive. As a rule of thumb, the near clipping plane should be placed as far away as possible. The far plane can be placed at infinity (exploiting some of the properties of homogeneous matrices), but usually it is placed as close as possible, to maximize depth buffer resolution.
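For example, a corrected call might look like this; the exact near/far values are arbitrary and should be chosen to bracket your scene:
gluPerspective (45.0, (float)1024/(float)768, 0.1, 1000.0); /* near > 0 and near < far */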

OpenGL - maintaining aspect ratio upon window resize

I am drawing a polygon in a square window. When I resize the window, for instance by going fullscreen, the aspect ratio is disturbed. From a reference I found one way of preserving the aspect ratio. Here is the code:
void reshape (int width, int height) {
float cx, halfWidth = width*0.5f;
float aspect = (float)width/(float)height;
glViewport (0, 0, (GLsizei) width, (GLsizei) height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(cx-halfWidth*aspect, cx+halfWidth*aspect, bottom, top, zNear, zFar);
glMatrixMode (GL_MODELVIEW);
}
Here, cx is the eye-space X coordinate of the center of the zNear plane. Could someone please explain how I can calculate it? I believe it should be the average of the first two arguments initially passed to glFrustum(). Am I right? Any help will be greatly appreciated.
It looks like what you want to do is maintain the field of view (angle of view) when the aspect ratio changes. See the section titled "9.085 How can I make a call to glFrustum() that matches my call to gluPerspective()?" in the OpenGL FAQ for details on how to do that. Here's the short version:
fov*0.5 = arctan ((top-bottom)*0.5 / near)
top = tan(fov*0.5) * near
bottom = -top
left = aspect * bottom
right = aspect * top
See the link for details.
The first two arguments are the X coordinates of the left and right clipping planes in eye space. Unless you are doing off-axis tricks (for example, to display uncentered projections across multiple monitors), left and right should have the same magnitude and opposite sign, which would make your cx variable zero.
If you are having trouble understanding glFrustum, you can always use gluPerspective instead, which has a somewhat simpler interface.
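Putting the formulas above together, here is a minimal reshape sketch that preserves a fixed vertical field of view as the window aspect changes; fov, zNear, and zFar are assumed constants:
#include <math.h>
#include <GL/gl.h>

void reshape(int width, int height)
{
    const float fov   = 45.0f * 3.14159265f / 180.0f; /* vertical FOV, radians */
    const float zNear = 0.1f, zFar = 100.0f;          /* assumed clip planes */
    float aspect = (float)width / (float)height;

    float top    = tanf(fov * 0.5f) * zNear;
    float bottom = -top;
    float right  = aspect * top;
    float left   = aspect * bottom;

    glViewport(0, 0, (GLsizei)width, (GLsizei)height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(left, right, bottom, top, zNear, zFar);
    glMatrixMode(GL_MODELVIEW);
}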
