Simplest way to set up a 3D OpenGL perspective projection (C)

There are many tutorials, each suggesting gluPerspective or glFrustum in combination with other calls, yet I've had difficulty setting up the right matrix. What code do I need to set up a 45° perspective view looking down the +z axis?
So far I have:
glShadeModel(GL_SMOOTH);
glClearColor(0,0,0,0);
glClearDepth(1);
glDepthFunc(GL_LEQUAL);
glViewport(0,0,width,height);
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45,1,0.1,100);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
But that doesn't seem to work. All I get is a black screen when I attempt to draw things.
EDIT: Here's the minimal drawing code:
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glColor3ub(255,255,255);
glBegin(GL_TRIANGLE_STRIP);
glVertex3f(20,20,20);
glVertex3f(20,30,20);
glVertex3f(30,20,20);
glVertex3f(30,30,20);
glEnd();

Points at positions such as (1,1,1) and (2,50,23) do not appear.
Well, there's your problem. The default OpenGL camera has the +Z axis pointing towards the camera. And since the camera sits at Z=0, any position whose Z coordinate is >0 is behind the camera.
Move your points in front of the camera: at the very least they need a negative Z coordinate.

Your vertex coordinates lie way outside the viewing volume. First, OpenGL by default "looks" down the negative Z axis, so your Z coordinates must satisfy -100 < z < -0.1 for your chosen near and far clip planes.
But even if you flipped the sign on the Z coordinate, your vertices would still lie outside the 45° FOV: (20, 0, 20) is 45° off the viewing axis, and (30, 0, 20) even farther. Try centering your vertex coordinates around (0, 0, -5), e.g. (±1, ±1, -5).
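For reference, a minimal fixed-function sketch combining both fixes. The window size arguments and the (0, 0, -5) placement are assumptions, and I've made the aspect ratio track the viewport instead of hardcoding 1:
void setup(int width, int height)
{
    glViewport(0, 0, width, height);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glClearColor(0, 0, 0, 0);
    glClearDepth(1);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45, (double)width / height, 0.1, 100);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

void draw(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glColor3ub(255, 255, 255);
    glBegin(GL_TRIANGLE_STRIP);
    glVertex3f(-1, -1, -5); /* negative Z: in front of the camera */
    glVertex3f(-1,  1, -5);
    glVertex3f( 1, -1, -5);
    glVertex3f( 1,  1, -5);
    glEnd();
}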

Related

OpenGL: Sphere texture appearing oddly

I'm currently trying to map this pool ball texture to a sphere I have created. My approach is as follows:
Generate the sphere vertices
For every sphere vertex, translate that vertex's coordinates from the OpenGL world to texture coordinates.
I want the white circle with the '1' in it to appear at the top of the sphere (at z=1), so I am using the x and z coordinates of the sphere vertices.
The texture file I am using has multiple textures. The texture below is the one I am concerned with. In the texture file, the top left of this particular texture is at (0.01, 0.01) and the bottom right is at (0.24, 0.24). If my math is right, this makes the dead center (0.125, 0.125), with a half-width of 0.115. Since I want the white circle to be on top of the ball (z=1), I've come up with the following two lines of code to map the points:
tex_coords[i].x = 0.125 + (verticies[i].x)*0.115;
tex_coords[i].y = 0.125 + (verticies[i].z)*0.115;
My logic is that if X or Z is 0, the respective texture coordinate is 0.125, which is right in the middle. Otherwise, X and Z range from -1 to 1, so the maximum value we can reach is 0.125 + 0.115 = 0.24 and the minimum is 0.125 - 0.115 = 0.01.
As you can see in the bottom screenshot, something has gone wrong. If you look very closely you can see that one tiny part of the sphere is colored white.
There was a discrepancy between one of my shaders and my init function. I had a variable called "vTexCoord" in my shaders but was using "vTexCoords" in my init function.
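(For anyone hitting something similar: a name mismatch like that can be caught at runtime by checking the attribute location after linking. A minimal sketch, assuming "program" is the linked shader program handle:)
GLint loc = glGetAttribLocation(program, "vTexCoord");
if (loc == -1) /* a vTexCoord/vTexCoords mismatch ends up here */
    fprintf(stderr, "attribute vTexCoord not found in program\n");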

Drawing pixels from buffer and position (glDrawPixels)

Basically I am doing some tests to simulate various windows inside a scene. Everything works fine until I try to fine-tune the position of the window that I am drawing inside the scene.
The important code is here:
// camFront = glReadPixels ...
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
//glRasterPos3f(1.0, 0.5, 0.0); // <-- commented out
// Zooming window
glPixelZoom(0.5, 0.5);
glDrawPixels(500, 250, GL_RGB, GL_UNSIGNED_BYTE, camFront); // camFront is the buffer of the window
glutSwapBuffers();
Basically when glRasterPos3f is commented out I got my nice window drawn inside my scene:
Now if I try to position that window with glRasterPos3f, the window disappears completely from the scene... Any clues?
One possible cause of this problem is an invalid raster position. The raster position is set after transforming x, y and z just like any other vertex, and this includes the clipping stage.
The easy test is to see whether a bright point (or something more visible) drawn at your x, y and z appears on screen.
Where is (1.0, 0.5, 0.0) on your screen? Is it visible?
The coordinate has to be a visible point that gets projected onto the screen, becoming a 2D coordinate. Try setting the raster position before resetting the modelview matrix; maybe then the coordinate will be where you expected.
Because you reset the matrix with glLoadIdentity, the point (1.0, 0.5, 0.0) will be at the right edge of the screen - possibly clipped as too far right or too close to the camera.
GLboolean valid;
glGetBooleanv(GL_CURRENT_RASTER_POSITION_VALID, &valid);
if(valid == GL_FALSE)
printf("Error");
(Querying the flag is a more reliable test than drawing something, but it won't tell you where the raster position actually landed if it is valid.)
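If the position keeps getting clipped, another option (assuming OpenGL 1.4 or later is available) is glWindowPos, which takes window coordinates directly and is never clipped. A sketch with an assumed position:
glWindowPos2i(50, 50); /* window coordinates, origin at the lower-left; never marked invalid */
glPixelZoom(0.5, 0.5);
glDrawPixels(500, 250, GL_RGB, GL_UNSIGNED_BYTE, camFront);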

In OpenGL, can I draw a pixel that exactly at the coordinates (5, 5)?

By (5, 5) I mean exactly the fifth row and fifth column.
I found it very hard to draw things using screen coordinates; all the coordinates in OpenGL are relative, usually ranging from -1.0 to 1.0. Why is OpenGL so insistent on preventing programmers from using screen coordinates / window coordinates?
The simplest way is probably to set the projection to match the pixel dimensions of the rendering space via glOrtho. Then vertices can be in pixel coordinates. The downside is that resizing the window could cause problems and you're mostly wasting the accelerated transforms.
Assuming a window that is 640x480:
// You can reverse the 0, 480 arguments depending on your preferred
// Y-axis direction
glOrtho(0, 640, 0, 480, -1, 1);
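Fleshing that out, a sketch for the 640x480 case (the +0.5 offsets, my addition, target the pixel centers):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 640, 0, 480, -1, 1); /* one unit = one pixel */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glBegin(GL_POINTS);
glVertex2f(5.5f, 5.5f); /* pixel (5, 5), origin at the lower-left */
glEnd();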
Frame buffer objects and textures are another avenue, but you'll have to create your own rasterization routines (draw line, circle, bitmap, etc.). There are probably libs for this.
@dandan78 OpenGL is not a vector graphics renderer; it is a rasterizer. More precisely, it is a standard described by means of a C language interface. A rasterizer maps objects represented in 3D coordinate spaces (a car, a tree, a sphere, a dragon) into 2D coordinate spaces (say a plane, your app window or your display), and these 2D coordinates belong to a discrete coordinate plane. The counterpart rendering method to rasterization is ray tracing.
Vector graphics is a way to represent, by means of mathematical functions, a set of curves, lines or similar geometric primitives in a non-discrete way. So vector graphics belongs to the "model representation" field rather than the "rendering" field.
You can just change the "camera" to make 3D coordinates match screen coordinates by setting the modelview matrix to identity and the projection to an orthographic projection (see my answer on this question). Then you can just draw a single point primitive at the required screen coordinates.
You can also set the raster position with glWindowPos (which works in screen coordinates, unlike glRasterPos) and then just use glDrawPixels to draw a 1x1 pixel image.
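A minimal sketch of that second option (the white color is an assumption):
GLubyte px[3] = { 255, 255, 255 }; /* one white pixel */
glWindowPos2i(5, 5); /* window coordinates, unlike glRasterPos */
glDrawPixels(1, 1, GL_RGB, GL_UNSIGNED_BYTE, px);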
glEnable( GL_SCISSOR_TEST );
glScissor( 5, 5, 1, 1 ); /// position of pixel
glClearColor( 1.0f, 1.0f, 1.0f, 0.0f ); /// color of pixel
glClear( GL_COLOR_BUFFER_BIT );
glDisable( GL_SCISSOR_TEST );
By changing the last two arguments of glScissor you can also draw a pixel-perfect rectangle.
I did a bit of 3D programming several years back and, while I'm far from an expert, I think you are overlooking a very important difference between classical bitmapped DrawPixel(x, y) graphics and the type of graphics done with Direct3D and OpenGL.
Back in the days before 3D, computer graphics was mostly about bitmaps, which is to say collections of colored dots. These dots had a 1:1 relationship with the pixels on your monitor.
However, that had numerous drawbacks, including making 3D very difficult and requiring bitmaps of different sizes for different display resolutions.
In OpenGL/D3D, you are dealing with vector graphics. Lines are defined by points in a 3-dimensional coordinate space, shapes are defined by lines and so on. Surfaces can have textures, lights can be added, as can various types of lighting effects etc. This entire scene, or a part of it, can then be viewed through a virtual camera.
What you 'see' through this virtual camera is a projection of the scene onto a 2D surface. We're still dealing with vector graphics at this point. However, since computer displays consist of discrete pixels, this vector image has to be rasterized, which transforms the vector into a bitmap with actual pixels.
To summarize, you can't use screen/window coordinates because OpenGL is based on vector graphics.
I know I'm very late to the party, but just in case someone has this question in the future: I converted screen coordinates to OpenGL's normalized coordinates using these:
double converterX(double x, int window_width) {
    return 2 * (x / window_width) - 1;
}

double converterY(double y, int window_height) {
    return -2 * (y / window_height) + 1;
}
Which are basically re-scaling methods.
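Usage sketch for pixel (5, 5) in a 640x480 window (the +0.5 pixel-center offsets are my addition):
double nx = converterX(5 + 0.5, 640); /* ~ -0.9828 */
double ny = converterY(5 + 0.5, 480); /* ~  0.9771 */
glBegin(GL_POINTS);
glVertex2d(nx, ny);
glEnd();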

OpenGL: problem drawing a 2D overlay on a 3D scene

I have a moving 3D scene set up, and I want to make a stationary 2D GUI overlay that is always on top. When I try making 2D shapes I don't see anything, and when I call glMatrixMode(GL_PROJECTION) my 3D scene disappears and I'm left with a blank window...
here is the code I'm using for the overlay
EDIT: updated code
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-100, 100, -100, 100);
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
glDisable(GL_TEXTURE_2D);
glDisable(GL_LIGHTING);
glColor3f(1, 1, 1);
glPushMatrix();
glBegin(GL_QUADS);
glVertex3f(-5.0f, 5.0f, 0.0f);
glVertex3f(-5.0f, -5.0f, 0.0f);
glVertex3f(5.0f, -5.0f, 0.0f);
glVertex3f(5.0f, 5.0f, 0.0f);
glEnd();
glPopMatrix();
glEnable(GL_DEPTH_TEST);
glutSwapBuffers();
Hmm... Based on the fragment of code you posted, I believe your scene disappears because of what you're doing with your matrices - it looks a bit chaotic to me. The approach should look like this:
clean the screen
3D:
enable lighting, z-test, etc
set active matrix mode to projection
load identity and establish a perspective projection
set active matrix mode back to modelview
draw everything 3D
2D:
disable lighting, z-test, etc
set active matrix mode to projection
load identity and establish an orthographic projection
set active matrix mode back to modelview
draw everything 2D
swap buffers
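In code, one frame following that outline might look like this sketch; draw3DScene/draw2DOverlay and the projection parameters are placeholders:
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* 3D pass */
    glEnable(GL_LIGHTING);
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45, 4.0 / 3.0, 0.1, 100); /* placeholder aspect and clip planes */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    draw3DScene(); /* hypothetical */

    /* 2D pass */
    glDisable(GL_LIGHTING);
    glDisable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(-100, 100, -100, 100);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    draw2DOverlay(); /* hypothetical */

    glutSwapBuffers();
}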
Also, consider switching to shaders (and to a modern OpenGL version in general) if you want to make your life even easier :).
You must draw your quad in the other winding order. By default, OpenGL treats counterclockwise-wound polygons as front-facing. That means you don't see your polygon because you only see its back face.
You might take a look at glFrontFace.
EDIT:
Also, if that doesn't work, you could try to disable the following states:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-100, 100, -100, 100);
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);
glDisable(GL_LIGHTING);
You might want use glPushAttrib and glPopAttrib in order not to mess your state.
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
glDisable(GL_TEXTURE_2D);
glDisable(GL_LIGHTING);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-100, 100, -100, 100);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glColor3f(1, 1, 1);
glBegin(GL_QUADS);
glVertex3f(20.0f, 20.0f, 0.0f);
glVertex3f(20.0f, -20.0f, 0.0f);
glVertex3f(-20.0f, -20.0f, 0.0f);
glVertex3f(-20.0f, 20.0f, 0.0f);
glEnd();
/// Now swap buffers
In addition, I also use a separate FBO for this kind of thing. Usually the overlay doesn't have to be redrawn every frame, so render it on demand to an FBO and just render that as a fullscreen quad each frame. It costs some fill rate, but I find it is usually faster anyway and makes the code much cleaner.
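A rough sketch of that caching idea, using ARB_framebuffer_object-style calls (OpenGL 3.0+); the sizes and the drawOverlay() helper are mine:
GLuint fbo, overlayTex;

/* Once: create a texture-backed FBO to cache the overlay into. */
void createOverlayTarget(int w, int h)
{
    glGenTextures(1, &overlayTex);
    glBindTexture(GL_TEXTURE_2D, overlayTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, overlayTex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

/* Only when the GUI actually changes: redraw it into the FBO.
   Set glViewport to w x h here if it differs from the window. */
void updateOverlay(void)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glClear(GL_COLOR_BUFFER_BIT);
    drawOverlay(); /* hypothetical GUI drawing routine */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
Then each frame, bind overlayTex and draw a fullscreen textured quad over the 3D scene.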
Make sure your geometry (specifically the Z coordinates of your 2D UI geometry) is beyond the near plane, i.e. behind it on the Z axis; any rendering that takes place in front of the near plane will not be seen. I'm assuming you have defined your view frustum somewhere else in the code (that is where the near plane is defined).
If the near-plane is 0.01f, then your vertex definitions could be
glVertex3f(-5.0f, 5.0f, -0.02f);
glVertex3f(-5.0f, -5.0f, -0.02f);
glVertex3f(5.0f, -5.0f, -0.02f);
glVertex3f(5.0f, 5.0f, -0.02f);
I believe that with the default GL_MODELVIEW matrix you are always looking down the -Z axis.
I hope this helps.
I may be wrong, but I think GL_DEPTH_TEST refers to the z-buffering of your final rendered object; I don't think disabling it bypasses the near-plane clipping.
glGetBooleanv(GL_BLEND, &m_origin_blend);
glGetBooleanv(GL_DEPTH_TEST, &m_origin_depth);
glGetBooleanv(GL_CULL_FACE, &m_origin_cull);

setAlphaBlending(true);
setDepthTest(false);
setCullFace(false); // by stone

// ... your draw code here ...

setAlphaBlending(m_origin_blend > 0 ? true : false);
setDepthTest(m_origin_depth > 0 ? true : false);
setCullFace(m_origin_cull > 0 ? true : false); // by stone

OpenGL: How do I avoid rounding errors when specifying UV co-ordinates

I'm writing a 2D game using OpenGL. When I want to blit part of a texture as a sprite I use glTexCoord2f(u, v) to specify the UV co-ordinates, with u and v calculated like this:
GLfloat u = (GLfloat)xpos_in_texture/(GLfloat)width_of_texture;
GLfloat v = (GLfloat)ypos_in_texture/(GLfloat)height_of_texture;
This works perfectly most of the time, except when I use glScale to zoom the game in or out. Then floating point rounding errors cause some pixels to be drawn one to the right of or one below the intended rectangle within the texture.
What can be done about this? At the moment I'm subtracting an 'epsilon' value from the right and bottom edges of the rectangle, and it seems to work but this seems like a horrible kludge. Are there any better solutions?
Your issue is most likely not coming from rounding errors, but a misunderstanding on how OpenGL maps texels to pixels. If you notice off-by-one errors, it's probably because your UVs, your vertex positions or your projection matrix/viewport pair are not aligned to where they ought to be.
To simplify, I'll just talk about 1D and assume you use a projection and a viewport that map X, Y coordinates to the equivalent pixel location, i.e. glOrtho(0, width, 0, height, zmin, zmax) and glViewport(0, 0, width, height).
Say you want to draw 5 texels (starting at 0 for simplicity) of your 64-wide texture showing on the 10 pixels (scale of 2) of your screen starting at pixel 20.
To get there, draw the triangle with X coordinates 20 and 30, and U (of the UV pair) of 10/64 and 15/64. The rasterization of OpenGL will generate 10 pixels to shade, with X coordinates 20.5, 21.5, ... 29.5. Note that the positions are not full integers. OpenGL rasterizes in the middle of the pixel.
Likewise, it will generate U coordinates of 10.25/64, 10.75/64, 11.25/64, 11.75/64 ... 14.25/64, 14.75/64. Note again that texel coordinates, brought back to texel positions in the texture space, are not full integers. OpenGL samples from the middle of texel locations, so this is fine.
How the samplers use these UVs to generate texel values depend on filtering modes, but be it nearest or linear, the pixels should be contained solely inside the texels of interest (0.25 with a size of 0.5 should only use color from 0 to 0.5, which is all inside the first texel).
In general, if you follow the general principles I laid out, you should never see artifacts.
Use Ortho and Viewport of exactly your frame buffer size
Use positions of X, X+width exactly
Use UVs that correspond to exactly the texels you want (if you want the 10 texels starting from texel 0, use U values covering texel 0 through texel 10, i.e. 0/64 to 10/64 for a 64-wide texture).
If you ever have a -1 somewhere in your math, it's likely not correct (for position or UVs).
To get back to your example, it's unclear how you link the UVs you compute to positions (since you don't show the position computation).
It's also unclear how you got xpos_in_texture (you should explain how you computed it for the corners of your sprite). My guess is that you computed it wrong.
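To make the recipe concrete, a sketch under the assumptions above (all the variable names and values here are mine):
/* Blit texels [x, x+w) x [y, y+h) from a tex_w x tex_h texture to
   integer screen position (sx, sy) at integer scale s; no -1, no epsilon. */
int x = 10, y = 0, w = 5, h = 5, tex_w = 64, tex_h = 64;
int sx = 20, sy = 20, s = 2;
GLfloat u0 = (GLfloat)x / tex_w, u1 = (GLfloat)(x + w) / tex_w;
GLfloat v0 = (GLfloat)y / tex_h, v1 = (GLfloat)(y + h) / tex_h;

glBegin(GL_QUADS);
glTexCoord2f(u0, v0); glVertex2f(sx,         sy);
glTexCoord2f(u1, v0); glVertex2f(sx + w * s, sy);
glTexCoord2f(u1, v1); glVertex2f(sx + w * s, sy + h * s);
glTexCoord2f(u0, v1); glVertex2f(sx,         sy + h * s);
glEnd();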
A bit late, but for posterity: I was having the same problem, with pixels from adjacent regions of a texture atlas bleeding into sprites/tiles when scaling or zooming the view. I had my glOrtho, glViewport, etc. dimensions all set correctly. Then I realized the problem: I was scaling the view before translating the camera, so even though I was snapping to integer pixels pre-zoom, after the zoom everything would align to a fraction of a pixel and introduce the texel problem.
So if your code looks something like this, where camera.zoom is a non-integer (i.e. 0.75):
glScalef(camera.zoom, camera.zoom, 1.0f);
glTranslatef(camera.x, camera.y, 0.0f);
You'll want to make sure the result of the translation after scaling aligns to whole pixels on the screen, so you can do something like:
glScalef(camera.zoom, camera.zoom, 1.0f);
glTranslatef(
    floor(camera.x * camera.zoom) / camera.zoom,
    floor(camera.y * camera.zoom) / camera.zoom,
    0.0f);
Do the division as a double, round the result down yourself to the desired level of precision, then cast it to GLfloat.
Your xpos/ypos must range from 0 to (width or height) - 1, and then:
GLfloat u = (GLfloat)xpos_in_texture/(GLfloat)(width_of_texture - 1);
GLfloat v = (GLfloat)ypos_in_texture/(GLfloat)(height_of_texture - 1);
