The following code:
glPointSize(8);
glBegin(GL_POINTS);
glColor3fv(yellow);
glVertex2i(50, 50);
glEnd();
...produces a point that is exactly one pixel in size, barely visible.
I've tried using:
glEnable(GL_PROGRAM_POINT_SIZE);
I've also tried changing the argument to GL_POINT instead of GL_POINTS, but that didn't even draw a pixel.
I've also tried moving the calls around, such as placing the glColor3fv call outside the glBegin/glEnd block, but to no avail.
And I have yet to find anyone on the internet with my exact problem. Could it be a problem with my hardware? (Debian 11 on a Chromebook)
I've recently been playing with glPolygonOffset( factor, units ) and found something interesting.
I used GL_POLYGON_OFFSET_FILL and set factor and units to negative values so the filled object is pulled toward the viewer. This pulled object is supposed to cover the wireframe, which is drawn right after it.
This works correctly for pixels inside the object. However, for pixels on the object's outline, it seems the filled object is not pulled, and lines are still visible there.
Before pulling the filled object: [screenshot]
After pulling the filled object: [screenshot]
glEnable(GL_POLYGON_OFFSET_FILL);
float line_offset_slope = -1.f;
float line_offset_unit = 0.f;
// I also tried slope = 0.f and unit = -1.f, no changes
glPolygonOffset( line_offset_slope, line_offset_unit );
DrawGeo();   // filled pass, offset toward the viewer
glDisable( GL_POLYGON_OFFSET_FILL );
glPolygonMode( GL_FRONT_AND_BACK, GL_LINE );
DrawGeo();   // wireframe pass
I read THIS POST about the meaning and usage of glPolygonOffset(), but I still don't understand why the pulling doesn't happen to the pixels on the border.
To do this properly, you definitely do not want a unit of 0.0f. You want the pass that is supposed to be drawn on top of the wireframe to have a depth value at least 1 unit closer than the wireframe, no matter the slope of the primitive being drawn. There is a far simpler approach that I will discuss below, though.
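In code, that means a decal-style bias with both parameters non-zero (the values here are illustrative; how much is enough depends on your depth buffer precision):
glPolygonOffset(-1.0f, -1.0f);   // slope-scaled bias plus at least one constant depth unit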
One other thing to note is that line primitives have different coverage rules during rasterization than polygons. Lines use a diamond pattern for coverage testing and triangles use a square. You will sometimes see software apply a sub-pixel offset like (0.375, 0.375) to everything drawn; this is a hack to make the coverage tests for triangle edges and lines consistent. However, the depth value generated by line primitives also differs from that of planar polygons, so lines and triangles often do not play well together in multi-pass rendering.
glPolygonMode (...) does not change the actual primitive type (it only changes how polygons are filled), so that will not be an issue if this is your actual code. However, if you try doing this with GL_LINES in one pass and GL_TRIANGLES in another you might get different results if you do not consider pixel coverage.
As for doing this more simply: you should be able to use a depth test of GL_LEQUAL (the default is GL_LESS) and avoid a depth offset altogether, assuming you draw the same sphere on both passes. You will want to swap the order you draw your wireframe and filled sphere, however: the thing that should be on top needs to be drawn last.
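A minimal sketch of that simpler approach, reusing the question's DrawGeo() helper (an assumption; substitute your own draw call):
glDepthFunc(GL_LEQUAL);                       // let fragments at equal depth pass the test
glPolygonMode( GL_FRONT_AND_BACK, GL_LINE );  // wireframe pass first
DrawGeo();
glPolygonMode( GL_FRONT_AND_BACK, GL_FILL );  // filled pass last: it wins depth ties and covers the lines
DrawGeo();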
I am having problems using a camera in OpenGL/freeGLUT. Here is my code:
http://pastebin.com/VCi3Bjq5
(For some reason, when I paste the code into the code block on this site, it comes out extremely garbled.)
As far as I can tell, this should rotate the camera when the arrow keys are pressed, but it does nothing. It also seems that even the initial camera position is wrong. Any clue why?
The display function is only called once. You need to either set an idle function with glutIdleFunc() or tell GLUT that the display function must be called again with glutPostRedisplay().
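A minimal sketch of the second option, assuming a freeGLUT special-key callback along the lines the question implies (cameraAngle and the key handling are hypothetical, since the real code is only on pastebin):
#include <GL/freeglut.h>

static float cameraAngle = 0.0f;               // hypothetical camera state

void specialKeys(int key, int x, int y)
{
    if (key == GLUT_KEY_LEFT)  cameraAngle -= 5.0f;
    if (key == GLUT_KEY_RIGHT) cameraAngle += 5.0f;
    glutPostRedisplay();                       // tell GLUT to call the display function again
}

// registered during setup with: glutSpecialFunc(specialKeys);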
In the walkthrough for the BlackBerry 10 SDK using OpenGL ES, it uses two commands, namely:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
and later:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
I don't understand what these are used for when initializing the viewport. If I take those lines out, the program still runs perfectly and nothing changes.
I see it has something to do with the matrices used in rendering, but I'm not sure I understand which matrix, as this is only during initialization, before any sort of rendering.
Called in an initialization routine, those calls do nothing. The default value of both matrices is the identity, so it's just setting them to the value they already have.
As to why it is there, I guess some people just like to explicitly set up their context so they know for sure what the current value is; maybe it's easier to remember, or they don't trust the context to have the right default value, I don't know.
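For what it's worth, the place these calls typically do matter is a reshape handler, where the projection is rebuilt for the new window size. A minimal fixed-function sketch (desktop GL names; the frustum values are illustrative):
void reshape(int w, int h)
{
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();                          // reset before applying the new projection
    glFrustum(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);                // switch back so drawing code edits the modelview matrix
    glLoadIdentity();
}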
I have a WebGL fragment shader that I am using to do raytracing. I pass in sphere and triangle data using textures. So far I've got 2 spheres and 3 triangles working. When I add the call to check intersection with a 4th triangle, the shader does not link, and calling getProgramInfoLog() just returns null.
Could the fragment shader be getting too big? Or do I need to look for another cause? How do I determine where the problem might be?
Here is a code snippet, commenting out any one of the checkTriangleIntersection calls causes the shader to link successfully.
checkTriangleIntersection(0.0, rayOrigin, rayDir, piOfNearest, normalOfNearest, colourOfNearest, distOfNearest);
checkTriangleIntersection(1.0, rayOrigin, rayDir, piOfNearest, normalOfNearest, colourOfNearest, distOfNearest);
checkTriangleIntersection(2.0, rayOrigin, rayDir, piOfNearest, normalOfNearest, colourOfNearest, distOfNearest);
//checkTriangleIntersection(3.0, rayOrigin, rayDir, piOfNearest, normalOfNearest, colourOfNearest, distOfNearest);
Since all the calls are the same except for the index, I thought there would be nothing wrong with the code itself, but is there some kind of limit I could be running up against?
I'm getting more than 30 FPS before I add the extra function call, and even when I do add the extra call, both the vertex and fragment shaders compile OK.
I got rid of the issue by putting the calls to checkTriangleIntersection() into a for loop.
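A minimal sketch of the loop form in the fragment shader (GLSL, assuming the same signature as the calls above; the bound of 4 matches the four triangles):
for (int i = 0; i < 4; ++i) {
    checkTriangleIntersection(float(i), rayOrigin, rayDir, piOfNearest,
                              normalOfNearest, colourOfNearest, distOfNearest);
}
Presumably the repeated calls were being inlined and the unrolled code exceeded some internal limit; the loop keeps the generated program small enough to link.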
I've been trying to get a HUD texture to display for a simulator for a while now, without success.
First I bind the texture like this:
glGenTextures(1,&hudTexObj);
gHud = getPPM("textures/purplenebula/hud.ppm",&n,&m,&s);
glBindTexture(GL_TEXTURE_2D,hudTexObj);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
//glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,n,m,0,GL_RGB,GL_UNSIGNED_INT, gHud);
And then I attempt to map it onto a quad, which results in the whole quad being a single brown color, when I want it to use all the texels. Here's how I map it:
glBindTexture(GL_TEXTURE_2D,hudTexObj);
glBegin(GL_QUADS);
glTexCoord2f(0.0,0.0);
glVertex2f(0,0);
glTexCoord2f(0.0,1.0);
glVertex2f(0,m);
glTexCoord2f(1.0,1.0);
glVertex2f(n,m);
glTexCoord2f(1.0,0.0);
glVertex2f(n,0);
glEnd();
The weird thing is that I've gotten the exact code above to display the texture in a standalone program, yet when I put it into my main program it fails. Could it have to do with the texture matrix? I'm dumbfounded at this point.
Stupidly, I had enabled automatic texture coordinate generation far away in another part of the code. So if you see one texel's color covering the whole image, that is a likely cause.
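For anyone hitting the same symptom, a minimal sketch of the corresponding fix is to make sure texgen is disabled before drawing the textured quad:
glDisable(GL_TEXTURE_GEN_S);   // stop auto-generating the S coordinate
glDisable(GL_TEXTURE_GEN_T);   // stop auto-generating the T coordinate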