OpenGL: Sphere texture appearing oddly - c

I'm currently trying to map this pool ball texture to a sphere I have created. My approach is as follows:
Generate the sphere vertices
For every sphere vertex, translate that vertex's coordinates from the OpenGL world to texture coordinates.
I want the white circle with the '1' in it to appear at the top of the sphere (at z=1), so I am using the x and z coordinates of the sphere vertices.
The texture file I am using has multiple textures. The texture below is the one I am concerned with. In the texture file, the top left of this particular texture is at (0.01, 0.01) and the bottom right is at (0.24, 0.24). If my math is right, this puts the dead center at (0.125, 0.125), with a half-width of 0.115. Since I want the white circle to be on top of the ball (z=1), I've come up with the following two lines of code to map the points:
tex_coords[i].x = 0.125 + (verticies[i].x)*0.115;
tex_coords[i].y = 0.125 + (verticies[i].z)*0.115;
My logic is that if X or Z is 0, the respective texture coordinate is 0.125, which is right in the middle. Otherwise, X and Z range from -1 to 1, so the maximum value we can reach is 0.24 and the minimum value is 0.01.
As you can see in the bottom screenshot, something has gone wrong. If you look very closely you can see that one tiny part of the sphere is colored white.

There was a discrepancy between one of my shaders and my init function. I had a variable called "vTexCoord" in my shaders but was using "vTexCoords" in my init function.
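For reference, the mismatch looked roughly like this in the init code (the program handle and buffer layout below are illustrative, not my exact code); the point is simply that the name passed to glGetAttribLocation has to match the shader declaration exactly:
/* Vertex shader declares:  attribute vec2 vTexCoord; */
GLint loc = glGetAttribLocation(program, "vTexCoord"); /* must match the shader name exactly */
/* Asking for "vTexCoords" (note the extra s) returns -1, and the data is silently never fed */
if (loc != -1) {
    glEnableVertexAttribArray((GLuint)loc);
    glVertexAttribPointer((GLuint)loc, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid *)0);
}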

Related

OpenGL Rotate an Object around Its Local Axes

Imagine a 3D rectangle at the origin. It is first rotated around the Y-axis. So far so good. Now it is rotated around the X-axis. However, OpenGL (API: glRotatef) interprets the X-axis to be the global X-axis. How can I ensure that the "axes move with the object"?
This is very much like an airplane. For example, if yaw (Y rotation) is applied first, and then pitch (X-rotation), a correct pitch would be X-rotation along the plane's local axes.
EDIT: I have seen this called the gimbal lock problem, but I don't think that's what it is.
You cannot consistently describe an aeroplane's orientation as one x rotation and one y rotation. Not even if you also store a z rotation. That's exactly the gimbal lock problem.
The crux of it is that you have to apply the rotations in some order. Say it's x then y then z for the sake of argument. Then what happens if the x rotation is by 90 degrees? That folds the y axis onto where the z axis was. Then say the y rotation is also by 90 degrees. That's now bent the z axis onto where the x axis was. So now what effect does any z rotation have?
That's just an easy to grasp example. It's not a special case. You can't wave your hands out of it by saying "oh, I'll detect when to do z rotations first" or "I'll do 90 degree rotations with a special pathway" or any other little hack. Trying to store and update orientations as three independent scalars doesn't work.
In classic OpenGL, a call to glRotatef means "... and then rotate the current matrix like this". It's not relative to world coordinates or to model coordinates or to any other space that you're thinking in.
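As a minimal sketch of what that composition means in practice (yaw and pitch are illustrative angles you would track yourself, drawAeroplane a hypothetical draw call): because each glRotatef multiplies the current matrix, the later call is expressed in the frame produced by the earlier ones, so the pitch below is applied about the already-yawed, i.e. local, X axis.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(yaw,   0.0f, 1.0f, 0.0f);  /* yaw about the world Y axis           */
glRotatef(pitch, 1.0f, 0.0f, 0.0f);  /* pitch about the yawed (local) X axis */
drawAeroplane();                     /* illustrative draw call               */
That gives local-axis behaviour for one fixed order of rotations, but it still doesn't make three stored angles a good long-term representation of orientation, for the reasons above.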

how do I do "reverse" texture mapping from texture image x,y to 3d space?

I am using WPF 3D, but I think this question should apply to any 3d texture mapping.
Suppose I have a model of a cow, and I want to draw a circular spot on the cow (and I want to do this dynamically -- suppose I don't know the location of the spot until run-time). I could do this by coloring the vertexes (vertexes are assigned a color based on their distance from the center of the spot), but if the model is fairly low-poly, that will give a pretty jagged-edged circle.
I could do it using a pixel shader, where the shader colors each pixel based on its distance from the center of the spot. But suppose I don't have access to pixel shaders (since I don't in WPF).
So, it seems that what I want to do is dynamically create a texture with the circle pattern on it, and texture the cow with it.
The question is: As I'm drawing that texture, how can I know what 3d coordinate in model space a given xy coordinate on the texture image corresponds to?
That is, suppose I have already textured my model with a plain white texture -- I've set up texture coordinates, done texture mapping, but don't have the texture image yet. So I have this 1000x1000 (or whatever) pixel image that gets draped nicely over the cow according to some nice texture coordinates that have been set up on the model beforehand. I understand that when the 3D hardware goes to draw a given triangle, it uses the texture coordinates of the vertexes of the triangle to find the corresponding triangular region of the image, and then interpolates across the surface of the triangle to fill displayed model pixels with colors from that triangular region of the image.
How do I go the other way? How do I say, for this given xy point on my texture image, and given the texture coordinates that have already been set up on the model, what's the 3d coordinate in model space that this image pixel is going to correspond to once texture mapping happens?
If I had such a function, I could color my texture map image such that all the points (in 3d space) within a certain distance of the circle center point on the cow would get one color, and all points outside that distance would get another color, and I'd end up with a nice, crisp circular spot on the cow, even with a relatively low-poly model. Does that sound right?
I do understand that given the texture coordinates for the vertexes of each triangle, I can step through the triangles in my model, find the corresponding triangle on the texture image, and do my own interpolation, across the texture pixels in that triangle, by interpolating across the 3d plane determined by the vertex points. And that doesn't sound too hard. But I'm just trying to understand if there is some standard 3d concept/function where I can just call a ready-made function to give me the model space coordinates for a given texture xy.
I did end up getting this working. I walk every point on the texture (1024 x 1024 points). Using the model's texture coordinates, I determine which polygon face, if any, the given u,v point is inside of. If it's inside of a face, I get the model coordinates for each point on that face. I then do a barycentric interpolation as described here: http://paulbourke.net/texture_colour/interpolation/
That is, for each u,v point on the texture, I use an inside-polygon check to determine which quad it's in on the 2D texture sheet and then I use an interpolation on that same 2D geometry as described in the link above, but instead of interpolating colors or normals I'm interpolating 3D coordinates.
I can then use the 3D coordinate to color the point on the texture (e.g., to color a circular spot on the cow based on how far in model space the given texture point is from the spot center point). And then I can apply the texture to the model, and it works.
Again, it seems like this must be a standard procedure with a name...
One issue is that the result is very sensitive to the quality of the texturing as set up by the modeler. For instance, if a relatively large quad on the cow corresponds to a small quad on the texture image, there just aren't enough pixels to work with to get a smooth curve within that model quad once the texture is applied. You can of course use a higher-res texture, such as 2048x2048, but then your loop time is 4x.
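For what it's worth, a minimal sketch of the per-texel interpolation step described above might look like this (written here in C for consistency with the rest of the thread, for a single triangle; the quads I mention above would just be split into two). Vec2/Vec3 and the parameter layout are illustrative, not from my actual code:
typedef struct { float x, y; } Vec2;
typedef struct { float x, y, z; } Vec3;

/* Given a texel's (u,v), the triangle's UVs and its model-space corners,
   compute the interpolated model-space position; returns 0 if (u,v) is outside. */
int uv_to_model(Vec2 p, const Vec2 uv[3], const Vec3 pos[3], Vec3 *out)
{
    float d = (uv[1].y - uv[2].y) * (uv[0].x - uv[2].x)
            + (uv[2].x - uv[1].x) * (uv[0].y - uv[2].y);
    if (d == 0.0f) return 0;                        /* degenerate UV triangle */
    float a = ((uv[1].y - uv[2].y) * (p.x - uv[2].x)
             + (uv[2].x - uv[1].x) * (p.y - uv[2].y)) / d;
    float b = ((uv[2].y - uv[0].y) * (p.x - uv[2].x)
             + (uv[0].x - uv[2].x) * (p.y - uv[2].y)) / d;
    float c = 1.0f - a - b;
    if (a < 0.0f || b < 0.0f || c < 0.0f) return 0; /* outside this triangle  */
    out->x = a * pos[0].x + b * pos[1].x + c * pos[2].x;
    out->y = a * pos[0].y + b * pos[1].y + c * pos[2].y;
    out->z = a * pos[0].z + b * pos[1].z + c * pos[2].z;
    return 1;
}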
It's actually a rasterization process, if I didn't misunderstand your question. In lightmapping, one also needs to find the corresponding positions and normals in world space for each texel in lightmap space and then bake irradiance (which seems similar to your goal).
You can use the standard graphics API to do this task instead of writing your own implementation. Map:
Size of texture -> Size of G-buffers
UVs of each mesh triangle -> Vertex positions vec3(u, v, 0) of the input stage
Indices of each mesh triangle -> Indices of the input stage
Positions (and normals, etc.) of each mesh triangle -> Attributes of the input stage
After the rasterizer stage of the graphics pipeline, all fragments that lie within the UV triangle are generated, and the attributes that have been supplied are interpolated automatically. You can do whatever you want in the pixel shader now!
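If shaders really are unavailable (as in the WPF case above), a rough fixed-function variant of the same idea is sketched below: rasterize each triangle in UV space and let the hardware interpolate the model-space position, smuggled through the per-vertex color. Precision is then limited to 8 bits per channel unless a float render target is used, and every name and size here is illustrative.
typedef struct { float x, y, z, u, v; } Vertex;

/* Assumes the current render target (window or FBO) is tex_w x tex_h texels
   and that model-space positions fit in [-1, 1] so they survive the color remap. */
void bake_positions_to_uv_space(const Vertex *verts, const unsigned *idx,
                                int index_count, int tex_w, int tex_h)
{
    glViewport(0, 0, tex_w, tex_h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);        /* UV space fills the target */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBegin(GL_TRIANGLES);
    for (int i = 0; i < index_count; ++i) {
        const Vertex *v = &verts[idx[i]];
        /* model-space position remapped to [0,1] and carried as a color */
        glColor3f(v->x * 0.5f + 0.5f, v->y * 0.5f + 0.5f, v->z * 0.5f + 0.5f);
        glVertex2f(v->u, v->v);                    /* the UV is the 2D position */
    }
    glEnd();
    /* glReadPixels then yields an interpolated position for every covered texel */
}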

In OpenGL, can I draw a pixel that exactly at the coordinates (5, 5)?

By (5, 5) I mean exactly the fifth row and fifth column.
I find it very hard to draw things using screen coordinates; all the coordinates in OpenGL are relative, usually ranging from -1.0 to 1.0. Why is it made so hard for programmers to use screen/window coordinates?
The simplest way is probably to set the projection to match the pixel dimensions of the rendering space via glOrtho. Then vertices can be in pixel coordinates. The downside is that resizing the window could cause problems and you're mostly wasting the accelerated transforms.
Assuming a window that is 640x480:
// You can reverse the 0,480 arguments depending on your Y-axis
// direction preference
glOrtho(0, 640, 0, 480, -1, 1);
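With the question's pixel (5, 5) in mind, a minimal usage sketch under that projection might look like this (note that pixel centers sit at half-integer coordinates, hence the 5.5):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 640, 0, 480, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBegin(GL_POINTS);
glVertex2f(5.5f, 5.5f);   /* center of the pixel at window coordinates (5, 5) */
glEnd();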
Frame buffer objects and textures are another avenue, but you'll have to create your own rasterization routines (draw line, circle, bitmap, etc.). There are probably libs for this.
@dandan78 OpenGL is not a vector graphics renderer; it is a rasterizer. More precisely, it is a standard described by means of a C language interface. A rasterizer maps objects represented in 3D coordinate spaces (a car, a tree, a sphere, a dragon) onto 2D coordinate spaces (say a plane, your app window, or your display); these 2D coordinates belong to a discrete coordinate plane. The counterpart rendering method to rasterization is ray tracing.
Vector graphics is a way to represent, by means of mathematical functions, a set of curves, lines, or similar geometric primitives in a non-discrete way. So vector graphics belongs to the "model representation" field rather than the "rendering" field.
You can just change the "camera" to make 3D coordinates match screen coordinates by setting the modelview matrix to identity and the projection to an orthographic projection (see my answer on this question). Then you can just draw a single point primitive at the required screen coordinates.
You can also set the raster position with glWindowPos (which works in screen coordinates, unlike glRasterPos) and then just use glDrawPixels to draw a 1x1 pixel image.
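A minimal sketch of that approach (the white pixel data is just an example):
GLubyte white[3] = { 255, 255, 255 };
glWindowPos2i(5, 5);                                  /* window coordinates, origin at lower-left */
glDrawPixels(1, 1, GL_RGB, GL_UNSIGNED_BYTE, white);  /* writes exactly one pixel                 */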
glEnable( GL_SCISSOR_TEST );
glScissor( 5, 5, 1, 1 ); /// position of pixel
glClearColor( 1.0f, 1.0f, 1.0f, 0.0f ); /// color of pixel
glClear( GL_COLOR_BUFFER_BIT );
glDisable( GL_SCISSOR_TEST );
By changing the last two arguments of glScissor you can also draw a pixel-perfect rectangle.
I did a bit of 3D programming several years back and, while I'm far from an expert, I think you are overlooking a very important difference between classical bitmapped DrawPixel(x, y) graphics and the type of graphics done with Direct3D and OpenGL.
Back in the days before 3D, computer graphics was mostly about bitmaps, which is to say collections of colored dots. These dots had a 1:1 relationship with the pixels on your monitor.
However, that had numerous drawbacks, including making 3D very difficult and requiring bitmaps of different sizes for different display resolutions.
In OpenGL/D3D, you are dealing with vector graphics. Lines are defined by points in a 3-dimensional coordinate space, shapes are defined by lines and so on. Surfaces can have textures, lights can be added, as can various types of lighting effects etc. This entire scene, or a part of it, can then be viewed through a virtual camera.
What you 'see' through this virtual camera is a projection of the scene onto a 2D surface. We're still dealing with vector graphics at this point. However, since computer displays consist of discrete pixels, this vector image has to be rasterized, which transforms the vector into a bitmap with actual pixels.
To summarize, you can't use screen/window coordinates because OpenGL is based on vector graphics.
I know I'm very late to the party, but just in case someone has this question in the future: I converted screen coordinates to OpenGL's normalized [-1, 1] coordinates using these:
// Map a pixel x in [0, window_width] to OpenGL's normalized [-1, 1] range
double converterX (double x, int window_width) {
    return 2 * (x / window_width) - 1;
}

// Map a pixel y (top-left origin) to [-1, 1], flipped so the top maps to +1
double converterY (double y, int window_height) {
    return -2 * (y / window_height) + 1;
}
Which are basically re-scaling methods.
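For example, an illustrative call for a point at pixel (5, 5) in a 640x480 window:
double ndcX = converterX(5.0, 640);   /* about -0.984 */
double ndcY = converterY(5.0, 480);   /* about  0.979 */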

OpenGL: How do I avoid rounding errors when specifying UV co-ordinates

I'm writing a 2D game using OpenGL. When I want to blit part of a texture as a sprite I use glTexCoord2f(u, v) to specify the UV co-ordinates, with u and v calculated like this:
GLfloat u = (GLfloat)xpos_in_texture/(GLfloat)width_of_texture;
GLfloat v = (GLfloat)ypos_in_texture/(GLfloat)height_of_texture;
This works perfectly most of the time, except when I use glScale to zoom the game in or out. Then floating point rounding errors cause some pixels to be drawn one to the right of or one below the intended rectangle within the texture.
What can be done about this? At the moment I'm subtracting an 'epsilon' value from the right and bottom edges of the rectangle, and it seems to work but this seems like a horrible kludge. Are there any better solutions?
Your issue is most likely not coming from rounding errors, but a misunderstanding on how OpenGL maps texels to pixels. If you notice off-by-one errors, it's probably because your UVs, your vertex positions or your projection matrix/viewport pair are not aligned to where they ought to be.
To simplify, I'll just talk about 1D and assume you use a projection and a viewport that map X,Y coordinates to the equivalent pixel locations (i.e. glOrtho(0, width, 0, height, zmin, zmax) and glViewport(0, 0, width, height)).
Say you want to draw 5 texels of your 64-wide texture (texels 10 through 14, say) showing on the 10 pixels (scale of 2) of your screen starting at pixel 20.
To get there, draw the triangle with X coordinates 20 and 30, and U (of the UV pair) of 10/64 and 15/64. The rasterization of OpenGL will generate 10 pixels to shade, with X coordinates 20.5, 21.5, ... 29.5. Note that the positions are not full integers. OpenGL rasterizes in the middle of the pixel.
Likewise, it will generate U coordinates of 10.25/64, 10.75/64, 11.25/64, 11.75/64 ... 14.25/64, 14.75/64. Note again that texel coordinates, brought back to texel positions in the texture space, are not full integers. OpenGL samples from the middle of texel locations, so this is fine.
How the sampler uses these UVs to generate texel values depends on the filtering mode, but be it nearest or linear, the pixels should be contained solely inside the texels of interest (0.25 with a size of 0.5 should only use color from 0 to 0.5, which is all inside the first texel).
In general, if you follow the general principles below, you should never see artifacts:
Use Ortho and Viewport of exactly your frame buffer size
Use positions of X, X+width exactly
Use UVs that correspond to exactly the texels you want (if you want the 10 texels starting at texel 0, use U = 0/texwidth to U = 10/texwidth).
If you ever have a -1 somewhere in your math, it's likely not correct (for position or UVs).
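A minimal sketch of those three principles for the 5-texel example above (all sizes illustrative, fixed-function style to match the question's glTexCoord2f usage):
glViewport(0, 0, 640, 480);                 /* framebuffer is 640x480 here        */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 640, 0, 480, -1, 1);             /* ortho matches the framebuffer size */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glBegin(GL_QUADS);                          /* texels [10,15) onto pixels [20,30) */
glTexCoord2f(10.0f / 64.0f, 0.0f / 64.0f);  glVertex2f(20.0f, 100.0f);
glTexCoord2f(15.0f / 64.0f, 0.0f / 64.0f);  glVertex2f(30.0f, 100.0f);
glTexCoord2f(15.0f / 64.0f, 5.0f / 64.0f);  glVertex2f(30.0f, 110.0f);
glTexCoord2f(10.0f / 64.0f, 5.0f / 64.0f);  glVertex2f(20.0f, 110.0f);
glEnd();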
To get back to your example, it's unclear how you link the uvs you compute to positions (since you don't show the position computation).
It's also unclear how you got xpos_in_texture (you should explain how you computed them for the corners of your sprite). My guess is that you computed that wrong.
A bit late, but for posterity: I was having the same problem, with pixels from adjacent regions of a texture atlas bleeding into sprites/tiles when scaling or zooming the view. I had my glOrtho, glViewport, etc. dimensions all set correctly, and then I realized the problem was that I was scaling the view before translating the camera. That meant that even though I was snapping to integer pixels pre-zoom, after the zoom the view would align to a fraction of a pixel and introduce the texel problem.
So if your code looks something like this, where camera.zoom is a non-integer (i.e. 0.75):
glScalef(camera.zoom, camera.zoom, 1.0f);
glTranslatef(camera.x, camera.y, 0.0f);
You'll want to make sure the result of the translation after scaling aligns to whole pixels on the screen, so you can do something like:
glScalef(camera.zoom, camera.zoom, 1.0f);
glTranslatef(
floor(camera.x * camera.zoom) / camera.zoom,
floor(camera.y * camera.zoom) / camera.zoom,
0.0f);
Do the division as a double, round the result down yourself to the desired level of precision, then cast it to GLfloat.
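One reading of that suggestion, using the camera fields from the snippet above:
double snappedX = floor((double)camera.x * camera.zoom) / (double)camera.zoom;
GLfloat x = (GLfloat)snappedX;   /* cast to GLfloat only after snapping */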
Your xpos/ypos must be based on 0 to (width or height) - 1 and then:
GLfloat u = (GLfloat)xpos_in_texture/(GLfloat)(width_of_texture - 1);
GLfloat v = (GLfloat)ypos_in_texture/(GLfloat)(height_of_texture - 1);

Finding center of 2D triangle?

I've been given a struct for a 2D triangle with x and y coordinates, a rotation variable, and so on. From the point created by those x and y coordinates, I am supposed to draw a triangle around the point and rotate it appropriately using the rotation variable.
I'm familiar with drawing triangles in OpenGl with GL_TRIANGLES. My problem is somehow extracting the middle of a triangle and drawing the vertices around it.
edit: Yes, what I am looking for is the centroid.
There are different "types" of centers of a triangle. Details on: The Centers of a Triangle. A quick method for finding a center of a triangle is to average all your point's coordinates. For example:
GLfloat centerX = (tri[0].x + tri[1].x + tri[2].x) / 3;
GLfloat centerY = (tri[0].y + tri[1].y + tri[2].y) / 3;
When you find the center, you will need to rotate your triangle about the center. To do this, translate so that the center is now at (0, 0). Perform your rotation. Now reverse the translation you performed earlier.
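A minimal fixed-function sketch of that sequence (tri is the array from the snippet above, and rotation is the struct's rotation variable, assumed to be in degrees):
glPushMatrix();
/* net effect on the triangle: move the centroid to the origin, rotate, move it back */
glTranslatef(centerX, centerY, 0.0f);
glRotatef(rotation, 0.0f, 0.0f, 1.0f);
glTranslatef(-centerX, -centerY, 0.0f);
glBegin(GL_TRIANGLES);
glVertex2f(tri[0].x, tri[0].y);
glVertex2f(tri[1].x, tri[1].y);
glVertex2f(tri[2].x, tri[2].y);
glEnd();
glPopMatrix();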
I guess you mean the centroid of the triangle!?
This can be easily computed by 1/3(A + B + C) where A, B and C are the respective points of the triangle.
If you have your points, you can simply multiply them by your rotation matrix as usual. Hope I got you right.
There are several points in a triangle that can be considered to be its center (orthocenter, centroid, etc.). This section of the Wikipedia article on triangles has more information. Just look at the pictures to get a quick overview.
By "middle" do you mean "centroid", a.k.a. the center of gravity if it were a 3D object of constant thickness and density?
If so, then pick two points, and find the midpoint between them. Then take this midpoint and the third point, and find the point 1/3 of the way between them (closer to the midpoint). That's your centroid. I'm not doing the math for you.
