Based on this stackoverflow answer:
https://stackoverflow.com/a/55385972
I'm trying to find a way to move the output of a fragment shader around in screen coordinates. In that example the output must have the same size as the screen resolution, otherwise you'll see only a portion of the result. Furthermore, the result is always aligned with the lower-left corner.
How can one resize the final frame and draw it centered in the viewport? E.g., screen 1920x1080, viewport 1920x1080, final distorted frame 640x480 centered, so the frame position is x = 640, y = 300. I can't find a way to move the destination of the result.
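One standard way to do this (a minimal sketch, not taken from the linked answer; the program handle and the iResolution uniform name are assumptions) is to set a smaller, centered viewport before drawing the fullscreen quad, since glViewport controls where normalized device coordinates land on the screen:

int screenW = 1920, screenH = 1080;
int frameW  = 640,  frameH  = 480;
int x = (screenW - frameW) / 2;   // 640
int y = (screenH - frameH) / 2;   // 300

// Map the quad's [-1, 1] NDC range onto the 640x480 rectangle.
glViewport(x, y, frameW, frameH);

// If the shader derives UVs from a resolution uniform, feed it the frame
// size rather than the screen size (uniform name assumed here).
glUseProgram(program);
glUniform2f(glGetUniformLocation(program, "iResolution"),
            (float)frameW, (float)frameH);
// ... draw the fullscreen quad ...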
I'm currently trying to map this pool ball texture to a sphere I have created. My approach is as follows:
Generate the sphere vertices
For every sphere vertex, translate that vertex's coordinates from the OpenGL world to texture coordinates.
I want the white circle with the '1' in it to appear at the top of the sphere (at z=1), so I am using the x and z coordinates of the sphere vertices.
The texture file I am using has multiple textures. The texture below is the one I am concerned with. In the texture file, the top left of this particular texture is at (0.01, 0.01) and the bottom right is at (0.24, 0.24). If my math is right, this puts the dead center at (0.125, 0.125), with a half-extent of 0.115. Since I want the white circle to be on top of the ball (z=1), I've come up with the following two lines of code to map the points:
tex_coords[i].x = 0.125 + (verticies[i].x)*0.115;
tex_coords[i].y = 0.125 + (verticies[i].z)*0.115;
My logic is that if X or Z is 0, the respective coordinate is 0.125, which is right in the middle. Otherwise, X and Z range from -1 to 1, so the maximum value we can reach is 0.24 and the minimum is 0.01.
As you can see in the bottom screenshot, something has gone wrong. If you look very closely you can see that one tiny part of the sphere is colored white.
There was a discrepancy between one of my shaders and my init function. I had a variable called "vTexCoord" in my shaders but was using "vTexCoords" in my init function.
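For anyone hitting the same thing: a mismatch like this can be caught at run time, because glGetAttribLocation returns -1 for a name that is not an active attribute in the linked program. A minimal sketch (the program variable is hypothetical):

GLint loc = glGetAttribLocation(program, "vTexCoords"); // shaders declare "vTexCoord"
if (loc == -1)
{
    // The misspelled name is not an active attribute in the program.
    fprintf(stderr, "attribute not found -- check for a naming mismatch\n");
}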
I am creating the "perfect" sprite packer: one that makes sure the output sprite is compatible with most, if not all, game engines and animation software. It is a program that merges images into a horizontal sprite sheet.
It converts (if needed) the source frames to BMP in memory
It treats the top-left pixel's color as fully transparent for the entire image (this can be configured)
It parses each frame individually to find the real coordinate rect (where the actual frame starts and ends, plus its width and height; some images have a lot of extra transparent pixels).
It determines the frame box, which has the width and height of the frame with the largest width/height, so that it is big enough to contain every frame. (For extra compatibility, every frame must have the same dimensions.)
It creates the output sprite with a width of nFrames * wFrameBox (see the sketch below).
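As a sketch of steps 3-5, using hypothetical Rect/Frame types (the real packer's structures will differ):

#include <algorithm>
#include <vector>

// Hypothetical types for illustration only.
struct Rect  { int x, y, w, h; };        // trimmed ("real") coordinates of a frame
struct Frame { Rect realCoordinates; };

// Step 4: the frame box must be as wide as the widest trimmed frame and as
// tall as the tallest one, so it can contain every frame.
Rect frameBox(const std::vector<Frame>& frames)
{
    Rect box{0, 0, 0, 0};
    for (const Frame& f : frames) {
        box.w = std::max(box.w, f.realCoordinates.w);
        box.h = std::max(box.h, f.realCoordinates.h);
    }
    return box;   // step 5: sheet width = frames.size() * box.w
}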
The problem is the anchor alignment. Currently, it tries to align each frame so that its center is on the frame box center:
// Since wBox is the width of the widest frame, wBox >= w always holds here,
// so both branches reduce to: xpos = xBoxOffset + (wBox - w) / 2
if ((wBox / 2) > (frame->realCoordinates.w / 2))
{
    xpos = xBoxOffset + ((wBox / 2) - (frame->realCoordinates.w / 2));
}
else
{
    xpos = xBoxOffset + ((frame->realCoordinates.w / 2) - (wBox / 2));
}
When animated, this looks better, but the horizontal frame position is still inconsistent, so a walking animation looks like walking and shaking.
I also tried the following:
store the real x pixel position of the widest frame and use it as a reference point:
xpos = xBoxOffset + (frame->realCoordinates.x - xRef);
This also gives slightly better results, which shows that it is still not the correct algorithm.
Honestly, I don't know what I'm doing.
What is the correct way to align sprite frames (i.e., to obtain the appropriate x position for drawing the next frame), given that the output sprite sheet has a width of the number of frames multiplied by the width of the widest frame?
Your problem is that you first calculate the center and then calculate the size of the required bounding box. That is why your image 'shakes': in each image that center is different from the original center.
You should use the center of the original bounding box as your origin, then find out the size of each sprite, keeping track of the leftmost, rightmost, topmost and bottommost non-transparent pixels across the sequence. That gives you the bounding box you need to use to avoid the shaking.
The problem is that you will find most sprites are already made that way, so the original bounding box is in fact defined as the minimum space needed to paint the whole sprite sequence, covering those non-transparent pixels.
The only way to remove unused sprite space is to store the first sprite complete, and then the origin and dimensions of every other sprite, as is done in animated GIF and APNG (Animated PNG -> https://en.wikipedia.org/wiki/APNG).
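A minimal sketch of that idea, reusing the hypothetical Rect/Frame types from above and assuming realCoordinates is the trimmed rect in the frames' shared original canvas coordinates:

// Union of all trimmed rects, in the shared original canvas coordinates.
// Using this single box as the anchor for every frame keeps the subject at
// a consistent position, which removes the shaking.
Rect unionBox(const std::vector<Frame>& frames)
{
    Rect u = frames[0].realCoordinates;
    for (const Frame& f : frames) {
        const Rect& r = f.realCoordinates;
        int right  = std::max(u.x + u.w, r.x + r.w);
        int bottom = std::max(u.y + u.h, r.y + r.h);
        u.x = std::min(u.x, r.x);
        u.y = std::min(u.y, r.y);
        u.w = right  - u.x;
        u.h = bottom - u.y;
    }
    return u;
}

// When blitting frame i into slot i of the sheet:
//   xpos = i * u.w + (frame->realCoordinates.x - u.x);
//   ypos =           (frame->realCoordinates.y - u.y);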
I have to rotate an image on touch around a fixed axis. Currently I am able to set the image to a fixed position at a certain angle, but when I try to rotate it by some other degree, the position of the image changes.
I have set transform-origin: 0 0;. Since my image covers a larger area, the rotation is not working as I want.
My screenshot goes here.
I have an arrow that needs to rotate around the center of the X-Y axis image.
Any suggestions will be very helpful.
I am using WPF 3D, but I think this question should apply to any 3d texture mapping.
Suppose I have a model of a cow, and I want to draw a circular spot on the cow (and I want to do this dynamically -- suppose I don't know the location of the spot until run-time). I could do this by coloring the vertexes (assigning each vertex a color based on its distance from the center of the spot), but if the model is fairly low-poly, that will give a pretty jagged-edged circle.
I could do it using a pixel shader, where the shader colors each pixel based on its distance from the center of the spot. But suppose I don't have access to pixel shaders (since I don't in WPF).
So, it seems that what I want to do is dynamically create a texture with the circle pattern on it, and texture the cow with it.
The question is: As I'm drawing that texture, how can I know what 3d coordinate in model space a given xy coordinate on the texture image corresponds to?
That is, suppose I have already textured my model with a plain white texture -- I've set up texture coordinates, done texture mapping, but don't have the texture image yet. So I have this 1000x1000 (or whatever) pixel image that gets draped nicely over the cow according to some nice texture coordinates that have been set up on the model beforehand. I understand that when the 3D hardware goes to draw a given triangle, it uses the texture coordinates of the vertexes of the triangle to find the corresponding triangular region of the image, and then interpolates across the surface of the triangle to fill displayed model pixels with colors from that triangular region of the image.
How do I go the other way? How do I say, for this given xy point on my texture image, and given the texture coordinates that have already been set up on the model, what's the 3d coordinate in model space that this image pixel is going to correspond to once texture mapping happens?
If I had such a function, I could color my texture map image such that all the points (in 3d space) within a certain distance of the circle center point on the cow would get one color, and all points outside that distance would get another color, and I'd end up with a nice, crisp circular spot on the cow, even with a relatively low-poly model. Does that sound right?
I do understand that given the texture coordinates for the vertexes of each triangle, I can step through the triangles in my model, find the corresponding triangle on the texture image, and do my own interpolation, across the texture pixels in that triangle, by interpolating across the 3d plane determined by the vertex points. And that doesn't sound too hard. But I'm just trying to understand if there is some standard 3d concept/function where I can just call a ready-made function to give me the model space coordinates for a given texture xy.
I did end up getting this working. I walk every point on the texture (1024 x 1024 points). Using the model's texture coordinates, I determine which polygon face, if any, the given u,v point is inside of. If it's inside of a face, I get the model coordinates for each point on that face. I then do a barycentric interpolation as described here: http://paulbourke.net/texture_colour/interpolation/
That is, for each u,v point on the texture, I use an inside-polygon check to determine which quad it's in on the 2D texture sheet and then I use an interpolation on that same 2D geometry as described in the link above, but instead of interpolating colors or normals I'm interpolating 3D coordinates.
I can then use the 3D coordinate to color the point on the texture (e.g., to color a circular spot on the cow based on how far in model space the given texture point is from the spot center point). And then I can apply the texture to the model, and it works.
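A minimal sketch of that interpolation for a single triangle (types and names are made up; each quad can be split into two triangles):

#include <array>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Given a triangle's texture coordinates (uv) and model-space positions (p),
// compute the model-space point corresponding to texel t. Returns false if t
// lies outside the triangle. The barycentric weights are computed in UV space
// and then applied to the 3D positions.
bool texelToModel(const std::array<Vec2, 3>& uv,
                  const std::array<Vec3, 3>& p,
                  Vec2 t, Vec3& out)
{
    float d = (uv[1].y - uv[2].y) * (uv[0].x - uv[2].x)
            + (uv[2].x - uv[1].x) * (uv[0].y - uv[2].y);
    if (d == 0.0f) return false; // degenerate UV triangle
    float a = ((uv[1].y - uv[2].y) * (t.x - uv[2].x)
             + (uv[2].x - uv[1].x) * (t.y - uv[2].y)) / d;
    float b = ((uv[2].y - uv[0].y) * (t.x - uv[2].x)
             + (uv[0].x - uv[2].x) * (t.y - uv[2].y)) / d;
    float c = 1.0f - a - b;
    if (a < 0 || b < 0 || c < 0) return false; // texel outside the triangle
    out = { a * p[0].x + b * p[1].x + c * p[2].x,
            a * p[0].y + b * p[1].y + c * p[2].y,
            a * p[0].z + b * p[1].z + c * p[2].z };
    return true;
}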
Again, it seems like this must be a standard procedure with a name...
One issue is that the result is very sensitive to the quality of the texturing as set up by the modeler. For instance, if a relatively large quad on the cow corresponds to a small quad on the texture image, there just aren't enough pixels to work with to get a smooth curve within that model quad once the texture is applied. You can of course use a higher-res texture, such as 2048x2048, but then your loop time is 4x.
It's actually a rasterization process, if I didn't misunderstand your question. In lightmapping, one also needs to find the corresponding world-space positions and normals for each texel in lightmap space and then bake irradiance, which seems similar to your goal.
You can use a standard graphics API to do this task instead of writing your own implementation. Let:
Size of texture -> Size of G-buffers
UVs of each mesh triangle -> Vertex positions vec3(u, v, 0) of the input stage
Indices of each mesh triangle -> Indices of the input stage
Positions (and normals, etc.) of each mesh triangle -> Attributes of the input stage
After the rasterizer stage of the graphics pipeline, all fragments that lie within the UV triangle are generated, and the attributes that have been supplied are interpolated automatically. You can do whatever you want now in the pixel shader!
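Here is a minimal GLSL sketch of that setup (attribute and uniform names are assumptions). Render the mesh with these shaders into an off-screen framebuffer the size of the texture; here the fragment shader uses the interpolated model-space position directly to paint the spot, rather than first writing positions to a G-buffer:

// Vertex shader: place each vertex at its UV location in clip space and
// pass the model-space position through as an attribute.
const char* vs = R"(
#version 330 core
layout(location = 0) in vec2 aUV;        // per-vertex texture coordinate
layout(location = 1) in vec3 aPosition;  // per-vertex model-space position
out vec3 vPosition;
void main() {
    vPosition = aPosition;                            // interpolated per texel
    gl_Position = vec4(aUV * 2.0 - 1.0, 0.0, 1.0);    // map [0,1] UV to NDC
}
)";

// Fragment shader: the rasterizer has interpolated vPosition across the UV
// triangle, so each texel knows its model-space position.
const char* fs = R"(
#version 330 core
in vec3 vPosition;
out vec4 fragColor;
uniform vec3 spotCenter;   // e.g. the spot center on the cow, in model space
uniform float spotRadius;
void main() {
    float inside = step(distance(vPosition, spotCenter), spotRadius);
    fragColor = mix(vec4(1.0), vec4(1.0, 0.0, 0.0, 1.0), inside);
}
)";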
I was wondering what the best way to invert the color of the pixels in the frame buffer is. I know it's possible to do with glReadPixels() and glDrawPixels(), but the performance hit of those calls is pretty big.
Basically, what I'm trying to do is have an inverted-color cross-hair that is always visible no matter what's behind it. For instance, I'd have an arbitrary alpha mask bitmap or texture, render it without depth testing after the scene is drawn, and all the frame buffer pixels under the masked (full-alpha) pixels of the texture would be inverted.
I've been trying to do this with a texture, but I'm getting some strange results, and I still find all the blending options confusing.
Give something like this a try:
glEnable(GL_COLOR_LOGIC_OP);
glLogicOp(GL_XOR);
// Render the cross-hair geometry in white (all bits set), so the XOR
// inverts whatever is already in the frame buffer.
glDisable(GL_COLOR_LOGIC_OP);
how about:
glEnable(GL_BLEND);
// With a white source, the result is src * (1 - dst) = 1 - dst: the inverse.
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);