OpenGL distortion when applying perspective matrix - C

I have a program that simply draws a cube. When applying transformations such as rotation, scaling, etc., the program works. When I attempt to apply any projection matrix, such as perspective, frustum, or ortho, the cube becomes very distorted in undefined ways. What confuses me is why the program works fine with the other transformations but breaks when applying any sort of perspective view.
Additionally, when I change the GL_TRUE parameter to GL_FALSE in glUniformMatrix4fv, the cube stops moving around the screen but still has strange distortion. Here is the display function; just applying the perspective matrix gives the same distortion.
void display()
{
    vec4 e2 = { 0.0, 0.0, 1.0, 0.0 };
    vec4 at = { 0.0, 0.0, 0.0, 0.0 };
    vec4 up = { 0.0, 1.0, 0.0, 0.0 };

    mat4 rx = RotateX(theta);
    mat4 ry = RotateY(theta);
    mat4 ps = Perspective(zoom * 45.0, aspect, 0.5, 10.0);
    mat4 rxry = multiplymat4(rx, ry);
    mat4 mv = LookAt(e2, at, up);

    glUniformMatrix4fv(ModelView, 1, GL_TRUE, &mv.m[0][0]);

    mat4 p = multiplymat4(rxry, ps);
    glUniformMatrix4fv(projection, 1, GL_TRUE, &p.m[0][0]);
}
I do not believe the problem is in my Perspective function, since Ortho and Frustum do the same thing, but the perspective code is below.
mat4 Perspective(float fovy, float aspect, float zNear, float zFar)
{
    float top = tan(fovy * DegreesToRadians / 2) * zNear;
    float right = top * aspect;

    mat4 c = ZERO_MATRIX;
    c.m[0][0] = zNear / right;
    c.m[1][1] = zNear / top;
    c.m[2][2] = -(zFar + zNear) / (zFar - zNear);
    c.m[2][3] = -2.0 * zFar * zNear / (zFar - zNear);
    c.m[3][2] = -1.0;
    c.m[3][3] = 0.0;
    return c;
}
And the vertex shader:
#version 120

attribute vec4 vPosition;
attribute vec4 vColor;
varying vec4 color;

uniform mat4 ModelView;
uniform mat4 projection;

void main()
{
    gl_Position = projection * ModelView * vPosition / vPosition.w;
    color = vec4(vColor);
}

I can spot two places that seem wrong to me.
1) You are multiplying your perspective matrix by a rotation matrix. Why? If you want camera movements, they must be done in the lookAt matrix. So I suggest this simpler code:
mat4 ps = Perspective(zoom*45.0, aspect, 0.5, 10.0);
glUniformMatrix4fv(projection, 1, GL_TRUE, &ps.m[0][0]);
2) The perspective divide is done automatically by the GPU, so it seems to me your vertex shader is wrong too; it should be:
gl_Position = projection*ModelView*vPosition;
color = vec4( vColor);

Matrix multiplication order matters, and from what I can see your p matrix should be
mat4 p = multiplymat4(ps, rxry);
though logically the rotation belongs in the model-view matrix, not the projection.
Also, the / vPosition.w likely does nothing in the shader, since w equals 1.0 unless you actually supplied 4-dimensional position data; in any case, you do not need a perspective divide in your vertex shader.
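Putting both corrections together, here is a sketch of the corrected display(), reusing the question's helpers (RotateX, RotateY, LookAt, multiplymat4) and its row-major GL_TRUE uploads; the rotation is folded into the model-view matrix and the projection is uploaded on its own:
void display()
{
    vec4 e2 = { 0.0, 0.0, 1.0, 0.0 };
    vec4 at = { 0.0, 0.0, 0.0, 0.0 };
    vec4 up = { 0.0, 1.0, 0.0, 0.0 };

    // model rotation first, then the camera transform
    mat4 rxry = multiplymat4(RotateX(theta), RotateY(theta));
    mat4 mv = multiplymat4(LookAt(e2, at, up), rxry); // view * model
    glUniformMatrix4fv(ModelView, 1, GL_TRUE, &mv.m[0][0]);

    // the projection matrix goes to the GPU untouched
    mat4 ps = Perspective(zoom * 45.0, aspect, 0.5, 10.0);
    glUniformMatrix4fv(projection, 1, GL_TRUE, &ps.m[0][0]);
}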

Related

glNamedBufferSubData not updating between draws

I am having difficulty rendering rectangles. The rectangle vertices are calculated from gl_VertexID, using data from a uniform buffer object. However, when the uniform buffer data is updated between draw calls, the same elements keep appearing.
// Vertex shader
#version 440

out vec3 r_uv;
out vec4 r_color;

layout (binding = 2, std140) uniform struct_uirect {
    vec2 pos;
    vec2 size;
    vec4 color;
    int uv;
} uirect;

void main() {
    vec2 verts[4] = vec2[4](
        vec2(0, 0),
        vec2(1, 0),
        vec2(0, 1),
        vec2(1, 1)
    );

    r_uv = vec3(verts[gl_VertexID], uirect.uv);
    r_color = uirect.color;

    vec2 vert = uirect.pos + verts[gl_VertexID] * uirect.size;
    vert = vert * 2 - 1; // map [0,1] UI space to [-1,1] clip space
    gl_Position = vec4(vert, 0.0, 1.0);
}
// Fragment shader
#version 440

out vec4 color;
in vec3 r_uv;
in vec4 r_color;

layout (binding = 1) uniform sampler2DArray voxel_atlas;

void main() {
    color = texture(voxel_atlas, r_uv) * r_color;
}
Because of order dependence, every element is drawn separately, using the following recursive function.
void UI_Tag_Render(Tag* tag, int x, int y, int w, int h) {
    glViewport(x, y, w, h);
    glNamedBufferSubData(binding_points[2], 0, sizeof(Tag), tag);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    if (tag->child)
        UI_Tag_Render(
            tag->child,
            x + w * tag->pos[0],
            y + h * tag->pos[1],
            w * tag->size[0],
            h * tag->size[1]
        );
    if (tag->sibling)
        UI_Tag_Render(tag->sibling, x, y, w, h);
}
This results in the same element being drawn repeatedly, progressively getting smaller (due to the glViewport call).
The uniform buffer object is created empty (with size sizeof(Tag)), and once glNamedBufferSubData is called, its data doesn't seem to update.
The same way of handling UBOs is used in a different shader that behaves correctly (but that one draws directly to the screen and has input vertices).
This does appear to be a synchronization issue (I'm not sure whether this is driver-related or valid under the OpenGL standard). Adding a glFinish() call after the draw call causes the UBO to be updated correctly.
Thanks to @NicolBolas for pointing out that draw calls are indeed asynchronous.
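For reference, a sketch of that workaround inside UI_Tag_Render (a full pipeline stall per element, so correctness at the cost of performance):
glNamedBufferSubData(binding_points[2], 0, sizeof(Tag), tag);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glFinish(); // block until the GPU has consumed this UBO update
A gentler alternative (not from the original post) is to give each element its own region of a larger buffer and bind it with glBindBufferRange, so an update never overwrites data a pending draw still reads.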

GLSL layout attribute number

I was trying out a shader example to draw a triangle with the RGB colors interpolated across the vertices, and assumed that using
layout (location = 0) in vec4 vertex;
layout (location = 1) in vec4 VertexColor;
in the vertex shader would work, since the 4 color floats immediately follow the 4 position floats in the array. However, it always drew a solid red triangle. So I tried location = 4, only to get a black screen. Experimenting further gave a blue triangle for location = 2, and finally the interpolated result with location = 3.
My question is, how was I expected to know to enter 3 as the location? The vertex array looks like this:
GLfloat vertices[] = {
    -1.5, -1.5, 0.0, 1.0,  // first 3D vertex
     1.0,  0.0, 0.0, 1.0,  // first color
     0.0,  1.5, 0.0, 1.0,  // second vertex
     0.0,  1.0, 0.0, 1.0,  // second color
     1.5, -1.5, 0.0, 1.0,  // third vertex
     0.0,  0.0, 1.0, 1.0,  // third color
};
Note: the first code block originally said layout = 3; it has been edited to layout = 1.
Each location can hold 4 floats (a single vec4), so a valid option would also be:
layout (location = 0)in vec4 vertex;
layout (location = 1) in vec4 VertexColor;
What dictates which attribute comes from where is the set of glVertexAttribPointer calls.
These are the ones I would expect for the GLSL declaration above (assuming you use a VBO):
// Interleaved data: the stride is 8 floats per vertex; the color attribute starts 4 floats in.
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 4 * 2, (void*)0);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 4 * 2, (void*)(sizeof(float) * 4));
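One thing the calls above assume (not shown in the original answer): each attribute array must also be enabled, or the pointers are ignored:
glEnableVertexAttribArray(0); // position, location = 0
glEnableVertexAttribArray(1); // color, location = 1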

Can't spot the issue with my GLSL/OpenGL code

I wrote a little program to display a 32-bit float texture on a simple quad. When displaying the quad, the texture color is always black. I experimented with a lot of things, but I couldn't make it work, and I'm really at a loss as to what the problem is.
The code for creating the OpenGL texture goes like this:
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, textureData);
Using the debugger, there's no error in any of these calls. I also examined the textureData pointer and got the expected results (in my simplified program it is just a gradient texture).
This is the vertex shader code in GLSL:
#version 400

in vec4 vertexPosition;
out vec2 uv;

void main() {
    gl_Position = vertexPosition;
    uv.x = (vertexPosition.x + 1.0) / 2;
    uv.y = (vertexPosition.y + 1.0) / 2;
}
It's kind of a simple generation of the UV coordinates without taking them as vertex attributes. The corresponding vertex buffer object is really simple:
GLfloat vertices[4][4] = {
    { -1.0,  1.0, 0.0, 1.0 },
    { -1.0, -1.0, 0.0, 1.0 },
    {  1.0,  1.0, 0.0, 1.0 },
    {  1.0, -1.0, 0.0, 1.0 },
};
I've tested this, and it displays the quad covering the entire window, as I wanted. Displaying the UV coordinates as colors in the fragment shader reproduces the gradient I expected. Now here's the fragment shader:
#version 400

uniform sampler2D myTex;
in vec2 uv;
out vec4 fragColor;

void main() {
    fragColor = texture(myTex, uv);
    // fragColor += vec4(uv.x, uv.y, 0, 1);
}
The commented-out line displays the UV coordinates as color for debugging purposes. What am I doing wrong? I just can't see why the texture() call returns 0 when the texture seems completely right and the UV coordinates are also proper. I link the full code here in case there's something else I'm doing wrong: gl-view.c
EDIT: This is how I set up the myTex sampler:
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
glUniform1i(glGetUniformLocation(shaderProgram, "myTex"), 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
EDIT: Cleared up the vertex shader code.
I've found the issue: I didn't set any MAG or MIN filter on the texture. Setting the MIN filter to GL_NEAREST solved the problem.
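For reference, a minimal sketch of that fix, applied right after the glTexImage2D call from the question (textureID is the texture created above). The default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, so a texture without mipmap levels is incomplete and samples as black:
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // no mipmaps needed
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // optional, default is GL_LINEAR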

Trouble getting view (lookat) and projection (perspective) matrices to work properly

I've been following the open.gl tutorials without using the GLM library, because reasons (stubbornness and C).
I can't get the view and projection matrices to work properly.
Here's the relevant vertex shader code:
#version 150 core

in vec3 size;
in vec3 color;
in vec2 texcoord;

out vec3 Color;
out vec2 Texcoord;

uniform vec3 pos;
uniform float angle;
uniform vec3 camPos;
uniform vec3 camTarget;

const float fov = 90, ratio = 4.0/3.0, near = 1.0, far = 10.0;

mat4 projection()
{
    float t = tan(radians(fov)),
          l = ratio * t;

    return mat4(
        vec4(near/l, 0.0, 0.0, 0.0),
        vec4(0.0, near/t, 0.0, 0.0),
        vec4(0.0, 0.0, -(far+near)/(far-near), -(2*far*near)/(far-near)),
        vec4(0.0, 0.0, -1.0, 0.0)
    );
}

mat4 rotZ(float theta)
{
    return mat4(
        vec4(cos(theta), -sin(theta), 0.0, 0.0),
        vec4(sin(theta), cos(theta), 0.0, 0.0),
        vec4(0.0, 0.0, 1.0, 0.0),
        vec4(0.0, 0.0, 0.0, 1.0)
    );
}

mat4 translate(vec3 translation)
{
    return mat4(
        vec4(1.0, 0.0, 0.0, 0.0),
        vec4(0.0, 1.0, 0.0, 0.0),
        vec4(0.0, 0.0, 1.0, 0.0),
        vec4(translation.x, translation.y, translation.z, 1.0)
    );
}

mat4 lookAtRH(vec3 eye, vec3 target)
{
    vec3 zaxis = normalize(target - eye);                      // the "forward" vector
    vec3 xaxis = normalize(cross(vec3(0.0, 0.0, 1.0), zaxis)); // the "right" vector
    vec3 yaxis = normalize(cross(zaxis, xaxis));               // the "up" vector

    mat4 axis = mat4(
        vec4(xaxis.x, yaxis.x, zaxis.x, 0),
        vec4(xaxis.y, yaxis.y, zaxis.y, 0),
        vec4(xaxis.z, yaxis.z, zaxis.z, 0),
        vec4(dot(xaxis, -eye), dot(yaxis, -eye), dot(zaxis, -eye), 1)
    );
    return axis;
}

void main()
{
    Color = color;
    Texcoord = texcoord;

    mat4 model = translate(pos) * rotZ(angle);
    mat4 view = lookAtRH(camPos, camTarget);
    gl_Position = projection() * view * model * vec4(size, 1.0);
}
From tweaking things around, it seems as if the view matrix is correct but the projection matrix is causing the dodginess.
First, I must remark that it is a very bad idea to do this directly in the shaders.
However, if you really want to, you can. You should be aware that GLSL matrix constructors take column vectors, so your projection matrix is specified transposed (your translation matrix, however, is correct): for example, the vec4(0.0, 0.0, -1.0, 0.0) you pass as the last argument becomes the fourth column, but that -1.0 belongs in the fourth row of the third column. Wrapping the constructor in transpose(...) fixes it.
EDIT: If you want pure C, here is a nice lib for the math (plus you can check the code :) ): https://github.com/datenwolf/linmath.h
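To illustrate, a minimal sketch of the CPU-side equivalent using linmath.h, with the question's values (fov 90, 4:3 aspect, near 1.0, far 10.0); the uniform name mvpMatrix and the camera position are assumptions for illustration:
#include "linmath.h"

mat4x4 proj, view, model, pv, mvp;
vec3 eye = {0.0f, -3.0f, 1.0f}, center = {0.0f, 0.0f, 0.0f}, up = {0.0f, 0.0f, 1.0f};

mat4x4_perspective(proj, 90.0f * 3.14159265f / 180.0f, 4.0f / 3.0f, 1.0f, 10.0f); // fov in radians
mat4x4_look_at(view, eye, center, up);
mat4x4_identity(model);
mat4x4_mul(pv, proj, view);  // mat4x4_mul(M, a, b) computes a * b
mat4x4_mul(mvp, pv, model);

// linmath.h stores matrices column-major, so transpose = GL_FALSE
glUniformMatrix4fv(glGetUniformLocation(program, "mvpMatrix"), 1, GL_FALSE, (const GLfloat*)mvp);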
Never do something like that :) Creating matrices in a shader is a very bad idea.
The vertex shader is executed for each vertex, so if you pass a thousand vertices to the shader, you calculate the same matrices a thousand times. I think there's nothing more to explain :)
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

... // somewhere
glm::mat4 projection = glm::perspective(45.0f, float(window_width) / window_height, 0.1f, 100.0f);
glm::mat4 model(1.0f); // world/model matrix
glm::mat4 view(1.0f);  // view/camera matrix
glm::mat4 mvp = projection * view * model;

... // inside the main loop
glUniformMatrix4fv(glGetUniformLocation(program, "mvpMatrix"), 1, GL_FALSE, &mvp[0][0]);
draw_mesh();
It's really cool and optimised :)

How to fill polygon with different color than boundary?

I need to draw a polygon whose boundary lines are one color, with the interior filled in another color. Is there an easy way to do this? I currently draw two polygons: one for the interior color and one for the boundary. I think there must be a better way to do this. Thanks for your help.
glColor3d(1.0, 1.0, 0.7);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glBegin(GL_TRIANGLES);
glVertex3f(-0.8f, 0.0f, 0.0f);
glVertex3f(-0.6f, 0.0f, 0.0f);
glVertex3f(-0.7f, 0.2f, 0.0f);
glEnd();

glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glColor3d(0.5, 0.5, 0.7);
glBegin(GL_TRIANGLES);
glVertex3f(-0.8f, 0.0f, 0.0f);
glVertex3f(-0.6f, 0.0f, 0.0f);
glVertex3f(-0.7f, 0.2f, 0.0f);
glEnd();
Thank you, everyone, for answering my question. I am fairly new to OpenGL and was looking for a simple answer to a simple question. The answer appears to be not so simple and could probably take a semester's worth of study.
A more modern approach is to implement this via geometry shaders. This works for OpenGL 3.2 and above as part of core functionality, or for OpenGL 2.1 with the GL_EXT_geometry_shader4 extension.
This paper has all the relevant theory: Shader-Based Wireframe Drawing. It also provides a sample implementation of the simplest technique in GLSL.
Here is my own stab at it, basically a port of their implementation to OpenGL 3.3, limited to triangle primitives:
Vertex shader (replace the inputs with whatever you use to pass in vertex data and the model-view and projection matrices):
#version 330

layout(location = 0) in vec4 position;
layout(location = 1) in mat4 mv;

out Data
{
    vec4 position;
} vdata;

uniform mat4 projection;

void main()
{
    vdata.position = projection * mv * position;
}
Geometry shader:
#version 330

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in Data
{
    vec4 position;
} vdata[3];

out Data
{
    noperspective vec3 dist;
} gdata;

void main()
{
    // scaling factor to make 'd' in the fragment shader big enough to show something
    vec2 scale = vec2(500.0f, 500.0f);

    vec2 p0 = scale * vdata[0].position.xy / vdata[0].position.w;
    vec2 p1 = scale * vdata[1].position.xy / vdata[1].position.w;
    vec2 p2 = scale * vdata[2].position.xy / vdata[2].position.w;

    vec2 v0 = p2 - p1;
    vec2 v1 = p2 - p0;
    vec2 v2 = p1 - p0;
    float area = abs(v1.x*v2.y - v1.y*v2.x);

    gdata.dist = vec3(area/length(v0), 0, 0);
    gl_Position = vdata[0].position;
    EmitVertex();

    gdata.dist = vec3(0, area/length(v1), 0);
    gl_Position = vdata[1].position;
    EmitVertex();

    gdata.dist = vec3(0, 0, area/length(v2));
    gl_Position = vdata[2].position;
    EmitVertex();

    EndPrimitive();
}
Fragment shader (replace the colors with whatever you need!):
#version 330

in Data
{
    noperspective vec3 dist;
} gdata;

out vec4 outputColor;

uniform sampler2D tex;

const vec4 wireframeColor = vec4(1.0f, 0.0f, 0.0f, 1.0f);
const vec4 fillColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);

void main()
{
    float d = min(gdata.dist.x, min(gdata.dist.y, gdata.dist.z));
    float I = exp2(-2*d*d);
    outputColor = mix(fillColor, wireframeColor, I);
}
You can switch the fill mode between polygons, lines, and points using glPolygonMode.
To draw the polygon lines in a different color, you can do the following:
glPolygonMode(GL_FRONT_AND_BACK,GL_FILL);
draw_mesh( fill_color );
glPolygonMode(GL_FRONT_AND_BACK,GL_LINE);
glEnable(GL_POLYGON_OFFSET_LINE);
glPolygonOffset(-1.f,-1.f);
draw_mesh( line_color );
The line offset may be needed because OpenGL doesn't guarantee that the edges of polygons will be rasterized in exactly the same pixels as the lines. Without an explicit offset, you may end up with lines hidden by polygons due to a failed depth test.
There are 2 ways to do this:
1) the one you use at the moment (2 polygons, one a little larger than the other, or drawn after it)
2) a texture
To my knowledge there are no other possibilities, and from a performance standpoint both, especially the first as long as you only color-fill, are extremely fast.
I think you should see this answer: fill and outline.
First draw your triangle using glPolygonMode(GL_FRONT_AND_BACK, GL_FILL) with your desired fill color, then draw the triangle again using glPolygonMode(GL_FRONT_AND_BACK, GL_LINE) with your outline color.
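For completeness, a minimal sketch combining both passes on the question's own triangle, with the polygon offset suggested above so the outline survives the depth test:
// pass 1: interior
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glColor3d(1.0, 1.0, 0.7);
glBegin(GL_TRIANGLES);
glVertex3f(-0.8f, 0.0f, 0.0f);
glVertex3f(-0.6f, 0.0f, 0.0f);
glVertex3f(-0.7f, 0.2f, 0.0f);
glEnd();

// pass 2: boundary, pulled toward the viewer
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glEnable(GL_POLYGON_OFFSET_LINE);
glPolygonOffset(-1.0f, -1.0f);
glColor3d(0.5, 0.5, 0.7);
glBegin(GL_TRIANGLES);
glVertex3f(-0.8f, 0.0f, 0.0f);
glVertex3f(-0.6f, 0.0f, 0.0f);
glVertex3f(-0.7f, 0.2f, 0.0f);
glEnd();
glDisable(GL_POLYGON_OFFSET_LINE);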
