I am having difficulties with rendering rectangles.
The rectangle vertices are computed from gl_VertexID, using data from a Uniform Buffer Object.
However, when I update the uniform buffer data between draw calls, every rectangle appears to be drawn with the same data.
#version 440
out vec3 r_uv;
out vec4 r_color;
layout (binding = 2, std140) uniform struct_uirect {
    vec2 pos;
    vec2 size;
    vec4 color;
    int uv;
} uirect;
void main(){
    vec2 verts[4] = vec2[4](
        vec2(0, 0),
        vec2(1, 0),
        vec2(0, 1),
        vec2(1, 1)
    );
    r_uv = vec3(verts[gl_VertexID], uirect.uv);
    r_color = uirect.color;
    vec2 vert = uirect.pos + verts[gl_VertexID] * uirect.size;
    vert = vert * 2 - 1;
    gl_Position = vec4(vert, 0.0, 1.0);
}
#version 440
out vec4 color;
in vec3 r_uv;
in vec4 r_color;
layout (binding = 1) uniform sampler2DArray voxel_atlas;
void main(){
    color = texture(voxel_atlas, r_uv) * r_color;
}
Because the elements are order-dependent, each one is drawn separately, using the following recursive function.
void UI_Tag_Render(Tag* tag, int x, int y, int w, int h){
    glViewport(x, y, w, h);
    glNamedBufferSubData(binding_points[2], 0, sizeof(Tag), tag);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    if(tag->child)
        UI_Tag_Render(
            tag->child,
            x + w * tag->pos[0],
            y + h * tag->pos[1],
            w * tag->size[0],
            h * tag->size[1]
        );
    if(tag->sibling)
        UI_Tag_Render(tag->sibling, x, y, w, h);
}
The result is the same element drawn repeatedly, progressively smaller (due to the glViewport call).
The uniform buffer object is created empty (with a size of sizeof(Tag)), and once glNamedBufferSubData is called, its data does not seem to update.
The same way of handling UBOs is used in a different shader, and there it works correctly (but that one draws directly to the screen and has input vertices).
This does appear to be a synchronization issue (I'm not sure whether it is driver-related or permitted by the OpenGL standard).
Adding a glFinish() call after the draw call causes the UBO to be updated correctly.
Thanks to @NicolBolas for pointing out that draw calls are indeed asynchronous.
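If the glFinish() stall ever becomes a problem, a common alternative is to give every draw its own aligned region of a larger buffer and bind that region with glBindBufferRange, so no draw's data is overwritten while the GPU may still be reading it. A sketch under assumptions (ubo is the buffer stored in binding_points[2], MAX_TAGS bounds the number of elements drawn per frame, and drawIndex is a counter reset to 0 each frame; none of these names are from the question):

// Each element gets its own region, aligned to GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT.
GLint align = 0;
glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, &align);
GLsizeiptr stride = (sizeof(Tag) + align - 1) / align * align;

// Once per frame: reallocate (orphan) storage big enough for all elements.
glNamedBufferData(ubo, stride * MAX_TAGS, NULL, GL_DYNAMIC_DRAW);

// Per element, instead of always writing to offset 0:
glNamedBufferSubData(ubo, stride * drawIndex, sizeof(Tag), tag);
glBindBufferRange(GL_UNIFORM_BUFFER, 2, ubo, stride * drawIndex, sizeof(Tag));
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
drawIndex++;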
I have checked out several solutions here and on other pages for calculating vertex normals. The common approach, which seems to work best for my implementation (which renders 3D terrain), is to calculate the face normals, which isn't a problem, then go over each face, add its normal to each of the vertices that make it up, and finally normalize those vertex normals when done. It seems to work for the most part, but I have some strange graphical problems: mainly, where the light transitions from light to dark, you can tell where the faces are. In the following image you can see this near the lower right side, at the top of this hill.
So I am wondering what is causing this strange pattern. It has something to do with how I am calculating the normals, but I am just not seeing where the issue is. Any help would be appreciated.
The code to calculate the normals is...
// Calculate surface normals
vec3 v1, v2, v3, vec1, vec2;
for(GLuint i = 0; i < terrain->NumFaces; i++) {
    v1 = terrain->Vertices[terrain->Faces[i].vert_indices[0]];
    v2 = terrain->Vertices[terrain->Faces[i].vert_indices[1]];
    vec1 = vector(&v2, &v1);
    v3 = terrain->Vertices[terrain->Faces[i].vert_indices[2]];
    vec2 = vector(&v3, &v1);
    terrain->Faces[i].surface_normal = crossProduct(&vec1, &vec2);
    normalize(&terrain->Faces[i].surface_normal);
}
// Calculate vertex normals...
// Add all the surface normals to their attached vertex normals
for(GLuint currentFace = 0; currentFace < terrain->NumFaces; currentFace++) {
    vec3 *f = &terrain->Faces[currentFace].surface_normal;
    for(GLuint faceVertex = 0; faceVertex < 3; faceVertex++) {
        vec3 *n = &terrain->Normals[terrain->Faces[currentFace].vert_indices[faceVertex]];
        *n = vec3Add(n, f); // adds vector f to n
    }
}

// Go over all vertices and normalize them
for(GLuint currentVertice = 0; currentVertice < terrain->NumVertices; currentVertice++)
    normalize(&terrain->Normals[currentVertice]);
Other utility functions I use in the above code are...
// Returns the vector between two vertices (vp1 - vp2)
vec3 vector(const vec3 *vp1, const vec3 *vp2)
{
    vec3 ret;
    ret.x = vp1->x - vp2->x;
    ret.y = vp1->y - vp2->y;
    ret.z = vp1->z - vp2->z;
    return ret;
}

// Returns the cross product of two vectors
vec3 crossProduct(const vec3 *v1, const vec3 *v2)
{
    vec3 normal;
    normal.x = v1->y * v2->z - v1->z * v2->y;
    normal.y = v1->z * v2->x - v1->x * v2->z;
    normal.z = v1->x * v2->y - v1->y * v2->x;
    return normal;
}

// Returns the length of a vector
float vec3Length(vec3 *v1) {
    return sqrt(v1->x * v1->x + v1->y * v1->y + v1->z * v1->z);
}

// Normalizes a vector
void normalize(vec3 *v1)
{
    float len = vec3Length(v1);
    if(len < EPSILON) return;
    float inv = 1.0f / len;
    v1->x *= inv;
    v1->y *= inv;
    v1->z *= inv;
}

// Returns the sum of v1 and v2
vec3 vec3Add(vec3 *v1, vec3 *v2)
{
    vec3 v;
    v.x = v1->x + v2->x;
    v.y = v1->y + v2->y;
    v.z = v1->z + v2->z;
    return v;
}
One problem with using the average of the face normals to compute the vertex normals is that the computed normals can be biased. For example, imagine that there is a ridge that runs north/south. One vertex on the peak of the ridge has three polygons on the east side, and two on the west. The vertex normal will be angled to the east. This can cause darker lighting at that point when the illumination is coming from the west.
A possible improvement would be to apply a weight to each face's normal, proportional to the angle of the face's corner at that vertex, but this will not get rid of all of the bias.
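For illustration, here is a minimal sketch of that weighting, not the poster's code: it reuses vector, normalize, and vec3Add plus the terrain layout from the question, adds a hypothetical vec3Scale helper, and recovers the corner angle with acosf from <math.h>:

#include <math.h>

// Hypothetical helper: returns v scaled by s.
vec3 vec3Scale(const vec3 *v, float s)
{
    vec3 r;
    r.x = v->x * s;
    r.y = v->y * s;
    r.z = v->z * s;
    return r;
}

// Returns the interior angle of triangle (a, b, c) at corner a.
float cornerAngle(const vec3 *a, const vec3 *b, const vec3 *c)
{
    vec3 e1 = vector(b, a);  // edge a -> b
    vec3 e2 = vector(c, a);  // edge a -> c
    normalize(&e1);
    normalize(&e2);
    float d = e1.x * e2.x + e1.y * e2.y + e1.z * e2.z;
    if (d > 1.0f) d = 1.0f;   // clamp against rounding error
    if (d < -1.0f) d = -1.0f;
    return acosf(d);
}

// In the vertex-normal loop from the question, the plain accumulation becomes:
for (GLuint faceVertex = 0; faceVertex < 3; faceVertex++) {
    GLuint ia = terrain->Faces[currentFace].vert_indices[faceVertex];
    GLuint ib = terrain->Faces[currentFace].vert_indices[(faceVertex + 1) % 3];
    GLuint ic = terrain->Faces[currentFace].vert_indices[(faceVertex + 2) % 3];
    float w = cornerAngle(&terrain->Vertices[ia], &terrain->Vertices[ib], &terrain->Vertices[ic]);
    vec3 weighted = vec3Scale(&terrain->Faces[currentFace].surface_normal, w);
    terrain->Normals[ia] = vec3Add(&terrain->Normals[ia], &weighted);
}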
After experimenting with different solutions, I discovered that my own normal generation in this post actually works extremely well; it is virtually instant and wasn't the problem. The problem seemed to be in using a large texture for the terrain. I changed the terrain to use a tiled texture, which doesn't get stretched so much, and the graphical issue seems to have gone away. It was a relief that the normal generation I posted works well, as the other solutions were horribly slow. This is what I ended up with, and as you can see, there are no graphical problems. Plus it looks better with more detail. I wanted to post what I found out in case anyone else sees the same problem.
I have a program that simply draws a cube. When applying transformations such as rotation, scaling, etc., the program works. When I attempt to apply any perspective matrix (perspective, frustum, or ortho), the cube becomes badly distorted in strange ways. What confuses me is why the program works fine with the other transformations but breaks when applying any sort of perspective view.
Additionally, when I change the GL_TRUE parameter to GL_FALSE in glUniformMatrix4fv, the cube stops moving around the screen but still shows strange distortion. Here is the display function; just applying perspective gives the same distortion.
void display()
{
    vec4 e2 = { 0.0, 0.0, 1.0, 0.0 };
    vec4 at = { 0.0, 0.0, 0.0, 0.0 };
    vec4 up = { 0.0, 1.0, 0.0, 0.0 };
    mat4 rx = RotateX(theta);
    mat4 ry = RotateY(theta);
    mat4 ps = Perspective(zoom*45.0, aspect, 0.5, 10.0);
    mat4 rxry = multiplymat4(rx, ry);
    mat4 mv = LookAt(e2, at, up);
    glUniformMatrix4fv(ModelView, 1, GL_TRUE, &mv.m[0][0]);
    mat4 p = multiplymat4(rxry, ps);
    glUniformMatrix4fv(projection, 1, GL_TRUE, &p.m[0][0]);
}
I do not believe the problem is in my perspective function, since ortho and frustum do the same thing, but the perspective code is below.
mat4 Perspective(float fovy, float aspect, float zNear, float zFar)
{
    float top = tan(fovy*DegreesToRadians/2) * zNear;
    float right = top * aspect;
    mat4 c = ZERO_MATRIX;
    c.m[0][0] = zNear/right;
    c.m[1][1] = zNear/top;
    c.m[2][2] = -(zFar + zNear)/(zFar - zNear);
    c.m[2][3] = -2.0*zFar*zNear/(zFar - zNear);
    c.m[3][2] = -1.0;
    c.m[3][3] = 0.0;
    return c;
}
And the vertex shader
#version 120
attribute vec4 vPosition;
attribute vec4 vColor;
varying vec4 color;
uniform mat4 ModelView;
uniform mat4 projection;
void main()
{
    gl_Position = projection*ModelView*vPosition/vPosition.w;
    color = vec4(vColor);
}
I can spot two places that seem wrong to me.
1) You are multiplying your perspective matrix by a rotation matrix. Why? If you want to move the camera, that should be done in the lookAt matrix. So, I suggest this simple code:
mat4 ps = Perspective(zoom*45.0, aspect, 0.5, 10.0);
glUniformMatrix4fv(projection, 1, GL_TRUE, &ps.m[0][0]);
2) The perspective divide is done automatically by the GPU, so it seems to me your vertex shader is wrong too. It should be:
gl_Position = projection*ModelView*vPosition;
color = vec4( vColor);
Matrix multiplication order matters, and from what I see, your p matrix should be
mat4 p = multiplymat4(ps, rxry);
Though logically the rotation belongs in the view matrix.
Also, the /vPosition.w likely does nothing in the shader, since w equals 1.0 unless you actually supplied four-dimensional position data; in any case, you do not need a perspective divide in your vertex shader.
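Putting the two points together, here is a hedged sketch of how display() might look, using the same helpers and uniform handles as in the question (one reasonable arrangement, not the only one):

void display()
{
    vec4 e2 = { 0.0, 0.0, 1.0, 0.0 };
    vec4 at = { 0.0, 0.0, 0.0, 0.0 };
    vec4 up = { 0.0, 1.0, 0.0, 0.0 };

    // Rotation is a camera/model transform, so fold it into the modelview matrix.
    mat4 rxry = multiplymat4(RotateX(theta), RotateY(theta));
    mat4 mv = multiplymat4(LookAt(e2, at, up), rxry);
    glUniformMatrix4fv(ModelView, 1, GL_TRUE, &mv.m[0][0]);

    // The projection matrix is uploaded on its own, with nothing multiplied into it.
    mat4 ps = Perspective(zoom*45.0, aspect, 0.5, 10.0);
    glUniformMatrix4fv(projection, 1, GL_TRUE, &ps.m[0][0]);
}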
I have this code calling glGetUniformLocation, but it's returning -1 even though I'm using the uniform in my vertex shader. I get no errors from glGetError, glGetProgramInfoLog, or glGetShaderInfoLog, and the shaders/program are all created successfully. I also only call this after the program is compiled and linked.
int projectionUniform = glGetUniformLocation( shaderProgram, "dfProjection" );
#version 410
uniform float dfRealTime;
uniform float dfGameTime;
uniform mat4 dfProjection;
uniform mat4 dfModelView;
layout(location = 0) in vec3 vertPosition;
layout(location = 1) in vec4 vertColor;
smooth out vec4 color;
out vec4 position;
void main() {
    color = vertColor;
    position = (dfModelView * dfProjection) * vec4(vertPosition, 1.0);
}
This is the fragment shader:
smooth in vec4 color;
out vec4 fragColor;
void main() {
    fragColor = color;
}
There are three possibilities:
You have misspelled dfProjection in glGetUniformLocation, but that doesn't seem to be the case.
You are not binding the correct program (via glUseProgram) before calling glGetUniformLocation.
Or you are not using position in your fragment shader, which means dfProjection is not really active.
Another thing: from the code, it seems you are passing the shader handle to glGetUniformLocation; you should pass the linked program handle instead.
After your edit: you are still not using position in your fragment shader,
smooth in vec4 color;
out vec4 fragColor;
in vec4 position;
void main() {
    // do sth with position here
    fragColor = color*position;
}
Keep in mind that you still need to write gl_Position in order for the pipeline to know the final fragment position. But I was answering the question of why the uniform variable is not being detected.
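For reference, a minimal sketch of the intended query pattern (vertShader and fragShader stand in for the compiled shader objects; the key point is that glGetUniformLocation takes the linked program object, not a shader object):

GLuint shaderProgram = glCreateProgram();
glAttachShader(shaderProgram, vertShader);
glAttachShader(shaderProgram, fragShader);
glLinkProgram(shaderProgram);

GLint linked = GL_FALSE;
glGetProgramiv(shaderProgram, GL_LINK_STATUS, &linked);

// Query only after a successful link, passing the program object.
// -1 means the name is misspelled or the uniform was optimized out as inactive.
GLint projectionUniform = glGetUniformLocation(shaderProgram, "dfProjection");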
I have written code to render my scene objects to a cubemap texture of format GL_DEPTH_COMPONENT and then use this texture in a shader to determine whether a fragment is being directly lit or not, for shadowing purposes. However, my cubemap appears to come out as black. I suppose I am not setting up my FBO or rendering context sufficiently, but fail to see what is missing.
Using GL 3.3 in compatibility profile.
This is my code for creating the FBO and cubemap texture:
glGenFramebuffers(1, &fboShadow);
glGenTextures(1, &texShadow);
glBindTexture(GL_TEXTURE_CUBE_MAP, texShadow);
for (int sideId = 0; sideId < 6; sideId++) {
    // Make sure GL knows what this is going to be.
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + sideId, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
}
// Don't interpolate depth value sampling. Between occluder and occludee there will
// be an instant jump in depth value, not a linear transition.
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
My full rendering function then looks like so:
void render() {
    // --- MAKE DEPTH CUBEMAP ---
    // Set shader program for depth testing
    glUseProgram(progShadow);

    // Get the light for which we want to generate a depth cubemap
    PointLight p = pointLights.at(0);

    // Bind our framebuffer for drawing; clean it up
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboShadow);
    glClear(GL_DEPTH_BUFFER_BIT);

    // Make 1:1-ratio, 90-degree view frustum for a 512x512 texture.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(90.0, 1, 16.0, 16384.0);
    glViewport(0, 0, 512, 512);
    glMatrixMode(GL_MODELVIEW);

    // Set modelview and projection matrix uniforms
    setShadowUniforms();

    // Need 6 renderpasses to complete each side of the cubemap
    for (int sideId = 0; sideId < 6; sideId++) {
        // Attach depth attachment of current framebuffer to level 0 of currently relevant target of texShadow cubemap texture.
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X + sideId, texShadow, 0);

        // All is fine.
        GLenum status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
        if (status != GL_FRAMEBUFFER_COMPLETE) {
            std::cout << "Shadow FBO is broken with code " << status << std::endl;
        }

        // Push modelview matrix stack because we need to rotate and move camera every time
        glPushMatrix();

        // This does a switch-case with glRotatefs
        rotateCameraForSide(GL_TEXTURE_CUBE_MAP_POSITIVE_X + sideId);

        // Render from light's position.
        glTranslatef(-p.getX(), -p.getY(), -p.getZ());

        // Render all objects.
        for (ObjectList::iterator it = objectList.begin(); it != objectList.end(); it++) {
            (*it)->render();
        }
        glPopMatrix();
    }

    // --- RENDER SCENE ---
    // Bind default framebuffer
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    // Setup proper projection matrix with 70 degree vertical FOV and ratio according to window frame dimensions.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(70.0, ((float)vpWidth) / ((float)vpHeight), 16.0, 16384.0);
    glViewport(0, 0, vpWidth, vpHeight);

    glUseProgram(prog);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    applyCameraPerspective();

    // My PointLight class has both a position (world space) and renderPosition (camera space) Vec3f variable;
    // the lights' renderPositions get transformed with the modelview matrix by this.
    updateLights();

    // And here, among other things, the lights' camera space coordinates go to the shader.
    setUniforms();

    // Render all objects
    for (ObjectList::iterator it = objectList.begin(); it != objectList.end(); it++) {
        // Object texture goes to texture unit 0
        GLuint usedTexture = glTextureList.find((*it)->getTextureName())->second;
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, usedTexture);
        glUniform1i(textureLoc, 0);

        // Cubemap goes to texture unit 1
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_CUBE_MAP, texShadow);
        glUniform1i(shadowLoc, 1);

        (*it)->render();
    }
    glPopMatrix();
    frameCount++;
}
The shader program for rendering depth values ("progShadow") is simple.
Vertex shader:
#version 330
in vec3 position;
uniform mat4 modelViewMatrix, projectionMatrix;
void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1);
}
Fragment shader:
#version 330
void main() {
    // OpenGL sets the depth anyway. Nothing to do here.
}
The shader program for final rendering ("prog") has a fragment shader which looks something like this:
#version 330
#define MAX_LIGHTS 8
in vec3 fragPosition;
in vec3 fragNormal;
in vec2 fragTexCoordinates;
out vec4 fragColor;
uniform sampler2D colorTexture;
uniform samplerCubeShadow shadowCube;
uniform uint activeLightCount;

struct Light {
    vec3 position;
    vec3 diffuse;
    float cAtt;
    float lAtt;
    float qAtt;
};

// Index 0 to (activeLightCount - 1) need to be the active lights.
uniform Light lights[MAX_LIGHTS];

void main() {
    vec3 lightColor = vec3(0, 0, 0);
    vec3 normalFragmentToLight[MAX_LIGHTS];
    float distFragmentToLight[MAX_LIGHTS];
    float distEyeToFragment = length(fragPosition);

    // Accumulate all light in "lightColor" variable
    for (uint i = uint(0); i < activeLightCount; i++) {
        normalFragmentToLight[i] = normalize(lights[i].position - fragPosition);
        distFragmentToLight[i] = distance(fragPosition, lights[i].position);
        float attenuation = (lights[i].cAtt
            + lights[i].lAtt * distFragmentToLight[i]
            + lights[i].qAtt * pow(distFragmentToLight[i], 2.0));
        float dotProduct = dot(fragNormal, normalFragmentToLight[i]);
        lightColor += lights[i].diffuse * max(dotProduct, 0.0) / attenuation;
    }

    // Shadow mapping only for light at index 0 for now.
    float distOccluderToLight = texture(shadowCube, vec4(normalFragmentToLight[0], 1));
    // My geometries use inches as units, hence a large bias of 1
    bool isLit = (distOccluderToLight + 1) < distFragmentToLight[0];
    fragColor = texture2D(colorTexture, fragTexCoordinates) * vec4(lightColor, 1.0f) * int(isLit);
}
I have verified that all uniform location variables are set to a proper value (i.e. not -1).
It might be worth noting that I make no call to glBindFragDataLocation() for "progShadow" prior to linking it, because no color value should be written by that shader.
See anything obviously wrong here?
For shadow maps, the depth buffer's internal format is pretty important (too small and things look awful, too large and you eat memory bandwidth). You should use a sized format (e.g. GL_DEPTH_COMPONENT24) to guarantee a certain size; otherwise, the implementation will pick whatever it wants. As for debugging a cubemap shadow map, the easiest thing to do is to draw the scene into each cube face and output color instead of depth. Then, where you currently try to use the cubemap to sample depth, write the sampled color to fragColor instead. You can rule out view issues immediately this way.
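Concretely, the sized-format suggestion is a one-line change in the allocation loop from the question (a sketch; GL_DEPTH_COMPONENT24 is a common choice):

// A sized internal format guarantees 24-bit depth on every cube face.
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + sideId, 0, GL_DEPTH_COMPONENT24,
             512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);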
There is another, much more serious issue, however. You are using samplerCubeShadow, but you have not set GL_TEXTURE_COMPARE_MODE for your cube map. Attempting to sample a depth texture with this sampler type and without GL_TEXTURE_COMPARE_MODE = GL_COMPARE_REF_TO_TEXTURE will produce undefined results. Even if you did have this mode set properly, the 4th component of the texture coordinates is used as the depth comparison reference, so a constant value of 1.0 is NOT what you want.
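The compare-mode setup could look like this, next to the other glTexParameteri calls in the question (GL_LEQUAL is the usual comparison function for shadow maps):

glBindTexture(GL_TEXTURE_CUBE_MAP, texShadow);
// Sampling through samplerCubeShadow now returns the result of the comparison
// (reference <= stored depth) instead of the raw depth value.
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);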
Likewise, the depth buffer does not store linear distance, so you cannot directly compare against the distance you computed here:
distFragmentToLight[i] = distance(fragPosition, lights[i].position);
Instead, something like this will be necessary:
float VectorToDepth(vec3 Vec)
{
    vec3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));

    // Replace f and n with the far and near plane values you used when
    // you drew your cube map.
    const float f = 2048.0;
    const float n = 1.0;

    float NormZComp = (f+n) / (f-n) - (2*f*n)/(f-n)/LocalZcomp;
    return (NormZComp + 1.0) * 0.5;
}
float LightDepth = VectorToDepth(fragPosition - lights[i].position);
float depth_compare = texture(shadowCube, vec4(normalFragmentToLight[0], LightDepth));
(Code for VectorToDepth borrowed from Omnidirectional shadow mapping with depth cubemap.)
Now depth_compare will be a value between 0.0 (completely in shadow) and 1.0 (completely out of shadow). If you have linear texture filtering enabled, the hardware will sample the depth at 4 points and may give you a form of 2x2 PCF filtering. If you have nearest texture filtering, then it will either be 1.0 or 0.0.
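Since the setup in the question uses GL_NEAREST, opting into that hardware 2x2 PCF would just mean switching the filters (a sketch; only meaningful once GL_TEXTURE_COMPARE_MODE is enabled as above):

glBindTexture(GL_TEXTURE_CUBE_MAP, texShadow);
// With compare mode on, LINEAR filtering on a depth texture averages the four
// per-sample comparison results (2x2 PCF) rather than averaging depth values.
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);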
I have a program that generates a heightmap and then displays it as a mesh with OpenGL. When I try to add lighting, it ends up with weird square shapes covering the mesh. They are more noticeable in some areas than others, but are always there.
I was using a quad mesh, but nothing changed after switching to a triangle mesh. I've used at least three different methods to calculate the vertex normals, all with the same effect. I was doing the lighting manually with shaders, but nothing changes when using the built-in OpenGL lighting system.
My latest normal-generating code (faces is an array of indices into verts, the vertex array):
int i;
for (i = 0; i < NINDEX; i += 3) {
    vec v[3];
    v[0] = verts[faces[i + 0]];
    v[1] = verts[faces[i + 1]];
    v[2] = verts[faces[i + 2]];
    vec v1 = vec_sub(v[1], v[0]);
    vec v2 = vec_sub(v[2], v[0]);
    vec n = vec_norm(vec_cross(v2, v1));
    norms[faces[i + 0]] = vec_add(norms[faces[i + 0]], n);
    norms[faces[i + 1]] = vec_add(norms[faces[i + 1]], n);
    norms[faces[i + 2]] = vec_add(norms[faces[i + 2]], n);
}
for (i = 0; i < NVERTS; i++) {
    norms[i] = vec_norm(norms[i]);
}
Although that isn't the only code I've used, so I doubt that it is the cause of the problem.
I draw the mesh with:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, 0, norms);
glDrawElements(GL_TRIANGLES, NINDEX, GL_UNSIGNED_SHORT, faces);
And I'm not currently using any shaders.
What could be causing this?
EDIT: A more comprehensive set of screenshots:
Wireframe
Flat shading, OpenGL lighting
Smooth shading, OpenGL lighting
Lighting done in shader
For the last one, the shader code is
Vertex:
varying vec3 lightvec, normal;
void main() {
    vec3 lightpos = vec3(0, 0, 100);
    vec3 v = vec3(gl_ModelViewMatrix * gl_Vertex);
    normal = gl_NormalMatrix * gl_Normal;
    lightvec = normalize(lightpos - v);
    gl_Position = ftransform();
}
Fragment:
varying vec3 lightvec, normal;
void main(void) {
    float l = dot(lightvec, normal);
    gl_FragColor = vec4(l, l, l, 1);
}
You need to either normalize the normal in the fragment shader, like so:
varying vec3 lightvec, normal;
void main(void) {
    vec3 normalNormed = normalize(normal);
    float l = dot(lightvec, normalNormed);
    gl_FragColor = vec4(l, l, l, 1);
}
This can be expensive, though. What will also work in this case, with directional lights, is to use vertex lighting. So calculate the light value in the vertex shader:
varying float lightIntensity;
void main() {
    vec3 lightpos = vec3(0, 0, 100);
    vec3 v = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 normal = gl_NormalMatrix * gl_Normal;  // local now, not a varying
    vec3 lightvec = normalize(lightpos - v);
    lightIntensity = dot(normal, lightvec);
    gl_Position = ftransform();
}
and use it in the fragment shader (the varying name must match the vertex shader's):
varying float lightIntensity;
void main(void) {
    float l = lightIntensity;
    gl_FragColor = vec4(l, l, l, 1);
}
I hope this fixes it, let me know if it doesn't.
EDIT: Here's a small diagram that explains what is most likely happening:
EDIT2:
If that doesn't help, add more triangles. Interpolate the values of your heightmap and add some vertices in between.
Alternatively, try changing your tessellation scheme. For example, a mesh of equilateral triangles like the one shown could make the artifacts less prominent.
You'll have to do some interpolation on your heightmap.
Otherwise I have no idea. Good luck!
I don't have a definitive answer for the non-shader versions, but I wanted to add that if you're doing per-pixel lighting in your fragment shader, you should probably be normalizing the normal and lightvec inside the fragment shader.
If you don't, they may not be unit length: a linear interpolation between two normalized vectors is not necessarily normalized (for example, the midpoint of the unit vectors (1, 0, 0) and (0, 1, 0) has a length of about 0.71). This could explain some of the artifacts you see in the shader version, as the magnitude of the dot product would vary as a function of the distance from the vertices, which looks like what you're seeing.
EDIT: Another thought: are you doing any non-uniform scaling (different x, y, z) of the mesh when rendering the non-shader version? If you scale it, then you need to either modify the normals by the inverse scale factor or set glEnable(GL_NORMALIZE). See here for more:
http://www.lighthouse3d.com/tutorials/glsl-tutorial/normalization-issues/
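As a quick illustration of that fixed-function fix (a sketch; drawTerrain is a hypothetical stand-in for the glDrawElements call above):

// Non-uniform scaling skews transformed normals; GL_NORMALIZE tells the
// fixed-function pipeline to renormalize them, at some per-vertex cost.
glEnable(GL_NORMALIZE);
glPushMatrix();
glScalef(1.0f, 2.0f, 1.0f);  // example non-uniform scale
drawTerrain();               // hypothetical wrapper around the draw code above
glPopMatrix();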