Graphic issue with vertex normals - c

I have checked out several solutions here and on other pages for calculating vertex normals. The common approach, which seems to work best for my own implementation (it renders a 3D terrain), is to calculate the face normals first, which isn't a problem, then go over each face, add its normal to the vertices that make it up, and normalize those when done. It seems to work for the most part, but I have some strange graphical problems: mainly where the light transitions from light to dark, you can tell where the faces are. In the following image you can see this near the lower right side, at the top of the hill.
So I am wondering what is causing this strange pattern. It has something to do with how I am calculating the normals, but I am just not seeing where the issue is. Any help would be appreciated.
The code to calculate the normals is...
// Calculate surface normals
vec3 v1, v2, v3, vec1, vec2;
for(GLuint i = 0; i < terrain->NumFaces; i++) {
    v1 = terrain->Vertices[terrain->Faces[i].vert_indices[0]];
    v2 = terrain->Vertices[terrain->Faces[i].vert_indices[1]];
    vec1 = vector(&v2, &v1);
    v3 = terrain->Vertices[terrain->Faces[i].vert_indices[2]];
    vec2 = vector(&v3, &v1);
    terrain->Faces[i].surface_normal = crossProduct(&vec1, &vec2);
    normalize(&terrain->Faces[i].surface_normal);
}
// Calculate vertex normals...
// Add all the surface normals to their attached vertex normals
for(GLuint currentFace = 0; currentFace < terrain->NumFaces; currentFace++) {
    vec3 *f = &terrain->Faces[currentFace].surface_normal;
    for(GLuint faceVertex = 0; faceVertex < 3; faceVertex++) {
        vec3 *n = &terrain->Normals[terrain->Faces[currentFace].vert_indices[faceVertex]];
        *n = vec3Add(n, f); // adds vector f to n
    }
}
// Go over all vertices and normalize them
for(GLuint currentVertice = 0; currentVertice < terrain->NumVertices; currentVertice++)
    normalize(&terrain->Normals[currentVertice]);
Other utility functions I use in the above code are...
// Returns the vector between two vertices
vec3 vector(const vec3 *vp1, const vec3 *vp2)
{
    vec3 ret;
    ret.x = vp1->x - vp2->x;
    ret.y = vp1->y - vp2->y;
    ret.z = vp1->z - vp2->z;
    return ret;
}

// Returns the normal of two vectors
vec3 crossProduct(const vec3 *v1, const vec3 *v2)
{
    vec3 normal;
    normal.x = v1->y * v2->z - v1->z * v2->y;
    normal.y = v1->z * v2->x - v1->x * v2->z;
    normal.z = v1->x * v2->y - v1->y * v2->x;
    return normal;
}

// Returns the length of a vector
float vec3Length(vec3 *v1) {
    return sqrt(v1->x * v1->x + v1->y * v1->y + v1->z * v1->z);
}

// Normalizes a vector
void normalize(vec3 *v1)
{
    float len = vec3Length(v1);
    if(len < EPSILON) return;
    float inv = 1.0f / len;
    v1->x *= inv;
    v1->y *= inv;
    v1->z *= inv;
}

// Adds vector v2 to v1
vec3 vec3Add(vec3 *v1, vec3 *v2)
{
    vec3 v;
    v.x = v1->x + v2->x;
    v.y = v1->y + v2->y;
    v.z = v1->z + v2->z;
    return v;
}

One problem with using the average of the face normals to compute the vertex normals is that the computed normals can be biased. For example, imagine that there is a ridge that runs north/south. One vertex on the peak of the ridge has three polygons on the east side, and two on the west. The vertex normal will be angled to the east. This can cause darker lighting at that point when the illumination is coming from the west.
A possible improvement is to weight each face's normal by the angle that face's corner makes at the vertex, but this will not remove all of the bias.
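As a rough sketch of that weighting idea (not the poster's code; it reuses the vec3/terrain structures and helpers from the question and assumes <math.h> is available for acosf), the accumulation loop could scale each face normal by the corner angle before adding it, after the surface normals have been computed:

// Sketch: angle-weighted vertex normals, reusing the helpers above.
// The corner angle is computed from the two edge vectors leaving the vertex.
for(GLuint i = 0; i < terrain->NumFaces; i++) {
    for(GLuint c = 0; c < 3; c++) {
        GLuint i0 = terrain->Faces[i].vert_indices[c];
        GLuint i1 = terrain->Faces[i].vert_indices[(c + 1) % 3];
        GLuint i2 = terrain->Faces[i].vert_indices[(c + 2) % 3];

        vec3 e1 = vector(&terrain->Vertices[i1], &terrain->Vertices[i0]);
        vec3 e2 = vector(&terrain->Vertices[i2], &terrain->Vertices[i0]);
        normalize(&e1);
        normalize(&e2);

        // Angle at this corner; dot product clamped for safety.
        float d = e1.x * e2.x + e1.y * e2.y + e1.z * e2.z;
        if(d > 1.0f) d = 1.0f;
        if(d < -1.0f) d = -1.0f;
        float angle = acosf(d);

        // Accumulate the face normal scaled by the corner angle.
        vec3 *n = &terrain->Normals[i0];
        n->x += terrain->Faces[i].surface_normal.x * angle;
        n->y += terrain->Faces[i].surface_normal.y * angle;
        n->z += terrain->Faces[i].surface_normal.z * angle;
    }
}
// Normalize all vertex normals afterwards, exactly as in the original code.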

After experimenting with different solutions, I discovered that the normal generation in this post actually works extremely well; it's virtually instant and wasn't the problem. The problem seemed to be the use of a single large texture for the terrain. I switched to a tiled texture that doesn't get stretched as much, and the graphical issue seems to have gone away. It was a relief that the normal generation I posted works well, as the other solutions were horribly slow. This is what I ended up with and, as you can see, there are no graphical problems, plus it looks better with more detail. I wanted to post what I found out in case anyone else sees the same problem.
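For reference, tiling in fixed-function OpenGL usually just means setting the wrap mode to repeat and using texture coordinates greater than 1.0. A minimal sketch; terrainTexture, TILE_COUNT, terrainWidth and terrainDepth are placeholder names for illustration, not from the original code:

// Sketch: repeat-wrap a terrain texture and tile it several times across the mesh.
glBindTexture(GL_TEXTURE_2D, terrainTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

// When emitting texture coordinates, scale them past 1.0 so the image repeats
// instead of being stretched over the whole terrain.
float u = (float)x / (float)terrainWidth * TILE_COUNT;
float v = (float)z / (float)terrainDepth * TILE_COUNT;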

Related

OpenGL. Want the camera to always look at the player in a fixed position [duplicate]

I'm making a car racing game for the first time using OpenGL. The first problem I face is how to make the camera follow the car at a constant distance. Here is the code for the keyboard function; V is the velocity of the car.
void OnSpecial(int key, int x, int y)
{
    float step = 5;
    switch(key) {
    case GLUT_KEY_LEFT:
        carAngle = step;
        V.z = carAngle;
        camera.Strafe(-step/2);
        break;
    case GLUT_KEY_RIGHT:
        carAngle = -step;
        V.z = carAngle;
        camera.Strafe(step/2);
        break;
    case GLUT_KEY_UP:
        V.x += (-step);
        camera.Walk(step/2);
        break;
    case GLUT_KEY_DOWN:
        if(V.x < 0)
        {
            V.x += step;
            camera.Walk(-step/2);
        }
        break;
    }
}
Something like this, maybe?
vec3 cameraPosition = carPosition + vec3(20*cos(carAngle), 10, 20*sin(carAngle));
vec3 cameraTarget = carPosition;
vec3 cameraUp = vec3(0, 1, 0);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(cameraPosition.x, cameraPosition.y, cameraPosition.z,
          cameraTarget.x,   cameraTarget.y,   cameraTarget.z,
          cameraUp.x,       cameraUp.y,       cameraUp.z);
glTranslatef(carPosition.x, carPosition.y, carPosition.z);
drawCar();
If you're not using the old and deprecated OpenGL API (glBegin and friends), you'll have to do something like
mat4 ViewMatrix = LookAt(cameraPosition, cameraTarget, cameraUp); // adapt depending on what math library you use
The answer to that is simple. You have a player-controlled object (the car), so you have its position and orientation in world space via its ModelViewMatrix (usually pointing to the center of the 3D model).
To transform it into the correct follow ModelViewMatrix you must:
obtain or construct the car's ModelMatrix as double M[16]
translate/rotate it to the new position (inside the cockpit or behind the car), so the Z axis is pointing the way you want to look; it's usual to make the follow distance a function of speed
invert M, so M = Inverse(M)
use M as the ModelViewMatrix
render
So in a nutshell:
ModelViewMatrix = rendered_object_matrix * Inverse(following_object_matrix * local_view_offset)
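A minimal sketch of that composition, assuming hypothetical double[16] matrix helpers (mat4_multiply, mat4_invert, mat4_translation) rather than any particular math library:

// Sketch only: mat4_multiply, mat4_invert and mat4_translation are assumed
// helper functions for column-major double[16] matrices, not a real library API.
double car[16];          // car model matrix in world space (position + orientation)
double offset[16];       // local view offset, e.g. a bit above and behind the car
double view[16];         // Inverse(following_object_matrix * local_view_offset)
double model_view[16];   // final ModelViewMatrix for the object being drawn
double object_model[16]; // rendered_object_matrix, filled in per object

mat4_translation(offset, 0.0, 3.0, 10.0);   // hypothetical offset values
mat4_multiply(view, car, offset);           // following_object_matrix * local_view_offset
mat4_invert(view, view);                    // invert it to get the camera/view part

// "rendered_object_matrix * Inverse(...)" from the formula above; the exact
// multiplication order depends on your row/column-major convention.
mat4_multiply(model_view, object_model, view);

glMatrixMode(GL_MODELVIEW);
glLoadMatrixd(model_view);
// draw the object here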
for additional stuff you need for this look at my answer here:
https://stackoverflow.com/a/18041433/2521214

glNamedBufferSubData Not updating between draws

I am having difficulties with rendering rectangles.
The rectangle vertices are calculated from gl_VertexID, using data from a Uniform Buffer Object.
However, when updating the uniform buffer data between draw calls, the same elements keep appearing.
#version 440
out vec3 r_uv;
out vec4 r_color;

layout (binding = 2, std140) uniform struct_uirect {
    vec2 pos;
    vec2 size;
    vec4 color;
    int uv;
} uirect;

void main(){
    vec2 verts[4] = vec2[4](
        vec2(0, 0),
        vec2(1, 0),
        vec2(0, 1),
        vec2(1, 1)
    );
    r_uv = vec3(verts[gl_VertexID], uirect.uv);
    r_color = uirect.color;
    vec2 vert = uirect.pos + verts[gl_VertexID] * uirect.size;
    vert = vert * 2 - 1;
    gl_Position = vec4(vert, 0.0, 1.0);
}
#version 440
out vec4 color;
in vec3 r_uv;
in vec4 r_color;

layout (binding = 1) uniform sampler2DArray voxel_atlas;

void main(){
    color = texture(voxel_atlas, r_uv) * r_color;
}
Because of order dependence, every element is drawn separately, using the following recursive function.
void UI_Tag_Render(Tag* tag, int x, int y, int w, int h){
    glViewport(x, y, w, h);
    glNamedBufferSubData(binding_points[2], 0, sizeof(Tag), tag);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    if(tag->child)
        UI_Tag_Render(
            tag->child,
            x + w * tag->pos[0],
            y + h * tag->pos[1],
            w * tag->size[0],
            h * tag->size[1]
        );
    if(tag->sibling)
        UI_Tag_Render(tag->sibling, x, y, w, h);
}
This results in the same elements being drawn, progressively getting smaller (due to the glViewport call).
The uniform buffer object is created empty (with the sizeof(Tag) size), and once glNamedBufferSubData is called, its data doesn't seem to update.
The same way of handling UBOs is used with a different shader that works correctly (but that one draws directly to the screen and has input vertices).
This does appear to be a synchronization issue. (I'm not sure whether this is driver-related or allowed by the OpenGL standard.)
Adding a glFinish() call after the draw call causes the UBO to be updated correctly.
Thanks to @NicolBolas for pointing out that draw calls are indeed asynchronous.
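A minimal sketch of where that call would go in the recursive render function above (other synchronization strategies exist; this just mirrors the fix described):

void UI_Tag_Render(Tag* tag, int x, int y, int w, int h){
    glViewport(x, y, w, h);
    glNamedBufferSubData(binding_points[2], 0, sizeof(Tag), tag);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // Wait for the draw to finish before overwriting the UBO for the next element.
    glFinish();

    if(tag->child)
        UI_Tag_Render(tag->child,
                      x + w * tag->pos[0], y + h * tag->pos[1],
                      w * tag->size[0],    h * tag->size[1]);
    if(tag->sibling)
        UI_Tag_Render(tag->sibling, x, y, w, h);
}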

Mandelbrot Set Zoom Distortion in C

I'm writing a C program to render the Mandelbrot set, and currently I'm stuck trying to figure out how to zoom in properly.
I want the zoom to follow the mouse pointer on the screen, so that the fractal zooms in on the cursor position.
I have a window defined by:
#define WIDTH 800
#define HEIGHT 600
My Re_max, Re_min, Im_Max, Im_Min are defined and initialized as follows:
man->re_max = 2.0;
man->re_min = -2.0;
man->im_max = 2.0;
man->im_min = -2.0;
The interpolation value (more on this later) is defined and initialized as follows:
pos->interp = 1.0;
To map the pixel coordinates to the center of the screen, I'm using the position function:
void position(int x, int y, t_mandel *man)
{
    double *s_x;
    double *s_y;

    s_x = &man->pos->shift_x;
    s_y = &man->pos->shift_y;
    man->c_re = (x / (WIDTH / (man->re_max - man->re_min)) + man->re_min) + *s_x;
    man->c_im = (y / (HEIGHT / (man->im_max - man->re_min)) + man->im_min) + *s_y;
    man->c_im *= 0.8;
}
To zoom in, I first get the coordinates of the mouse pointer and map them to the visible area given by the rectangle defined by the (Re_Max, Re_Min, Im_Max, Im_Min) using this function, where x and y are coordinates of the pointer on a screen:
int mouse_move(int x, int y, void *p)
{
    t_fract *fract;
    t_mandel *man;

    fract = (t_fract *)p;
    man = fract->mandel;
    fract->mouse->Re = x / (WIDTH / (man->re_max - man->re_min)) + man->re_min;
    fract->mouse->Im = y / (HEIGHT / (man->im_max - man->re_min)) + man->im_min;
    return (0);
}
This function is called when a mouse wheel scroll is registered. The actual zooming is achieved by this function:
void zoom_control(int key, t_fract *fract)
{
    double *interp;

    interp = &fract->mandel->pos->interp;
    if (key == 5) // zoom in
    {
        *interp = 1.0 / 1.03;
        apply_zoom(fract->mandel, fract->mouse->Re, fract->mouse->Im, *interp);
    }
    else if (key == 4) // zoom out
    {
        *interp = 1.0 * 1.03;
        apply_zoom(fract->mandel, fract->mouse->Re, fract->mouse->Im, *interp);
    }
}
Which calls this:
void apply_zoom(t_mandel *man, double m_re, double m_im, double interp)
{
    man->re_min = interpolate(m_re, man->re_min, interp);
    man->im_min = interpolate(m_im, man->im_min, interp);
    man->re_max = interpolate(m_re, man->re_max, interp);
    man->im_max = interpolate(m_im, man->im_max, interp);
}
I have a simple interpolate function to redefine the area bounding rectangle:
double interpolate(double start, double end, double interp)
{
    return (start + ((end - start) * interp));
}
So the problem is:
My code renders the fractal like this -
Mandelbrot set
But when I try to zoom in with the mouse as described, instead of going nicely "in", it distorts like this: the image sort of collapses onto itself instead of actually diving into the fractal.
I would really appreciate help with this one as I've been stuck on it for a while now.
If you please could also explain the actual math behind your solutions, I would be overjoyed!
Thank you!
After quite a bit of headache and a lot of paper wasted on recalculating interpolation methods, I realized that the way I mapped my complex numbers to the screen was incorrect to begin with. Reworking my mapping method solved the problem, so I'll share what I did.
-------------------------------OLD WAY--------------------------------------
I've initialized my Re_max, Re_min, Im_Max, Im_Min values, which define the visible area in the following way:
re_max = 2.0;
re_min = -2.0;
im_max = 2.0;
im_min = -2.0;
Then, I used this method to convert my on-screen coordinates to the complex numbers used to calculate the fractal (note that the coordinates used for mapping the mouse position for zoom interpolation and coordinates used to calculate the fractal itself use the same method):
Re = x / (WIDTH / (re_max - re_min)) + re_min;
Im = y / (HEIGHT / (im_max - re_min)) + im_min;
However, this way I didn't take the screen ratio into account, and I neglected the fact (due to a lack of knowledge) that the on-screen y coordinate is inverted (at least in my program): the negative direction is up and the positive direction is down.
This way, when I tried to zoom in with my interpolation, naturally, the image distorted.
------------------------------CORRECT WAY-----------------------------------
When defining the bounding rectangle of the set, the maximum imaginary part (im_max) should be calculated from the screen ratio, to avoid image distortion when the display window isn't square:
re_max = 2.0;
re_min = -2.0;
im_min = -2.0;
im_max = im_min + (re_max - re_min) * HEIGHT / WIDTH;
To map the on-screen coordinates to complex numbers, I first found the "coordinate-to-number" ratio, which is equal to rectangle length / screen width:
re_factor = (re_max - re_min) / (WIDTH - 1);
im_factor = (im_max - im_min) / (HEIGHT - 1);
Then, I've mapped my pixel coordinates to the real and imaginary part of a complex number used in calculations like so:
c_re = re_min + x * re_factor;
c_im = im_max - y * im_factor;
After implementing those changes, I was finally able to smoothly zoom into the mouse position without any distortion or image "jumps".
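Putting the corrected mapping together as one helper (a sketch using the names from the snippets above; the function itself and its parameter list are made up, and in the real program the bounds would be the current zoomed view, not constants):

// Sketch: map a pixel (x, y) to the complex plane using the corrected,
// aspect-aware mapping described above.
void screen_to_complex(int x, int y,
                       double re_min, double re_max,
                       double im_min, double im_max,
                       double *c_re, double *c_im)
{
    double re_factor = (re_max - re_min) / (WIDTH - 1);
    double im_factor = (im_max - im_min) / (HEIGHT - 1);

    *c_re = re_min + x * re_factor;   // x grows right, like the real axis
    *c_im = im_max - y * im_factor;   // screen y grows downward, so flip it
}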
If I understand you correctly, you want to make the point where the mouse is located the new center of the image, and change the scale of the image by a factor of 1.03. I would try something like this:
Your position() and mouse_move() functions remain the same.
In zoom_control(), just change the way you set the new interpolation value: it should not be a fixed constant but should be based on its current value. Also, pass the new scaling factor to apply_zoom():
void zoom_control(int key, t_fract *fract)
{
    double *interp;
    interp = &fract->mandel->pos->interp;
    double zoom_factor = 1.03;
    if (key == 5) // zoom in
    {
        *interp /= zoom_factor;
        apply_zoom(fract->mandel, fract->mouse->Re, fract->mouse->Im, 1.0 / zoom_factor);
    }
    else if (key == 4) // zoom out
    {
        *interp *= zoom_factor;
        apply_zoom(fract->mandel, fract->mouse->Re, fract->mouse->Im, zoom_factor);
    }
}
Then modify the apply_zoom() function:
void apply_zoom(t_mandel *man, double m_re, double m_im, double zoom_factor)
{
    // Calculate the new ranges along the real and imaginary axes.
    // They are equal to the current ranges multiplied by the zoom_factor.
    double re_range = (man->re_max - man->re_min) * zoom_factor;
    double im_range = (man->im_max - man->im_min) * zoom_factor;

    // Set the new min/max values for the real and imaginary axes with the center at
    // mouse coordinates m_re and m_im.
    man->re_min = m_re - re_range / 2;
    man->re_max = m_re + re_range / 2;
    man->im_min = m_im - im_range / 2;
    man->im_max = m_im + im_range / 2;
}

How to correctly make a depth cubemap for shadow mapping?

I have written code to render my scene objects to a cubemap texture of format GL_DEPTH_COMPONENT and then use this texture in a shader to determine whether a fragment is directly lit or not, for shadowing purposes. However, my cubemap appears to come out as black. I suppose I am not setting up my FBO or rendering context correctly, but I fail to see what is missing.
Using GL 3.3 in compatibility profile.
This is my code for creating the FBO and cubemap texture:
glGenFramebuffers(1, &fboShadow);
glGenTextures(1, &texShadow);
glBindTexture(GL_TEXTURE_CUBE_MAP, texShadow);
for (int sideId = 0; sideId < 6; sideId++) {
    // Make sure GL knows what this is going to be.
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + sideId, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
}
// Don't interpolate depth value sampling. Between occluder and occludee there will
// be an instant jump in depth value, not a linear transition.
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
My full rendering function then looks like so:
void render() {
    // --- MAKE DEPTH CUBEMAP ---
    // Set shader program for depth testing
    glUseProgram(progShadow);
    // Get the light for which we want to generate a depth cubemap
    PointLight p = pointLights.at(0);
    // Bind our framebuffer for drawing; clean it up
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboShadow);
    glClear(GL_DEPTH_BUFFER_BIT);
    // Make 1:1-ratio, 90-degree view frustum for a 512x512 texture.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(90.0, 1, 16.0, 16384.0);
    glViewport(0, 0, 512, 512);
    glMatrixMode(GL_MODELVIEW);
    // Set modelview and projection matrix uniforms
    setShadowUniforms();
    // Need 6 renderpasses to complete each side of the cubemap
    for (int sideId = 0; sideId < 6; sideId++) {
        // Attach depth attachment of current framebuffer to level 0 of currently relevant target of texShadow cubemap texture.
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X + sideId, texShadow, 0);
        // All is fine.
        GLenum status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
        if (status != GL_FRAMEBUFFER_COMPLETE) {
            std::cout << "Shadow FBO is broken with code " << status << std::endl;
        }
        // Push modelview matrix stack because we need to rotate and move camera every time
        glPushMatrix();
        // This does a switch-case with glRotatefs
        rotateCameraForSide(GL_TEXTURE_CUBE_MAP_POSITIVE_X + sideId);
        // Render from light's position.
        glTranslatef(-p.getX(), -p.getY(), -p.getZ());
        // Render all objects.
        for (ObjectList::iterator it = objectList.begin(); it != objectList.end(); it++) {
            (*it)->render();
        }
        glPopMatrix();
    }
    // --- RENDER SCENE ---
    // Bind default framebuffer
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    // Setup proper projection matrix with 70 degree vertical FOV and ratio according to window frame dimensions.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(70.0, ((float)vpWidth) / ((float)vpHeight), 16.0, 16384.0);
    glViewport(0, 0, vpWidth, vpHeight);
    glUseProgram(prog);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    applyCameraPerspective();
    // My PointLight class has both a position (world space) and renderPosition (camera space) Vec3f variable;
    // The lights' renderPositions get transformed with the modelview matrix by this.
    updateLights();
    // And here, among other things, the lights' camera space coordinates go to the shader.
    setUniforms();
    // Render all objects
    for (ObjectList::iterator it = objectList.begin(); it != objectList.end(); it++) {
        // Object texture goes to texture unit 0
        GLuint usedTexture = glTextureList.find((*it)->getTextureName())->second;
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, usedTexture);
        glUniform1i(textureLoc, 0);
        // Cubemap goes to texture unit 1
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_CUBE_MAP, texShadow);
        glUniform1i(shadowLoc, 1);
        (*it)->render();
    }
    glPopMatrix();
    frameCount++;
}
The shader program for rendering depth values ("progShadow") is simple.
Vertex shader:
#version 330
in vec3 position;
uniform mat4 modelViewMatrix, projectionMatrix;

void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1);
}
Fragment shader:
#version 330
void main() {
    // OpenGL sets the depth anyway. Nothing to do here.
}
The shader program for final rendering ("prog") has a fragment shader which looks something like this:
#version 330
#define MAX_LIGHTS 8

in vec3 fragPosition;
in vec3 fragNormal;
in vec2 fragTexCoordinates;
out vec4 fragColor;

uniform sampler2D colorTexture;
uniform samplerCubeShadow shadowCube;
uniform uint activeLightCount;

struct Light {
    vec3 position;
    vec3 diffuse;
    float cAtt;
    float lAtt;
    float qAtt;
};

// Index 0 to (activeLightCount - 1) need to be the active lights.
uniform Light lights[MAX_LIGHTS];

void main() {
    vec3 lightColor = vec3(0, 0, 0);
    vec3 normalFragmentToLight[MAX_LIGHTS];
    float distFragmentToLight[MAX_LIGHTS];
    float distEyeToFragment = length(fragPosition);

    // Accumulate all light in "lightColor" variable
    for (uint i = uint(0); i < activeLightCount; i++) {
        normalFragmentToLight[i] = normalize(lights[i].position - fragPosition);
        distFragmentToLight[i] = distance(fragPosition, lights[i].position);
        float attenuation = (lights[i].cAtt
            + lights[i].lAtt * distFragmentToLight[i]
            + lights[i].qAtt * pow(distFragmentToLight[i], 2.0));
        float dotProduct = dot(fragNormal, normalFragmentToLight[i]);
        lightColor += lights[i].diffuse * max(dotProduct, 0.0) / attenuation;
    }

    // Shadow mapping only for light at index 0 for now.
    float distOccluderToLight = texture(shadowCube, vec4(normalFragmentToLight[0], 1));
    // My geometries use inches as units, hence a large bias of 1
    bool isLit = (distOccluderToLight + 1) < distFragmentToLight[0];
    fragColor = texture2D(colorTexture, fragTexCoordinates) * vec4(lightColor, 1.0f) * int(isLit);
}
I have verified that all uniform location variables are set to a proper value (i.e. not -1).
It might be worth noting that I make no call to glBindFragDataLocation() for "progShadow" prior to linking it, because no color value should be written by that shader.
See anything obviously wrong here?
For shadow maps, depth buffer internal format is pretty important (too small and things look awful, too large and you eat memory bandwidth). You should use a sized format (e.g. GL_DEPTH_COMPONENT24) to guarantee a certain size, otherwise the implementation will pick whatever it wants. As for debugging a cubemap shadow map, the easiest thing to do is actually to draw the scene into each cube face and output color instead of depth. Then, where you currently try to use the cubemap to sample depth, write the sampled color to fragColor instead. You can rule out view issues immediately this way.
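For example, the allocation loop in the question could request a sized format like this (24 bits is just one common choice; this is a sketch, not the only valid option):

// Request an explicitly sized depth format for each cube face instead of
// letting the driver pick one for the unsized GL_DEPTH_COMPONENT.
for (int sideId = 0; sideId < 6; sideId++) {
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + sideId, 0, GL_DEPTH_COMPONENT24,
                 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
}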
There is another much more serious issue, however. You are using samplerCubeShadow, but you have not set GL_TEXTURE_COMPARE_MODE for your cube map. Attempting to sample from a depth texture with this sampler type and without GL_TEXTURE_COMPARE_MODE = GL_COMPARE_REF_TO_TEXTURE will produce undefined results. Even if you did have this mode set properly, the 4th component of the texture coordinates is used as the depth comparison reference -- a constant value of 1.0 is NOT what you want.
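A sketch of the texture parameters that samplerCubeShadow expects, set alongside the cube map's other parameters:

// Required for samplerCubeShadow: lookups become depth comparisons.
glBindTexture(GL_TEXTURE_CUBE_MAP, texShadow);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);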
Likewise, the depth buffer does not store a linear distance, so you cannot directly compare against the distance you computed here:
distFragmentToLight[i] = distance(fragPosition, lights[i].position);
Instead, something like this will be necessary:
float VectorToDepth (vec3 Vec)
{
    vec3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));

    // Replace f and n with the far and near plane values you used when
    // you drew your cube map.
    const float f = 2048.0;
    const float n = 1.0;

    float NormZComp = (f+n) / (f-n) - (2*f*n)/(f-n)/LocalZcomp;
    return (NormZComp + 1.0) * 0.5;
}

float LightDepth = VectorToDepth(fragPosition - lights[i].position);
float depth_compare = texture(shadowCube, vec4(normalFragmentToLight[0], LightDepth));
* Code for float VectorToDepth (vec3 Vec) borrowed from Omnidirectional shadow mapping with depth cubemap
Now depth_compare will be a value between 0.0 (completely in shadow) and 1.0 (completely out of shadow). If you have linear texture filtering enabled, the hardware will sample the depth at 4 points and may give you a form of 2x2 PCF filtering. If you have nearest texture filtering, then it will either be 1.0 or 0.0.

Strange square lighting artefacts in OpenGL

I have a program that generates a heightmap and then displays it as a mesh with OpenGL. When I try to add lighting, it ends up with weird square shapes covering the mesh. They are more noticeable in some areas than others, but are always there.
I was using a quad mesh, but nothing changed after switching to a triangle mesh. I've used at least three different methods to calculate the vertex normals, all with the same effect. I was doing the lighting manually with shaders, but nothing changes when using the built-in OpenGL lighting system either.
My latest normal-generating code (faces is an array of indices into verts, the vertex array):
int i;
for (i = 0; i < NINDEX; i += 3) {
    vec v[3];
    v[0] = verts[faces[i + 0]];
    v[1] = verts[faces[i + 1]];
    v[2] = verts[faces[i + 2]];
    vec v1 = vec_sub(v[1], v[0]);
    vec v2 = vec_sub(v[2], v[0]);
    vec n = vec_norm(vec_cross(v2, v1));
    norms[faces[i + 0]] = vec_add(norms[faces[i + 0]], n);
    norms[faces[i + 1]] = vec_add(norms[faces[i + 1]], n);
    norms[faces[i + 2]] = vec_add(norms[faces[i + 2]], n);
}
for (i = 0; i < NVERTS; i++) {
    norms[i] = vec_norm(norms[i]);
}
That isn't the only code I've used, though, so I doubt it is the cause of the problem.
I draw the mesh with:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, 0, norms);
glDrawElements(GL_TRIANGLES, NINDEX, GL_UNSIGNED_SHORT, faces);
And I'm not currently using any shaders.
What could be causing this?
EDIT: A more comprehensive set of screenshots:
Wireframe
Flat shading, OpenGL lighting
Smooth shading, OpenGL lighting
Lighting done in shader
For the last one, the shader code is
Vertex:
varying vec3 lightvec, normal;

void main() {
    vec3 lightpos = vec3(0, 0, 100);
    vec3 v = vec3(gl_ModelViewMatrix * gl_Vertex);
    normal = gl_NormalMatrix * gl_Normal;
    lightvec = normalize(lightpos - v);
    gl_Position = ftransform();
}
Fragment:
varying vec3 lightvec, normal;

void main(void) {
    float l = dot(lightvec, normal);
    gl_FragColor = vec4(l, l, l, 1);
}
You need to either normalize the normal in the fragment shader, like so:
varying vec3 lightvec, normal;

void main(void) {
    vec3 normalNormed = normalize(normal);
    float l = dot(lightvec, normalNormed);
    gl_FragColor = vec4(l, l, l, 1);
}
This can be expensive though. What will also work in this case, with directional lights, is to use vertex lighting. So calculate the light value in the vertex shader
varying float lightItensity;

void main() {
    vec3 lightpos = vec3(0, 0, 100);
    vec3 v = vec3(gl_ModelViewMatrix * gl_Vertex);
    // Declared locally here; only the light intensity needs to be passed on.
    vec3 normal = gl_NormalMatrix * gl_Normal;
    vec3 lightvec = normalize(lightpos - v);
    lightItensity = dot(normal, lightvec);
    gl_Position = ftransform();
}
and use it in the fragment shader,
varying float lightItensity;

void main(void) {
    float l = lightItensity;
    gl_FragColor = vec4(l, l, l, 1);
}
I hope this fixes it, let me know if it doesn't.
EDIT: Here's a small diagram that explains what is most likely happening
EDIT2:
If that doesn't help, add more triangles. Interpolate the values of your heightmap and add some vertices in between.
Alternatively, try changing your tessellation scheme. For example, a mesh of equilateral triangles like this could make the artifacts less prominent.
You'll have to do some interpolation on your heightmap.
Otherwise I have no idea. Good luck!
I don't have a definitive answer for the non-shader versions, but I wanted to add that if you're doing per pixel lighting in your fragment shader, you should probably be normalizing the normal and lightvec inside the fragment shader.
If you don't do this, they may not be unit length (a linear interpolation between two normalized vectors is not necessarily normalized). This could explain some of the artifacts you see in the shader version, as the magnitude of the dot product would vary as a function of the distance from the vertices, which kind of looks like what you're seeing.
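A tiny numeric illustration of that point (plain C, just to show the effect; it's not part of the shaders above):

#include <math.h>
#include <stdio.h>

// Halfway between two unit vectors (1,0,0) and (0,1,0) is (0.5, 0.5, 0),
// which has length ~0.707, not 1, so a dot product against it is scaled down.
int main(void) {
    float x = 0.5f, y = 0.5f, z = 0.0f;
    printf("length = %f\n", sqrtf(x * x + y * y + z * z)); // prints ~0.707107
    return 0;
}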
EDIT: Another thought, are you doing any non-uniform scaling (different x,y,z) of the mesh when rendering the non-shader version? If you scale it, then you need to either modify the normals by the inverse scale factor, or set glEnable(GL_NORMALIZE). See here for more:
http://www.lighthouse3d.com/tutorials/glsl-tutorial/normalization-issues/
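For the fixed-function path, that fix is a one-liner before drawing the scaled mesh (the scale values here are only an example):

// If the modelview matrix includes a non-uniform scale, have OpenGL renormalize
// normals after transforming them.
glEnable(GL_NORMALIZE);
glScalef(1.0f, 2.0f, 1.0f);   // example non-uniform scale
// ... draw the mesh as before (glDrawElements etc.)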
