I have to read a 3D object from an ASE file. The object turns out to be too big for the world I have to create, so I must scale it down.
At its original size, it is lit properly.
However, once I scale it down, the lighting becomes oversaturated.
The world is centered around (0, 0, 0); it is 100 meters long (y axis) and 50 meters wide (x axis), and my up vector is (0, 0, 1). There are two lights: light0 at (20, 35, 750) and light1 at (-20, -35, 750).
Relevant parts of the code:
void init(void) {
    glClearColor(0.827, 0.925, 0.949, 0.0);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_COLOR_MATERIAL);
    glColorMaterial(GL_FRONT, GL_DIFFUSE);
    glEnable(GL_LIGHT0);
    glEnable(GL_LIGHT1);
    glEnable(GL_LIGHTING);
    glShadeModel(GL_SMOOTH);

    GLfloat difusa[] = { 1.0f, 1.0f, 1.0f, 1.0f }; // white light
    glLightfv(GL_LIGHT0, GL_DIFFUSE, difusa);
    glLightfv(GL_LIGHT1, GL_DIFFUSE, difusa);

    loadObjectFromFile("objeto.ASE");
}
void display(void) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(eyeX, eyeY, eyeZ, atX, atY, atZ, 0.0, 0.0, 1.0);

    GLfloat posicion0[] = { 20.0f, 35.0f, 750.0f, 1.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, posicion0);
    GLfloat posicion1[] = { -20.0f, -35.0f, 750.0f, 1.0f };
    glLightfv(GL_LIGHT1, GL_POSITION, posicion1);

    glColor3f(0.749, 0.918, 0.278);
    glPushMatrix();
    glTranslatef(0.0, 0.0, 1.5);
    // Here comes the problem
    glScalef(0.08, 0.08, 0.08);
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < numFaces; i++) {
        glNormal3d(faces3D[i].n.nx, faces3D[i].n.ny, faces3D[i].n.nz);
        glVertex3d(vertex[faces3D[i].s.A].x, vertex[faces3D[i].s.A].y, vertex[faces3D[i].s.A].z);
        glVertex3d(vertex[faces3D[i].s.B].x, vertex[faces3D[i].s.B].y, vertex[faces3D[i].s.B].z);
        glVertex3d(vertex[faces3D[i].s.C].x, vertex[faces3D[i].s.C].y, vertex[faces3D[i].s.C].z);
    }
    glEnd();
    glPopMatrix();

    glutSwapBuffers();
}
Why does lighting fail when the object is scaled down?
The problem you're running into is that scaling the modelview matrix also influences the "normal matrix" that normals are transformed with. The normal matrix is the transpose of the inverse of the modelview matrix, so by scaling the modelview matrix down, you scale the normal matrix up (a consequence of the inversion step used to obtain it).
Because of that, the transformed normals must be rescaled or renormalized whenever the scale of the modelview matrix is not unity. In fixed-function OpenGL there are two ways to do this: normal normalization (it sounds funny, I know) and normal rescaling. You can enable one or the other with
glEnable(GL_NORMALIZE);
glEnable(GL_RESCALE_NORMAL);
In a shader you would simply normalize the transformed normal:
#version ...
uniform mat3 mat_normal;
in vec3 vertex_normal;

void main()
{
    ...
    vec3 view_normal = normalize(mat_normal * vertex_normal);
    ...
}
Depending on the setting of GL_NORMALIZE and GL_RESCALE_NORMAL, your normals are transformed accordingly by the OpenGL pipeline.
Start with glEnable(GL_NORMALIZE) and see if that solves your problem.
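Applied to the init() from the question, that is a single extra line (a minimal sketch; since the scale is uniform, glEnable(GL_RESCALE_NORMAL) would also work):

void init(void) {
    /* ... existing setup from the question ... */
    glEnable(GL_LIGHTING);
    /* Renormalize normals after the modelview transform, so the
       glScalef(0.08, 0.08, 0.08) in display() no longer distorts the lighting. */
    glEnable(GL_NORMALIZE);
    glShadeModel(GL_SMOOTH);
    /* ... */
}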
Related
I know this has been asked before, but I have yet to find an answer that works in my case.
Basically, I want the camera to move left and right based on the mouse cursor position. The more the mouse is to the left, the more the camera turns to the left. So it should be possible to turn around and move in the reverse direction. How do I do this?
This is my camera position:
GLfloat cameraPosition[] = { 0.0, 0.0, 3.5 };
GLfloat lx = 0.0; GLfloat ly = 0.0;
This is my projection matrix:
// set to projection mode
glMatrixMode(GL_PROJECTION);
// clear any previous transformations
glLoadIdentity();
// set the perspective
gluPerspective(45, (float)windowWidth / (float)windowHeight, 0.1, 20);
In the myDisplay function, this is how I set the camera position:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// set the camera position
gluLookAt(cameraPosition[0], cameraPosition[1], cameraPosition[2],
lx, ly, cameraPosition[2] - 100,
0, 1, 0);
What should I do in the glutPassiveMotionFunc function?
Most probably you need to do something like this:
glRotatef(-yAngle, 0.0f, 1.0f, 0.0f);
glRotatef(-xAngle, 1.0f, 0.0f, 0.0f);
glTranslatef(cameraPosition[0], cameraPosition[1], cameraPosition[2]);
instead of gluLookAt(). Try it out; maybe it will solve your problem.
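For the glutPassiveMotionFunc part, one possible sketch (yAngle and windowWidth are assumed to be globals from the question's context, and the mapping from pixels to degrees is an arbitrary choice):

/* Passive-motion handler: the further the cursor is from the window
   center, the further the camera turns. */
void mouseMoved(int x, int y)
{
    float t = (2.0f * x / (float)windowWidth) - 1.0f;  /* -1 (left edge) .. +1 (right edge) */
    yAngle = t * 180.0f;                               /* up to half a turn either way */
    glutPostRedisplay();
}

/* registered once during setup: */
glutPassiveMotionFunc(mouseMoved);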
I render a triangle strip this way, and with basic pass-through shaders everything works fine:
EDIT:
I added texture coordinates and modified the shaders, but I keep getting the same result: my 3D objects are going black!
UPDATED CODE:
// Draw all the prisms
glBegin(GL_TRIANGLE_STRIP);
for (i = 0; i < num_elems; i++) {
    for (j = 0; j < num_vertices; j++) {
        glNormal3fv((GLfloat *)(a + j*2));
        glTexCoord2f(0.0f, 0.0f);
        glVertex3fv((GLfloat *)(a + j*2 + 1));
        glTexCoord2f(1.0f, 0.0f);
        glNormal3fv((GLfloat *)(b + j*2));
        glTexCoord2f(1.0f, 1.0f);
        glVertex3fv((GLfloat *)(b + j*2 + 1));
    }
    glNormal3fv((GLfloat *)(a));
    glTexCoord2f(0.0f, 1.0f);
    glVertex3fv((GLfloat *)(a + 1));
    glNormal3fv((GLfloat *)(b));
    glTexCoord2f(0.0f, 0.0f);
    glVertex3fv((GLfloat *)(b + 1));
    a += face_size;
    b += face_size;
}
glEnd();
And I am trying to attach a texture to my shaders, but I can't figure out how to pass the texture.
I create and add the texture to my program this way. The texture data is a verified array of the form unsigned char data[imageSize]:
GLuint textureID;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glActiveTexture(GL_TEXTURE0); // Texture unit 0
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0,GL_BGR, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
GLuint VertexShaderID = glCreateShader(GL_VERTEX_SHADER);
This is what I read in other posts with the same issue, and I added it to my code after compiling my shaders and generating my program without errors.
Tutorials tend to omit this information (how you tell your shader the name and texture unit of your bound texture).
GLuint t1Location = glGetUniformLocation(programID, "tex1");
glUniform1i(t1Location, 0);
And my shaders (UPDATED CODE):
#define GLSL(version, shader) "#version " #version "\n" #shader

const char* vert = GLSL
(
    110,
    varying vec4 position;
    varying vec3 normal;
    varying out vec4 texCoord;
    varying vec2 coord;

    void main()
    {
        position = gl_ModelViewMatrix * gl_Vertex;
        normal = normalize(gl_NormalMatrix * gl_Normal.xyz);
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        coord = vec2(gl_MultiTexCoord0);
    }
);

const char* frag = GLSL
(
    110,
    uniform sampler2D tex1;
    varying vec4 position;
    varying vec3 normal;
    varying vec2 coord;

    void main()
    {
        gl_FragColor = texture2D(tex1, coord);
    }
);
EDIT2:
I am setting up GL this way (maybe something is conflicting with my texture shader, but I don't think so!):
/* set up depth-buffering */
glEnable(GL_DEPTH_TEST);
glEnable(GL_POLYGON_SMOOTH);
glHint(GL_POLYGON_SMOOTH_HINT, GL_FASTEST);
/* set up lights */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPolygonMode(GL_FRONT_AND_BACK,GL_FILL);
glShadeModel(GL_SMOOTH);
GLfloat lightpos[] = { 3.0, 0.0, 1.0, 0.0 };
GLfloat lightcolor[] = { 0.5, 0.5, 0.5, 1.0 };
GLfloat ambcolor[] = { 0.5, 0.5, 0.5, 1.0 };
glLightModelfv(GL_LIGHT_MODEL_AMBIENT,ambcolor);
glEnable(GL_LIGHTING);
glColorMaterial(GL_FRONT_AND_BACK,GL_AMBIENT_AND_DIFFUSE);
glEnable (GL_COLOR_MATERIAL);
glLightfv (GL_LIGHT0,GL_POSITION,lightpos);
glLightfv (GL_LIGHT0,GL_AMBIENT,ambcolor);
glLightfv (GL_LIGHT0,GL_DIFFUSE,lightcolor);
glLightfv (GL_LIGHT0,GL_SPECULAR,lightcolor);
glLightf (GL_LIGHT0,GL_CONSTANT_ATTENUATION,0.2);
glLightf (GL_LIGHT0,GL_LINEAR_ATTENUATION,0.0);
glLightf (GL_LIGHT0,GL_QUADRATIC_ATTENUATION,1.0);
glEnable (GL_LIGHT0);
glEnable(GL_TEXTURE_2D);
Replacing gl_FragColor with a flat color works fine.
I know it may be related to the coord parameter, but I have tried all the suggestions I found and nothing works for me.
The internal texture format GL_BGR is not valid. GL_BGR is valid for the format of the source data, but the internal representation has to be GL_RGB.
See glTexImage2D.
Adapt your code like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
See the Khronos reference page GLAPI/glTexImage2D which says:
To define texture images, call glTexImage2D. The arguments describe the parameters of the texture image, such as height, width, width of the border, level-of-detail number (see glTexParameter), and number of color components provided. The last three arguments describe how the image is represented in memory.
format determines the composition of each element in data. It can assume one of these symbolic values:
GL_BGR:
Each element is an RGB triple. The GL converts it to floating point and assembles it into an RGBA element by attaching 1 for alpha. Each component is clamped to the range [0,1].
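Folding that into the texture-creation code from the question gives roughly this (a sketch; selecting the texture unit with glActiveTexture before glBindTexture is also the tidier order, although with only unit 0 in use the original order happens to work):

GLuint textureID;
glGenTextures(1, &textureID);
glActiveTexture(GL_TEXTURE0);               // select texture unit 0 first
glBindTexture(GL_TEXTURE_2D, textureID);    // ...then bind to that unit
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
// internal format GL_RGB; the source data layout stays GL_BGR
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_BGR, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);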
I'm trying to do a simple rotation in OpenGL of my primitive object in the projection plane. I want to rotate the object like a propeller, but I can't seem to get it right. When I run the code, my object looks like it shrinks into itself (I know it isn't really doing that, but it is rotating funny).
void rotateStuff()
{
    spin = spin - .5; // inc for spin
    if (spin < 360) {
        spin = spin + 360;
    }
    glPushMatrix();
    glTranslatef(150, 95, 0.0);
    glRotatef(spin, 1.0, 0.0, 0.0);
    glTranslatef(-150, -95, 0);
    displayStuff();
    glPopMatrix();
    drawButton();
    glutSwapBuffers();
}
Here's a snippet of my object:
glBegin(GL_POLYGON);
glVertex2i(50, 0);
glVertex2i(50, 75);
glVertex2i(150, 75);
glVertex2i(150, 0);
glEnd(); // end current shape
I think something is wrong with the setting of my origin, but what exactly? Am I translating to the wrong origin?
This is a rotation around the x-axis: glRotatef(spin, 1.0, 0.0, 0.0).
Presumably you want things in the x-y plane to stay in the x-y plane,
so you want rotation around the z-axis: glRotatef(spin, 0.0, 0.0, 1.0).
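Applied to the rotateStuff() from the question, only the rotation axis changes (a sketch, keeping everything else as posted; whether (150, 95) is really the object's center is a separate question):

glPushMatrix();
glTranslatef(150, 95, 0.0);        // move the pivot to the object's position
glRotatef(spin, 0.0, 0.0, 1.0);    // spin in the x-y plane, i.e. around z
glTranslatef(-150, -95, 0.0);      // move the pivot back
displayStuff();
glPopMatrix();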
I wrote a little program to display a 32-bit float texture on a simple quad. When displaying the quad, the texture color is always black. I experimented with a lot of things, but I couldn't make it work. I'm really at a loss as to what the problem is.
The code that creates the OpenGL texture goes like this:
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, textureData);
Using the debugger, there's no error in any of these calls. I also examined the textureData pointer, and got the expected results (in my simplified program, it is just a gradient texture).
This is the vertex shader code in GLSL:
#version 400
in vec4 vertexPosition;
out vec2 uv;
void main() {
    gl_Position = vertexPosition;
    uv.x = (vertexPosition.x + 1.0) / 2;
    uv.y = (vertexPosition.y + 1.0) / 2;
}
It's kind of a simple generation of the UV coordinates without taking them as vertex attributes. The corresponding vertex buffer object is really simple:
GLfloat vertices[4][4] = {
{ -1.0, 1.0, 0.0, 1.0 },
{ -1.0, -1.0, 0.0, 1.0 },
{ 1.0, 1.0, 0.0, 1.0 },
{ 1.0, -1.0, 0.0, 1.0 },
};
I've tested the solution, and it displays the quad covering the entire window as I wanted. Displaying the UV coordinates in the fragment shader reproduces the gradient I expected to get. Now here's the fragment shader:
#version 400
uniform sampler2D myTex;
in vec2 uv;
out vec4 fragColor;
void main() {
    fragColor = texture(myTex, uv);
    // fragColor += vec4(uv.x, uv.y, 0, 1);
}
The commented-out line displays the UV coordinates as color for debugging purposes. What am I doing wrong here? I just can't see why the texture() call returns 0 when the texture seems completely right and the uv coordinates are also proper. I link the full code here in case there's something else I'm doing wrong: gl-view.c
EDIT: This is how I set up the myTex sampler:
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
glUniform1i(glGetUniformLocation(shaderProgram, "myTex"), 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
EDIT: Cleared up the vertex shader code.
I've found the issue: I didn't set any MAG or MIN filter on the texture. Setting the MIN filter to GL_NEAREST solved the problem.
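For reference, the fix in code, with a note on why the texture sampled as black (a minimal sketch):

// The default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, which
// expects mipmap levels that were never uploaded; the texture is therefore
// incomplete and samples as black.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);  // optional; the default is GL_LINEAR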
I'm new to OpenGL and I'm having trouble understanding the concept of glOrtho. For instance, I have:
void display(void)
{
    /* clear all pixels */
    glClear(GL_COLOR_BUFFER_BIT);

    /* draw black polygon (rectangle) with corners at
     * (0.25, 0.25, 0.0) and (0.75, 0.75, 0.0) */
    glColor3f(0.0, 0.0, 0.0);
    glBegin(GL_POLYGON);
    glVertex3f(-.25, 0, 0.0);
    glVertex3f(.25, 0, 0.0);
    glVertex3f(.25, .25, 0.0);
    glVertex3f(-.25, .25, 0.0);
    glEnd();

    /* don't wait!
     * start processing buffered OpenGL routines */
    glFlush();
}
This produces a rectangle, and then this "morphs" the rectangle:
void init(void)
/* this function sets the initial state */
{
    /* select clearing (background) color to white */
    glClearColor(1.0, 1.0, 1.0, 0.0);

    /* initialize viewing values */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 1, 1, 0.0, -1.0, 1.0);
}
And this pretty much makes it a square and puts it up in the top-left corner. I'm not sure how it does that. Are the rectangle's points transformed?
EDIT:
Figured it out. This was very helpful: http://elvenware.sourceforge.net/OpenGLNotes.html#Ortho
glOrtho is used to define an orthographic projection volume:
The signature is glOrtho(GLdouble left, GLdouble right, GLdouble bottom, GLdouble top, GLdouble near, GLdouble far);
left and right specify the x-coordinate clipping planes, bottom and top specify the y-coordinate clipping planes, and near and far specify the distances to the z-coordinate clipping planes. Together these coordinates define a box-shaped viewing volume.
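For the call from the question, that works out as follows (a commented restatement of the same glOrtho call):

/* x runs from 0 (left edge) to 1 (right edge);
   y runs from 1 at the bottom to 0 at the top, so y increases downward
   and (0, 0) ends up in the top-left corner of the window. */
glOrtho(0.0, 1.0, 1.0, 0.0, -1.0, 1.0);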
The projection volume you have defined is not centered around the 3D point (0, 0, 0) but around (0.5, 0.5, 0). You should have defined your glOrtho this way instead: glOrtho(-0.5, 0.5, -0.5, 0.5, -1.0, 1.0); since your polygon is centered around the 3D point (0, 0, 0). (You can also change the coordinates of your polygon to match the center of your projection volume.)
Your glOrtho call sets up the projection such that the top-left of the view is (0, 0) and the bottom-right is (1, 1), with a valid z range of (-1, 1).
Now, you drew a rectangle spanning (-0.25, 0) to (0.25, 0.25).
The glVertex calls do not match the comment just above them. Either change the vertices to the values you stated, or change the glOrtho call:
glOrtho(-0.5, 0.5, 0.5, -0.5, -1.0, 1.0 );