How can I draw a triangle with a VBO?

I am trying to get my VBO to draw, but I can't see anything. I am attempting to draw a single triangle (seems to me that a single triangle is a good start in the right direction). Everything compiles and runs without breaking.
void initGraphics(int width, int height) {
    glViewport(0, 0, width, height);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glDisable(GL_DEPTH_TEST);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, width, height, 0.0);
    glMatrixMode(GL_MODELVIEW);
}
GLuint vboId;

void initVbo(void) {
    GLsizei dataSize;
    GLfloat* vertices;
    int vCount = 3;
    dataSize = sizeof(GLfloat) * 3 * vCount;
    vertices = (GLfloat*)malloc(dataSize);
    /* triangle: (0,0,0), (100,0,0), (100,100,0) */
    vertices[0] = 0;   vertices[1] = 0;   vertices[2] = 0;
    vertices[3] = 100; vertices[4] = 0;   vertices[5] = 0;
    vertices[6] = 100; vertices[7] = 100; vertices[8] = 0;
    glewInit();
    glGenBuffersARB(1, &vboId);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboId);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, dataSize, vertices, GL_STATIC_DRAW_ARB);
    free(vertices);
    //glDeleteBuffersARB(1, &vboId); // edit #1
}
unsigned int indices[] = { 0, 1, 2 }; // edit #3

void drawVbo(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboId);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, 0);
    // edit #1, #2
    glDrawElements(GL_TRIANGLES, 3 /*1*/, /*GL_UNSIGNED_BYTE*/ GL_UNSIGNED_INT, indices);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
}
These are the functions that are getting called. I am running on Windows, and I am avoiding posting a bunch of Windows-related code...
EDIT #1
Modified the code to reflect genpfault's suggestions.
EDIT #2
Modified the code to reflect Nicol Bolas' suggestions.
COMMENT #1
The following code works (just to prove the projections are set up correctly):
glBegin(GL_TRIANGLES);
    glVertex3f(0.0, 0.0, 0.0);
    glVertex3f(100.0, 0.0, 0.0);
    glVertex3f(100.0, 100.0, 0.0);
glEnd();
EDIT #3
Modified the code to reflect Nicol Bolas' suggestions.
UPDATE #1
The modified code now works, although I am curious how to get glDrawArrays to work properly... my implementation looked like:
glDrawArrays(GL_TRIANGLES, 0, 0);
This doesn't seem right to me, but the spec says:
mode: Specifies what kind of primitives to render. Symbolic constants GL_POINTS, GL_LINE_STRIP, GL_LINE_LOOP, GL_LINES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_TRIANGLES, GL_QUAD_STRIP, GL_QUADS, and GL_POLYGON are accepted.
first: Specifies the starting index in the enabled arrays.
count: Specifies the number of indices to be rendered.
Based on what Nicol Bolas was saying, since I shouldn't need indices, passing 0, 0 as arguments makes sense. Right?
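(In fact, count is the number of vertices to pull from the enabled arrays, not an index count, so 0 asks for zero vertices and nothing is drawn. A non-indexed draw of this triangle would be:

glDrawArrays(GL_TRIANGLES, 0, 3); // first = 0, count = 3 vertices
)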

glGenBuffersARB(1, &vboId);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboId);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, dataSize, vertices, GL_STATIC_DRAW_ARB);
free(vertices);
glDeleteBuffersARB(1, &vboId); // wat
After the glDeleteBuffersARB() call the pointed-to VBO ID(s) are invalidated and can't be used in glBindBufferARB().
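If you do want to delete the buffer, defer it until you are done drawing with it, e.g. in a teardown function (cleanupVbo is just an illustrative name):

void cleanupVbo(void) {
    glDeleteBuffersARB(1, &vboId); // safe here: nothing draws from vboId afterwards
}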
Also:
glDrawElements(GL_TRIANGLES, 1, GL_UNSIGNED_BYTE, &indices);
You've defined indices as an unsigned int array, so you should use GL_UNSIGNED_INT instead of GL_UNSIGNED_BYTE.

glDrawElements(GL_TRIANGLES, 1, GL_UNSIGNED_INT, &indices);
Triangles, as you may be aware, are made of three vertices. You are sending one (the second parameter). You can't draw a triangle from one position.
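A corrected call that consumes all three indices would look like:

glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, indices); // 3 indices = 1 triangle

(Passing indices rather than &indices also matches the const void* parameter; for an array, both expressions happen to evaluate to the same address.)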

Related

Multiple sequentual glPushAttrib/glPopAttrib incorrect behaviour

#include <GL/glut.h>

void reshape(int w, int h) {
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(-w/2, w - w/2, -h/2, h - h/2);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1, 1, 1);
    glPushAttrib(GL_ALL_ATTRIB_BITS);
    glColor3f(1, 0, 0);
    glPopAttrib();
    glRecti(0, 0, 10, 10); // draws white rect
    // Commenting the line makes next rect white
    // Uncommenting the line makes next rect red
    glTranslatef(0, 0, 0);
    glPushAttrib(GL_ALL_ATTRIB_BITS);
    glColor3f(1, 0, 0);
    glPopAttrib();
    glRecti(20, 20, 30, 30); // draws white or red rect
    glutSwapBuffers();
}

int main(int argc, char * argv[]) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowSize(800, 600);
    glutCreateWindow("OpenGL lesson 1");
    glutReshapeFunc(reshape);
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
The code above is a fully compilable program that renders two rectangles. The program produces different results depending on whether the line "glTranslatef(0, 0, 0);" is commented out or not. Is that a bug, or misuse of OpenGL?
Is that a bug, or misuse of OpenGL?
That is just a bug. The spec clearly states that glColor sets the current RGBA color value, which becomes a vertex's color the next time a vertex is formed, i.e. at the next glVertex call inside a glBegin/glEnd block. glRect is specified to be equivalent to glBegin(); glVertex() [4x]; glEnd().
The current RGBA color value is part of the GL_CURRENT_BIT attribute group, and is of course included in GL_ALL_ATTRIB_BITS. glTranslate is specified to affect only the top element of the currently selected matrix stack. The correct output for this code is two white rectangles, no matter whether the glTranslate is there or not.
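For reference, the spec defines the glRecti(0, 0, 10, 10) call above as equivalent to:

glBegin(GL_POLYGON);
glVertex2i(0, 0);
glVertex2i(10, 0);
glVertex2i(10, 10);
glVertex2i(0, 10);
glEnd();

which is why the current color at the time of the glRecti call is the one that applies to all four vertices.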
However, all this stuff is horribly outdated, and deprecated since 2008.

GLSL Shader going black when I try to sample a texture

I render a triangle strip this way, and with basic pass-through shaders everything works fine:
EDIT:
I added TexCoords and modified the shaders, but I keep getting the same result: my 3D objects are going black!
UPDATED CODE:
// Draw all the prisms
glBegin(GL_TRIANGLE_STRIP);
for (i = 0; i < num_elems; i++) {
    for (j = 0; j < num_vertices; j++) {
        glNormal3fv((GLfloat *)(a + j*2));
        glTexCoord2f(0.0f, 0.0f);
        glVertex3fv((GLfloat *)(a + j*2 + 1));
        glTexCoord2f(1.0f, 0.0f);
        glNormal3fv((GLfloat *)(b + j*2));
        glTexCoord2f(1.0f, 1.0f);
        glVertex3fv((GLfloat *)(b + j*2 + 1));
    }
    glNormal3fv((GLfloat *)(a));
    glTexCoord2f(0.0f, 1.0f);
    glVertex3fv((GLfloat *)(a + 1));
    glNormal3fv((GLfloat *)(b));
    glTexCoord2f(0.0f, 0.0f);
    glVertex3fv((GLfloat *)(b + 1));
    a += face_size;
    b += face_size;
}
glEnd();
And I am trying to attach a texture to my shaders, but I can't figure out how to pass the texture.
I create and add the texture to my program this way (the texture data is a verified array of the form unsigned char data[imageSize]):
GLuint textureID;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glActiveTexture(GL_TEXTURE0); // Texture unit 0
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_BGR, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
GLuint VertexShaderID = glCreateShader(GL_VERTEX_SHADER);
This is what I read in other posts with the same issue, and I added it to my code after compiling my shaders and generating my program without errors.
Tutorials tend to gloss over this information (how you tell your shader the name and location of your bound texture).
GLuint t1Location = glGetUniformLocation(programID, "tex1");
glUniform1i(t1Location, 0);
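One detail that is easy to miss (assuming the usual GL 2.x flow, since the surrounding code isn't shown): glUniform1i only affects the currently bound program, so the sequence should be roughly:

glUseProgram(programID);                          // bind the program first
GLint t1Location = glGetUniformLocation(programID, "tex1");
glUniform1i(t1Location, 0);                       // sampler "tex1" reads texture unit 0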
And my shaders UPDATED CODE:
#define GLSL(version, shader) "#version " #version "\n" #shader

const char* vert = GLSL
(
    110,
    varying vec4 position;
    varying vec3 normal;
    varying out vec4 texCoord;
    varying vec2 coord;
    void main()
    {
        position = gl_ModelViewMatrix * gl_Vertex;
        normal = normalize( gl_NormalMatrix * gl_Normal.xyz );
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        coord = vec2(gl_MultiTexCoord0);
    }
);

const char* frag = GLSL
(
    110,
    uniform sampler2D tex1;
    varying vec4 position;
    varying vec3 normal;
    varying vec2 coord;
    void main()
    {
        gl_FragColor = texture2D(tex1, coord);
    }
);
EDIT2:
I am setting up GL this way (maybe something is conflicting with my texture shader, but I don't think so!):
/* set up depth-buffering */
glEnable(GL_DEPTH_TEST);
glEnable(GL_POLYGON_SMOOTH);
glHint(GL_POLYGON_SMOOTH_HINT, GL_FASTEST);

/* set up lights */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glShadeModel(GL_SMOOTH);

GLfloat lightpos[]   = { 3.0, 0.0, 1.0, 0.0 };
GLfloat lightcolor[] = { 0.5, 0.5, 0.5, 1.0 };
GLfloat ambcolor[]   = { 0.5, 0.5, 0.5, 1.0 };

glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambcolor);
glEnable(GL_LIGHTING);
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
glEnable(GL_COLOR_MATERIAL);
glLightfv(GL_LIGHT0, GL_POSITION, lightpos);
glLightfv(GL_LIGHT0, GL_AMBIENT, ambcolor);
glLightfv(GL_LIGHT0, GL_DIFFUSE, lightcolor);
glLightfv(GL_LIGHT0, GL_SPECULAR, lightcolor);
glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, 0.2);
glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, 0.0);
glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 1.0);
glEnable(GL_LIGHT0);
glEnable(GL_TEXTURE_2D);
Replacing gl_FragColor with a flat color works fine.
I suspect it is related to the coord parameter, but I have tried all the stuff I found and nothing works for me.
The internal texture format GL_BGR is not valid. GL_BGR is valid for the format of the source pixel data, but the internal representation has to be GL_RGB.
See glTexImage2D.
Adapt your code like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
See the Khronos reference page GLAPI/glTexImage2D which says:
To define texture images, call glTexImage2D. The arguments describe the parameters of the texture image, such as height, width, width of the border, level-of-detail number (see glTexParameter), and number of color components provided. The last three arguments describe how the image is represented in memory.
format determines the composition of each element in data. It can assume one of these symbolic values:
GL_BGR:
Each element is an RGB triple. The GL converts it to floating point and assembles it into an RGBA element by attaching 1 for alpha. Each component is clamped to the range [0,1].

OpenGL Lighting Failing when Scaling

I have to read a 3D object from an ASE file. This object turns out to be too big for the world I have to create; therefore, I must scale it down.
With its original size, it is properly lit.
However, once I scale it down, the lighting becomes oversaturated.
The world is centered around (0, 0, 0); it is 100 meters long (y axis) and 50 meters wide (x axis), and my upVector is (0, 0, 1). There are two lights: light0 at (20, 35, 750) and light1 at (-20, -35, 750).
Relevant parts of the code:
void init(void) {
    glClearColor(0.827, 0.925, 0.949, 0.0);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_COLOR_MATERIAL);
    glColorMaterial(GL_FRONT, GL_DIFFUSE);
    glEnable(GL_LIGHT0);
    glEnable(GL_LIGHT1);
    glEnable(GL_LIGHTING);
    glShadeModel(GL_SMOOTH);
    GLfloat difusa[] = { 1.0f, 1.0f, 1.0f, 1.0f }; // white light
    glLightfv(GL_LIGHT0, GL_DIFFUSE, difusa);
    glLightfv(GL_LIGHT1, GL_DIFFUSE, difusa);
    loadObjectFromFile("objeto.ASE");
}
void display(void) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(eyeX, eyeY, eyeZ, atX, atY, atZ, 0.0, 0.0, 1.0);
    GLfloat posicion0[] = { 20.0f, 35.0f, 750.0f, 1.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, posicion0);
    GLfloat posicion1[] = { -20.0f, -35.0f, 750.0f, 1.0f };
    glLightfv(GL_LIGHT1, GL_POSITION, posicion1);
    glColor3f(0.749, 0.918, 0.278);
    glPushMatrix();
    glTranslatef(0.0, 0.0, 1.5);
    // Here comes the problem
    glScalef(0.08, 0.08, 0.08);
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < numFaces; i++) {
        glNormal3d(faces3D[i].n.nx, faces3D[i].n.ny, faces3D[i].n.nz);
        glVertex3d(vertex[faces3D[i].s.A].x, vertex[faces3D[i].s.A].y, vertex[faces3D[i].s.A].z);
        glVertex3d(vertex[faces3D[i].s.B].x, vertex[faces3D[i].s.B].y, vertex[faces3D[i].s.B].z);
        glVertex3d(vertex[faces3D[i].s.C].x, vertex[faces3D[i].s.C].y, vertex[faces3D[i].s.C].z);
    }
    glEnd();
    glPopMatrix();
    glutSwapBuffers();
}
Why does lighting fail when the object is scaled down?
The problem you're running into is that scaling the modelview matrix also influences the "normal matrix" that normals are transformed with. The "normal matrix" is actually the transpose of the inverse of the modelview matrix. So by scaling down the modelview matrix, you're scaling up the normal matrix (because of the inversion step used to obtain it).
Because of that, the transformed normals must be rescaled, or normalized, if the scale of the modelview matrix is not unitary. In fixed-function OpenGL there are two methods to do this: normal normalization (sounds funny, I know) and normal rescaling. You can enable either with:
glEnable(GL_NORMALIZE);
// or
glEnable(GL_RESCALE_NORMAL);
In a shader you'd simply normalize the transformed normal:
#version ...
uniform mat3 mat_normal;
in vec3 vertex_normal;
void main()
{
    ...
    vec3 view_normal = normalize( mat_normal * vertex_normal );
    ...
}
Depending on the settings of GL_NORMALIZE and GL_RESCALE_NORMAL, your transformed normals can be re-normalized by the OpenGL pipeline.
Start with glEnable(GL_NORMALIZE) and see if that solves your problem.
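In the init() from the question this is a single extra line, e.g.:

glEnable(GL_LIGHTING);
glEnable(GL_NORMALIZE); // re-normalize normals after the glScalef in display()

GL_RESCALE_NORMAL is slightly cheaper but only correct for uniform scales; since the glScalef(0.08, 0.08, 0.08) here is uniform, either would work.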

Draw a cube and rotate it: a part of the cube disappears

In this code I try to draw a cube. I try to draw all the faces' vertices anticlockwise.
The problem is that if I don't rotate the cube, only the red face is drawn; if instead I rotate it by 5 degrees, I just see a part of the cube.
#import <OpenGL/OpenGL.h>
#import <GLUT/GLUT.h>

int width = 500, height = 500, depth = 500;

void init()
{
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(200, 200, -200, 200, 200, 0, 0, 1, 0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, width, 0, height);
    gluPerspective(90, 1, -100, 100);
    glViewport(0, 0, width, height);
}
void drawCube()
{
    int vertices[8][3] = { {100,100,0},   {300,100,0},   {300,300,0},   {100,300,0},
                           {100,100,300}, {300,100,300}, {300,300,300}, {100,300,300} };
    glBegin(GL_QUADS);
    glColor4f(1, 0, 0, 0);
    glVertex3iv(vertices[0]);
    glVertex3iv(vertices[1]);
    glVertex3iv(vertices[2]);
    glVertex3iv(vertices[3]);
    glVertex3iv(vertices[4]);
    glVertex3iv(vertices[5]);
    glVertex3iv(vertices[6]);
    glVertex3iv(vertices[7]);
    glColor4f(0, 1, 0, 0);
    glVertex3iv(vertices[1]);
    glVertex3iv(vertices[5]);
    glVertex3iv(vertices[6]);
    glVertex3iv(vertices[4]);
    glVertex3iv(vertices[0]);
    glVertex3iv(vertices[4]);
    glVertex3iv(vertices[7]);
    glVertex3iv(vertices[3]);
    glColor4f(0, 0, 1, 0);
    glVertex3iv(vertices[3]);
    glVertex3iv(vertices[2]);
    glVertex3iv(vertices[6]);
    glVertex3iv(vertices[7]);
    glVertex3iv(vertices[0]);
    glVertex3iv(vertices[1]);
    glVertex3iv(vertices[5]);
    glVertex3iv(vertices[4]);
    glEnd();
}
void display()
{
    glClearColor(0.0, 0.0, 0.0, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(200, 200, 150);
    glRotatef(5, 0, 1, 0);
    glTranslatef(-200, -200, -150);
    drawCube();
    glutSwapBuffers();
}

void idle(void)
{
}

int main(int argc, char * argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowPosition(100, 100);
    glutInitWindowSize(width, height);
    glutCreateWindow("Test");
    glutDisplayFunc(display);
    glutIdleFunc(idle);
    init();
    glutMainLoop();
    return 0;
}
This is what I see:
But I should see a rotated cube, so I should see part of the other face on the right. My doubt is whether I'm going wrong with drawing the vertices in anticlockwise order, or whether it's something else.
PS: the code is outdated because at my university I can't study the newest version of OpenGL, and I must use GLUT.
A couple of problems:
Your projection matrix setup doesn't make sense.
Firstly, you should decide if you want an orthographic, or a perspective projection.
If you want orthographic, use gluOrtho2D. If you want a perspective projection, use gluPerspective. Using both will generate a bizarre transformation that's certainly not what you want.
gluPerspective can't have a negative near plane. The near plane should be greater than zero, perhaps something small like 1, with a far plane defining how far away from the camera you want the back clip plane to be. Since you seem to be using units in the hundreds, I might recommend a back plane of 1000 or so.
You're calling gluLookAt, but erasing the view matrix by calling glLoadIdentity in display(). If you want a view matrix, don't erase it after you program it.
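A sketch of the setup following those suggestions, reusing the question's numbers (near = 1 and far = 1000 are assumptions based on the scene size):

void init()
{
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(90, 1, 1, 1000); // perspective only; near > 0
    glViewport(0, 0, width, height);
}

void display()
{
    glClearColor(0.0, 0.0, 0.0, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(200, 200, -200, 200, 200, 0, 0, 1, 0); // rebuild the view matrix each frame
    glTranslatef(200, 200, 150);
    glRotatef(5, 0, 1, 0);
    glTranslatef(-200, -200, -150);
    drawCube();
    glutSwapBuffers();
}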

OpenGL Drawing "axis"

For a piece of coursework we have to build a working model of the solar system. I have mine implemented with planets (spheres), but we also have to draw the axis of the planet as a line above and below.
I am finding that using GL_LINES doesn't seem to be working, presumably because of the scale of this project (the radius of the planets is 139000000+).
Simplified example:
void drawAxis(int n)
{
    /* Draws the axis for body "n" */
    glColor3f(1.0, 0.0, 0.0);
    glBegin(GL_LINES);
    glVertex3f(0, bodies[n].radius, 0);
    glVertex3f(0, bodies[n].radius * 2, 0);
    glEnd();
    glColor3f(0.0, 1.0, 0.0);
    glBegin(GL_LINES);
    glVertex3f(0, -bodies[n].radius, 0);
    glVertex3f(0, -bodies[n].radius * 2, 0);
    glEnd();
}
void drawBody(int n)
{
    if (n == 0) {
        /* Draws body "n" */
        //glRotatef(bodies[n].axis_tilt, 1.0, 0.0, 0.0);
        // Scale and position
        glTranslatef(0.0, 0.0, 0.0);
        glScalef(SCALE, SCALE, SCALE); // why already scaled?
        // Axis
        drawAxis(n);
        // r g b - colour (red, green, blue)
        glColor3f(bodies[n].r, bodies[n].g, bodies[n].b);
        // radius - size of body (km)
        glutSolidSphere(bodies[n].radius / SCALE, 50, 50);
    }
}
Draws:
Am I missing something critical here?
What is the best "work around" for drawing axis on this sphere?
glScalef(SCALE, SCALE, SCALE); // why already scaled?
...
glutSolidSphere(bodies[n].radius / SCALE, 50, 50);
This makes no sense. Why would you apply a uniform scale, only to then divide your sphere's scale by it, thus undoing the scale? Wouldn't it make more sense to have no scale at all and just use bodies[n].radius?
This is the source of your problem. See, you undo your unnecessary scale when you draw your sphere, but you don't undo it when you draw your axes. If you take out the unnecessary scale, there's a better chance that it will work.
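A sketch of drawBody with the scale taken out (same names as in the question):

void drawBody(int n)
{
    if (n == 0) {
        glTranslatef(0.0, 0.0, 0.0);
        // no glScalef: the axis and the sphere now use the same units
        drawAxis(n);
        glColor3f(bodies[n].r, bodies[n].g, bodies[n].b);
        glutSolidSphere(bodies[n].radius, 50, 50);
    }
}

If the raw radii are then too large for your projection's depth range, scale the source data once at load time instead of through the matrix stack.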
