I have a VBO and an IBO in OpenGL, but I am unable to draw them properly. Could you please let me know what I might have forgotten in the frame display function?
- struct Point3D is a struct with 3 floats inside (x, y, z).
- nbVertex is the number of vertices in the glVertex array.
- nbVBOInd is the number of indices in the VBOInd array.
glGenBuffers(1, &VertexVBOID);
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
glBufferData(GL_ARRAY_BUFFER, sizeof(struct Point3D)*nbVertex, glVertex, GL_STATIC_DRAW);
glGenBuffers(1, &IndexVBOID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(int)*nbVBOInd, VBOInd, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(glVertex), BUFFER_OFFSET(0)); //The starting point of the VBO, for the vertices
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0)); //The starting point of the IBO
Thanks!
I see the same problem as rodrigo: you have a data type mismatch, as you can see here:
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(int)*nbVBOInd, VBOInd, GL_STATIC_DRAW);
sizeof(int) - uses the int type
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
GL_UNSIGNED_SHORT - uses the unsigned short type
According to the OpenGL specification, only unsigned integer types are allowed for glDrawElements indices. To fix this you need to:
change the declaration of VBOInd to an unsigned type, e.g.:
unsigned int* VBOInd = new unsigned int[nbVBOInd];
replace the element buffer's glBufferData call with
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int)*nbVBOInd, VBOInd, GL_STATIC_DRAW);
replace the last call (glDrawElements) with
glDrawElements(GL_TRIANGLES, nbVBOInd, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
Anyway, I believe the real problem is hidden in the pointer setup; change the glVertexPointer call to:
glVertexPointer(3, GL_FLOAT, 0, BUFFER_OFFSET(0));
If that doesn't work, please show us how glVertex and VBOInd are declared and filled with data. Maybe you're using std::vector? Then you need to pass the container's data like this:
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int)*nbVBOInd, &VBOInd[0], GL_STATIC_DRAW);
If something's unclear, just ask in the comments.
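Putting those changes together, the upload and draw path might look like the sketch below. It assumes glVertex really is an array of struct Point3D and VBOInd an array of unsigned int, as discussed above; it is not a drop-in replacement for code we haven't seen.
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
glBufferData(GL_ARRAY_BUFFER, sizeof(struct Point3D) * nbVertex, glVertex, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int) * nbVBOInd, VBOInd, GL_STATIC_DRAW);
// In the display function: stride 0 means "tightly packed",
// and the index count/type now match what was uploaded.
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, BUFFER_OFFSET(0));
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);
glDrawElements(GL_TRIANGLES, nbVBOInd, GL_UNSIGNED_INT, BUFFER_OFFSET(0));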
Try changing the last line to:
glDrawElements(GL_TRIANGLES, nbVBOInd, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
Unless the data in IndexVBOID really are shorts; but then the sizeof(int) above would be wrong.
I'm adding transformations to my C OpenGL program. I'm using CGLM as my maths library. The program builds with no warnings or errors. Still, when I compile and run it, I get a distorted version of my intended image (it was not distorted before adding transformations).
The following is my program's main loop:
// Initialize variables for framerate counting
double lastTime = glfwGetTime();
int frameCount = 0;
// Program loop
while (!glfwWindowShouldClose(window)) {
// Calculate framerate
double thisTime = glfwGetTime();
frameCount++;
// If a second has passed.
if (thisTime - lastTime >= 1.0) {
printf("%i FPS\n", frameCount);
frameCount = 0;
lastTime = thisTime;
}
processInput(window);
// Clear the window
glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// Bind textures on texture units
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
// Create transformations
mat4 transform = {{1.0f}};
glm_mat4_identity(transform);
glm_translate(transform, (vec3){0.5f, -0.5f, 0.0f});
glm_rotate(transform, (float)glfwGetTime(), (vec3){0.0f, 0.0f, 1.0f});
// Get matrix's uniform location and set matrix
shaderUse(myShaderPtr);
GLint transformLoc = glGetUniformLocation(myShaderPtr->shaderID, "transform");
// mat4 transform;
glUniformMatrix4fv(transformLoc, 1, GL_FALSE, (float*)transform);
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glfwSwapBuffers(window); // Swap the front and back buffers
glfwPollEvents(); // Check for events (mouse movement, mouse click, keyboard press, keyboard release etc.)
}
The program is up on GitHub here if you'd like to check out the full code.
The output of the program is this (the square also rotates):
However, the intended output of the program is the penguin at 20% opacity on top and the box at 100% opacity underneath the penguin.
In the vertex shader, the location of the texture coordinate is 1:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTexCoord;
However, when you specify the vertices, location 1 is used for the color attribute and location 2 for the texture coordinates:
// Colour attribute
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);
// Texture coord attribute
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(6 * sizeof(float)));
glEnableVertexAttribArray(2);
Remove the color attribute and use location 1 for the texture coordinates, e.g.:
// Texture coord attribute
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(6 * sizeof(float)));
glEnableVertexAttribArray(1);
Looking at your source code, you're passing in three attributes (position, color and texture coordinates), but your vertex shader only takes two.
Removing the color attribute and passing the texture coordinates as attribute #1 instead of #2 should make it look as intended.
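For reference, a minimal sketch of an attribute setup that matches the two-attribute shader, assuming the interleaved layout from the question (3 position floats, 3 now-unused color floats, 2 texture-coordinate floats per vertex):
// Position attribute (location 0)
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
// Texture coord attribute (location 1), skipping the unused color floats
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(6 * sizeof(float)));
glEnableVertexAttribArray(1);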
I'm trying to program a simple industrial robot arm in C + OpenGL 3.3. I have 3 simple models set up in a single set of buffers (vertex, index and color buffer), and I can rotate/translate the whole thing around. However, I am unable to translate/rotate a single section.
The vertex data is stored in a header file. The red segment is called "base", blue is "seg1" and green is "seg2". I loaded them into the buffers like so:
size = sizeof(base_verticies) + sizeof(seg1_verticies) + sizeof(seg2_verticies);
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, size, 0, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(base_verticies), base_verticies);
glBufferSubData(GL_ARRAY_BUFFER, sizeof(base_verticies), sizeof(seg1_verticies), seg1_verticies);
glBufferSubData(GL_ARRAY_BUFFER, sizeof(base_verticies)+sizeof(seg1_verticies), sizeof(seg2_verticies), seg2_verticies);
size = sizeof(base_indices) + sizeof(seg1_indices) + sizeof(seg2_indices);
glGenBuffers(1, &IBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, size, 0, GL_STATIC_DRAW);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, sizeof(base_indices), base_indices);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, sizeof(base_indices), sizeof(seg1_indices), seg1_indices);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, sizeof(base_indices)+sizeof(seg1_indices), sizeof(seg2_indices), seg2_indices);
size = sizeof(base_colors) + sizeof(seg1_colors) + sizeof(seg2_colors);
glGenBuffers(1, &CBO);
glBindBuffer(GL_ARRAY_BUFFER, CBO);
glBufferData(GL_ARRAY_BUFFER, size, 0, GL_STATIC_DRAW);
glBufferSubData( ... ); ...; // same for colors
I can now draw those sections separately like so:
//base (red)
glDrawElements(GL_TRIANGLES, sizeof(base_indices)/sizeof(GLushort), GL_UNSIGNED_SHORT, 0);
//seg1 (blue)
glDrawElements(GL_TRIANGLES, sizeof(base_indices)/sizeof(GLushort), GL_UNSIGNED_SHORT, (void*)sizeof(base_indices));
//seg2 (green)
glDrawElements(GL_TRIANGLES, sizeof(base_indices)/sizeof(GLushort), GL_UNSIGNED_SHORT, (void*)(sizeof(base_indices)+sizeof(seg1_indices)));
But my problem is that I can only rotate and translate the whole thing, not single sections. I rotate the whole model in the onIdle() function like so:
float angle = 1;
float RotationMatrixAnim[16];
SetRotationY(angle, RotationMatrixAnim);
/* Apply model rotation */
MultiplyMatrix(RotationMatrixAnim, InitialTransform, ModelMatrix);
MultiplyMatrix(TranslateDown, ModelMatrix, ModelMatrix);
Is there a way to modify the position/rotation/translation of single segments? The only tutorials I can find are for legacy OpenGL with glPushMatrix() and glPopMatrix(), which are deprecated.
Is it a good idea to store those segments in the same buffer?
Thanks in advance for your help and time.
Thanks to @Rabbid76's suggestion I was able to solve my issue.
I made 3 model matrices, one for each section:
float ModelMatrix[3][16]; /* Model matrix */
I rotate/translate the section I want like so:
float RotationMatrixSeg2[16];
SetRotationY(angle, RotationMatrixSeg2);
MultiplyMatrix(RotationMatrixSeg2, InitialTransform, ModelMatrix[2]);
MultiplyMatrix(TranslateDown, ModelMatrix[2], ModelMatrix[2]);
I set the corresponding ModelMatrix before drawing each section:
/* Draw each section */
glUniformMatrix4fv(RotationUniform, 1, GL_TRUE, ModelMatrix[0]);
glDrawElements(GL_TRIANGLES, sizeof(base_indices)/sizeof(GLushort), GL_UNSIGNED_SHORT, 0);
glUniformMatrix4fv(RotationUniform, 1, GL_TRUE, ModelMatrix[1]);
glDrawElements(GL_TRIANGLES, sizeof(base_indices)/sizeof(GLushort), GL_UNSIGNED_SHORT, (void*)sizeof(base_indices));
glUniformMatrix4fv(RotationUniform, 1, GL_TRUE, ModelMatrix[2]);
glDrawElements(GL_TRIANGLES, sizeof(base_indices)/sizeof(GLushort), GL_UNSIGNED_SHORT, (void*)(sizeof(base_indices)+sizeof(seg1_indices)));
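If the segments should also follow one another like a real robot arm, the three matrices can be chained instead of kept independent. A minimal sketch, assuming MultiplyMatrix(A, B, Out) writes the product into Out as used above (the exact multiplication order depends on your matrix conventions), and with hypothetical LocalBase/LocalSeg1/LocalSeg2 matrices holding each segment's own rotation/translation:
float LocalBase[16], LocalSeg1[16], LocalSeg2[16]; // hypothetical per-segment transforms
// ... fill them with SetRotationY(), translations, etc. ...
// Each section's model matrix is its parent's matrix combined with its own local
// transform, so rotating seg1 automatically carries seg2 along with it.
MultiplyMatrix(InitialTransform, LocalBase, ModelMatrix[0]);
MultiplyMatrix(ModelMatrix[0], LocalSeg1, ModelMatrix[1]);
MultiplyMatrix(ModelMatrix[1], LocalSeg2, ModelMatrix[2]);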
I'm still trying to grasp the quirks of OpenGL, and it doesn't help trying to learn on OSX, which uses OpenGL 4.2 Core; that could be relevant.
All I'm trying to do is copy an array of vertex data parsed from an object file (which shows up fine if I iterate through it) into a bound VBO. However, when I use glGetBufferSubData to read back the contents of the currently bound VBO, it shows the data fine for a few dozen lines and then starts displaying what I can only describe as corrupted data, like so...
Vertex: (-0.437500, 0.328125, 0.765625)
Vertex: (0.500000, 0.390625, 0.687500)
Vertex: (-0.500000, 0.390625, 0.687500)
Vertex: (0.546875, 0.437500, 0.578125)
Vertex: (-0.546875, 0.437500, 0.578125)
-------------------vvvvv corrupts here
Vertex: (0.625000, 19188110749038498652920741888.000000, 12125095608195490978463744.000000)
Vertex: (68291490374736750313472.000000, 70795556751816250086766057357312.000000, 0.000000)
Vertex: (4360831549674110915425861632.000000, 4544202249129758853702852018176.000000, 50850084445497733730842814971904.000000)
This happens even if I try initializing a buffer of the same or similar size with all zeroes; it gets weird after some arbitrary amount. Here's a snippet showing what I'm doing, but I can't imagine what's going on.
// Parse an OBJ file
compositeWavefrontObj com;
parseObjFileVerticies("suzanne.obj", &com);
// Bind a vertex array object
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// copy vertex buffer data
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, com.attrib->num_vertices*sizeof(GLfloat), 0, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, com.attrib->num_vertices*sizeof(GLfloat), com.attrib->vertices);
// Read back data from bound buffer object
GLfloat *vertBuffer = malloc(sizeof(GLfloat) * com.attrib->num_vertices);
glGetBufferSubData(GL_ARRAY_BUFFER, 0, com.attrib->num_vertices, vertBuffer);
for (int i = 0; i < com.attrib->num_vertices; ++i) {
printf("Vertex: (%f,", *(vertBuffer+i));
++i;
printf(" %f,", *(vertBuffer+i));
++i;
printf(" %f)\n", *(vertBuffer+i));
}
printf("\n\n\n");
I see two problems:
glBufferData(GL_ARRAY_BUFFER, com.attrib->num_vertices*sizeof(GLfloat), 0, GL_STATIC_DRAW); needs a size in bytes for the whole data.
Because you have X,Y,Z for each vertex, it should be
glBufferData(GL_ARRAY_BUFFER, com.attrib->num_vertices*3*sizeof(GLfloat), 0, GL_STATIC_DRAW);
note the '3'.
The same applies when you allocate room for the read-back and read the buffer:
GLfloat *vertBuffer = malloc(sizeof(GLfloat) * com.attrib->num_vertices * 3);
glGetBufferSubData(GL_ARRAY_BUFFER, 0, com.attrib->num_vertices * 3 * sizeof(GLfloat), vertBuffer);
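Putting the size fixes together, the upload and read-back path might look like this sketch (it assumes com.attrib->vertices holds num_vertices * 3 tightly packed floats):
size_t floatCount = (size_t)com.attrib->num_vertices * 3;
glBufferData(GL_ARRAY_BUFFER, floatCount * sizeof(GLfloat), 0, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, floatCount * sizeof(GLfloat), com.attrib->vertices);
GLfloat *vertBuffer = malloc(floatCount * sizeof(GLfloat));
glGetBufferSubData(GL_ARRAY_BUFFER, 0, floatCount * sizeof(GLfloat), vertBuffer);
// Print three floats per vertex instead of incrementing the loop counter by hand
for (size_t i = 0; i < floatCount; i += 3) {
    printf("Vertex: (%f, %f, %f)\n", vertBuffer[i], vertBuffer[i + 1], vertBuffer[i + 2]);
}
free(vertBuffer);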
I'm having a problem rendering a texture in OpenGL. I'm getting a white color instead of the texture image, but my rectangle and the whole context are working fine. I'm using the stb_image loading library and OpenGL 3.3.
Before rendering:
glViewport(0, 0, ScreenWidth, ScreenHeight);
glClearColor(0.2f, 0.0f, 0.0f, 1.0f);
int Buffer;
glGenBuffers(1, &Buffer);
glBindBuffer(GL_ARRAY_BUFFER, Buffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
Loading the Texture:
unsigned char *ImageData = stbi_load(file_name, 1024, 1024, 0, 4); //not sure about the "0" parameter
int Texture;
glGenTextures(1, &Texture);
glBindTexture(GL_TEXTURE_2D, Texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, x, y, 0, GL_RGB, GL_UNSIGNED_BYTE, ImageData);
stbi_image_free(ImageData);
Rendering:
glClear(GL_COLOR_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, Texture);
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLES, 0, 6 * 3); //a rectangle which is rendered fine
glDisableVertexAttribArray(0);
I am looking for a very minimalistic and simple solution.
First of all, take a second to look at the function declaration for stbi_load (...):
unsigned char *stbi_load (char const *filename,
int *x, // POINTER!!
int *y, // POINTER!!
int *comp, // POINTER!!
int req_comp)
Now, consider the arguments you have passed it:
file_name (presumably a pointer to a C string)
1024 (definitely NOT a pointer)
1024 (definitely NOT a pointer)
0 (definitely NOT a pointer)
The whole point of passing x, y and comp as pointers is so that stbi can update their values after it loads the image. This is the C equivalent of passing by reference in a language like C++ or Java, but you have opted instead to pass integer constants that the compiler then treats as addresses.
Because of the way you call this function, stbi attempts to store the width of the image at address 1024, the height of the image at address 1024, and the number of components at address 0. You are actually lucky that this does not cause an access violation or other bizarre behavior at run-time; storing things at arbitrary addresses is dangerous.
Instead, consider this:
int ImageX,
    ImageY,
    ImageComponents; // stbi_load expects int* for these, so plain int avoids a pointer-type mismatch
GLubyte *ImageData = stbi_load(file_name, &ImageX, &ImageY, &ImageComponents, 4);
[...]
glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA8,
ImageX, ImageY, 0, GL_RGBA, GL_UNSIGNED_BYTE, ImageData);
// ~~~~ You are requesting 4 components per-pixel from stbi, so this is RGBA!
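As a small defensive addition (a sketch; stbi_failure_reason() is part of stb_image), it is also worth checking that the load succeeded before uploading:
if (ImageData == NULL) {
    // stbi_failure_reason() gives a short description of why the load failed
    fprintf(stderr, "stbi_load failed: %s\n", stbi_failure_reason());
    return;
}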
It's been a while since I've done OpenGL, but aside from a few arbitrary calls, the main issue I'd guess is that you're not calling glEnable(GL_TEXTURE_2D); before you make your draw call. Afterwards it is wise to call glDisable(GL_TEXTURE_2D); as well, to avoid potential future conflicts.
In my game engine, I draw to a framebuffer before drawing to the actual screen, and then use a post-processing shader to draw to the actual framebuffer. However, there's an issue I'm having with AMD drivers where my quad renders incorrectly.
The code works as expected on both Nvidia and Intel GPUs across multiple operating systems. Here's an example: http://i.stack.imgur.com/5fiJk.png
It's been reproduced across 3 systems with different AMD GPUs, and hasn't been seen on non-AMD systems.
The code that created the quad is here:
static int data[] = {
0, 0,
0, 1,
1, 1,
1, 0
};
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(data), &data, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_INT, GL_FALSE, 0, NULL);
glVertexAttribPointer(1, 2, GL_INT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
/* code which draws it */
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
I have tried other options such as using two independent triangles, but nothing thus far has worked. I'm completely lost as to why this behaviour would occur in the AMD drivers.
Then stop sending them as GL_INT. You're already using 8 bytes per position, so you may as well use GL_FLOAT and floating-point values. If you want to save space, use GL_SHORT or GL_BYTE.
Yes, this is technically a driver bug. But you're not gaining anything by using GL_INT.
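For example, a minimal sketch of the same quad sent as floats (an assumed layout; positions and texture coordinates still share the same buffer, as in the question):
static const float data[] = {
    0.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f,
    1.0f, 0.0f
};
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);
// Both attributes read the same tightly packed vec2 data, now as GL_FLOAT
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);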