Why is the AMD driver rendering fullscreen quads incorrectly?

In my game engine, I render the scene to an offscreen framebuffer, and then use a post-processing shader to draw a fullscreen quad to the actual (default) framebuffer. However, there's an issue I'm having with AMD drivers where my quad renders incorrectly.
The code works as expected on both Nvidia and Intel GPUs across multiple operating systems. Here's an example: http://i.stack.imgur.com/5fiJk.png
It has been reproduced across 3 systems with different AMD GPUs, and hasn't been seen on non-AMD systems.
The code that creates the quad is here:
static int data[] = {
    0, 0,
    0, 1,
    1, 1,
    1, 0
};
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(data), &data, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_INT, GL_FALSE, 0, NULL);
glVertexAttribPointer(1, 2, GL_INT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
/* code which draws it */
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
I have tried other options, such as using two independent triangles, but nothing so far has worked. I'm completely lost as to why this behaviour would occur in the AMD drivers.

Then stop sending them as GL_INT. You're already using 8 bytes per position, so you may as well use GL_FLOAT and floating-point values. If you want to save space, use GL_SHORT or GL_BYTE.
Yes, this is technically a driver bug. But you're not gaining anything by using GL_INT.
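For illustration, here is a minimal float-based version of the setup, under the assumption (taken from the question's code) that attribute locations 0 and 1 both read the same 2-component corner data:
static const float data[] = {
    0.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f,
    1.0f, 0.0f
};
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
/* Same 8 bytes per position as before, but no int-to-float conversion
   is left to the driver. */
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
The draw call is unchanged: bind the VAO and call glDrawArrays(GL_TRIANGLE_FAN, 0, 4).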

Related

glTexImage2D randomly and occasionally takes 800 times longer to execute

I'm using OpenGL 2.0 with GLSL for some GPGPU purposes (I know, not the best fit for GPGPU).
I have a reduction phase for matrix multiplication in which I have to constantly reduce the texture size (I'm using glTexImage2D). The pseudocode is something like this:
// Start reduction
for (int i = 1; i <= it; i++)
{
    glViewport(0, 0, x, y);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    x = resize(it);
    if (i % 2 != 0)
    {
        glUniform1i(tex2_multiply_initialstep, 4);
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer3);
        // Resize output texture
        glActiveTexture(GL_TEXTURE5);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, x, y, 0, GL_RGBA, GL_FLOAT, NULL);
    }
    else
    {
        glUniform1i(tex2_multiply_initialstep, 5);
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer2);
        // Resize output texture
        glActiveTexture(GL_TEXTURE4);
        // A LOT OF TIME!!!!!!!
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, x, y, 0, GL_RGBA, GL_FLOAT, NULL);
        // A LOT OF TIME!!!!!!!
    }
}
In some iterations, the glTexImage2D of the else branch takes 800 times longer than in other ones. I ran a test hardcoding x and y, but surprisingly it takes similarly long in the same iterations, so it has nothing to do with the value of x.
What's wrong here? Are there alternatives for resizing without glTexImage2D?
Thanks.
EDIT:
I know that OpenGL 2.0 is a bad choice for GPGPU, but it's mandatory for my project. Because of that, I'm not able to use functions like glTexStorage2D, since they are not included in the 2.0 subset.
I'm not sure if I understand exactly what you're trying to achieve, but glTexImage2D reallocates the texture's storage each time you call it. You may want to call glTexStorage2D once, and then upload with glTexSubImage2D.
You can check Khronos's Common Mistakes page about this. The relevant part is:
Better code would be to use texture storage functions (if you have
OpenGL 4.2 or ARB_texture_storage) to allocate the texture's storage,
then upload with glTexSubImage2D:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
This creates a texture with a single mipmap level, and sets all of the parameters appropriately. If you wanted to have multiple mipmaps, then you should change the 1 to the number of mipmaps you want. You will also need separate glTexSubImage2D calls to upload each mipmap.
If that is unavailable, you can get a similar effect from this code:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
Again, if you use more than one mipmap, you should change GL_TEXTURE_MAX_LEVEL to state how many you will use (minus 1; the base/max levels form a closed range), then perform a glTexImage2D call (note the lack of "Sub") for each mipmap.
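Since glTexStorage2D is off the table for you, you can get the same effect by never reallocating. A sketch of the idea, using the names from your pseudocode plus assumed max_x/max_y bounds (the largest size the reduction will ever need): allocate both ping-pong textures once before the loop, and let glViewport restrict rendering to the shrinking sub-region inside it:
/* One-time setup, before the reduction loop: allocate both ping-pong
   textures at their maximum size, with NULL data. */
glActiveTexture(GL_TEXTURE4);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, max_x, max_y, 0, GL_RGBA, GL_FLOAT, NULL);
glActiveTexture(GL_TEXTURE5);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, max_x, max_y, 0, GL_RGBA, GL_FLOAT, NULL);

/* Inside the loop: only the viewport changes per iteration. */
glViewport(0, 0, x, y);
/* ...bind the framebuffer, set the uniform, draw; no glTexImage2D here... */
Note that the shader then has to sample with texture coordinates scaled to the x*y sub-region rather than the full texture.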

OpenGL Translate/Rotate single Objects from same Buffer

I'm trying to program a simple industrial robot arm in C + OpenGL 3.3. I have 3 simple models set up in a single set of buffers (vertex, index and color buffer), and I can rotate/translate the whole thing around. However, I am unable to translate/rotate a single section.
The vertex data is saved in a header file. The red segment is called "base", blue is "seg1" and green is "seg2". I loaded them into the buffers like so:
size = sizeof(base_verticies) + sizeof(seg1_verticies) + sizeof(seg2_verticies);
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, size, 0, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(base_verticies), base_verticies);
glBufferSubData(GL_ARRAY_BUFFER, sizeof(base_verticies), sizeof(seg1_verticies), seg1_verticies);
glBufferSubData(GL_ARRAY_BUFFER, sizeof(base_verticies)+sizeof(seg1_verticies), sizeof(seg2_verticies), seg2_verticies);
size = sizeof(base_indices) + sizeof(seg1_indices) + sizeof(seg2_indices);
glGenBuffers(1, &IBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, size, 0, GL_STATIC_DRAW);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, sizeof(base_indices), base_indices);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, sizeof(base_indices), sizeof(seg1_indices), seg1_indices);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, sizeof(base_indices)+sizeof(seg1_indices), sizeof(seg2_indices), seg2_indices);
size = sizeof(base_colors) + sizeof(seg1_colors) + sizeof(seg2_colors);
glGenBuffers(1, &CBO);
glBindBuffer(GL_ARRAY_BUFFER, CBO);
glBufferData(GL_ARRAY_BUFFER, size, 0, GL_STATIC_DRAW);
glBufferSubData( ... ); ...; // same for colors
I can now draw those sections separately like so:
//base (red)
glDrawElements(GL_TRIANGLES, sizeof(base_indices)/sizeof(GLushort), GL_UNSIGNED_SHORT, 0);
//seg1 (blue)
glDrawElements(GL_TRIANGLES, sizeof(base_indices)/sizeof(GLushort), GL_UNSIGNED_SHORT, (void*)sizeof(base_indices));
//seg2 (green)
glDrawElements(GL_TRIANGLES, sizeof(base_indices)/sizeof(GLushort), GL_UNSIGNED_SHORT, (void*)(sizeof(base_indices)+sizeof(seg1_indices)));
But my problem is that I can only rotate and translate the whole thing, not single sections. I rotate the whole model in the onIdle() function like so:
float angle = 1;
float RotationMatrixAnim[16];
SetRotationY(angle, RotationMatrixAnim);
/* Apply model rotation */
MultiplyMatrix(RotationMatrixAnim, InitialTransform, ModelMatrix);
MultiplyMatrix(TranslateDown, ModelMatrix, ModelMatrix);
Is there a way to modify the position, translation, etc. for single segments? The only tutorials I can find are for legacy OpenGL with glPushMatrix() and glPopMatrix(), which are deprecated.
Is it a good idea to store those segments in the same buffer?
Thanks in advance for any help and your precious time.
Thanks to @Rabbid76's suggestion I was able to solve my issue.
I made 3 model matrices, one for each section:
float ModelMatrix[3][16]; /* Model matrix */
I rotate/translate the section I want like so:
float RotationMatrixSeg2[16];
SetRotationY(angle, RotationMatrixSeg2);
MultiplyMatrix(RotationMatrixSeg2, InitialTransform, ModelMatrix[2]);
MultiplyMatrix(TranslateDown, ModelMatrix[2], ModelMatrix[2]);
I set the corresponding ModelMatrix before drawing each section.
/* Draw each section */
glUniformMatrix4fv(RotationUniform, 1, GL_TRUE, ModelMatrix[0]);
glDrawElements(GL_TRIANGLES, sizeof(base_indices)/sizeof(GLushort), GL_UNSIGNED_SHORT, 0);
glUniformMatrix4fv(RotationUniform, 1, GL_TRUE, ModelMatrix[1]);
glDrawElements(GL_TRIANGLES, sizeof(seg1_indices)/sizeof(GLushort), GL_UNSIGNED_SHORT, (void*)sizeof(base_indices));
glUniformMatrix4fv(RotationUniform, 1, GL_TRUE, ModelMatrix[2]);
glDrawElements(GL_TRIANGLES, sizeof(seg2_indices)/sizeof(GLushort), GL_UNSIGNED_SHORT, (void*)(sizeof(base_indices)+sizeof(seg1_indices)));
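A follow-up note on articulation: if seg2 is attached to seg1, which is attached to the base, each section's matrix can be pre-multiplied by its parent's before upload. A sketch using the question's helpers, assuming MultiplyMatrix(A, B, Out) computes Out = A * B, as the calls above suggest:
/* Hypothetical chaining for an articulated arm:
   seg1 inherits the base transform, seg2 inherits seg1's. */
float Chained1[16], Chained2[16];
MultiplyMatrix(ModelMatrix[0], ModelMatrix[1], Chained1);
MultiplyMatrix(Chained1, ModelMatrix[2], Chained2);
glUniformMatrix4fv(RotationUniform, 1, GL_TRUE, ModelMatrix[0]); /* base */
/* ...draw base... */
glUniformMatrix4fv(RotationUniform, 1, GL_TRUE, Chained1);       /* seg1 */
/* ...draw seg1... */
glUniformMatrix4fv(RotationUniform, 1, GL_TRUE, Chained2);       /* seg2 */
/* ...draw seg2... */
That way rotating the base carries both segments along with it, the way the old glPushMatrix()/glPopMatrix() stack used to.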

OpenGL - Not able to copy data past a certain size to bound VBO

I'm still trying to grasp the quirks of OpenGL, and it doesn't help trying to learn on OSX, which uses OpenGL 4.2 Core; that could be relevant.
All I'm trying to do is copy an array of vertex data, parsed from an object file (the data shows up fine if I iterate through it directly), into a bound VBO. However, when I use glGetBufferSubData to return the contents of the currently bound VBO, it shows the data fine for a few dozen lines, then starts displaying what I can only see as corrupted data, like so...
Vertex: (-0.437500, 0.328125, 0.765625)
Vertex: (0.500000, 0.390625, 0.687500)
Vertex: (-0.500000, 0.390625, 0.687500)
Vertex: (0.546875, 0.437500, 0.578125)
Vertex: (-0.546875, 0.437500, 0.578125)
-------------------vvvvv corrupts here
Vertex: (0.625000, 19188110749038498652920741888.000000, 12125095608195490978463744.000000)
Vertex: (68291490374736750313472.000000, 70795556751816250086766057357312.000000, 0.000000)
Vertex: (4360831549674110915425861632.000000, 4544202249129758853702852018176.000000, 50850084445497733730842814971904.000000)
This happens even if I try initializing a buffer of the same or similar size with all zeroes; it gets weird after some arbitrary amount. Here's a snippet showing what I'm doing, but I can't imagine what's going on.
// Parse an OBJ file
compositeWavefrontObj com;
parseObjFileVerticies("suzanne.obj", &com);
// Bind a vertex array object
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// copy vertex buffer data
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, com.attrib->num_vertices*sizeof(GLfloat), 0, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, com.attrib->num_vertices*sizeof(GLfloat), com.attrib->vertices);
// Read back data from bound buffer object
GLfloat *vertBuffer = malloc(sizeof(GLfloat) * com.attrib->num_vertices);
glGetBufferSubData(GL_ARRAY_BUFFER, 0, com.attrib->num_vertices, vertBuffer);
for (int i = 0; i < com.attrib->num_vertices; ++i) {
    printf("Vertex: (%f,", *(vertBuffer+i));
    ++i;
    printf(" %f,", *(vertBuffer+i));
    ++i;
    printf(" %f)\n", *(vertBuffer+i));
}
printf("\n\n\n");
I see two problems:
glBufferData(GL_ARRAY_BUFFER, com.attrib->num_vertices*sizeof(GLfloat), 0, GL_STATIC_DRAW); needs the size in bytes of the whole data.
Because you have X,Y,Z for each vertex, it should be
glBufferData(GL_ARRAY_BUFFER, com.attrib->num_vertices*3*sizeof(GLfloat), 0, GL_STATIC_DRAW);
note the '3'.
The same applies when you allocate room for, and read back, the buffer:
GLfloat *vertBuffer = malloc(sizeof(GLfloat) * com.attrib->num_vertices * 3);
glGetBufferSubData(GL_ARRAY_BUFFER, 0, com.attrib->num_vertices * 3 * sizeof(GLfloat), vertBuffer);
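Putting both fixes together, the relevant lines might look like this; nfloats is a helper variable introduced here for clarity, not from the original code:
/* 3 floats (X, Y, Z) per vertex. */
size_t nfloats = com.attrib->num_vertices * 3;
glBufferData(GL_ARRAY_BUFFER, nfloats * sizeof(GLfloat), NULL, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, nfloats * sizeof(GLfloat), com.attrib->vertices);

GLfloat *vertBuffer = malloc(nfloats * sizeof(GLfloat));
glGetBufferSubData(GL_ARRAY_BUFFER, 0, nfloats * sizeof(GLfloat), vertBuffer);
Note that the original glGetBufferSubData call passed com.attrib->num_vertices alone as its size argument; since that argument is a byte count, far fewer floats were read back than the loop then printed, which matches the garbage appearing after a few dozen vertices.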

OpenGL texture rendering

I'm having a problem rendering a texture in OpenGL. I'm getting a white color instead of the texture image, but my rectangle and the whole context work fine. I'm using the stbi image loading library and OpenGL 3.3.
Before rendering:
glViewport(0, 0, ScreenWidth, ScreenHeight);
glClearColor(0.2f, 0.0f, 0.0f, 1.0f);
int Buffer;
glGenBuffers(1, &Buffer);
glBindBuffer(GL_ARRAY_BUFFER, Buffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
Loading the Texture:
unsigned char *ImageData = stbi_load(file_name, 1024, 1024, 0, 4); //not sure about the "0" parameter
int Texture;
glGenTextures(1, &Texture);
glBindTexture(GL_TEXTURE_2D, Texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, x, y, 0, GL_RGB, GL_UNSIGNED_BYTE, ImageData);
stbi_image_free(ImageData);
Rendering:
glClear(GL_COLOR_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, Texture);
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLES, 0, 6 * 3); //a rectangle which is rendered fine
glDisableVertexAttribArray(0);
I am looking for a very minimalistic and simple solution.
First of all, take a second to look at the function declaration for stbi_load (...):
unsigned char *stbi_load (char const *filename,
                          int *x,    // POINTER!!
                          int *y,    // POINTER!!
                          int *comp, // POINTER!!
                          int req_comp);
Now, consider the arguments you have passed it:
file_name (presumably a pointer to a C string)
1024 (definitely NOT a pointer)
1024 (definitely NOT a pointer)
0 (definitely NOT a pointer)
The whole point of passing x, y and comp as pointers is so that stbi can update their values after it loads the image. This is the C equivalent of passing by reference in a language like C++ or Java, but you have opted instead to pass integer constants that the compiler then treats as addresses.
Because of the way you call this function, stbi attempts to store the width of the image at the address: 1024, the height of the image at address: 1024 and the number of components at address: 0. You are actually lucky that this does not cause an access violation or other bizarre behavior at run-time, storing things at arbitrary addresses is dangerous.
Instead, consider this:
int ImageX,
    ImageY,
    ImageComponents; // stbi_load writes through int pointers, so use int rather than GLuint
GLubyte *ImageData = stbi_load(file_name, &ImageX, &ImageY, &ImageComponents, 4);
[...]
glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA8,
ImageX, ImageY, 0, GL_RGBA, GL_UNSIGNED_BYTE, ImageData);
// ~~~~ You are requesting 4 components per-pixel from stbi, so this is RGBA!
It's been a while since I've done OpenGL, but aside from a few arbitrary calls, the main issue I'd guess is that you're not calling glEnable(GL_TEXTURE_2D); before you make your draw call. Afterwards it is wise to call glDisable(GL_TEXTURE_2D); as well, to avoid potential future conflicts. (Note that this only applies to the fixed-function pipeline; in an OpenGL 3.3 core profile with shaders, glEnable(GL_TEXTURE_2D) is unnecessary and not even a valid enable.)
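For a minimal end-to-end fix, the loading path could look like this: a sketch assuming a core-profile context where the fragment shader samples a bound sampler2D uniform (so no glEnable(GL_TEXTURE_2D) is involved):
int x, y, comp;
/* Request 4 components per pixel, and upload as RGBA to match. */
unsigned char *ImageData = stbi_load(file_name, &x, &y, &comp, 4);

GLuint Texture;
glGenTextures(1, &Texture);
glBindTexture(GL_TEXTURE_2D, Texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, x, y, 0, GL_RGBA, GL_UNSIGNED_BYTE, ImageData);
stbi_image_free(ImageData);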

Unable to display VBO in OpenGL

I have a VBO and an IBO in OpenGL, but am unable to draw them properly. Could you please let me know what I could have forgotten in the frame display function?
- struct Point3D is a struct with 3 floats inside (x, y, z).
- nbVertex is the number of vertices in the glVertex array.
- nbVBOInd is the number of indices in the VBOInd array.
glGenBuffers(1, &VertexVBOID);
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
glBufferData(GL_ARRAY_BUFFER, sizeof(struct Point3D)*nbVertex, glVertex, GL_STATIC_DRAW);
glGenBuffers(1, &IndexVBOID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(int)*nbVBOInd, VBOInd, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(glVertex), BUFFER_OFFSET(0)); //The starting point of the VBO, for the vertices
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0)); //The starting point of the IBO
Thanks!
I see the same problem as rodrigo: you have a data type mismatch, as you can see here:
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(int)*nbVBOInd, VBOInd, GL_STATIC_DRAW);
sizeof(int) - using an integer type
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
GL_UNSIGNED_SHORT - using a short type
According to the OpenGL specification, only unsigned data types are allowed for glDrawElements, and the type must match what you stored in the index buffer. To fix this you need to:
change VBOInd to an unsigned type in its declaration, like:
unsigned int *VBOInd = new unsigned int[nbVBOInd];
replace the 6th call with:
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int)*nbVBOInd, VBOInd, GL_STATIC_DRAW);
replace the 11th (last) call with:
glDrawElements(GL_TRIANGLES, nbVBOInd, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
Anyway, I believe the real problem is hidden in the pointer setup; change the 9th call to:
glVertexPointer(3, GL_FLOAT, 0, BUFFER_OFFSET(0));
If that doesn't work, please show us how glVertex and VBOInd are declared and filled with data. Maybe you're using std::vector? Then you need to pass these data containers like:
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int)*nbVBOInd, &VBOInd[0], GL_STATIC_DRAW);
If something's unclear, just ask in the comments.
Try changing the last line to:
glDrawElements(GL_TRIANGLES, nbVBOInd, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
Unless the data in IndexVBOID really are shorts; but then the sizeof(int) above would be wrong.
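Putting the pieces together, a corrected draw path might look like this (a sketch assuming unsigned int indices and tightly packed Point3D vertices):
/* Assumes: unsigned int *VBOInd, struct Point3D *glVertex. */
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, BUFFER_OFFSET(0)); /* stride 0: tightly packed */
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);
glDrawElements(GL_TRIANGLES, nbVBOInd, GL_UNSIGNED_INT, BUFFER_OFFSET(0));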
