I'm trying to port an OpenGL program to GLESv2. The program uses the following code to render a texture to the default framebuffer (it also fails if I render to an FBO, which likewise works under OpenGL).
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glVertexAttribPointer(bgra_texcoords, 2, GL_FLOAT, GL_FALSE, 0, display_texcoords);
glEnableVertexAttribArray(bgra_texcoords);
DEBUG_ERROR_CHECK();
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glUseProgram(bgra_program);
glBindTexture(GL_TEXTURE_2D, inst->texture);
glUniform1i(bgra_texture, 0);
glViewport(inst->x, root_surface->h - (inst->y + inst->h), inst->w, inst->h);
DEBUG_ERROR_CHECK();
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
DEBUG_ERROR_CHECK();
glBindBuffer(GL_ARRAY_BUFFER, 0);
glFlush();
This works fine with OpenGL, but it fails at glDrawArrays() under GLESv2. I read the question "glDrawElements throws GL_INVALID_VALUE error", which is very similar to my problem, but I can't figure out how to apply the solution to my code since I'm not using a vertex array object and I'm very new to GL.
inst->texture is a texture uploaded with glTexImage2D(). I created the vertex_buffer right after initializing EGL and compiling the shaders:
glGenBuffers(1, &vertex_buffer);
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices),
vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glVertexAttribPointer(bgra_pos, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(bgra_pos);
DEBUG_ERROR_CHECK();
Edit: You can look at the whole source file here: https://github.com/fernando-rodriguez/mediabox/blob/bc4135d9568b2c5b4e8f39ac63ded2cb66023bcd/src/lib/ui/video-opengl.c. The file is a video "driver" for a compositor; all it does is create 2D surfaces and render them to the screen. If there is anything wrong with the question or I am missing something, please post a comment so I can fix it. Thanks.
I figured it out. I was calling eglBindAPI() after eglCreateContext(), so I think I was actually creating an OpenGL ES 1.x context.
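For reference, a minimal sketch of the corrected ordering (display and config stand in for whatever the existing EGL setup already produced):
eglBindAPI(EGL_OPENGL_ES_API);                 /* must be called before eglCreateContext() */
static const EGLint ctx_attribs[] = {
    EGL_CONTEXT_CLIENT_VERSION, 2,             /* request an OpenGL ES 2.0 context */
    EGL_NONE
};
EGLContext ctx = eglCreateContext(display, config, EGL_NO_CONTEXT, ctx_attribs);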
Related
I recently downloaded GLFW3, since it's better than GLUT from what I've heard. I managed to get a window to display and change the clear color, but I cannot figure out why my draw calls aren't rendering anything. In this case, it's a triangle. I'm running this in Xcode 9.2, and this is the code I have right now:
#define GLFW_INCLUDE_GLCOREARB
#include <stdio.h>
#include <stdlib.h>
#include <GLFW/glfw3.h>
static const GLfloat vertex_buffer_data[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
};
int main(int argc, const char * argv[]) {
GLuint VertexBufferID;
GLFWwindow* window;
/* Initialize the library */
if ( !glfwInit() )
{
return -1;
}
#ifdef __APPLE__
/* We need to explicitly ask for a 3.2 context on OS X */
glfwWindowHint (GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint (GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint (GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint (GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
#endif
/* Create a windowed mode window and its OpenGL context */
window = glfwCreateWindow( 400 , 400, "Hello World", NULL, NULL );
if (!window)
{
glfwTerminate();
return -1;
}
/* Make the window's context current */
glfwMakeContextCurrent(window);
glGenBuffers(1, &VertexBufferID);
glBindBuffer(GL_ARRAY_BUFFER, VertexBufferID);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertex_buffer_data), vertex_buffer_data, GL_STATIC_DRAW);
//Edit in
GLuint program = initShaders(VSHADER_SOURCE, FSHADER_SOURCE);
while (!glfwWindowShouldClose(window))
{
/* Render here */
//set clear color
glClearColor(0.0, 0.0, 0.0, 1.0);
//clear window
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the buffers
//Draw
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, VertexBufferID);
//got error 0x502 on line below
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
//Edit in
glUseProgram(program);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableVertexAttribArray(0);
/* Swap front and back buffers */
glfwSwapBuffers(window);
/* Poll for and process events */
glfwPollEvents();
}
glfwTerminate();
return 0;
}
It's probably a minor mistake but I can't see it.
Edit: Okay, shaders are required here from what I'm told. I don't know how I got away without them in GLUT; I guess it was an older version. So here are the shaders I'm using.
"#version 330 core\n"
"layout(location = 0) in vec3 vertexPosition_modelspace;\n"
"void main()\n"
"{\n"
" gl_Position.xyz = vertexPosition_modelspace;\n"
" gl_Position.w = 1.0;\n"
"}\n";
"#version 330 core\n"
"out vec3 color;\n"
"void main()\n"
"{\n"
" color = vec3(1, 0, 0);\n"
"}\n";
I should also mention that I've been following this tutorial for help as well. http://www.opengl-tutorial.org/beginners-tutorials/tutorial-2-the-first-triangle/
As for errors, glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0); raises error code 0x502, which apparently means GL_INVALID_OPERATION, but I don't know what that means in this case.
The first parameter of glBufferData is the target to which the buffer is bound, not the named buffer object itself. glBufferData operates on the buffer object that is currently bound to the specified target:
glBufferData(
GL_ARRAY_BUFFER, // GL_ARRAY_BUFFER instead of VertexBufferID
sizeof(vertex_buffer_data),
vertex_buffer_data,
GL_STATIC_DRAW);
If you want to use an OpenGL core profile context, then you have to use a shader program; this is not optional.
Furthermore, you have to create a named Vertex Array Object, because the default vertex array object (0) is not available in a core profile context.
The modern way of rendering in OpenGL is to use a shader program.
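A minimal sketch of that core-profile setup (shader creation omitted; program, VertexBufferID and vertex_buffer_data are the names from the question):
GLuint vao;
glGenVertexArrays(1, &vao);                    // a named VAO is required; object 0 does not exist in core profile
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, VertexBufferID);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertex_buffer_data), vertex_buffer_data, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);  // recorded in the bound VAO
glEnableVertexAttribArray(0);
// each frame:
glUseProgram(program);                         // a shader program is mandatory in core profile
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);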
If you don't want to use a shader program, then you have to use a compatibility profile context, define the array of vertex data the deprecated way with glVertexPointer, and enable the client-side capability for vertex coordinates with glEnableClientState(GL_VERTEX_ARRAY):
glfwWindowHint (GLFW_OPENGL_PROFILE,
GLFW_OPENGL_COMPAT_PROFILE); // instead of GLFW_OPENGL_CORE_PROFILE
.....
glGenBuffers(1, &VertexBufferID);
glBindBuffer(GL_ARRAY_BUFFER, VertexBufferID);
glBufferData(GL_ARRAY_BUFFER,
sizeof(vertex_buffer_data), vertex_buffer_data, GL_STATIC_DRAW);
.....
glEnableClientState( GL_VERTEX_ARRAY );
glBindBuffer(GL_ARRAY_BUFFER, VertexBufferID);
glVertexPointer(3, GL_FLOAT, 0, (void*)0);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableClientState( GL_VERTEX_ARRAY );
I'm about to implement a very basic render module. Now it's time to change from the old way of rendering primitives to a modern approach using VBOs. So far I understand how it works, but I can't get my proof of concept working.
Loading the basic model (a triangle) generates no OpenGL errors (glBindVertexArray is a macro for glBindVertexArrayAPPLE):
float pos[] = {
-1.0f, -1.0f,-5.0f,
-1.0f, 1.0f, -5.0f,
1.0f,1.0f,-5.0f,
};
printf("%d %d", map_VAO, map_VBO);
checkGLError();
glGenVertexArrays(1, &map_VAO);
checkGLError();
glGenBuffers(1, &map_VBO);
printf("%d %d", map_VAO, map_VBO); // here with 4.1 map_VAO is 0
checkGLError();
glEnableClientState(GL_VERTEX_ARRAY);
glBindVertexArray(map_VAO);
glBindBuffer(GL_ARRAY_BUFFER, map_VBO);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), &pos[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
glDisableClientState(GL_VERTEX_ARRAY);
return 0;
And in the main loop (drawing part):
// .. clear buffers load identity etc...
glColor3f(0.33f,0.0f,0.0f);
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, map_VBO);
glBindVertexArrayAPPLE(map_VAO);
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindBuffer(GL_ARRAY_BUFFER, 0 );
glBindVertexArrayAPPLE(0);
glDisableClientState(GL_VERTEX_ARRAY);
New drawing part (removing unnecessary client state and binds):
glColor3f(0.33f,0.0f,0.0f);
glBindVertexArrayAPPLE(map_VAO);
glDrawArrays(GL_TRIANGLES, 0, 3);
But nothing is displayed. I have tried changing the profiles and the OpenGL version, but other problems arise.
I can draw a simple triangle with the old approach:
glBegin(GL_TRIANGLES);
glVertex3f( -1.0f, -1.0f, -5.0f);
glVertex3f( -1.0f, 1.0f, -5.0f);
glVertex3f( 1.0f, 1.0f,-5.0f);
glEnd();
Question: What am I doing wrong? Is there some kind of activation required for VBOs and VAOs?
Additional question: why, when I use an OpenGL 4.1 core profile, can't I get a VAO name with glGenVertexArrays? (It reports GL_INVALID_OPERATION.)
A few things:
1. glEnableClientState is deprecated. It tells OpenGL you're using a vertex array for fixed-function rendering, which you're not doing anymore, so there is no use in calling this function (and it probably causes weird results).
2. glEnableVertexAttribArray(0): there is no need to enable it again in your drawing function; the enable state of vertex attribute 0 was stored in your VAO.
3. glBindBuffer(GL_ARRAY_BUFFER, map_VBO): also no need to call this in the drawing function; glVertexAttribPointer recorded the VBO binding when you configured the VAO.
So, remove the glEnable/Disable-ClientState functions and remember that you just need to bind the VAO in your case. I believe the cause of your error is point 1. Points 2 and 3 are just to improve your code ;)
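The drawing part can then be reduced to something like this (a sketch using the names from the question):
glColor3f(0.33f, 0.0f, 0.0f);
glBindVertexArrayAPPLE(map_VAO);   // the VAO already stores the attribute pointer and enable state
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArrayAPPLE(0);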
You did not wrap glGenVertexArrays around glGenVertexArraysAPPLE, did you? (Like you mentioned doing for glBindVertexArray.)
That function does not exist in core profiles on OS X; you will notice a distinct lack of GL_APPLE_vertex_array_object in the extensions string. It exists in Legacy (2.1) profiles as seen here, but not in Core (3.2+) as seen here.
You are supposed to #include <OpenGL/gl3.h> when using a core profile on OS X and call glGenVertexArrays (...) instead of glGenVertexArraysAPPLE (...).
Only call VertexArray*APPLE functions in an OpenGL 2.1 context on OS X or you will get GL_INVALID_OPERATION errors.
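A minimal sketch of the core-profile variant on OS X (using the names from the question):
#include <OpenGL/gl3.h>            /* core-profile header on OS X */
glGenVertexArrays(1, &map_VAO);    /* core entry points, not the *APPLE variants */
glBindVertexArray(map_VAO);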
I have a question about glTranslatef. My teacher told me to place the call in display() which is defined below:
void display()
{
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(program);
glBindBuffer(GL_ARRAY_BUFFER, vertexArrayBufferID);
glVertexAttribPointer(vPos, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0) );
glEnableVertexAttribArray(vPos);
glTranslatef(transX, transY, transZ);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementArrayBufferID);
glDrawElements(GL_TRIANGLES, numElements, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glutSwapBuffers();
}
transX, transY, and transZ are initially set to 0.0 and are changed by a glutKeyboardFunc() callback that reads the key pressed. I've tested the callback with cout and it prints the expected values, but the image won't translate. Is there a specific placement required for this to work? I checked on Google but haven't found what I'm looking for yet.
You need to reset the matrix as well; otherwise the translation is accumulated every frame until your content is outside the display.
Look up glLoadIdentity.
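For example, the top of display() could look like this (a sketch; it assumes the fixed-function modelview matrix is what ends up transforming the vertices):
void display()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();                         /* reset so the translation is not accumulated */
    glTranslatef(transX, transY, transZ);     /* apply the current offset once per frame */
    /* ... rest of the drawing code as in the question ... */
}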
I've written a small tiling game engine with OpenGL and C, and I can't seem to figure out what the problem is. My main loop looks like this:
void main_game_loop()
{
(poll for events and respond to them)
glClear(GL_COLOR_BUFFER_BIT);
glPushMatrix();
draw_block(WALL, 10, 10);
}
draw_block:
void draw_block(block b, int x, int y)
{
(load b's texture from a hash and store it in GLuint tex)
glPushMatrix();
glTranslatef(x, y, 0);
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
//BLOCK_DIM is 32, the width and height of the texture
glTexCoord2i(0, 0); glVertex3f(0, 0, 0);
glTexCoord2i(1, 0); glVertex3f(BLOCK_DIM, 0, 0);
glTexCoord2i(1, 1); glVertex3f(BLOCK_DIM, BLOCK_DIM, 0);
glTexCoord2i(0, 1); glVertex3f(0, BLOCK_DIM, 0);
glEnd();
glPopMatrix();
}
initialization function: (called before main_game_loop)
void init_gl()
{
glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, SCREEN_WIDTH, SCREEN_HEIGHT, 0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glClearColor(0, 0, 0, 0);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glDisable(GL_DEPTH_TEST);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
When run, this displays a black screen. However, if I remove the glViewport call, it seemingly displays the texture, but huge and in the corner of the window.
The texture IS being drawn correctly, because if I scale out by a huge factor, I can see the entire image. The y-axis also seems to be flipped from what I used in the gluOrtho2D call (discovered by making events add or subtract from the image's x/y coordinates; subtracting from the y coordinate moves the image downward). I'm starting to get frustrated, because this is the simplest possible example I can think of. I'm using SDL, and am passing SDL_OPENGL to SDL_SetVideoMode. What is going on here?
Looks like a problem with glViewport, but just to be sure, did you try clearing the color buffer to purple?
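For example, as a quick sanity check (not a fix):
glClearColor(1.0f, 0.0f, 1.0f, 1.0f);   /* purple, so it is obvious where the buffer is actually cleared */
glClear(GL_COLOR_BUFFER_BIT);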
I've always thought of glViewport as a video/windowing function, not actually part of OpenGL itself, because it is the intermediate between the window manager and the OpenGL subsystem, and it uses window coordinates. As such, you should probably look at it along with the other SDL video calls. I suggest updating the question with the full code, or at least with those parts relevant to the video/window subsystem.
Or is it that you omitted to call glViewport after a resize?
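With SDL 1.2, a resize could be handled roughly like this (a sketch; it assumes the window was created with SDL_RESIZABLE):
if (event.type == SDL_VIDEORESIZE) {
    SDL_SetVideoMode(event.resize.w, event.resize.h, 0, SDL_OPENGL | SDL_RESIZABLE);
    glViewport(0, 0, event.resize.w, event.resize.h);   /* keep the viewport in sync with the window */
}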
You should also try your code without SDL_FULLSCREEN and/or with a smaller window. I usually start with a 512x512 or 640x480 window until I get the viewport and some basic controls right.
The first two parameters of glViewport specify the lower-left corner of the viewport:
http://www.opengl.org/sdk/docs/man/xhtml/glViewport.xml
You can try
glViewport(0, SCREEN_HEIGHT, SCREEN_WIDTH, SCREEN_HEIGHT);
For gluOrtho2D, the parameters are left, right, bottom, top,
so I would probably use
gluOrtho2D(0, SCREEN_WIDTH, 0, SCREEN_HEIGHT);
I am trying to write depth to a texture. I would like to have linear depth, so I tried using an R16F texture. I defined the texture like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16F_EXT, g_bufferWidth, g_bufferHeight, 0,
GL_RED_EXT, GL_HALF_FLOAT_OES, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, g_texture, 0);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16,
g_bufferWidth, g_bufferHeight);
But when debugging in Xcode using frame capture on an iPhone 5, I get an "Unknown texture" in the color buffer, and nothing is written to the depth buffer.
I've also tried just creating a depth texture:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, g_bufferWidth, g_bufferHeight, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, g_texture, 0);
But in this case, too, nothing seems to get written to the depth buffer.
The only way I seem to be able to get anything rendered to the depth buffer is by defining the first texture as RGBA32...
Aren't the EXT_color_buffer_half_float and depth texture extensions available in iOS 6?