Drawing using Vertex Buffer Objects in OpenGL ES 1.1 vs ES 2.0 - c

I am new to OpenGL. I am using the Apple documentation as my main reference:
http://developer.apple.com/library/ios/#documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/TechniquesforWorkingwithVertexData/TechniquesforWorkingwithVertexData.html#//apple_ref/doc/uid/TP40008793-CH107-SW6
My problem is that I am using OpenGL ES 1.1 and not 2.0, so the functions used in Listing 9-3, such as glVertexAttribPointer and glEnableVertexAttribArray, are not recognized... :)
I am trying to make the optimizations described in this documentation:
holding the indices and vertices as a struct with all of their data: position, color (Listing 9-1)
typedef struct _vertexStruct
{
    GLfloat position[3];
    GLubyte color[4];
} VertexStruct;
const VertexStruct vertices[] = {...};
const GLushort indices[] = {...};
and to use VBOs as in Listings 9-2 and 9-3.
As I mentioned, some of the functions used there don't exist in OpenGL ES 1.1. I am wondering if there is a way to do the same in ES 1.1, maybe with some other code?
Thanks,
Alex
Edit: following Christian's answer, I tried to use glVertexPointer and glColorPointer. Here is the code; it draws the cube but with no colors... :(. Anyone, is it possible to use VBOs in such a manner using ES 1.1?
typedef struct {
    GLubyte red;
    GLubyte green;
    GLubyte blue;
    GLubyte alpha;
} Color3D;

typedef struct {
    GLfloat x;
    GLfloat y;
    GLfloat z;
} Vertex3D;

typedef struct {
    Vertex3D position;
    Color3D color;
} MeshVertex;
Cube Data:
static const MeshVertex meshVertices[] =
{
    { { 0.0, 1.0, 0.0 }, { 1.0, 0.0, 0.0, 1.0 } },
    { { 0.0, 1.0, 1.0 }, { 0.0, 1.0, 0.0, 1.0 } },
    { { 0.0, 0.0, 0.0 }, { 0.0, 0.0, 1.0, 1.0 } },
    { { 0.0, 0.0, 1.0 }, { 1.0, 0.0, 0.0, 1.0 } },
    { { 1.0, 0.0, 0.0 }, { 0.0, 1.0, 0.0, 1.0 } },
    { { 1.0, 0.0, 1.0 }, { 0.0, 0.0, 1.0, 1.0 } },
    { { 1.0, 1.0, 0.0 }, { 1.0, 0.0, 0.0, 1.0 } },
    { { 1.0, 1.0, 1.0 }, { 0.0, 1.0, 0.0, 1.0 } }
};
static const GLushort meshIndices[] =
{
    0, 1, 2,
    2, 1, 3,
    2, 3, 4,
    3, 5, 4,
    0, 2, 6,
    6, 2, 4,
    1, 7, 3,
    7, 5, 3,
    0, 6, 1,
    1, 6, 7,
    6, 4, 7,
    4, 5, 7
};
Functions:
GLuint vertexBuffer;
GLuint indexBuffer;

- (void) CreateVertexBuffers
{
    glGenBuffers(1, &vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(meshVertices), meshVertices, GL_STATIC_DRAW);

    glGenBuffers(1, &indexBuffer);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(meshIndices), meshIndices, GL_STATIC_DRAW);
}
- (void) DrawModelUsingVertexBuffers
{
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glVertexPointer(3, GL_FLOAT, sizeof(MeshVertex), (void*)offsetof(MeshVertex, position));
    glEnableClientState(GL_VERTEX_ARRAY);
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(MeshVertex), (void*)offsetof(MeshVertex, color));
    glEnableClientState(GL_COLOR_ARRAY);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glDrawElements(GL_TRIANGLES, sizeof(meshIndices)/sizeof(GLushort), GL_UNSIGNED_SHORT, (void*)0);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
}

Functions like glVertexAttribPointer and glEnableVertexAttribArray are used for generic custom vertex attributes (which are the only supported way of submitting vertex data in OpenGL ES 2.0).
When using the fixed-function pipeline (as you have to in OpenGL ES 1.1) you just use the built-in attributes (think of the glVertex and glColor calls you might have used before switching to vertex arrays). There is a function for each attribute, named similarly to its immediate-mode counterpart, like glVertexPointer or glColorPointer (instead of glVertexAttribPointer). These arrays are enabled/disabled by calling gl(En/Dis)ableClientState with constants like GL_VERTEX_ARRAY or GL_COLOR_ARRAY (instead of gl(En/Dis)ableVertexAttribArray).
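For illustration, a minimal ES 1.1 sketch of this fixed-function style (plain client-side arrays, no VBO; the triangle data is made up):
static const GLfloat triangleVerts[] = {
    0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f
};

void drawTriangle(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);            /* instead of glEnableVertexAttribArray */
    glVertexPointer(3, GL_FLOAT, 0, triangleVerts);  /* instead of glVertexAttribPointer */
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}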
But as a general rule you should not learn OpenGL ES 1.1 programming from 2.0 resources, as much of the information won't be of use to you (at least if you are new to OpenGL). For example, some techniques described on the site you linked may not be supported in 1.1, like VBOs or even VAOs. But I also have to admit that I have no ES experience at all, so I am not perfectly sure about that.
EDIT: Regarding your updated code: I assume "no colors" means the cube is drawn in a single color, probably white. In your first code example you used GLubyte color[4], and now it's some Color3D type; maybe this doesn't fit the glColorPointer(4, GL_UNSIGNED_BYTE, ...) call (where the first argument is the number of components and the second one the type)?
If your Color3D type only contains 3 components or floating-point colors, I would suggest using 4-ubyte colors anyway, because together with your 3 floats for the position you get a perfectly 16-byte aligned vertex, which is also an optimization suggested in the link you provided.
And by the way, the repetition of the index buffer creation in your CreateVertexBuffers function is just a typo, isn't it?
EDIT: Your colors contain ubytes (which range from 0 (black) to 255 (full intensity)), but you initialize them with floats. So your float value 1.0 (which surely is meant as full intensity) is converted to a ubyte and you get 1, which is still very small compared to the whole [0,255] range, so everything looks black. When you use ubytes, you should also initialize them with ubytes: just replace every 0.0 with 0 and every 1.0 with 255 in the color data.
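For example, the first two vertices of your cube data would then look like this (a sketch; the remaining vertices are converted the same way):
static const MeshVertex meshVertices[] =
{
    { { 0.0f, 1.0f, 0.0f }, { 255,   0,   0, 255 } },  /* red, fully opaque */
    { { 0.0f, 1.0f, 1.0f }, {   0, 255,   0, 255 } },  /* green, fully opaque */
    /* ... and so on for the remaining vertices ... */
};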
And by the way, since you're using VBOs in ES 1.1 and at least something is drawn, ES 1.1 apparently supports VBOs; I didn't know that. But I'm not sure if it also supports VAOs.
And by the way, you should call glBindBuffer(GL_ARRAY_BUFFER, 0) (and likewise for the element array buffer) once you're finished using the buffers, at the end of these two functions. Otherwise you may get problems in other functions that assume no buffers are bound while they are in fact still bound. Always remember that OpenGL is a state machine: every state you set stays set until it is changed again or the context is destroyed.
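A minimal sketch of that cleanup, at the end of each of the two functions:
glBindBuffer(GL_ARRAY_BUFFER, 0);          /* unbind the vertex buffer */
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);  /* unbind the index buffer */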

Related

GLSL layout attribute number

I was trying out a shader example to draw a triangle with the RGB colors interpolated across the vertices, and assumed that using
layout (location = 0) in vec4 vertex;
layout (location = 1) in vec4 VertexColor;
in the vertex shader would work, since the 4 color floats immediately follow the 4 vertex floats in the array. However, it always drew a solid red triangle. So I tried location = 4, only to get a black screen. Experimenting further gave a blue triangle for location = 2, and I finally got the interpolated result with location = 3.
My question is, how was I expected to know to enter 3 as the location? The vertex array looks like this:
GLfloat vertices[] = { -1.5, -1.5, 0.0, 1.0,   //first 3D vertex
                        1.0,  0.0, 0.0, 1.0,   //first color
                        0.0,  1.5, 0.0, 1.0,   //second vertex
                        0.0,  1.0, 0.0, 1.0,   //second color
                        1.5, -1.5, 0.0, 1.0,   //third vertex
                        0.0,  0.0, 1.0, 1.0 }; //third color
(Note: the first code block was edited; layout = 1 originally read layout = 3.)
Each location can hold 4 floats (a single vec4), so a valid option would also be:
layout (location = 0)in vec4 vertex;
layout (location = 1) in vec4 VertexColor;
What dictates where each attribute comes from is the set of glVertexAttribPointer calls.
These are the ones I would expect for the GLSL declaration above (assuming you use a VBO):
glVertexAttribPointer(0, 4, GL_FLOAT, false, sizeof(float)*4*2, 0);
glVertexAttribPointer(1, 4, GL_FLOAT, false, sizeof(float)*4*2, sizeof(float)*4);
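And the attribute arrays still have to be enabled before drawing; a minimal sketch for the two locations above:
glEnableVertexAttribArray(0);  /* vertex */
glEnableVertexAttribArray(1);  /* VertexColor */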

Can't spot the issue with my GLSL/OpenGL code

I wrote a little program to display a 32-bit float texture on a simple quad. When displaying the quad, the texture color is always black. I experimented with a lot of things, but I couldn't make it work, and I'm really at a loss as to what the problem is.
The code for creating the OpenGL texture goes like this:
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, textureData);
Using the debugger, there's no error in any of these calls. I also examined the textureData pointer, and got the expected results (in my simplified program, it is just a gradient texture).
This is the vertex shader code in GLSL:
#version 400
in vec4 vertexPosition;
out vec2 uv;
void main() {
    gl_Position = vertexPosition;
    uv.x = (vertexPosition.x + 1.0) / 2;
    uv.y = (vertexPosition.y + 1.0) / 2;
}
It's kind of a simple generation of the UV coordinates without taking them as vertex attributes. The corresponding vertex buffer object is really simple:
GLfloat vertices[4][4] = {
{ -1.0, 1.0, 0.0, 1.0 },
{ -1.0, -1.0, 0.0, 1.0 },
{ 1.0, 1.0, 0.0, 1.0 },
{ 1.0, -1.0, 0.0, 1.0 },
};
I've tested this, and it displays the quad covering the entire window as I intended. Displaying the UV coordinates as colors in the fragment shader reproduces the gradient I expected. Now here's the fragment shader:
#version 400
uniform sampler2D myTex;
in vec2 uv;
out vec4 fragColor;
void main() {
    fragColor = texture(myTex, uv);
    // fragColor += vec4(uv.x, uv.y, 0, 1);
}
The commented-out line displays the UV coordinates as a color for debugging purposes. What am I doing wrong here? I just can't see why the texture() call returns 0 when the texture seems completely right and the UV coordinates are also proper. I link the full code here in case there's something else I'm doing wrong: gl-view.c
EDIT: This is how I set up the myTex sampler:
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
glUniform1i(glGetUniformLocation(shaderProgram, "myTex"), 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
EDIT: Cleaned up the vertex shader code.
I've found the issue: I didn't set any MAG or MIN filter on the texture. Setting the MIN filter to GL_NEAREST solved the problem.
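For context (not part of the original answer): the default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, so without mipmap levels the texture is incomplete and sampling it returns black. A minimal sketch of the fix, right after glTexImage2D:
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);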

Trouble getting view (lookat) and projection (perspective) matrices to work properly

I've been following the open.gl tutorials without using the GLM library, for reasons (stubbornness and C).
I can't get the view and projection matrices to work properly.
Here's the relevant vertex shader code:
#version 150 core
in vec3 size;
in vec3 color;
in vec2 texcoord;
out vec3 Color;
out vec2 Texcoord;
uniform vec3 pos;
uniform float angle;
uniform vec3 camPos;
uniform vec3 camTarget;
const float fov=90, ratio=4.0/3.0, near=1.0, far=10.0;
mat4 projection()
{
    float t = tan(radians(fov)),
          l = ratio * t;
    return mat4(
        vec4(near/l, 0.0,    0.0,                     0.0),
        vec4(0.0,    near/t, 0.0,                     0.0),
        vec4(0.0,    0.0,    -(far+near)/(far-near), -(2*far*near)/(far-near)),
        vec4(0.0,    0.0,    -1.0,                    0.0)
    );
}

mat4 rotZ(float theta)
{
    return mat4(
        vec4(cos(theta), -sin(theta), 0.0, 0.0),
        vec4(sin(theta),  cos(theta), 0.0, 0.0),
        vec4(0.0,         0.0,        1.0, 0.0),
        vec4(0.0,         0.0,        0.0, 1.0)
    );
}

mat4 translate(vec3 translation)
{
    return mat4(
        vec4(1.0, 0.0, 0.0, 0.0),
        vec4(0.0, 1.0, 0.0, 0.0),
        vec4(0.0, 0.0, 1.0, 0.0),
        vec4(translation.x, translation.y, translation.z, 1.0)
    );
}

mat4 lookAtRH(vec3 eye, vec3 target)
{
    vec3 zaxis = normalize(target - eye);                      // The "forward" vector.
    vec3 xaxis = normalize(cross(vec3(0.0, 0.0, 1.0), zaxis)); // The "right" vector.
    vec3 yaxis = normalize(cross(zaxis, xaxis));               // The "up" vector.
    mat4 axis = {
        vec4(xaxis.x, yaxis.x, zaxis.x, 0),
        vec4(xaxis.y, yaxis.y, zaxis.y, 0),
        vec4(xaxis.z, yaxis.z, zaxis.z, 0),
        vec4(dot(xaxis, -eye), dot(yaxis, -eye), dot(zaxis, -eye), 1)
    };
    return axis;
}

void main()
{
    Color = color;
    Texcoord = texcoord;
    mat4 model = translate(pos) * rotZ(angle);
    mat4 view = lookAtRH(camPos, camTarget);
    gl_Position = projection() * view * model * vec4(size, 1.0);
}
From tweaking things around, it seems the view matrix is correct but the projection matrix is causing the dodginess.
First I must remark that it is a very bad idea to do this directly in the shaders.
However, if you really want to, you can. You should be aware that the GLSL matrix constructors work with column vectors. Your projection matrix is thus specified transposed (your translation matrix, however, is correct).
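To see the column-major convention spelled out, here is a sketch in C of the conventional perspective matrix (not your exact formula; note the half angle inside the tangent), laid out as glUniformMatrix4fv with transpose = GL_FALSE expects:
#include <math.h>

/* Column-major perspective matrix. fovDegrees is the vertical field of view. */
void perspective(float out[16], float fovDegrees, float ratio, float near, float far)
{
    float f = 1.0f / tanf(fovDegrees * 3.14159265f / 360.0f); /* cot(fov/2) */
    int i;
    for (i = 0; i < 16; ++i)
        out[i] = 0.0f;
    out[0]  = f / ratio;                            /* column 0: x scale */
    out[5]  = f;                                    /* column 1: y scale */
    out[10] = -(far + near) / (far - near);         /* column 2: depth mapping */
    out[11] = -1.0f;                                /*           -z ends up in w */
    out[14] = -(2.0f * far * near) / (far - near);  /* column 3: depth offset */
}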
EDIT: If you want pure C, here is a nice math library (plus you can check the code :) ): https://github.com/datenwolf/linmath.h
Never do something like that :) Creating matrices in a shader is a very bad idea...
The vertex shader is executed for each vertex. So if you pass a thousand vertices to the shader, you calculate the same matrices a thousand times. I think there's nothing more to explain :)
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

... // somewhere
glm::mat4 projection = glm::perspective(45.0f, float(window_width) / window_height, 0.1f, 100.0f);
glm::mat4 model(1.0f); // world/model matrix
glm::mat4 view(1.0f);  // view/camera matrix
glm::mat4 mvp = projection * view * model;

... // inside the main loop
glUniformMatrix4fv(glGetUniformLocation(program, "mvpMatrix"), 1, GL_FALSE, &mvp[0][0]);
draw_mesh();
It's really cool and optimised :)

Calling glDrawElements after glInterleavedArrays isn't working

I am writing some OpenGL wrappers and am trying to run the following code:
void some_func1() {
    float vertices[] = {50.0, 50.0, 0.0, 20.0, 50.0, 0.0, 20.0, 60.0, 0.0};
    glColor3f(1.0, 0.0, 0.0);
    glInterleavedArrays(GL_V3F, 0, vertices);
}

void some_func2() {
    int indices[] = {0, 1, 2};
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, indices);
}

void parent_func() {
    some_func1();
    some_func2();
}
But it seems that OpenGL is not picking up the call to glDrawElements in the second function. My routine opens a window, clears it to black, and draws nothing. What's weird is that this code
void some_func1() {
    float vertices[] = {50.0, 50.0, 0.0, 20.0, 50.0, 0.0, 20.0, 60.0, 0.0};
    int indices[] = {0, 1, 2};
    glColor3f(1.0, 0.0, 0.0);
    glInterleavedArrays(GL_V3F, 0, vertices);
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, indices);
}

void parent_func() {
    some_func1();
}
works exactly as expected: a red triangle is drawn. I've looked through the documentation and searched around, but I can't find any reason why glDrawElements wouldn't work, or would somehow miss data when called from another function. Any ideas?
FYI: I am running this on an Ubuntu 12.04 VM through VirtualBox, with a 32-bit processor on the host, and freeglut doing my window handling. I have also set LIBGL_ALWAYS_INDIRECT=1 to work around an issue with the VM's 3D rendering. (Not sure if any of that matters, but... :))
The reason is that at the point of drawing with glDrawElements, there is no valid vertex data to draw. When calling glInterleavedArrays (which just does a bunch of gl...Pointer calls under the hood) you are merely telling OpenGL where to find the vertex data, without copying anything. The actual data is not accessed before the drawing operation (glDrawElements). So in some_func1 you are setting a pointer to the local variable vertices, which no longer exists once the function returns. This doesn't happen in your modified code, where the pointer is set and drawn within the same function.
So either make this array survive until the glDrawElements call or, even better, make OpenGL actually store the vertex data itself by employing a vertex buffer object and performing an actual data copy. In that case you might also want to refrain from the awfully deprecated glInterleavedArrays function (which isn't much more than a software wrapper around the proper gl...Pointer and glEnableClientState calls anyway).
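A sketch of the VBO variant (assuming a GL 1.5+ context): glBufferData copies the data immediately, so the local array can safely go out of scope before drawing.
GLuint vbo;

void some_func1(void) {
    float vertices[] = {50.0f, 50.0f, 0.0f, 20.0f, 50.0f, 0.0f, 20.0f, 60.0f, 0.0f};
    glColor3f(1.0f, 0.0f, 0.0f);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW); /* copied now */
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (void*)0); /* sourced from the bound VBO */
}

void some_func2(void) {
    int indices[] = {0, 1, 2};
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, indices); /* vertex data lives in the VBO */
}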

3D pyramid appears scattered, with mixed up sides

First of all, I defined a structure to express the coordinates of a pyramid:
typedef struct
{
    GLfloat xUp;
    GLfloat yUp;
    GLfloat zUp;
    GLfloat base;
    GLfloat height;
} pyramid;
Pretty self-explanatory: I store the coordinates of the topmost point, plus the base size and the height.
Then I wrote a function to draw a pyramid:
void drawPyramid(pyramid pyr)
{
    GLfloat p1[] = {pyr.xUp+pyr.base/2.0, pyr.yUp-pyr.height, pyr.zUp-pyr.base/2.0};
    GLfloat p2[] = {pyr.xUp+pyr.base/2.0, pyr.yUp-pyr.height, pyr.zUp+pyr.base/2.0};
    GLfloat p3[] = {pyr.xUp-pyr.base/2.0, pyr.yUp-pyr.height, pyr.zUp+pyr.base/2.0};
    GLfloat p4[] = {pyr.xUp-pyr.base/2.0, pyr.yUp-pyr.height, pyr.zUp-pyr.base/2.0};
    GLfloat up[] = {pyr.xUp, pyr.yUp, pyr.zUp};

    glBegin(GL_TRIANGLES);
    glColor4f(1.0, 0.0, 0.0, 0.0);
    glVertex3fv(up);
    glVertex3fv(p1);
    glVertex3fv(p2);

    glColor4f(0.0, 1.0, 0.0, 0.0);
    glVertex3fv(up);
    glVertex3fv(p2);
    glVertex3fv(p3);

    glColor4f(0.0, 0.0, 1.0, 0.0);
    glVertex3fv(up);
    glVertex3fv(p3);
    glVertex3fv(p4);

    glColor4f(1.0, 1.0, 0.0, 0.0);
    glVertex3fv(up);
    glVertex3fv(p4);
    glVertex3fv(p1);
    glEnd();

    glColor4f(0.0, 1.0, 1.0, 0.0);
    glBegin(GL_QUADS);
    glVertex3fv(p1);
    glVertex3fv(p2);
    glVertex3fv(p3);
    glVertex3fv(p4);
    glEnd();
}
I struggled to draw all the vertices in anti-clockwise order, but I probably messed something up.
This is how I display the pyramid in my rendering function:
void display()
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glTranslatef(0.0, -25.0, 50.0);
    glRotatef(-angle, 0.0, 1.0, 0.0);
    glTranslatef(0.0, 25.0, -50.0);

    pyramid pyr;
    pyr.xUp = 0.0;
    pyr.yUp = 10.0;
    pyr.zUp = 50.0;
    pyr.base = 10.0;
    pyr.height = 18.0;

    glColor4f(1.0, 0.0, 0.0, 0.0);
    drawPyramid(pyr);
    glutSwapBuffers();
}
I also use an init method, called before the GLUT main loop:
void init()
{
    glEnable(GL_DEPTH);
    glViewport(-1.0, 1.0, -1.0, 1.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(35.0, 1.0, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 1.0, 0.0,  0.0, 1.0, 30.0,  0.0, 1.0, 0.0);
}
angle is just a double that I use to rotate the pyramid, changeable by pressing 'r', but that is not relevant here. It appears that the real problem is how I draw the vertices.
The problem is that the faces of the pyramid appear scattered and mixed up. I would better describe this situation with an image:
There's a face that is too small being displayed, and I don't know why.
If I rotate the pyramid it appears messed up; I even recorded a video to show this.
I could upload it later if the problem is not totally clear.
PS: Many people have noticed that I am using outdated techniques. But unfortunately this is what my university offers.
EDIT: I forgot to mention the main function:
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitWindowPosition(100, 100);
    glutInitWindowSize(500, 500);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
    glutCreateWindow("Sierpinsky Pyramid");
    glutDisplayFunc(display);
    glutKeyboardFunc(keyboard);
    init();
    glutMainLoop();
    return 0;
}
It looks like the depth buffer isn't initialized.
Calling glEnable(GL_DEPTH_TEST) is not enough. You must correctly initialize GLUT and specify that you want depth buffer support, otherwise you won't get a depth buffer. If I remember correctly, this is done using glutInitDisplayMode(GLUT_DEPTH | ...). See the GLUT documentation; additional info can be found using Google.
EDIT:
You're passing an invalid parameter to glEnable: call glEnable(GL_DEPTH_TEST) instead of glEnable(GL_DEPTH). A sketch combining the fixes follows the list below.
Also:
The matrix code in the display function isn't protected by glPushMatrix/glPopMatrix, which means that every rotation is applied on top of the previous transform; i.e., every call to the display function rotates the pyramid a bit further.
glViewport is called with invalid parameters: it takes 4 integer arguments (x, y, width, height), but you're trying to pass floats. Also, what is a width of -1.0 supposed to mean?
You aren't checking any error codes (glGetError). If you called glGetError after the glEnable call, you'd see that it returns GL_INVALID_ENUM.
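A minimal sketch of the corrected init and display (assuming your existing pyramid type, drawPyramid, and angle; the invalid glViewport call is simply dropped):
void init(void)
{
    glEnable(GL_DEPTH_TEST);  /* a valid cap, unlike GL_DEPTH */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(35.0, 1.0, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 1.0, 0.0,  0.0, 1.0, 30.0,  0.0, 1.0, 0.0);
}

void display(void)
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glPushMatrix();  /* isolate this frame's transform so rotation doesn't accumulate */
    glTranslatef(0.0, -25.0, 50.0);
    glRotatef(-angle, 0.0, 1.0, 0.0);
    glTranslatef(0.0, 25.0, -50.0);

    pyramid pyr = { 0.0, 10.0, 50.0, 10.0, 18.0 };
    drawPyramid(pyr);
    glPopMatrix();

    glutSwapBuffers();
}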
OpenGL has documentation, available on opengl.org. Use it and read it. I'd also recommend reading the OpenGL Red Book.
