Optimising the rendering of quads using GLUT? - c

I'm making a 3D voxel engine, similar to Minecraft. I have some world generation and chunk logic working; however, with a render distance of 12 chunks (which seems fairly typical for Minecraft), roughly 2 * 12 * 12 * 16 * 16 * 2, there is the potential of upwards of 150,000 faces needing to be rendered. Before trying to optimize the engine as a whole, I ran a test where I just rendered one face 150,000 times. In theory, since the points don't have to be recalculated in 3D space each time, this should be just about the least computationally expensive drawing the engine would ever have to do.
Nevertheless, running the following:
glBegin(GL_QUADS);
glColor3f(1, 0, 0);
for (int i = 0; i < 150000; i++) {
    glVertex3fv(renderp1);
    glVertex3fv(renderp2);
    glVertex3fv(renderp3);
    glVertex3fv(renderp4);
}
glEnd();
Even when there's no texture and the points are all the same, I still get a very poor frame rate, which makes the engine unusable.
I know modern games have meshes with upwards of 100,000 polygons and run fantastically, which makes me wonder why this code is so slow. Is rendering with this technique a horrible way to go about it? How could I achieve such a render?

The first thing you should do (before any of the shader stuff) is to stop using glBegin/glEnd and start using glDrawArrays or glDrawElements.
e.g.
// define data structures
struct vec3 { GLfloat x, y, z; };
struct vertex_t {
    vec3 position, color;
};

// define data (just a single triangle with RGB colors)
static const vertex_t vertices[] = {
    { {  0.0f,  0.5f, 0.0f }, { 1, 0, 0 } },
    { {  0.5f, -0.5f, 0.0f }, { 0, 1, 0 } },
    { { -0.5f, -0.5f, 0.0f }, { 0, 0, 1 } }
};
...
// setup the arrays
glVertexPointer(3, GL_FLOAT, sizeof(vertex_t), (char*)vertices + offsetof(vertex_t, position) );
glColorPointer(3, GL_FLOAT, sizeof(vertex_t), (char*)vertices + offsetof(vertex_t, color) );
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
...
// draw
glDrawArrays(GL_TRIANGLES, 0, 3);
This is OpenGL 1.1 stuff. It can be further improved with a VBO (Vertex Buffer Object), which requires OpenGL 1.5:
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// now replace vertices with nullptr in calls to glVertexPointer and glColorPointer
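With the VBO bound, the last argument of those calls becomes a byte offset into the buffer rather than a client-memory address; a minimal sketch of what that looks like (same vertex_t layout as above):
// with the VBO bound, the pointer arguments are byte offsets into the buffer
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexPointer(3, GL_FLOAT, sizeof(vertex_t), (const void*)offsetof(vertex_t, position));
glColorPointer(3, GL_FLOAT, sizeof(vertex_t), (const void*)offsetof(vertex_t, color));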
and we haven't even touched shaders yet.
For OpenGL 2.x style shaders, the above code can stay as is, or you can further "modernize" it by replacing glVertexPointer/glColorPointer with glVertexAttribPointer, and glEnableClientState with glEnableVertexAttribArray, to make "modern OpenGL" people happy.
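For reference, a minimal sketch of that variant, with the VBO still bound and assuming a shader program is in use that declares the position at attribute location 0 and the color at location 1 (the locations are an assumption, not something the code above defines):
// assumes a bound shader program with position at location 0 and color at location 1
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(vertex_t), (const void*)offsetof(vertex_t, position));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(vertex_t), (const void*)offsetof(vertex_t, color));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glDrawArrays(GL_TRIANGLES, 0, 3);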
But just using glDrawArrays, OpenGL 1.1 style, should be enough to resolve the performance problem.
This way you don't call glVertex/glColor 100000 times, but a single call to glDrawArrays can draw 100000 vertices at once (and if using VBO, they're already in the GPU memory).
And oh, quads have been deprecated since OpenGL 3.0; we're supposed to build everything from triangles.
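For the voxel faces, that just means submitting each quad as two triangles that share four vertices through an index buffer; a rough sketch, building on the VBO setup above (the data and names are illustrative):
// four corners per face, two triangles per face sharing them via indices
static const GLushort quad_indices[] = { 0, 1, 2,  0, 2, 3 };
GLuint ibo;
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(quad_indices), quad_indices, GL_STATIC_DRAW);
// ... vertex arrays set up as above, with 4 vertices for the face ...
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);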

Related

C - Depth effect in OpenGL

I read this tutorial online: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/. More precisely, the chapter about projection matrices.
I am trying to apply what I read to a specific part of what I drew. However, I don't see any difference. Did I misunderstand something, or am I just applying it incorrectly?
I am actually drawing 2 houses. And I would like to create this effect: http://www.opengl-tutorial.org/assets/images/tuto-3-matrix/homogeneous.png
My code:
void rescale()
{
    glViewport(0, 0, 500, 500);
    glLoadIdentity();
    glMatrixMode(GL_PROJECTION);
    glOrtho(-6.500, 6.500, -6.500, 6.500, -6.00, 12.00);
}

void setup()
{
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glEnable(GL_DEPTH_TEST);
}

void draw()
{
    glPushMatrix();
    //draw something in the orthogonal projection
    glPopMatrix();

    glPushMatrix();
    gluPerspective(0.0, 1, -12, 30); //perspective projection

    glPushMatrix();
    glScalef(0.2, 0.2, 0.2);
    glTranslatef(-1, 0.5, -5);
    glColor3f(1.0, 0.0, 0.0);
    drawVertexCube();
    glPopMatrix();

    glPushMatrix();
    glScalef(0.2, 0.2, 0.2);
    glTranslatef(-1, 0.5, -40);
    glColor3f(0, 1.0, 0.0);
    drawVertexCube();
    glPopMatrix();

    glPopMatrix();
    glFlush();
}
This is the result:
As you can see there is no (visible) result. I would like to accentuate the effect much more.
Your gluPerspective call is invalid:
gluPerspective(0.0, 1, -12, 30);
The near and far parameters must both be positive; otherwise the call will just generate a GL error and leave the current matrix unmodified, so your ortho projection is still in use. Your glPushMatrix()/glPopMatrix() calls won't help, because you never enclosed the glOrtho() call in such a block, so you never restore to a state where that matrix is not already applied. Furthermore, the glMatrixMode() in between the glLoadIdentity() and the glOrtho() also seems quite odd.
You should also be aware that all GL matrix functions besides the glLoad* ones will multiply the current matrix by a new matrix. Since you still have the ortho matrix applied, you would get the product of both matrices, which will totally screw up your results.
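A minimal sketch of a corrected setup along those lines (the values are placeholders): select the projection matrix, reset it, and only then apply a perspective projection with positive near/far planes:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();                      /* reset so the new projection isn't multiplied onto the old one */
gluPerspective(45.0, 1.0, 0.1, 100.0); /* fovy, aspect, near > 0, far > near */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
/* ... then draw the cubes ... */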
Finally, you should be aware that all of the GL matrix stack functionality is deprecated and completely removed in the core profile of modern OpenGL. If you are learning OpenGL nowadays, you should really consider learning the "modern" way of OpenGL (which is basically already a decade old).

In OpenGL ES 2.0 depth texture buffer is not working when using FBO with texture depth buffer

Currently I'm working on a project that renders applications to a Framebuffer Object (FBO) first, and then renders the applications back using the FBO's color and depth texture attachments, in OpenGL ES 2.0.
Multiple applications now render fine with the color buffers. But when I try to use the depth information from the depth texture buffer, it does not seem to work.
I tried to render the depth texture by sampling it with the texture coordinates, and it is all white. People say the grayscale values may differ only slightly, i.e. even in the darker parts they are close to 1.0. So I modified my fragment shader to the following:
vec4 depth;
depth = texture2D(s_depth0, v_texCoord);
if (depth.r == 1.0)
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
And, unsurprisingly, it's all red.
The application code:
void Draw ( ESContext *esContext )
{
    UserData *userData = esContext->userData;

    // Clear the color buffer
    glClear ( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

    // Draw a triangle
    GLfloat vVertices[] = {  0.0f,  0.5f,  0.5f,
                            -0.5f, -0.5f, -0.5f,
                             0.5f, -0.5f, -0.5f };

    // Set the viewport
    glViewport ( 0, 0, esContext->width, esContext->height );

    // Use the program object
    glUseProgram ( userData->programObject );

    // Load the vertex position
    glVertexAttribPointer ( 0, 3, GL_FLOAT, GL_FALSE, 0, vVertices );
    glEnableVertexAttribArray ( 0 );

    glDrawArrays ( GL_TRIANGLES, 0, 3 );

    eglSwapBuffers ( esContext->eglDisplay, esContext->eglSurface );
}
So what could the problem be, if the color buffer works fine while the depth buffer doesn't?
I finally solved it. The reason turned out to be that the client applications never enabled GL_DEPTH_TEST.
So if you hit the same problem, be sure to enable depth testing by calling glEnable(GL_DEPTH_TEST); during OpenGL ES initialization. By default it's disabled, for the sake of performance I guess.
Thanks for all the advice and answers.
The depth texture needs to be linearized to be viewed meaningfully, because depth values are stored non-linearly. Try this in the fragment shader:
uniform float farClip, nearClip;

float depth = texture2D(s_depth0, v_texCoord).x;
float depthLinear = (2.0 * nearClip) / (farClip + nearClip - depth * (farClip - nearClip));
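On the application side, those two uniforms still have to be fed with the clip planes of your projection; a minimal sketch, reusing userData->programObject from the Draw() function above (the 0.1/100.0 values are placeholders and must match the projection that produced the depth texture):
GLint nearLoc = glGetUniformLocation(userData->programObject, "nearClip");
GLint farLoc  = glGetUniformLocation(userData->programObject, "farClip");
glUseProgram(userData->programObject);
glUniform1f(nearLoc, 0.1f);   /* near plane of the projection */
glUniform1f(farLoc, 100.0f);  /* far plane of the projection */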

Fill the area with texture cocos2d-x

I've been trying to make an "Obstacle" class which builds a Box2D body from an array of points and draws the area that the body covers. The body part works totally OK: I receive an array of points, build a b2PolygonShape, and so on. BUT I really don't know how to fill the area, built from the points array, with a color or a texture. Here's my draw() method:
void Obstacle::draw(cocos2d::Renderer *renderer, const cocos2d::Mat4 &transform, uint32_t flags)
{
    CC_NODE_DRAW_SETUP();

    glBlendFunc(CC_BLEND_SRC, CC_BLEND_DST);
    GL::bindTexture2D(obstacleTexture->getName());
    //DrawPrimitives::setDrawColor4F(1.0, 1.0, 0.0, 1.0);

    glVertexAttribPointer(GLProgram::VERTEX_ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, vertices);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)shapePoints.size());
}
vertices is the array of points that I use for creating the b2Body.
You should triangulate the polygon shape you built in order to draw it.
poly2tri is a good option for triangulating shapes: https://code.google.com/p/poly2tri/
After triangulating your shape, map texture coordinates or set the vertex colors.
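For the texture coordinates, one common approach (a rough sketch, not cocos2d-x specific; vertexCount, texCoords and texSize are illustrative names) is to derive each vertex's UV directly from its position so a repeating texture tiles across the filled area:
/* illustrative sketch: UVs derived from positions so the texture repeats every texSize world units */
for (int i = 0; i < vertexCount; ++i) {
    texCoords[2 * i + 0] = vertices[2 * i + 0] / texSize;
    texCoords[2 * i + 1] = vertices[2 * i + 1] / texSize;
}
/* then pass texCoords through a second glVertexAttribPointer call (the texture-coordinate
   attribute) and draw the triangulated vertices with glDrawArrays(GL_TRIANGLES, 0, vertexCount) */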

Basic Vertex Buffer Example not working

I'm about to implement a very basic render module. Now it is time to change from the old way of rendering primitives to a modern approach using VBOs; so far I understand how it works, but I can't get my PoC working.
Loading the basic model (a triangle) generates no OpenGL errors (glBindVertexArray is a macro for glBindVertexArrayAPPLE):
float pos[] = {
    -1.0f, -1.0f, -5.0f,
    -1.0f,  1.0f, -5.0f,
     1.0f,  1.0f, -5.0f,
};

printf("%d %d", map_VAO, map_VBO);
checkGLError();
glGenVertexArrays(1, &map_VAO);
checkGLError();
glGenBuffers(1, &map_VBO);
printf("%d %d", map_VAO, map_VBO); // here with 4.1 map_VAO is 0
checkGLError();

glEnableClientState(GL_VERTEX_ARRAY);
glBindVertexArrays(map_VAO);
glBindBuffer(GL_ARRAY_BUFFER, map_VBO);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), &pos[0], GL_STATIC_DRAW);

glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);

glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArrays(0);
glDisableClientState(GL_VERTEX_ARRAY);

return 0;
And in the main loop (drawing part) :
// .. clear buffers load identity etc...
glColor3f(0.33f, 0.0f, 0.0f);

glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, map_VBO);
glBindVertexArrayAPPLE(map_VAO);
glEnableVertexAttribArray(0);

glDrawArrays(GL_TRIANGLES, 0, 3);

glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArrayAPPLE(0);
glDisableClientState(GL_VERTEX_ARRAY);
New drawing part (removing unnecessary client state and binds):
glColor3f(0.33f,0.0f,0.0f);
glBindVertexArrayAPPLE(map_VAO);
glDrawArrays(GL_TRIANGLES, 0, 3);
But nothing is displayed. I have tried changing the profile and the OpenGL version, but other problems arise.
I can draw a simple triangle with the old approach:
glBegin(GL_TRIANGLES);
glVertex3f(-1.0f, -1.0f, -5.0f);
glVertex3f(-1.0f,  1.0f, -5.0f);
glVertex3f( 1.0f,  1.0f, -5.0f);
glEnd();
Question: What am I doing wrong? Is there some kind of activation related to VBOs and VAOs?
Additional question: why, when I use an OpenGL 4.1 Core profile, can't I get a VAO name with glGenVertexArrays? (It reports an invalid operation.)
A few things:
glEnableClientState is deprecated. glEnableClientState is used to tell OpenGL that you're using a vertex array for fixed-function functionality, which you're not using anymore, so there is no use calling this function (and it probably causes weird results).
glEnableVertexAttribArray(0); There is no need to enable it again in your drawing function; enabling the 0th vertex attribute was stored in your VAO.
glBindBuffer(GL_ARRAY_BUFFER, map_VBO); Also, no need to call this in the drawing function; glVertexAttribPointer stored the VBO binding while you configured the VAO.
So remove the glEnable/Disable-ClientState calls and remember that, in your case, you just need to bind the VAO. I believe the cause of your error is point 1; points 2 and 3 are just to improve your code ;)
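Putting those points together, a minimal sketch of what the setup and the draw could collapse to, reusing the map_VAO/map_VBO/pos names from the question:
/* setup: the VAO records the VBO binding and attribute 0 */
glGenVertexArrays(1, &map_VAO);
glBindVertexArray(map_VAO);
glGenBuffers(1, &map_VBO);
glBindBuffer(GL_ARRAY_BUFFER, map_VBO);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), pos, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glBindVertexArray(0);

/* draw: binding the VAO restores everything recorded above */
glBindVertexArray(map_VAO);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);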
You did not wrap glGenVertexArrays around glGenVertexArraysAPPLE did you? (like you mentioned doing for glBindVertexArray)
That function does not exist in core profiles on OS X, you will notice a distinct lack of GL_APPLE_vertex_array_object from the extensions string. It exists in Legacy (2.1) profiles as seen here but not in Core (3.2+) as seen here.
You are supposed to #include <OpenGL/gl3.h> when using a core profile on OS X and call glGenVertexArrays (...) instead of glGenVertexArraysAPPLE (...).
Only call VertexArray*APPLE functions in an OpenGL 2.1 context on OS X or you will get GL_INVALID_OPERATION errors.
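In other words, a rough sketch of picking the header and the entry points per profile (USE_CORE_PROFILE is a hypothetical build flag, not something the system headers define):
#ifdef USE_CORE_PROFILE                 /* hypothetical flag: building against a 3.2+ core context */
#  include <OpenGL/gl3.h>
#  define genVertexArrays glGenVertexArrays
#  define bindVertexArray glBindVertexArray
#else                                   /* legacy 2.1 context */
#  include <OpenGL/gl.h>
#  define genVertexArrays glGenVertexArraysAPPLE
#  define bindVertexArray glBindVertexArrayAPPLE
#endif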

Basic rendering of comet Wild 2 shape data using OpenGL

I want to learn OpenGL, and decided to start with a very simple example - rendering the shape of comet Wild 2 as inferred from measurements from the Stardust spacecraft (details about the data in: http://nssdc.gsfc.nasa.gov/nmc/masterCatalog.do?ds=PSSB-00133). Please keep in mind that I know absolutely NOTHING about OpenGL. Some Google-fu helped me get as far as the code presented below. Despite my best efforts, my comet sucks:
I would like for it to look prettier, and I have no idea how to proceed (besides reading the Red book, or similar). For example:
How can I make a very basic "wireframe" rendering of the shape?
Suppose the Sun is along the "bottom" direction (i.e., along -Y), how can I add the light and see the shadow on the other side?
How can I add "mouse events" so that I can rotate my view by, and zoom in/out?
How can I make this monster look prettier? Any references to on-line tutorials, or code examples?
I placed the source code, data, and makefile (for OS X) in bitbucket:
hg clone https://arrieta#bitbucket.org/arrieta/learning-opengl
The data consists of 8,761 triplets (the vertices, in a body-fixed frame) and 17,518 triangles (each triangle is a triplet of integers referring to one of the 8,761 vertex triplets).
#include <stdio.h>
#include <stdlib.h>

#include <OpenGL/gl.h>
#include <OpenGL/glu.h>

// I added this in case you want to "copy/paste" the program into a
// non-Mac computer
#ifdef __APPLE__
#  include <GLUT/glut.h>
#else
#  include <GL/glut.h>
#endif

/* I hardcoded the data and use globals. I know it sucks, but I was in
   a hurry. */
#define NF 17518
#define NV 8761

unsigned int fs[3 * NF];
float vs[3 * NV];
float angle = 0.0f;

/* callback when the window changes size (copied from Internet example) */
void changeSize(int w, int h) {
    if (h == 0) h = 1;
    float ratio = w * 1.0 / h;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glViewport(0, 0, w, h);
    gluPerspective(45.0f, ratio, 0.2f, 50000.0f); /* 45 degrees fov in Y direction; 50km z-clipping */
    glMatrixMode(GL_MODELVIEW);
}

/* this renders and updates the scene (mostly copied from Internet examples) */
void renderScene() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(0.0f, 0.0f, 10000.0f, /* eye is looking down along the Z-direction at 10km */
              0.0f, 0.0f, 0.0f,     /* center at (0, 0, 0) */
              0.0f, 1.0f, 0.0f);    /* y direction along natural y-axis */

    /* just add a simple rotation */
    glRotatef(angle, 0.0f, 0.0f, 1.0f);

    /* use the facets and vertices to insert triangles in the buffer */
    glBegin(GL_TRIANGLES);
    unsigned int counter;
    for (counter = 0; counter < 3 * NF; ++counter) {
        glVertex3fv(vs + 3 * fs[counter]); /* here is where I'm loading
                                              the data - why do I need to
                                              load it every time? */
    }
    glEnd();

    angle += 0.1f; /* update the rotation angle */

    glutSwapBuffers();
}

int main(int argc, char* argv[]) {
    FILE *fp;
    unsigned int counter;

    /* load vertices */
    fp = fopen("wild2.vs", "r");
    counter = 0;
    while (fscanf(fp, "%f", &vs[counter++]) > 0);
    fclose(fp);

    /* load facets */
    fp = fopen("wild2.fs", "r");
    counter = 0;
    while (fscanf(fp, "%d", &fs[counter++]) > 0);
    fclose(fp);

    /* this initialization and "configuration" is mostly copied from Internet */
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowPosition(0, 0);
    glutInitWindowSize(1024, 1024);
    glutCreateWindow("Wild-2 Shape");

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glEnable(GL_DEPTH_TEST);

    GLfloat mat_specular[] = { 1.0, 1.0, 1.0, 1.0 };
    GLfloat mat_shininess[] = { 30.0 };
    GLfloat light_position[] = { 3000.0, 3000.0, 3000.0, 0.0 };
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glShadeModel(GL_SMOOTH);
    glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
    glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess);
    glLightfv(GL_LIGHT0, GL_POSITION, light_position);

    glutDisplayFunc(renderScene);
    glutReshapeFunc(changeSize);
    glutIdleFunc(renderScene);
    glutMainLoop();

    return 0;
}
EDIT
It is starting to look better, and I now have plenty of resources to look into for the time being. It still sucks, but my questions have been answered!
I added the normals, and can switch back and forth between the "texture" and the wireframe:
PS. The repository shows the changes made as per SeedmanJ's suggestions.
It's really easy to change to wireframe rendering in OpenGL; you'll have to use
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
and to switch back to a fill rendering,
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
About the lights: fixed-function OpenGL gives you at least 8 lights (GL_LIGHT0 through GL_LIGHT7), generating your final rendering from the normals and materials. You can activate lighting with:
glEnable(GL_LIGHTING);
and then activate each of your lights with either:
glEnable(GL_LIGHT0);
glEnable(GL_LIGHT1);
to change a light property like its position, please look at
http://linux.die.net/man/3/gllightfv
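For the "Sun along -Y" case from the question, a hedged sketch of a directional light: with w = 0 the position is interpreted as the direction toward the light source, so a sun below the body becomes:
GLfloat sun_dir[] = { 0.0f, -1.0f, 0.0f, 0.0f }; /* w = 0: directional light coming from -Y */
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_POSITION, sun_dir);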
You'll have to set up a normal for each vertex you define if you're using the glBegin() method. With VBO rendering it's the same, except the normals are also stored in VRAM. With the glBegin() method you can use, for example,
glNormal3f(x, y, z);
for each vertex you define.
And for more information about what you can do, the redbook is a good way to begin.
Moving your "scene" is one more thing OpenGL indirectly allows you to do. As it all works with matrix,
you can either use
glTranslate3f(x, y, z);
glRotate3f(num, x, y, z);
....
Managing key and mouse events has (I'm almost sure about that) nothing to do with OpenGL; it depends on the lib you're using, for example GLUT/SDL/..., so you'll have to refer to their own documentation.
Finally, for further information about the functions you can use, see http://www.opengl.org/sdk/docs/man/, and there's also a tutorial section leading you to various interesting websites.
Hope this helps!
How can I make a very basic "wireframe" rendering of the shape?
glPolygonMode( GL_FRONT, GL_LINE );
Suppose the Sun is along the "bottom" direction (i.e., along -Y), how can I add the light and see the shadow on the other side?
Good shadows are hard, especially with the fixed-function pipeline.
But before that you need normals to go with your vertices. You can calculate per-face normals pretty easily.
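A minimal sketch of computing one per-face normal from a triangle's three vertices (the helper name is illustrative; needs <math.h> for sqrtf). You would call glNormal3fv(n) before the three glVertex3fv calls of each triangle:
/* normal of triangle (a, b, c) = normalize(cross(b - a, c - a)) */
void face_normal(const float a[3], const float b[3], const float c[3], float n[3])
{
    float u[3] = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
    float v[3] = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
    n[0] = u[1] * v[2] - u[2] * v[1];
    n[1] = u[2] * v[0] - u[0] * v[2];
    n[2] = u[0] * v[1] - u[1] * v[0];
    float len = sqrtf(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }
}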
How can I add "mouse events" so that I can rotate my view by, and zoom in/out?
Try the mouse handlers I did here.
Though some like to say "start with something simpler", I think sometimes you need to "dive in" to get a good understanding in a short time span! Well done!
Also, if you would like an example, please ask...
I have written a WELL DOCUMENTED, efficient,
but readable pure Win32 (no .NET or MFC) OpenGL FPS!
Though it appears other people answered most of your questions...
I can help you if you would like, maybe make a cool texture (if you don't have one)...
To answer this question:
glBegin(GL_TRIANGLES);
unsigned int counter;
for (counter = 0; counter < 3 * NF; ++counter) {
    glVertex3fv(vs + 3 * fs[counter]); /* here is where I'm loading
                                          the data - why do I need to
                                          load it every time? */
}
glEnd();
That is rendering the vertices of the 3D model (in case the view has changed)
and, using the DC (Device Context), BitBlt'ing it onto the window!
It has to be done repeatedly (in case something has caused the window to clear)...
