Infinite loop on OpenGL - C

I decided that whenever the user tries to resize the window, it should snap back to a preset size, which makes drawing my graph nodes easier. On macOS my application works properly, but on Linux the resize function enters an infinite loop, and I don't know why. After a number of loop iterations I get a Segmentation fault (core dumped).
Here's my OpenGL configuration (main function):
glutInit(&argc, argv);
glutInitDisplayMode ( GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);
glutInitWindowSize(WINDOW_WIDTH, WINDOW_HEIGHT);
glutInitWindowPosition(100, 100);
glutCreateWindow(APP_NAME);
glClearColor(0.0, 0.0, 0.0, 0.0); // black background
glMatrixMode(GL_PROJECTION); // setup viewing projection
glLoadIdentity(); // start with identity matrix
glOrtho(0.0, 50.0, 0.0, 50.0, 0.0, 0.1); // setup a 50x50 viewing world
glutDisplayFunc(display);
glutReshapeFunc(resize);
glutMainLoop();
and here are my display and resize function implementations:
void display() {
    Matrix* distanceMatrix = NULL;
    PalleteNodePosition* nodesPositions = NULL;
    distanceMatrix = fromFile(inputFileName);
    printf("Finish input parsing...\n");
    nodesPositions = calculateNodesPositions(distanceMatrix);
    printf("Finish calculating nodes position on screen...\n");
    glClear(GL_COLOR_BUFFER_BIT);
    drawNodes(nodesPositions, distanceMatrix->width);
    drawLink(10, 10, 18, 18);
    glFlush();
}

void resize(int w, int h) {
    glutReshapeWindow(WINDOW_WIDTH, WINDOW_HEIGHT);
}
When I print the arguments of the resize call, w equals WINDOW_WIDTH and h equals WINDOW_HEIGHT as expected, so why does the app keep resizing the window every time?

Your resize() callback is indirectly calling itself, even if it does so in an asynchronous way. You ask the windowing system to resize your window; later you receive the event that says your window has been resized; then your callback is triggered, which leads to a new resize request... If nobody stops this loop (apparently the windowing system does not detect that the resize is not actually needed), it is infinite.
Maybe you should consider comparing w and h to the expected values inside your resize() callback, and only invoke glutReshapeWindow() if it is actually needed.
You should also be aware that the inner size and the outer size of the window are probably different (border, title-bar...).
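A minimal sketch of that guard, assuming WINDOW_WIDTH and WINDOW_HEIGHT are the preset dimensions from the question:

void resize(int w, int h) {
    /* Only request a resize when the size actually differs;
       this breaks the resize-event feedback loop. */
    if (w != WINDOW_WIDTH || h != WINDOW_HEIGHT)
        glutReshapeWindow(WINDOW_WIDTH, WINDOW_HEIGHT);
}

Note that both the reshape callback and glutReshapeWindow work with the client-area (inner) size, so comparing against the same constants used for the request keeps the check consistent.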

Related

Using gluLookAt() correctly?

I am trying to set the angle of view with gluLookAt().
Here is my code, where I tried to set the camera without results.
Here is the function displayCone():
void displayCone(void)
{
    glMatrixMode(GL_MODELVIEW);
    // clear the drawing buffer.
    glClear(GL_COLOR_BUFFER_BIT);
    // reset to the identity matrix.
    glLoadIdentity();
    // translate the drawing by z = -4.5
    // Note: when you decrease z, e.g. to -8.0, the drawing looks farther away, i.e. smaller.
    glTranslatef(0.0, 0.0, -4.5);
    // red color used to draw.
    glColor3f(0.8, 0.2, 0.1);
    // changes to the transformation matrix:
    // rotation about the X axis
    glRotatef(xRotated, 1.0, 0.0, 0.0);
    // rotation about the Y axis
    glRotatef(yRotated, 0.0, 1.0, 0.0);
    // rotation about the Z axis
    glRotatef(zRotated, 0.0, 0.0, 1.0);
    // scaling transformation
    glScalef(1.0, 1.0, 1.0);
    // built-in (GLUT library) function that draws a cone;
    // move the peak of the cone to the origin first
    glTranslatef(0.0, 0.0, -height);
    glutSolidCone(base, height, slices, stacks);
    // flush buffers to screen
    gluLookAt(3, 3, 3, 0, 0, -4.5, 0, 1, 0);
    glFlush();
    // swap buffers would be called here if we were using double buffering
    // glutSwapBuffers();
}
With my main:
int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    // single buffering is used here; double buffering would avoid flicker in animation
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    // window size
    glutInitWindowSize(400, 350);
    // create the window
    glutCreateWindow("Cone Rotating Animation");
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glClearColor(0.0, 0.0, 0.0, 0.0);
    // assign the functions used for events
    glutDisplayFunc(displayCone);
    glutReshapeFunc(reshapeCone);
    glutIdleFunc(idleCone);
    // start the GLUT loop
    glutMainLoop();
    return 0;
}
The function idleCone changes the values of xRotated, yRotated, etc., and displays the cone. Any ideas?
I am pretty sure I haven't understood the right moment to use gluLookAt()...
gluLookAt changes the current matrix, similar to glTranslatef or glRotatef: the operation defines a transformation matrix and multiplies the current matrix by it. Because each new transformation is multiplied onto the right of the current matrix, the transformation specified first is applied to the vertices last; the view transformation therefore has to be the first call after glLoadIdentity, before the model transformations and glutSolidCone, e.g.:
void displayCone(void)
{
    // set matrix mode
    glMatrixMode(GL_MODELVIEW);
    // clear the model-view matrix
    glLoadIdentity();
    // multiply the view matrix onto the current matrix
    gluLookAt(3, 3, 3, 0, 0, -4.5, 0, 1, 0); // <----------------------- add
    // clear the drawing buffer.
    glClear(GL_COLOR_BUFFER_BIT);
    // translate the drawing by z = -4.5
    glTranslatef(0.0, 0.0, -4.5);
    // red color used to draw.
    glColor3f(0.8, 0.2, 0.1);
    // rotation about the X axis
    glRotatef(xRotated, 1.0, 0.0, 0.0);
    // rotation about the Y axis
    glRotatef(yRotated, 0.0, 1.0, 0.0);
    // rotation about the Z axis
    glRotatef(zRotated, 0.0, 0.0, 1.0);
    // scaling transformation
    glScalef(1.0, 1.0, 1.0);
    // move the peak of the cone to the origin, then draw it
    glTranslatef(0.0, 0.0, -height);
    glutSolidCone(base, height, slices, stacks);
    // flush buffers to screen
    // gluLookAt(3,3,3,0,0,-4.5,0,1,0); <----------------------- delete
    glFlush();
    // glutSwapBuffers(); // only with double buffering
}

OpenGL: hide parts of the screen

(Code snippet: I know it's ugly, but I wanted to make it work before making it better, so please don't pay too much attention to the structure.)
I slightly modified the GLFW example from the documentation to get a triangle that rotates when pressing the right arrow key and draws the circle described by the position of one of its vertices (the blue one in this case).
I clear GL_COLOR_BUFFER_BIT only when initializing the window, to avoid having to store all the coordinates needed to draw the line (there would be hundreds of thousands in the final program). That means that every time I press the right arrow, a "copy" of the triangle is drawn on screen rotated by 12 degrees, and a line is drawn connecting the old blue vertex position to the new one.
The problem is that I now want to be able to press the escape key (GLFW_KEY_ESCAPE) and "delete" the triangles while keeping the lines drawn.
I tried using a z-buffer to hide the triangles behind a black rectangle, but only the last line drawn is visible (I think this is because OpenGL doesn't know the z of the previous lines, since I don't store them).
Is there a way to do what I want without having to store all the point coordinates and then clearing the whole screen and redrawing only the lines? If that's the case, what would be the best way to store them?
Here is part of the code I have so far.
bool check = 0;
Vertex blue = {0.f, 0.6f, 0.5f};
Vertex green = {0.6f, -0.4f, 0.5f};
Vertex red = {-0.6f, -0.4f, 0.5f};
Vertex line = {0.f, 0.6f, 0.f};
Vertex line2 = {0.f, 0.6f, 0.f};

static void
key_callback(GLFWwindow *window, int key, int scancode, int action, int mods) {
    if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
        check = !check;
    if (key == GLFW_KEY_RIGHT && action == GLFW_PRESS) {
        line.x = line2.x;
        line.y = line2.y;
        rotation -= 12;
        rad = DegToRad(-12);
        double x = line.x * cos(rad) - line.y * sin(rad);
        double y = line.y * cos(rad) + line.x * sin(rad);
        line2.x = x;
        line2.y = y;
    }
}
int main(void) {
    GLFWwindow *window;
    glfwSetErrorCallback(error_callback);
    if (!glfwInit())
        exit(EXIT_FAILURE);
    window = glfwCreateWindow(1280, 720, "Example", NULL, NULL);
    if (!window) {
        glfwTerminate();
        exit(EXIT_FAILURE);
    }
    glfwMakeContextCurrent(window);
    glfwSetKeyCallback(window, key_callback);
    glClear(GL_COLOR_BUFFER_BIT);
    while (!glfwWindowShouldClose(window)) {
        glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
        float ratio;
        int width, height;
        glfwGetFramebufferSize(window, &width, &height);
        ratio = width / (float) height;
        glViewport(0, 0, width, height);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(-ratio, ratio, -1.f, 1.f, 1.f, -1.f);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef(rotation, 0.f, 0.f, 1.f);
        glBegin(GL_TRIANGLES);
        glColor3f(1.f, 0.f, 0.f);
        glVertex3f(red.x, red.y, red.z);
        glColor3f(0.f, 1.f, 0.f);
        glVertex3f(green.x, green.y, green.z);
        glColor3f(0.f, 0.f, 1.f);
        glVertex3f(blue.x, blue.y, blue.z);
        glEnd();
        glLoadIdentity();
        glLineWidth(1.0);
        glColor3f(1.0, 0.0, 0.0);
        glBegin(GL_LINES);
        glVertex3f(line.x, line.y, line.z);
        glVertex3f(line2.x, line2.y, line2.z);
        glEnd();
        if (check) {
            // hide the triangles but not the lines
        }
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwDestroyWindow(window);
    glfwTerminate();
    exit(EXIT_SUCCESS);
}
I clear the GL_COLOR_BUFFER_BIT only when initializing the window
That's your problem right there. It's idiomatic in OpenGL to always start by clearing the main framebuffer's color bits, because you don't know the state of your window's framebuffer when the operating system asks for a redraw. For all you know, it could have been replaced with cat pictures in the background without your program knowing it. Seriously: if you have a cat video running and the OS feels the need to rearrange your window's framebuffer memory, that is what you might end up with.
Is there a way to do what I want without having to store all the point coordinates and then clearing the whole screen and redrawing only the lines?
For all intents and purposes: No. In theory one could come up with a contraption made out of a convoluted series of stencil buffer operations to implement that, but this would be barking up a very wrong tree.
Here's something for you to try out: draw a bunch of triangles like you do, then resize your window down so that nothing remains, then resize it back to its original size… do you see the problem? There's a way to address this particular problem, but that's not what you should do here.
The correct thing is to redraw everything. If you feel that's too slow, you have to optimize your drawing process; current-generation hardware can churn out on the order of 100 million triangles per second.
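A minimal sketch of that approach, under the assumption that a hypothetical fixed-size array is enough to hold the trace; the key callback appends each new blue-vertex position, and every frame clears the screen and replays the whole history:

#define MAX_POINTS 100000
static Vertex tracedPoints[MAX_POINTS]; /* hypothetical storage for the traced circle */
static int tracedCount = 0;

/* In key_callback, after computing the new position in line2: */
/*     if (tracedCount < MAX_POINTS) tracedPoints[tracedCount++] = line2; */

/* Body of the render loop: clear first, then redraw everything. */
void drawFrame(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    if (!check) {
        /* ... draw the rotating triangle exactly as before ... */
    }
    glColor3f(1.0f, 0.0f, 0.0f);
    glBegin(GL_LINE_STRIP); /* one strip replaces the individual GL_LINES segments */
    for (int i = 0; i < tracedCount; ++i)
        glVertex3f(tracedPoints[i].x, tracedPoints[i].y, tracedPoints[i].z);
    glEnd();
}

Pressing escape then merely toggles check: the triangles vanish on the next redraw, while the stored lines are drawn again.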

Why is the display function called twice?

In the OpenGL code below, used for initialization and in the main function, why is the display function called twice? I can't see any call that would trigger it other than glutDisplayFunc(display);
void init(void)
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // set background color to black and opaque
    glClearDepth(1.0f);                   // set background depth to farthest
    glEnable(GL_DEPTH_TEST);              // enable depth testing for z-culling
    glEnable(GL_POINT_SMOOTH);
    glDepthFunc(GL_LEQUAL);               // set the type of depth test
    glShadeModel(GL_SMOOTH);              // enable smooth shading
    gluLookAt(0.0, 0.0, -5.0,  /* eye is at (0,0,-5) */
              0.0, 0.0, 0.0,   /* center is at (0,0,0) */
              0.0, 1.0, 0.0);  /* up is in positive Y direction */
    glOrtho(-5, 5, -5, 5, 12, 15);
    //glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // nice perspective corrections
}
int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("red 3D lighted cube");
    glutInitWindowSize(1280, 800); // set the window's initial width & height
    //glutInitWindowPosition(50, 50); // position the window's initial top-left corner
    //glutReshapeWindow(800, 800);
    init();
    compute();
    glutDisplayFunc(display);
    glutMainLoop();
    return 0; /* ANSI C requires main to return int. */
}
Your display() callback is called whenever GLUT decides it wants the application (that's you) to redraw the contents of the window.
Perhaps some events happen as the window opens that make the window need redrawing.
You're not supposed to care: just make sure you redraw the content in the display() function, and never mind how many times it gets called.
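In practice that means display() should repaint the whole frame from stored state every time it runs. A minimal sketch, where drawScene() is a hypothetical function that renders from the state prepared by compute():

void display(void)
{
    /* Repaint from scratch; GLUT may call this any number of times. */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene();       /* hypothetical: draws from precomputed state */
    glutSwapBuffers(); /* the window was created with GLUT_DOUBLE */
}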

No display (transparent window) with OpenGL 2 and Primusrun on single buffer

I’m trying to make a minimalist OpenGL program that runs on both my Intel chipset (via Mesa) and my NVIDIA card through Bumblebee (Optimus).
My source code (using FreeGLUT):
#include <GL/freeglut.h>

void display(void);
void resized(int w, int h);

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_SINGLE);
    glutInitContextVersion(2, 1);
    glutInitContextProfile(GLUT_CORE_PROFILE);
    glutInitWindowSize(640, 480);
    glutCreateWindow("Hello, triangle!");
    glutReshapeFunc(resized);
    glutDisplayFunc(display);
    glClearColor(0.3, 0.3, 0.3, 1.0);
    glutMainLoop();
    return 0;
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_TRIANGLES);
    glVertex3f(0, 0.75, 0.0);
    glVertex3f(-0.75, -0.75, 0.0);
    glVertex3f(0.75, -0.75, 0.0);
    glEnd();
    glFlush();
}

void resized(int w, int h)
{
    glViewport(0, 0, w, h);
    glutPostRedisplay();
}
When I launch the program directly (./a.out) on the Intel chipset, everything works. I don't have that luck with primusrun ./a.out, which displays a transparent window:
It is not really transparent; the image behind stays even if I move the window.
What's interesting is that when I switch to a double color buffer (using GLUT_DOUBLE instead of GLUT_SINGLE, and glutSwapBuffers() instead of glFlush()), this works both on Intel and with primusrun.
Here's my glxinfo: http://pastebin.com/9DADif6X
and my primusrun glxinfo: http://pastebin.com/YCHJuWAA
Am I doing it wrong or is it a Bumblebee-related bug?
The window is probably not really transparent; it probably just shows whatever was beneath it when it showed up. Try moving it around and watch whether it "drags" the picture along.
When using a compositor, single-buffered windows are a bit tricky, because there's no cue for the compositor to know when the program is done rendering. A double-buffered window performing a buffer swap does give the compositor that additional information.
In addition to that, to finish a single-buffered drawing you call glFinish, not glFlush; glFinish also acts as a cue that drawing has been, well, finished.
Note that there's little use for single-buffered drawing these days. The only argument against double buffering was the lack of available graphics memory, and in times where GPUs have several hundred megabytes of RAM available, that is no longer a grave argument.
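A minimal sketch of the two remedies suggested above, applied to the code from the question (the double-buffered variant is the one the asker already confirmed works):

/* Variant 1: double buffering. In main(): */
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
/* and at the end of display(), instead of glFlush(): */
glutSwapBuffers();

/* Variant 2: keep GLUT_SINGLE, but end display() with a full
   synchronization point instead of a mere flush: */
glFinish();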

Calling glDrawElements after glInterleavedArrays isn't working

I am writing some OpenGL wrappers and am trying to run the following code:
void some_func1() {
    float vertices[] = {50.0, 50.0, 0.0, 20.0, 50.0, 0.0, 20.0, 60.0, 0.0};
    glColor3f(1.0, 0.0, 0.0);
    glInterleavedArrays(GL_V3F, 0, vertices);
}

void some_func2() {
    int indices[] = {0, 1, 2};
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, indices);
}

void parent_func() {
    some_func1();
    some_func2();
}
But it seems that OpenGL is not picking up the call to glDrawElements in the second function. My routine opens a window, clears it to black, and draws nothing. What's weird is that this code
void some_func1() {
    float vertices[] = {50.0, 50.0, 0.0, 20.0, 50.0, 0.0, 20.0, 60.0, 0.0};
    int indices[] = {0, 1, 2};
    glColor3f(1.0, 0.0, 0.0);
    glInterleavedArrays(GL_V3F, 0, vertices);
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, indices);
}

void parent_func() {
    some_func1();
}
works exactly as expected: a red triangle is drawn. I've looked through the documentation and searched around, but I can't find any reason that glDrawElements wouldn't work, or would miss data somehow if called in another function. Any ideas?
FYI: I am running this on an Ubuntu 12.04 VM through VirtualBox, 32-bit processor on the host, and freeglut is doing my window handling. I have also set LIBGL_ALWAYS_INDIRECT=1 to work around an issue with the VM's 3D rendering. (not sure if any of that matters but... :))
The reason is that at the point of drawing with glDrawElements, there is no valid vertex data to draw. When calling glInterleavedArrays (which just does a bunch of gl...Pointer calls under the hood), you are merely telling OpenGL where to find the vertex data; nothing is copied. The actual data is not accessed before the drawing operation (glDrawElements). So in some_func1 you are setting a pointer to the local variable vertices, which no longer exists after the function returns. This doesn't happen in your modified code, where the pointer is set and the drawing happens in the same function.
So either make the array survive until the glDrawElements call or, even better, make OpenGL actually store the vertex data itself by employing a vertex buffer object, which performs an actual data copy. In that case you might also want to refrain from the awfully deprecated glInterleavedArrays function (which isn't much more than a thin software wrapper around the proper gl...Pointer and glEnableClientState calls anyway).
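A minimal sketch of the buffer-object route, assuming an OpenGL 1.5+ context so that glGenBuffers and friends are available; the copy performed by glBufferData decouples the data from the local array's lifetime:

GLuint vbo; /* hypothetical handle, created once */

void some_func1() {
    float vertices[] = {50.0, 50.0, 0.0, 20.0, 50.0, 0.0, 20.0, 60.0, 0.0};
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    /* copies the data into GL-owned storage, so the local array may die */
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (void*)0); /* offset into the bound VBO */
    glColor3f(1.0, 0.0, 0.0);
}

void some_func2() {
    unsigned int indices[] = {0, 1, 2};
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, indices);
}

Alternatively, the quick fix is to declare the vertices array static (or move it to file scope), so the pointer recorded by glInterleavedArrays stays valid until glDrawElements runs.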
