glFlush() does not show anything - C

My OpenGL program does not show anything after glFlush() when I run a GLUT project in Code::Blocks on Windows 7.
Here is my code:
#include <windows.h>
#include <GL/glut.h>
#include <stdlib.h>
#include <stdio.h>

float Color1=0.0, Color2=0.0, Color3=0.0;
int r,p,q;

void keyboard(unsigned char key, int x, int y)
{
    switch (key)
    {
    case 27: // ESCAPE key
        exit (0);
        break;
    case 'r':
        Color1=1.0, Color2=0.0, Color3=0.0;
        break;
    case 'g':
        Color1=0.0, Color2=1.0, Color3=0.0;
        break;
    case 'b':
        Color1=0.0, Color2=0.0, Color3=1.0;
        break;
    }
    glutPostRedisplay();
}

void Init(int w, int h)
{
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glViewport(0,0, (GLsizei)w,(GLsizei)h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D( (GLdouble)w/-2,(GLdouble)w/2, (GLdouble)h/-2, (GLdouble)h/2);
}

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    int i=0;
    glColor4f(0,0,0,1);
    glPointSize(1);
    glBegin(GL_POINTS);
    for( i=-320;i<=320;i++)
        glVertex2f(i,0);
    for( i=-240;i<=240;i++)
        glVertex2f(0,i);
    glEnd();
    glColor4f(Color1,Color2, Color3,1);
    glPointSize(1);
    glBegin(GL_POINTS);
    int x=0, y = r;
    int d= 1-r;
    while(y>=x)
    {
        glVertex2f(x+p, y+q);
        glVertex2f(y+p, x+q);
        glVertex2f(-1*y+p, x+q);
        glVertex2f(-1*x+p, y+q);
        glVertex2f(-1*x+p, -1*y+q);
        glVertex2f(-1*y+p, -1*x+q);
        glVertex2f(y+p, -1*x+q);
        glVertex2f(x+p, -1*y+q);
        if(d<0)
            d += 2*x + 3;
        else
        {
            d += 2*(x-y) + 5;
            y--;
        }
        x++;
    }
    glEnd();
    glFlush();
    //glutSwapBuffers();
}

int main(int argc, char *argv[])
{
    printf("Enter the center point and radius: ");
    scanf("%d %d %d",&p,&q,&r);
    glutInit(&argc, argv);
    glutInitWindowSize(640,480);
    glutInitWindowPosition(10,10);
    glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE);
    glutCreateWindow("Circle drawing");
    Init(640, 480);
    glutKeyboardFunc(keyboard);
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
But when I change these two lines, it works fine:
glFlush(); to glutSwapBuffers(); and
glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE); to glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
Can anyone tell me what the problem with my code is, and why glFlush() didn't work?

Modern graphics systems (Windows DWM/Aero, MacOS Quartz Extreme, X11 Composite) are built around the concept of composition. Composition always implies double buffering and hence relies on the buffer swap to initiate a composition refresh.
You can disable DWM/Aero on Windows and refrain from using a compositing window manager on X11, and then single-buffered OpenGL should work as expected.
But why exactly do you want single-buffered drawing? Modern GPUs effectively assume that double buffering is used to keep their presentation pipeline busy. There's zero benefit to being single buffered.

glFlush works as documented:
The glFlush function forces execution of OpenGL functions in finite time.
What this does is force all outstanding OpenGL operations to complete rendering to the back buffer. It will not magically display the back buffer; to do that you need to swap the front buffer and the back buffer.
So the correct use of glFlush is in conjunction with glutSwapBuffers, but even that is redundant, since glutSwapBuffers flushes all outstanding rendering operations anyway.
It appears that you are using an old OpenGL 1.1 tutorial, where double buffering was an expensive novelty. Today double buffering is the norm, and you would need to jump through quite a few expensive hoops to get single buffering.
Since OpenGL is currently at version 4.6, I would encourage you to at least start using 4.0.
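For completeness, here is a minimal sketch of the double-buffered pattern both answers describe (the triangle is just placeholder geometry, not the asker's circle code):
#include <GL/glut.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0f, 0.0f, 0.0f);
    glBegin(GL_TRIANGLES);
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.0f,  0.5f);
    glEnd();
    glutSwapBuffers();   /* replaces glFlush(); it also flushes the pipeline */
}

int main(int argc, char *argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);   /* double buffered */
    glutInitWindowSize(640, 480);
    glutCreateWindow("Double buffered");
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}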

Related

High numbers when getting pointer motion xlib [duplicate]

Hi! I am trying to write a program that reports the position of every mouse motion. I have called XSelectInput() with a PointerMotionMask mask. Everything seems to work, but the printed numbers don't appear after every movement; they appear in blocks, and the values in event.xmotion.x and event.xmotion.y are very high, in the hundreds of thousands.
What is causing these large numbers?
Also, is my program getting every number and reporting it immediately, or is it being stored in a queue and sent to the terminal in blocks?
Thanks
Here's my event loop:
while(1)
{
    XNextEvent(display, &event);
    switch (event.type)
    {
    case Expose:
        glClearColor( 1.0, 1.0, 0.0, 1.0 );
        glClear( GL_COLOR_BUFFER_BIT );
        glFlush();
        glXSwapBuffers( display, glxwin );
        break;
    case MotionNotify:
        printf("%d, %d", event.xmotion.x, event.xmotion.y);
        break;
    case ButtonPress: exit(1);
    default: break;
    }
}
Besides printing a newline at the end, you could also print a '\r' at the end; it moves the cursor to the beginning of the existing line, so the output just prints over itself each time. To make this work better, change the digit formatting to a fixed size, like:
printf("%4d, %4d \r", event.xmotion.x, event.xmotion.y);
fflush(stdout);

GLUT timer loop stopping prematurely

I've encountered a strange issue where glutTimerFunc seems to randomly stop working when I call it with a zero delay.
Here is my code:
#include <Windows.h>
#include <GL/gl.h>
#include <GL/glut.h>

int x = 0;

void init(void)
{
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 1.0, 0.125, 0.875, -1.0, 1.0);
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBegin(GL_POLYGON);
    glColor3f(1.0, x ? 1.0 : 0.0, 0.0);
    glVertex3f(0.25, 0.25, 0.0);
    glVertex3f(0.75, 0.25, 0.0);
    glVertex3f(0.75, 0.75, 0.0);
    glVertex3f(0.25, 0.75, 0.0);
    glEnd();
    glFlush();
    glutSwapBuffers();
}

void timer(int value)
{
    x = !x;
    glutPostRedisplay();
    glutTimerFunc(0, timer, 0); // The line in question
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(800, 600);
    glutInitWindowPosition(200, 200);
    glutCreateWindow("hello");
    init();
    glutDisplayFunc(display);
    glutTimerFunc(0, timer, 0);
    glutMainLoop();
    return 0;
}
I expected this to show a flickering square that changes color as fast as the GPU can keep up.
That is what it actually does initially, but the timer loop seems to randomly stop, and the square stops changing color. Sometimes it doesn't flicker perceptibly at all, and sometimes it flickers for several seconds before stopping.
It doesn't stop if I set the delay to 1ms (glutTimerFunc(1, timer, 0);).
Why does the timer loop stop unexpectedly?
I don't really care about how to fix it, just why it happens.
Your GPU is changing the value faster than your monitor can draw it.
If you had a monitor with an extremely high refresh rate, you could probably see it, but unfortunately we're limited to 60Hz/120Hz/240Hz for now.
When you remove the 1 ms forced delay, you make the system non-deterministic (its timing depends on the speed of other programs, not just yours), and that's why you're seeing the random behavior.
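The question already notes that a 1 ms delay avoids the problem; a paced version of the callback (a sketch of that workaround, with 16 ms as an assumed interval approximating 60 Hz) would look like this:
/* re-arm the timer with a small non-zero interval so redraws are paced
   deterministically instead of being re-queued with zero delay */
void timer(int value)
{
    x = !x;
    glutPostRedisplay();
    glutTimerFunc(16, timer, 0);   /* roughly 60 Hz; the exact value is an assumption */
}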

gluProject not showing accurate coordinates in C

I am trying to find the screen coordinates of OpenGL coordinates (a point projected in 3D space). I have used the gluProject call for this purpose; I also use rotation and translation in my code.
At a certain point, after performing some transformations, I call the gluProject API to get the screen coordinates of a particular projected point P(x,y,z):
gluProject(x, y, z, modelMatrix, projectionMatrix, viewport, &x_s, &y_s, &z_s);
I am able to get the x screen coordinate correctly in x_s, but the y coordinate comes out different.
The only thing that affects y but not x is the initial gluPerspective call, which sets fovy (the field of view angle, in degrees, in the y direction): gluPerspective(60.0f, Width/Height, 0.0001f, 1000.0f);
Let me rephrase the question: I have created a 3D point on screen, and now I am getting the 2D (x, y) coordinates of that point through gluProject, but they come out different from the mouse coordinates. What could be a possible solution to get the correct coordinates?
Here is the code snippet:
#include<GL/glut.h>    /* Header File For The GLUT Library */

GLint Window;                 /* The number of our GLUT window */
float tmp_x,tmp_y,tmp_z;
GLfloat w = 1200;             /* Window size. Global for use in rotation routine */
GLfloat h = 1200;
GLint prevx, prevy;           /* Remember previous x and y positions */
GLfloat xt=1.0,yt=1.0,zt=1.0; /* translate */
int width = 1600;
int height = 1200;

// This function sets up the windowing transformation
void transform(GLfloat Width , GLfloat Height )
{
    glViewport(0,0, (GLfloat)Width, (GLfloat)Height);
    glPushMatrix();
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0f, Width/Height,0.0001f,1000.0f);
    glTranslatef(0.0, 0.0, -15.0f); /* Centre and away the viewer */
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}

GLvoid draw_room()
{
    int i;
    glPushMatrix();
    glShadeModel(GL_SMOOTH);
    glLineWidth(1.0);
    glPointSize(4.0);            /* Add point size, to make it clear */
    glBegin(GL_POINTS);          /* start drawing the point */
    glColor3f(0.0f,0.0f,1.0f);   /* Set the color to blue */
    glColor3f(1.0f,0.0f,0.0f);   /* Overridden: set the color to red */
    glVertex3f(3.1f,2.1f,2.1f);
    glEnd();                     /* Done drawing the point */
    glEnable(GL_DEPTH_TEST);
}

// OpenGL display callback function; it calls draw_room
void DrawGLScene()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    printf("%f %f %f\n",xt,yt,zt);
    glPushMatrix();
    glMatrixMode(GL_PROJECTION);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(xt,yt, zt);
    draw_room();
    glPopMatrix();
    glutSwapBuffers();           /* Swap buffers */
    glFlush();
}

GLvoid Mouse( int b , int s, int xx, int yy)
{
    double a1,a2,a3;
    GLint viewport[4];
    GLdouble modelview[16];
    GLdouble projection[16];
    glGetIntegerv(GL_VIEWPORT, viewport);
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);
    //gluProject(xt+3.1f,yt+2.1f,zt+2.1f, modelview, projection, viewport, &a1, &a2, &a3);
    gluProject(3.1f,2.1f,2.1f, modelview, projection, viewport, &a1, &a2, &a3);
    printf("Mouse: %d %d\n",xx,yy); // Both prints are giving different coordinates.
    printf("Unproject %f %f %f\n",a1,a2,a3);
    switch (b) {
    case GLUT_LEFT_BUTTON:          /* only stash away for left mouse */
        prevx = xx - w/2;
        prevy = h/2 - yy;
        break;
    case GLUT_MIDDLE_BUTTON:
        break;
    case GLUT_RIGHT_BUTTON:
        break;
    }
}
What could be the possible solution for this?
OpenGL uses the lower left corner of the viewport as the coordinate system origin. Window systems usually use the upper left corner as the coordinate system origin. You need to handle this difference. Example:
printf("Mouse: %d %d\n",xx,yy);
printf("Unproject %f %f %f\n",a1,viewport[1]-a2,a3);
A side issue:
gluPerspective(60.0f, Width/Height,0.0001f,1000.0f);
You have a very large ratio between your near and far plane distances. This is very bad for depth resolution. As a general rule you should set the near clip plane distance as far out as the scene allows. The far clip plane of a projection created by gluPerspective should be set as near as possible (it's also possible to set the far clipping plane to infinity if the projection matrix is built slightly differently).
Anyway, your low depth resolution will have a negative impact on your mouse pointer screen position back projection.
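For illustration, pushing the near plane out even a little restores most of the precision (0.1 here is only an assumed value; pick whatever distance your scene actually allows):
/* near/far ratio of 1:10000 instead of 1:10000000 */
gluPerspective(60.0f, Width / Height, 0.1f, 1000.0f);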
Let me rephrase the question: I have created a 3D point on screen, and now I am getting the 2D (x, y) coordinates of that point through gluProject, but they come out different from the mouse coordinates. What could be a possible solution to get the correct coordinates?
Most window systems put the pointer coordinate system origin at the upper left. OpenGL puts the viewport coordinate system origin at the lower left. So you'll have to flip the mouse position along the window's Y (up) axis.
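A sketch of that flip inside the Mouse callback above, assuming the viewport covers the whole window:
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
int win_x = xx;                     /* x needs no change */
int win_y = viewport[3] - yy - 1;   /* viewport[3] is the viewport height */
/* win_x / win_y are now directly comparable to a1 / a2 from gluProject */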

Infinite Loop Drawing in OpenGL and Broken Lines issue

Infinite Loop Question
I want to achieve the effect shown in the picture: lines with random coordinates and colours accumulating endlessly on the screen.
I am generating this by putting an infinite loop inside the glutDisplayFunc callback, which is not good because I then cannot process any input from the keyboard. The other method I can think of is probably to use GLUT's explicit window refresh functions.
I want to know how I can have such an endless drawing loop and still check for keyboard input. Here is the sample code I have made. It simply implements the DDA algorithm and attempts to draw endless lines by generating random coordinates and colours.
#include <stdio.h>
#include <stdlib.h>      /* abs, rand, srand */
#include <time.h>        /* time */
#include <GL/glut.h>
#include <GL/freeglut.h> /* glutLeaveMainLoop is a freeglut extension */

int width;
int height;

void dda (int x1, int y1, int x2, int y2)
{
    int del_x, del_y, sample_steps, i = 1;
    double x_incr, y_incr, x, y;
    del_x = x2 - x1;
    del_y = y2 - y1;
    sample_steps = (abs (del_x) > abs (del_y)) ? abs (del_x) : abs (del_y);
    x_incr = del_x / (double) sample_steps;
    y_incr = del_y / (double) sample_steps;
    x = x1;
    y = y1;
    glBegin (GL_POINTS);
    while (i <= sample_steps)
    {
        glVertex2f ((2.0 * x)/width, (2.0 * y)/height);
        x += x_incr;
        y += y_incr;
        i++;
    }
    glEnd ();
    glFlush ();
}

void keypress_handler (unsigned char key, int x, int y)
{
    if (key == 'q' || key == 'Q')
    {
        glutLeaveMainLoop ();
    }
}

void init_screen (void)
{
    glMatrixMode (GL_PROJECTION);
    glClearColor (0, 0, 0, 1);
    glClear (GL_COLOR_BUFFER_BIT);
    glLoadIdentity ();
    glMatrixMode (GL_MODELVIEW);
}

void test_dda (void)
{
    int x1, y1, x2, y2;
    float r, g, b;
    int i = 1;
    glClear (GL_COLOR_BUFFER_BIT);
    srand (time(NULL));
    width = glutGet (GLUT_WINDOW_WIDTH);
    height = glutGet (GLUT_WINDOW_HEIGHT);
    while (i)
    {
        x1 = rand () % width - (width /2);   /* Global */
        y1 = rand () % height - (height /2); /* Global */
        x2 = rand () % width - (width /2);   /* Global */
        y2 = rand () % height - (height /2); /* Global */
        r = rand () / (float) RAND_MAX;
        g = rand () / (float) RAND_MAX;
        b = rand () / (float) RAND_MAX;
        glColor3f (r, g, b);
        dda (x1, y1, x2, y2);
        printf ("\r%d", i);
        i++;
    }
}

void reshape (int w, int h)
{
    glViewport (0, 0, w, h);
    glMatrixMode (GL_PROJECTION);
    glLoadIdentity ();
    gluOrtho2D (-1, 1, -1, 1);
    glMatrixMode (GL_MODELVIEW);
}

int main (int argc, char *argv[])
{
    glutInit (&argc, argv);
    glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
    glutCreateWindow ("DDA");
    init_screen ();  /* GL calls need a current context, so call this after glutCreateWindow() */
    glutDisplayFunc (test_dda);
    glutReshapeFunc (reshape);
    glutKeyboardFunc (keypress_handler);
    glutMainLoop ();
    printf ("\n");
    return 0;
}
Broken lines when first drawn
I also have an additional question:
When I enable the infinite loop (the while (i)) inside the test_dda function and run the executable at a 1280x960 window size, every line is drawn broken; the lines look something like dashed lines. But if I do not loop infinitely in this function and draw the lines some other way, like forcing OpenGL to redraw, the lines are displayed as they should be. I have noticed that the lines only show up broken the first time they are drawn.
To reproduce what I am describing, change the while (i) to while (i<1000). This will draw 1000 lines on the screen. When I run with this change at a 1280x960 window size, the window is drawn twice. The first time, the lines are drawn broken as described above. The moment 1000 lines have been drawn, the window is cleared and drawn again, and this time the lines are displayed as they should be. Why is this happening?
You don't have to insert an infinite loop yourself. The infinite loop already happens inside glutMainLoop(): it calls your display function over and over again until the program is closed. To keep the output of previous frames (i.e. to draw over them), don't clear the color buffer with glClear().
As for the broken lines: don't draw lines pixel by pixel. While I didn't take a closer look, you most likely have some discrepancy caused by your view/projection matrix (i.e. you're drawing the dots with too much spacing). Instead, use OpenGL calls to draw lines.
What you're trying to do here is essentially software rendering on top of hardware-accelerated rendering, which is weird and not really recommended.
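For example, with the same normalised coordinates the question's dda() uses, a whole line can be drawn as a single primitive (a sketch):
/* one hardware-rasterised line instead of a DDA point loop */
glColor3f(r, g, b);
glBegin(GL_LINES);
glVertex2f((2.0 * x1) / width, (2.0 * y1) / height);
glVertex2f((2.0 * x2) / width, (2.0 * y2) / height);
glEnd();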
The "display" function (test_dda in your case) is called every time the window needs to be redrawn. The event handling code in GLUT get no change of running if you are in an infinite loop inside the display function.
Instead use a timer, and draw one line in the timer function and then call a function to force GLUT to redraw the window, where you "flush" the GL pipe.
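A sketch of that structure (draw_one_line and on_timer are hypothetical names, the 16 ms interval is an assumption, and the globals and dda() come from the question's code):
void draw_one_line(void)
{
    int x1 = rand() % width  - width  / 2;
    int y1 = rand() % height - height / 2;
    int x2 = rand() % width  - width  / 2;
    int y2 = rand() % height - height / 2;
    glColor3f(rand() / (float) RAND_MAX,
              rand() / (float) RAND_MAX,
              rand() / (float) RAND_MAX);
    dda(x1, y1, x2, y2);   /* or a GL_LINES primitive, as in the other answer */
}

void display(void)
{
    /* note: no glClear() here, so lines from previous frames stay visible */
    draw_one_line();
    glFlush();
}

void on_timer(int value)
{
    glutPostRedisplay();              /* schedule a redraw */
    glutTimerFunc(16, on_timer, 0);   /* re-arm the timer (~60 Hz) */
}

/* in main(): glutDisplayFunc(display); glutTimerFunc(16, on_timer, 0); */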
I think that the starting solution you are adopting is conceptually wrong.
Don't take it badly :)
If the point is to constantly draw lines over and over, one possible solution would be to split the process up like this:
FRAME 1, STEP 1: Draw the new lines into a framebuffer mapped to a texture, on top of a quad textured with the working texture.
FRAME 1, STEP 2: Draw a quad with the working texture to the screen.
GLUT INPUT CALLBACK
FRAME 2, STEP 1: Draw the new lines into a framebuffer mapped to a texture, on top of a quad textured with the working texture.
FRAME 2, STEP 2: Draw a quad with the updated working texture to the screen.
GLUT INPUT CALLBACK
And so on.....

Easy way to display a continuously updating image in C/Linux

I'm a scientist who is quite comfortable with C for numerical computation, but I need some help with displaying the results. I want to be able to display a continuously updated bitmap in a window, which is calculated from realtime data. I'd like to be able to update the image quite quickly (e.g. faster than 1 frame/second, preferably 100 fps). For example:
char image_buffer[width*height*3]; // rgb data
initializewindow();
for (t = 0; t < t_end; t++)
{
    getdata(data);                 // get some realtime data
    docalcs(image_buffer, data);   // process the data into an image
    drawimage(image_buffer);       // draw the image
}
What's the easiest way to do this on linux (Ubuntu)? What should I use for initializewindow() and drawimage()?
If all you want to do is display the data (i.e. no need for a GUI), you might want to take a look at SDL: it's straightforward to create a surface from your pixel data and then display it on screen.
Inspired by Artelius' answer, I also hacked up an example program:
#include <SDL/SDL.h>
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define WIDTH 256
#define HEIGHT 256

static _Bool init_app(const char * name, SDL_Surface * icon, uint32_t flags)
{
    atexit(SDL_Quit);
    if(SDL_Init(flags) < 0)
        return 0;

    SDL_WM_SetCaption(name, name);
    SDL_WM_SetIcon(icon, NULL);

    return 1;
}

static uint8_t * init_data(uint8_t * data)
{
    for(size_t i = WIDTH * HEIGHT * 3; i--; )
        data[i] = (i % 3 == 0) ? (i / 3) % WIDTH :
                  (i % 3 == 1) ? (i / 3) / WIDTH : 0;

    return data;
}

static _Bool process(uint8_t * data)
{
    for(SDL_Event event; SDL_PollEvent(&event);)
        if(event.type == SDL_QUIT) return 0;

    for(size_t i = 0; i < WIDTH * HEIGHT * 3; i += 1 + rand() % 3)
        data[i] -= rand() % 8;

    return 1;
}

static void render(SDL_Surface * sf)
{
    SDL_Surface * screen = SDL_GetVideoSurface();
    if(SDL_BlitSurface(sf, NULL, screen, NULL) == 0)
        SDL_UpdateRect(screen, 0, 0, 0, 0);
}

static int filter(const SDL_Event * event)
{ return event->type == SDL_QUIT; }

#define mask32(BYTE) (*(uint32_t *)(uint8_t [4]){ [BYTE] = 0xff })

int main(int argc, char * argv[])
{
    (void)argc, (void)argv;

    static uint8_t buffer[WIDTH * HEIGHT * 3];

    _Bool ok =
        init_app("SDL example", NULL, SDL_INIT_VIDEO) &&
        SDL_SetVideoMode(WIDTH, HEIGHT, 24, SDL_HWSURFACE);

    assert(ok);

    SDL_Surface * data_sf = SDL_CreateRGBSurfaceFrom(
        init_data(buffer), WIDTH, HEIGHT, 24, WIDTH * 3,
        mask32(0), mask32(1), mask32(2), 0);

    SDL_SetEventFilter(filter);

    for(; process(buffer); SDL_Delay(10))
        render(data_sf);

    return 0;
}
I'd recommend SDL too. However, there's a bit of understanding you need to gather if you want to write fast programs, and that's not the easiest thing to do.
I would suggest this O'Reilly article as a starting point.
But I shall boil down the most important points from a computational perspective.
Double buffering
What SDL calls "double buffering" is generally called page flipping.
This basically means that on the graphics card, there are two chunks of memory called pages, each one large enough to hold a screen's worth of data. One is made visible on the monitor, the other one is accessible by your program. When you call SDL_Flip(), the graphics card switches their roles (i.e. the visible one becomes program-accessible and vice versa).
The alternative is, rather than swapping the roles of the pages, to copy the data from the program-accessible page to the monitor page (using SDL_UpdateRect()).
Page flipping is fast, but has a drawback: after page flipping, your program is presented with a buffer that contains the pixels from 2 frames ago. This is fine if you need to recalculate every pixel every frame.
However, if you only need to modify smallish regions on the screen every frame, and the rest of the screen does not need to change, then UpdateRect can be a better way (see also: SDL_UpdateRects()).
This of course depends on what it is you're computing and how you're visualising it. Analyse your image-generating code - maybe you can restructure it to get something more efficient out of it?
Note that if your graphics hardware doesn't support page flipping, SDL will gracefully use the other method for you.
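To make the two paths concrete, here is a sketch against the SDL 1.2 API used in this answer (the dirty_* variables are assumptions standing in for whatever region you actually changed):
/* page-flipping path: request a double-buffered hardware surface up front */
SDL_Surface *screen = SDL_SetVideoMode(640, 480, 24, SDL_HWSURFACE | SDL_DOUBLEBUF);
/* ... draw the whole frame into screen->pixels ... */
SDL_Flip(screen);   /* swap the visible and program-accessible pages */

/* copy path: push only a small changed region of a single-buffered surface */
SDL_UpdateRect(screen, dirty_x, dirty_y, dirty_w, dirty_h);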
Software/Hardware/OpenGL
This is another question you face. Basically, software surfaces live in RAM, hardware surfaces live in Video RAM, and OpenGL surfaces are managed by OpenGL magic.
Depending on your hardware, OS, and SDL version, programmatically modifying the pixels of a hardware surface can involve a LOT of memory copying (VRAM to RAM, and then back!). You don't want this to happen every frame. In such cases, software surfaces work better. But then you can't take advantage of double buffering or hardware-accelerated blits.
Blits are block-copies of pixels from one surface to another. This works well if you want to draw a whole lot of identical icons on a surface. Not so useful if you're generating a temperature map.
OpenGL lets you do much more with your graphics hardware (3D acceleration for a start). Modern graphics cards have a lot of processing power, but it's kind of hard to use unless you're making a 3D simulation. Writing code for a graphics processor is possible but quite different to ordinary C.
Demo
Here's a quick demo SDL program that I made. It's not supposed to be a perfect example, and may have some portability problems. (I will try to edit a better program into this post when I get time.)
#include "SDL.h"
#include <assert.h>
#include <math.h>
/* This macro simplifies accessing a given pixel component on a surface. */
#define pel(surf, x, y, rgb) ((unsigned char *)(surf->pixels))[y*(surf->pitch)+x*3+rgb]
int main(int argc, char *argv[])
{
int x, y, t;
/* Event information is placed in here */
SDL_Event event;
/* This will be used as our "handle" to the screen surface */
SDL_Surface *scr;
SDL_Init(SDL_INIT_VIDEO);
/* Get a 640x480, 24-bit software screen surface */
scr = SDL_SetVideoMode(640, 480, 24, SDL_SWSURFACE);
assert(scr);
/* Ensures we have exclusive access to the pixels */
SDL_LockSurface(scr);
for(y = 0; y < scr->h; y++)
for(x = 0; x < scr->w; x++)
{
/* This is what generates the pattern based on the xy co-ord */
t = ((x*x + y*y) & 511) - 256;
if (t < 0)
t = -(t + 1);
/* Now we write to the surface */
pel(scr, x, y, 0) = 255 - t; //red
pel(scr, x, y, 1) = t; //green
pel(scr, x, y, 2) = t; //blue
}
SDL_UnlockSurface(scr);
/* Copies the `scr' surface to the _actual_ screen */
SDL_UpdateRect(scr, 0, 0, 0, 0);
/* Now we wait for an event to arrive */
while(SDL_WaitEvent(&event))
{
/* Any of these event types will end the program */
if (event.type == SDL_QUIT
|| event.type == SDL_KEYDOWN
|| event.type == SDL_KEYUP)
break;
}
SDL_Quit();
return EXIT_SUCCESS;
}
GUI stuff is a regularly-reinvented wheel, and there's no reason not to use a framework.
I'd recommend using either Qt 4 or wxWidgets. If you're using Ubuntu, GTK+ will suffice, as it talks to GNOME and may be more comfortable for you (Qt and wxWidgets both require C++).
Have a look at GTK+, Qt, and wxWidgets.
Here are the tutorials for all three:
Hello World, wxWidgets
GTK+ 2.0 Tutorial, GTK+
Tutorials, QT4
In addition to Jed Smith's answer, there are also lower-level frameworks, like OpenGL, which is often used for game programming. Given that you want to use a high frame rate, I'd consider something like that. GTK and the like aren't primarily intended for rapidly updating displays.
In my experience, Xlib via the MIT-SHM extension was significantly faster than SDL surfaces, though I'm not sure I used SDL in the most optimal way.
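For anyone curious what that path looks like, here is a minimal sketch of the Xlib + MIT-SHM approach (error handling, extension checks, and cleanup are omitted; build with -lX11 -lXext; the memset pattern is just a dummy stand-in for real pixel data):
#include <X11/Xlib.h>
#include <X11/extensions/XShm.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <string.h>

int main(void)
{
    const int W = 640, H = 480;

    Display *dpy = XOpenDisplay(NULL);
    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, W, H,
                                     0, 0, BlackPixel(dpy, scr));
    GC gc = DefaultGC(dpy, scr);
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);

    /* create an XImage whose pixel storage lives in a shared memory segment */
    XShmSegmentInfo shminfo;
    XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                  DefaultDepth(dpy, scr), ZPixmap,
                                  NULL, &shminfo, W, H);
    shminfo.shmid = shmget(IPC_PRIVATE, img->bytes_per_line * img->height,
                           IPC_CREAT | 0600);
    shminfo.shmaddr = img->data = shmat(shminfo.shmid, NULL, 0);
    shminfo.readOnly = False;
    XShmAttach(dpy, &shminfo);

    /* wait for the window to become visible */
    XEvent ev;
    do { XNextEvent(dpy, &ev); } while (ev.type != Expose);

    for (int frame = 0; ; frame++) {
        /* write the new frame directly into the shared buffer (dummy pattern) */
        memset(img->data, frame & 0xff, (size_t)img->bytes_per_line * img->height);

        /* hand the buffer to the server without copying it through the X socket */
        XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0, W, H, False);
        XSync(dpy, False);

        /* quit on any key press */
        while (XPending(dpy)) {
            XNextEvent(dpy, &ev);
            if (ev.type == KeyPress)
                return 0;
        }
    }
}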
