Simple GTK/GDK app very slow to update/refresh window & buttons - c

Apologies in advance, I'm not a GTK/GDK master and have been feeling my way round some code written by someone else who's no longer around.
Edited to add TL;DR - Full question below with some detail.
The TL;DR is that gtk_button_set_image seems to take ~1ms; multiplied by 50 buttons, that causes a bottleneck when changing the image on every button in our window.
EDIT again to add timings for the various calls:
Call                              Time (ms, approx.)
gtk_button_set_image              1.0
gtk_button_set_label              0.5
gdk_pixbuf_scale                  0.5
recolour_pixbuf (re-written)      0.5
gdk_pixbuf_new_subpixbuf          0.4
gtk_css_provider_load_from_data   0.3
gtk_image_new_from_pixbuf         0.15
BuildButtonCSS                    0.01
And yes, recolour_pixbuf() takes a long time, but I can find no other way of colour-swapping pixels in a pixbuf than walking the whole thing pixel-by-pixel.
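For context, recolour_pixbuf() boils down to the standard pixel walk over the raw pixbuf data - roughly like this sketch (the from/to colour parameters are illustrative, not our exact signature):

static void recolour_pixbuf(GdkPixbuf *pb, guint32 from, guint32 to)
{
    // Illustrative sketch - our real recolour_pixbuf() differs in detail.
    // Walk every pixel and swap one RGB colour for another.
    int width = gdk_pixbuf_get_width(pb);
    int height = gdk_pixbuf_get_height(pb);
    int rowstride = gdk_pixbuf_get_rowstride(pb);
    int n_channels = gdk_pixbuf_get_n_channels(pb); // 4 with an alpha channel
    guchar *pixels = gdk_pixbuf_get_pixels(pb);
    for (int y = 0; y < height; y++)
    {
        guchar *row = pixels + (y * rowstride);
        for (int x = 0; x < width; x++)
        {
            guchar *p = row + (x * n_channels);
            if (p[0] == ((from >> 16) & 0xFF) &&
                p[1] == ((from >> 8) & 0xFF) &&
                p[2] == (from & 0xFF))
            {
                p[0] = (to >> 16) & 0xFF;
                p[1] = (to >> 8) & 0xFF;
                p[2] = to & 0xFF;
            }
        }
    }
}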
Those timings add up to GTK/GDK calls taking ~2.35ms to update each button. I have refactored my code to check what's changed and ONLY execute the necessary changes - but even then, the whole window is fairly regularly updated with new images for every button plus new colours etc., so it's not an edge case and it is noticeable.
Basically we have a pretty simple GTK C app, just a window with a grid of buttons in it. A TCP socketed connection sends messages to the app to (for example) change the label or colour of a button, and we send messages back when a button is pushed.
However, with 100ms polling on the main loop to refresh/re-draw the buttons, it seems to take a very long time to refresh the window.
I'll try to keep this sane + readable - I can't really post a minimal working example (it would be huge) but I'll try and break down the basics of the code so you can see what's done.
Each button is a widget that can contain a straight text label or instead be an image created from a pixbuf.
Each button is attached to a grid, the grid is inside a window.
Hopefully this is sensible and obvious so far.
In our main application we have a check that happens every 100ms which will run through all the button data, and for any that have changed (e.g. a new label or new pixbuf) it will update the button accordingly.
g_timeout_add(100, (GSourceFunc)check_refresh, _context->refresh);
The code that then runs for each button (including the timestamps I've added to get debug info) is:
static void refresh_button(int buttonId)
{
    char name[12];
    snprintf(name, sizeof(name), "BTN_%02d", buttonId);
    int bid = buttonId - 1;
    char tstr[VERY_LONG_STR];
    struct dev_button *dbp;
    dbp = &_context->buttons[bid];

    // For debug timestamps:
    struct timespec start, stop;
    double result;
    clock_gettime(CLOCK_MONOTONIC, &start);

    if(dbp->css_modified != 0)
    {
        BuildButtonCSS(dbp, tstr, NULL);
        gtk_css_provider_load_from_data(dbp->bp, tstr, -1, NULL);
        if(dbp->text[0] != '\0')
        {
            gtk_button_set_label(GTK_BUTTON(dbp->btn), dbp->text);
        }
        else
        {
            gtk_button_set_label(GTK_BUTTON(dbp->btn), NULL);
        }
    }

    clock_gettime(CLOCK_MONOTONIC, &stop);
    result = ((stop.tv_sec - start.tv_sec) * 1e3) + ((stop.tv_nsec - start.tv_nsec) / 1e6); // in milliseconds
    g_message("[BRF] %s took %.3fms to here", name, result);

    /*
     * CSS changes affect button image drawing (cropping etc.)
     */
    if(dbp->image_modified != 0 || dbp->css_modified != 0)
    {
        uint8_t b = dbp->bpx; // Border in pixels
        GdkPixbuf* tmp = gdk_pixbuf_new_subpixbuf(dbp->pixbuf, b, b, _context->innerButton.width - (b * 2), _context->innerButton.height - (b * 2));
        dbp->image = (GtkImage*)gtk_image_new_from_pixbuf(tmp);
        gtk_button_set_image(GTK_BUTTON(dbp->btn), GTK_WIDGET(dbp->image));
        g_object_unref(tmp); // the image holds its own reference; drop ours so the subpixbuf isn't leaked
    }

    btn_timediff(buttonId);
    clock_gettime(CLOCK_MONOTONIC, &stop);
    result = ((stop.tv_sec - start.tv_sec) * 1e3) + ((stop.tv_nsec - start.tv_nsec) / 1e6); // in milliseconds
    g_message("[BRF] %s took %.3fms for update", name, result);

    dbp->css_modified = 0;
}
So I'm timing the milliseconds taken to update the button from CSS, then the time to have updated the image from pixbuf - running on a Raspberry Pi CM4 I'm getting results like this:
** Message: 10:25:22.956: [BRF] BTN_03 took 1.443ms to here
** Message: 10:25:22.959: [BRF] BTN_03 took 5.061ms for update
So around ~1.5ms to update a button from simple CSS, and ~3.5ms to update a button image from a pixbuf.
And before you say the Raspberry Pi is slow - even on a full fat Linux desktop machine I'm seeing similar timings - a little faster on average but sometimes the total can be beyond 10ms for a single button.
This feels very slow to me - almost like there's something blocking on screen refresh after each change to each button. I wonder if we're going about this wrong - perhaps we should be somehow inhibiting re-draws of the window until we get to the last button and then let the whole thing re-draw once, something like the sketch below?
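An untested sketch of what I have in mind, freezing the toplevel's GdkWindow around the whole refresh pass (assuming _context->window is our toplevel GtkWindow; NUM_BUTTONS is just illustrative):

// Untested sketch: batch all button updates between freeze/thaw so the
// window only repaints once at the end.
GdkWindow *gdkwin = gtk_widget_get_window(GTK_WIDGET(_context->window));
if (gdkwin != NULL)
    gdk_window_freeze_updates(gdkwin);

for (int i = 1; i <= NUM_BUTTONS; i++)   // NUM_BUTTONS: illustrative name
    refresh_button(i);

if (gdkwin != NULL)
    gdk_window_thaw_updates(gdkwin);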
As I said - I'm not experienced with GTK and am a bit in at the deep end on this project so may well be doing this all wrong or totally missing some obvious method or call or something.

Related

GTK+3 replacing/swapping pixbuf seems to break

I have a simple GTK3 grid layout using images as buttons and I want to replace the images when various things happen.
What's weird is that this works fine if I replace/remove the image from the same piece of code each time but not when I alternate between the two different sources despite (as far as I can see) everything being almost identical.
Anyway, here's my code...
Initial setup - create pixbuf, make it an image, attach it to the button - this works fine:
/*
* Initial setup
*
* dbp is a pointer to a struct that contains the button, the pixbuf, etc.
*/
// Create new blank (& transparent) image
dbp->pixbuf = gdk_pixbuf_new(GDK_COLORSPACE_RGB, TRUE, 8, innerButton.width, innerButton.height);
gdk_pixbuf_fill(dbp->pixbuf, 0x00000000);
// Image holds pixbuf
dbp->image = (GtkImage*)gtk_image_new_from_pixbuf(dbp->pixbuf);
// Attach image to button
gtk_button_set_image(GTK_BUTTON(dbp->btn), GTK_WIDGET(dbp->image));
// Attach button to grid
gtk_grid_attach(GTK_GRID(_grid), dbp->btn, c, r, 1, 1);
Clear image code - blanks the image (transparent):
gdk_pixbuf_fill(dbp->pixbuf, 0x00000000);
Button refresh callback (updates image from pixbuf):
if(dbp->image_modified != 0)
{
    gtk_image_set_from_pixbuf(GTK_IMAGE(dbp->image), dbp->pixbuf);
    dbp->image_modified = 0;
}
1st image replacement code - takes data input in our own format & creates new pixbuf:
GdkPixbuf *newpixbuf = gdk_pixbuf_new(GDK_COLORSPACE_RGB, TRUE, 8, BITMAP_WIDTH, BITMAP_HEIGHT);
g_object_unref(dbp->pixbuf);
// <Data unpacked to pixels here, code removed for clarity>
dbp->pixbuf = gdk_pixbuf_copy(newpixbuf);
gdk_pixbuf_copy_options(newpixbuf, dbp->pixbuf);
dbp->image_modified = 1; // Trigger refresh
2nd image replacement - loads image from file and creates new pixbuf:
GdkPixbuf *newpixbuf = gdk_pixbuf_new_from_file_at_scale (file_path, BITMAP_WIDTH, BITMAP_HEIGHT, FALSE, &err);
// [Error check code removed for clarity]
g_object_unref(dbp->pixbuf);
dbp->pixbuf = gdk_pixbuf_copy(newpixbuf);
gdk_pixbuf_copy_options(newpixbuf, dbp->pixbuf);
dbp->image_modified = 1; // Trigger refresh
Now, if I do repeated calls to these routines I get odd interactions:
If I do clear, image_replace_1, clear, image_replace_1, clear, image_replace_1 etc... it works absolutely perfectly.
If I do clear, image_replace_2, clear, image_replace_2, clear, image_replace_2 etc... it works absolutely perfectly.
However, if I do clear, image_replace_1, clear, image_replace_2, clear, image_replace_1... it falls over with complaints that GDK_IS_PIXBUF(pixbuf) failed, and I can't for the life of me work out how that happens.
I have also tried this alternative code for the button refresh callback:
dbp->image = (GtkImage*)gtk_image_new_from_pixbuf(dbp->pixbuf);
gtk_button_set_image(GTK_BUTTON(dbp->btn), GTK_WIDGET(dbp->image));
But the result is the same.

Get texture coordinates of mouse position in SDL2?

I have the strict requirement to have a texture with a resolution of (let's say) 512x512, always (even if the window is bigger, and SDL basically scales the texture for me on rendering). This is because it's an emulator of a classic old computer that assumes a fixed texture; I can't rewrite the code to adapt to multiple texture sizes and/or texture ratios dynamically.
I use SDL_RenderSetLogicalSize() for the purpose I've described above.
Surely, when this is rendered into a window, I can get the mouse coordinates (window relative), and I can "scale" back to the texture position by getting the real window size (since the window can be resized).
However, there is a big problem here. As soon as the window's width:height ratio differs from the texture's ratio (for example in full screen mode, since the ratio of modern displays won't match the ratio I want to use), there are "black bars" at the sides or top/bottom. Which is fine, since I always want the same fixed texture ratio and SDL does it for me, etc. However, I cannot find a way to ask SDL where exactly my texture is rendered inside the window given the fixed ratio I forced, since I only need the position within the texture, and the exact texture origin is placed by SDL itself, not by me.
Surely I can write some code to figure out how those "black bars" shift the origin of the texture, but I hope there is a simpler and more elegant way to "ask" SDL about this, since it must already have the code that positions my texture somewhere, so I could re-use that information.
My very ugly (can be optimized, and floating point math can be avoided I think, but as the first try ...) solution:
static void get_mouse_texture_coords ( int x, int y )
{
    int win_x_size, win_y_size;
    SDL_GetWindowSize(sdl_win, &win_x_size, &win_y_size);
    // I don't know if there is a saner way for this ...
    // But we must figure out where the texture sits within the window, which
    // can vary since the fixed ratio differs from the window ratio (especially in full screen mode)
    double aspect_tex = (double)SCREEN_W / (double)SCREEN_H;
    double aspect_win = (double)win_x_size / (double)win_y_size;
    if (aspect_win >= aspect_tex) {
        // side ratio correction bars must be taken into account
        double zoom_factor = (double)win_y_size / (double)SCREEN_H;
        int bar_size = win_x_size - (int)((double)SCREEN_W * zoom_factor);
        mouse_x = (x - bar_size / 2) / zoom_factor;
        mouse_y = y / zoom_factor;
    } else {
        // top-bottom ratio correction bars must be taken into account
        double zoom_factor = (double)win_x_size / (double)SCREEN_W;
        int bar_size = win_y_size - (int)((double)SCREEN_H * zoom_factor);
        mouse_x = x / zoom_factor;
        mouse_y = (y - bar_size / 2) / zoom_factor;
    }
}
Where SCREEN_W and SCREEN_H are the dimensions of my texture (somewhat misleading names, but anyway). Input parameters x and y are the window-relative mouse position (reported by SDL). mouse_x and mouse_y are the result, the texture-based coordinates. This seems to work nicely. However, is there any sane solution or a better one? (See also the sketch after the event-handler code below.)
The code which calls the function above is in my event handler loop (which I call regularly, of course), something like this:
void handle_sdl_events ( void ) {
    SDL_Event event;
    while (SDL_PollEvent(&event)) {
        switch (event.type) {
            case SDL_MOUSEMOTION:
                get_mouse_texture_coords(event.motion.x, event.motion.y);
                break;
            [...]
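For what it's worth, newer SDL2 releases (2.0.18 and later, if I read the docs right) can apparently do this mapping for you via SDL_RenderWindowToLogical(). A sketch of that variant, assuming sdl_ren is my SDL_Renderer:

static void get_mouse_texture_coords ( int x, int y )
{
    float lx, ly;
    // Let SDL map window coordinates into the logical (texture) space
    // established by SDL_RenderSetLogicalSize(), black bars included.
    SDL_RenderWindowToLogical(sdl_ren, x, y, &lx, &ly);
    mouse_x = (int)lx;
    mouse_y = (int)ly;
}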

cairo / xlib not updating window content

I am trying to learn how to use Cairo 2D drawing library with xlib surfaces.
I wrote a little test program that allows creating multiple windows. Each window may have a custom paint() function that is called regularly to add some graphics content to the window, or redraw it completely if desired. There is also an option to define mouse and key listeners. The main routine checks for X events (to delegate them to the mouse and key listeners) and for timeouts for the periodic calls of those paint() functions.
I tried with the 1.14.6 version of Cairo (that is currently available as package in Ubuntu 16.04), and the latest 1.15.12, but the results are the same.
The expected behavior of this demo is to open 3 windows. One will have random rectangles being added, another one random texts, and the third random circles.
In addition, clicking into windows should produce lines (connecting to mouse click, or randomly), and using arrow keys should draw a red line in the window with circles.
The circles and text seem to show up regularly as expected. All three windows should have a white background, but two of them are black. And worst of all, the window with rectangles does not get updated much (and it does not matter whether it is the first window created or not - it is always the rectangles that do not show up properly).
They are only shown when the focus changes to or from that window - then the remaining rectangles that should have been drawn meanwhile suddenly appear.
I am calling cairo_surface_flush() on the surface of each window after adding any content, but that does not help. I also tried posting XEvents to that window of various kind (such as focus), they arrive, but rectangles do not show up.
Furthermore, even though drawing lines with the mouse works fine, drawing a line with the arrow keys suffers from the same problem - it is drawn, but not shown properly.
I am obviously wrong in some of my assumptions about what this library can do, but I am not sure where.
It seems that there are two competing versions of the drawing being shown, since sometimes one or two rectangles, or pieces of the red line, are flashing. Some kind of strange buffering or caching?
It may just be some bug in my program, I do not know.
Another observation - the black background is because drawing the white background happens before the window is shown, and thus those cairo_paint calls are somehow discarded. I do not know how to make the window appear earlier; it seems to appear only after some later changes on the screen.
I am stuck on this after a couple of desperate days, could you help me out at least in part, please?
The program is here: test_cairo.c
An example screenshot (with a broken red line drawn by keys, and rectangles not showing up properly): test_cairo.png
To compile (on Ubuntu 16.04 or similar system):
gcc -o test_cairo test_cairo.c -I/usr/include/cairo -lX11 -lcairo
X11 does not retain window content for you. When you get an Expose event, you have to repaint the area described by that event completely.
All three windows should have white background, but two of them are black.
You create your window with XCreateSimpleWindow, so their background attribute is set to black. The X11 server will fill exposed areas with black for you before sending an expose event. Since you do not tell cairo to draw a white background, the black stays.
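One quick mitigation for the black flash (a sketch, assuming the dsp, width and height variables from your program) is to ask X11 for a white background pixel at creation time:

/* Sketch: request a white background so the server fills exposed areas
 * with white instead of black before your cairo drawing arrives. */
Window win = XCreateSimpleWindow(dsp, DefaultRootWindow(dsp),
                                 0, 0, width, height, 0,
                                 BlackPixel(dsp, DefaultScreen(dsp)),
                                 WhitePixel(dsp, DefaultScreen(dsp)));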
Try this - it renders everything into an off-screen surface and copies that to the window surface in repaint_window(), so the window contents can always be restored from the off-screen copy:
--- test_cairo.c.orig 2018-07-28 09:53:10.000000000 +0200
+++ test_cairo.c 2018-07-29 10:52:43.268867754 +0200
@@ -63,6 +63,7 @@ static gui_mouse_callback mouse_callback
static cairo_t *windows[MAX_GUI_WINDOWS_COUNT];
static cairo_surface_t *surfaces[MAX_GUI_WINDOWS_COUNT];
+static cairo_surface_t *real_surfaces[MAX_GUI_WINDOWS_COUNT];
static Window x11windows[MAX_GUI_WINDOWS_COUNT];
static char *window_names[MAX_GUI_WINDOWS_COUNT];
@@ -79,7 +80,12 @@ long long usec()
void repaint_window(int window_handle)
{
draw_callbacks[window_handle](windows[window_handle]);
- cairo_surface_flush(surfaces[window_handle]);
+
+ cairo_t *cr = cairo_create(real_surfaces[window_handle]);
+ cairo_set_source_surface(cr, surfaces[window_handle], 0, 0);
+ cairo_paint(cr);
+ cairo_destroy(cr);
+ cairo_surface_flush(real_surfaces[window_handle]);
}
int gui_cairo_check_event(int *xclick, int *yclick, int *win)
@@ -149,7 +155,6 @@ void draw_windows_title(int window_handl
sprintf(fullname, "Mikes - %d - [%s]", window_handle, context_names[current_context]);
else
sprintf(fullname, "Mikes - %s - [%s]", window_names[window_handle], context_names[current_context]);
- cairo_surface_flush(surfaces[window_handle]);
XStoreName(dsp, x11windows[window_handle], fullname);
}
@@ -179,20 +184,17 @@ int gui_open_window(gui_draw_callback pa
}
if (window_handle < 0) return -1;
- surfaces[window_handle] = gui_cairo_create_x11_surface(&width, &height, window_handle);
+ real_surfaces[window_handle] = gui_cairo_create_x11_surface(&width, &height, window_handle);
+ surfaces[window_handle] = cairo_surface_create_similar(real_surfaces[window_handle], CAIRO_CONTENT_COLOR, width, height);
windows[window_handle] = cairo_create(surfaces[window_handle]);
mouse_callbacks[window_handle] = 0;
draw_callbacks[window_handle] = paint;
window_update_periods[window_handle] = update_period_in_ms;
window_names[window_handle] = 0;
-
- cairo_surface_flush(surfaces[window_handle]);
cairo_set_source_rgb(windows[window_handle], 1, 1, 1);
cairo_paint(windows[window_handle]);
-
- cairo_surface_flush(surfaces[window_handle]);
draw_callbacks[window_handle](windows[window_handle]);
@@ -201,7 +203,6 @@
else next_window_update[window_handle] = 0;
draw_windows_title(window_handle);
- cairo_surface_flush(surfaces[window_handle]);
window_in_use[window_handle] = 1;
return window_handle;
@@ -213,6 +214,7 @@ void gui_close_window(int window_handle)
cairo_destroy(windows[window_handle]);
cairo_surface_destroy(surfaces[window_handle]);
+ cairo_surface_destroy(real_surfaces[window_handle]);
window_in_use[window_handle] = 0;
int no_more_windows = 1;
for (int i = 0; i < MAX_GUI_WINDOWS_COUNT; i++)

How To Slow Down glutIdleFunc Animation Speed

Dear all, I am trying to create an animation using OpenGL through glutIdleFunc(). Below is my code:
float t = 0.0;

void idle (void)
{
    t += 0.1;
    if (t > 2*pi)
    {
        t = 0.0;
    }
    glutPostRedisplay();
}
//in main function
glutIdleFunc(idle);
I have been trying to adjust the increment of t in order to slow down my animation, but somehow the animation keeps moving so fast that I can't catch it with my eye. Does anyone know how to slow down this kind of animation? Thanks!
You need to use the time since the last function call rather than a straight value as your metric, since that time may vary.
For more information, read valkea's answer on GameDev, which suggests that you use glutGet(GLUT_ELAPSED_TIME) to calculate that value.
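For a concrete idea of what that looks like with the question's idle function (this is a sketch; the 0.5 rad/s rate is just an example):

static int lastMs = 0;

void idle (void)
{
    int nowMs = glutGet(GLUT_ELAPSED_TIME);   // ms since glutInit()
    float dt = (nowMs - lastMs) / 1000.0f;    // seconds since the last call
    lastMs = nowMs;

    t += dt * 0.5f;                           // advance at 0.5 rad/s regardless of call rate
    if (t > 2*pi)
    {
        t -= 2*pi;
    }
    glutPostRedisplay();
}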
Rather than trying to find an artificial t value to use in your idle function, you'll probably be better off using a real timer such as C's time(). Then, simply advance your animation by the appropriate amount given the elapsed time since the last frame was drawn.
Here's how it might look:
#include <time.h>
#include <math.h>

time_t lastTime;      // initialise with time(NULL) before the first frame
double angle = 0.0;

void draw() {
    const time_t now = time(NULL);
    const double dt_s = difftime(now, lastTime);
    // Update your frame based on the elapsed time. For example, update an angle
    // based on a specified rotation rate (omega_deg_s):
    const double omega_deg_s = 10.0;
    angle += dt_s * omega_deg_s;
    angle = fmod(angle, 360.0);
    // Now draw something based on the new angle info:
    draw_my_scene(angle);
    // Record current time for next time:
    lastTime = now;
}

Constant game speed independent of variable FPS in OpenGL with GLUT?

I've been reading Koen Witters' detailed article about different game loop solutions but I'm having some problems implementing the last one with GLUT, which is the recommended one.
After reading a couple of articles, tutorials and code from other people on how to achieve a constant game speed, I think that what I currently have implemented (I'll post the code below) is what Koen Witters called Game Speed dependent on Variable FPS, the second on his article.
First, from my searching experience, there are a couple of people who probably have the knowledge to help out on this but don't know what GLUT is, so I'm going to try and explain (feel free to correct me) the functions of this OpenGL toolkit that are relevant to my problem. Skip this section if you know what GLUT is and how to play with it.
GLUT Toolkit:
GLUT is an OpenGL toolkit and helps with common tasks in OpenGL.
The glutDisplayFunc(renderScene) takes a pointer to a renderScene() function callback, which will be responsible for rendering everything. The renderScene() function will only be called once after the callback registration.
The glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0) takes the number of milliseconds to pass before calling the callback processAnimationTimer(). The last argument is just a value to pass to the timer callback. The processAnimationTimer() will not be called each TIMER_MILLISECONDS but just once.
The glutPostRedisplay() function requests GLUT to render a new frame, so we need to call this every time we change something in the scene.
The glutIdleFunc(renderScene) could be used to register a callback to renderScene() (this does not make glutDisplayFunc() irrelevant) but this function should be avoided because the idle callback is continuously called when events are not being received, increasing the CPU load.
The glutGet(GLUT_ELAPSED_TIME) function returns the number of milliseconds since glutInit was called (or first call to glutGet(GLUT_ELAPSED_TIME)). That's the timer we have with GLUT. I know there are better alternatives for high resolution timers, but let's keep with this one for now.
I think this is enough information on how GLUT renders frames, so people who didn't know about it can also pitch in on this question and try to help if they feel like it.
Current Implementation:
Now, I'm not sure I have correctly implemented the second solution proposed by Koen, Game Speed dependent on Variable FPS. The relevant code for that goes like this:
#define TICKS_PER_SECOND 30
#define MOVEMENT_SPEED 2.0f
const int TIMER_MILLISECONDS = 1000 / TICKS_PER_SECOND;

int previousTime;
int currentTime;
int elapsedTime;

void renderScene(void) {
    (...)
    // Setup the camera position and looking point
    SceneCamera.LookAt();
    // Do all drawing below...
    (...)
}

void processAnimationTimer(int value) {
    // Set the timer up to be called again
    glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);
    // Get the time when the previous frame was rendered
    previousTime = currentTime;
    // Get the current time (in milliseconds) and calculate the elapsed time
    currentTime = glutGet(GLUT_ELAPSED_TIME);
    elapsedTime = currentTime - previousTime;
    /* Multiply the camera direction vector by constant speed then by the
       elapsed time (in seconds) and then move the camera */
    SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));
    // Request to render a new frame (this will call my renderScene() once)
    glutPostRedisplay();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    (...)
    glutDisplayFunc(renderScene);
    (...)
    // Set the timer up to be called a first time
    glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);
    // Read the current time since glutInit was called
    currentTime = glutGet(GLUT_ELAPSED_TIME);
    glutMainLoop();
}
This implementation doesn't feel right. It works in the sense that it keeps the game speed constant regardless of the FPS, so moving from point A to point B takes the same time no matter how high or low the framerate is. However, I believe I'm limiting the game framerate with this approach. [EDIT: Each frame will only be rendered when the timer callback is called, which means the framerate will be roughly around TICKS_PER_SECOND frames per second. This doesn't feel right; you shouldn't limit your powerful hardware, it's wrong. It's my understanding though, that I still need to calculate the elapsedTime. Just because I'm telling GLUT to call the timer callback every TIMER_MILLISECONDS, it doesn't mean it will always do that on time.]
I'm not sure how I can fix this and, to be completely honest, I have no idea what the game loop in GLUT is - you know, the while( game_is_running ) loop in Koen's article. [EDIT: It's my understanding that GLUT is event-driven and that the game loop starts when I call glutMainLoop() (which never returns), yes?]
I thought I could register an idle callback with glutIdleFunc() and use that as a replacement for glutTimerFunc(), only rendering when necessary (instead of all the time as usual). But when I tested this with an empty callback (like void gameLoop() {}) that did basically nothing (only a black screen), the CPU spiked to 25% and remained there until I killed the game, after which it went back to normal. So I don't think that's the path to follow.
Basing all movements/animations on glutTimerFunc() is definitely not a good approach, as it limits my game to a constant FPS - not cool. Or maybe I'm using it wrong and my implementation is not right?
How exactly can I have a constant game speed with variable FPS? More exactly, how do I correctly implement Koen's Constant Game Speed with Maximum FPS solution (the fourth one on his article) with GLUT? Maybe this is not possible at all with GLUT? If not, what are my alternatives? What is the best approach to this problem (constant game speed) with GLUT?
[EDIT] Another Approach:
I've been experimenting and here's what I've been able to achieve. Instead of calculating the elapsed time in a timed function (which limits my game's framerate) I'm now doing it in renderScene(). Whenever changes to the scene happen (e.g. the camera moving, some object animating, etc.) I call glutPostRedisplay(), which will make a call to renderScene(). I can use the elapsed time in this function to move my camera, for instance.
My code has now turned into this:
int previousTime;
int currentTime;
int elapsedTime;

void renderScene(void) {
    (...)
    // Get the time when the previous frame was rendered
    previousTime = currentTime;
    // Get the current time (in milliseconds) and calculate the elapsed time
    currentTime = glutGet(GLUT_ELAPSED_TIME);
    elapsedTime = currentTime - previousTime;
    /* Multiply the camera direction vector by constant speed then by the
       elapsed time (in seconds) and then move the camera */
    SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));
    // Setup the camera position and looking point
    SceneCamera.LookAt();
    // All drawing code goes inside this function
    drawCompleteScene();
    glutSwapBuffers();
    /* Redraw the frame ONLY if the user is moving the camera
       (similar code will be needed to redraw the frame for other events) */
    if(!IsTupleEmpty(cameraDirection)) {
        glutPostRedisplay();
    }
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    (...)
    glutDisplayFunc(renderScene);
    (...)
    currentTime = glutGet(GLUT_ELAPSED_TIME);
    glutMainLoop();
}
Conclusion: it's working, or so it seems. If I don't move the camera, the CPU usage is low and nothing is being rendered (for testing purposes I only have a grid extending for 4000.0f, while zFar is set to 1000.0f). When I start moving the camera the scene starts redrawing itself. If I keep pressing the move keys, the CPU usage will increase; this is normal behavior. It drops back when I stop moving.
Unless I'm missing something, it seems like a good approach for now. I did find this interesting article on iDevGames, and this implementation is probably affected by the problem described in that article. What are your thoughts on that?
Please note that I'm just doing this for fun; I have no intention of creating some game to distribute or anything like that, not in the near future at least. If I did, I would probably go with something other than GLUT. But since I'm using GLUT, and other than the problem described on iDevGames, do you think this latest implementation is sufficient for GLUT? The only real issue I can think of right now is that I'll need to keep calling glutPostRedisplay() every time something in the scene changes, and keep calling it until there's nothing new to redraw. A little complexity added to the code for a better cause, I think.
What do you think?
GLUT is designed to be the game loop. When you call glutMainLoop(), it executes a 'for loop' with no termination condition except the exit() signal. You can implement your program kind of like you're doing now, but you need some minor changes. First, if you want to know what the FPS is, you should put that tracking into the renderScene() function, not into your update function. Naturally, your update function is being called as fast as specified by the timer, and you're treating elapsedTime as a measure of time between frames. In general, that will be true because you're calling glutPostRedisplay rather slowly and GLUT won't try to update the screen if it doesn't need to (there's no need to redraw if the scene hasn't changed). However, there are other times that renderScene will be called. For example, if you drag something across the window. If you did that, you'd see a higher FPS (if you were properly tracking the FPS in the render function).
You could use glutIdleFunc, which is called continuously whenever possible--similar to the while(game_is_running) loop. That is, whatever logic you would otherwise put into that while loop, you could put into the callback for glutIdleFunc. You can avoid using glutTimerFunc by keeping track of the ticks on your own, as in the article you linked (using glutGet(GLUT_ELAPSED_TIME)).
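For instance, the article's "Constant Game Speed with Maximum FPS" loop could be folded into an idle callback roughly like this (a sketch; update_game() and render() stand in for your own logic and drawing):

#define TICKS_PER_SECOND 25
#define SKIP_TICKS (1000 / TICKS_PER_SECOND)
#define MAX_FRAMESKIP 5

static int next_game_tick;  // initialise with glutGet(GLUT_ELAPSED_TIME)

void gameLoop(void)         // registered with glutIdleFunc(gameLoop)
{
    int loops = 0;
    // Catch the game logic up at a fixed rate, allowing up to
    // MAX_FRAMESKIP logic ticks per rendered frame if we fall behind.
    while (glutGet(GLUT_ELAPSED_TIME) > next_game_tick && loops < MAX_FRAMESKIP)
    {
        update_game();                 // hypothetical fixed-rate game logic
        next_game_tick += SKIP_TICKS;
        loops++;
    }
    // Render as often as possible, interpolating between game ticks.
    float interpolation = (glutGet(GLUT_ELAPSED_TIME) + SKIP_TICKS - next_game_tick)
                          / (float)SKIP_TICKS;
    render(interpolation);             // e.g. stash interpolation, then glutPostRedisplay()
}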
Here, as an example, is a mouse-driven rotation matrix that updates at a fixed frame-rate, independently of the rendering frame-rate. In my program, the space-bar toggles benchmarking mode and determines the Boolean fxFPS.
Let go of the mouse button while dragging, and you can 'throw' an object transformed by this matrix.
If fxFPS is true then the rendering frame-rate is throttled to the animation frame-rate; otherwise identical frames are drawn repeatedly for benchmarking, even though not enough milliseconds will have passed to trigger any animation.
If you're thinking about slowing down AND speeding up frames, you have to think carefully about whether you mean rendering or animation frames in each case. In this example, render throttling for simple animations is combined with animation acceleration, for any cases when frames might be dropped in a potentially slow animation.
To accelerate the animation, rotations are performed repeatedly in a loop. Such a loop is not too slow compared with the option of doing trig with an adaptive rotation angle; just be careful what you put inside any such loop, because it takes longer to execute the lower the FPS gets. This loop takes far less than an extra frame to complete for each frame-drop that it accounts for, so it's reasonably safe.
int xSt, ySt, xCr, yCr, msM = 0, msOld = 0;
bool dragging = false, spin = false, moving = false;
glm::mat4 mouseRot(1.0f), continRot(1.0f);
float twoOvHght; // Set in reshape()

glm::mat4 mouseRotate(bool slow) {
    glm::vec3 axis(twoOvHght * (yCr - ySt), twoOvHght * (xCr - xSt), 0); // Perpendicular to mouse motion
    float len = glm::length(axis);
    if (slow) { // Slow rotation; divide angle by mouse-delay in milliseconds; it is multiplied by frame delay to speed it up later
        int msP = msM - msOld;
        len /= (msP != 0 ? msP : 1);
    }
    if (len != 0) axis = glm::normalize(axis); else axis = glm::vec3(0.0f, 0.0f, 1.0f);
    return rotate(axis, cosf(len), sinf(len));
}

void mouseMotion(int x, int y) {
    moving = (xCr != x) | (yCr != y);
    if (dragging & moving) {
        xSt = xCr; xCr = x; ySt = yCr; yCr = y; msOld = msM; msM = glutGet(GLUT_ELAPSED_TIME);
        mouseRot = mouseRotate(false) * mouseRot;
    }
}

void mouseButton(int button, int state, int x, int y) {
    if (button == 0) {
        if (state == 0) {
            dragging = true; moving = false; spin = false;
            xCr = x; yCr = y; msM = glutGet(GLUT_ELAPSED_TIME);
            glutPostRedisplay();
        } else {
            dragging = false; spin = moving;
            if (spin) continRot = mouseRotate(true);
        }
    }
}
And then later...
bool fxFPS = false;
int T = 0, ms = 0;
const int fDel = 20;

void display() {
    ms = glutGet(GLUT_ELAPSED_TIME);
    if (T <= ms) {
        T = ms + fDel;
        for (int lp = 0; lp < fDel; lp++) {
            orient = rotY * orient; orientCu = rotX * rotY * orientCu; // Auto-rotate two orientation quaternions
            if (spin) mouseRot = continRot * mouseRot; // Track rotation from throwing action by mouse
        }
        orient1 = glm::mat4_cast(orient); orient2 = glm::mat4_cast(orientCu);
    }
    // Top secret animation code that will make me rich goes here
    glutSwapBuffers();
    if (spin | dragging) { if (fxFPS) while (glutGet(GLUT_ELAPSED_TIME) < T); glutPostRedisplay(); } // Fast, repeated updates of the screen
}
Enjoy throwing things around an axis; I find that most people do. Notice that the fps affects nothing whatsoever, in the interface or the rendering. I've minimised the use of divisions, so comparisons should be nice and accurate, and any inaccuracy in the clock does not accumulate unnecessarily.
Syncing of multiplayer games is another 18 conversations, I would judge.
