Load an image only once (SDL2) - c

I want to simulate a TV screen in my game. A 'no signal' image is displayed the whole time; it will only be replaced by a scene of one man shooting another, that's all. So I wrote this, which loads my image every time:
void Display_nosignal(SDL_Texture* Scene, SDL_Renderer* Rendu)
{
    SDL_Rect dest = {416, 0, 416, 416};
    SDL_Rect dest2 = {SCENE};
    SDL_SetRenderTarget(Rendu, Scene);
    SDL_Rect src = {(1200-500)/2, 0, 500, 545};
    /*Scene = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg");
    if (Scene == NULL){printf("Erreur no signal : %s\n", SDL_GetError());}*/
    if (Scene == NULL)
    {
        Scene = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg");
        if (Scene == NULL){printf("Erreur no signal : %s\n", SDL_GetError());}
    }
    SDL_RenderCopy(Rendu, Scene, NULL, &src);
    SDL_SetRenderTarget(Rendu, NULL);
    //SDL_DestroyTexture(Scene);
    //500, 545
}
and it causes memory leaks. I've tried destroying the texture in the loop, etc., but nothing changes. So, can you advise me on a way to load the image once at the very beginning, keep it, and display it only when needed?

I agree with the commenters that a dedicated texture loader is the correct solution, but if you only want this behavior for one particular texture it may be overkill. In that case you can write a separate function which loads this particular texture, call it once at startup, and just reuse the result afterwards (a rough sketch follows).
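For instance, here is a minimal sketch of that load-once approach, assuming the same renderer and file path as in your code (the names Preload_nosignal and Unload_nosignal are only illustrative):
static SDL_Texture* No_Signal = NULL;  /* loaded once, reused every frame */
/* Call once at startup, after the renderer has been created */
int Preload_nosignal(SDL_Renderer* Rendu)
{
    No_Signal = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg");
    if (No_Signal == NULL) { printf("Erreur no signal : %s\n", SDL_GetError()); return -1; }
    return 0;
}
/* Call whenever the 'no signal' picture should be drawn */
void Display_nosignal(SDL_Texture* Scene, SDL_Renderer* Rendu)
{
    SDL_Rect src = {(1200-500)/2, 0, 500, 545};
    SDL_SetRenderTarget(Rendu, Scene);
    SDL_RenderCopy(Rendu, No_Signal, NULL, &src);
    SDL_SetRenderTarget(Rendu, NULL);
}
/* Call once at shutdown */
void Unload_nosignal(void)
{
    if (No_Signal) { SDL_DestroyTexture(No_Signal); No_Signal = NULL; }
}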
Alternatively, you can use static variables. If a variable declared in a function is marked as static it will retain its value across calls to that function. You can find a simple example here (it's a tutorial-grade source but it shows basic usage) or here (SO source).
By modifying your code ever so slightly you should be able to make sure that the texture is loaded only once. By marking a pointer to it as static you ensure that its value (the address of the loaded texture) is not lost between calls to the function. Afterwards the pointer will live in memory until the program terminates. Thanks to this, we do not have to free the texture's memory (unless you explicitly want to free it at some point, but then the texture manager is probably a better idea). A memory leak will not occur, since we are never going to lose the reference to the texture.
void Display_nosignal(SDL_Texture* Scene, SDL_Renderer* Rendu)
{
    static SDL_Texture* Scene_cache = NULL; // Remembers the texture across function calls
    SDL_Rect dest = {416, 0, 416, 416};
    SDL_Rect dest2 = {SCENE};
    SDL_SetRenderTarget(Rendu, Scene);
    SDL_Rect src = {(1200-500)/2, 0, 500, 545};
    if (Scene_cache == NULL) // On the first call Scene_cache is NULL; on later calls it already points to the loaded texture
    {
        Scene_cache = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg"); // Load the texture only when it is missing
        if (Scene_cache == NULL){printf("Erreur no signal : %s\n", SDL_GetError());}
    }
    Scene = Scene_cache; // Set Scene to point to the loaded texture
    SDL_RenderCopy(Rendu, Scene, NULL, &src);
    SDL_SetRenderTarget(Rendu, NULL);
}
If you care about performance and memory usage you should read about the consequences of static variables, for instance their impact on cache and how they work internally. This might be considered a "dirty hack", but it may be just enough for a small project that does not need a bigger solution.

Related

Saving Kinect v2 frames in an array

I am trying to develop software based on Kinect v2 and I need to keep the captured frames in an array. I have a problem I don't have any idea about, as follows.
The captured frames are processed by my processing class, and the processed WriteableBitmap is set as the source of the image box in my UI window, which works perfectly and gives me real-time frames in my UI.
for example:
/// Color
_ProcessingInstance.ProcessColor(colorFrame);
ImageBoxRGB.Source = _ProcessingInstance.colorBitmap;
But when I want to assign this to an element of an array, all of the elements in the array end up identical to the first frame! I should mention that this assignment happens in the frame-reading event, the same place as the code above.
the code:
ColorFrames_Array[CapturingFrameCounter] = _ProcessingInstance.colorBitmap;
The equality check in the Immediate Window:
ColorFrames_Array[0].Equals(ColorFrames_Array[1])
true
ColorFrames_Array[0].Equals(ColorFrames_Array[2])
true
Please give me some hints about this problem. Any idea?
Thanks Yar
You are right and when I create a new instance, frames are saved correctly.
But my code was based on the Microsoft example, and the problem is that creating new instances causes a memory leak because WriteableBitmap is not disposable.
A similar problem is discussed in the following link, where the frames are frozen to the first frame; this comes from the intrinsic properties of WriteableBitmap:
http://www.wintellect.com/devcenter/jprosise/silverlight-s-big-image-problem-and-what-you-can-do-about-it
Therefore I used a strategy similar to the above solution and tried to get a copy instead of the original bitmap frame. In this scenario, I created a new WriteableBitmap for each element of ColorFrames_Array[] at the initialization step.
ColorFrames_Array = new WriteableBitmap[MaximumFramesNumbers_Capturing];
for (int i = 0; i < MaximumFramesNumbers_Capturing; ++i)
{
    ColorFrames_Array[i] = new WriteableBitmap(color_width, color_height, 96.0, 96.0, PixelFormats.Bgr32, null);
}
And finally, use the Clone method to copy the bitmap frames into the array elements.
ColorFrames_ArrayBuffer[CapturingFrameCounter] = _ProcessingInstance.colorBitmap.Clone();
While the above solution works, it has a huge memory leak!
Therefore I now use plain arrays and the CopyPixels method (of WriteableBitmap) to copy the pixels of each frame into an array and hold them there (while the corresponding WriteableBitmap is disposed of correctly, without leaking).
public Array[] ColorPixels_Array;
for (int i = 0; i < MaximumFramesNumbers_Capturing; ++i)
{
    ColorPixels_Array[i] = new int[color_Width * color_Height];
}
colorBitmap.CopyPixels(ColorPixels_Array[Counter_CapturingFrame], color_Width * 4, 0);
Finally, when we want to save the arrays of pixels, we need to convert them back into new WriteableBitmap instances and write those to disk.
wb = new WriteableBitmap(color_Width, color_Height, 96.0, 96.0, PixelFormats.Bgr32, null);
wb.WritePixels(new Int32Rect(0, 0, color_Width, color_Height),
               Ar_Px,
               color_Width * 4, 0);

WinAPI managing brushes

I don't get how I am supposed to handle brushes for coloring static text backgrounds.
At first everything looks the way it is supposed to:
However, after the statics have been redrawn several times, they change to this:
I also noticed that this depends on whether I return the same brush in every case (for debugging) or use the actual code with different cases (grey boxes after the first redraw).
My WM_CTLCOLORSTATIC message handling looks like this:
case WM_CTLCOLORSTATIC:
{
    HDC hdcStatic = (HDC) wParam;
    SetTextColor(hdcStatic, RGB(0,0,0));
    HBRUSH hbrDefault = CreateSolidBrush(RGB(255,255,255));
    return (INT_PTR)hbrDefault;
}
(Simplified for debugging)
I guess this has something to do with freeing the brushes after use with DeleteObject(), but how can I do that when I need to return the brushes, yet want to delete them before leaving the function?
MSDN resources didn't help: WM_CTLCOLORSTATIC
EDIT : I found my mistake.
I declared my brushes as global variables like this:
HBRUSH hbrBkFoodCat[FOODCAT_LENGTH];
HBRUSH hbrDefault;
But then I initialised them on startup like this:
for (int i = 0; i < FOODCAT_LENGTH; i++) {
    hbrBkFoodCat[i] = CreateSolidBrush(foodCatClr[i]);
}
HBRUSH hbrDefault = CreateSolidBrush(RGB(255,255,255));
As you can see, I accidentally declared hbrDefault again, but this time as a local variable, so during message handling I got those grey boxes (a NULL brush).
What I tried (a stupid idea, I know) was to initialize them in the message handler. Since I just copy-pasted that initialization straight into the handler, it became a local variable again, but this time it was in scope for the return. This led me to assume something was wrong with freeing the brushes, because it took numerous redraws before I got that grey background (I still don't get that part, though).
Thank you all for your help anyway!
The MSDN documentation for that message says that you MUST free the brush. But you don't have to create and free it every time. Just create it once and reuse it. Free it when you don't need it anymore, but not in the message handler.
Do not create a new brush every time you process a WM_CTLCOLORSTATIC message. That is a resource leak. Create the brush one time, either when you first create the static text control, or when it sends you WM_CTLCOLORSTATIC for the first time. Keep returning that same brush for every WM_CTLCOLORSTATIC message:
HBRUSH hbrStaticBkg = NULL;
...
case WM_CTLCOLORSTATIC:
{
    HDC hdc = (HDC) wParam;
    SetTextColor(hdc, RGB(...));
    if (!hbrStaticBkg) hbrStaticBkg = CreateSolidBrush(RGB(...));
    return (LRESULT) hbrStaticBkg;
}
Destroy the brush only after you have destroyed the static text control.
DestroyWindow(hwndStatic);
if (hbrStaticBkg) {
    DeleteObject(hbrStaticBkg);
    hbrStaticBkg = NULL;
}
If you want to change the background color during the lifetime of the static text control, destroy the brush and invalidate the control to trigger a repaint, then create the new brush when requested.
COLORREF clrStaticText = RGB(0,0,0);
COLORREF clrStaticBkg = RGB(255,255,255);
HBRUSH hbrStaticBkg = NULL;
...
case WM_CTLCOLORSTATIC:
{
    HDC hdc = (HDC) wParam;
    SetTextColor(hdc, clrStaticText);
    if (!hbrStaticBkg) hbrStaticBkg = CreateSolidBrush(clrStaticBkg);
    return (LRESULT) hbrStaticBkg;
}
...
clrStaticText = RGB(...);
clrStaticBkg = RGB(...);
if (hbrStaticBkg) {
    DeleteObject(hbrStaticBkg);
    hbrStaticBkg = NULL;
}
InvalidateRect(hwndStatic, NULL, TRUE);

Using SDL_BlitScaled to create scaled copies of surfaces

So I'm working on some SDL2 wrapper stuff, and I'm trying to use SDL_BlitScaled to copy the data in a source surface into a destination surface which I've already created, like so:
SDL_Surface *loaded = IMG_Load("test.png");
SDL_SetSurfaceBlendMode(loaded, SDL_BLENDMODE_NONE);
SDL_Surface *out = SDL_CreateRGBSurface(0, 100, 100, loaded->format->BitsPerPixel,
loaded->format->Rmask, loaded->format->Gmask, loaded->format->Bmask, loaded->format->Amask);
SDL_BlitScaled(loaded, NULL, out, NULL);
SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, out);
SDL_Rect rec = {10, 10, 110, 110};
SDL_RenderCopy(ren, tex, NULL, &rec);
Don't worry about my renderer or window, etc. I've isolated the problem to somewhere in this code. The image does not appear on the screen, though it does if I create a texture from the loaded surface directly. Thoughts? I imagine I'm misusing either CreateRGBSurface or BlitScaled (I did see another question about this, but the solution was unclear).
For me I had to do:
SDL_SetSurfaceBlendMode(loaded , SDL_BLENDMODE_NONE);
SDL_SetSurfaceBlendMode(out, SDL_BLENDMODE_NONE);
for it to work; otherwise some strange blending happens.
The docs page for this function says:
To copy a surface to another surface (or texture) without blending with the existing data, the blendmode of the SOURCE surface should be
set to 'SDL_BLENDMODE_NONE'.
So setting the blend mode on loaded alone is probably enough.
Edit: In the end I came up with this:
struct FreeSurface_Functor
{
    void operator() (SDL_Surface* pSurface) const
    {
        if (pSurface)
        {
            SDL_FreeSurface(pSurface);
        }
    }
};

typedef std::unique_ptr<SDL_Surface, FreeSurface_Functor> SDL_SurfacePtr;

class SDLHelpers
{
public:
    SDLHelpers() = delete;

    static SDL_SurfacePtr ScaledCopy(SDL_Surface* src, SDL_Rect* dstSize)
    {
        SDL_SurfacePtr scaledCopy(SDL_CreateRGBSurface(0,
            dstSize->w, dstSize->h,
            src->format->BitsPerPixel,
            src->format->Rmask, src->format->Gmask, src->format->Bmask, src->format->Amask));

        // Get the old mode
        SDL_BlendMode oldBlendMode;
        SDL_GetSurfaceBlendMode(src, &oldBlendMode);

        // Set the new mode so copying the source won't change the source
        SDL_SetSurfaceBlendMode(src, SDL_BLENDMODE_NONE);

        // Do the copy
        if (SDL_BlitScaled(src, NULL, scaledCopy.get(), dstSize) != 0)
        {
            scaledCopy.reset();
        }

        // Restore the original blending mode
        SDL_SetSurfaceBlendMode(src, oldBlendMode);
        return scaledCopy;
    }
};
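For reference, a call site might look roughly like this (ren and "test.png" are taken from the question above; everything else is illustrative):
SDL_SurfacePtr loaded(IMG_Load("test.png"));
SDL_Rect dstSize = {0, 0, 100, 100};
SDL_SurfacePtr scaled = SDLHelpers::ScaledCopy(loaded.get(), &dstSize);
if (scaled)
{
    SDL_Texture* tex = SDL_CreateTextureFromSurface(ren, scaled.get());
    SDL_Rect rec = {10, 10, 100, 100};
    SDL_RenderCopy(ren, tex, NULL, &rec);
    SDL_DestroyTexture(tex); // the surfaces clean themselves up through the unique_ptr deleter
}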

OpenGL sampler object not updating bound texture

I am currently binding a sampler object to a texture unit (GL_TEXTURE12 to be specific) with
glBindSampler(12, sampler)
and the initial settings are very visible compared to the texture's own settings. But when I change the sampler's parameters with
glSamplerParameteri(sampler, GL_TEXTURE_***_FILTER, filter);
the texture bound to the texture unit filters just the same as it did before with no apparent change from any perspective.
I have tried re-binding the sampler to the texture unit again after the parameter change but I'm pretty sure this isn't required.
What changes can I make to get this working?
Since I could not explain in the comments why the statement "I have tried re-binding the sampler to the texture unit again after the parameter change but I'm pretty sure this isn't required" makes no sense, consider the following C pseudo-code.
/* Thin state wrapper */
struct SamplerObject {
    SamplerState sampler_state;
};

/* Subsumes SamplerObject */
struct TextureObject {
    ImageData* image_data;
    ...
    SamplerState sampler_state;
};

/* Binding point: GL4.x gives you at least 80 of these (16 per shader stage) */
struct TextureImageUnit {
    TextureObject* bound_texture; /* Default = NULL */
    SamplerObject* bound_sampler; /* Default = NULL */
} TextureUnits [16 * 5];

vec4 texture2D ( GLuint n,
                 vec2 tex_coords )
{
    /* By default, sampler state is sourced from the bound texture object */
    SamplerState* sampler_state = &TextureUnits [n].bound_texture->sampler_state;

    /* If there is a sampler object bound to texture unit N, use its state instead
       of the sampler state built into the bound texture object. */
    if (TextureUnits [n].bound_sampler != NULL)
        sampler_state = &TextureUnits [n].bound_sampler->sampler_state;
    ...
}
I believe the source of confusion is coming from the fact that in GLSL the uniforms used to identify which texture image unit to sample from (and how) are called sampler[...]. Hopefully this clears up some of the confusion so we are all on the same page.
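For completeness, a minimal sketch of the expected call sequence, assuming texture unit 12 as in the question (texture and shader setup omitted). Note that no re-binding is needed after changing the parameters:
GLuint sampler;
glGenSamplers(1, &sampler);
glBindSampler(12, sampler);  /* attach the sampler object to texture unit 12 */
/* These take effect the next time the shader samples through unit 12;
   the sampler's state overrides the bound texture's own filter state. */
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_NEAREST);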

Constant game speed independent of variable FPS in OpenGL with GLUT?

I've been reading Koen Witters' detailed article about different game loop solutions, but I'm having some problems implementing the last one with GLUT, which is the recommended one.
After reading a couple of articles, tutorials, and code from other people on how to achieve a constant game speed, I think that what I currently have implemented (I'll post the code below) is what Koen Witters called Game Speed dependent on Variable FPS, the second one in his article.
First, from my searching experience, there are a couple of people who probably have the knowledge to help out with this but don't know what GLUT is, so I'm going to try to explain (feel free to correct me) the functions of this OpenGL toolkit that are relevant to my problem. Skip this section if you know what GLUT is and how to work with it.
GLUT Toolkit:
GLUT is an OpenGL toolkit and helps with common tasks in OpenGL.
The glutDisplayFunc(renderScene) takes a pointer to a renderScene() function callback, which will be responsible for rendering everything. The renderScene() function will only be called once after the callback registration.
The glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0) takes the number of milliseconds to pass before calling the callback processAnimationTimer(). The last argument is just a value to pass to the timer callback. The processAnimationTimer() will not be called each TIMER_MILLISECONDS but just once.
The glutPostRedisplay() function requests GLUT to render a new frame, so we need to call this every time we change something in the scene.
The glutIdleFunc(renderScene) could be used to register a callback to renderScene() (this does not make glutDisplayFunc() irrelevant), but this function should be avoided because the idle callback is called continuously whenever events are not being received, increasing the CPU load.
The glutGet(GLUT_ELAPSED_TIME) function returns the number of milliseconds since glutInit was called (or first call to glutGet(GLUT_ELAPSED_TIME)). That's the timer we have with GLUT. I know there are better alternatives for high resolution timers, but let's keep with this one for now.
I think this is enough information on how GLUT renders frames, so people who didn't know about it can also pitch in on this question and try to help if they feel like it.
Current Implementation:
Now, I'm not sure I have correctly implemented the second solution proposed by Koen, Game Speed dependent on Variable FPS. The relevant code for that goes like this:
#define TICKS_PER_SECOND 30
#define MOVEMENT_SPEED 2.0f

const int TIMER_MILLISECONDS = 1000 / TICKS_PER_SECOND;

int previousTime;
int currentTime;
int elapsedTime;

void renderScene(void) {
    (...)
    // Setup the camera position and looking point
    SceneCamera.LookAt();
    // Do all drawing below...
    (...)
}

void processAnimationTimer(int value) {
    // Set up the timer to be called again
    glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);
    // Get the time when the previous frame was rendered
    previousTime = currentTime;
    // Get the current time (in milliseconds) and calculate the elapsed time
    currentTime = glutGet(GLUT_ELAPSED_TIME);
    elapsedTime = currentTime - previousTime;
    /* Multiply the camera direction vector by constant speed then by the
       elapsed time (in seconds) and then move the camera */
    SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));
    // Request to render a new frame (this will call my renderScene() once)
    glutPostRedisplay();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    (...)
    glutDisplayFunc(renderScene);
    (...)
    // Set up the timer to be called a first time
    glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);
    // Read the current time since glutInit was called
    currentTime = glutGet(GLUT_ELAPSED_TIME);
    glutMainLoop();
    return 0;
}
This implementation doesn't feel right. It works in the sense that it helps keep the game speed constant regardless of the FPS, so that moving from point A to point B takes the same time no matter how high or low the framerate is. However, I believe I'm limiting the game framerate with this approach. [EDIT: Each frame will only be rendered when the timer callback is called, which means the framerate will be roughly TICKS_PER_SECOND frames per second. This doesn't feel right; you shouldn't limit your powerful hardware, it's wrong. It's my understanding, though, that I still need to calculate elapsedTime. Just because I'm telling GLUT to call the timer callback every TIMER_MILLISECONDS, it doesn't mean it will always do so on time.]
I'm not sure how I can fix this and, to be completely honest, I have no idea what the game loop in GLUT is, you know, the while( game_is_running ) loop in Koen's article. [EDIT: It's my understanding that GLUT is event-driven and that the game loop starts when I call glutMainLoop() (which never returns), yes?]
I thought I could register an idle callback with glutIdleFunc() and use that as a replacement for glutTimerFunc(), only rendering when necessary (instead of all the time as usual), but when I tested this with an empty callback (like void gameLoop() {}) that did basically nothing, only a black screen, the CPU spiked to 25% and remained there until I killed the game, then it went back to normal. So I don't think that's the path to follow.
Using glutTimerFunc() is definitely not a good approach for performing all movements/animations, as it limits my game to a constant FPS, which is not cool. Or maybe I'm using it wrong and my implementation is not right?
How exactly can I have a constant game speed with variable FPS? More exactly, how do I correctly implement Koen's Constant Game Speed with Maximum FPS solution (the fourth one in his article) with GLUT? Maybe this is not possible at all with GLUT? If not, what are my alternatives? What is the best approach to this problem (constant game speed) with GLUT?
[EDIT] Another Approach:
I've been experimenting, and here's what I was able to achieve. Instead of calculating the elapsed time in a timed function (which limits my game's framerate), I'm now doing it in renderScene(). Whenever changes to the scene happen I call glutPostRedisplay() (i.e. camera moving, some object animation, etc.), which will make a call to renderScene(). I can use the elapsed time in this function to move my camera, for instance.
My code has now turned into this:
int previousTime;
int currentTime;
int elapsedTime;
void renderScene(void) {
    (...)
    // Get the time when the previous frame was rendered
    previousTime = currentTime;
    // Get the current time (in milliseconds) and calculate the elapsed time
    currentTime = glutGet(GLUT_ELAPSED_TIME);
    elapsedTime = currentTime - previousTime;
    /* Multiply the camera direction vector by constant speed then by the
       elapsed time (in seconds) and then move the camera */
    SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));
    // Setup the camera position and looking point
    SceneCamera.LookAt();
    // All drawing code goes inside this function
    drawCompleteScene();
    glutSwapBuffers();
    /* Redraw the frame ONLY if the user is moving the camera
       (similar code will be needed to redraw the frame for other events) */
    if(!IsTupleEmpty(cameraDirection)) {
        glutPostRedisplay();
    }
}
int main(int argc, char **argv) {
    glutInit(&argc, argv);
    (...)
    glutDisplayFunc(renderScene);
    (...)
    currentTime = glutGet(GLUT_ELAPSED_TIME);
    glutMainLoop();
    return 0;
}
Conclusion: it's working, or so it seems. If I don't move the camera, the CPU usage is low and nothing is being rendered (for testing purposes I only have a grid extending to 4000.0f, while zFar is set to 1000.0f). When I start moving the camera the scene starts redrawing itself. If I keep pressing the movement keys, the CPU usage increases; this is normal behavior. It drops back when I stop moving.
Unless I'm missing something, it seems like a good approach for now. I did find this interesting article on iDevGames, and this implementation is probably affected by the problem described in that article. What are your thoughts on that?
Please note that I'm just doing this for fun; I have no intention of creating a game to distribute or anything like that, not in the near future at least. If I did, I would probably go with something other than GLUT. But since I'm using GLUT, and other than the problem described on iDevGames, do you think this latest implementation is sufficient for GLUT? The only real issue I can think of right now is that I'll need to keep calling glutPostRedisplay() every time the scene changes something, and keep calling it until there's nothing new to redraw. A little complexity added to the code for a better cause, I think.
What do you think?
GLUT is designed to be the game loop. When you call glutMainLoop(), it executes a for loop with no termination condition except the exit() signal. You can implement your program much like you're doing now, but you need some minor changes. First, if you want to know what the FPS is, you should put that tracking into the renderScene() function, not into your update function. Naturally, your update function is being called as fast as specified by the timer, and you're treating elapsedTime as a measure of time between frames. In general, that will be true because you're calling glutPostRedisplay fairly slowly, and GLUT won't try to update the screen if it doesn't need to (there's no need to redraw if the scene hasn't changed). However, there are other times renderScene will be called, for example if you drag something across the window. If you did that, you'd see a higher FPS (if you were properly tracking the FPS in the render function).
You could use glutIdleFunc, which is called continuously whenever possible, similar to the while( game_is_running ) loop. That is, whatever logic you would otherwise put into that while loop, you could put into the callback for glutIdleFunc. You can avoid using glutTimerFunc by keeping track of the ticks on your own, as in the article you linked (using glutGet(GLUT_ELAPSED_TIME)). A rough sketch of that idea follows.
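For illustration only, a minimal sketch of that idea, assuming a hypothetical updateGame() that performs one fixed-size logic step (the names and the 33 ms tick are not from the question):
#define TICK_MS 33                      /* one logic update roughly every 33 ms (~30 Hz) */
int nextTick = 0;
void idleLoop(void)
{
    int now = glutGet(GLUT_ELAPSED_TIME);
    while (now >= nextTick) {           /* run as many fixed steps as needed to catch up */
        updateGame();                   /* hypothetical: move the camera, advance animations, ... */
        nextTick += TICK_MS;
    }
    glutPostRedisplay();                /* render as often as the hardware allows */
}
/* In main(), alongside glutDisplayFunc(renderScene): */
nextTick = glutGet(GLUT_ELAPSED_TIME) + TICK_MS;
glutIdleFunc(idleLoop);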
Here is, as an example, a mouse-driven rotation matrix that updates at a fixed frame rate, independently of the rendering frame rate. In my program, the space bar toggles benchmarking mode and determines the Boolean fxFPS.
Let go of the mouse button while dragging, and you can 'throw' an object transformed by this matrix.
If fxFPS is true then the rendering frame-rate is throttled to the animation frame-rate; otherwise identical frames are drawn repeatedly for benchmarking, even though not enough milliseconds will have passed to trigger any animation.
If you're thinking about slowing down AND speeding up frames, you have to think carefully about whether you mean rendering or animation frames in each case. In this example, render throttling for simple animations is combined with animation acceleration, for any cases when frames might be dropped in a potentially slow animation.
To accelerate the animation, rotations are performed repeatedly in a loop. Such a loop is not too slow compared with the option of doing trig with an adaptive rotation angle; just be careful about what you put inside any loop that runs more iterations the lower the FPS is. This loop takes far less than an extra frame to complete for each frame-drop that it accounts for, so it's reasonably safe.
int xSt, ySt, xCr, yCr, msM = 0, msOld = 0;
bool dragging = false, spin = false, moving = false;
glm::mat4 mouseRot(1.0f), continRot(1.0f);
float twoOvHght; // Set in reshape()

glm::mat4 mouseRotate(bool slow) {
    glm::vec3 axis(twoOvHght * (yCr - ySt), twoOvHght * (xCr - xSt), 0); // Perpendicular to mouse motion
    float len = glm::length(axis);
    if (slow) { // Slow rotation; divide angle by mouse-delay in milliseconds; it is multiplied by frame delay to speed it up later
        int msP = msM - msOld;
        len /= (msP != 0 ? msP : 1);
    }
    if (len != 0) axis = glm::normalize(axis); else axis = glm::vec3(0.0f, 0.0f, 1.0f);
    return rotate(axis, cosf(len), sinf(len));
}

void mouseMotion(int x, int y) {
    moving = (xCr != x) | (yCr != y);
    if (dragging & moving) {
        xSt = xCr; xCr = x; ySt = yCr; yCr = y; msOld = msM; msM = glutGet(GLUT_ELAPSED_TIME);
        mouseRot = mouseRotate(false) * mouseRot;
    }
}

void mouseButton(int button, int state, int x, int y) {
    if (button == 0) {
        if (state == 0) {
            dragging = true; moving = false; spin = false;
            xCr = x; yCr = y; msM = glutGet(GLUT_ELAPSED_TIME);
            glutPostRedisplay();
        } else {
            dragging = false; spin = moving;
            if (spin) continRot = mouseRotate(true);
        }
    }
}
And then later...
bool fxFPS = false;
int T = 0, ms = 0;
const int fDel = 20;

void display() {
    ms = glutGet(GLUT_ELAPSED_TIME);
    if (T <= ms) {
        T = ms + fDel;
        for (int lp = 0; lp < fDel; lp++) {
            orient = rotY * orient; orientCu = rotX * rotY * orientCu; // Auto-rotate two orientation quaternions
            if (spin) mouseRot = continRot * mouseRot; // Track rotation from a throwing action by the mouse
        }
        orient1 = glm::mat4_cast(orient); orient2 = glm::mat4_cast(orientCu);
    }
    // Top secret animation code that will make me rich goes here
    glutSwapBuffers();
    if (spin | dragging) { if (fxFPS) while (glutGet(GLUT_ELAPSED_TIME) < T); glutPostRedisplay(); } // Fast, repeated updates of the screen
}
Enjoy throwing things around an axis; I find that most people do. Notice that the FPS affects nothing whatsoever, in either the interface or the rendering. I've minimised the use of divisions, so comparisons should be nice and accurate, and any inaccuracy in the clock does not accumulate unnecessarily.
Syncing of multiplayer games is another 18 conversations, I would judge.
