AndEngine TimerHandler - onTimePassed - timer

I am using AndEngine to create physics simulations of projectiles being launched. As the simulation runs, I want to draw the parabola's track.
To do so, I draw a square every second at the position of the projectile (sPlayer).
time_handler = new TimerHandler(1, true, new ITimerCallback() {
    @Override
    public void onTimePassed(TimerHandler pTimerHandler) {
        if (simulationOn) { // every second, if the simulation is on
            int px = (int) sPlayer.getSceneCenterCoordinates()[0];
            int py = (int) sPlayer.getSceneCenterCoordinates()[1];
            parabola_point = new Rectangle(px, py, 4, 4, getVertexBufferObjectManager());
            parabola_point.setColor(Color.WHITE);
            if (!highest_point_found) { // if the highest point has not been found yet, check for it
                float difY = (float) Math.floor(Math.abs(body.getLinearVelocity().y));
                if (Float.compare(0f, difY) == 0) { // this is the highest point
                    highest_point_found = true;
                    drawPointText(); // draw the positions on the scene
                    parabola_point = new Rectangle(px, py, 16, 16, getVertexBufferObjectManager());
                    parabola_point.setColor(Color.RED); // paint this point red
                }
            }
            parabola.add(parabola_point);
            scene.attachChild(parabola_point);
        }
        // pTimerHandler.reset();
    }
});
I am using a FixedStepEngine:
@Override
public Engine onCreateEngine(final EngineOptions pEngineOptions) {
    return new FixedStepEngine(pEngineOptions, 50);
}
THE PROBLEM IS:
I don't know why onTimePassed is being called at intervals shorter than one second. It starts happening after a few seconds.
I read that the FixedStepEngine is probably changing the interval at which onTimePassed is called. How do I fix it?

It seems to me that you are not unregistering your timer handlers, which causes them to overlap with one another. Try unregistering pTimerHandler, for example as in the sketch below.
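A minimal sketch of what I mean (mine, and it assumes the handler was registered with scene.registerUpdateHandler(time_handler), which the question doesn't show):
@Override
public void onTimePassed(TimerHandler pTimerHandler) {
    if (simulationOn) {
        // ...plot the parabola point as before...
    } else {
        // Once the simulation stops, remove this handler so that a second
        // registration can never run alongside it.
        scene.unregisterUpdateHandler(pTimerHandler);
    }
}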


Glut Idle Function with FrameRate Limit

My current OpenGL and GLUT code uses these functions to set a framerate limit:
static int redisplay_interval;

void timer(int value) { // glutTimerFunc callbacks take an int argument
    glutPostRedisplay();
    glutTimerFunc(redisplay_interval, timer, 0);
}

void SetFPS(int fps) {
    redisplay_interval = 1000 / fps;
    glutTimerFunc(redisplay_interval, timer, 0);
}

void Render() {
    for (/* each object in the array of 3D objects */) {
        // advance the object's movement by one step
    }
    glutSwapBuffers();
}
in Main:
glutDisplayFunc(Render);
SetFPS(60);
glutMainLoop();
...
However, I'm trying to update an array of 3D objects one step at a time, and I don't know where to place that update so that it iterates through the movement loop once and then posts a redisplay, with the framerate limiter still in place.
A very similar topic and answer were also posted here:
https://stackoverflow.com/a/35612434/15578244
but I'm not sure how to apply it in my use case.
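One possibility (a hedged sketch, not from the linked answer): since the timer callback already fires once per scheduled frame, the one-step update can live there instead of in Render(), so drawing stays pure and each tick advances the objects exactly once. UpdateObjectsOneStep() is a hypothetical name for the movement loop:
static int redisplay_interval;

void UpdateObjectsOneStep(void) {
    /* iterate the array of 3D objects and advance each one movement step */
}

void timer(int value) {
    UpdateObjectsOneStep();  /* one simulation step per scheduled frame */
    glutPostRedisplay();     /* then ask GLUT to draw the new state */
    glutTimerFunc(redisplay_interval, timer, 0);
}

void Render(void) {
    /* drawing only; no movement logic here */
    glutSwapBuffers();
}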

Aligning a card array in-game

I'm creating a TCG (trading card game) and I would like to know how I can change the layout of the cards while playing. The cards are spread in a line on a canvas, centered on the screen both vertically and horizontally, and when I draw or dismiss a card I would like the remaining cards to fill the gap and re-align in game. How can I do that? Any ideas? I have a solution for when a turn begins (start from the center of the screen, step back by the step length times the number of cards / 2, and then spawn the cards one after another), but I can't figure out how to re-align the cards when one of them is dismissed without reloading them all.
Image for example
Using the same method you used for the initial positions, you should be able to get the new positions. Now you have two positions for each card: oldPos and newPos.
Your cards are already instantiated, and their positions are stored in Transform.position. Your goal is to move from oldPos to newPos. The simplest way would be:
myCard.transform.position = newPos;
This will instantly move your cards to their new positions. However, teleporting objects like this is uncommon because it rarely feels good to users. A better solution is to smoothly move the object from one position to the other.
To do this, you can move an existing object with transform.Translate(new Vector3()), where the Vector3 decides the movement per call. Translate() effectively does position += movementDirection * movementAmount, as you would expect.
Moving an object over several frames is called animation. There are animation techniques that make movements look better (faster than they really are, or more natural). One common method from mathematics is linear interpolation, or lerp. Using lerp, you can easily compute intermediate points between two end positions, and the movement will look natural and nice if you place your objects along the points you calculated. I believe this is what you are looking for.
Edit:
Here's an example of how this could be achieved. Note that Card moves by the same distance each frame in this example; using lerp (ease-in, ease-out, etc.), you could make this animation even better.
Another point I would like you to note is that I'm doing if (Vector2.Distance(nextPosition, transform.position) < 10), not if (oldPosition.equals(newPosition)). The reason is that equals() is not safe for comparing floats, because they are often stored as 0.4999999 and 0.50001 instead of 0.5 and 0.5. So the best way to compare floats is to test whether they are "close enough" to each other.
Finally, you could improve the following code in many different ways. For instance:
- Destroy() and Instantiate() are very slow operations, and you should use object pooling because you know you will perform these operations constantly.
- The movement of Card could be improved with a better animation technique such as lerp (see the sketch after Card.cs below).
- There may be better ways of storing List<Card> Cards.
- OnCardClick() uses FindObjectOfType<CardSpawner>().OnCardDeleted(this), which requires Card to know about CardSpawner. This is called tight coupling, which is widely considered harmful; there are plenty of discussions of why it is bad. A recommended solution is to use an event (preferably UnityEvent in Unity3D), as in the sketch right after this list.
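To illustrate the UnityEvent idea, a hedged sketch (mine, not part of the code below; CardDeletedEvent and onDeleted are hypothetical names). Card raises an event instead of looking up CardSpawner directly:
using UnityEngine;
using UnityEngine.Events;

// Concrete subclass so the event can be serialized in the Inspector.
[System.Serializable]
public class CardDeletedEvent : UnityEvent<Card> { }

public class Card : MonoBehaviour
{
    // CardSpawner would subscribe when spawning, e.g.:
    //   newCard.GetComponent<Card>().onDeleted.AddListener(OnCardDeleted);
    public CardDeletedEvent onDeleted = new CardDeletedEvent();

    public void OnCardClick()
    {
        onDeleted.Invoke(this);   // notify whoever is listening
        Destroy(gameObject);      // Card no longer needs to know about CardSpawner
    }
}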
CardSpawner.cs
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class CardSpawner : MonoBehaviour
{
    [SerializeField] GameObject CardParent;
    [SerializeField] GameObject CardPrefab;

    Vector2 DefaultSpawnPosition = new Vector2(Screen.width / 2f, Screen.height / 10f);
    List<Card> Cards = new List<Card>();

    public void OnClickButton()
    {
        SpawnNewCard();
        AssignNewPositions();
        AnimateCards();
    }

    public void OnCardDeleted(Card removedCard)
    {
        Cards.Remove(removedCard);
        AssignNewPositions();
        AnimateCards();
    }

    void SpawnNewCard()
    {
        // Quaternion.identity is the proper "no rotation" value; new Quaternion() is all zeroes.
        GameObject newCard = (GameObject)Instantiate(CardPrefab, DefaultSpawnPosition, Quaternion.identity, CardParent.GetComponent<Transform>());
        Cards.Add(newCard.GetComponent<Card>());
    }

    void AssignNewPositions()
    {
        int n = Cards.Count;
        float widthPerCard = 100;
        float widthEmptySpaceBetweenCards = widthPerCard * .2f;
        float totalWidthAllCards = (widthPerCard * n) + (widthEmptySpaceBetweenCards * (n - 1));
        float halfWidthAllCards = totalWidthAllCards / 2f;
        float centreX = Screen.width / 2f;
        float leftX = centreX - halfWidthAllCards;

        for (int i = 0; i < n; i++)
        {
            if (i == 0)
                Cards[i].nextPosition = new Vector2(leftX + widthPerCard / 2f, Screen.height / 2f);
            else
                Cards[i].nextPosition = new Vector2(leftX + widthPerCard / 2f + ((widthPerCard + widthEmptySpaceBetweenCards) * i), Screen.height / 2f);
        }
    }

    void AnimateCards()
    {
        foreach (Card card in Cards)
            card.StartMoving();
    }
}
Card.cs
using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Card : MonoBehaviour
{
    public Vector2 oldPosition;
    public Vector2 nextPosition;
    bool IsMoving;

    void Update()
    {
        if (IsMoving)
        {
            int steps = 10;
            Vector2 delta = (nextPosition - oldPosition) / steps;
            transform.Translate(delta);
            if (Vector2.Distance(nextPosition, transform.position) < 10)
                IsMoving = false;
        }
    }

    public void StartMoving()
    {
        IsMoving = true;
        oldPosition = transform.position;
    }

    public void OnCardClick()
    {
        UnityEngine.Object.Destroy(this.gameObject);
        Debug.Log("AfterDestroy");
        FindObjectOfType<CardSpawner>().OnCardDeleted(this);
    }
}
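And the lerp sketch promised above: one hedged way (my own, not part of the answer) that Card.Update() could be rewritten. moveProgress and moveDuration are hypothetical fields added for this example, and StartMoving() should reset moveProgress to 0:
// Hypothetical fields added for this sketch.
float moveProgress = 0f;          // 0..1, fraction of the move completed
const float moveDuration = 0.5f;  // seconds a full move should take

void Update()
{
    if (IsMoving)
    {
        moveProgress += Time.deltaTime / moveDuration;
        // Lerp returns the point moveProgress of the way from oldPosition
        // to nextPosition; Clamp01 prevents overshooting past the target.
        transform.position = Vector2.Lerp(oldPosition, nextPosition, Mathf.Clamp01(moveProgress));
        if (moveProgress >= 1f)
        {
            IsMoving = false;
            moveProgress = 0f;
        }
    }
}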

Unity3D - Playback object array of position (with dynamic velocity)

We have two objects in the scene. Object 1 follows the mouse position on the screen, and object 2 in turn follows the route object 1 took. We store the positions object 1 passes through and have object 2 play them back.
When you run the game, one object follows the other smoothly, reproducing the stored positions... but when object 1's speed changes (a mouse click increases the velocity), object 2 cannot keep up, since it is still following the positions already cached in the array (recorded with the old speed calculations). Please watch the short video below:
YouTube: https://youtu.be/_HbP09A3cFA
public class Play : MonoBehaviour
{
    public Transform obj;
    private List<Recorder> recordList;
    private float velocity = 10.0f;
    private Transform clone;

    void Start()
    {
        recordList = new List<Recorder>();
        clone = obj;
    }

    void Update()
    {
        if (Input.GetMouseButton(0))
        {
            velocity = 20.0f;
        }
        else
        {
            velocity = 10.0f;
        }

        var dir = Input.mousePosition - Camera.main.WorldToScreenPoint(transform.position);
        var angle = Mathf.Atan2(dir.y, dir.x) * Mathf.Rad2Deg;
        transform.rotation = Quaternion.RotateTowards(transform.rotation, Quaternion.AngleAxis(angle, Vector3.forward), 180 * Time.deltaTime);
        transform.position += transform.right * Time.deltaTime * velocity;
        Camera.main.transform.position = new Vector3(transform.position.x, transform.position.y, Camera.main.transform.position.z);

        recordList.Insert(0, new Recorder
        {
            Position = transform.position,
            Rotation = transform.rotation,
            Velocity = velocity
        });

        var x = 8;
        if (x < recordList.Count)
        {
            clone.position = recordList[x].Position;
            clone.rotation = recordList[x].Rotation;
            clone.position += clone.right * Time.deltaTime * velocity;
        }
        if (recordList.Count > x)
            recordList.RemoveRange(x, recordList.Count - x);
    }
}

public class Recorder
{
    public Vector3 Position { get; set; }
    public Quaternion Rotation { get; set; }
    public float Velocity { get; set; }
}
How can we play back the stored positions always at object 1's current speed?
Summary:
If object 1 is moving slowly, object 2 should move slowly as well;
If object 1 is running, object 2 should cover the route at a faster speed so that it always keeps up with object 1.
Thanks in advance.
If I understood correctly, you might want to consider using Queue<T> instead of List<T>. I think it would be a better-suited data type, as it represents a FIFO collection (first in, first out), which is how you are using the List anyway. You can add elements to the end of the queue with Enqueue(T) and always get the first item with Dequeue() (which also removes it). As with Stack<T> (its LIFO counterpart), there is also a Peek() function that lets you "preview" the next element without removing it. A sketch of the idea follows below.
Another thing: it depends on distance and speed, but I have the feeling that storing the position every frame could become a bit excessive (maybe I'm just overly concerned, though).
I think the issue with your code is that you always read the 8th element of the List.
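To illustrate (a sketch under my own assumptions, not the asker's exact code): the recording side enqueues one sample per frame, and the playback side dequeues more samples per frame when object 1 is faster, so object 2 consumes the route at a matching pace. The names reuse the question's velocity, clone, and Recorder:
// Hypothetical replacement for recordList in the Play class above.
Queue<Recorder> recordQueue = new Queue<Recorder>();

void Update()
{
    // ...movement code for object 1 as before...

    // Recording: one sample per frame, appended to the end of the queue.
    recordQueue.Enqueue(new Recorder
    {
        Position = transform.position,
        Rotation = transform.rotation,
        Velocity = velocity
    });

    // Playback: consume more samples per frame when object 1 is faster,
    // so object 2 covers the recorded route at a matching speed.
    int samplesThisFrame = velocity > 10.0f ? 2 : 1;  // assumption: 20 vs 10 units/s
    Recorder sample = null;
    for (int i = 0; i < samplesThisFrame && recordQueue.Count > 8; i++)
        sample = recordQueue.Dequeue();               // keep ~8 samples of lag
    if (sample != null)
    {
        clone.position = sample.Position;
        clone.rotation = sample.Rotation;
    }
}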

3D Hit Testing in WPF

I'm writing a WPF application that displays terrain in 3D.
When I perform hit testing, the wrong 3D point is returned (not the point I clicked on).
I tried highlighting the triangle that was hit (by creating a new mesh, taking the coordinates from the RayMeshGeometry3DHitTestResult object). I can see that the wrong triangle gets hit (a triangle is highlighted, but it is not under the cursor).
I'm using a perspective camera with a field of view of 60, and near and far planes of 3 and 35000 respectively.
Any idea why this might happen and what I can do to solve it?
Let me know if you need any more details.
Edit: This is the code I use to perform the hit testing:
private void m_viewport3d_MouseDown(object sender, MouseButtonEventArgs e)
{
    Point mousePos = e.GetPosition(m_viewport3d);
    PointHitTestParameters hitParams = new PointHitTestParameters(mousePos);
    HitTestResult result = VisualTreeHelper.HitTest(m_viewport3d, mousePos);
    RayMeshGeometry3DHitTestResult rayMeshResult = result as RayMeshGeometry3DHitTestResult;
    if (rayMeshResult != null)
    {
        MeshGeometry3D mesh = new MeshGeometry3D();
        mesh.Positions.Add(rayMeshResult.MeshHit.Positions[rayMeshResult.VertexIndex1]);
        mesh.Positions.Add(rayMeshResult.MeshHit.Positions[rayMeshResult.VertexIndex2]);
        mesh.Positions.Add(rayMeshResult.MeshHit.Positions[rayMeshResult.VertexIndex3]);
        mesh.TriangleIndices.Add(0);
        mesh.TriangleIndices.Add(1);
        mesh.TriangleIndices.Add(2);
        GeometryModel3D marker = new GeometryModel3D(mesh, new DiffuseMaterial(Brushes.Blue));
        //...add marker to the scene...
    }
}
Something that caught me out was that the points were in model coordinates; I had to transform them to world coordinates. Here is my code that does the hit test (this returns all hits under the cursor, not just the first):
// This will cast a ray from the point (on _viewport) along the direction the camera is looking, and return the hits
private List<RayMeshGeometry3DHitTestResult> CastRay(Point clickPoint, IEnumerable<Visual3D> ignoreVisuals)
{
    List<RayMeshGeometry3DHitTestResult> retVal = new List<RayMeshGeometry3DHitTestResult>();

    // This gets called every time there is a hit
    HitTestResultCallback resultCallback = delegate(HitTestResult result)
    {
        if (result is RayMeshGeometry3DHitTestResult) // It could also be a RayHitTestResult, which isn't as exact as RayMeshGeometry3DHitTestResult
        {
            RayMeshGeometry3DHitTestResult resultCast = (RayMeshGeometry3DHitTestResult)result;
            if (ignoreVisuals == null || !ignoreVisuals.Any(o => o == resultCast.VisualHit))
            {
                retVal.Add(resultCast);
            }
        }
        return HitTestResultBehavior.Continue;
    };

    // Get hits against existing models
    VisualTreeHelper.HitTest(grdViewPort, null, resultCallback, new PointHitTestParameters(clickPoint));

    return retVal;
}
And some logic that consumes a hit:
if (hit.VisualHit.Transform != null)
{
    return hit.VisualHit.Transform.Transform(hit.PointHit);
}
else
{
    return hit.PointHit;
}
You need to provide the ray to hit-test along in order for this to work in 3D. Use the overload of VisualTreeHelper.HitTest that takes a Visual3D and a RayHitTestParameters: http://msdn.microsoft.com/en-us/library/ms608751.aspx. A sketch of that overload follows below.
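For illustration, a minimal sketch of that overload (my own, assuming a PerspectiveCamera named camera and testing the viewport's first child as the Visual3D):
// Build a ray from the camera's position along its look direction.
var rayParams = new RayHitTestParameters(camera.Position, camera.LookDirection);

VisualTreeHelper.HitTest(
    m_viewport3d.Children[0],   // the Visual3D to hit-test against
    null,                       // no filter callback
    result =>
    {
        var rayResult = result as RayMeshGeometry3DHitTestResult;
        if (rayResult != null)
        {
            // rayResult.PointHit is in model coordinates; apply
            // rayResult.VisualHit.Transform to get world coordinates.
        }
        return HitTestResultBehavior.Continue;
    },
    rayParams);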
It turns out it was a normalization issue. I shouldn't have normalized the camera's look and up vectors: at the scales I'm using, the distortion is too big for the hit test to work correctly.

Constant game speed independent of variable FPS in OpenGL with GLUT?

I've been reading Koen Witters' detailed article about different game loop solutions, but I'm having some problems implementing the last one with GLUT, which is the recommended one.
After reading a couple of articles, tutorials, and code from other people on how to achieve a constant game speed, I think that what I currently have implemented (I'll post the code below) is what Koen Witters called Game Speed dependent on Variable FPS, the second one in his article.
First, in my searching experience there are a couple of people who probably have the knowledge to help out with this but don't know what GLUT is, so I'm going to try to explain (feel free to correct me) the functions of this OpenGL toolkit relevant to my problem. Skip this section if you know what GLUT is and how to play with it.
GLUT Toolkit:
GLUT is an OpenGL toolkit and helps with common tasks in OpenGL.
The glutDisplayFunc(renderScene) takes a pointer to a renderScene() callback function, which will be responsible for rendering everything. The renderScene() function will only be called once after the callback registration.
The glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0) takes the number of milliseconds to pass before calling the callback processAnimationTimer(). The last argument is just a value to pass to the timer callback. processAnimationTimer() will not be called every TIMER_MILLISECONDS, but just once.
The glutPostRedisplay() function requests that GLUT render a new frame, so we need to call it every time we change something in the scene.
The glutIdleFunc(renderScene) could be used to register a callback to renderScene() (this does not make glutDisplayFunc() irrelevant), but this function should be avoided because the idle callback is continuously called when events are not being received, increasing the CPU load.
The glutGet(GLUT_ELAPSED_TIME) function returns the number of milliseconds since glutInit was called (or the first call to glutGet(GLUT_ELAPSED_TIME)). That's the timer we have with GLUT. I know there are better alternatives for high-resolution timers, but let's keep this one for now.
I think this is enough information on how GLUT renders frames, so people who didn't know about it can also pitch in on this question and try to help if they feel like it.
Current Implementation:
Now, I'm not sure I have correctly implemented the second solution proposed by Koen, Game Speed dependent on Variable FPS. The relevant code for that goes like this:
#define TICKS_PER_SECOND 30
#define MOVEMENT_SPEED 2.0f

const int TIMER_MILLISECONDS = 1000 / TICKS_PER_SECOND;

int previousTime;
int currentTime;
int elapsedTime;

void renderScene(void) {
    (...)
    // Set up the camera position and looking point
    SceneCamera.LookAt();
    // Do all drawing below...
    (...)
}

void processAnimationTimer(int value) {
    // Set up the timer to be called again
    glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);
    // Get the time when the previous frame was rendered
    previousTime = currentTime;
    // Get the current time (in milliseconds) and calculate the elapsed time
    currentTime = glutGet(GLUT_ELAPSED_TIME);
    elapsedTime = currentTime - previousTime;
    /* Multiply the camera direction vector by the constant speed, then by the
       elapsed time (in seconds), and then move the camera */
    SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));
    // Request a new frame to be rendered (this will call my renderScene() once)
    glutPostRedisplay();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    (...)
    glutDisplayFunc(renderScene);
    (...)
    // Set up the timer to be called a first time
    glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);
    // Read the current time since glutInit was called
    currentTime = glutGet(GLUT_ELAPSED_TIME);
    glutMainLoop();
}
This implementation doesn't feel right. It works in the sense that it helps keep the game speed constant regardless of the FPS, so moving from point A to point B takes the same time no matter how high or low the framerate. However, I believe I'm limiting the game framerate with this approach. [EDIT: Each frame will only be rendered when the timer callback is called, which means the framerate will be roughly TICKS_PER_SECOND frames per second. This doesn't feel right; you shouldn't limit your powerful hardware, it's wrong. It's my understanding, though, that I still need to calculate elapsedTime: just because I'm telling GLUT to call the timer callback every TIMER_MILLISECONDS doesn't mean it will always do so on time.]
I'm not sure how I can fix this and, to be completely honest, I have no idea what the game loop in GLUT is, you know, the while( game_is_running ) loop in Koen's article. [EDIT: It's my understanding that GLUT is event-driven and that the game loop starts when I call glutMainLoop() (which never returns), yes?]
I thought I could register an idle callback with glutIdleFunc() and use that as a replacement for glutTimerFunc(), rendering only when necessary (instead of all the time as usual), but when I tested this with an empty callback (like void gameLoop() {}) that was basically doing nothing (only a black screen), the CPU spiked to 25% and remained there until I killed the game, after which it went back to normal. So I don't think that's the path to follow.
Using glutTimerFunc() is definitely not a good approach for performing all movements/animations, as it limits my game to a constant FPS, which is not cool. Or maybe I'm using it wrong and my implementation is not right?
How exactly can I have a constant game speed with variable FPS? More exactly, how do I correctly implement Koen's Constant Game Speed with Maximum FPS solution (the fourth one on his article) with GLUT? Maybe this is not possible at all with GLUT? If not, what are my alternatives? What is the best approach to this problem (constant game speed) with GLUT?
[EDIT] Another Approach:
I've been experimenting, and here's what I've been able to achieve now. Instead of calculating the elapsed time in a timed function (which limits my game's framerate), I'm now doing it in renderScene(). Whenever a change in the scene happens (i.e. the camera moves, some object animates, etc...), I call glutPostRedisplay(), which will trigger a call to renderScene(). I can use the elapsed time in this function to move my camera, for instance.
My code has now turned into this:
int previousTime;
int currentTime;
int elapsedTime;

void renderScene(void) {
    (...)
    // Get the time when the previous frame was rendered
    previousTime = currentTime;
    // Get the current time (in milliseconds) and calculate the elapsed time
    currentTime = glutGet(GLUT_ELAPSED_TIME);
    elapsedTime = currentTime - previousTime;
    /* Multiply the camera direction vector by the constant speed, then by the
       elapsed time (in seconds), and then move the camera */
    SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));
    // Set up the camera position and looking point
    SceneCamera.LookAt();
    // All drawing code goes inside this function
    drawCompleteScene();
    glutSwapBuffers();
    /* Redraw the frame ONLY if the user is moving the camera
       (similar code will be needed to redraw the frame for other events) */
    if (!IsTupleEmpty(cameraDirection)) {
        glutPostRedisplay();
    }
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    (...)
    glutDisplayFunc(renderScene);
    (...)
    currentTime = glutGet(GLUT_ELAPSED_TIME);
    glutMainLoop();
}
In conclusion, it's working, or so it seems. If I don't move the camera, the CPU usage is low and nothing is being rendered (for testing purposes I only have a grid extending out to 4000.0f, while zFar is set to 1000.0f). When I start moving the camera, the scene starts redrawing itself. If I keep pressing the movement keys, the CPU usage increases; this is normal behavior. It drops back when I stop moving.
Unless I'm missing something, this seems like a good approach for now. I did find this interesting article on iDevGames, and this implementation is probably affected by the problem described in that article. What are your thoughts on that?
Please note that I'm just doing this for fun; I have no intention of creating a game to distribute or anything like that, at least not in the near future. If I did, I would probably go with something other than GLUT. But since I'm using GLUT, and setting aside the problem described on iDevGames, do you think this latest implementation is sufficient for GLUT? The only real issue I can think of right now is that I'll need to keep calling glutPostRedisplay() every time the scene changes, and keep calling it until there's nothing new to redraw. A little complexity added to the code for a better cause, I think.
What do you think?
GLUT is designed to be the game loop. When you call glutMainLoop(), it executes a for loop with no termination condition except the exit() signal. You can implement your program much as you're doing now, but you need some minor changes. First, if you want to know what the FPS is, you should put that tracking into the renderScene() function, not into your update function. Naturally, your update function is being called as fast as the timer specifies, and you're treating elapsedTime as a measure of time between frames. In general, that will be true, because you're calling glutPostRedisplay rather slowly and GLUT won't try to update the screen if it doesn't need to (there's no need to redraw if the scene hasn't changed). However, there are other times renderScene will be called, for example if you drag something across the window. If you did that, you'd see a higher FPS (if you were properly tracking the FPS in the render function).
You could use glutIdleFunc, which is called continuously whenever possible, similar to the while(game_is_running) loop. That is, whatever logic you would otherwise put into that while loop, you can put into the callback for glutIdleFunc. You can avoid using glutTimerFunc by keeping track of the ticks on your own, as in the article you linked (using glutGet(GLUT_ELAPSED_TIME)). A sketch of this follows below.
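For instance, a hedged sketch (mine, not the answerer's code) of Koen Witters' "Constant Game Speed with Maximum FPS" loop mapped onto glutIdleFunc; updateGame() is a hypothetical fixed-step update function:
#define TICKS_PER_SECOND 30
static const int SKIP_TICKS = 1000 / TICKS_PER_SECOND;
static int next_game_tick; /* set once before glutMainLoop(), see below */

void gameLoop(void) {
    /* catch up on all pending fixed-rate updates */
    while (glutGet(GLUT_ELAPSED_TIME) > next_game_tick) {
        updateGame();                 /* fixed-step logic: 30 ticks per second */
        next_game_tick += SKIP_TICKS;
    }
    glutPostRedisplay();              /* render as often as the hardware allows */
}

/* in main():
     glutIdleFunc(gameLoop);
     next_game_tick = glutGet(GLUT_ELAPSED_TIME);
*/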
Here, as an example, is a mouse-driven rotation matrix that updates at a fixed frame-rate, independently of the rendering frame-rate. In my program, the space-bar toggles benchmarking mode, which determines the Boolean fxFPS.
Let go of the mouse button while dragging, and you can 'throw' an object transformed by this matrix.
If fxFPS is true, then the rendering frame-rate is throttled to the animation frame-rate; otherwise identical frames are drawn repeatedly for benchmarking, even though not enough milliseconds will have passed to trigger any animation.
If you're thinking about slowing down AND speeding up frames, you have to think carefully about whether you mean rendering or animation frames in each case. In this example, render throttling for simple animations is combined with animation acceleration, for any cases where frames might be dropped in a potentially slow animation.
To accelerate the animation, rotations are performed repeatedly in a loop. Such a loop is not too slow compared with the alternative of doing trigonometry with an adaptive rotation angle; just be careful what you put inside any loop that takes longer to execute the lower the FPS gets. This loop takes far less than an extra frame to complete for each frame-drop that it accounts for, so it's reasonably safe.
int xSt, ySt, xCr, yCr, msM = 0, msOld = 0;
bool dragging = false, spin = false, moving = false;
glm::mat4 mouseRot(1.0f), continRot(1.0f);
float twoOvHght; // Set in reshape()

glm::mat4 mouseRotate(bool slow) {
    glm::vec3 axis(twoOvHght * (yCr - ySt), twoOvHght * (xCr - xSt), 0); // Perpendicular to mouse motion
    float len = glm::length(axis);
    if (slow) { // Slow rotation; divide the angle by the mouse delay in milliseconds; it is multiplied by the frame delay to speed it up later
        int msP = msM - msOld;
        len /= (msP != 0 ? msP : 1);
    }
    if (len != 0) axis = glm::normalize(axis); else axis = glm::vec3(0.0f, 0.0f, 1.0f);
    return rotate(axis, cosf(len), sinf(len));
}

void mouseMotion(int x, int y) {
    moving = (xCr != x) | (yCr != y);
    if (dragging & moving) {
        xSt = xCr; xCr = x; ySt = yCr; yCr = y; msOld = msM; msM = glutGet(GLUT_ELAPSED_TIME);
        mouseRot = mouseRotate(false) * mouseRot;
    }
}

void mouseButton(int button, int state, int x, int y) {
    if (button == 0) {
        if (state == 0) {
            dragging = true; moving = false; spin = false;
            xCr = x; yCr = y; msM = glutGet(GLUT_ELAPSED_TIME);
            glutPostRedisplay();
        } else {
            dragging = false; spin = moving;
            if (spin) continRot = mouseRotate(true);
        }
    }
}
And then later...
bool fxFPS = false;
int T = 0, ms = 0;
const int fDel = 20;

void display() {
    ms = glutGet(GLUT_ELAPSED_TIME);
    if (T <= ms) {
        T = ms + fDel;
        for (int lp = 0; lp < fDel; lp++) {
            orient = rotY * orient; orientCu = rotX * rotY * orientCu; // Auto-rotate two orientation quaternions
            if (spin) mouseRot = continRot * mouseRot; // Track rotation from the throwing action by the mouse
        }
        orient1 = glm::mat4_cast(orient); orient2 = glm::mat4_cast(orientCu);
    }
    // Top secret animation code that will make me rich goes here
    glutSwapBuffers();
    if (spin | dragging) { if (fxFPS) while (glutGet(GLUT_ELAPSED_TIME) < T); glutPostRedisplay(); } // Fast, repeated updates of the screen
}
Enjoy throwing things around an axis; I find that most people do. Notice that the FPS affects nothing whatsoever in the interface or the rendering. I've minimised the use of divisions, so comparisons should be nice and accurate, and any inaccuracy in the clock does not accumulate unnecessarily.
Syncing of multiplayer games is another 18 conversations, I would judge.
