The framerate limiter for a game I'm working on has some pretty irritating accuracy issues at certain framerates. I've been scratching my head trying to think of a better way to write this, but the best I've come up with is still fairly inaccurate. Could someone give me a couple of ideas on how to rewrite this short portion to be more accurate?
// g_dwLastTime & currentTime are initialized from timeGetTime()
float fFrameLimit = 0;
if (g_nFrameLimitValue > 0) // g_nFrameLimitValue = user defined
    fFrameLimit = 1000.0f / g_nFrameLimitValue;
while ((currentTime - g_dwLastTime) < fFrameLimit)
{
    // -1 = wait an extra ms. seemed to help accuracy some
    Sleep((DWORD)(fFrameLimit - (currentTime - g_dwLastTime) - 1));
    currentTime = timeGetTime();
}
g_dwLastTime = currentTime;
I can think of a pretty simple solution:
uint64_t last_time = 0;
uint64_t current_time = time();
uint64_t frame_limit_ms = 17; // 60fps = 16.666... ms/f
uint64_t frame_diff = frame_limit_ms;
sleep(frame_limit_ms);
while (running) {
    last_time = current_time;
    current_time = time(); // get frame start time in milliseconds
    frame_diff = current_time - last_time; // get time since last frame in ms
    if (frame_diff < frame_limit_ms) {
        sleep(frame_limit_ms - frame_diff);
        // need to do a re-calculation for accuracy
        current_time = time();
        frame_diff = current_time - last_time;
    }
    do_physics(frame_diff);
    draw_scene();
    swap_buffers();
}
This looks a lot like what you have, but it doesn't use float (so it should be slightly faster) and it is accurate to within one unit of time (milliseconds in this case).
If you want it to be more accurate, use a finer unit (nanoseconds) and convert back to milliseconds if you need to.
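For example, here is a minimal sketch of the same loop using nanoseconds, assuming POSIX clock_gettime() and nanosleep() are available (the time()/sleep() calls above are placeholders, and running/do_physics/draw_scene/swap_buffers are assumed to come from the game):
#include <stdint.h>
#include <time.h>

/* game hooks assumed to exist elsewhere, as in the pseudocode above */
extern int running;
extern void do_physics(uint64_t frame_diff_ns);
extern void draw_scene(void);
extern void swap_buffers(void);

/* monotonic timestamp in nanoseconds */
static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

void frame_loop(void)
{
    const uint64_t frame_limit_ns = 1000000000ull / 60; /* ~16.67 ms per frame */
    uint64_t last_time = now_ns();

    while (running) {
        uint64_t frame_diff = now_ns() - last_time;
        if (frame_diff < frame_limit_ns) {
            struct timespec req = { 0, (long)(frame_limit_ns - frame_diff) };
            nanosleep(&req, NULL);       /* sleep may overshoot, so re-measure */
            frame_diff = now_ns() - last_time;
        }
        last_time += frame_diff;
        do_physics(frame_diff);
        draw_scene();
        swap_buffers();
    }
}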
One issue with the example code above is that last_time gets updated to current_time, which in turn comes from the latest clock reading, and that can lead to drift over time. To avoid this, last_time should be based on an original reading of the clock and advanced by the desired delay for each frame. The following Windows-based code is similar to what is used in games to keep a thread running at a fixed frequency with no drift. uRem is used to deal with frequencies that aren't exact multiples of the clock frequency. dwLateStep is a debugging aid that gets incremented each time the step time is exceeded. The code is Windows XP compatible, where a Sleep(1) can take up to 2 ms, so it checks for >= 2 ms of delay pending before using Sleep(1).
/* code for a thread to run at fixed frequency */

typedef unsigned long long UI64;   /* unsigned 64 bit int */

#define FREQ 400                   /* frequency */

DWORD dwLateStep;                  /* late step count */
LARGE_INTEGER liPerfFreq;          /* 64 bit frequency */
LARGE_INTEGER liPerfTemp;          /* used for query */
UI64 uFreq = FREQ;                 /* process frequency */
UI64 uOrig;                        /* original tick */
UI64 uWait;                        /* tick rate / freq */
UI64 uRem = 0;                     /* tick rate % freq */
UI64 uPrev;                        /* previous tick based on original tick */
UI64 uDelta;                       /* current tick - previous */
UI64 u2ms;                         /* 2ms of ticks */
UI64 i;

/* ... */                          /* wait for some event to start thread */

QueryPerformanceFrequency(&liPerfFreq);
u2ms = ((UI64)(liPerfFreq.QuadPart)+499) / ((UI64)500);

timeBeginPeriod(1);                /* set period to 1ms */
Sleep(128);                        /* wait for it to stabilize */

QueryPerformanceCounter((PLARGE_INTEGER)&liPerfTemp);
uOrig = uPrev = liPerfTemp.QuadPart;

for(i = 0; i < (uFreq*30); i++){
    /* update uWait and uRem based on uRem */
    uWait = ((UI64)(liPerfFreq.QuadPart) + uRem) / uFreq;
    uRem  = ((UI64)(liPerfFreq.QuadPart) + uRem) % uFreq;
    /* wait for uWait ticks */
    while(1){
        QueryPerformanceCounter((PLARGE_INTEGER)&liPerfTemp);
        uDelta = (UI64)(liPerfTemp.QuadPart - uPrev);
        if(uDelta >= uWait)
            break;
        if((uWait - uDelta) > u2ms)
            Sleep(1);
    }
    if(uDelta >= (uWait*2))
        dwLateStep += 1;
    uPrev += uWait;
    /* fixed frequency code goes here */
    /* along with some type of break when done */
}

timeEndPeriod(1);                  /* restore period */
I'm learning the C language and am very new to this. I'm trying to write a simple game that spawns an enemy 3 seconds after the start of the level. I tried the clock() function and had the problem that, while the enemy spawns after the given time, it also freezes the game so it is unplayable.
void delay (clock_t n) {
    clock_t start = clock();
    while(clock() - start < n);
}
I also tried the get_current_time() method, but the game freezes when I get to this level, so my code must be wrong. Can anyone suggest how to approach this?
void draw_hero( void ) {
    char * hero_image =
    /**/ "H H"
    /**/ "H H"
    /**/ "HHHHH"
    /**/ "H H"
    /**/ "H H";
    int hero_x = (screen_width() - HERO_WIDTH) / 2;
    int hero_y = (screen_height() - HERO_HEIGHT) / 2;
    hero = sprite_create(hero_x, hero_y, HERO_WIDTH, HERO_HEIGHT, hero_image);
    double lastTime = get_current_time();
    while (true){
        double current = get_current_time();
        double elapsed = current - lastTime;
        //lastTime = current;
        while (elapsed < lastTime + 500) ???
        sprite_draw(hero);
        show_screen();
    }
}
clock() is not the correct function to use here, nor is spinning in a tight loop an appropriate way to "sleep".
Here's a minimal game loop based on your code and it demonstrates how to create the enemy thing at 3 seconds into it. But a real game is a bit more complicated...
I'm using "true", which in C requires <stdbool.h> (it's built in for C++); since you were using it in your own code, I assume your setup provides it.
I don't know what get_current_time is, but I assume it returns a value with more granularity than a second (e.g. millisecond resolution).
// initialize - create screen, spawn initial sprites, create map etc..
bool has_created_enemy = false;
double baseTime = get_current_time(); // used for normalizing time to 0
double lastTime = 0;                  // last timestamp of last frame
double currentTime = 0;               // timestamp of current frame
while (true)
{
    lastTime = currentTime;
    currentTime = get_current_time() - baseTime;
    double elapsed_since_last_frame = currentTime - lastTime;

    // keyboard input
    // collision detection
    // update game state
    if ((currentTime >= 3.0) && (!has_created_enemy))
    {
        // spawn enemy
        has_created_enemy = true;
    }

    // draw frame
}
I have a loop which runs every X usecs, which consists of doing some I/O then sleeping for the remainder of the X usecs. To (roughly) calculate the sleep time, all I'm doing is taking a timestamp before and after the I/O and subtract the difference from X. Here is the function I'm using for the timestamp:
long long getus ()
{
    struct timeval time;
    gettimeofday(&time, NULL);
    return (long long) (time.tv_sec + time.tv_usec);
}
As you can imagine, this starts to drift pretty fast and the actual time between I/O bursts is usually quite a few ms longer than X.
To try and make it a little more accurate, I thought maybe if I keep a record of the previous starting timestamp, every time I start a new cycle I can calculate how long the previous cycle took (the time between this starting timestamp and the previous one). Then, I know how much longer than X it was, and I can modify my sleep for this cycle to compensate.
Here is how I'm trying to implement it:
long long start, finish, offset, previous, remaining_usecs;
long long delaytime_us = 1000000;

/* Initialise previous timestamp as 1000000us ago */
previous = getus() - delaytime_us;

while(1)
{
    /* starting timestamp */
    start = getus();

    /* here is where I would do some I/O */

    /* calculate how much to compensate */
    offset = (start - previous) - delaytime_us;
    printf("(%lld - %lld) - %lld = %lld\n",
           start, previous, delaytime_us, offset);

    previous = start;

    finish = getus();

    /* calculate to our best ability how long we spent on I/O.
     * We'll try and compensate for its inaccuracy next time around! */
    remaining_usecs = (delaytime_us - (finish - start)) - offset;
    printf("start=%lld,finish=%lld,offset=%lld,previous=%lld\nsleeping for %lld\n",
           start, finish, offset, previous, remaining_usecs);

    usleep(remaining_usecs);
}
It appears to work on the first iteration of the loop; however, after that things get messed up.
Here's the output for 5 iterations of the loop:
(1412452353 - 1411452348) - 1000000 = 5
start=1412452353,finish=1412458706,offset=5,previous=1412452353
sleeping for 993642
(1412454788 - 1412452353) - 1000000 = -997565
start=1412454788,finish=1412460652,offset=-997565,previous=1412454788
sleeping for 1991701
(1412454622 - 1412454788) - 1000000 = -1000166
start=1412454622,finish=1412460562,offset=-1000166,previous=1412454622
sleeping for 1994226
(1412457040 - 1412454622) - 1000000 = -997582
start=1412457040,finish=1412465861,offset=-997582,previous=1412457040
sleeping for 1988761
(1412457623 - 1412457040) - 1000000 = -999417
start=1412457623,finish=1412463533,offset=-999417,previous=1412457623
sleeping for 1993507
The first line of output shows how the previous cycle time was calculated. It appears that the first two timestamps are basically 1000000us apart (1412452353 - 1411452348 = 1000005). However, after this the distance between starting timestamps stops looking reasonable, and so does the offset.
Does anyone know what I'm doing wrong here?
EDIT: I would also welcome suggestions of better ways to get an accurate timer and be able to sleep during the delay!
After some more research I've discovered two things wrong here:
Firstly, I was calculating the timestamp wrong. getus() should convert the seconds to microseconds before adding the microsecond part, like this:
return (long long) time.tv_sec * 1000000 + time.tv_usec;
And secondly, I should be storing the timestamp in an unsigned long long or uint64_t.
So getus() should look like this:
uint64_t getus ()
{
    struct timeval time;
    gettimeofday(&time, NULL);
    return (uint64_t) time.tv_sec * 1000000 + time.tv_usec;
}
I won't actually be able to test this until tomorrow, so I will report back.
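Regarding the edit asking for better ways to get an accurate periodic sleep: an approach worth considering (my suggestion, not from the original post) is to sleep until absolute deadlines instead of compensating each relative sleep, so rounding errors can't accumulate. A minimal sketch, assuming POSIX clock_nanosleep() with TIMER_ABSTIME is available:
#include <errno.h>
#include <time.h>

#define PERIOD_NS 1000000000L   /* 1 second, matching delaytime_us above */

void periodic_io_loop(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);   /* first deadline = now */

    for (;;) {
        /* ... do the I/O here ... */

        /* advance the deadline by exactly one period; it is never
         * re-derived from "now", so no drift accumulates */
        next.tv_nsec += PERIOD_NS;
        while (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }

        /* sleep until the absolute deadline, retrying if a signal interrupts */
        while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL) == EINTR)
            ;
    }
}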
I am using Microsoft Visual Studio 2010. I want to measure time in microseconds in C on the Windows 7 platform. How can I do that?
The way to get accurate time measurements is via performance counters.
In Windows, you can use QueryPerformanceCounter() and QueryPerformanceFrequency():
http://msdn.microsoft.com/en-us/library/windows/desktop/ms644904%28v=vs.85%29.aspx
EDIT: Here's a simple example that measures the time needed to sum up the integers from 0 to 1000000000:
LARGE_INTEGER frequency;
LARGE_INTEGER start;
LARGE_INTEGER end;
// Get the frequency
QueryPerformanceFrequency(&frequency);
// Start timer
QueryPerformanceCounter(&start);
// Do some work
__int64 sum = 0;
int c;
for (c = 0; c < 1000000000; c++){
    sum += c;
}
printf("sum = %lld\n",sum);
// End timer
QueryPerformanceCounter(&end);
// Print Difference
double duration = (double)(end.QuadPart - start.QuadPart) / frequency.QuadPart;
printf("Seconds = %f\n",duration);
Output:
sum = 499999999500000000
Seconds = 0.659352
See QueryPerformanceCounter and QueryPerformanceFrequency.
gcc (GCC) 4.6.0 20110419 (Red Hat 4.6.0-5)
I am trying to get the start and end times and the difference between them.
The function I have is for creating an API for our existing hardware.
The API function wait_events takes one argument, a timeout in milliseconds. I am trying to get the start time before the while loop, using time() to get the number of seconds; then after each iteration of the loop I get the time difference and compare that difference with the timeout.
Many thanks for any suggestions,
/* Wait for an event up to a specified time out.
 * If an event occurs before the time out return 0
 * If an event timeouts out before an event return -1 */
int wait_events(int timeout_ms)
{
    time_t start = 0;
    time_t end = 0;
    double time_diff = 0;
    /* convert to seconds */
    int timeout = timeout_ms / 100;
    /* Get the initial time */
    start = time(NULL);
    while(TRUE) {
        if(open_device_flag == TRUE) {
            device_evt.event_id = EVENT_DEV_OPEN;
            return TRUE;
        }
        /* Get the end time after each iteration */
        end = time(NULL);
        /* Get the difference between times */
        time_diff = difftime(start, end);
        if(time_diff > timeout) {
            /* timed out before getting an event */
            return FALSE;
        }
    }
}
The calling function will look like this:
int main(void)
{
    #define TIMEOUT 500 /* 1/2 sec */
    while(TRUE) {
        if(wait_events(TIMEOUT) != 0) {
            /* Process incoming event */
            printf("Event fired\n");
        }
        else {
            printf("Event timed out\n");
        }
    }
    return 0;
}
=============== EDIT with updated results ==================
1) With no sleep -> 99.7% - 100% CPU
2) Setting usleep(10) -> 25% CPU
3) Setting usleep(100) -> 13% CPU
4) Setting usleep(1000) -> 2.6% CPU
5) Setting usleep(10000) -> 0.3 - 0.7% CPU
You're overcomplicating it - simplified:
time_t start = time(NULL);
for (;;) {
    // try something
    if (time(NULL) > start + 5) {
        printf("5s timeout!\n");
        break;
    }
}
time_t is in general just an int or long int (depending on your platform) counting the number of seconds since January 1st, 1970.
Side note:
int timeout = timeout_ms / 1000;
One second consists of 1000 milliseconds.
Edit - another note:
You'll most likely have to ensure that the other thread(s) and/or event handling can happen, so include some kind of thread inactivity (using sleep(), nanosleep() or whatever).
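As a concrete illustration (my sketch, reusing timeout_ms and open_device_flag from the question), a 1 ms nanosleep() per iteration keeps the CPU usage down, much like the usleep(1000) row in the edit above:
#include <stdbool.h>
#include <time.h>

extern volatile bool open_device_flag;   /* set elsewhere, as in the question */

/* return 0 if an event occurs before the timeout, -1 on timeout */
int wait_events(int timeout_ms)
{
    time_t start = time(NULL);
    struct timespec pause = { 0, 1000000L };   /* 1 ms */

    for (;;) {
        if (open_device_flag)
            return 0;
        /* time() only has 1 second resolution; use clock_gettime() if the
         * timeout needs to be finer than that */
        if (difftime(time(NULL), start) * 1000.0 >= timeout_ms)
            return -1;
        nanosleep(&pause, NULL);   /* yield the CPU */
    }
}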
Without calling a Sleep() function this is a really bad design: your loop will use 100% of the CPU. Even if you are using threads, your other threads won't have much time to run, as this thread will consume many CPU cycles.
You should design something like that:
while(true) {
    Sleep(100); // let's say you want a precision of 100 ms
    // Do the compare time stuff here
}
If you need precise timing and are using different threads/processes, use mutexes (semaphores with an increment/decrement of 1) or critical sections to make sure the time comparison in your function is not interrupted by another process/thread of your own.
I believe your Red Hat is System V-based, so you can synchronize using IPC.
How do I stamp two times t1 and t2 and get the difference in milliseconds in C?
This will give you the time in seconds + microseconds
#include <sys/time.h>
struct timeval tv;
gettimeofday(&tv,NULL);
tv.tv_sec // seconds
tv.tv_usec // microseconds
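To turn two such readings into a millisecond difference, something like the following sketch should work (assuming t2 was taken after t1):
#include <sys/time.h>

/* millisecond difference between two gettimeofday() readings, t2 taken after t1 */
long diff_ms(struct timeval t2, struct timeval t1)
{
    return (t2.tv_sec - t1.tv_sec) * 1000L + (t2.tv_usec - t1.tv_usec) / 1000L;
}
Because the microsecond term is divided with truncation toward zero, the result stays correct even when t2.tv_usec is smaller than t1.tv_usec.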
Standard C99:
#include <time.h>
time_t t0 = time(0);
// ...
time_t t1 = time(0);
double datetime_diff_ms = difftime(t1, t0) * 1000.;
clock_t c0 = clock();
// ...
clock_t c1 = clock();
double runtime_diff_ms = (c1 - c0) * 1000. / CLOCKS_PER_SEC;
The precision of the types is implementation-defined, i.e. the datetime difference might only resolve to full seconds.
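If C11 is available, timespec_get() gives sub-second resolution in standard C as well; a minimal sketch (my addition, assuming a C11 library):
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;
    timespec_get(&t0, TIME_UTC);
    // ... work to be timed ...
    timespec_get(&t1, TIME_UTC);

    double elapsed_ms = (t1.tv_sec - t0.tv_sec) * 1000.0
                      + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("elapsed: %.3f ms\n", elapsed_ms);
    return 0;
}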
If you want to find elapsed time, this method will work as long as you don't reboot the computer between the start and end.
In Windows, use GetTickCount(). Here's how:
DWORD dwStart = GetTickCount();
...
... process you want to measure elapsed time for
...
DWORD dwElapsed = GetTickCount() - dwStart;
dwElapsed is now the number of elapsed milliseconds.
In Linux you can use clock() and CLOCKS_PER_SEC for roughly the same thing, but note that clock() measures CPU time used by your process rather than wall-clock time; gettimeofday() is the closer equivalent of GetTickCount().
If you need timestamps that last through reboots or across PCs (which would need quite good synchronization indeed), then use the other methods (gettimeofday()).
Also, in Windows at least you can get much better than standard time resolution. Usually, if you called GetTickCount() in a tight loop, you'd see it jumping by 10-50 each time it changed. That's because of the time quantum used by the Windows thread scheduler. This is more or less the amount of time it gives each thread to run before switching to something else. If you do a:
timeBeginPeriod(1);
at the beginning of your program or process and a:
timeEndPeriod(1);
at the end, then the quantum will change to 1 ms, and you will get much better time resolution on the GetTickCount() call. This does make a subtle change to how your entire computer runs processes, so keep that in mind. That said, Windows Media Player and many other things do this routinely anyway, so I don't worry too much about it.
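Putting those pieces together, here's a minimal sketch of the Windows pattern described above (timeBeginPeriod/timeEndPeriod live in winmm.lib, and the Sleep(250) is just a stand-in for the work being measured):
#include <stdio.h>
#include <windows.h>
#pragma comment(lib, "winmm.lib")   /* timeBeginPeriod / timeEndPeriod */

int main(void)
{
    timeBeginPeriod(1);                     /* raise timer resolution to ~1 ms */

    DWORD dwStart = GetTickCount();
    Sleep(250);                             /* ... process you want to measure ... */
    DWORD dwElapsed = GetTickCount() - dwStart;

    printf("elapsed: %lu ms\n", (unsigned long)dwElapsed);

    timeEndPeriod(1);                       /* restore the previous resolution */
    return 0;
}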
I'm sure there's some way to do the same in Linux (probably with much better control, or maybe with sub-millisecond quanta), but I haven't needed to do that in Linux yet.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/*
Returns the current time as a "YYYYMMDDhhmmss" string.
*/
char *time_stamp(){
    char *timestamp = (char *)malloc(sizeof(char) * 16);
    time_t ltime;
    ltime = time(NULL);
    struct tm *tm;
    tm = localtime(&ltime);
    sprintf(timestamp, "%04d%02d%02d%02d%02d%02d", tm->tm_year + 1900, tm->tm_mon + 1,
            tm->tm_mday, tm->tm_hour, tm->tm_min, tm->tm_sec);
    return timestamp;
}
int main(){
    printf(" Timestamp: %s\n", time_stamp());
    return 0;
}
Output: Timestamp: 20110912130940 // 2011 Sep 12 13:09:40
Use @Arkaitz Jimenez's code to get two timevals:
#include <sys/time.h>
//...
struct timeval tv1, tv2, diff;
// get the first time:
gettimeofday(&tv1, NULL);
// do whatever it is you want to time
// ...
// get the second time:
gettimeofday(&tv2, NULL);
// get the difference (later time first, since timeval_subtract computes x - y):
int result = timeval_subtract(&diff, &tv2, &tv1);
// the difference is stored in diff now.
Sample code for timeval_subtract (it originally comes from the GNU C Library manual):
/* Subtract the `struct timeval' values X and Y,
   storing the result in RESULT.
   Return 1 if the difference is negative, otherwise 0. */
int
timeval_subtract (result, x, y)
     struct timeval *result, *x, *y;
{
    /* Perform the carry for the later subtraction by updating y. */
    if (x->tv_usec < y->tv_usec) {
        int nsec = (y->tv_usec - x->tv_usec) / 1000000 + 1;
        y->tv_usec -= 1000000 * nsec;
        y->tv_sec += nsec;
    }
    if (x->tv_usec - y->tv_usec > 1000000) {
        int nsec = (x->tv_usec - y->tv_usec) / 1000000;
        y->tv_usec += 1000000 * nsec;
        y->tv_sec -= nsec;
    }

    /* Compute the time remaining to wait.
       tv_usec is certainly positive. */
    result->tv_sec = x->tv_sec - y->tv_sec;
    result->tv_usec = x->tv_usec - y->tv_usec;

    /* Return 1 if result is negative. */
    return x->tv_sec < y->tv_sec;
}
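Since the question asks for milliseconds, the resulting struct timeval can then be folded into a single value; a small helper of my own (not part of the quoted code):
/* milliseconds represented by a struct timeval difference */
long diff_to_ms(const struct timeval *diff)
{
    return diff->tv_sec * 1000L + diff->tv_usec / 1000L;
}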
How about this solution? I didn't see anything like it in my search. I am trying to avoid division and make the solution simpler.
struct timeval cur_time1, cur_time2, tdiff;

gettimeofday(&cur_time1, NULL);
sleep(1);
gettimeofday(&cur_time2, NULL);

/* borrow one second up front so the carry loop below can give it back */
tdiff.tv_sec = cur_time2.tv_sec - cur_time1.tv_sec - 1;
tdiff.tv_usec = cur_time2.tv_usec + (1000000 - cur_time1.tv_usec);

while(tdiff.tv_usec >= 1000000)
{
    tdiff.tv_sec++;
    tdiff.tv_usec -= 1000000;
    printf("updated tdiff tv_sec:%ld tv_usec:%ld\n", tdiff.tv_sec, tdiff.tv_usec);
}

printf("end tdiff tv_sec:%ld tv_usec:%ld\n", tdiff.tv_sec, tdiff.tv_usec);
Also be aware of the interaction between clock() and usleep(): usleep() suspends the program, and clock() only measures the time the program is running.
You might be better off using gettimeofday(), as mentioned here.
Use gettimeofday(), or better, clock_gettime().
You can try the routines in the C time library (time.h). Also take a look at clock() in the same library: it gives the clock ticks since the program started. You can save its value before the operation you want to measure, capture the clock ticks again after that operation, and take the difference between them to get the elapsed time.
#include <stdio.h>
#include <string.h>
#include <time.h>

time_t tm = time(NULL);
char stime[4096];
ctime_r(&tm, stime);
stime[strlen(stime) - 1] = '\0'; /* strip the trailing newline */
printf("%s", stime);
This program clearly shows how to do it: it takes time 1, pauses for 1 second, then takes time 2; the difference between the two times should be about 1000 milliseconds, so you can check that the result is correct.
#include <stdio.h>
#include <time.h>
#include <unistd.h>

// Name: miliseconds.c
// gcc /tmp/miliseconds.c -o miliseconds

struct timespec ts1, ts2; // time1 and time2

int main (void) {
    // get time1
    clock_gettime(CLOCK_REALTIME, &ts1);

    sleep(1); // 1 second pause

    // get time2
    clock_gettime(CLOCK_REALTIME, &ts2);

    // nanoseconds difference in milli
    long milliseconds1 = (ts2.tv_nsec - ts1.tv_nsec) / 1000000;
    // seconds difference in milli
    long milliseconds2 = (ts2.tv_sec - ts1.tv_sec) * 1000;

    long milliseconds = milliseconds1 + milliseconds2;

    printf("%ld\n", milliseconds);
    return 0;
}