OS-independent C library for calculating time lapse?

So in my game I need to simulate the physics in a hardware-independent manner.
I'm going to use a fixed time-step simulation, but I need to be able to calculate how much time is passing between calls.
I tried this and got nowhere:
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

double time_elapsed;
clock_t last_clocks = clock();
while (true) {
    time_elapsed = (double)(clock() - last_clocks) / CLOCKS_PER_SEC;
    last_clocks = clock();
    printf("%f\n", time_elapsed);
}
Thanks!

You can use gettimeofday to get the number of seconds plus the number of microseconds since the Epoch. If you want the number of seconds, you can do something like this:
#include <sys/time.h>

double getTime()
{
    struct timeval time;
    gettimeofday(&time, 0);
    /* double, not float: a float cannot hold seconds-since-Epoch with sub-second precision */
    return (double)time.tv_sec + 0.000001 * (double)time.tv_usec;
}
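To measure an elapsed interval you call it twice and subtract, for example:
double t0 = getTime();
/* ... do some work ... */
double elapsed = getTime() - t0; /* seconds, with microsecond resolution */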
I misunderstood your question at first, but you might find the following method of making a fixed-timestep physics loop useful.
There are two things you need to do to make a fixed-timestep physics loop.
First, you need to calculate the time between now and the last time you ran physics.
double last, now;

last = getTime();
while (running)
{
    now = getTime();
    // Do other game stuff
    simulatePhysics(now - last);
    last = now;
}
Then, inside the physics simulation, you need to slice that elapsed time into fixed timesteps.
void simulatePhysics(float dt)
{
    static float timeStepRemainder; // fractional timestep carried over from the last call
    dt += timeStepRemainder * SIZE_OF_TIMESTEP;
    float desiredTimeSteps = dt / SIZE_OF_TIMESTEP;
    int nSteps = (int)floorf(desiredTimeSteps); // need an integer number of timesteps; floorf needs <math.h>
    timeStepRemainder = desiredTimeSteps - nSteps;
    for (int i = 0; i < nSteps; i++)
        doPhysics(SIZE_OF_TIMESTEP);
}
Using this method, you can give whatever is doing physics (doPhysics in my example) a fixed timestep, while keeping real time and game time in sync by calculating the right number of timesteps to simulate since the last time physics was run.

clock_gettime(3).
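That is, something like the following minimal sketch (POSIX; on older glibc you may need to link with -lrt). CLOCK_MONOTONIC is a good choice for game loops because it is not affected by system clock adjustments:
#include <stdio.h>
#include <time.h>

/* elapsed seconds between two timespecs */
static double elapsed_seconds(const struct timespec *start, const struct timespec *end)
{
    return (double)(end->tv_sec - start->tv_sec)
         + (double)(end->tv_nsec - start->tv_nsec) / 1000000000.0;
}

int main(void)
{
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start); /* monotonic: immune to clock changes */
    /* ... work to be timed ... */
    clock_gettime(CLOCK_MONOTONIC, &end);
    printf("%f s\n", elapsed_seconds(&start, &end));
    return 0;
}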

Related

Printing execution time, two decimal points

I'm using the following function to estimate the execution time (performance) of a function:
double print_time(struct timeval *start, struct timeval *end)
{
    double usec;
    usec = (end->tv_sec * 1000000 + end->tv_usec) - (start->tv_sec * 1000000 + start->tv_usec);
    return usec / 1000.0; /* note: this actually returns milliseconds */
}
Simple Code:
struct timeval start, end;
double t = 0.0;
gettimeofday(&start, NULL);
... //code
gettimeofday(&end, NULL);
t = print_time(&start, &end);
printf("%.2f", t);
Why, when I print the variable, do I see the time formatted like 3.613.97? The problem is the two points: what do the first point and the second point mean? Normally I have always seen just one decimal point separating the digits.
Include the right headers:
#include <time.h>
#include <stdio.h>
At the beginning of the program:
clock_t starttime = clock();
At the end of the program:
printf("elapsed time: %.3f s\n", (float)(clock() - starttime) / CLOCKS_PER_SEC);
This works on several platforms (including Windows with MinGW-w64), but the resolution of the timer may vary per platform (CLOCKS_PER_SEC is only the scale factor, not the granularity).
Note that this measures the processor time used by your application, rather than the wall-clock time that passed between start and end.
So while it is not an exact chronometer, it gives a better idea of how much time your program actually spent running.

Measuring processor ticks in C

I wanted to calculate the difference in execution time when executing the same code inside a function. To my surprise, however, sometimes the clock difference is 0 when I use clock()/clock_t for the start and stop timers. Does this mean that clock()/clock_t does not actually return the number of clicks the processor spent on the task?
After a bit of searching, it seemed to me that clock_gettime() would return more fine-grained results. And indeed it does, but I instead end up with an arbitrary number of nano(?)seconds. It gives a hint of the difference in execution time, but it's hardly accurate as to exactly how many clicks of difference it amounts to. What would I have to do to find this out?
#include <math.h>
#include <stdio.h>
#include <time.h>

#define M_PI_DOUBLE (M_PI * 2)

void rotatetest(const float *x, const float *c, float *result) {
    float rotationfraction = *x / *c;
    *result = M_PI_DOUBLE * rotationfraction;
}

int main() {
    int i;
    long test_total = 0;
    int test_count = 1000000;
    struct timespec test_time_begin;
    struct timespec test_time_end;
    float r = 50.f;
    float c = 2 * M_PI * r;
    float x = 3.f;
    float result_inline = 0.f;
    float result_function = 0.f;
    for (i = 0; i < test_count; i++) {
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &test_time_begin);
        float rotationfraction = x / c;
        result_inline = M_PI_DOUBLE * rotationfraction;
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &test_time_end);
        test_total += test_time_end.tv_nsec - test_time_begin.tv_nsec;
    }
    printf("Inline clocks %li, avg %f (result is %f)\n", test_total, test_total / (float)test_count, result_inline);
    test_total = 0; /* reset the accumulator before the second measurement */
    for (i = 0; i < test_count; i++) {
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &test_time_begin);
        rotatetest(&x, &c, &result_function);
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &test_time_end);
        test_total += test_time_end.tv_nsec - test_time_begin.tv_nsec;
    }
    printf("Function clocks %li, avg %f (result is %f)\n", test_total, test_total / (float)test_count, result_function);
    return 0;
}
I am using gcc version 4.8.4 on Linux 3.13.0-37-generic (Linux Mint 16)
First of all: as already mentioned in the comments, clocking a single run of the operation will probably do you no good. In the worst case, the call for getting the time might actually take longer than the actual execution of the operation.
Please clock multiple runs of the operation (including a warm-up phase so everything is paged in) and calculate the average running time.
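A rough sketch of that approach, timing one batch and dividing (the workload here is a trivial stand-in; substitute your own):
#include <stdio.h>
#include <time.h>

#define WARMUP 1000
#define RUNS   1000000

int main(void) {
    struct timespec begin, end;
    volatile float sink = 0.f; /* volatile so the work is not optimized away */

    for (int i = 0; i < WARMUP; i++)   /* warm-up: caches, branch predictors, paging */
        sink += (float)i * 0.5f;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &begin); /* time the whole batch once */
    for (int i = 0; i < RUNS; i++)
        sink += (float)i * 0.5f;       /* stand-in for the operation under test */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);

    double total_ns = (double)(end.tv_sec - begin.tv_sec) * 1e9
                    + (double)(end.tv_nsec - begin.tv_nsec);
    printf("avg %.2f ns per run (sink=%f)\n", total_ns / RUNS, sink);
    return 0;
}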
clock() isn't guaranteed to be monotonic. It also isn't the number of processor clicks (whatever you define this to be) the program has run. The best way to describe the result from clock() is probably "a best effort estimation of the time any one of the CPUs has spent on calculation for the current process". For benchmarking purposes clock() is thus mostly useless.
As per specification:
The clock() function returns the implementation's best approximation to the processor time used by the process since the beginning of an implementation-dependent time related only to the process invocation.
And additionally
To determine the time in seconds, the value returned by clock() should be divided by the value of the macro CLOCKS_PER_SEC.
So, if you call clock() at intervals shorter than its resolution, you are out of luck.
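If you want to see the granularity on your system, a quick illustrative sketch is to spin until clock() ticks over:
#include <stdio.h>
#include <time.h>

int main(void) {
    clock_t start = clock();
    clock_t next;
    while ((next = clock()) == start)
        ; /* busy-wait until clock() advances by one tick */
    printf("observed clock() granularity: ~%f s (CLOCKS_PER_SEC=%ld)\n",
           (double)(next - start) / CLOCKS_PER_SEC, (long)CLOCKS_PER_SEC);
    return 0;
}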
For profiling/benchmarking, you should, if possible, use one of the performance clocks that are available on modern hardware. The prime candidates are probably:
The HPET
The TSC
Edit: The question now references CLOCK_PROCESS_CPUTIME_ID, which is Linux's way of exposing the TSC.
Whether either (or both) is available depends on the hardware and is also operating-system specific.
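For illustration only, here is how you could read the TSC directly with the __rdtsc intrinsic (x86 with GCC/Clang; note that raw TSC counts are affected by frequency scaling and are not comparable across cores on older CPUs):
#include <stdio.h>
#include <x86intrin.h>

int main(void) {
    unsigned long long start = __rdtsc(); /* read the time-stamp counter */
    /* ... work to be measured ... */
    unsigned long long end = __rdtsc();
    printf("elapsed: %llu TSC ticks\n", end - start);
    return 0;
}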
After googling a little, I can see that the clock() function can be used as a standard mechanism to find the time taken for execution, but be aware that the time will vary from run to run depending on the load of your processor.
You can just use the code below for the calculation:
clock_t begin, end;
double time_spent;
begin = clock();
/* here, do your time-consuming job */
end = clock();
time_spent = (double)(end - begin) / CLOCKS_PER_SEC;

Getting 0 when trying to measure the execution time of function in C

I have a simple function which takes random words and puts them in lexicographical order using the insertion sort algorithm. I have no problem with the function (it works, tested), but when I try to measure its execution time using two different clock() values, I get the same value before and after the execution of the function, so it shows 0 as the elapsed time:
clock_t t1 = clock();
InsertionSort(data, n);
clock_t t2 = clock();
/*
* Display the results.
*/
for (size = i, i = 0; i < size; ++i)
{
    printf("data[%d] = \"%s\"\n", (int)i, data[i]);
}
/*
* Display the execution time
*/
printf("The time taken is.. %g ", (t2 -t1));
The time difference is too small to be measured by this method, without adding more code to execute. – Weather Vane
Usually, you contrive a way to measure a large number of loops of what you want to time: 10, 100, 1000, whatever produces a significant result. Bear in mind too that on a multi-tasking OS each iteration will take a slightly different time, so you'll also want to establish a typical average. The result might also be affected by processor caching and/or file caching. – Weather Vane
Try like this:
#include <sys/types.h>
#include <sys/time.h>
#include <stdlib.h>
#include <stdio.h>
double gettime(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    /* whole seconds plus fractional microseconds */
    return (double)tv.tv_sec + (double)tv.tv_usec / 1000000.0;
}
double t1=gettime();
InsertionSort(data, n);
printf("%.6f", gettime() - t1);
or maybe you need to change your code like this:
clock_t t1 = clock();
InsertionSort(data, n);
clock_t t2 = clock();
double d = (double)(t2 - t1) / CLOCKS_PER_SEC;
You can also refer: Easily measure elapsed time
You are incorrectly using the floating-point format specifier %g with an integer argument (t2 - t1 is a clock_t, not a double). Try this:
printf("The time taken is.. %u clock ticks", (unsigned)(t2 -t1));
Always assuming the execution time is longer than the granularity of clock().

CPU usage in C (as percentage)

How can I get CPU usage as percentage using C?
I have a function like this:
static int cpu_usage(lua_State *L) {
    clock_t clock_now = clock();
    double cpu_percentage = ((double)(clock_now - program_start)) / get_cpus() / CLOCKS_PER_SEC;
    lua_pushnumber(L, cpu_percentage);
    return 1;
}
"program_start" is a clock_t that I use when the program starts.
Another try:
static int cpu_usage(lua_State *L) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    lua_pushnumber(L, ru.ru_utime.tv_sec);
    return 1;
}
Is there any way to measure CPU usage? If I call this function from time to time it keeps returning an ever-increasing time... but that's not what I want.
PS: I'm using Ubuntu.
Thank you! =)
Your function should work as expected. From the specification of clock():
The clock() function shall return the implementation's best approximation to the processor time used by the process since the beginning of an implementation-defined era related only to the process invocation.
This means, it returns the CPU time for this process.
If you want to calculate the CPU time relative to the wall-clock time, you must do the same with gettimeofday. Save the time at program start:
struct timeval wall_start;
gettimeofday(&wall_start, NULL);
and when you want to calculate the percentage:
struct timeval wall_now;
gettimeofday(&wall_now, NULL);
Now you can calculate the difference in wall-clock time and you get:
double start = wall_start.tv_sec + wall_start.tv_usec / 1000000.0; /* 1000000.0, not 1000000: integer division would drop the microseconds */
double stop = wall_now.tv_sec + wall_now.tv_usec / 1000000.0;
double wall_time = stop - start;
double cpu_time = ...;
double percentage = cpu_time / wall_time;
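Putting the pieces together in plain C (without the Lua wrapper, and single-threaded, so there is no division by get_cpus() here), a minimal sketch:
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main(void) {
    clock_t cpu_start = clock();
    struct timeval wall_start;
    gettimeofday(&wall_start, NULL);

    /* example workload: burn some CPU */
    volatile double x = 0;
    for (long i = 0; i < 100000000L; i++)
        x += 1.0;

    clock_t cpu_now = clock();
    struct timeval wall_now;
    gettimeofday(&wall_now, NULL);

    double cpu_time = (double)(cpu_now - cpu_start) / CLOCKS_PER_SEC;
    double wall_time = (wall_now.tv_sec - wall_start.tv_sec)
                     + (wall_now.tv_usec - wall_start.tv_usec) / 1000000.0;
    printf("CPU usage: %.1f%%\n", 100.0 * cpu_time / wall_time);
    return 0;
}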

clock_gettime on Raspberry Pi with C

I want to measure the time from the start to the end of the function in a loop. This difference will be used to set the number of iterations of the inner while-loop, which does some stuff that is not important here.
I want to time the function like this:
#include <wiringPi.h>
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>
#define BILLION 1E9
float hz = 1000;
long int nsPerTick = BILLION/hz;
double unprocessed = 1;
struct timespec now;
struct timespec last;
clock_gettime(CLOCK_REALTIME, &last);
[ ... ]
while (1)
{
    clock_gettime(CLOCK_REALTIME, &now);
    double diff = (last.tv_nsec - now.tv_nsec);
    unprocessed = unprocessed + (diff / nsPerTick);
    clock_gettime(CLOCK_REALTIME, &last);
    while (unprocessed >= 1) {
        unprocessed--;
        DO SOME RANDOM MAGIC;
    }
}
The difference between the timer is always negative. I was told this was where the error was:
if ((last.tv_nsec - now.tv_nsec) < 0) {
    double diff = 1000000000 + last.tv_nsec - now.tv_nsec;
}
else {
    double diff = (last.tv_nsec - now.tv_nsec);
}
But still, my diff variable is always negative, like -1095043244 (even though the time spent in the function is of course positive).
What's wrong?
Your first issue is that you have last.tv_nsec - now.tv_nsec, which is the wrong way round.
last.tv_nsec is in the past (let's say it's set to 1), and now.tv_nsec will always be later (for example, 8 ns later, so it's 9). In that case, last.tv_nsec - now.tv_nsec == 1 - 9 == -8.
The other issue is that tv_nsec isn't the time in nanoseconds: for that, you'd need to multiply the time in seconds by a billion and add that. So to get the difference in ns between now and last, you want:
((now.tv_sec - last.tv_sec) * ONE_BILLION) + (now.tv_nsec - last.tv_nsec)
(N.B. I'm still a little surprised that although now.tv_nsec and last.tv_nsec are both less than a billion, subtracting one from the other gives a value less than -1000000000, so there may yet be something I'm missing here.)
I was just investigating timing on the Pi, with a similar approach and similar problems. My thoughts are:
You don't have to use double. In fact you don't need nanoseconds either, as the clock on the Pi has 1 microsecond accuracy anyway (it's the way Broadcom did it). I suggest you use gettimeofday() to get microseconds instead of nanoseconds. Then the computation is easy; the elapsed time in microseconds is just:
(1000 * 1000 * number of seconds) + number of microseconds
which you can simply calculate as an unsigned int.
I've implemented a convenient API for this:
#include <sys/time.h>

typedef struct
{
    struct timeval startTimeVal;
} TIMER_usecCtx_t;

void TIMER_usecStart(TIMER_usecCtx_t* ctx)
{
    gettimeofday(&ctx->startTimeVal, NULL);
}

unsigned int TIMER_usecElapsedUs(TIMER_usecCtx_t* ctx)
{
    unsigned int rv;
    /* get current time */
    struct timeval nowTimeVal;
    gettimeofday(&nowTimeVal, NULL);
    /* compute diff */
    rv = 1000000 * (nowTimeVal.tv_sec - ctx->startTimeVal.tv_sec) + nowTimeVal.tv_usec - ctx->startTimeVal.tv_usec;
    return rv;
}
And the usage is:
TIMER_usecCtx_t timer;
TIMER_usecStart(&timer);
while (1)
{
    if (TIMER_usecElapsedUs(&timer) > yourDelayInMicroseconds)
    {
        doSomethingHere();
        TIMER_usecStart(&timer);
    }
}
Also notice that the get-time calls on the Pi take almost 1 us to complete. So, if you need to call gettimeofday() a lot and need more accuracy, go for some more advanced methods of getting time... I've explained more about it in this short article about Pi get-time calls.
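If you want to measure that overhead on your own board, a quick sketch is to time a large batch of calls and average (the call count is arbitrary):
#include <stdio.h>
#include <sys/time.h>

#define CALLS 1000000

int main(void) {
    struct timeval t, start, end;
    gettimeofday(&start, NULL);
    for (int i = 0; i < CALLS; i++)
        gettimeofday(&t, NULL); /* the call being measured */
    gettimeofday(&end, NULL);
    double total_us = 1000000.0 * (end.tv_sec - start.tv_sec)
                    + (end.tv_usec - start.tv_usec);
    printf("avg gettimeofday() cost: %.3f us\n", total_us / CALLS);
    return 0;
}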
Well, I don't know C, but if it's a timing issue on a Raspberry Pi it might have something to do with the lack of an RTC (real time clock) on the chip.
You should not be storing last.tv_nsec - now.tv_nsec in a double.
If you look at the documentation of time.h, you can see that tv_nsec is stored as a long. So you will need something along the lines of:
long diff = end.tv_nsec - begin.tv_nsec;
With that being said, comparing only the nanoseconds can go wrong. You also need to look at the number of seconds. So, to convert everything to seconds, you can use this:
long nanosec_diff = end.tv_nsec - begin.tv_nsec;
time_t sec_diff = end.tv_sec - begin.tv_sec; // need <sys/types.h> for time_t
double diff_in_seconds = sec_diff + nanosec_diff / 1000000000.0;
Also, make sure you are always subtracting the start time from the end time (or else your time will still be negative).
And there you go!
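For completeness, here is a sketch of a difference helper that handles the nanosecond borrow explicitly (timespec_diff is my own name, not a standard function):
#include <stdio.h>
#include <time.h>

/* end - begin, normalizing tv_nsec into [0, 1e9) */
static struct timespec timespec_diff(struct timespec begin, struct timespec end)
{
    struct timespec d;
    d.tv_sec = end.tv_sec - begin.tv_sec;
    d.tv_nsec = end.tv_nsec - begin.tv_nsec;
    if (d.tv_nsec < 0) {           /* borrow one second */
        d.tv_sec -= 1;
        d.tv_nsec += 1000000000L;
    }
    return d;
}

int main(void)
{
    struct timespec begin, end;
    clock_gettime(CLOCK_MONOTONIC, &begin);
    /* ... work ... */
    clock_gettime(CLOCK_MONOTONIC, &end);
    struct timespec d = timespec_diff(begin, end);
    printf("%ld.%09ld s\n", (long)d.tv_sec, d.tv_nsec);
    return 0;
}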
