I want to replace the obsolete usleep() function with nanosleep() in my code:
static int timediff( struct timeval *large, struct timeval *small )
{
return ( ( ( large->tv_sec * 1000 * 1000 ) + large->tv_usec )
- ( ( small->tv_sec * 1000 * 1000 ) + small->tv_usec ) );
}
struct performance_s
{
struct timeval acquired_input;
};
performance_t *performance_new( int fieldtimeus )
{
performance_t *perf = malloc( sizeof( performance_t ) );
if( !perf ) return 0;
gettimeofday( &perf->acquired_input, 0 );
return perf;
}
performance_t *perf = 0;
int performance_get_usecs_since_frame_acquired( performance_t *perf )
{
struct timeval now;
gettimeofday( &now, 0 );
return timediff( &now, &perf->acquired_input );
}
int fieldtime = videoinput_get_time_per_field( norm );
if( rtctimer ) {
while( performance_get_usecs_since_frame_acquired( perf )
< ( (fieldtime*2) - (rtctimer_get_usecs( rtctimer ) / 2) ) ) {
rtctimer_next_tick( rtctimer );
}
} else {
int timeleft = performance_get_usecs_since_frame_acquired( perf );
if( timeleft < fieldtime )
usleep( fieldtime - timeleft );
}
Question: does this replacement give the same timing precision as usleep (and is it a correct replacement)?
struct timespec delay = {0, ( fieldtime - timeleft )}; nanosleep(&delay, NULL);
One of the reasons usleep is obsolete is that the behavior when it was interrupted by a signal was inconsistent among historical systems. Depending on your needs, this may mean your naive replacement with nanosleep is not quite what you want. In particular, nanosleep returns immediately when any signal handler is executed, even if the signal handler was installed with SA_RESTART. So you may want to do something like:
while (nanosleep(&delay, &delay));
so that, if the sleep is interrupted, the remaining time is saved and the loop restarts the sleep for that remainder.
Note also that nanosleep uses struct timespec, whose tv_nsec field is in nanoseconds, not microseconds. Thus, if your interval values are in microseconds, you must multiply them by 1000 to get nanoseconds.
Also, be aware that it is an error (reported as EINVAL) to pass a tv_nsec value that is negative or greater than 999999999. timespec values must be "normalized", i.e. the nanoseconds must be between 0 and 999999999 (inclusive), with anything larger carried into the seconds (tv_sec) field of the structure.
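For example, a small helper along these lines would take care of both the scaling and the normalization (the name usecs_to_timespec is just for illustration, not part of any API):
#include <time.h>

/* Illustrative helper: convert a microsecond interval into a
   normalized struct timespec suitable for nanosleep(). */
static struct timespec usecs_to_timespec( long usecs )
{
    struct timespec ts;
    ts.tv_sec  = usecs / 1000000;               /* whole seconds */
    ts.tv_nsec = ( usecs % 1000000 ) * 1000L;   /* remainder, as nanoseconds */
    return ts;
}
With that, your replacement would read struct timespec delay = usecs_to_timespec( fieldtime - timeleft ); followed by the retry loop above.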
Your underlying objective here is to sleep until fieldtime microseconds after the previous frame was acquired. The clock_nanosleep() function allows you to do this directly - sleep until a particular absolute time has been reached - so it is a better fit for your requirement. Using this function would look like:
int fieldtime = videoinput_get_time_per_field( norm );
struct timespec deadline = perf->acquired_input;
deadline.tv_nsec += fieldtime * 1000L;
deadline.tv_sec += deadline.tv_nsec / 1000000000;
deadline.tv_nsec %= 1000000000;
while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL) == EINTR)
    ;
This presumes that you change perf->acquired_input to be a struct timespec set by clock_gettime(CLOCK_MONOTONIC, &perf->acquired_input) rather than gettimeofday(). (Note that clock_nanosleep() reports errors through its return value rather than errno, hence the comparison with EINTR.) The CLOCK_MONOTONIC clock is a better fit for this case, because it is not affected by changes to the system time.
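As a rough sketch of that change (assuming the performance_t typedef and the allocation pattern from your excerpt), the acquisition side would become something like:
#include <stdlib.h>
#include <time.h>

typedef struct performance_s performance_t;   /* typedef assumed from the full source */

struct performance_s
{
    struct timespec acquired_input;   /* was struct timeval */
};

performance_t *performance_new( int fieldtimeus )
{
    performance_t *perf = malloc( sizeof( *perf ) );
    if( !perf ) return 0;
    /* Monotonic clock: immune to system time changes. */
    clock_gettime( CLOCK_MONOTONIC, &perf->acquired_input );
    return perf;
}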
I need to define a C API, e.g. GetTimerCountInNS(void), to get the current timer count in nanoseconds, so that using this API I can calculate the total execution time of some work in nanoseconds. Can someone suggest what is wrong with my GetTimerCountInNS function? When I calculate the total execution time it shows an incorrect value, whereas the millisecond version shows the correct one.
I have already checked other related questions but could not find an exact answer. I don't want to write the whole conversion in the main code when calculating the time in nanoseconds.
I need to use a custom API to get the count in nanoseconds, and by taking the difference of the start and stop counts I get the total execution time.
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>
#define BILLION 1000000000L;
// This API provides incorrect time in NS duration
uint64_t GetTimerCountInNS(void)
{
struct timespec currenttime;
clock_gettime( CLOCK_REALTIME, &currenttime);
//I am not sure here how to calculate count in NS
return currenttime.tv_nsec;
}
// This API provides correct time in MS duration
uint64_t GetTimerCountInMS(void)
{
struct timespec currenttime;
clock_gettime( CLOCK_REALTIME, &currenttime);
return (1000 * currenttime.tv_sec) + ((double)currenttime.tv_nsec / 1e6);
}
int main( int argc, char** argv )
{
struct timespec start, stop;
uint64_t start_ns,end_ns;
uint64_t start_ms,end_ms;
clock_gettime( CLOCK_REALTIME, &start);
start_ms = GetTimerCountInMS();
start_ns = GetTimerCountInNS();
int f = 0;
sleep(3);
clock_gettime( CLOCK_REALTIME, &stop);
end_ms = GetTimerCountInMS();
end_ns = GetTimerCountInNS();
double total_time_sec = ( stop.tv_sec - start.tv_sec ) + (double)( stop.tv_nsec - start.tv_nsec ) / (double)BILLION;
printf( "time in sec \t: %lf\n", total_time_sec );
printf( "time in ms \t: %ld\n", (end_ms - start_ms) );
printf( "time in ns \t: %ld\n", (end_ns - start_ns) );
return EXIT_SUCCESS;
}
Output:
time in sec : 3.000078
time in ms : 3000
time in ns : 76463 // This shows wrong time
A fix:
uint64_t GetTimerCountInNS(void) {
struct timespec currenttime;
clock_gettime(CLOCK_REALTIME, &currenttime);
return UINT64_C(1000000000) * currenttime.tv_sec + currenttime.tv_nsec;
}
In the return, a uint64_t constant is used to promote all other operands of the binary arithmetic operators to uint64_t, in addition to converting seconds to nanoseconds.
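A minimal usage sketch with the fixed function (this main() is only for illustration; PRIu64 from <inttypes.h> prints a uint64_t portably, unlike the %ld used above):
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

uint64_t GetTimerCountInNS(void)
{
    struct timespec currenttime;
    clock_gettime(CLOCK_REALTIME, &currenttime);
    return UINT64_C(1000000000) * currenttime.tv_sec + currenttime.tv_nsec;
}

int main(void)
{
    uint64_t start_ns = GetTimerCountInNS();
    sleep(3);
    uint64_t end_ns = GetTimerCountInNS();
    /* Roughly 3000000000 ns expected for a 3-second sleep. */
    printf("time in ns \t: %" PRIu64 "\n", end_ns - start_ns);
    return 0;
}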
I have to track how long a task executes for. I am working on Linux, but I do not have access to the kernel itself.
My task simply busy-loops until the process has been executing for a certain amount of time. Then the process is supposed to break out of this loop.
I had a somewhat working version that used clock_gettime() from time.h. I stored the time since Epoch right before I busy looped in a "start" variable. Then in each iteration of the loop, I checked the time since Epoch again in another variable called "current".
On each iteration of the loop, I took the difference between "current" and "start". If that difference was greater than or equal to my requested execution time, I broke out of the loop.
The trouble is clock_gettime() does not factor in suspension of a task. So if my task suspends, the way I am doing this now will treat the time a task is suspended as if it were still executing.
Does anyone have an alternative to clock_gettime() that will allow a timer to somehow ignore the suspension time? Code of my current method below.
//DOES NOT HANDLE TASK SUSPENSION
#include <time.h>
#define BILLION 1E9
//Set execution time to 2 seconds
double executionTime = 2;
//Variable used later to compute difference in time
double elapsedTime = -1;
struct timespec start;
struct timespec current;
//Get time before we busy-loop
clock_gettime(CLOCK_REALTIME, &start);
int i;
for (i = 0; i < 10000000000; i++)
{
//Get time on each busy-loop iteration
clock_gettime(CLOCK_REALTIME, &current);
elapsedTime = (current.tv_sec - start.tv_sec) + ((current.tv_nsec - start.tv_nsec) / BILLION);
//If we have been executing for the specified execution time, break.
if (elapsedTime >= executionTime)
{
break;
}
}
Change CLOCK_REALTIME to CLOCK_PROCESS_CPUTIME_ID.
The example below uses sleep(), so it takes several seconds of wall-clock time to accumulate even a small amount of process CPU time:
#include <stdio.h>
#include <unistd.h>
#include <time.h>
#define BILLION 1E9
int main ( void) {
double executionTime = 0.0001;
double elapsedTime = -1;
double elapsedTimertc = -1;
struct timespec startrtc;
struct timespec start;
struct timespec currentrtc;
struct timespec current;
clock_gettime(CLOCK_REALTIME, &startrtc);
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
for (;;)
{
sleep ( 1);
clock_gettime(CLOCK_REALTIME, &currentrtc);
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &current);
elapsedTime = (current.tv_sec - start.tv_sec) + ((current.tv_nsec - start.tv_nsec) / BILLION);
elapsedTimertc = (currentrtc.tv_sec - startrtc.tv_sec) + ((currentrtc.tv_nsec - startrtc.tv_nsec) / BILLION);
if (elapsedTime >= executionTime)
{
break;
}
}
printf ( "elapsed time %f\n", elapsedTime);
printf ( "elapsed time %f\n", elapsedTimertc);
}
I am using clock_gettime() (from time.h) on Linux 2.6 to control timing in my thread loop. I need 500 ms within +/- 5 ms. It seems to give me 500 ms for a while, then starts drifting or jittering to +/- 30 ms.
I am using the CLOCK_REALTIME call with it. Is there any way to improve the deviation I am seeing? I'm simply counting every ms with it, and once the counter hits 500 I fire off an interrupt.
This is also within the Qt 4.3 framework. QTimer seemed even more jittery than this.
Based on the wording of your question, I have a feeling you might be accumulating your time differences incorrectly.
Try this approach:
#include <stdio.h>
#include <time.h>
long elapsed_milli( struct timespec * t1, struct timespec *t2 )
{
return (long)(t2->tv_sec - t1->tv_sec) * 1000L
+ (t2->tv_nsec - t1->tv_nsec) / 1000000L;
}
int main()
{
const long period_milli = 500;
struct timespec ts_last;
struct timespec ts_next;
const struct timespec ts_sleep = { 0, 1000000L };
clock_gettime( CLOCK_REALTIME, &ts_last );
while( 1 )
{
nanosleep( &ts_sleep, NULL );
clock_gettime( CLOCK_REALTIME, &ts_next );
long elapsed = elapsed_milli( &ts_last, &ts_next );
if( elapsed >= period_milli )
{
printf( "Elapsed : %ld\n", elapsed );
ts_last.tv_nsec += period_milli * 1000000L;
if( ts_last.tv_nsec >= 1000000000L )
{
ts_last.tv_nsec -= 1000000000L;
ts_last.tv_sec++;
}
}
}
return 0;
}
Every time the required period has elapsed, the "previous" time is updated to use the expected time at which that period elapsed, rather than the actual time. This example uses a 1ms sleep between each poll, which might be over the top.
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <time.h>
#define BILLION 1000000000L;
int main( int argc, char** argv )
{
struct timespec start, stop;
double accum;
uint32 StartTime, StopTime;
if( StartTime = clock_gettime( CLOCK_REALTIME, &start) == -1 ) {
perror( "clock gettime" );
return EXIT_FAILURE;
}
StartTime = start.tv_sec + 0.0000000001 * start.tv_nsec;
system( argv[1] ); // or it could be any calculation
if( StopTime = clock_gettime( CLOCK_REALTIME, &stop) == -1 ) {
perror( "clock gettime" );
return EXIT_FAILURE;
}
StopTime = stop.tv_sec + 0.0000000001 * stop.tv_nsec;
accum = StopTime - StartTime;
printf( "%lf\n", accum );
return EXIT_SUCCESS;
}
This program calculates the time required to
execute the program specified as its first argument.
The time is printed in seconds, on standard out.
I am calculating the start time and stop time to perform some computation. I am able to get the start and stop times for the computation, but I am not able to find the difference between them, i.e. accum. Could anyone help me with this?
Remove StartTime and StopTime, and declare this function:
double to_double(struct timespec t) {
    /* tv_nsec is in nanoseconds, so scale by 1e-9 (not 1e-10) */
    return t.tv_sec + 1e-9 * t.tv_nsec;
}
You then get your delta time this way:
accum = to_double(stop) - to_double(start);
The problem is that StartTime and StopTime are both defined as integer types. The results of your calculations are not whole numbers, but because the left-hand side of each expression is an integer, the results are truncated to the integer part. Essentially, you are losing the detail provided by the tv_nsec field. (Note also that the scale factor for tv_nsec should be 0.000000001, i.e. 1e-9, not 0.0000000001.)
As suggested in the comments, declare StartTime and StopTime as doubles to fix that.
Change
uint32 StartTime, StopTime;
to
double StartTime, StopTime;
Let's say I have a function (or functions) which takes a long time (wall-clock time) to execute, for example:
#include "stdafx.h"
#include <math.h>
#include <windows.h>
void fun()
{
long sum = 0L;
for (long long i = 1; i < 10000000; i++){
sum += log((double)i);
}
}
double cputimer()
{
FILETIME createTime;
FILETIME exitTime;
FILETIME kernelTime;
FILETIME userTime;
if ( GetProcessTimes( GetCurrentProcess( ),
&createTime, &exitTime, &kernelTime, &userTime ) != -1 )
{
SYSTEMTIME userSystemTime;
if ( FileTimeToSystemTime( &userTime, &userSystemTime ) != -1 )
return (double)userSystemTime.wHour * 3600.0 +
(double)userSystemTime.wMinute * 60.0 +
(double)userSystemTime.wSecond +
(double)userSystemTime.wMilliseconds / 1000.0;
}
}
int _tmain(int argc, _TCHAR* argv[])
{
double start, stop;
start = cputimer();
fun();
stop = cputimer();
printf("Time taken: %f [seconds]\n", stop - start);
return 0;
}
I would like to measure the CPU load of this function and the RAM usage that this function call incurs. Is that possible? How can I do this? I'm interested in both Windows and Linux solutions.
On POSIX you can try using getrusage in a manner similar to how you check the wall time. Not sure about Windows.
There is the GetProcessTimes function on Windows, which can give you the CPU time. Also check the Process Status API (PSAPI) for process memory information.
There is also SIGAR, which is platform independent.
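For the memory side on Windows, a minimal sketch using GetProcessMemoryInfo from PSAPI (assuming you link against Psapi.lib) might look like this:
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main( void )
{
    PROCESS_MEMORY_COUNTERS pmc;
    /* Query the memory counters of the current process. */
    if( GetProcessMemoryInfo( GetCurrentProcess(), &pmc, sizeof( pmc ) ) )
    {
        printf( "Working set:      %zu bytes\n", (size_t)pmc.WorkingSetSize );
        printf( "Peak working set: %zu bytes\n", (size_t)pmc.PeakWorkingSetSize );
    }
    return 0;
}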
On Linux you can try getrusage.
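For instance, a small sketch reading the current process's CPU time and peak resident set size via getrusage(RUSAGE_SELF) might look like this (on Linux, ru_maxrss is reported in kilobytes):
#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>

int main( void )
{
    struct rusage usage;
    if( getrusage( RUSAGE_SELF, &usage ) != 0 )
    {
        perror( "getrusage" );
        return 1;
    }
    /* CPU time actually consumed by this process (user + system). */
    double user_sec = usage.ru_utime.tv_sec + usage.ru_utime.tv_usec / 1e6;
    double sys_sec  = usage.ru_stime.tv_sec + usage.ru_stime.tv_usec / 1e6;
    /* Peak resident set size; on Linux this is reported in kilobytes. */
    long   peak_kb  = usage.ru_maxrss;
    printf( "user CPU: %f s, system CPU: %f s, peak RSS: %ld kB\n",
            user_sec, sys_sec, peak_kb );
    return 0;
}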