I have a function that is called at specific intervals. I need to compare the time of the previous call with the current time, and if the difference between calls reaches 10 milliseconds, execute some piece of code. Sleep should not be used, since other things are executing in parallel. I have written the following code; the function is called every 10 milliseconds, but the difference I am calculating sometimes comes out 1 or 2 milliseconds short. What is the best way to calculate the difference?
void fxn(void)
{
    int logCurTime;
    static int logPrevTime = 0, logDiffTime = 0;

    getCurrentTimeInMilliSec(&logCurTime);
    if (logPrevTime > 0)
        logDiffTime += logCurTime - logPrevTime;
    if (logCurTime <= logPrevTime)
        return;
    if (logDiffTime >= 10)
    {
        ...
        ...
        logDiffTime = 0;
    }
    logPrevTime = logCurTime;
}
For example: fxn is called 10 times at 10-millisecond intervals. On some calls logDiffTime is only 8 or 9, and the next call accounts for the remaining time, i.e. 11 or 12.
Using sleep() to get code executed at specific time intervals is indeed a bad idea. Register your function as the handler for a timer signal; then it will be called very precisely on time.
If you're doing heavy lifting in your function, you should do it in another thread, because you will run into trouble if your function takes too long (it will just be called from the beginning again).
On POSIX (Linux) you could do it like this:
#include <sys/time.h>
#include <stdio.h>
#include <signal.h>

if (signal(SIGALRM, fxn) == SIG_ERR)
    perror("Setting your function as timer handler failed");

unsigned seconds = 42; /* your interval */
struct itimerval old, new_time;
new_time.it_interval.tv_sec = seconds; /* also set the interval, so the timer re-fires periodically */
new_time.it_interval.tv_usec = 0;
new_time.it_value.tv_sec = seconds;    /* first expiry */
new_time.it_value.tv_usec = 0;
if (setitimer(ITIMER_REAL, &new_time, &old) != 0)
    perror("Setting the timer failed");
Or on Windows:

#include <Windows.h>

void CALLBACK Fxn_Timer_Proc_Wrapper(HWND hwnd, UINT msg, UINT_PTR id, DWORD time)
{
    fxn();
}

unsigned seconds = 42; /* your interval */
UINT_PTR timer_id;
if ((timer_id = SetTimer(NULL, 0, seconds * 1000,
                         (TIMERPROC)Fxn_Timer_Proc_Wrapper)) == 0) {
    /* failed to create a timer */
}
/* note: the callback only fires while your thread pumps window messages */
It may not be exactly what you are looking for, however I feel it should be clarified:
The sleep call only suspends the calling thread, not all threads of the process. Thus, you can still run parallel threads while one of them sleeps.
See this question for more:
Do sleep functions sleep all threads or just the one who call it?
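A minimal sketch illustrating the point (compile with -pthread; the thread function name is made up for this example):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *sleeper(void *arg)
{
    (void)arg;
    sleep(3);                      /* suspends only this thread */
    puts("sleeper woke up");
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, sleeper, NULL);
    for (int i = 0; i < 3; i++) {  /* the main thread keeps running */
        puts("main thread still working");
        sleep(1);
    }
    pthread_join(t, NULL);
    return 0;
}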
For a solution to your problem you should register your function with a timer interrupt. See the other answer on how to do that.
10 ms is at the edge of what is achievable; see this Stack Overflow question: 1ms timer. However, several suggestions on how to get 10 ms did come out of it.
timerfd_create allows your program to wait using select.
timer_settime allows your program to request the 10ms interval.
The caveats on Linux are:
May not be scheduled - the OS could be busy doing something else.
May not be accurate - as 10ms appears to be the shortest interval that works, it may be +/- 1 or 2 ms.
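As an illustration, here is a minimal sketch of the timerfd approach (Linux-only; a blocking read() stands in for waiting with select()):

#include <stdint.h>
#include <stdio.h>
#include <sys/timerfd.h>
#include <unistd.h>

int main(void)
{
    /* 10 ms periodic timer on the monotonic clock */
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);
    struct itimerspec its = {
        .it_value    = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 },
        .it_interval = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 }
    };
    if (fd < 0 || timerfd_settime(fd, 0, &its, NULL) != 0) {
        perror("timerfd");
        return 1;
    }
    for (;;) {
        uint64_t expirations; /* blocking read waits for the next tick */
        if (read(fd, &expirations, sizeof expirations) != sizeof expirations)
            break;
        if (expirations > 1)  /* the OS was busy; ticks were missed */
            fprintf(stderr, "missed %llu ticks\n",
                    (unsigned long long)(expirations - 1));
        /* ... 10 ms work here ... */
    }
    close(fd);
    return 0;
}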
I am trying to measure the memory consumed by an algorithm, so I have created a group of functions that stop the execution every 10 milliseconds to let me read the memory using the getrusage() function. The idea is to set a timer that raises an alarm signal to the process, which is received by the handler medir_memoria().
However, the program stops in the middle with this message:
[1] 3267 alarm ./memory_test
The code for reading the memory is:
#include "../include/rastreador_memoria.h"
#if defined(__linux__) || defined(__APPLE__) || (defined(__unix__) && !defined(_WIN32))
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <signal.h>
#include <sys/resource.h>
static long max_data_size;
static long max_stack_size;
void medir_memoria (int sig)
{
struct rusage info_memoria;
if (getrusage(RUSAGE_SELF, &info_memoria) < 0)
{
perror("Not reading memory");
}
max_data_size = (info_memoria.ru_idrss > max_data_size) ? info_memoria.ru_idrss : max_data_size;
max_stack_size = (info_memoria.ru_isrss > max_stack_size) ? info_memoria.ru_isrss : max_stack_size;
signal(SIGALRM, medir_memoria);
}
void rastrear_memoria ()
{
struct itimerval t;
t.it_interval.tv_sec = 0;
t.it_interval.tv_usec = 10;
t.it_value.tv_sec = 0;
t.it_value.tv_usec = 10;
max_data_size = 0;
max_stack_size = 0;
setitimer(ITIMER_REAL, &t,0);
signal(SIGALRM, medir_memoria);
}
void detener_rastreo ()
{
signal(SIGALRM, SIG_DFL);
printf("Data: %ld\nStack: %ld\n", max_data_size, max_stack_size);
}
#else
#endif
My main() function calls all of them in this order:
rastrear_memoria()
Function of the algorithm I am testing
detener_rastreo()
How can I solve this? What does that alarm message mean?
First, setting an itimer to ring every 10 µs is optimistic, since ten microseconds is a really small interval of time. Try 500 µs (or perhaps even 20 milliseconds, i.e. 20000 µs) first, instead of 10 µs.
stop the execution in periods of 10 milliseconds
You have coded for a period of 10 microseconds, not milliseconds!
Then, you should swap those two lines and write:
signal(SIGALRM, medir_memoria);
setitimer(ITIMER_REAL, &t,0);
so that a signal handler is set before the first itimer rings.
I guess your first itimer rings before the signal handler has been installed. Read signal(7) and time(7) carefully. The default handling of SIGALRM is to terminate the process.
BTW, a better way to measure the time used by some function is clock_gettime(2) or clock(3). Thanks to vdso(7) tricks, clock_gettime can read some clocks in less than 50 nanoseconds on my i5-4690S desktop computer.
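For instance, a small sketch of timing a code region with clock_gettime (older glibc may need -lrt):

#include <stdio.h>
#include <time.h>

struct timespec t0, t1;
clock_gettime(CLOCK_MONOTONIC, &t0);
/* ... code being measured ... */
clock_gettime(CLOCK_MONOTONIC, &t1);
double elapsed = (t1.tv_sec - t0.tv_sec)
               + (t1.tv_nsec - t0.tv_nsec) / 1e9;
printf("elapsed: %.9f s\n", elapsed);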
trying to get the memory consumed
You could consider using proc(5), e.g. quickly opening, reading, and closing /proc/self/status or /proc/self/statm etc....
(I guess you are on Linux.)
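A sketch of that approach (the helper name is made up; note that stdio calls are not async-signal-safe, so call this from ordinary code, not from a signal handler):

#include <stdio.h>

/* Hypothetical helper: read the resident-set size (in pages) from
   /proc/self/statm. Not async-signal-safe. */
long resident_pages(void)
{
    long size, resident;
    FILE *f = fopen("/proc/self/statm", "r");
    if (f == NULL)
        return -1;
    if (fscanf(f, "%ld %ld", &size, &resident) != 2)
        resident = -1;
    fclose(f);
    return resident; /* multiply by sysconf(_SC_PAGESIZE) for bytes */
}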
BTW, your measurements may disappoint you: notice that quite often free(3) doesn't release memory to the kernel (through munmap(2)...) but simply marks and manages that zone so it is reusable by a future malloc(3). You might consider mallinfo(3) or malloc_info(3), but notice that they are not async-signal-safe, so they cannot be called from inside a signal handler.
(I tend to believe that your approach is deeply flawed)
I'm trying to write a stopwatch in C (for Windows). The code seems to work, but the time measured with the Sleep function doesn't match real time.
Process returned 0 (0x0) execution time : 1.907 s
Press any key to continue.
The problem is that the execution time is around 2 seconds when it should be just 1 second. I'm wondering what I am doing wrong: since the Sleep function on Windows takes milliseconds as its parameter, it should work. Here is the code:
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

int main()
{
    int milliseconds = 0;
    int seconds = 0;
    int counter;

    for (counter = 0; counter < 1000; counter++) {
        Sleep(1);
        milliseconds = milliseconds + 1;
        printf("%d\n", milliseconds);
        if (milliseconds == 1000) {
            seconds = seconds + 1;
            milliseconds = 0;
            printf("seconds: %d", seconds);
        }
    }
    return 0;
}
You are sleeping with a timeout of 1 ms. In effect you are giving up the rest of the current thread's timeslice, which is by default 15.6 ms. But if a WPF application such as Visual Studio is running, the system timer will be set to 1 ms. Sleep will return no earlier than the time you asked for, so you effectively wait up to two timeslices each iteration, which adds up to 2 s of sleep time.
If you use a profiler like ETWController you can see the thread waits directly.
There you see that we have 1004 context-switch events which waited 1.6 ms on average, not the 1 ms you anticipated. There is a lot more to how the OS scheduler influences how long your sleep takes; the best thing is to measure. See for example SpinWait is dangerous.
When I close all applications which force the 1 ms system timer, I get a 6.5 s sleep duration!
To check the true wait time, I used your code with a high-resolution timer to print the actual duration:
#include "stdafx.h"
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>
#include <chrono>
int main()
{
int milliseconds = 0;
int seconds = 0;
int counter;
auto start = std::chrono::high_resolution_clock::now();
for (counter = 0; counter<1000; counter++) {
Sleep(1);
milliseconds = milliseconds + 1;
//printf("%d\n", milliseconds);
if (milliseconds == 1000) {
seconds = seconds + 1;
milliseconds = 0;
printf("\nseconds: %d", seconds);
}
}
auto stop = std::chrono::high_resolution_clock::now();
auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
printf("Duration: %dms", ms);
return 0;
}
Here is the output:
ClockRes v2.0 - View the system clock resolution
Copyright (C) 2009 Mark Russinovich
SysInternals - www.sysinternals.com
Maximum timer interval: 15.625 ms
Minimum timer interval: 0.500 ms
Current timer interval: 1.000 ms
seconds: 1
Duration: 1713ms
ClockRes v2.0 - View the system clock resolution
Copyright (C) 2009 Mark Russinovich
SysInternals - www.sysinternals.com
Maximum timer interval: 15.625 ms
Minimum timer interval: 0.500 ms
Current timer interval: 15.625 ms
seconds: 1
Duration: 6593ms
You cannot use the Sleep function to write a stopwatch. It is not intended for timing anything. All it does is cause a thread to yield the rest of its time slice to other competing threads, allowing them to execute. There is no guarantee about the precise amount of time that your thread will sleep. Its execution may be pre-empted by a higher-priority thread. As per the documentation:
If dwMilliseconds is less than the resolution of the system clock, the thread may sleep for less than the specified length of time. If dwMilliseconds is greater than one tick but less than two, the wait can be anywhere between one and two ticks, and so on.
It goes on to talk about how to increase the accuracy of the sleep interval, but that's not what you want, either. Creating a timer would be more appropriate for your purposes, e.g. using the SetTimer function. Specify a callback function and you will be notified when the time has elapsed. If you needed an extremely accurate timer, you could use a multimedia timer. Tutorials are available here. Probably unnecessary for a simple stopwatch, though.
To obtain time counts and implement your own timing facility, you could call the GetTickCount function. Or use the high-resolution timer APIs for maximum resolution. QueryPerformanceCounter returns the current timestamp, which you divide by the result of QueryPerformanceFrequency.
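A minimal sketch of timing with the high-resolution counter (here a Sleep call stands in for the work being measured):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq); /* counts per second */
    QueryPerformanceCounter(&start);
    Sleep(1000);                      /* the work being timed */
    QueryPerformanceCounter(&stop);
    double seconds = (double)(stop.QuadPart - start.QuadPart)
                   / (double)freq.QuadPart;
    printf("Elapsed: %.3f s\n", seconds);
    return 0;
}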
Requests to the operating system are processed on a best-effort basis, with no response-time guarantee. Windows is designed and optimized with performance and throughput in mind for general-purpose use, not for real-time tasks. Your process shares the OS with other processes that also want its attention. The OS only guarantees that the suspension will last at least as long as you requested, give or take a clock tick. Add the processing time of the loop itself, and you will get inconsistent results from this code.
I am learning C at the moment, but I cannot find any existing examples of how to run a command every X minutes.
I can see examples of how to time a command, but that isn't what I want.
How can I run a command every X minutes in C?
You cannot do that in standard C99 (that is, using only the functions defined by the language standard).
You can do that on POSIX systems.
Assuming you are targeting a Linux system, read time(7) carefully. Then read about sleep(3), nanosleep(2), clock_gettime(2), getrusage(2), and some other syscalls(2), etc.
The issue is to define what should happen if a command runs for more than X minutes.
Read a book about Advanced Linux Programming or POSIX programming.
BTW, Linux has crontab(5), and all the related utilities are free software, so you could study their source code.
You could ask your calling thread to sleep for a specified number of seconds.
#include <unistd.h>
unsigned int sleep(unsigned int seconds);
This conforms to POSIX.1-2001.
sleep is not part of standard C. As mentioned here:
On UNIX, you should include <unistd.h>.
On MS-Windows, Sleep comes from <windows.h> instead.
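A small sketch of how the difference can be papered over (sleep_seconds is a made-up name for this example):

/* Sketch: a portable one-second-resolution sleep wrapper. */
#ifdef _WIN32
#include <windows.h>
#define sleep_seconds(s) Sleep((s) * 1000) /* Sleep() takes milliseconds */
#else
#include <unistd.h>
#define sleep_seconds(s) sleep(s)          /* sleep() takes seconds */
#endif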
Doing this while allowing other things to happen between calls suggests using a thread.
This is untested, but if you are using Linux it could look something like this (launch a thread, and in the worker function, loop and sleep for 60 seconds between calls to your periodic function):
#include <pthread.h>
#include <stdint.h>
#include <unistd.h>

void *OneMinuteCall(void *param);
void some_func(void);

pthread_t thread0;
volatile int gRunning = 1;

void *OneMinuteCall(void *param)
{
    int delay = (int)(intptr_t)param;
    while (gRunning)
    {
        some_func();  /* periodic function */
        sleep(delay); /* sleep for 1 minute */
    }
    return NULL;
}

void some_func(void)
{
    /* some stuff */
}

int main(void)
{
    int delay = 60; /* (s) */
    pthread_create(&thread0, NULL, OneMinuteCall, (void *)(intptr_t)delay);
    /* do some other stuff */
    /* at some point you must set gRunning = 0 to exit the loop, */
    /* then join the thread: */
    gRunning = 0;
    pthread_join(thread0, NULL);
    return 0;
}
As user3386109 suggested, using some form of clock for the delay, plus sleep to reduce CPU overhead, will work. Example code to demonstrate the basic concept: note that the delay is based on the original reading of the time (lasttime is advanced by the desired delay, not set from the latest reading of the clock). numsec should be set to 60*X to trigger every X minutes.
#include <time.h>
#include <unistd.h>

/* numsec = number of seconds per instance */
#define numsec 3

time_t lasttime, thistime;
int i;

lasttime = time(NULL);
for (i = 0; i < 5; i++) { /* any loop here */
    while (1) {
        thistime = time(NULL);
        if (thistime - lasttime >= numsec)
            break;
        if (numsec - (thistime - lasttime) >= 2)       /* sleep off most of the wait */
            sleep(numsec - (thistime - lasttime) - 1); /* spin only on the last second */
    }
    /* run periodic code here */
    /* ... */
    lasttime += numsec; /* update lasttime */
}
I have a worker thread that gets work from a pipe. Something like this:
void *worker(void *param) {
    while (!work_done) {
        read(g_workfds[0], work, sizeof(work));
        do_work(work);
    }
}
I need to implement a 1-second timer in the same thread to do some book-keeping about the work. The following is what I have in mind:
void *worker(void *param) {
    prev_uptime = get_uptime();
    while (!work_done) {
        // set g_workfds[0] as non-block
        now_uptime = get_uptime();
        if (now_uptime - prev_uptime > 1) {
            do_book_keeping();
            prev_uptime = now_uptime;
        }
        n = poll(g_workfds[0], 1000); // Wait for 1 second else timeout
        if (n == 0) // timed out
            continue;
        read(g_workfds[0], work, sizeof(work));
        do_work(work); // This can take more than 1 second also
    }
}
I am using system uptime instead of system time because the system time can change while this thread is running. I was wondering if there is a better way to do this. I don't want to use another thread, and alarm() is not an option, as it is already used by another thread in the same process. This is being implemented in a Linux environment.
I agree with most of what webbi wrote in his answer, but there is one issue with his suggestion of using time instead of uptime. If the system time is stepped forward, it will work as intended; but if the system time is set back by, say, 30 seconds, then no book-keeping will be done for 30 seconds, as (now_time - prev_time) will be negative (unless an unsigned type is used, in which case it will work anyway).
An alternative would be to use clock_gettime() with CLOCK_MONOTONIC as the clockid (http://linux.die.net/man/2/clock_gettime). A bit messy if you don't need time units smaller than seconds.
Also, adding code to detect a backwards clock jump isn't hard either.
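A minimal sketch of that alternative (the helper name is made up):

#include <time.h>

/* Hypothetical helper: whole seconds on the monotonic clock,
   immune to system date changes. */
static time_t monotonic_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec;
}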
I have found a better way, though it is Linux-specific, using the timerfd_create() system call. It takes care of system time changes. It could look something like this:
#include <poll.h>
#include <stdint.h>
#include <sys/timerfd.h>
#include <unistd.h>

void *worker(void *param) {
    /* CLOCK_MONOTONIC is not affected by system time changes */
    int timerfd = timerfd_create(CLOCK_MONOTONIC, 0);
    struct itimerspec its = {
        .it_value    = { .tv_sec = 1, .tv_nsec = 0 },
        .it_interval = { .tv_sec = 1, .tv_nsec = 0 } /* 1-second periodic timer */
    };
    timerfd_settime(timerfd, 0, &its, NULL); /* timer starts */

    struct pollfd fds[2] = {
        { .fd = g_workfds[0], .events = POLLIN },
        { .fd = timerfd,      .events = POLLIN }
    };
    while (!work_done) {
        poll(fds, 2, -1); /* wait indefinitely on both the pipe and the timerfd */
        if (fds[1].revents & POLLIN) {
            uint64_t expirations;
            read(timerfd, &expirations, sizeof(expirations)); /* clears the tick */
            do_book_keeping();
        }
        if (fds[0].revents & POLLIN) {
            read(g_workfds[0], work, sizeof(work));
            do_work(work); /* this can take more than 1 second also */
        }
    }
    close(timerfd);
    return NULL;
}
It seems cleaner, and read() on the timerfd returns the number of expirations that have occurred, so if do_work() takes a long time you can tell how many ticks were missed, which is quite useful as do_book_keeping() expects to be called every second.
I found some things weird in your code...
poll() takes 3 arguments and you are passing 2: the first argument is the array of pollfd structs, the second is the number of structs you are passing in that array, and the third is the timeout.
Reference: http://linux.die.net/man/2/poll
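For illustration, a corrected call for the pipe descriptor from the question might look like this:

#include <poll.h>

struct pollfd pfd = { .fd = g_workfds[0], .events = POLLIN };
int n = poll(&pfd, 1, 1000); /* 1 descriptor, 1000 ms timeout */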
Besides that, your workaround seems fine to me. It's not the best, of course, but it's fine if you don't want to involve another thread or alarm(), etc.
You could use time instead of uptime; a system date change could cause one bad interval, but after that it will keep working, since the reference is updated each iteration and the loop just keeps waiting 1 second no matter what the time is.
I have a particular function (well, set of functions) that I want to start every 400ms. I'm not much of a C programmer, and so anything outside of the standard libraries is a mystery to me, as well as quite a bit within them.
My first thought is to use nanosleep to pause execution for 400ms in some sort of loop, but this of course doesn't take into account the execution time of the code I will be running. If I could measure it, and if it ran for approximately the same duration over 10 or 20 tests, I could then nanosleep() for the difference. This wouldn't be perfect, of course... but it might be close enough for a first try.
How do I measure the execution time of a C function? Or is there a better way to do this altogether, and what keywords do I need to be googling for?
You should be able to use setitimer:
int setitimer(int which, const struct itimerval *value,
struct itimerval *ovalue);
Just put the code that you want to execute every 400ms inside the SIGALRM handler. This way you don't need to account for the time that your code takes to run, which could potentially vary. I'm not sure what happens if the signal handler doesn't return before the next signal is generated.
An outline of what some of the code might look like is shown below.
void periodic_func(int signal_num)
{
    ...
    signal(SIGALRM, periodic_func); /* re-arm the handler */
}

int main(...)
{
    struct itimerval timerval = {0};
    int done = 0; /* set from elsewhere to stop */

    signal(SIGALRM, periodic_func);
    ...
    timerval.it_interval.tv_usec = 400000;
    timerval.it_value.tv_usec = 400000; /* wait 400ms for first trigger */
    setitimer(ITIMER_REAL, &timerval, NULL);

    while (!done)
        sleep(1);
    return 0;
}
Take a look at gprof. It allows you to quickly recompile your code and generate information on which functions are being called and what is taking up the most time in your program.
I concur with torak about using setitimer(). However, since it's not clear whether the interval is restarted when the SIGALRM handler exits, and you're really not supposed to do much work in a signal handler anyway, it's better to just have the handler set a flag and do the work in the main routine:
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>
#include <sys/time.h>

volatile sig_atomic_t wakeup = 0;

void alarm_handler(int signal_num)
{
    wakeup = 1;
}

int main()
{
    struct itimerval timerval = { 0 };
    struct sigaction sigact = { 0 };
    int finished = 0;

    timerval.it_interval.tv_usec = 400000;
    timerval.it_value.tv_usec = 400000;
    sigact.sa_handler = alarm_handler;
    sigaction(SIGALRM, &sigact, NULL);
    setitimer(ITIMER_REAL, &timerval, NULL);

    while (!finished)
    {
        /* Wait for alarm wakeup */
        while (!wakeup)
            pause();
        wakeup = 0;

        /* Code here... */
        printf("(Wakeup)\n");
    }
    return 0;
}
You could use gettimeofday() or clock_gettime() before and after the functions to time, and then calculate the delta between the two times.
For Linux, you can use gettimeofday. Call gettimeofday at the start of the function, run whatever you have to run, then get the end time and figure out how much longer you have to sleep. Then call usleep for the appropriate number of microseconds.
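A minimal sketch of that, assuming a 400ms period (run_the_functions is a made-up name for the work being timed):

#include <sys/time.h>
#include <unistd.h>

struct timeval start, end;
gettimeofday(&start, NULL);
run_the_functions(); /* hypothetical: the work being timed */
gettimeofday(&end, NULL);
long elapsed_us = (end.tv_sec - start.tv_sec) * 1000000L
                + (end.tv_usec - start.tv_usec);
if (elapsed_us < 400000)
    usleep(400000 - elapsed_us); /* sleep the remainder of the 400ms period */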
Look at POSIX timers. Here is some documentation at HP.
You can do the same things as with setitimer, but you also get timer_getoverrun(), which lets you know if you missed any timer events while your function was running.
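A sketch of that (assumes a SIGALRM handler is already installed, and -lrt on older glibc; the helper name is made up):

#include <signal.h>
#include <time.h>

/* Hypothetical helper: create and start a 400ms periodic POSIX timer. */
static timer_t make_400ms_timer(void)
{
    timer_t timerid;
    struct sigevent sev = {0};
    struct itimerspec its = {0};

    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGALRM;           /* deliver SIGALRM on each expiry */
    timer_create(CLOCK_MONOTONIC, &sev, &timerid);

    its.it_value.tv_nsec = 400000000;    /* first expiry after 400ms */
    its.it_interval.tv_nsec = 400000000; /* then every 400ms */
    timer_settime(timerid, 0, &its, NULL);
    return timerid;
}

/* Later, e.g. during bookkeeping:
   int missed = timer_getoverrun(timerid); // expirations missed so far */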