I am learning C at the moment but I cannot see any existing examples of how I could run a command every X minutes.
I can see examples concerning how to time a command but that isn't what I want.
How can I run a command every X minutes in C?
You cannot do that in standard C99 (that is, using only the functions defined by the language standard).
You can do that on POSIX systems.
Assuming you focus on a Linux system, read time(7) carefully. Then read about sleep(3), nanosleep(2), clock_gettime(2), getrusage(2), and some other syscalls(2), etc.
The issue is to define what should happen if a command is running for more than X minutes.
Read a book about Advanced Linux Programming or POSIX programming.
BTW, Linux has crontab(5) and all the related utilities are free software, so you could study their source code.
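To make this concrete, here is a rough sketch (POSIX assumed; it is not from the answer above) of a loop that runs a command every X minutes using clock_gettime and clock_nanosleep with an absolute deadline, so the command's own run time does not cause drift. run_command() and PERIOD_MINUTES are placeholders.
#define _POSIX_C_SOURCE 200809L  /* for clock_gettime / clock_nanosleep with -std=c11 */
#include <errno.h>
#include <stdio.h>
#include <time.h>

#define PERIOD_MINUTES 5          /* illustrative value for "X minutes" */

static void run_command(void)     /* stand-in for whatever must run periodically */
{
    puts("running the command");
}

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        run_command();
        next.tv_sec += PERIOD_MINUTES * 60;
        /* sleep until the absolute deadline; restart if a signal interrupts us */
        while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL) == EINTR)
            ;
    }
}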
You could ask your calling thread to sleep for a specified number of seconds.
#include <unistd.h>
unsigned int sleep(unsigned int seconds);
This conforms to POSIX.1-2001.
sleep is a non-standard function. As mentioned here:
On UNIX, you should include <unistd.h>.
On MS-Windows, Sleep is instead provided by <windows.h>.
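For instance, a minimal loop built on sleep() (POSIX assumed; run_command is a placeholder, not something from the answer above) could look like this:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        /* run_command();  -- whatever should happen every X minutes */
        puts("tick");
        sleep(5 * 60);   /* wait five minutes between runs */
    }
}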
To do this while allowing other things to happen between calls, use a thread.
This is untested pseudo code, but if you are using Linux it could look something like this: launch a thread and, in the worker function's loop, sleep for 60 seconds between calls to your periodic function.
#include <pthread.h>
#include <stdint.h>
#include <unistd.h>

void *OneMinuteCall(void *param);
void some_func(void);

pthread_t thread0;
volatile int gRunning = 1;

void *OneMinuteCall(void *param)
{
    int delay = (int)(intptr_t)param;
    while (gRunning)
    {
        some_func();    /* periodic function */
        sleep(delay);   /* sleep for 1 minute */
    }
    return NULL;
}

void some_func(void)
{
    /* some stuff */
}

int main(void)
{
    int delay = 60; /* seconds */
    pthread_create(&thread0, NULL, OneMinuteCall, (void *)(intptr_t)delay);
    /* do some other stuff */
    /* at some point you must set gRunning = 0 to exit the loop, */
    /* then join the thread */
    pthread_join(thread0, NULL);
    return 0;
}
As user3386109 suggested, using some form of clock for the delay, plus sleep to reduce CPU overhead, would work. Here is example code to show the basic concept. Note that the delay is based on an original reading of the time (lasttime is updated based on the desired delay, not the last reading of the clock). numsec should be set to 60*X to trigger every X minutes.
/* numsec = number of seconds per instance */
#define numsec 3

    time_t lasttime, thistime;
    int i;

    lasttime = time(NULL);
    for (i = 0; i < 5; i++) {               /* any loop here */
        while (1) {
            thistime = time(NULL);
            if (thistime - lasttime >= numsec)
                break;
            if (numsec - (thistime - lasttime) >= 2)
                sleep(numsec - (thistime - lasttime) - 1);  /* sleep most of the remaining time */
        }
        /* run periodic code here */
        /* ... */
        lasttime += numsec;                 /* update lasttime */
    }
Related
I am trying to measure the memory consumed by an algorithm, so I have created a group of functions that stop the execution every 10 milliseconds to let me read the memory using the getrusage() function. The idea is to set a timer that raises an alarm signal to the process, which is received by a handler, medir_memoria().
However, the program stops in the middle with this message:
[1] 3267 alarm ./memory_test
The code for reading the memory is:
#include "../include/rastreador_memoria.h"
#if defined(__linux__) || defined(__APPLE__) || (defined(__unix__) && !defined(_WIN32))
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <signal.h>
#include <sys/resource.h>
static long max_data_size;
static long max_stack_size;
void medir_memoria (int sig)
{
struct rusage info_memoria;
if (getrusage(RUSAGE_SELF, &info_memoria) < 0)
{
perror("Not reading memory");
}
max_data_size = (info_memoria.ru_idrss > max_data_size) ? info_memoria.ru_idrss : max_data_size;
max_stack_size = (info_memoria.ru_isrss > max_stack_size) ? info_memoria.ru_isrss : max_stack_size;
signal(SIGALRM, medir_memoria);
}
void rastrear_memoria ()
{
struct itimerval t;
t.it_interval.tv_sec = 0;
t.it_interval.tv_usec = 10;
t.it_value.tv_sec = 0;
t.it_value.tv_usec = 10;
max_data_size = 0;
max_stack_size = 0;
setitimer(ITIMER_REAL, &t,0);
signal(SIGALRM, medir_memoria);
}
void detener_rastreo ()
{
signal(SIGALRM, SIG_DFL);
printf("Data: %ld\nStack: %ld\n", max_data_size, max_stack_size);
}
#else
#endif
The main() function calls all of them in this order:
rastrear_memoria()
Function of the algorithm I am testing
detener_rastreo()
How can I solve this? What does that alarm message mean?
First, setting an itimer to ring every 10 µs is optimistic, since ten microseconds is really a small interval of time. Try with 500 µs (or perhaps even 20 milliseconds, i.e. 20000 µs) instead of 10 µs first.
stop the execution in periods of 10 milliseconds
You have coded for a period of 10 microseconds, not milliseconds!
Then, you should exchange the two lines and code:
signal(SIGALRM, medir_memoria);
setitimer(ITIMER_REAL, &t,0);
so that a signal handler is set before the first itimer rings.
I guess your first itimer rings before the signal handler was installed. Read carefully signal(7) and time(7). The default handling of SIGALRM is termination.
BTW, a better way to measure the time used by some function is clock_gettime(2) or clock(3). Thanks to vdso(7) tricks, clock_gettime is able to get some clock in less than 50 nanoseconds on my i5-4690S desktop computer.
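As a hedged illustration of that suggestion (not code from the question), timing a region with clock_gettime(CLOCK_MONOTONIC) looks roughly like this; do_work() is a stand-in for the function being measured:
#define _POSIX_C_SOURCE 199309L   /* exposes clock_gettime under strict -std=c11 */
#include <stdio.h>
#include <time.h>

static void do_work(void)
{
    /* the code being measured */
}

int main(void)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    do_work();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("do_work took %.9f s\n", secs);
    return 0;
}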
trying to get the memory consumed
You could consider using proc(5) e.g. opening, reading, and closing quickly /proc/self/status or /proc/self/statm etc....
(I guess you are on Linux)
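A minimal sketch of that idea (assuming Linux; the VmRSS field name comes from proc(5), and read_vmrss_kb is just an illustrative helper):
#include <stdio.h>
#include <string.h>

static long read_vmrss_kb(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    if (!f)
        return -1;

    char line[256];
    long kb = -1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmRSS:", 6) == 0) {   /* resident set size, in kB */
            sscanf(line + 6, "%ld", &kb);
            break;
        }
    }
    fclose(f);
    return kb;
}

int main(void)
{
    printf("VmRSS: %ld kB\n", read_vmrss_kb());
    return 0;
}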
BTW, your measurements will disappoint you: notice that quite often free(3) doesn't release memory to the kernel (through munmap(2)...) but simply marks & manages that zone to be reusable by a future malloc(3). You might consider mallinfo(3) or malloc_info(3), but notice that it is not async-signal-safe, so it cannot be called from inside a signal handler.
(I tend to believe that your approach is deeply flawed)
I have a function that is called at specific intervals. I need to check the time it was previously called against the current time. If the difference between calls is 10 milliseconds, then execute some piece of code. Sleep should not be used, since other things are executing in parallel. I have written the following code, and the function is called every 10 milliseconds, but the difference I am calculating sometimes comes out 1 or 2 milliseconds short. What is the best way to calculate the difference?
fxn()
{
    int logCurTime;
    static int logPrevTime = 0, logDiffTime = 0;

    getCurrentTimeInMilliSec(&logCurTime);

    if (logPrevTime > 0)
        logDiffTime += logCurTime - logPrevTime;
    if (logCurTime <= logPrevTime)
        return;
    if (logDiffTime >= 10)
    {
        ...
        ...
        logDiffTime = 0;
    }
    logPrevTime = logCurTime;
}
For example:
fxn is called 10 times at 10-millisecond intervals. In some instances logDiffTime is just 8 or 9, and in the next instance it accounts for the remaining time, i.e., 11 or 12.
Using sleep() to get code executed in specific time intervals is indeed a bad idea. Register your function as the handler for a timer interrupt. Then it will be called very precisely on time.
If you're doing heavy lifting in your function, you should do it in another thread, because you will run into trouble when your function takes too long (it will just be called from the beginning again).
In POSIX (Linux) you could do it like this:
#include <sys/time.h>
#include <stdio.h>
#include <signal.h>

if (signal(SIGALRM, fxn) == SIG_ERR)
    perror("Setting your function as timer handler failed");

unsigned seconds = 42; // your time
struct itimerval old, new_time;
new_time.it_interval.tv_usec = 0;
new_time.it_interval.tv_sec = 0;
new_time.it_value.tv_usec = 0;
new_time.it_value.tv_sec = (long int) seconds;
if (setitimer(ITIMER_REAL, &new_time, &old) != 0)
    perror("Setting the timer failed");
or in Windows:
#include <Windows.h>

void CALLBACK Fxn_Timer_Proc_Wrapper(HWND hwnd, UINT msg, UINT_PTR id, DWORD time)
{
    fxn();
}

unsigned seconds = 42; // your time
UINT_PTR timer_id;
if ((timer_id = SetTimer(NULL, 0, seconds * 1000, (TIMERPROC) Fxn_Timer_Proc_Wrapper)) == 0) {
    // failed to create a timer
}
It may not be exactly what you are looking for, however I feel it should be clarified:
The sleep call only suspends the calling thread, not all threads of the process. Thus, you can still run parallel threads while one of them sleeps.
See this question for more:
Do sleep functions sleep all threads or just the one who call it?
For a solution to your problem you should register your function with a timer interrupt. See the other answer on how to do that.
10 ms is at the edge of what is achievable; see Stack Overflow: 1ms timer. However, several suggestions on how to get 10 ms did come out.
timerfd_create allows your program to wait using select (see the sketch after the caveats below).
timer_settime allows your program to request the 10ms interval.
The caveats on Linux are:
May not be scheduled - the OS could be busy doing something else.
May not be accurate - as 10ms appears to be the shortest interval that works, it may be +/- 1 or 2 ms.
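Here is a rough sketch of the timerfd_create route mentioned above (Linux-specific; the sketch simply read()s the timer descriptor, but the same fd could equally be handed to select or poll; the 10 ms period and loop count are illustrative):
#include <stdint.h>
#include <stdio.h>
#include <sys/timerfd.h>
#include <unistd.h>

int main(void)
{
    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    if (tfd == -1) { perror("timerfd_create"); return 1; }

    struct itimerspec its = {
        .it_value    = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 },  /* first expiry: 10 ms */
        .it_interval = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 }   /* then every 10 ms */
    };
    if (timerfd_settime(tfd, 0, &its, NULL) == -1) { perror("timerfd_settime"); return 1; }

    for (int i = 0; i < 100; i++) {
        uint64_t expirations;
        /* read() blocks until the timer expires; the value counts missed/coalesced ticks */
        if (read(tfd, &expirations, sizeof expirations) != sizeof expirations) {
            perror("read");
            break;
        }
        /* periodic work here */
    }
    close(tfd);
    return 0;
}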
I am creating a process with 2 children. One child is responsible for reading questions line by line from a file, printing each question, and reading the answer; the other is responsible for measuring the elapsed time and notifying the user every minute about the remaining time. My problem is that I couldn't find any useful example of how to make this kind of timing function work. Here is what I have tried so far. The problem is that it outputs the same elapsed time every time and never gets out of the loop.
#include <time.h>

#define T 600000

int main(){
    clock_t start, end;
    double elapsed;

    start = clock();
    end = start + T;
    while (clock() < end){
        elapsed = (double) (end - clock()) / CLOCKS_PER_SEC;
        printf("you have %f seconds left\n", elapsed);
        sleep(60);
    }
    return 0;
}
As I commented, you should read the time(7) man page.
Notice that clock(3) measure processor time, not real time.
I suggest using clock_gettime(2) with CLOCK_REALTIME (or perhaps CLOCK_MONOTONIC). See also localtime(3) and strftime(3).
Also timer_create(2), the Linux specific timerfd_create(2) and poll(2) etc... Read also Advanced Linux Programming.
If you dare use signals, read carefully signal(7). But timerfd_create and poll should probably be enough for you.
Here is something simple that seems to work:
#include <stdio.h>
#include <unistd.h>

#define TIME 300   /* total time in seconds */

int main()
{
    int i;
    for (i = TIME; i > 0; i -= 60)
    {
        printf("You have [%d] minutes left.\n", i / 60);
        sleep(60);
    }
    return 0;
}
Give it a try.
I have a particular function (well, set of functions) that I want to start every 400ms. I'm not much of a C programmer, and so anything outside of the standard libraries is a mystery to me, as well as quite a bit within them.
My first thought is to use nanosleep to pause execution for 400ms in some sort of loop, but this of course doesn't take into account the execution time of the code I will be running. If I could measure it, and if it seemed fairly certain that it ran for the same approximate duration after 10 or 20 tests, I could then nanosleep() for the difference. This wouldn't be perfect, of course... but it might be close enough for a first try.
How do I measure the execution time of a C function? Or is there a better way to do this altogether, and what keywords do I need to be googling for?
You should be able to use setitimer:
int setitimer(int which, const struct itimerval *value,
struct itimerval *ovalue);
Just put the code that you want to execute every 400ms inside the SIGALRM handler. This way you don't need to account for the time that your code takes to run, which could potentially vary. I'm not sure what happens if the signal handler doesn't return before the next signal is generated.
An outline of what some of the code might look like is shown below.
void periodic_func(int signal_num)
{
    ...
    signal(SIGALRM, periodic_func);
}

int main(...)
{
    struct itimerval timerval = {0};

    signal(SIGALRM, periodic_func);
    ...
    timerval.it_interval.tv_usec = 400000;
    timerval.it_value.tv_usec = 400000;   // Wait 400ms for first trigger
    setitimer(ITIMER_REAL, &timerval, NULL);

    while (!done)   // 'done' is a flag your program sets when it should stop
        sleep(1);

    return 0;
}
Take a look at gprof. It allows you to quickly recompile your code and generate information on which functions are being called and what is taking up the most time in your program.
I concur with torak about using setitimer(). However, since it's not clear if the interval is restarted when the SIGALRM handler exits, and you're really not supposed to do much work in a signal handler anyway, it's better to just have it set a flag, and do the work in the main routine:
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>
#include <sys/time.h>
volatile sig_atomic_t wakeup = 0;
void alarm_handler(int signal_num)
{
    wakeup = 1;
}

int main()
{
    struct itimerval timerval = { 0 };
    struct sigaction sigact = { 0 };
    int finished = 0;

    timerval.it_interval.tv_usec = 400000;
    timerval.it_value.tv_usec = 400000;
    sigact.sa_handler = alarm_handler;

    sigaction(SIGALRM, &sigact, NULL);
    setitimer(ITIMER_REAL, &timerval, NULL);

    while (!finished)
    {
        /* Wait for alarm wakeup */
        while (!wakeup)
            pause();
        wakeup = 0;

        /* Code here... */
        printf("(Wakeup)\n");
    }
    return 0;
}
You could use gettimeofday() or clock_gettime() before and after the functions to time, and then calculate the delta between the two times.
For Linux, you can use gettimeofday. Call gettimeofday at the start of the function. Run whatever you have to run. Then get the end time and figure out how much longer you have to sleep. Then call usleep for the appropriate number of microseconds.
Look at POSIX timers. Here is some documentation at HP.
You can do the same functions as with setitimer, but you also have timer_getoverrun() to let you know if you missed any timer events during your function.
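A hedged sketch of that POSIX-timer approach (timer_create delivering SIGALRM every 400 ms, with timer_getoverrun reporting missed ticks; names like on_tick are illustrative, and older glibc may need linking with -lrt):
#define _POSIX_C_SOURCE 199309L
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks;

static void on_tick(int sig) { (void)sig; ticks++; }

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_tick;
    sigaction(SIGALRM, &sa, NULL);

    timer_t tid;
    struct sigevent sev;
    memset(&sev, 0, sizeof sev);
    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGALRM;
    timer_create(CLOCK_MONOTONIC, &sev, &tid);

    struct itimerspec its = {
        .it_value    = { .tv_sec = 0, .tv_nsec = 400000000L },   /* first expiry after 400 ms */
        .it_interval = { .tv_sec = 0, .tv_nsec = 400000000L }    /* then every 400 ms */
    };
    timer_settime(tid, 0, &its, NULL);

    for (int i = 0; i < 10; i++) {
        pause();   /* wait for the next SIGALRM */
        printf("tick %d, overruns since last: %d\n", (int)ticks, timer_getoverrun(tid));
    }
    return 0;
}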
I don't know exactly how to word a search for this.. so I haven't had any luck finding anything.. :S
I need to implement a time delay in C.
for example I want to do some stuff, then wait say 1 minute, then continue on doing stuff.
Did that make sense? Can anyone help me out?
In standard C (C99), you can use time() to do this, something like:
#include <time.h>
:
void waitFor (unsigned int secs) {
unsigned int retTime = time(0) + secs; // Get finishing time.
while (time(0) < retTime); // Loop until it arrives.
}
By the way, this assumes time() returns a 1-second resolution value. I don't think that's mandated by the standard so you may have to adjust for it.
In order to clarify, this is the only way I'm aware of to do this with ISO C99 (and the question is tagged with nothing more than "C" which usually means portable solutions are desirable although, of course, vendor-specific solutions may still be given).
By all means, if you're on a platform that provides a more efficient way, use it. As several comments have indicated, there may be specific problems with a tight loop like this, with regard to CPU usage and battery life.
Any decent time-slicing OS would be able to drop the dynamic priority of a task that continuously uses its full time slice but the battery power may be more problematic.
However C specifies nothing about the OS details in a hosted environment, and this answer is for ISO C and ISO C alone (so no use of sleep, select, Win32 API calls or anything like that).
And keep in mind that POSIX sleep can be interrupted by signals. If you are going to go down that path, you need to do something like:
int finishing = 0;   // set finishing in signal handler
                     // if you want to really stop.

void sleepWrapper (unsigned int secs) {
    unsigned int left = secs;

    while ((left > 0) && (!finishing))   // Don't continue if signal has
        left = sleep (left);             //   indicated exit needed.
}
Here is how you can do it on most desktop systems:
#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif
void wait( int seconds )
{   // Pretty crossplatform, both ALL POSIX compliant systems AND Windows
#ifdef _WIN32
    Sleep( 1000 * seconds );
#else
    sleep( seconds );
#endif
}

int
main( int argc, char **argv)
{
    int running = 3;

    while( running )
    {   // do something
        --running;
        wait( 3 );
    }
    return 0; // OK
}
Here is how you can do it on a microcomputer / processor w/o timer:
int wait_loop0 = 10000;
int wait_loop1 = 6000;
// for a microprocessor without a timer; if it has a timer, refer to vendor documentation and use it instead.

void
wait( int seconds )
{   // this function needs to be finetuned for the specific microprocessor
    int i, j, k;

    for(i = 0; i < seconds; i++)
    {
        for(j = 0; j < wait_loop0; j++)
        {
            for(k = 0; k < wait_loop1; k++)
            {   // waste function; volatile makes sure it is not being optimized out by the compiler
                int volatile t = 120 * j * i + k;
                t = t + 5;
            }
        }
    }
}

int
main( int argc, char **argv)
{
    int running = 3;

    while( running )
    {   // do something
        --running;
        wait( 3 );
    }
    return 0; // OK
}
The wait_loop variables must be fine-tuned; those values worked fairly well on my computer, but frequency scaling makes this very imprecise on a modern desktop system. So don't use it there unless you're running bare metal and not doing that kind of thing.
Check sleep(3) man page or MSDN for Sleep
Although many implementations have the time function return the current time in seconds, there is no guarantee that every implementation will do so (e.g. some may return milliseconds rather than seconds). As such, a more portable solution is to use the difftime function.
difftime is guaranteed by the C standard to return the difference in time in seconds between two time_t values. As such we can write a portable time delay function which will run on all compliant implementations of the C standard.
#include <time.h>

void delay(double dly){
    /* save start time */
    const time_t start = time(NULL);

    time_t current;
    do{
        /* get current time */
        time(&current);
        /* break loop when the requested number of seconds have elapsed */
    }while(difftime(current, start) < dly);
}
One caveat with the time and difftime functions is that the C standard never specifies a granularity. Most implementations have a granularity of one second. While this is all right for delays lasting several seconds, our delay function may wait too long for delays lasting under one second.
There is a portable standard C alternative: the clock function.
The clock function returns the implementation’s best approximation to the processor time used by the program since the beginning of an implementation-defined era related only to the program invocation. To determine the time in seconds, the value returned by the clock function should be divided by the value of the macro CLOCKS_PER_SEC.
The clock function solution is quite similar to our time function solution:
#include <time.h>

void delay(double dly){
    /* save start clock tick */
    const clock_t start = clock();

    clock_t current;
    do{
        /* get current clock tick */
        current = clock();
        /* break loop when the requested number of seconds have elapsed */
    }while((double)(current - start) / CLOCKS_PER_SEC < dly);
}
There is a caveat in this case similar to that of time and difftime: the granularity of the clock function is left to the implementation. For example, machines with 32-bit values for clock_t with a resolution in microseconds may end up wrapping the value returned by clock after 2147 seconds (about 36 minutes).
As such, consider using the time and difftime implementation of the delay function for delays lasting at least one second, and the clock implementation for delays lasting under one second.
A final word of caution: clock returns processor time rather than calendar time; clock may not correspond with the actual elapsed time (e.g. if the process sleeps).
For delays as large as one minute, sleep() is a nice choice.
If someday, you want to pause on delays smaller than one second, you may want to consider poll() with a timeout.
Both are POSIX.
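A minimal sketch of the poll()-based delay (POSIX; delay_ms is just an illustrative name): with zero file descriptors, poll simply waits for the timeout to expire.
#include <poll.h>
#include <stdio.h>

static void delay_ms(int ms)
{
    poll(NULL, 0, ms);   /* no fds to watch, so this just sleeps for ms milliseconds */
}

int main(void)
{
    puts("waiting 250 ms...");
    delay_ms(250);
    puts("done");
    return 0;
}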
There are no sleep() functions in the pre-C11 C Standard Library, but POSIX does provide a few options.
The POSIX function sleep() (unistd.h) takes an unsigned int argument for the number of seconds desired to sleep. Although this is not a Standard Library function, it is widely available, and glibc appears to support it even when compiling with stricter settings like --std=c11.
The POSIX function nanosleep() (time.h) takes two pointers to timespec structures as arguments, and provides finer control over the sleep duration. The first argument specifies the delay duration. If the second argument is not a null pointer, it holds the time remaining if the call is interrupted by a signal handler.
Programs that use the nanosleep() function may need to include a feature test macro in order to compile. The following code sample will not compile on my linux system without a feature test macro when I use a typical compiler invocation of gcc -std=c11 -Wall -Wextra -Wpedantic.
POSIX once had a usleep() function (unistd.h) that took a useconds_t argument to specify sleep duration in microseconds. This function also required a feature test macro when used with strict compiler settings. Alas, usleep() was made obsolete with POSIX.1-2001 and should no longer be used. It is recommended that nanosleep() be used now instead of usleep().
#define _POSIX_C_SOURCE 199309L // feature test macro for nanosleep()
#include <stdio.h>
#include <unistd.h> // for sleep()
#include <time.h> // for nanosleep()
int main(void)
{
    // use unsigned sleep(unsigned seconds)
    puts("Wait 5 sec...");
    sleep(5);

    // use int nanosleep(const struct timespec *req, struct timespec *rem);
    puts("Wait 2.5 sec...");
    struct timespec ts = { .tv_sec = 2,      // seconds to wait
                           .tv_nsec = 5e8 }; // additional nanoseconds
    nanosleep(&ts, NULL);

    puts("Bye");
    return 0;
}
Addendum:
C11 does have the header threads.h providing thrd_sleep(), which works identically to nanosleep(). GCC did not support threads.h until 2018, with the release of glibc 2.28. It has been difficult in general to find implementations with support for threads.h (Clang did not support it for a long time, but I'm not sure about the current state of affairs there). You will have to use this option with care.
Try sleep(int number_of_seconds)
sleep(int) works as a good delay. For a minute:
//Doing some stuff...
sleep(60); //Freeze for A minute
//Continue doing stuff...
Are you looking for a timer?
For WIN32 try http://msdn.microsoft.com/en-us/library/ms687012%28VS.85%29.aspx
You can simply call the delay() function. So if you want to delay the process for 3 seconds, call delay(3000)...
If you are certain you want to wait and never get interrupted, then use sleep in POSIX or Sleep in Windows. In POSIX, sleep takes the time in seconds, so if you want the time to be shorter there are varieties like usleep(), which uses microseconds. Sleep in Windows takes milliseconds; it is rare that you need finer granularity than that.
It may be that you wish to wait a period of time but want to allow interrupts, maybe in the case of an emergency. sleep can be interrupted by signals but there is a better way of doing it in this case.
Therefore, what I have actually found works in practice is to wait for an event or a condition variable with a timeout.
In Windows your call is WaitForSingleObject. In POSIX it is pthread_cond_timedwait.
In Windows you can also use WaitForSingleObjectEx and then you can actually "interrupt" your thread with any queued task by calling QueueUserAPC. WaitForSingleObject(Ex) will return a code determining why it exited, so you will know when it returns a "TIMEDOUT" status that it did indeed timeout. You set the Event it is waiting for when you want it to terminate.
With pthread_cond_timedwait you can signal broadcast the condition variable. (If several threads are waiting on the same one, you will need to broadcast to wake them all up). Each time it loops it should check the condition. Your thread can get the current time and see if it has passed or it can look to see if some condition has been met to determine what to do. If you have some kind of queue you can check it. (Your thread will automatically have a mutex locked that it used to wait on the condition variable, so when it checks the condition it has sole access to it).
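As a hedged sketch of the pthread_cond_timedwait idea (POSIX threads; compile with -pthread; the names lock, cond, stop_requested, and wait_or_stop are illustrative, not from the answer above):
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool stop_requested = false;   /* another thread sets this, then broadcasts cond */

/* Returns true if woken early by a stop request, false if the timeout expired. */
static bool wait_or_stop(unsigned seconds)
{
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);   /* timedwait compares against CLOCK_REALTIME */
    deadline.tv_sec += seconds;

    pthread_mutex_lock(&lock);
    while (!stop_requested) {
        if (pthread_cond_timedwait(&cond, &lock, &deadline) == ETIMEDOUT)
            break;
    }
    bool stopped = stop_requested;
    pthread_mutex_unlock(&lock);
    return stopped;
}

int main(void)
{
    printf("stopped early: %d\n", wait_or_stop(2));   /* nobody signals here, so this times out */
    return 0;
}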
// Provides ANSI C method of delaying x milliseconds
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void delayMillis(unsigned long ms) {
    clock_t start_ticks = clock();
    unsigned long millis_ticks = CLOCKS_PER_SEC / 1000;
    while (clock() - start_ticks < ms * millis_ticks) {
        /* busy-wait until the requested number of clock ticks has elapsed */
    }
}

/*
 * Example output:
 *
 * CLOCKS_PER_SEC:[1000000]
 *
 * Test Delay of 800 ms....
 *
 * start[2054], end[802058],
 * elapsedSec:[0.802058]
 */
void testDelayMillis() {
    printf("CLOCKS_PER_SEC:[%lu]\n\n", (unsigned long)CLOCKS_PER_SEC);

    clock_t start_t, end_t;
    start_t = clock();
    printf("Test Delay of 800 ms....\n");
    delayMillis(800);
    end_t = clock();

    double elapsedSec = end_t / (double)CLOCKS_PER_SEC;
    printf("\nstart[%lu], end[%lu], \nelapsedSec:[%f]\n",
           (unsigned long)start_t, (unsigned long)end_t, elapsedSec);
}

int main() {
    testDelayMillis();
    return 0;
}
C11 has a function specifically for this:
#include <threads.h>
#include <time.h>
#include <stdio.h>
void sleep(time_t seconds) {
struct timespec time;
time.tv_sec = seconds;
time.tv_nsec = 0;
while (thrd_sleep(&time, &time)) {}
}
int main() {
puts("Sleeping for 5 seconds...");
sleep(5);
puts("Done!");
return 0;
}
Note that this is only available starting in glibc 2.28.
For C with GCC on Windows:
#include <windows.h>
then use Sleep(); // Sleep() with a capital S, not sleep() with a lowercase s.
// Sleep(1000) is 1 second (the argument is in milliseconds).
Clang (on POSIX systems) supports sleep(); sleep(1) gives a 1-second delay/wait.
For short delays (say, some microseconds) on Linux OS, you can use "usleep":
// C Program to perform short delays
#include <unistd.h>
#include <stdio.h>

int main(){
    printf("Hello!\n");
    usleep(1000000); // For a 1-second delay
    printf("Bye!\n");
    return 0;
}
system("timeout /t 60"); // waits 60s. this is only for windows vista,7,8
system("ping -n 60 127.0.0.1 >nul"); // waits 60s. for all windows
Write this code:
#include <stdio.h>

void delay(int x)
{
    // crude busy-wait; volatile keeps the compiler from optimizing the loops away
    volatile int i = 0, j = 0;
    for (i = 0; i < x; i++) { for (j = 0; j < 200000; j++) {} }
}

int main()
{
    while (1) {
        delay(500);
        printf("Host name");
        printf("\n");
    }
}