I am writing a Gif animator in C.
I have two threads running in parallel. The first allows the user to alter the speed of the animation. The second draws the current frame and then calls Sleep(Constant * 100 / CurrentSpeed), where CurrentSpeed is a percentage value ranging from 1 to 200.
The problem is that if you quickly change the speed from 100% to 1% and then back to 100%, the second thread will execute the following:
Sleep(Constant * 100)
This will draw frame A, wait many seconds (even though the speed has already been changed by the user), and only then draw B and the following frames at the default speed.
It seems to me that Sleep was a poor choice on my part here. What can I do to solve this problem?
EDIT:
The code I currently have (Simplified):
while (1) {
    InvalidateRect(Handle, &ImageRect, FALSE);
    if (shouldDispose) {
        break;
    }
    if (DelayTime)
        Sleep(DelayTime * 100 / CurrentSpeed);
    SelectNextImage();
}
Instead of calling Sleep() with the desired frame rate, why don't you call it with a constant interval of 1 ms, for example, and use a variable as a counter?
For example, let C be a global variable (counter) which is loaded with a number of 'ticks' of 1ms. Then, write the loop:
while (1) {                  // Main loop of the player thread
    if (C > 0) C--;
    if (C == 0) nextframe(); // if the counter reaches 0, load the next frame
    Sleep(1);
}
The control thread would load C with the desired number of 1 ms ticks (i.e. the frame delay), and the player thread will never be stopped for more than 1 ms. The use of 1 ms as the base rate is arbitrary: use the smallest interval that still allows your maximum frame rate, in order to load the CPU as little as possible.
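For completeness, the control-thread side might look like this (a sketch, not from the original answer; on_speed_change is a made-up name, and C should at least be volatile, or better, protected, since two threads touch it). nextframe() would also reload C for the following frame:

volatile long C;                           /* shared 1 ms tick counter */

void on_speed_change(long delayTime, long currentSpeed)   /* hypothetical helper */
{
    C = delayTime * 100 / currentSpeed;    /* ticks until the next frame at the new speed */
}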
EDIT
After some heated comments (arguing is good, after all), I'd like to point out that this solution is sub-optimal: it doesn't use any OS mechanism for signaling threads, or any other API that would prevent the thread from wasting CPU time. The solution shown here is generic and may be used on any system (even on embedded systems without a running OS). But above all, it is based on the original code posted by the asker: given Sleep(), how can I achieve my purpose? This is my humble answer. Anyway, I encourage other people to write sample code using the appropriate API to achieve the same goal. With no hard feelings, special thanks to Martin James.
Find a synchronization API on your OS that allows a wait with a timeout, e.g. WaitForSingleObject() on Windows. If you want to change the delay, change the timeout and signal the event upon which the WFSO is waiting to make it return 'early' and restart the wait with the new timeout.
Polling with Sleep(1) loops is rarely justifiable.
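A minimal sketch of that approach for the animation loop in the question, assuming an auto-reset event named hSpeedChanged that the UI thread signals after updating CurrentSpeed (the event name is illustrative, not from the original code):

/* Created once and shared between the UI thread and the player thread. */
HANDLE hSpeedChanged = CreateEvent(NULL, FALSE, FALSE, NULL);   /* auto-reset, initially unsignalled */

/* Player thread */
while (!shouldDispose) {
    InvalidateRect(Handle, &ImageRect, FALSE);
    DWORD result;
    do {
        DWORD delay = DelayTime * 100 / CurrentSpeed;           /* recomputed with the latest speed */
        result = WaitForSingleObject(hSpeedChanged, delay);     /* returns early if the event is signalled */
    } while (result == WAIT_OBJECT_0 && !shouldDispose);        /* speed changed: restart the wait */
    SelectNextImage();
}

/* UI thread, after updating CurrentSpeed: */
SetEvent(hSpeedChanged);

This is simplified: restarting the wait from scratch slightly stretches the current frame if the speed is changed repeatedly, but the thread never sleeps longer than one (new) frame delay.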
Create a waitable timer. When you set the timer, you can specify a callback function that will run in the setting thread's context. This means you can do it with two threads, but it actually works just fine with only a single thread as well.
The main advantage of a waitable timer is, however, that it is more accurate and more reliable than Sleep. A timer is conceptually quite different from Sleep: Sleep only gives up control, and the scheduler merely marks the thread as ready to run once the time is up, the next time the scheduler happens to run anyway. It does nothing beyond that, which means that the thread will eventually be scheduled to run again, like any other thread that is ready.
A thread that is waiting on a timer (or other waitable object) causes the scheduler to run when the timer is up and has its priority temporarily boosted. It therefore runs not only more reliably and more closely to the desired time, but also earlier than all other threads with the same base priority. Which does not give a realtime guarantee but at least gives a sort of "soft guarantee".
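A minimal sketch of a waitable timer used for the frame delay (relative due times are negative and expressed in 100-nanosecond units; error handling omitted, and DelayTime/CurrentSpeed are the variables from the question):

HANDLE hTimer = CreateWaitableTimer(NULL, FALSE, NULL);   /* auto-reset timer */

LARGE_INTEGER due;
due.QuadPart = -(LONGLONG)(DelayTime * 100 / CurrentSpeed) * 10000;  /* ms -> 100 ns, relative */
SetWaitableTimer(hTimer, &due, 0, NULL, NULL, FALSE);

WaitForSingleObject(hTimer, INFINITE);   /* wakes with a priority boost when the timer fires */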
If you still want to use Sleep, use SleepEx instead, which you can alert either by queueing an APC or by calling the undocumented NtAlertThread function.
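For the APC route, a sketch (the APC routine wake_up and the hPlayerThread handle are made-up names; SleepEx returns WAIT_IO_COMPLETION when it is interrupted by an APC):

VOID CALLBACK wake_up(ULONG_PTR param) { (void)param; }   /* empty APC: its only job is to interrupt SleepEx */

/* Player thread: alertable sleep. */
if (SleepEx(DelayTime * 100 / CurrentSpeed, TRUE) == WAIT_IO_COMPLETION) {
    /* woken early because the speed changed: recompute the delay */
}

/* Control thread: interrupt the sleeping player thread. */
QueueUserAPC(wake_up, hPlayerThread, 0);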
In any case, Sleep is troublesome not only because it is unreliable, but also because it is based on the granularity of the system-wide timer. You can, of course, set that to as low as 1 ms (or less on some systems), but that will cause a lot of unnecessary interrupts.
Related
I am trying to develop a piece of Contiki code in which I need to wait three seconds for a transducer output. Although this may sound quite un-transducer-like, at development time I want to simulate the behavior at human-readable speeds, and hence I need to set the timer to, say, 3 seconds.
The contiki timer library is pretty well documented and has a good series of examples which mention the creation, setting and resetting of the timer. However, if I have code like the following:
timer_set(&transducerOutputWaitTimer, 3 * CLOCK_SECOND);
bool if_blk_executed = false;
if (timer_expired(&transducerOutputWaitTimer)) {
    if_blk_executed = true;
    //do something
}
if (if_blk_executed)
    printf("Sunrise");
else
    printf("It has not dawned yet");
Now the expiry is not immediately triggered after the timer is set. So the if block is never executed. Effectively it will never dawn.
Now there are two ways in which I can get the system to wait. One, by adding a while loop on the timer like this:
while (!timer_expired(&transducerOutputWaitTimer)) {}
//do something
or
cpu_delay_usecs(mytimerdur_in_secs * 1000000);
//do something
I do not see either approach as elegant. One wastes CPU cycles, while the other enforces an unnecessarily large computation.
Is there any better way? I know that clock
Attached to this question are the following two:
How can I trigger one Contiki protothread from another? If I can do this, I can get the interrupt that causes the transducerOutput to invoke a process from which I can trigger the etimer and process events.
What exactly does the CPU delay mean? Does this mean that the CPU's clock cycles are held back for the given duration? If yes, how are other processes currently running in the system affected?
Updates
Update 1: the while method did not work. The code possibly went into an infinite loop.
Update 2: I tried with a clock_delay(3*CLOCK_SECOND) approach after setting my timer. It worked. However, in this case, why do I need the timer method at all?
Update 3 (This changes the context of the answer, hence adding this comment as per the suggestion)
My timer needs to be used outside a process, in a different void function. In that case, I need to use the timer library rather than the etimer (which is specific to a process). So how do I get my method to wait for the desired time in such cases?
There are several Contiki abstractions built on top of the timer library - there is usually no need to use the timer structures directly. Instead, there are event timers (struct etimer), which play well together with Contiki processes, and callback timers (struct ctimer), both of which internally use struct timer. For a less RAM-hungry option with a more limited API there are second timers (struct stimer). Finally, there is a single real-time timer in the system (struct rtimer).
An example event timer usage: set and wait for 3 seconds:
static struct etimer timer;
etimer_set(&timer, 3 * CLOCK_SECOND);
PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));
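In context, a complete process might look roughly like this (a sketch; the process name and description string are made up). Note that the etimer has to be static because protothread local variables are not preserved across yields:

#include "contiki.h"
#include <stdio.h>

PROCESS(transducer_wait_process, "Wait for transducer output");
AUTOSTART_PROCESSES(&transducer_wait_process);

PROCESS_THREAD(transducer_wait_process, ev, data)
{
  static struct etimer timer;          /* static: preserved across PROCESS_WAIT_* yields */
  PROCESS_BEGIN();

  etimer_set(&timer, 3 * CLOCK_SECOND);                 /* arm a 3 second timer */
  PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));     /* yield until it fires */
  printf("Sunrise\n");

  PROCESS_END();
}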
How can I trigger one Contiki protothread from another?
There is process_poll - see the process API documentation for more examples.
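For instance (a sketch, reusing the hypothetical process above): an interrupt handler or another protothread polls the process, and the polled process waits for the poll event:

/* From an ISR or another protothread: schedule the waiting process to run. */
process_poll(&transducer_wait_process);

/* Inside the waiting process: */
PROCESS_WAIT_EVENT_UNTIL(ev == PROCESS_EVENT_POLL);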
What exactly does the CPU delay mean?
It's a form of busy waiting. Don't use the delay functions or other forms of busy waiting for delays longer than microseconds - it is not energy efficient, it does not allow other processes to run, and in the worst case the watchdog might expire.
I am writing a C program on Linux. I have a main thread which continuously updates the values of two variables, and another thread writes those variable values to a file every 20 milliseconds. I have used usleep to achieve this time interval. Sample code is below.
int main()
{
    .
    .
    .
    .
    .
    pthread_create(...write_file..); /* started another thread by passing the function write_file */
    while (variable1)
    {
        /* update the values of the variables */
    }
    return 0;
}

void *write_file(void *arg)
{
    .
    .
    .
    .
    fp = fopen("sample.txt", "a");
    while (variable2)
    {
        fprintf(fp, " %d \n", somevariable);
        usleep(20 * 1000);
    }
    fclose(fp);
    return NULL;
}
Is it suitable to use the usleep function to achieve a 20 millisecond interval, or should I use some other method, such as a timer?
Is usleep accurate enough? Does this sleep function affect the main thread in any way?
Using the sleep() family often results in imprecise timing, especially when the process has many CPU-consuming threads and the required intervals are relatively small, like 20 ms. So you shouldn't assume that a *sleep() call blocks execution for exactly the specified time. In the situation described above, the actual sleep duration may be two or more times greater than specified (assuming the kernel is not a real-time one). As a result, you should implement some kind of compensation logic that adjusts the sleep duration for subsequent calls.
A more precise (but of course not ideal) approach is to use POSIX timers. See timer_create(). The most precise timers are the ones that use SIGEV_SIGNAL or SIGEV_THREAD_ID notifications (the latter is Linux-only). As the signal number you can use one of the real-time signals (SIGRTMIN to SIGRTMAX), but be aware that pthread implementations often use a few of these signals internally, so you should choose the actual number carefully. Also, doing something in a signal handler context requires extra attention, because not every library function may be used safely there; see the list of async-signal-safe functions in the signal-safety(7) man page.
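A minimal sketch of a 20 ms periodic POSIX timer. For brevity it uses SIGEV_THREAD (a callback in its own thread) rather than the SIGEV_SIGNAL/SIGEV_THREAD_ID notifications mentioned above; write_sample is a made-up callback name. Link with -lrt on older glibc:

#include <signal.h>
#include <time.h>
#include <unistd.h>

static void write_sample(union sigval sv)
{
    (void)sv;
    /* write the current variable values to the file here */
}

int main(void)
{
    timer_t timerid;
    struct sigevent sev = { 0 };
    struct itimerspec its = { 0 };

    sev.sigev_notify = SIGEV_THREAD;           /* run the callback in its own thread */
    sev.sigev_notify_function = write_sample;
    timer_create(CLOCK_MONOTONIC, &sev, &timerid);

    its.it_value.tv_nsec    = 20 * 1000000L;   /* first expiry after 20 ms */
    its.it_interval.tv_nsec = 20 * 1000000L;   /* then every 20 ms */
    timer_settime(timerid, 0, &its, NULL);

    for (;;)
        pause();                               /* keep the process alive */
}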
P.S. Also note that select() called with empty sets is a fairly portable way to sleep with subsecond precision.
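For example, a portable ~20 ms sleep with select():

#include <sys/select.h>

struct timeval tv = { 0, 20 * 1000 };   /* 0 s, 20,000 us = 20 ms */
select(0, NULL, NULL, NULL, &tv);       /* no fds: just sleeps for the timeout */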
Sleeping: sleep() and usleep()
Now, let me start with the easier timing calls. For delays of multiple seconds, your best bet is probably to use sleep(). For delays of at least tens of milliseconds (about 10 ms seems to be the minimum delay), usleep() should work. These functions give the CPU to other processes ("sleep"), so CPU time isn't wasted. See the manual pages sleep(3) and usleep(3) for details.
For delays of under about 50 milliseconds (depending on the speed of your processor and machine, and the system load), giving up the CPU takes too much time, because the Linux scheduler (for the x86 architecture) usually takes at least about 10-30 milliseconds before it returns control to your process. Due to this, in small delays, usleep(3) usually delays somewhat more than the amount that you specify in the parameters, and at least about 10 ms.
nanosleep()
In the 2.0.x series of Linux kernels, there is a new system call, nanosleep() (see the nanosleep(2) manual page), that allows you to sleep or delay for short times (a few microseconds or more).
For delays <= 2 ms, if (and only if) your process is set to soft real time scheduling (using sched_setscheduler()), nanosleep() uses a busy loop; otherwise it sleeps, just like usleep().
The busy loop uses udelay() (an internal kernel function used by many kernel drivers), and the length of the loop is calculated using the BogoMips value (the speed of this kind of busy loop is one of the things that BogoMips measures accurately). See /usr/include/asm/delay.h for details on how it works.
Source: http://tldp.org/HOWTO/IO-Port-Programming-4.html
Try using nanosleep() instead of usleep(); it should be more accurate for a 20 ms interval.
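A sketch of a drift-free 20 ms loop: rather than a plain relative nanosleep(), clock_nanosleep() with an absolute deadline keeps scheduling error from accumulating over many periods (writer_loop is an illustrative name; assumes POSIX clocks are available):

#include <time.h>

void writer_loop(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        next.tv_nsec += 20 * 1000000L;           /* advance the deadline by 20 ms */
        if (next.tv_nsec >= 1000000000L) {       /* normalize into [0, 1e9) */
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        /* write the sample to the file here */
    }
}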
Can someone please tell me how this function works? I'm using it in code and have an idea of how it works, but I'm not 100% sure exactly. I understand the concept of an input variable N counting down, but how the heck does it work? Also, if I am using it repeatedly in my main() for different delays (different inputs for N), do I have to "zero" the function if I used it somewhere else?
Reference: MILLISEC is a constant defined by Fcy/10000, or system clock/10000.
Thanks in advance.
// DelayNmSec() gives a 1 ms to 65.5 seconds delay
/* Note that FCY is used in the computation. Please make the necessary
   changes (PLLx4 or PLLx8 etc.) to compute the right FCY, as in the define
   statement above. */
void DelayNmSec(unsigned int N)
{
    unsigned int j;
    while (N--)
        for (j = 0; j < MILLISEC; j++)
            ;
}
This is referred to as busy waiting: it simply burns some CPU cycles, thus "waiting" by keeping the CPU "busy" with empty loops. You don't need to reset the function; it does the same thing each time it is called.
If you call it with N=3, it will repeat the while loop 3 times, every time counting with j from 0 to MILLISEC, which is supposedly a constant that depends on the CPU clock.
The original author of the code has timed it and looked at the generated assembly to determine the exact number of instructions executed per millisecond, and has configured the constant MILLISEC so that the for loop busy-waits for about one millisecond.
The input parameter N is then simply the number of milliseconds the caller wants to wait, i.e. the number of times the for loop is executed.
The code will break if
used on a different or faster microcontroller (depending on how Fcy is maintained), or
the optimization level of the C compiler is changed, or
the C compiler version is changed (as it may generate different code),
so, if the guy who wrote it is clever, there may be a calibration program which defines and configures the MILLISEC constant.
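A common hardening against the optimizer point above (not in the original code) is to make the loop counter volatile, so the empty loop cannot be removed entirely; the MILLISEC calibration still has to account for the extra memory accesses this introduces:

void DelayNmSec(unsigned int N)
{
    volatile unsigned int j;         /* volatile: the compiler must keep the empty loop */
    while (N--)
        for (j = 0; j < MILLISEC; j++)
            ;                        /* busy-wait for roughly one millisecond */
}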
This is what is known as a busy wait in which the time taken for a particular computation is used as a counter to cause a delay.
This approach does have problems: on processors with different speeds, the computation needs to be adjusted. Old games used this approach, and I remember a simulation that targeted an old 8086-class processor and used this kind of busy wait to make an animation move smoothly. When the game was used on a Pentium PC, instead of the rocket majestically rising up the screen over several seconds, the entire animation flashed before your eyes so fast that it was difficult to see what it was.
This sort of busy wait means that the running thread sits in a computation loop counting down the given number of milliseconds; the thread does nothing else during that time.
If the operating system is not a preemptive multi-tasking OS, then nothing else will run until the count down completes which may cause problems in other threads and tasks.
If the operating system is a preemptive multi-tasking one, the resulting delays will vary as control is switched to some other thread for some period of time before switching back.
This approach is normally used for small pieces of software on dedicated processors where a computation has a known amount of time and where having the processor dedicated to the countdown does not impact other parts of the software. An example might be a small sensor that performs a reading to collect a data sample then does this kind of busy loop before doing the next read to collect the next data sample.
So I have an embedded Linux device that is connected to a motor controller via a serial port. I am writing an interface library which makes a lot of nice generic functions which other programs will call. One of which is a function to run the program that is currently on the controller's flash disk:
int run_motor_program(int serial_fd, char *label, timeout);
The general pseudocode for this function is:
call write(serial_fd, "start program at `label`")
perform a couple of read()s / write()s to check whether the program has started on the motor controller
do
    /* some stuff */
while (program is running AND timeout hasn't been exceeded)
if the timeout was exceeded, kill the motor and return a timeout error
The timeout in the above function definition is used in case something goes wrong while running the program on the motor controller. If the motor controller gets stuck in a longer loop than expected, I need the ability to stop the program.
The only ways I know for keeping track of a timeout are:
1) Calling gettimeofday() before and during the loop to see if elapsed time is > timeout value passed in
2) Calling clock_gettime() and basically doing the same as 1.
3) Using timer_create() before the loop and timer_getoverrun() in the loop to check if the time has elapsed (this seems to be the most elegant solution, but I can't seem to get timer_getoverrun() to work with SIGEV_NONE [I don't want to use signals]).
Which of these (or if anyone has any other suggestions) is the best way to handle including a timeout in a function? I really only need resolution down to the millisecond.
I tend to use option 1 myself. If subsecond granularity isn't needed, then I'll use time(). Typically the work is checking for I/O, so I also use select() with a timeout configured.
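A sketch of that combination (option 1 plus select() on the serial fd); the helper name and the 100 ms polling interval are illustrative:

#include <sys/select.h>
#include <time.h>

/* Returns 0 when serial data is ready, -1 when timeout_ms has elapsed. */
static int wait_readable_or_timeout(int serial_fd, long timeout_ms)
{
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);

    for (;;) {
        clock_gettime(CLOCK_MONOTONIC, &now);
        long elapsed_ms = (now.tv_sec - start.tv_sec) * 1000
                        + (now.tv_nsec - start.tv_nsec) / 1000000;
        if (elapsed_ms >= timeout_ms)
            return -1;                                  /* timed out: caller kills the motor */

        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(serial_fd, &rfds);
        struct timeval tv = { 0, 100 * 1000 };          /* re-check roughly every 100 ms */
        if (select(serial_fd + 1, &rfds, NULL, NULL, &tv) > 0)
            return 0;                                   /* data available: read and check status */
    }
}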
You could consider using one of the alarm signal mechanisms. The simplest and oldest is alarm(), which schedules a SIGALRM signal after the specified number of seconds. If you have a signal handler for SIGALRM, your process won't die but will allow you to recover from the error.
The primary limitation of alarm() is that it deals in whole seconds. There are a plethora of sub-second or fractional second alternatives. You should look at setitimer(). You might use nanosleep() but you'd probably also need to use threads since nanosleep() blocks the calling thread. That moves it up the complexity scale. There are calls like pthread_cond_timedwait() that could also be used in a threaded program.
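A sketch of the setitimer() route: arm a one-shot sub-second timer and have the motor loop check a flag set by the SIGALRM handler (arm_timeout and the 2.5 s value are illustrative):

#include <signal.h>
#include <sys/time.h>

static volatile sig_atomic_t timed_out = 0;

static void on_alarm(int sig) { (void)sig; timed_out = 1; }

void arm_timeout(void)
{
    struct sigaction sa = { 0 };
    sa.sa_handler = on_alarm;
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval it = { 0 };
    it.it_value.tv_sec  = 2;          /* one-shot: fires once after 2.5 s */
    it.it_value.tv_usec = 500000;
    setitimer(ITIMER_REAL, &it, NULL);
}

/* In the motor loop: if (timed_out) { kill the motor; return a timeout error; } */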
Your prototype int run_motor_program(int serial_fd, char *label, timeout); won't compile; you need to define the type of the timeout argument. You also need to decide what your argument means - whether it is an interval or duration of time (the number of seconds to run the motor for before timing out) or whether it is the end time (the Unix time after which the program must be stopped). There are various sub-second structures that you'll have to negotiate. Your choice is likely to be affected by which system call you use for implementing the timeout.
I'm trying to simulate a key down and key up action, with a delay in between of, for example, 2638 milliseconds:
SendMessage(hWnd, WM_KEYDOWN, keyCode, 0);
Sleep(2638);
SendMessage(hWnd, WM_KEYUP, keyCode, 0);
How would you know if it really worked?
You wouldn't with this code, since accurately measuring the time that code takes to execute is a difficult task.
To get to the question posed by your question title (you should really ask one question at a time...) the accuracy of said functions is dictated by the operating system. On Linux, the system clock granularity is 10ms, so timed process suspension via nanosleep() is only guaranteed to be accurate to 10ms, and even then it's not guaranteed to sleep for exactly the time you specify. (See below.)
On Windows, the clock granularity can be changed to accommodate power management needs (e.g. decrease the granularity to conserve battery power). See MSDN's documentation on the Sleep function.
Note that with Sleep()/nanosleep(), the OS only guarantees that the process suspension will last for at least as long as you specify. The execution of other processes can always delay resumption of your process.
Therefore, the key-up event sent by your code above will be sent at least 2.638 seconds later than the key-down event, and not a millisecond sooner. But it would be possible for the event to be sent 2.7, 2.8, or even 3 seconds later. (Or much later if a realtime process grabbed hold of the CPU and didn't relinquish control for some time.)
Sleep works in terms of the standard Windows thread scheduling. It is accurate up to about 20-50 milliseconds.
So it's OK for user-experience things; however, it's absolutely inappropriate for real-time work.
Besides this, there are much better ways to simulate keyboard/mouse events. Please see SendInput.
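A sketch with SendInput() (keyCode as in the question; error handling omitted):

INPUT in = { 0 };
in.type = INPUT_KEYBOARD;
in.ki.wVk = keyCode;

SendInput(1, &in, sizeof(INPUT));     /* key down */
Sleep(2638);
in.ki.dwFlags = KEYEVENTF_KEYUP;
SendInput(1, &in, sizeof(INPUT));     /* key up */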
The Sleep() function will return before the desired delay when the requested delay is shorter than the time left until the next timer interrupt occurs. But this only means that you want to sleep for a shorter period than your system currently supports. It is advisable to set the multimedia timer resource to a higher interrupt frequency to obtain a better match between the observed sleep delay and the desired delay.
See the comments in the following threads:
How to get an accurate 1ms Timer Tick under WinXP
Sleep Less Than One Millisecond
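A sketch of raising the multimedia timer resolution around the sleep (link with winmm.lib; remember to restore the previous resolution afterwards):

#include <windows.h>
#include <mmsystem.h>      /* timeBeginPeriod / timeEndPeriod */

timeBeginPeriod(1);        /* request 1 ms timer granularity */
Sleep(20);                 /* now much closer to 20 ms than with the default ~15.6 ms granularity */
timeEndPeriod(1);          /* restore the previous resolution */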
The Sleep() call ensures that the thread is suspended for at least the amount of time given as its argument; the operating system does not guarantee that it wakes exactly when that time elapses. For a detailed discussion you can refer to the post below:
how is sleep implemented at OS level?