Is a Windows Timer as accurate as Sleep()? - c

Sleep() is very accurate, so for example if I want to sleep for 10 hours:
Sleep(36000000); // sleep for 10 hours
My thread will wait for exactly 10 hours (plus the time that Windows needs to wake up my thread, which is negligible).
However, since Sleep() will block my UI thread, I wish to use a Windows timer instead. So is a Windows timer as accurate as Sleep()? That is, will it wait for exactly 10 hours (plus the time it takes for my window procedure to receive the WM_TIMER message)?

Yes, the basic plumbing underneath Sleep() and SetTimer() is the same, which you can see by calling timeBeginPeriod(): it affects the accuracy of both. It is the clock-tick interrupt handler that counts sleeps and timers down and gets the thread scheduled when it is ready to run again. Their sleep/wait time is adjusted when the system clock gets re-calibrated by a time server.

Related

When will the thread resume running after it has called sleep() or Sleep()?

As per the Linux programmer's manual, which says:
"sleep() makes the calling thread sleep until seconds seconds have elapsed or a signal arrives which is not ignored."
I think a thread would not resume execution as soon as its sleep duration expires. It may wake up early or late; it's not deterministic. Right?
Typically, threads don't wake up early (unless, as mentioned above, a separate thread signals the kernel that it should).
In most kernels, the time argument of a sleep function is interpreted into a number of SysTicks. The task then reports to the kernel that it doesn’t think it will need compute time for at least that many ticks - so it is excluded from process queuing until the SysTick register is greater than (value at time of call) + (time argument).
Typically, in a preemptive kernel that isn’t bogged down, a sleep call will last the desired time + 1 SysTick +- 1 SysTick (so anywhere from on time to a wee bit late).
That is, at least, in the embedded world. All the timing goes mushy when you transition to x86.
As per the Linux programmer's manual: "Furthermore, after the sleep completes, there may still be a delay before the CPU becomes free to once again execute the calling thread."
So the time is not determined.

Contiki timer without pausing the process

Is there a way to wait for a timer to expire without pausing the process? If we use
PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&et));
we pause the process.
Suppose we want to continue doing other stuff and, when the timer expires, check whether the value of a function has changed.
If it is not possible, do I have to start a new process that just waits?
Thank you.
No, there isn't - that's a fundamental consequence of how event timers work. Contiki multithreading / multiprocessing is cooperative - processes have to voluntarily pause execution to let other processes run. Since event timers are managed by another (system) process, if your process never gives up execution, the timer process never gets to run. Hence, your process will never get the timer event back.
Sounds like event timer might not be the best option for you. You can use rtimer instead:
rtimer_clock_t end = RTIMER_NOW() + RTIMER_SECOND;
while(RTIMER_CLOCK_LT(RTIMER_NOW(), end)) {
/* do stuff */
}
Remember to poke the watchdog timer occasionally - if your process is stuck doing things for a couple of seconds (not recommended anyway), the watchdog will expire.
The normal method is to write an interrupt handler for the timer interrupt.
The interrupt handler has a higher priority than the main application,
so when the interrupt occurs, the handler runs to completion and then execution returns to the main application.

SetThreadPriority SetPriorityClass and SetProcessAffinityMask

I am having a small issue which I don't entirely understand.
So basically I have a thread which is waiting on an event, and a timeSetEvent from WinMM which pulses the event every 1 ms.
I put some QueryPerformanceCounter calls in my thread to find out the time between each thread wake-up. The thread currently just waits for the event, checks its own rate, and does nothing else.
I verified that the WinMM timer is correctly scheduled every 1 ms; however, once the event is signaled, sometimes my thread is preempted and runs ~6 ms later than expected. At this point I started playing with priorities and affinity, so I cranked my priority class up to real-time and my thread to time-critical. When on core 0, my thread still gets preempted every now and then (~1-2 times every 15 seconds). If instead I set the affinity to core 2, it never gets preempted (like never ever; I ran the test software for a few hours and it was never preempted once). Are there some driver/system threads running with priority above real-time/time-critical that are bound to core 0 only?
I am running Windows 7 Pro on an Intel i7-3470.

How to decrease CPU Usage when it reaches to 100% when using while(1) loop

I am working on UDP Server/Multiple Client Application.
There are multiple threads handling multiple clients.
There is one single thread which keeps sending KEEPALIVE messages to each active client. Since this thread runs in a while(1) loop, CPU usage reaches 100%.
Since I want this functionality to keep going, I have used a while(1) thread. I also tried adding a sleep after each iteration of the loop, but I don't think sleep() frees the CPU. Is there any way I can reduce CPU usage for a specific time? E.g., after a single iteration of the loop, free up the CPU for about 10 seconds and then go back to the loop.
Please help me. Thanks a lot in advance.
sleep - Suspends the execution of the current thread until the time-out interval elapses.
And gives processor to other threads which are ready to run.
source : http://msdn.microsoft.com/en-us/library/windows/desktop/ms686298(v=vs.85).aspx
So a simple sleep does all you need.
Sending keep-alive messages inside a while(1) loop with no delay is a bad idea, because not only do you burn all the CPU time, you also flood the network and storm the recipients of those messages. You can use the Sleep() WinAPI function with a reasonable delay (the 10 seconds you suggested looks reasonable) to suspend your sending thread for a while:
while( 1 ) {
    sendKeepAlive();
    Sleep( 10 * 1000 ); // 10 seconds
}
Sleep() definitely does suspend your thread and while the thread is suspended it doesn't consume CPU time.
Instead of sleep, try int usleep(useconds_t usec);
http://pubs.opengroup.org/onlinepubs/7908799/xsh/usleep.html
For windows specific you can give a try to timeBeginPeriod / timeEndPeriod.See the link – http://www.geisswerks.com/ryan/FAQS/timing.html
On Linux I use nanosleep() for sub-second delays and plain sleep() for long ones, e.g. nanosleep(&tsleep, NULL):
int period = 100000;   // microseconds
int limit = 300;
long long total_ns = (long long)period * limit * 1000; // 30 s in ns
struct timespec twork, tsleep; // time to work, and time to sleep
twork.tv_sec = total_ns / 1000000000LL;
twork.tv_nsec = total_ns % 1000000000LL;
tsleep = twork;
Note that tv_nsec must be below 1,000,000,000; assigning period*limit*1000 to it directly overflows that range and makes nanosleep() fail with EINVAL, so the total has to be split between tv_sec and tv_nsec.

What could produce this bizarre behavior with two threads sleeping at the same time?

There are two threads. One is an events thread, and another does rendering. The rendering thread uses variables from the events thread. There are mutex locks but they are irrelevant since I noticed the behavior is same even if I remove them completely (for testing).
If I do a sleep() in the rendering thread alone, for 10 milliseconds, the FPS is normally 100.
If I do no sleep at all in the rendering thread and a sleep in the events thread, the rendering thread does not slow down at all.
But, if I do a sleep of 10 milliseconds in the rendering thread and 10 in the events thread, the FPS is not 100, but lower, about 84! (notice it's the same even if mutex locks are removed completely)
(If none of them has sleeps it normally goes high.)
What could produce this behavior?
--
The sleep command used is Windows' Sleep() or SDL_Delay() (which probably ends up calling Sleep() on Windows).
I believe I have found an answer (own answer).
Sleeping is not guaranteed to wait exactly the given period; it will wait at least that long, due to OS scheduling.
A better approach would be to calculate the actual time passed explicitly, and only allow execution once that much time has elapsed.
The threads run asynchronously unless you synchronise them, and will be scheduled according to the OS's scheduling policy. I would suggest that the behaviour will at best be non-deterministic (unless you were running on an RTOS perhaps).
You might do better to have one thread trigger another by some synchronisation mechanism such as a semaphore, then only have one thread Sleep, and the other wait on the semaphore.
I do not know what your "Events" thread does but given its name, perhaps it would be better to wait on the events themselves rather than simply sleep and then poll for events (if that is what it does). Making the rendering periodic probably makes sense, but waiting on events would be better doing exactly that.
The behavior will vary depending on many factors such as the OS version (e.g. Win7 vs. Win XP) and number of cores. If you have two cores and two threads with no synchronization objects they should run concurrently and Sleep() on one thread should not impact the other (for the most part).
It sounds like you have some other synchronization between the threads because otherwise when you have no sleep at all in your rendering thread you should be running at >100FPS, no?
In case there is absolutely no synchronization, then depending on how much processing happens in the two threads, having them both Sleep() may increase the probability of contention on a single-core system. If only one thread calls Sleep(), it is generally likely to be given the next quantum once it wakes up, and assuming it does very little processing (i.e. yields right away), that behavior will continue. If two threads call Sleep(), there is some probability they will wake up in the same quantum, and if at least one of them needs to do any amount of processing, the other will be delayed and the observed frequency will be lower. This should only apply if there is a single core available to run the two threads on.
If you want to maintain a 100FPS update rate you should keep track of the next scheduled update time and only Sleep for the remaining time. This will ensure that even if your thread gets bumped by some other thread for a CPU quanta you will be able to keep the rate (assuming there is enough CPU time for all processing). Something like:
DWORD next_frame_time = GetTickCount(); // milliseconds; note the limited resolution of GetTickCount()
while(1)
{
    next_frame_time += 10; // Time of next frame update in ms
    DWORD wait_for = next_frame_time - GetTickCount(); // How much time remains until the next update
    if( wait_for < 11 ) // Simplistic lateness test: if we're already late, the unsigned subtraction wraps to a huge value
    {
        Sleep(wait_for);
    }
    // Do periodic processing here
}
Depending on the target OS and your accuracy requirements you may want to use a higher resolution time function such as QueryPerformanceCounter(). The code above will not work well on Windows XP where the resolution of GetTickCount() is ~16ms but should work in Win7 - it's mostly to illustrate my point rather than meant to be copied literally in all situations.
