Minimum time quantum needed in nanosleep(), usleep() to yield the CPU - c

In concurrent code in my workplace, there are several occurrences of nanosleep() or usleep() with a non-zero constant to free up the CPU without relying on futex(), or a sleeping synchronization primitive to put the thread to sleep (for instance, when waiting for an element from a concurrent queue). The code claims to prevent pathological cases where threads consume CPU without doing any actual work when other threads are available to get scheduled on that CPU. This sounds reasonable by itself assuming the cooperation between the sleep functions and the kernel thread scheduler is correct.
Is there a concept in Linux where a minimum duration passed to nanosleep(), usleep(), et al. is known to put the calling thread to sleep and run another thread in its place on the same core when cores are oversubscribed? And if the duration is smaller than that, does the thread not actually yield the CPU and instead continue spinning? This is the basis of the constant passed to the sleep functions in order to make them behave like a coarse yield.
I realize that a sched_yield() is probably better suited to what the code is doing; I just wanted to educate myself on the behavior of the Linux sleep functions before benchmarking a replacement for, or improvement of, the existing code.
Thanks!

The man page makes it clear that it no longer busy-waits.
In order to support applications requiring much more precise pauses (e.g., in order to control some time-critical hardware), nanosleep() would handle pauses of up to 2 milliseconds by busy waiting with microsecond precision when called from a thread scheduled under a real-time policy like SCHED_FIFO or SCHED_RR. This special extension was removed in kernel 2.5.39, and is thus not available in Linux 2.6.0 and later kernels.

#stark has answered your question as written, but to elaborate: don't do that. If you're waiting for an event to happen, perform an operation that waits for the event, like pthread_cond_wait, sem_wait, poll, read, etc., rather than sleeping and retrying. This avoids wasting lots of CPU time, and it also discourages erroneous programming models full of data races (because normally the same primitive that waits also ensures exclusive access/synchronization).
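For the queue-waiting case the question describes, a minimal sketch of that wait-for-the-event pattern might look like the following; the work_queue type, its count field, and the function names are illustrative placeholders, not anyone's actual code:

#include <pthread.h>

struct work_queue {
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
    int             count;          /* stand-in for the real queue storage */
};

static struct work_queue q = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0 };

/* Consumer: blocks in the kernel until an item is available -- no sleep/retry loop. */
void wait_for_item(void)
{
    pthread_mutex_lock(&q.lock);
    while (q.count == 0)                        /* loop guards against spurious wakeups */
        pthread_cond_wait(&q.not_empty, &q.lock);
    q.count--;                                  /* "take" the item */
    pthread_mutex_unlock(&q.lock);
}

/* Producer: publishes an item and wakes exactly one waiter. */
void push_item(void)
{
    pthread_mutex_lock(&q.lock);
    q.count++;
    pthread_cond_signal(&q.not_empty);
    pthread_mutex_unlock(&q.lock);
}

Because the waiter holds the same mutex that protects the queue, the waiting and the data access are synchronized by the same primitive, which is the point made above.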

Related

Is there ever a valid reason to call pthread_yield() when running under a modern/pre-emptive scheduler?

pthread_yield is documented as "causes the calling thread to relinquish the CPU", but on a modern OS/scheduler, the relinquishing of the CPU happens automatically at the appropriate times (i.e. whenever the thread calls a blocking operation, and/or when the thread's quantum has expired). Is pthread_yield() therefore vestigial/useless except in the special case of running under a co-operative-only task scheduler? Or are there some use-cases where calling it would still be correct/useful even under a modern pre-emptive scheduler?
pthread_yield() gives you a chance to do a short sleep -- not a timed sleep. You relinquish the remainder of the time slice to some other thread or process, but you don't put the thread in a wait queue.
Also, a while ago I read about how schedulers prioritize interactive processes. These are the processes the user interacts with directly, and they are where sluggishness is felt most (your system feels less slow if the UI stays responsive). One of the properties of interactive processes is that they have little to do and mostly don't use their entire time slice. So if a process keeps yielding before its time slice is up, the scheduler assumes it is interactive and boosts its priority. There were exploits that used this trick to effectively use 99% of the CPU while showing the offending process at 0%.
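As an illustration only (not from the question), a bounded spin that falls back to sched_yield(), the POSIX-standardized counterpart of pthread_yield(), might look like this; the SPIN_LIMIT constant and the try_get_work() helper are hypothetical:

#include <sched.h>
#include <stdbool.h>

#define SPIN_LIMIT 1000                /* arbitrary; tune by measurement */

bool try_get_work(void);               /* hypothetical non-blocking attempt to get work */

void get_work_spinning(void)
{
    for (;;) {
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (try_get_work())
                return;                /* found work while spinning */
        }
        sched_yield();                 /* give up the rest of the time slice, stay runnable */
    }
}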

How does a process know that a semaphore is available

I have a very basic question: when a process is waiting on a semaphore, it goes into a sleep state, so there is no way it can poll the semaphore value itself.
Does the kernel poll the semaphore value and, when it becomes available, send a signal to all the processes waiting on it? If so, wouldn't that be too much overhead for the kernel?
Or does the signal() call internally notify all the processes waiting on the semaphore?
The operating system schedules the process once more when another process tells the operating system that it is done with the semaphore.
Semaphores are just one of the ways of interacting with the OS scheduler.
The kernel doesn't poll the semaphore; it doesn't need to. Every time a process calls sem_post() (or equivalent), that involves interaction with the kernel. What the kernel does during the sem_post() is look up whatever processes have previously called sem_wait() on the same semaphore. If one or more processes have called sem_wait(), it picks the one with the highest priority and schedules it. This shows up as that sem_wait() finally returning, and that process carries on executing.
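A minimal sketch of that sem_wait() / sem_post() interaction (error handling omitted for brevity; compile with -pthread):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t ready;

static void *waiter(void *arg)
{
    (void)arg;
    sem_wait(&ready);                  /* sleeps in the kernel until someone posts */
    printf("woken up by sem_post()\n");
    return NULL;
}

int main(void)
{
    pthread_t t;
    sem_init(&ready, 0, 0);            /* initial value 0, so the waiter blocks */
    pthread_create(&t, NULL, waiter, NULL);
    sem_post(&ready);                  /* kernel picks a blocked waiter and schedules it */
    pthread_join(t, NULL);
    sem_destroy(&ready);
    return 0;
}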
How This is Implemented Under the Hood
Fundamentally the kernel needs to implement something called an "atomic test and set". That is an operation whereby the value of some variable can be tested and, if a certain condition is met (such as value == 0), the value is altered (e.g. value = 1). If this succeeds, the kernel will do one thing (like schedule a process); if it does not (because the condition value == 0 was false), the kernel will do something different (like put the process on the do-not-schedule list). The 'atomic' part is that this decision is made without anything else being able to look at and change the same variable at the same time.
There are several ways of doing this. One is to suspend all processes (or at least all activity within the kernel) so that nothing else is testing the value of the variable at the same time. That's not very fast.
For example, the Linux kernel once had something called the Big Kernel Lock. I don't know if this was used to process semaphore interactions, but that's the kind of thing that OSes used to have for atomic test & sets.
These days CPUs have atomic test & set (or compare & swap) instructions, which is a lot faster. The good ole' Motorola 68000 had its TAS instruction a long time ago; x86 has offered locked read-modify-write instructions such as XCHG and, later, CMPXCHG, while PowerPC achieves the same thing with its load-linked/store-conditional pair (lwarx/stwcx).
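Those instructions are what C11's atomic operations compile down to. A small, assumed sketch of an atomic "test whether it's 0 and, if so, set it to 1" done in user space:

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int lock_word = 0;       /* 0 = free, 1 = taken */

/* Test whether lock_word is 0 and, if so, set it to 1 -- in one indivisible step. */
bool try_take(void)
{
    int expected = 0;
    return atomic_compare_exchange_strong(&lock_word, &expected, 1);
}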
If you root around inside Linux you'll find mention of futexes. A futex is a "fast userspace mutex": it relies on the CPU's atomic instructions so that the uncontended case is handled entirely in user space, with the kernel getting involved only when a thread actually has to block or be woken.
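For the curious, here is a rough sketch of how user space can drive the futex(2) system call directly; normally glibc does this for you inside mutexes and semaphores, and the wrapper names here are made up:

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdatomic.h>
#include <stddef.h>

static atomic_int futex_word = 0;      /* 0 = not signalled, 1 = signalled */

/* Sleep in the kernel until the word becomes non-zero. */
static void wait_until_set(void)
{
    while (atomic_load(&futex_word) == 0) {
        /* Only blocks if the word is still 0, so a concurrent wake is never lost. */
        syscall(SYS_futex, &futex_word, FUTEX_WAIT, 0, NULL, NULL, 0);
    }
}

/* Set the word and wake one waiter. */
static void set_and_wake(void)
{
    atomic_store(&futex_word, 1);
    syscall(SYS_futex, &futex_word, FUTEX_WAKE, 1, NULL, NULL, 0);
}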
Post a Semaphore in Hardware
A variation is a mailbox semaphore. This is a special variation on a semaphore that is extremely useful in some system types where hardware needs to wake up a process at the end of a DMA transfer. A mailbox is a special location in memory which, when written to, causes an interrupt to be raised. This can be turned into a semaphore by the kernel because, when that interrupt is raised, the kernel goes through the same motions as it would had something called sem_post().
This is incredibly handy; a device can DMA a large amount of data into some pre-arranged buffer and top that off with a small DMA transfer to the mailbox. The kernel handles the interrupt, and if a process has previously called sem_wait() on the mailbox semaphore, the kernel schedules it. The process, which also knows about the pre-arranged buffer, can then process the data.
On real-time DSP systems this is very useful, because it's very fast and very low latency; it allows a process to receive data from some device with very little delay. The alternative, a full device driver stack that uses read() / write() to transfer data from the device to the process, is incredibly slow by comparison.
Speed
The speed of semaphore interactions depends entirely on the OS.
For OSes like Windows and Linux, the context switch time is fairly slow (on the order of several microseconds, if not tens of microseconds). Basically this means that when a process calls something like sem_post(), the kernel does a lot of different things while it has the opportunity before finally returning control to the process(es). What it's doing during this time could be, well, almost anything!
If a program has made use of a lot of threads, and they're all rapidly interacting between themselves using semaphores, quite a lot of time is lost to sem_post() and sem_wait(). This places an emphasis on doing a decent amount of work once a process has returned from sem_wait() before calling the next sem_post().
However on OSes like VxWorks, the context switch time is lightning fast. That is, there's very little code in the kernel that gets run when sem_post() is called. The result is that a semaphore interaction is a lot more efficient. Moreover, an OS like VxWorks is written in such a way as to guarantee that the time taken to do all this sem_post() / sem_wait() work is constant.
This influences the architecture of one's software on these systems. On VxWorks, where a context switch is cheap, there's very little penalty in having a large number of threads all doing quite small tasks. On Windows / Linux there's more of an emphasis on the opposite.
This is why OSes like VxWorks are excellent for hard real time applications, and Windows / Linux are not.
The Linux PREEMPT_RT patch set in part aims to improve the latency of the Linux kernel during operations like this. For example, it pushes a lot of device interrupt handlers (device drivers) up into kernel threads; these are scheduled almost like any other thread. The idea is to reduce the amount of work the kernel must do directly (and have more done by kernel threads), so that the work it still has to do itself (such as handling sem_post() / sem_wait()) takes less time and is more consistent in how long it takes. It is still not a hard guarantee of latency, but it's a pretty good improvement. This is what we call a soft real-time kernel. The impact, though, is that the overall throughput of the machine can be lower.
Signals
Signals are nasty, horrible things that really get in the way of using things like sem_post() and sem_wait(). I avoid them like the plague.
If you are on a Linux platform and you do have to use signals, take a serious, long look at signalfd (man page). This is a far better way of dealing with signals because you can choose to accept them at a convenient time (simply by calling read()), instead of having to handle them as soon as they occur. Certainly if you're using epoll() or select() anywhere at all in a program, then signalfd is the way to go.
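A minimal signalfd sketch (an assumed example, not anyone's production code): block the signal first, then read it at a convenient moment:

#include <sys/signalfd.h>
#include <signal.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGALRM);
    sigprocmask(SIG_BLOCK, &mask, NULL);   /* stop normal asynchronous delivery first */

    int sfd = signalfd(-1, &mask, 0);      /* fd becomes readable when SIGALRM is pending */

    alarm(2);                              /* arrange for a SIGALRM two seconds from now */

    struct signalfd_siginfo si;
    if (read(sfd, &si, sizeof si) == sizeof si)
        printf("got signal %u at a time of our choosing\n", si.ssi_signo);

    close(sfd);
    return 0;
}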

sleep vs SIGALRM usage and CPU performance

When should I use sleep(), and when a reconfiguration of SIGALRM?
For example, I'm thinking of scheduling some task at a specific time. I could spawn a thread with a sleep() call inside and, when sleep() returns, do the task; or I could specify a handler for SIGALRM and do the task inside the signal handler. Do they take the same CPU usage and time (aside from the extra thread)?
I've done some "tests" looking at the processes with the ps command, which shows me a CPU % and a CPU TIME of 0, but I'm wondering if I'm missing something or looking at the wrong data.
BTW, I'm using Linux.
Note that what you do in a signal handler is very limited. You can only call the functions POSIX designates as async-signal-safe, and most of the C library is not allowed. Certainly not any C functions that might allocate or free memory or do I/O (you can use some POSIX I/O calls such as write()).
The sleeping thread might be the easiest way for you to go. If you use nanosleep it won't cause a signal at all, so you won't need to mess with handlers and such.
If your program is doing work, a common pattern is to have a central work loop, and in that loop you can check the time periodically to see if you should run your delayed job. Or you can skip checking the time and instead check a flag variable which your SIGALRM handler sets. Setting a volatile sig_atomic_t variable is one of the few things a signal handler is allowed to do.
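A sketch of that flag pattern, under the assumption that the delayed job is just a printout and the "real work" is simulated with sleep(1):

#include <signal.h>
#include <string.h>
#include <unistd.h>
#include <stdio.h>

static volatile sig_atomic_t alarm_fired = 0;

static void on_alarm(int sig)
{
    (void)sig;
    alarm_fired = 1;                       /* the handler does nothing else */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    alarm(5);                              /* schedule the delayed job five seconds out */

    for (;;) {
        /* ... the central work loop does its normal work here ... */
        if (alarm_fired) {                 /* check the flag, act outside the handler */
            alarm_fired = 0;
            puts("running the delayed job");
            break;
        }
        sleep(1);                          /* stand-in for a unit of real work */
    }
    return 0;
}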
CPU usage for a sleeping task is zero. It goes into the kernel as a timer event and is woken up to run when the timer expires.

The disadvantages of using sleep()

In C programming, if I want to coordinate two concurrently executing processes, I can use sleep(). However, I have heard that sleep() is not a good way to implement the ordering of events between processes. Are there any reasons?
sleep() is not a coordination function. It never has been. sleep() makes your process do just that - go to sleep, not running at all for a certain period of time.
You have been misinformed. Perhaps your source was referring to what is known as a backoff after an acquisition of a lock fails, in which case a randomized sleep may be appropriate.
The way one generally establishes a relative event ordering between processes (i.e., creates a happens-before edge) is to use a concurrency-control structure such as a condition variable, which is only signalled at a certain point, or a more heavy-handed barrier, which causes each thread hitting it to wait until all others have also reached that point in the program.
Using sleep() will impact the latency and CPU load. Let's say you sleep for 1ms and check some atomic shared variable. The average latency will be (at least) 0.5ms. You will be consuming CPU cycles in this non-active thread to poll the shared atomic variable. There are also often no guarantees about the sleep time.
The OS provides services to communicate/synchronize between threads/processes. Those have low latency, consume less CPU cycles, and often have other guarantees - those are the ones you should use... (E.g. condition variables, events, semaphores etc.). When you use those the thread/process does not need to "poll". The kernel wakes up the waiting threads/processes when needed (the thread/process "blocks").
There are some rare situations where polling is the best solution for thread/process synchronization, e.g. a spinlock, usually when the overhead of going through the kernel is larger than the time spent polling.
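For completeness, here is a bare-bones spinlock of the kind meant above, sketched with C11's atomic_flag (real implementations add backoff, pause hints, and fairness):

#include <stdatomic.h>

static atomic_flag slock = ATOMIC_FLAG_INIT;

static void spin_lock(void)
{
    while (atomic_flag_test_and_set_explicit(&slock, memory_order_acquire))
        ;                                  /* keep polling; burns CPU but never enters the kernel */
}

static void spin_unlock(void)
{
    atomic_flag_clear_explicit(&slock, memory_order_release);
}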
Sleep would not be a very robust way to handle event ordering between processes as there are so many things that can go wrong.
What if your sleep() is interrupted?
You need to be a bit more specific about what you mean by "implement the order of events between processes".
In my case, I was using this in Celery: I was calling time.sleep(10). It worked fine when the celery_task was called once or twice per minute, but it created chaos in one case.
The celery_task got called 1000 times, and I had 4 Celery workers, so those 1000 calls were queued for execution.
The first 4 calls were executed by the 4 workers and the remaining 996 were still in the queue.
The workers were busy with those 4 tasks for 10 seconds, and only after those 10 seconds did they take the next 4 tasks. Going this way it would take around 1000/4 * 10 = 2500 seconds.
Eventually, we had to remove the time.sleep() as it was blocking each worker for 10 seconds.

What could produce this bizarre behavior with two threads sleeping at the same time?

There are two threads. One is an events thread, and the other does rendering. The rendering thread uses variables from the events thread. There are mutex locks, but they are irrelevant here, since I noticed the behavior is the same even if I remove them completely (for testing).
If I do a sleep() in the rendering thread alone, for 10 milliseconds, the FPS is normally 100.
If I do no sleep at all in the rendering thread and a sleep in the events thread, the rendering thread does not slow down at all.
But, if I do a sleep of 10 milliseconds in the rendering thread and 10 in the events thread, the FPS is not 100, but lower, about 84! (notice it's the same even if mutex locks are removed completely)
(If none of them has sleeps it normally goes high.)
What could produce this behavior?
--
The sleep call used is Windows' Sleep() or SDL_Delay() (which probably ends up calling Sleep() on Windows).
I believe I have found an answer (own answer).
Sleeping is not guaranteed to wait for a period, but it will wait at least a certain time, due to OS scheduling.
A better approach would be to calculate the actual time elapsed explicitly, and only run the update once a full period has genuinely passed.
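One way to express that idea on a POSIX system (an assumed sketch, not the asker's SDL/Windows code) is to measure elapsed time against a monotonic clock and only render once a full frame period has really passed:

#include <time.h>
#include <stdint.h>

static int64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

void render_when_due(void)
{
    const int64_t frame_ns = 10 * 1000000LL;       /* 10 ms target, i.e. 100 FPS */
    int64_t next = now_ns() + frame_ns;

    for (;;) {
        if (now_ns() >= next) {                    /* has a full frame period elapsed? */
            next += frame_ns;
            /* render_frame();  -- hypothetical rendering work goes here */
        } else {
            struct timespec nap = { 0, 1000000 };  /* nap ~1 ms, then re-check the clock */
            nanosleep(&nap, NULL);
        }
    }
}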
The threads run asynchronously unless you synchronise them, and will be scheduled according to the OS's scheduling policy. I would suggest that the behaviour will at best be non-deterministic (unless you were running on an RTOS perhaps).
You might do better to have one thread trigger another by some synchronisation mechanism such as a semaphore, then only have one thread Sleep, and the other wait on the semaphore.
I do not know what your "Events" thread does, but given its name, perhaps it would be better for it to wait on the events themselves rather than simply sleep and then poll for events (if that is what it does). Making the rendering periodic probably makes sense, but the events thread would be better off actually waiting on events.
The behavior will vary depending on many factors such as the OS version (e.g. Win7 vs. Win XP) and number of cores. If you have two cores and two threads with no synchronization objects they should run concurrently and Sleep() on one thread should not impact the other (for the most part).
It sounds like you have some other synchronization between the threads because otherwise when you have no sleep at all in your rendering thread you should be running at >100FPS, no?
If there is absolutely no synchronization, then depending on how much processing happens in the two threads, having them both Sleep() may increase the probability of contention on a single-core system. That is, if only one thread calls Sleep(), it is generally likely to be given the next quantum once it wakes up, and assuming it does very little processing (i.e. yields right away), that behavior will continue. If two threads call Sleep(), there is some probability they will wake up in the same quantum, and if at least one of them needs to do any amount of processing, the other will be delayed and the observed frequency will be lower. This should only apply if there is a single core available to run the two threads on.
If you want to maintain a 100 FPS update rate, you should keep track of the next scheduled update time and only Sleep for the remaining time. This ensures that even if your thread gets bumped by some other thread for a CPU quantum, you will still be able to keep the rate (assuming there is enough CPU time for all processing). Something like:
#include <windows.h>

void periodic_update_loop(void)
{
    DWORD next_frame_time = GetTickCount();  // Milliseconds. Note the coarse resolution of GetTickCount()
    while (1)
    {
        next_frame_time += 10;                              // Time of the next frame update, in ms
        DWORD wait_for = next_frame_time - GetTickCount();  // How much time remains until the next update
        if (wait_for < 11)  // Simplistic lateness test: if we're already late, the unsigned subtraction wraps and we skip the Sleep()
        {
            Sleep(wait_for);
        }
        // Do the periodic processing here
    }
}
Depending on the target OS and your accuracy requirements, you may want to use a higher-resolution time function such as QueryPerformanceCounter(). The code above will not work well on Windows XP, where the resolution of GetTickCount() is ~16 ms, but should work on Windows 7; it's mostly meant to illustrate the point rather than to be copied literally in all situations.
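For reference, a small sketch of the higher-resolution timing mentioned above, built on QueryPerformanceCounter() and QueryPerformanceFrequency():

#include <windows.h>

/* Milliseconds elapsed since 'start', with sub-millisecond resolution. */
double elapsed_ms(LARGE_INTEGER start)
{
    LARGE_INTEGER now, freq;
    QueryPerformanceCounter(&now);
    QueryPerformanceFrequency(&freq);      /* ticks per second; constant after boot */
    return (double)(now.QuadPart - start.QuadPart) * 1000.0 / (double)freq.QuadPart;
}

/* Usage: LARGE_INTEGER t0; QueryPerformanceCounter(&t0); ... if (elapsed_ms(t0) >= 10.0) ... */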
