I'm extending the functionality of a semaphore. I hit a roadblock when I realized that I don't know how an actual semaphore is implemented, and to make sure my code runs correctly, I need to know.
I know that a semaphore blocks a thread that calls sem_wait() while another thread currently holds it; the blocked thread is then put on a wait list for that semaphore.
My question relates to what happens on a sem_post(). Is the next thread pulled off the waiting list, set as the locking thread, and allowed to be unblocked? Or is the scheme for posting completely different?
Thanks!
The next thread to unblock on its sem_wait() will be whatever thread the OS decides is the next one to context switch into. Nobody makes any guarantee of ordering; it depends on your OS's scheduling strategy. It might be the thread that has been off the CPU for the longest, or the one that has been assigned the highest "priority", or the one that has historically had certain resource-usage statistics, or whatever.
Most likely, your current thread (the one that called sem_post()) will continue running for a while, until it either starts waiting for user input, blocks on another semaphore, or runs out of its OS-allotted time slice. Then, the OS will switch in some totally unrelated process to run for a fraction of a second (probably Firefox or something), then go off and handle some network traffic, get itself a cup of tea, and, finally, when it gets around to it, pick whichever of your other threads it feels like, based on, say, whether past history suggests that particular thread is more CPU-bound or I/O-bound.
In many OSes, priority is given to I/O-bound processes that haven't been around for very long. The theory is that new processes might be short-lived (if it's been around for five hours already, odds are it won't be finishing up in the next 1ms), so we might as well get them over with. I/O-bound processes are likely to continue to be I/O-bound, which means that chances are they are going to switch off the CPU shortly while waiting for other resources. Basically, the OS wants to find the process it can be done with ASAP, so it can get back to sipping its tea and running your malware.
Semaphores have two operations:
P(): to acquire the semaphore (you seem to call this sem_wait())
V(): to release the semaphore (you seem to call this sem_post())
Semaphores also have an integer associated with them, which is the number of concurrent threads allowed to pass P() without blocking. Further calls to P() will block until V() is called to free up spots.
That is the classic definition of a semaphore.
Edit: Semaphores do not make any guarantee of order. They don't have to actually use a queue or other FIFO structure. If only one thread is allowed at a time, then when that thread calls V(), another (effectively arbitrary) waiting thread will return from its P() call and continue.
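To make the classic definition concrete, here is a minimal sketch of those semantics built on a pthread mutex and condition variable. The csem_* names are invented for illustration; real sem_t implementations (e.g. the futex-based one in glibc) look quite different:

    /* Minimal sketch of classic counting-semaphore semantics (P/V). */
    #include <pthread.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  nonzero;
        unsigned        count;  /* threads that may pass P() without blocking */
    } csem_t;

    void csem_P(csem_t *s)              /* acquire; cf. sem_wait() */
    {
        pthread_mutex_lock(&s->lock);
        while (s->count == 0)           /* loop guards against spurious wakeups */
            pthread_cond_wait(&s->nonzero, &s->lock);
        s->count--;
        pthread_mutex_unlock(&s->lock);
    }

    void csem_V(csem_t *s)              /* release; cf. sem_post() */
    {
        pthread_mutex_lock(&s->lock);
        s->count++;
        pthread_cond_signal(&s->nonzero);  /* wakes *some* waiter; order unspecified */
        pthread_mutex_unlock(&s->lock);
    }

Note that pthread_cond_signal wakes an unspecified waiter, which is exactly the "possibly random thread" behavior described above.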
According to the IEEE standard, the behavior of sem_post() on POSIX semaphores is:
If the semaphore value resulting from this operation is positive, then no threads were blocked waiting for the semaphore to become unlocked; the semaphore value is simply incremented.
If the value of the semaphore resulting from this operation is zero, then one of the threads blocked waiting for the semaphore shall be allowed to return successfully from its call to sem_wait(). If the Process Scheduling option is supported, the thread to be unblocked shall be chosen in a manner appropriate to the scheduling policies and parameters in effect for the blocked threads. In the case of the schedulers SCHED_FIFO and SCHED_RR, the highest priority waiting thread shall be unblocked, and if there is more than one highest priority thread blocked waiting for the semaphore, then the highest priority thread that has been waiting the longest shall be unblocked. If the Process Scheduling option is not defined, the choice of a thread to unblock is unspecified.
If the Process Sporadic Server option is supported, and the scheduling policy is SCHED_SPORADIC, the semantics are as per SCHED_FIFO above.
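In POSIX terms, the two quoted cases play out in ordinary usage like this (a minimal snippet, error checking omitted):

    #include <semaphore.h>

    static sem_t sem;  /* initialized once, e.g. sem_init(&sem, 0, 1) */

    void critical(void)
    {
        sem_wait(&sem);   /* P(): decrement; blocks while the value is 0 */
        /* ... critical section ... */
        sem_post(&sem);   /* V(): increment; per the quote, if the resulting
                             value is 0, one blocked thread gets to return */
    }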
Related
I have created a program in C that creates 2 buffers. The buffer indices hold single characters, 'A' or 'b' etc... In order to learn more about multithreading, I created a set of semaphores based on the producer/consumer problem to produce characters and consume characters from the buffers. I have 3 producer threads for each buffer and 10 consumer threads. The consumers take one item from each buffer, then report it (freeing the memory of the consumed item also). Now, from what I've read, sem_wait() is supposed to signal the "longest waiting thread" when it comes out of a blocking state (I read this in a book and in an online POSIX library).
Now, is this actually true?
The application I have made should have both consumers and producers waiting at the same sem_wait() gate, but the producers get into the critical section more than twice as often as any consumer. The consumers do have an extra semaphore to wait for, but that shouldn't make that huge of a difference. I can't seem to figure out why it's happening, so I'm hoping someone else does. If I sleep(1) in the producer threads, the consumers get in just fine and the buffers hover around 0 items, as I would expect.
Also, should thread creation order play any role in how I structure the program for fairness?
I.e., spawn one of each type in round-robin fashion until everyone is created and running.
Are there any methods anyone can describe to me to institute a more fair system of thread access? I've read that creating a FIFO queue system might be one solution, where the longest-waiting thread has the highest priority (which is what I thought sem_wait() would do anyway).
Just wondering what methods are out there for both rudimentary and higher level threading.
The POSIX standard actually says that "the highest priority thread that has been waiting the longest shall be unblocked" only when the SCHED_FIFO or SCHED_RR scheduling policy applies to the blocked thread.
If you're not using one of those two realtime scheduling policies, then the semaphore does not have to be "fair".
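If you actually want the quoted FIFO/priority behavior, you have to opt into one of those realtime policies when creating your threads. A hedged sketch (function name invented; this needs privilege, e.g. CAP_SYS_NICE on Linux, the priority value is an arbitrary example, and error handling is omitted):

    #include <pthread.h>
    #include <sched.h>

    static pthread_t spawn_fifo_thread(void *(*fn)(void *), void *arg)
    {
        pthread_attr_t attr;
        struct sched_param sp = { .sched_priority = 10 };  /* arbitrary example */
        pthread_t tid;

        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        pthread_attr_setschedparam(&attr, &sp);
        pthread_create(&tid, &attr, fn, arg);  /* fails with EPERM without privilege */
        pthread_attr_destroy(&attr);
        return tid;
    }

With SCHED_FIFO in effect for the blocked threads, sem_post() must unblock the highest-priority, longest-waiting thread, per the standard text quoted earlier.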
I have a boss thread that spawns up to M worker threads. Over the lifetime of the program, workers may be added and removed. When the program-wide shutdown flag is signalled, I want to await the completion of these workers.
Currently, any of the threads can add/remove threads, but that's not strictly a requirement; it's enough that some thread can initiate a spawn/removal.
What's stopping me from using a counting semaphore or pthread_barrier_wait() is that it expects a fixed number of threads.
I can't loop pthread_join() over all workers either because I'd risk leaking zombie threads that have exited and possibly since then been replaced.
The boss thread itself has no other purpose than spawning the threads initially and making sure that the process exits gracefully.
I've spent days on and off on this problem and cannot come up with something robust and simple; are there any fairly well-established ways to accomplish this with POSIX threads?
1) "Currently, any of the threads can add/remove threads"
and
2) "are there any fairly well-established ways to accomplish this with POSIX threads"
Yes. Don't do (1). Have the boss thread do it.
Or, you can protect the code which spawns threads with a critical section or mutex (I assume you are already doing this). They should check a flag to see if shutdown is in progress, and if it is, don't spawn any more threads.
You can also keep counters for the "ideal number of threads" and the "actual number of threads", and have threads exit if they find actual > ideal. (I.e., they should decrement actual, exit the critical section/mutex, then quit.)
When you need to initiate shutdown, use the SAME mutex/section to set the flag. Once done, you know the number of threads cannot increase, so you can use the most recent value.
Indeed, to exit you can just have the boss thread set "ideal" to zero, exit the mutex, and repeatedly sleep 10ms until all threads have exited. Worst case is you wait an extra 10ms to quit. If that's too much, cut it to 1ms.
These are just ideas. The central concept is that all thread creation/removal, and messages about thread creation/removal should be protected by a mutex to ensure that only one thread is adding/removing/querying status at a time. Once you have that in place, there is more than one way to do it...
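For instance, a hedged sketch of the flag + ideal/actual idea from above (all names invented, error handling omitted):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
    static int  ideal_threads;   /* target pool size; 0 once shutdown starts */
    static int  actual_threads;  /* current live worker count */
    static bool shutting_down;   /* spawners must check this under the lock */

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&pool_lock);
            if (actual_threads > ideal_threads) {  /* surplus: this thread quits */
                actual_threads--;
                pthread_mutex_unlock(&pool_lock);
                return NULL;
            }
            pthread_mutex_unlock(&pool_lock);
            /* ... do one unit of work ... */
        }
    }

    static void begin_shutdown(void)
    {
        pthread_mutex_lock(&pool_lock);
        shutting_down = true;  /* no new spawns from here on */
        ideal_threads = 0;     /* every worker will see a surplus and exit */
        pthread_mutex_unlock(&pool_lock);
        /* the boss now polls actual_threads (sleeping ~10ms between checks)
           until it reaches zero, then the process can exit */
    }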
Threads that want to initiate spawns/removals should ask the boss thread to actually do it for them. Then the boss thread doesn't have to worry about threads it doesn't know about, and you can use one of the simple methods you described in your question.
I'll take the opposite tack from some of the other answers, since I have to do this now and again.
(1) Give every spawned thread access to a single pipe file descriptor, either through the data passed via pthread_create or globally. Only the boss thread reads the pipe. Each thread announces its creation and termination to the boss via the pipe by passing its tid, and the boss adds or removes it from its list and pthread_joins it as appropriate. The boss can block on the pipe without having to do anything special. (A sketch follows after this list.)
(2) Do more or less the above with some other mechanism: a global counter and list with an accompanying condition variable to wake the boss, a message queue, etc.
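A hedged sketch of idea (1); the message format, names, and bookkeeping are invented for illustration. Writes of sizeof(struct msg) bytes are atomic because the size is far below PIPE_BUF; error checks are omitted:

    #include <pthread.h>
    #include <unistd.h>

    enum { MSG_STARTED, MSG_EXITING };
    struct msg { int kind; pthread_t tid; };

    static int announce_fd;  /* write end of the pipe, shared with all workers */

    static void *worker(void *arg)
    {
        struct msg m = { MSG_STARTED, pthread_self() };
        (void)arg;
        write(announce_fd, &m, sizeof m);    /* announce creation */

        /* ... do work until told to stop ... */

        m.kind = MSG_EXITING;
        write(announce_fd, &m, sizeof m);    /* announce termination */
        return NULL;
    }

    static void boss_loop(int read_fd)
    {
        struct msg m;
        while (read(read_fd, &m, sizeof m) == (ssize_t)sizeof m) {
            if (m.kind == MSG_EXITING)
                pthread_join(m.tid, NULL);   /* reap; blocks briefly at most */
            /* MSG_STARTED: record m.tid in the boss's list of live workers */
        }
    }

The boss just blocks in read(); it never has to poll or know the worker count in advance.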
Suppose I have multiple threads blocking on a call to pthread_mutex_lock(). When the mutex becomes available, does the first thread that called pthread_mutex_lock() get the lock? That is, are calls to pthread_mutex_lock() in FIFO order? If not, what, if any, order are they in? Thanks!
When the mutex becomes available, does the first thread that called pthread_mutex_lock() get the lock?
No. One of the waiting threads gets the lock, but which one gets it is not determined.
FIFO order?
A FIFO mutex is really a pattern in its own right. See Implementing a FIFO mutex in pthreads
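For reference, the core of that pattern is a "ticket lock" layered on a pthread mutex and condition variable; a hedged sketch with invented names:

    #include <pthread.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  turn;
        unsigned long   next_ticket;   /* next ticket to hand out */
        unsigned long   now_serving;   /* ticket currently allowed in */
    } fifo_mutex_t;

    void fifo_mutex_lock(fifo_mutex_t *m)
    {
        pthread_mutex_lock(&m->lock);
        unsigned long my_ticket = m->next_ticket++;
        while (m->now_serving != my_ticket)
            pthread_cond_wait(&m->turn, &m->lock);  /* loop handles spurious wakeups */
        pthread_mutex_unlock(&m->lock);
    }

    void fifo_mutex_unlock(fifo_mutex_t *m)
    {
        pthread_mutex_lock(&m->lock);
        m->now_serving++;
        pthread_cond_broadcast(&m->turn);  /* wake all; only the right ticket proceeds */
        pthread_mutex_unlock(&m->lock);
    }

Tickets are handed out in arrival order, so lockers are served strictly FIFO; the cost is a broadcast (and the attendant context switches) on every unlock.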
"If there are threads blocked on the mutex object referenced by mutex when pthread_mutex_unlock() is called, resulting in the mutex becoming available, the scheduling policy shall determine which thread shall acquire the mutex."
Aside from that, the answer to your question isn't specified by the POSIX standard. It may be random, or it may be in FIFO or LIFO or any other order, according to the choices made by the implementation.
FIFO ordering is about the least efficient mutex wake order possible. Only a truly awful implementation would use it. The thread that ran most recently may be able to run again without a context switch, and the more recently a thread ran, the more of its data and code will be hot in the cache. Reasonable implementations try to give the mutex to the thread that held it most recently, most of the time.
Consider two threads that do this:
Acquire a mutex.
Adjust some data.
Release the mutex.
Go to step 1.
Now imagine two threads running this code on a single core CPU. It should be clear that FIFO mutex behavior would result in one "adjust some data" per context switch -- the worst possible outcome.
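In C, with a hypothetical shared counter standing in for "some data", each thread runs:

    #include <pthread.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static long counter;  /* "some data" */

    static void *spin(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&m);     /* step 1: acquire the mutex */
            counter++;                  /* step 2: adjust some data   */
            pthread_mutex_unlock(&m);   /* step 3: release; step 4: loop */
        }
    }

Under FIFO handoff on one core, every unlock would hand the mutex to the switched-out thread, forcing a full context switch per increment.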
Of course, reasonable implementations generally do give some nod to fairness. We don't want one thread to make no forward progress. But that hardly justifies a FIFO implementation!
In a past question, I asked about implementing pthread barriers without destruction races:
How can barriers be destroyable as soon as pthread_barrier_wait returns?
and received from Michael Burr a perfect solution for process-local barriers, one which unfortunately fails for process-shared barriers. We later worked through some ideas, but never reached a satisfactory conclusion, and didn't even begin to get into resource-failure cases.
Is it possible on Linux to make a barrier that meets these conditions:
Process-shared (can be created in any shared memory).
Safe to unmap or destroy the barrier from any thread immediately after the barrier wait function returns.
Cannot fail due to resource allocation failure.
Michael's attempt at solving the process-shared case (see the linked question) has the unfortunate property that some kind of system resource must be allocated at wait time, meaning the wait can fail. And it's unclear what a caller could reasonably do when a barrier wait fails, since the whole point of the barrier is that it's unsafe to proceed until the remaining N-1 threads have reached it...
A kernel-space solution might be the only way, but even that's difficult due to the possibility of a signal interrupting the wait with no reliable way to resume it...
This is not possible with the Linux futex API, and I think this can be proven as well.
We have here essentially a scenario in which N processes must be reliably awoken by one final process, and further no process may touch any shared memory after the final awakening (as it may be destroyed or reused asynchronously). While we can awaken all processes easily enough, the fundamental race condition is between the wakeup and the wait; if we issue the wakeup before the wait, the straggler never wakes up.
The usual solution to something like this is to have the straggler check a status variable atomically with the wait; this allows it to avoid sleeping at all if the wakeup has already occurred. However, we cannot do this here - as soon as the wakeup becomes possible, it is unsafe to touch shared memory!
One other approach is to actually check whether all processes have gone to sleep yet. However, this is not possible with the Linux futex API; the only indication of the number of waiters is the return value from FUTEX_WAKE; if it returns fewer waiters than you expected, you know some weren't asleep yet. However, even if we find out we haven't woken enough waiters, it's too late to do anything about it: one of the processes that did wake up may have destroyed the barrier already!
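To make that concrete, raw futex calls look roughly like this (Linux-specific; glibc exposes no wrapper, so you go through syscall(2)):

    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Wake up to nwake waiters sleeping on *uaddr; returns how many it woke. */
    static long futex_wake(int *uaddr, int nwake)
    {
        return syscall(SYS_futex, uaddr, FUTEX_WAKE, nwake, NULL, NULL, 0);
    }

    /* The problem described above: by the time we learn the return value was
       less than N - 1, a woken process may already have destroyed the
       barrier, so there is nothing safe left to retry on. */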
So, unfortunately, this kind of immediately-destroyable primitive cannot be constructed with the Linux futex API.
Note that in the specific case of one waiter, one waker, it may be possible to work around the problem; if FUTEX_WAKE returns zero, we know nobody has actually been awoken yet, so you have a chance to recover. Making this into an efficient algorithm, however, is quite tricky.
It's tricky to add a robust extension to the futex model that would fix this. The basic problem is, we need to know when N threads have successfully entered their wait, and atomically awaken them all. However, any of those threads may leave the wait to run a signal handler at any time - indeed, the waker thread may also leave the wait for signal handlers as well.
One possible way that may work, however, is an extension to the keyed event model in the NT API. With keyed events, threads are released from the lock in pairs; if you have a 'release' without a 'wait', the 'release' call blocks for the 'wait'.
This in itself isn't enough due to the issues with signal handlers; however, if we allow for the 'release' call to specify a number of threads to be awoken atomically, this works. You simply have each thread in the barrier decrement a count, then 'wait' on a keyed event on that address. The last thread 'releases' N - 1 threads. The kernel doesn't allow any wake event to be processed until all N-1 threads have entered this keyed event state; if any thread leaves the futex call due to signals (including the releasing thread), this prevents any wakeups at all until all threads are back.
After a long discussion with bdonlan on SO chat, I think I have a solution. Basically, we break the problem down into the two self-synchronized deallocation issues: the destroy operation and unmapping.
Handling destruction is easy: Simply make the pthread_barrier_destroy function wait for all waiters to stop inspecting the barrier. This can be done by having a usage count in the barrier, atomically incremented/decremented on entry/exit to the wait function, and having the destroy function spin waiting for the count to reach zero. (It's also possible to use a futex here, rather than just spinning, if you stick a waiter flag in the high bit of the usage count or similar.)
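A hedged sketch of that destroy-side idea; the field and function names are invented, and the actual wait protocol is elided:

    #include <stdatomic.h>
    #include <sched.h>

    typedef struct {
        atomic_uint users;   /* threads currently inside the wait function */
        /* ... futex word, waiter count, etc. ... */
    } barrier_t;

    int my_barrier_wait(barrier_t *b)
    {
        atomic_fetch_add(&b->users, 1);
        /* ... the actual wait protocol ... */
        atomic_fetch_sub(&b->users, 1);  /* last touch of the barrier memory */
        return 0;
    }

    void my_barrier_destroy(barrier_t *b)
    {
        while (atomic_load(&b->users) != 0)
            sched_yield();  /* or futex-wait on a flag in the count's high bit */
        /* now no waiter can touch *b; release its resources */
    }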
Handling unmapping is also easy, but non-local: ensure that munmap or mmap with the MAP_FIXED flag cannot occur while barrier waiters are in the process of exiting, by adding locking to the syscall wrappers. This requires a specialized sort of reader-writer lock. The last waiter to reach the barrier should grab a read lock on the munmap rw-lock, which will be released when the final waiter exits (when decrementing the user count results in a count of 0). munmap and mmap can be made reentrant (as some programs might expect, even though POSIX doesn't require it) by making the writer lock recursive. Actually, a sort of lock where readers and writers are entirely symmetric, and each type of lock excludes the opposite type of lock but not the same type, should work best.
Well, I think I can do it with a clumsy approach...
Have the "barrier" be its own process listening on a socket. Implement barrier_wait as:
open connection to barrier process
send message telling barrier process I am waiting
block in read() waiting for reply
Once N threads are waiting, the barrier process tells all of them to proceed. Each waiter then closes its connection to the barrier process and continues.
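A hedged sketch of that wait protocol, with the barrier process listening on a UNIX-domain socket; the path argument and one-byte messages are invented details:

    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>
    #include <string.h>

    int barrier_wait_via_socket(const char *path)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        char reply;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;
        strncpy(addr.sun_path, path, sizeof addr.sun_path - 1);
        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) != 0) {
            close(fd);
            return -1;
        }
        write(fd, "W", 1);               /* "I am waiting" */
        if (read(fd, &reply, 1) != 1) {  /* blocks until the Nth waiter arrives */
            close(fd);
            return -1;
        }
        close(fd);  /* safe: this thread no longer touches any shared state */
        return 0;
    }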
Implement barrier_destroy as:
open connection to barrier process
send message telling barrier process to go away
close connection
Once all connections are closed and the barrier process has been told to go away, it exits.
[Edit: Granted, this allocates and destroys a socket as part of the wait and release operations. But I think you can implement the same protocol without doing so; see below.]
First question: Does this protocol actually work? I think it does, but maybe I do not understand the requirements.
Second question: If it does work, can it be simulated without the overhead of an extra process?
I believe the answer is "yes". You can have each thread "take the role of" the barrier process at the appropriate time. You just need a master mutex, held by whichever thread is currently "taking the role" of the barrier process. Details, details... OK, so the barrier_wait might look like:
lock(master_mutex);
++waiter_count;
if (waiter_count < N) {
    unsigned my_generation = generation;
    while (generation == my_generation)   // loop guards against spurious wakeups
        cond_wait(master_condition_variable, master_mutex);
} else {
    ++generation;                         // releases the current batch of waiters
    cond_broadcast(master_condition_variable);
}
--waiter_count;
bool do_release = time_to_die && waiter_count == 0;
unlock(master_mutex);
if (do_release)
    release_resources();
Here master_mutex (a mutex), master_condition_variable (a condition variable), waiter_count and generation (unsigned integers), N (another unsigned integer), and time_to_die (a Boolean) are all shared state allocated and initialized by barrier_init. waiter_count and generation are initialized to zero, time_to_die to false, and N to the number of threads the barrier is waiting for. (The generation counter lets a sleeping waiter distinguish a real release from a spurious wakeup.)
Then barrier_destroy would be:
lock(master_mutex);
time_to_die = true;
bool do_release = waiter_count == 0;
unlock(master_mutex);
if (do_release)
release_resources();
Not sure about all the details concerning signal handling etc... But the basic idea of "last one out turns off the lights" is workable, I think.