Is there a mechanism to try to lock one of several mutexes? (C)

How can a program try to lock multiple mutexes at the same time, and know which mutex it ended up locking? Essentially, what I am looking for is an equivalent of select(), but for mutexes. Does such a thing exist? If not, are there any libraries which implement it?

I'm (almost) certain that this kind of functionality ought to be implemented with a monitor (and condition variables using signal/wait/broadcast), but I think you can solve your problem with a single additional semaphore.
Assuming all your mutex objects begin in the "locked" state, create a semaphore with initial value 0. Whenever a mutex object is unlocked, increment (V) the semaphore. Then, implement select() like this:
// grab a mutex if possible
Mutex select(Semaphore s, Mutex[] m) {
    P(s); // wait for the semaphore
    for (Mutex toTry : m) {
        boolean result = try_lock(toTry);
        if (result) return toTry;
    }
}
Essentially, the semaphore keeps track of the number of available locks, so whenever P(s) stops blocking, there must be at least one available mutex (assuming you correctly increment the semaphore when a mutex becomes available!)
I haven't attempted to prove this code correct nor have I tested it... but I don't see any reason why it shouldn't work.
Once again, you likely want to use a monitor!
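For what it's worth, the same idea can be sketched in C with POSIX primitives. This is only a sketch (untested, like the pseudocode above), and the helper names mutex_select() and release_one() are mine, not a standard API:

#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

/* Call this instead of a bare unlock, so the semaphore always counts
   how many of the mutexes are currently available. */
void release_one(sem_t *s, pthread_mutex_t *m)
{
    pthread_mutex_unlock(m);
    sem_post(s);                        /* V: one more mutex is free */
}

/* Block until at least one mutex is available, then return the one we
   managed to grab. */
pthread_mutex_t *mutex_select(sem_t *s, pthread_mutex_t **m, size_t n)
{
    sem_wait(s);                        /* P: wait until some mutex is free */
    for (size_t i = 0; i < n; i++)
        if (pthread_mutex_trylock(m[i]) == 0)
            return m[i];
    return NULL;                        /* shouldn't happen if every unlock posts */
}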

Related

How to assure that no other thread acquires a lock immediately before you destroy a mutex

The Linux man page for pthread_mutex_destroy includes the code snippet below.
One thing I don't understand about this procedure for destroying a mutex is how we know that, between pthread_mutex_unlock and pthread_mutex_destroy, no other thread tries to acquire a lock on said mutex.
Typically, how should this be handled? 1) Should an additional mutex be used to ensure that this cannot happen? 2) Or is it the client's responsibility not to try to increase the reference count after it hits 0?
obj_done(struct obj *op)
{
    pthread_mutex_lock(&op->om);
    if (--op->refcnt == 0) {
        pthread_mutex_unlock(&op->om);
(A)     pthread_mutex_destroy(&op->om);
(B)     free(op);
    } else
(C)     pthread_mutex_unlock(&op->om);
}
Something should be done to ensure the mutex isn’t going to get another lock attempt while you’re destroying it, yes. In the example case, with a reference count going to 0 involved, it's reasonable to expect that the thread holding the mutex is also the last thread with a pointer to the object. All the other threads that were using the object are finished with it, have decremented the reference count, and have forgotten about the object. So no thread will be attempting to lock the mutex when pthread_mutex_destroy is executed.
That's the typical design pattern. You don't destroy a mutex until all threads are done with it. The natural lifetime of a mutex means you don't have to synchronize destroying it.
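To make the lifetime concrete, here is a sketch of the other half of that pattern; obj_create() and obj_hold() are hypothetical helpers (the man page only shows obj_done), but they illustrate why the thread that drops the count to zero is the last one with a pointer to the object:

#include <pthread.h>
#include <stdlib.h>

struct obj {
    pthread_mutex_t om;
    int refcnt;
    /* ... payload ... */
};

/* hypothetical helper: create the object with one reference held */
struct obj *obj_create(void)
{
    struct obj *op = malloc(sizeof *op);
    if (op == NULL)
        return NULL;
    pthread_mutex_init(&op->om, NULL);
    op->refcnt = 1;                 /* the creator holds the first reference */
    return op;
}

/* hypothetical helper: take an extra reference before sharing op */
void obj_hold(struct obj *op)
{
    pthread_mutex_lock(&op->om);
    op->refcnt++;
    pthread_mutex_unlock(&op->om);
}

Every thread that received the object through obj_hold() eventually calls obj_done(); only the final caller destroys the mutex, and by then no other thread holds a pointer that could be used to lock it.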

POSIX binary semaphore

How can I implement a binary semaphore using the POSIX counting semaphore API? I am using an unnamed semaphore and need to limit its count to 1. I believe I can't use a mutex because I need to be able to unlock from another thread.
If you actually want a semaphore that "absorbs" multiple posts without allowing multiple waits to succeed, and especially if you want to be strict about that, POSIX semaphores are not a good underlying primitive to use to implement it. The right set of primitives to implement it on top of is a mutex, a condition variable, and a bool protected by the mutex. When changing the bool from 0 to 1, you signal the condition variable.
With that said, what you're asking for is something of a smell; it inherently has ambiguous orderings. For example, if threads A and B both post the semaphore one after another, and threads X and Y are both just starting to wait, it's possible with your non-counting semaphore that either both waits succeed or only one does, depending on the order of execution: ABXY or AXBY (or another comparable permutation). Thus, the pattern is likely erroneous unless either there's only one thread that could possibly post at any given time (in which case, why would it post more than once? maybe this is a non-issue) or the ability to post is controlled by holding some sort of lock (again, in which case why would it post more than once?). So if you don't have a design flaw here, it's likely that just using a counting semaphore, but not posting it more than once, gives the behavior you want.
If that's not the case, then there's probably some other data associated with the semaphore that's not properly synchronized, and you're trying to use the semaphore like a condition variable for it. If that's the case, just put a proper mutex and condition variable around it and use them, and forget the semaphore.
One comment for addressing your specific situation:
I believe I can't use a mutex because I need to be able to unlock from another thread.
This becomes a non-issue if you use a combination of mutex and condition variable, because you don't keep the mutex locked while working. Instead, the fact that the combined system is in-use is part of the state protected by the mutex (e.g. the above-mentioned bool) and any thread that can obtain the mutex can change it (to return it to a released state).
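As a concrete illustration, here is a minimal sketch of that mutex + condition variable + flag combination. The binsem_* names are illustrative, not a standard API; extra posts are absorbed, and any thread that can take the mutex can post or wait:

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int             posted;   /* the bool protected by the mutex */
} binsem_t;

void binsem_init(binsem_t *s, int initial)
{
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
    s->posted = initial ? 1 : 0;
}

void binsem_post(binsem_t *s)
{
    pthread_mutex_lock(&s->lock);
    if (!s->posted) {
        s->posted = 1;
        pthread_cond_signal(&s->cond);   /* signal only on the 0 -> 1 change */
    }
    pthread_mutex_unlock(&s->lock);      /* repeated posts are simply absorbed */
}

void binsem_wait(binsem_t *s)
{
    pthread_mutex_lock(&s->lock);
    while (!s->posted)                   /* loop guards against spurious wakeups */
        pthread_cond_wait(&s->cond, &s->lock);
    s->posted = 0;
    pthread_mutex_unlock(&s->lock);
}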

Semaphore wait within mutex lock

Can I call semaphore.wait() within a mutex_lock if, somewhere along that path, it can be guaranteed that the resource protected by the semaphore is available?
I.e. I want to do something like following:
void some_function() {
    mutex_lock()
    // Do something
    if (certain_conditions == TRUE) {
        semaphore_wait() // Guaranteed that the resource is available,
                         // so this cannot block.
    }
    // Do some more things
    mutex_unlock()
}
Basically, the answer to your question is: Yes. You can call a "wait" primitive on a semaphore within a Mutex lock context.
Actually, that is something that is constantly done. Think, for example, of the implementation of Message Queue IPC services with Counting Semaphores: you need to lock the Mutex protecting the queue before calling your "wait" primitive on the Counting Semaphore.
Thinking of POSIX, if you have to implement a Mailbox (typical Producer/Consumer example), you can safely and easily do it with Mutex and Condition Variables (which would be used as your Semaphores). What you want to do is nothing strange as long as you're in control of the situation.
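For reference, a minimal sketch of such a mailbox with one slot, using only a mutex and two condition variables (the mailbox_* names are illustrative, not part of any standard API):

#include <pthread.h>

static pthread_mutex_t mb_lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  mb_not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  mb_not_empty = PTHREAD_COND_INITIALIZER;
static int mb_full = 0;
static int mb_data;

void mailbox_put(int value)              /* producer */
{
    pthread_mutex_lock(&mb_lock);
    while (mb_full)                      /* wait until the slot empties */
        pthread_cond_wait(&mb_not_full, &mb_lock);
    mb_data = value;
    mb_full = 1;
    pthread_cond_signal(&mb_not_empty);  /* wake a waiting consumer */
    pthread_mutex_unlock(&mb_lock);
}

int mailbox_get(void)                    /* consumer */
{
    pthread_mutex_lock(&mb_lock);
    while (!mb_full)                     /* wait until the slot fills */
        pthread_cond_wait(&mb_not_empty, &mb_lock);
    int value = mb_data;
    mb_full = 0;
    pthread_cond_signal(&mb_not_full);   /* wake a waiting producer */
    pthread_mutex_unlock(&mb_lock);
    return value;
}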

General Race Condition

I am new to C and wanted to know about race conditions. I found this example on the internet; the task is to find the race condition and a solution to it.
My analysis is that the race condition is in the create-thread() method, specifically in the if-else statement. While the method is being accessed, another thread could be created or removed during the check-and-act, and threads_amt would be off.
To avoid the race condition, should the if-else be locked using a mutex, semaphores, etc.?
Can anyone correct me if I am wrong, and could you possibly show me how to implement a mutex?
#define MAXT 255

int threads_amt = 0;

int create-thread() // create a new thread
{
    int tid;
    if (threads_amt == MAXT) return -1;
    else
    {
        threads_amt++;
        return tid;
    }
}

void release-thread()
{
    /* release thread resources */
    --threads_amt;
}
Yes, the race condition here happens because you have no guarantee that the checking and the manipulation of threads_amt will happen without another thread running in between.
Three solutions off the top of my head:
1) Force mutual exclusion on that part of the code, using a binary semaphore (or mutex) to protect the if-else part.
2) Use a semaphore with initial value MAXT, and then, upon calling create_thread (mind, you can't use hyphens in function names!), call "wait()" (depending on the semaphore implementation it may have a different name, such as sem_wait()). After that, create the thread. When calling release_thread(), simply call "signal()" (sem_post() when using semaphore.h).
3) This is more of a "hardware" solution: you could assume that you are given an atomic function that performs the entire if-else part, and therefore avoids any race condition.
Of these solutions, the "easiest" one (based on the code you already have) is the first one.
Let's use semaphore.h's semaphores:
#include <semaphore.h>

#define MAXT 255

// Global semaphore
sem_t s;
int threads_amt = 0;

int main ()
{
    ...
    sem_init(&s, 0, 1); // init semaphore (initial value = 1)
    ...
}

int create_thread() // create a new thread
{
    int tid;

    sem_wait(&s);
    if (threads_amt == MAXT) {
        sem_post(&s); // the semaphore is now available
        return -1;
    }
    else
    {
        threads_amt++;
        sem_post(&s); // the semaphore is now available
        return tid;
    }
}

void release_thread()
{
    /* release thread resources */
    sem_wait(&s);
    --threads_amt;
    sem_post(&s);
}
This should work just fine.
I hope it's clear. If it's not, I suggest studying how semaphores work (use the web, or pick up an operating systems book). Also, you mentioned that you are new to C: IMHO you should start with something easier than this; semaphores aren't exactly the next thing you want to learn after 'hello world' ;-)
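For completeness, a sketch of the second approach as well (untested): the semaphore itself counts the free slots, so no explicit comparison against MAXT is needed. Here sem_trywait is used so that create_thread can still return -1 instead of blocking when all slots are taken:

#include <semaphore.h>

#define MAXT 255

sem_t slots;                         // counts free thread slots

int main ()
{
    /* ... */
    sem_init(&slots, 0, MAXT);       // initial value = MAXT
    /* ... */
}

int create_thread()                  // create a new thread
{
    int tid = 0;                     // placeholder, as in the original code
    if (sem_trywait(&slots) != 0)    // no free slot: same effect as the MAXT check
        return -1;
    /* ... actually create the thread and set tid ... */
    return tid;
}

void release_thread()
{
    /* release thread resources */
    sem_post(&slots);                // one slot is available again
}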
The race condition is not in the if() statements.
It is with access to the variable threads_amt that is potentially changed and accessed at the same time in multiple threads.
Essentially, any thread that modifies the variable must have exclusive access to avoid a race condition. That means all code which modifies the variable or reads its value must be synchronised (e.g. grab a mutex first, release it after). Readers don't necessarily need exclusive access from each other (two threads reading at the same time won't affect each other), but writers do, so avoid reading a value in one thread while another thread is changing it. Such considerations can be opportunities to use synchronisation methods other than a mutex, for example semaphores.
To use a mutex, it is necessary to create it first (e.g. during program startup). Then grab it when needed, and remember to release it when done. Every function should minimise the time that it holds the mutex, since other threads trying to grab the mutex will be forced to wait.
The trick is to make the grabbing and releasing of the mutex unconditional, wherever it occurs (i.e. a function that grabs the mutex must not be able to return without releasing it). That depends on how you structure each function.
The actual code depends on which threading library you're using (so you need to read the documentation), but the concepts are the same. All threading libraries have functions for creating, grabbing (or entering), and releasing mutexes, semaphores, and so on.
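If the library in question happens to be pthreads, the same idea looks roughly like this; it is a sketch of the concept above (grab before touching threads_amt, release on every path out), not a drop-in for whatever library you actually use:

#include <pthread.h>

#define MAXT 255

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  // created up front
static int threads_amt = 0;

int create_thread()                    // create a new thread
{
    int tid = 0;                       // placeholder, as in the original code
    pthread_mutex_lock(&lock);         // grab: exclusive access to threads_amt
    if (threads_amt == MAXT) {
        pthread_mutex_unlock(&lock);   // release before the early return
        return -1;
    }
    threads_amt++;
    pthread_mutex_unlock(&lock);       // hold the mutex as briefly as possible
    return tid;
}

void release_thread()
{
    pthread_mutex_lock(&lock);
    --threads_amt;
    pthread_mutex_unlock(&lock);
}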

Does pthread_cond_wait(&cond_t, &mutex); unlock and then lock the mutex?

I'm using pthread_cond_wait(&cond_t, &mutex); in my program and I'm wondering why this function needs a mutex variable as its second parameter.
Does pthread_cond_wait() unlock the mutex at the beginning (at the start of executing pthread_cond_wait()) and then lock it again when it finishes (just before leaving pthread_cond_wait())?
There are many texts on the subject of condition variables and their usage, so I'll not bore you with a ton of ugly details. The reason they exist at all is to allow you to notify a change in a predicate's state. The following are critical to understanding proper use of condition variables and their mutex association:
pthread_cond_wait() simultaneously unlocks the mutex and begins waiting for the condition variable to be signalled. Thus you must always have ownership of the mutex before invoking it.
pthread_cond_wait() returns with the mutex locked, thus you must unlock the mutex to allow its use somewhere else when finished with it. Whether the return happened because the condition variable was signalled or not isn't relevant. You still need to check your predicate regardless to account for potential spurious wakeups.
The purpose of the mutex is not to protect the condition variable; it is to protect the predicate on which the condition variable is being used as a signaling mechanism. This is hands-down the most often misunderstood idiom of pthread condition variables and their mutexes. The condition variable doesn't need mutual exclusion protection; the predicate data does. Think of the predicate as an outside-state which is being monitored by the users of the condition-variable/mutex pair.
For example, a trivial yet obviously wrong piece of code to wait for a boolean flag fSet:
bool fSet = false;

int WaitForTrue()
{
    while (!fSet)
    {
        sleep(n);
    }
}
It should be obvious that the main problem is that the predicate, fSet, is not protected at all. Many things can go wrong here. For example: from the time you evaluate the while-condition until the time you begin waiting (or spinning, or whatever), the value may have changed. If that change notification is somehow missed, you're needlessly waiting.
We can change this a little so at least the predicate is protected somehow. Mutual exclusion in both modifying and evaluating the predicate is easily provided with (what else) a mutex.
pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
bool fSet = false;

int WaitForTrue()
{
    pthread_mutex_lock(&mtx);
    while (!fSet)
        sleep(n);
    pthread_mutex_unlock(&mtx);
}
Well, that seems simple enough. Now we never evaluate the predicate without first getting exclusive access to it (by latching the mutex). But there is still a major problem. We latched the mutex, but we never release it until our loop is finished. If everyone else plays by the rules and waits for the mutex lock before evaluating or modifying fSet, they'll never be able to do so until we give up the mutex. The only "someone" that can do that in this case is us.
So what about adding still more layers to this? Will this work?
pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
bool fSet = false;

int WaitForTrue()
{
    pthread_mutex_lock(&mtx);
    while (!fSet)
    {
        pthread_mutex_unlock(&mtx);
        // XXXXX
        sleep(n);
        // YYYYY
        pthread_mutex_lock(&mtx);
    }
    pthread_mutex_unlock(&mtx);
}
Well, yes, it will "work", but it's still not much better. During the period between XXXXX and YYYYY we don't own the mutex (which is fine, since we're not checking or modifying fSet anyway). But at any time during that period some other thread can (a) obtain the mutex, (b) modify fSet, and (c) release the mutex, and we won't know a thing about it until we finish our sleep(), once again obtain the mutex lock, and loop around for another check.
There has to be a better way. Somehow there should be a way that we can release the mutex and begin waiting for some sort of signal that tells us a change in the predicate may have happened. Equally important, when we receive that signal and return to our code, we should already own the lock that grants us access to check the predicate data. This is exactly what a condition-variable is designed to provide.
The Condition Variable In Action
Enter the condition-variable + mutex pair. The mutex protects access to changing or checking the predicate, while the condition variable sets up a system of monitoring a change, and more importantly, doing so atomically (as far as you're concerned, anyway) with the predicate mutual exclusion:
int WaitForPredicate()
{
    // lock mutex (means: lock access to the predicate)
    pthread_mutex_lock(&mtx);

    // we can safely check this, since no one else should be
    // changing it unless they have the mutex, which they don't
    // because we just locked it.
    while (!predicate)
    {
        // predicate not met, so begin waiting for notification
        // that it has been changed *and* release access to change it
        // to anyone wanting to, by unlatching the mutex, doing
        // both (start waiting and unlatching) atomically
        pthread_cond_wait(&cv, &mtx);

        // upon arriving here, the above returns with the mutex
        // latched (we own it). The predicate *may* be true, and
        // we'll be looping around to see if it is, but we can
        // safely do so because we own the mutex coming out of
        // the cv-wait call.
    }

    // we still own the mutex here. further, we have assessed the
    // predicate is true (thus how we broke the loop).
    // take whatever action needed.

    // You *must* release the mutex before we leave. Remember, we
    // still own it even after the code above.
    pthread_mutex_unlock(&mtx);
}
For some other thread to signal the loop above, there are several ways to do it; the two most popular are shown below:
pthread_mutex_lock(&mtx);
// TODO: change predicate state here as needed.
pthread_mutex_unlock(&mtx);
pthread_cond_signal(&cv);
Another way...
pthread_mutex_lock(&mtx);
// TODO: change predicate state here as needed.
pthread_cond_signal(&cv);
pthread_mutex_unlock(&mtx);
Each has different intrinsic behavior, and I invite you to do some homework on those differences and determine which is more appropriate for your specific circumstances. The former provides better program flow at the expense of introducing potentially unwarranted wake-ups. The latter reduces those wake-ups, but at the price of less context synergy. Either will work in our sample, and you can experiment with how each affects your waiting loops. Regardless, one thing is paramount, and both methods fulfill this mandate:
Never change, nor check, the predicate condition unless the mutex is locked. Ever.
Simple Monitor Thread
This type of operation is common in a monitor thread that acts on a specific predicate condition, which (sans error checking) typically looks something like this:
void* monitor_proc(void *pv)
{
    // acquire mutex ownership
    // (which means we own change-control to the predicate)
    pthread_mutex_lock(&mtx);

    // heading into monitor loop, we own the predicate mutex
    while (true)
    {
        // safe to check; we own the mutex
        while (!predicate)
            pthread_cond_wait(&cv, &mtx);

        // TODO: the cv has been signalled. our predicate data should include
        // data to signal a break-state to exit this loop and finish the proc,
        // as well as data that we may check for other processing.
    }

    // we still own the mutex. remember to release it on exit
    pthread_mutex_unlock(&mtx);
    return pv;
}
A More Complex Monitor Thread
Modifying this basic form to account for a notification system that doesn't require you to keep the mutex latched once you've picked up the notification becomes a little more involved, but not by very much. Below is a monitor proc that does not keep the mutex latched during regular processing once we've established we've been served (so to speak).
void* monitor_proc(void *pv)
{
    // acquire mutex ownership
    // (which means we own change-control to the predicate)
    pthread_mutex_lock(&mtx);

    // heading into monitor loop, we own the predicate mutex
    while (true)
    {
        // check predicate
        while (!predicate)
            pthread_cond_wait(&cv, &mtx);

        // some state that is part of the predicate to
        // inform us we're finished
        if (break-state)
            break;

        // TODO: perform latch-required work here.

        // unlatch the mutex to do our predicate-independent work.
        pthread_mutex_unlock(&mtx);

        // TODO: perform no-latch-required work here.

        // re-latch mutex prior to heading into wait
        pthread_mutex_lock(&mtx);
    }

    // we still own the mutex. remember to release it on exit
    pthread_mutex_unlock(&mtx);
    return pv;
}
Where would someone use something like that? Well, suppose your "predicate" is the "state" of a work queue, together with some flag telling you to stop looping and exit. Upon receiving the notification that something is "different", you check whether you should continue executing your loop and, deciding you should, pop some data off the queue. Modifying the queue requires the mutex to be latched (remember, its "state" is part of our predicate). Once we have popped our data, we have it locally and can process it independently of the queue state, so we release the mutex, do our thing, then re-acquire the mutex for the next go-around. There are many ways to code the above concept, including judicious use of pthread_cond_broadcast, etc. But the basic form is hopefully understandable.
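To round that out, the producer side of such a work queue might look like the sketch below. It reuses mtx and cv from the snippets above; work_t, work_queue, queue_push() and shutdown_requested are hypothetical placeholders, not part of any real API:

/* hypothetical producer for the work-queue example above */
void submit_work(work_t *item)
{
    pthread_mutex_lock(&mtx);        // the queue state is part of the predicate
    queue_push(&work_queue, item);   // modify the predicate only under the mutex
    pthread_cond_signal(&cv);        // tell a waiting monitor something changed
    pthread_mutex_unlock(&mtx);
}

/* hypothetical shutdown request, flipping the "break-state" part of the predicate */
void request_shutdown(void)
{
    pthread_mutex_lock(&mtx);
    shutdown_requested = true;
    pthread_cond_broadcast(&cv);     // wake every monitor so each can exit
    pthread_mutex_unlock(&mtx);
}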
This turned out to be considerably longer than I had hoped, but this is a major hurdle for people learning pthread-programming, and I feel it is worth the extra time/effort. I hope you got something out of it.
When the first thread calls pthread_cond_wait(&cond_t, &mutex); it releases the mutex and waits until the condition cond_t is signalled and the mutex is available again.
So when pthread_cond_signal is called in the other thread, it doesn't immediately "wake up" the waiting thread: the mutex must be unlocked first, and only then does the first thread have a chance to get the lock, which is why "upon successful return of pthread_cond_wait the mutex shall have been locked and shall be owned by the calling thread."
Yes, it unlocks, waits for the condition to be signalled, and then waits until it can reacquire the passed mutex.
