I want to find out if a pthread lock variable is locked or not.
One simple approach is to use a trylock as shown below
pthread_mutex_t lockVar = PTHREAD_MUTEX_INITIALIZER;

if (pthread_mutex_trylock(&lockVar) == 0)
{
    // lock was successful
    pthread_mutex_unlock(&lockVar);
}
else
{
    // someone else holds the lock
}
How do I do the same thing without obtaining a lock?
When you have multiple executions happening concurrently, there is no notion of "simultaneity". Events cannot be given a global order in time, except when explicit synchronization happens. No property observable from within the program can establish such an ordering in general.
Specifically, the question "is that mutex over there locked" is meaningless. It has no answer upon which you can act. Whatever the fictitious answer is, the state of the mutex can change right after. There is nothing you can do with an answer like "yes, it's locked" (it might become unlocked at the same time), or "no, it's unlocked" (it may get locked before you get there).
The only thing you can do with a mutex is try to lock it. You either fail, and thus know that it was locked, or you succeed and thus know that it wasn't, and now you have the lock.
Regardless of what you want to do with the result, the way you're doing it right now is the only way. You cannot simply query whether a mutex is locked without locking it. If you want a synchronization primitive for which you can query the status without modifying it, POSIX semaphores would be a possibility. A binary semaphore can serve as a lock (although it lacks the concept of an owner, which may be a problem for you if you need recursive locking) and sem_getvalue can determine whether or not the semaphore is "locked" at a single moment in time.
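As a hedged sketch of that alternative (assuming an unnamed, process-private semaphore), a binary semaphore used as a lock could look like this. Note that the value reported by sem_getvalue is only a snapshot and may be stale by the time you act on it:

#include <semaphore.h>
#include <stdio.h>

static sem_t lock_sem;              /* binary semaphore used as a lock */

int main(void)
{
    sem_init(&lock_sem, 0, 1);      /* initial value 1 == "unlocked" */

    int value;
    sem_getvalue(&lock_sem, &value);    /* snapshot only; may change immediately */
    printf("currently %s\n", value > 0 ? "unlocked" : "locked");

    sem_wait(&lock_sem);            /* "lock": value goes 1 -> 0 */
    /* ... critical section ... */
    sem_post(&lock_sem);            /* "unlock": value goes 0 -> 1 */

    sem_destroy(&lock_sem);
    return 0;
}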
As for whether the question you're asking is even meaningful, Kerrek has already told you it's usually not. Here are a few possible, minimally-useful pieces of information you could gather from negative or positive results:
Negative:
If you know the lock started out locked, and won't be locked again once it's unlocked, this tells you an operation has finished. Here, a trylock-and-unlock approach would prevent multiple threads from being able to do this, since it would break the "never locked again once it's unlocked" invariant.
A negative result may also indicate that it's worth preparing data without the lock held, and then applying it later under a trylock/unlock held only for a brief interval once the data is ready (see the sketch after this list). But of course you have to prepare for the case where things have changed by the time you take the lock later.
Positive:
If you know that the lock will not be unlocked once it's taken until your thread performs a further action allowing the lock-holder to proceed, this gives you information that a lock-holder has arrived.
Maybe others...
But for the most part, "is the mutex locked?" is not a useful question.
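A minimal sketch of that prepare-then-publish pattern, assuming a made-up shared_data variable and helper name (try_publish); error handling is omitted:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t data_lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_data;                     /* protected by data_lock */

/* Try to publish a precomputed value; return true on success. */
static bool try_publish(int prepared_value)
{
    if (pthread_mutex_trylock(&data_lock) != 0)
        return false;                       /* someone else holds the lock; retry later */

    /* Re-check assumptions here: the world may have changed since we prepared. */
    shared_data = prepared_value;
    pthread_mutex_unlock(&data_lock);
    return true;
}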
Related
How can I implement a binary semaphore using the POSIX counting semaphore API? I am using an unnamed semaphore and need to limit its count to 1. I believe I can't use a mutex because I need to be able to unlock from another thread.
If you actually want a semaphore that "absorbs" multiple posts without allowing multiple waits to succeed, and especially if you want to be strict about that, POSIX semaphores are not a good underlying primitive to use to implement it. The right set of primitives to implement it on top of is a mutex, a condition variable, and a bool protected by the mutex. When changing the bool from 0 to 1, you signal the condition variable.
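A minimal sketch of that construction (the struct and function names are made up, and error handling is omitted); extra posts while the flag is already set are simply absorbed:

#include <pthread.h>
#include <stdbool.h>

struct binsem {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    bool            posted;     /* the "value" of the binary semaphore */
};

void binsem_init(struct binsem *s)
{
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
    s->posted = false;
}

void binsem_post(struct binsem *s)
{
    pthread_mutex_lock(&s->lock);
    if (!s->posted) {           /* 0 -> 1 transition: signal a waiter */
        s->posted = true;
        pthread_cond_signal(&s->cond);
    }                           /* already 1: the post is absorbed */
    pthread_mutex_unlock(&s->lock);
}

void binsem_wait(struct binsem *s)
{
    pthread_mutex_lock(&s->lock);
    while (!s->posted)
        pthread_cond_wait(&s->cond, &s->lock);
    s->posted = false;          /* consume the post */
    pthread_mutex_unlock(&s->lock);
}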
With that said, what you're asking for is something of a smell; it inherently has ambiguous orderings. For example, if threads A and B both post the semaphore one after another, and threads X and Y are both just starting to wait, it's possible with your non-counting semaphore that either both waits succeed or that only one does, depending on the order of execution: ABXY or AXBY (or another comparable permutation). Thus, the pattern is likely erroneous unless either there's only one thread that could possibly post at any given time (in which case, why would it post more than once? maybe this is a non-issue) or the ability to post is controlled by holding some sort of lock (again, in which case why would it post more than once?). So if you don't have a design flaw here, it's likely that just using a counting semaphore, but not posting it more than once, gives the behavior you want.
If that's not the case, then there's probably some other data associated with the semaphore that's not properly synchronized, and you're trying to use the semaphore like a condition variable for it. If that's the case, just put a proper mutex and condition variable around it and use them, and forget the semaphore.
One comment for addressing your specific situation:
I believe I can't use a mutex because I need to be able to unlock from another thread.
This becomes a non-issue if you use a combination of mutex and condition variable, because you don't keep the mutex locked while working. Instead, the fact that the combined system is in use is part of the state protected by the mutex (e.g. the above-mentioned bool), and any thread that can obtain the mutex can change it (to return it to a released state).
Let's say two semaphores are protecting a critical piece of code, and you only want that code to execute if both of them are available. Is there a pattern for writing this?
In other words, is there a statement that reads, "If semaphore a and b are available, then run... otherwise sleep"?
The simplest way to implement this is to use a single pthread_mutex_t to protect some state, and a single pthread_cond_t to notify all threads when the state has changed. If you always broadcast on the condvar, then you will always wake all waiting threads. The threads can then perform arbitrarily complex tests and updates to the shared state.
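A minimal sketch of that idea, using two made-up boolean flags (a_free and b_free) as the shared state; error handling is omitted:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  state_changed = PTHREAD_COND_INITIALIZER;
static bool a_free = true, b_free = true;   /* the shared state */

void acquire_both(void)
{
    pthread_mutex_lock(&state_lock);
    while (!(a_free && b_free))             /* arbitrarily complex test goes here */
        pthread_cond_wait(&state_changed, &state_lock);
    a_free = b_free = false;
    pthread_mutex_unlock(&state_lock);
}

void release_both(void)
{
    pthread_mutex_lock(&state_lock);
    a_free = b_free = true;
    pthread_cond_broadcast(&state_changed); /* wake everyone so they can re-test */
    pthread_mutex_unlock(&state_lock);
}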
Of course, this is not the most efficient solution since it potentially wakes threads when the state does not satisfy the condition they are waiting for (and they have to go back to sleep). It could also lead to starvation since a thread may always find itself at the back of the queue whenever it waits on the condvar, and never find an acceptable state when it awakens.
Without knowing more details of the problem you are trying to solve, it is hard to give an airtight answer.
pthreads does not allow you to acquire multiple locks/semaphores atomically; however, as pointed out by @Greg, you can avoid deadlock by assigning an order to the locks/semaphores and having the threads always acquire them in that order (a sketch of this discipline follows). Of course, you have to know which locks you intend to acquire before you start to acquire any of them. It will not work if you cannot determine the next lock to acquire until you have acquired the current one, since you may be required to take a lock out of order. If you release all of the locks and start over, you may find the state has changed, requiring you to acquire a different set of locks, which could lead to livelock.
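A hedged illustration of that ordering rule (the two mutexes are made up): every thread takes lock_a before lock_b, so a cycle of waiters cannot form:

#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;  /* rank 1: always taken first */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;  /* rank 2: always taken second */

void do_work_needing_both(void)
{
    pthread_mutex_lock(&lock_a);    /* every thread acquires in the same order... */
    pthread_mutex_lock(&lock_b);

    /* ... work that needs both resources ... */

    pthread_mutex_unlock(&lock_b);  /* ...and releases in the reverse order */
    pthread_mutex_unlock(&lock_a);
}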
I know that this isn't a "homework helper website", but I've been going insane over the last few days because I have to implement access to a resource while avoiding starvation, and I can't figure out how to do it. Can anyone help me with some application examples or documentation? The assignment is: a resource may be used by 2 types of processes: black and white. When the resource is used by the white processes, it cannot be used by the black processes and vice-versa. Implement the access to the resource avoiding starvation. Is this a producer-consumer case?
Let's make a few assumptions (for the sake of discussion):
Our processes will be threads -- not actual software processes; there's a difference, which may be important in your assignment.
White processes are Readers.
Black processes are Writers.
Our shared resource is a particular variable.
Mutual exclusion locks (mutex):
A mutex is a type of exclusive lock. It has a binary state: it's either locked or unlocked. You can lock it, unlock it, or try to lock it without blocking (a "trylock") to find out whether it was free at that moment.
Threads can lock each other out using mutex (mutual exclusion locks) just as processes can lock each other out using semaphores.
When you want to protect a variable from being used by two threads at once, you create a mutex for that variable and write every thread so that it attempts to lock the mutex before using the variable and unlocks it afterwards.
The first thread to arrive locks the mutex, and any subsequent thread blocks until the first thread unlocks it, basically forcing all of these threads to line up and operate on that particular variable sequentially.
This is a bit inefficient when you just want to read the variable rather than change its value, because two threads reading the same content doesn't create any conflict or invalid data. Two threads writing at the same time, however, might corrupt the data.
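A minimal sketch of that locking discipline in C (the counter is a made-up stand-in for your shared variable):

#include <pthread.h>

static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;                 /* the variable the mutex protects */

void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);   /* subsequent threads block here */
        counter++;                           /* only one thread at a time gets here */
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}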
Readers/Writers locks (RWL):
Most implementations of Readers/Writers locks will use a shared lock and an exclusive lock, but they expose a simple usage approach: if you want to read, grab a "read lock"; if you want to write, grab a "write lock".
"Read locks" are not exclusive and they allow multiple readers to be reading at one particular time (without blocking).
"Write locks" are exclusive and only one writer can be writing at one particular time (without blocking).
Starvation:
First step (this is how writer starvation arises with Readers/Writers locks): a first (read) thread grabs a "read lock" on the variable; a second (write) thread then tries to grab a "write lock" but is blocked until all readers finish reading.
Second step: before the first thread finishes reading, a third (read) thread grabs a "read lock" on the variable; this means the second (write) thread has to wait for this third thread to finish.
Repeat the second step until starvation is achieved.
Avoiding starvation with Seqlock:
A seqlock is implemented with one mutex and a sequence counter. It always allows reading, even while a writer is writing to the variable, but it gives the readers a means of checking whether the data was written to during the time it was being read; if so, the data may be inconsistent, and the readers have to reread it and check for consistency again.
The "read & consistency check" phase runs in a loop until the check confirms consistency of the data, at which point the reader can continue with its usual task.
The writers use the mutex to grab exclusive access so they never overlap their operations.
This is good for high-read, low-write situations. If there were too many writers, the readers would loop continuously, rereading the data.
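A minimal seqlock sketch along the lines described (a writer mutex plus a sequence counter; the protected value is made up, and a production version would also need memory barriers on weakly ordered hardware):

#include <pthread.h>

static pthread_mutex_t writer_lock = PTHREAD_MUTEX_INITIALIZER;
static volatile unsigned seq;        /* even: stable, odd: write in progress */
static volatile int shared_data;     /* the protected value */

void seq_write(int v)
{
    pthread_mutex_lock(&writer_lock);   /* writers never overlap */
    seq++;                              /* now odd: readers will retry */
    shared_data = v;
    seq++;                              /* even again: data is consistent */
    pthread_mutex_unlock(&writer_lock);
}

int seq_read(void)
{
    unsigned start;
    int v;
    do {
        do {
            start = seq;                /* wait for an even (stable) count */
        } while (start & 1);
        v = shared_data;
    } while (seq != start);             /* reread if a writer intervened */
    return v;
}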
Your particular situation:
If black processes need to be able to share the resource among themselves, and white processes need to be able to share the resource among themselves, but white processes can't share the resource with black processes, then the solution will be neither RWL nor Seqlock.
A variation on the Seqlock algorithm might be your solution.
Generally, this is a problem of access to a shared resource, protected by a mutex.
If you have two objects of the same class, each running in its own thread, both threads do something like this (pseudo-code):
loop
    lock shared_resource      // blocks until the resource is free
    do something
    unlock shared_resource
This is in VERY broad terms!
Suppose I have multiple threads blocking on a call to pthread_mutex_lock(). When the mutex becomes available, does the first thread that called pthread_mutex_lock() get the lock? That is, are calls to pthread_mutex_lock() in FIFO order? If not, what, if any, order are they in? Thanks!
When the mutex becomes available, does the first thread that called pthread_mutex_lock() get the lock?
No. One of the waiting threads gets the lock, but which one gets it is not specified.
FIFO order?
A FIFO mutex is more of a pattern than a built-in primitive. See Implementing a FIFO mutex in pthreads
"If there are threads blocked on the mutex object referenced by mutex when pthread_mutex_unlock() is called, resulting in the mutex becoming available, the scheduling policy shall determine which thread shall acquire the mutex."
Aside from that, the answer to your question isn't specified by the POSIX standard. It may be random, or it may be in FIFO or LIFO or any other order, according to the choices made by the implementation.
FIFO ordering is about the least efficient mutex wake order possible. Only a truly awful implementation would use it. The thread that ran most recently may be able to run again without a context switch, and the more recently a thread ran, the more of its data and code will be hot in the cache. Reasonable implementations try to give the mutex to the thread that held it most recently, most of the time.
Consider two threads that do this:
Acquire a mutex.
Adjust some data.
Release the mutex.
Go to step 1.
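Rendered as a hedged C sketch (the adjusted data is a made-up counter, and the iteration count is arbitrary):

#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static long data;

void *adjuster(void *arg)
{
    (void)arg;
    for (long i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&m);     /* 1. acquire the mutex */
        data++;                     /* 2. adjust some data */
        pthread_mutex_unlock(&m);   /* 3. release, then go to step 1 */
    }
    return NULL;
}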
Now imagine two threads running this code on a single core CPU. It should be clear that FIFO mutex behavior would result in one "adjust some data" per context switch -- the worst possible outcome.
Of course, reasonable implementations generally do give some nod to fairness. We don't want one thread to make no forward progress. But that hardly justifies a FIFO implementation!
I am a little bit confused trying to implement a very simple mutex (lock) in C. I understand that a mutex is similar to a binary semaphore, except that the mutex also enforces the constraint that the thread that releases the lock must be the same thread that most recently acquired it. I am confused about how the ownership is kept track of.
This is what I have so far. Keep in mind that it is not completed yet, and is supposed to be really simple (uniprocessor, no recursion on the mutex, disabling interrupts as the mutual exclusion method, etc.).
struct mutex {
    char *mutexName;
    volatile int inUse;
};
I believe I should add another member variable, i.e., whoIsOwner, but I am kind of confused as to what to store there. I assume it has to be something that can uniquely identify the thread trying to take the lock? Is this correct?
I have a thread structure in place that has a "char *threadName" member variable (along with others), but I'm not sure how I would access this from within the mutex implementation.
Any pointers/hints/ideas would be appreciated.
You could implement the mutex as an atomic integer which is 0 when unlocked, and which takes the value of the locking thread's ID to indicate it's locked. Of course access to the variable has to be atomic, and suitably fenced to prevent reordering (acquire-release fence pairs suffice).
Ultimately you can of course never prevent yourself from shooting yourself in the foot; if you really want you can overwrite the mutex's memory by force from another thread, or something like that. You'll only get the correct behaviour if you use the tools correctly. With that in mind, you might be satisfied with a simple bool for the locking variable.
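A hedged sketch of that atomic-integer approach, using C11 atomics rather than the interrupt-disabling uniprocessor method from the question. Since pthread_t isn't portably an integer, the sketch assumes each thread passes in its own nonzero numeric ID; the struct and function names are made up:

#include <stdatomic.h>
#include <stdbool.h>

struct mylock {
    atomic_uint owner;   /* 0 == unlocked, otherwise the owner's ID; init with {0} */
};

/* Spin until we swap 0 -> my_id. my_id must be a nonzero, per-thread unique value. */
void mylock_acquire(struct mylock *m, unsigned my_id)
{
    unsigned expected = 0;
    while (!atomic_compare_exchange_weak_explicit(&m->owner, &expected, my_id,
                                                  memory_order_acquire,
                                                  memory_order_relaxed)) {
        expected = 0;    /* the failed CAS overwrote 'expected'; reset and retry */
    }
}

/* Only the owner may release; returns false if the caller doesn't own the lock. */
bool mylock_release(struct mylock *m, unsigned my_id)
{
    unsigned expected = my_id;
    return atomic_compare_exchange_strong_explicit(&m->owner, &expected, 0,
                                                   memory_order_release,
                                                   memory_order_relaxed);
}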
uint32_t semOwner;
If the above field is 0, then it is available. If it is "owned", then let it be set to the ID of the owning task, or thread, or Process ID/Thread ID combo (or some other combination that may suit your system).
Hope this helps.