Mutex waits forever after destroy and re-init - c

I am trying to create a multi-threaded app in C. At some point the program blocks forever while trying to acquire a lock on mutexQueue, but I don't know why. This happens after the mutex is destroyed and re-created.
for(int i = 80; i <= 8080; i++)
{
    pthread_mutex_init(&mutexQueue, NULL);
    ...
    pthread_mutex_lock(&mutexQueue);   // <= here it waits forever, after the first iteration (when i = 81)
    ...
    pthread_mutex_destroy(&mutexQueue);
}
The first time it gets past pthread_mutex_lock, so it can acquire the lock; the second time it cannot.
Is there a problem with destroying the mutex and then re-initializing it afterwards?
Full program execution in real time : https://onlinegdb.com/T5kzCaFUA
EDIT:
As @John Carter suggested, reading the current pthread documentation (https://pubs.opengroup.org/onlinepubs/007904875/functions/pthread_mutex_destroy.html), it says:
In cases where default mutex attributes are appropriate, the macro
PTHREAD_MUTEX_INITIALIZER can be used to initialize mutexes that are
statically allocated. The effect shall be equivalent to dynamic
initialization by a call to pthread_mutex_init() with parameter attr
specified as NULL, except that no error checks are performed.
I also sometimes get the error __pthread_mutex_cond_lock_adjust: Assertion `(mutex->__data.__kind & 128) == 0' failed. after a long run.
So the error should be somewhere around there; I'm still searching for it.
Thank you.

Are you unlocking the mutex? Destroying a locked mutex results in undefined behaviour:
The pthread_mutex_destroy() function shall destroy the mutex object referenced by mutex; the mutex object becomes, in effect, uninitialized. An implementation may cause pthread_mutex_destroy() to set the object referenced by mutex to an invalid value. A destroyed mutex object can be reinitialized using pthread_mutex_init(); the results of otherwise referencing the object after it has been destroyed are undefined.
pthread_mutex_destroy

Is there a problem with destroying the mutex and then re-initializing it afterwards?
If something might still be using it, yes.
The code you showed is crazy: a shared queue's mutex should live as long as the queue it protects.
Further, the code you show acquires the lock and never unlocks it. That doesn't make much sense either.
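To illustrate that lifetime, here's a minimal sketch (worker_fn, the thread count, and the queue comments are placeholders I've made up, not code from the question): the mutex is initialized once before any thread runs and destroyed once after every thread has been joined.

#include <pthread.h>

pthread_mutex_t mutexQueue;                  /* guards the shared queue */

static void *worker_fn(void *arg)            /* hypothetical worker */
{
    (void)arg;
    pthread_mutex_lock(&mutexQueue);
    /* ... pop a task from the queue ... */
    pthread_mutex_unlock(&mutexQueue);       /* always pair lock with unlock */
    return NULL;
}

int main(void)
{
    pthread_t workers[4];

    pthread_mutex_init(&mutexQueue, NULL);   /* init once, before any thread uses it */

    for (int t = 0; t < 4; t++)
        pthread_create(&workers[t], NULL, worker_fn, NULL);

    /* ... submit work, each queue access bracketed by lock/unlock ... */

    for (int t = 0; t < 4; t++)
        pthread_join(workers[t], NULL);

    pthread_mutex_destroy(&mutexQueue);      /* destroy once, after all users are gone */
    return 0;
}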

I don't know exactly what your problem really is, but I think you have some misunderstanding of mutexes and threads, so I'll explain some basic knowledge about mutexes.
A mutex, in its most fundamental form, is just an integer in memory.
This integer can have a few different values depending on the state of
the mutex. When we speak of mutexes, we also need to speak about the
locking operations. The integer in memory is not intriguing, but the
operations around it are.
There are two fundamental operations which a mutex must provide:
lock
unlock
A thread wishing to use the mutex must first call lock, then
eventually call unlock to release it.
So, as I see in your code:
for(int i = 80; i <= 8080; i++)
{
    pthread_mutex_init(&mutexQueue, NULL);
    ...
    pthread_mutex_lock(&mutexQueue);   // here you lock 'mutexQueue', so 'mutexQueue' needs
                                       // to be unlocked before it can be passed again the
                                       // second time; otherwise it's going to stop here forever.
    ...
    pthread_mutex_destroy(&mutexQueue);
}
Where you call pthread_mutex_lock(&mutexQueue);, your code will normally wait forever until pthread_mutex_unlock(&mutexQueue); is called in another thread; only then can a waiting thread proceed and lock the mutex in its turn, and so on.
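Applied to the loop from the question, the minimal fix is to unlock before destroying; a sketch (the elided parts stay as in the question):

for (int i = 80; i <= 8080; i++)
{
    pthread_mutex_init(&mutexQueue, NULL);
    ...
    pthread_mutex_lock(&mutexQueue);
    /* ... critical section ... */
    pthread_mutex_unlock(&mutexQueue);   /* unlock before destroying */
    pthread_mutex_destroy(&mutexQueue);  /* only legal on an unlocked mutex */
}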
You can check that website, it has some good information about threads: https://mortoray.com/how-does-a-mutex-work-what-does-it-cost/
As for me, I worked on a project called dining_philosopher_problem and it helped me a lot to understand threads/mutexes/shared memory/processes/semaphores/...
You can check it here: https://github.com/mittous/philosopher

Related

How to assure that no other thread acquires a lock immediately before you destroy a mutex

The Linux man page for pthread_mutex_destroy includes the code snippet below.
One thing I don't understand about this procedure for destroying a mutex is: how do we know that between pthread_mutex_unlock and pthread_mutex_destroy, no other thread tries to acquire a lock on said mutex?
Typically, how should this be handled? 1) Should an additional mutex be used to ensure that this cannot happen? 2) Or is it the client's responsibility not to try to increase the reference count after it hits 0?
obj_done(struct obj *op)
{
    pthread_mutex_lock(&op->om);
    if (--op->refcnt == 0) {
        pthread_mutex_unlock(&op->om);
(A)     pthread_mutex_destroy(&op->om);
(B)     free(op);
    } else
(C)     pthread_mutex_unlock(&op->om);
}
Something should be done to ensure the mutex isn’t going to get another lock attempt while you’re destroying it, yes. In the example case, with a reference count going to 0 involved, it's reasonable to expect that the thread holding the mutex is also the last thread with a pointer to the object. All the other threads that were using the object are finished with it, have decremented the reference count, and have forgotten about the object. So no thread will be attempting to lock the mutex when pthread_mutex_destroy is executed.
That's the typical design pattern. You don't destroy a mutex until all threads are done with it. The natural lifetime of a mutex means you don’t have to synchronize destroying them.
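As a sketch of that pattern (obj_hold and obj_new are illustrative names, not from the man page): whoever drops the last reference is, by construction, the only thread still holding a pointer to the object, so it can safely destroy the mutex and free it.

#include <pthread.h>
#include <stdlib.h>

struct obj {
    pthread_mutex_t om;   /* protects refcnt (and the rest of the object) */
    int refcnt;
    /* ... payload ... */
};

struct obj *obj_new(void)
{
    struct obj *op = malloc(sizeof *op);
    if (op) {
        pthread_mutex_init(&op->om, NULL);
        op->refcnt = 1;               /* creator holds the first reference */
    }
    return op;
}

void obj_hold(struct obj *op)         /* call before handing op to another thread */
{
    pthread_mutex_lock(&op->om);
    op->refcnt++;
    pthread_mutex_unlock(&op->om);
}

void obj_done(struct obj *op)         /* each holder calls this exactly once */
{
    pthread_mutex_lock(&op->om);
    if (--op->refcnt == 0) {
        pthread_mutex_unlock(&op->om);
        pthread_mutex_destroy(&op->om);  /* no other thread can still reach op */
        free(op);
    } else
        pthread_mutex_unlock(&op->om);
}

The key invariant is that obj_hold is never called on an object whose count might already have hit zero; a thread may only copy a reference it already legitimately holds.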

Is a mutex lock used inside a shared function or outside of it

Assume sharedFnc is a function that is used between multiple threads:
void sharedFnc(){
    // do some thread safe work here
}
Which one is the proper way of using a Mutex here?
A)
void sharedFnc(){
    // do some thread safe work here
}

int main(){
    ...
    pthread_mutex_lock(&lock);
    sharedFnc();
    pthread_mutex_unlock(&lock);
    ...
}
Or B)
void sharedFnc(){
    pthread_mutex_lock(&lock);
    // do some thread safe work here
    pthread_mutex_unlock(&lock);
}

int main(){
    ...
    sharedFnc();
    ...
}
Let's consider two extremes:
In the first extreme, you can't even tell what lock you need to acquire until you're inside the function. Maybe the function locates an object and operates on it and the lock is per-object. So how can the caller know what lock to hold?
And maybe the code needs to do some work while holding the lock and some work while not holding the lock. Maybe it needs to release the lock while waiting for something.
In this extreme, the lock must be acquired and released inside the function.
In the opposite extreme, the function might not even have any idea it's used by multiple threads. It may have no idea what lock its data is associated with. Maybe it's called on different data at different times and that data is protected by different locks.
Maybe its caller needs to call several different functions while holding the same lock. Maybe this function reports some information on which the thread will decide to call some other function and it's critical that state not be changed by another thread between those two functions.
In this extreme, the caller must acquire and release the lock.
Between these two extremes, it's a judgment call based on which extreme the situation is closer to. Also, those aren't the only two options available. There are "in-between" options as well.
There's something to be said for this pattern:
// Only call this with `lock` locked.
//
static sometype foofunc_locked(...) {
    ...
}

sometype foofunc(...) {
    pthread_mutex_lock(&lock);
    sometype rVal = foofunc_locked(...);
    pthread_mutex_unlock(&lock);
    return rVal;
}
This separates the responsibility for locking and unlocking the mutex from whatever other responsibilities are embodied by foofunc_locked(...).
One reason you would want to do that is that it's very easy to see whether every possible invocation of foofunc() unlocks the lock before it returns. That might not be the case if the locking and unlocking were mingled with loops, switch statements, nested if statements, returns from the middle, etc.
If the lock is inside the function, you better make damn sure there's no recursion involved, especially no indirect recursion.
Another problem with the lock being inside the function is loops, where you have two big problems:
Performance. Every cycle you're releasing and reacquiring your locks. That can be expensive, especially in OSes like Linux which don't have light locks like critical sections.
Lock semantics. If there's work to be done inside the loop but outside your function, you can't just acquire the lock once per cycle, because that would deadlock your function. So you have to piecemeal your loop cycle even more: call your function (acquire-release), then manually acquire the lock, do the extra work, and manually release it before the cycle ends. And you have absolutely no guarantee of what happens between your function releasing the lock and you acquiring it. See the sketch below.
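To make that concrete, a sketch of the two shapes using the foofunc/foofunc_locked split from above, with made-up names (bump, counter) standing in for real work:

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;                       /* the shared state `lock` protects */

/* Only call this with `lock` locked. */
static long bump_locked(void) {
    return ++counter;
}

long bump(void) {                          /* public wrapper, locks internally */
    pthread_mutex_lock(&lock);
    long rVal = bump_locked();
    pthread_mutex_unlock(&lock);
    return rVal;
}

void per_iteration(int n) {
    /* Lock inside the function: n lock/unlock round trips, and other
     * threads can interleave between any two iterations. */
    for (int i = 0; i < n; i++)
        bump();
}

void caller_held(int n) {
    /* Lock held by the caller: one acquisition, and any extra work in
     * the loop body sees a consistent view of the shared state. */
    pthread_mutex_lock(&lock);
    for (int i = 0; i < n; i++) {
        bump_locked();
        /* ... extra work that must not see `counter` change underneath it ... */
    }
    pthread_mutex_unlock(&lock);
}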

c - how do multiple threads change static variable that is mutex locked

As a beginner to threads, I have some difficulty understanding how the logic of a mutex works. I need help understanding how multi-threading works in the following snippet, and what the output of x would be for every foo() call:
int foo()
{
    static int x;
    X_lock();    // locking
    x++;
    X_unlock();  // unlocking
    return x;
}
And what's the basic difference between a semaphore and a mutex? A simple example would be nice.
Sometimes threads need to use the same resource, and that can invoke undefined behavior. For example, addition is not an atomic operation and can therefore cause this problem. So there is a need for some kind of barrier between the threads: only one thread can pass that barrier, and the others have to wait for it to finish; after it finishes, the next one goes through the barrier, and the rest keep waiting.
This is one example of a race condition, and a MUTEX (mutual exclusion) is used to handle it. How does a mutex work? First you must initialize the mutex in the main function:
pthread_mutex_init(&lock, NULL);
The variable pthread_mutex_t lock; is global, so every thread can access it. Afterwards, one thread will lock the mutex:
pthread_mutex_lock(&lock);
Now, when the next thread reaches this same point, this line of code I just wrote, it can't get past it. Every other thread has to wait at this barrier (this line of code) until the first thread unlocks the mutex:
pthread_mutex_unlock(&lock);
Then, depending on which thread gets processor time from the OS, one of them passes through the barrier, and the same thing repeats all over again.
Mutexes are a very important concept to understand. As for semaphores, they are used for the same thing, thread synchronization; here is an excellent article covering this topic.
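For completeness, a minimal runnable sketch of the snippet above (the thread count and names are mine, not from the question). Note that the return value is copied while the lock is still held; returning x after unlocking would race with other threads' increments:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t X_lock_mutex = PTHREAD_MUTEX_INITIALIZER;

static int foo(void)
{
    static int x;
    pthread_mutex_lock(&X_lock_mutex);    /* the X_lock() from the question */
    x++;
    int snapshot = x;                     /* copy while still holding the lock */
    pthread_mutex_unlock(&X_lock_mutex);  /* the X_unlock() from the question */
    return snapshot;
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++)
        foo();
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final x = %d\n", foo());      /* 200001: 2*100000 worker increments + this call */
    return 0;
}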

Is there a mechanism to try to lock one of several mutexes?

How can a program try to lock multiple mutexes at the same time and know which mutex it ended up locking? Essentially, what I am looking for is an equivalent of select(), but for mutexes. Does such a thing exist? If not, are there any libraries which implement it?
I'm (almost) certain that this kind of functionality ought to be implemented with a monitor (and condition variables using signal/wait/broadcast), but I think you can solve your problem with a single additional semaphore.
Assuming all your mutex objects begin in the "locked" state, create a semaphore with initial value 0. Whenever a mutex object is unlocked, increment (V) the semaphore. Then, implement select() like this:
// grab a mutex if possible
Mutex select(Semaphore s, Mutex[] m) {
    P(s); // wait on the semaphore
    for (Mutex toTry : m) {
        boolean result = try_lock(toTry);
        if (result) return toTry;
    }
}
Essentially, the semaphore keeps track of the number of available locks, so whenever P(s) stops blocking, there must be at least one available mutex (assuming you correctly increment the semaphore when a mutex becomes available!)
I haven't attempted to prove this code correct nor have I tested it... but I don't see any reason why it shouldn't work.
Once again, you likely want to use a monitor!
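For what it's worth, the same idea sketched in C with POSIX primitives (equally untested, same caveats as above; sem_post must be called by whoever unlocks one of the mutexes, and the rescan loop covers the window where another selector grabs a mutex first):

#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

/* All mutexes in m[0..n-1] start out locked; every thread that unlocks
 * one must call sem_post(avail) afterwards. Returns the index of the
 * mutex this call managed to lock. */
int select_mutex(sem_t *avail, pthread_mutex_t *m, size_t n)
{
    sem_wait(avail);          /* block until at least one mutex is free */
    for (;;) {                /* rescan if another selector beats us to it */
        for (size_t i = 0; i < n; i++)
            if (pthread_mutex_trylock(&m[i]) == 0)
                return (int)i;
    }
}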

Null arguments to pthread_cond_wait

If a thread calls pthread_cond_wait(cond_ptr,mutex_ptr) with a null cond_ptr, is it guaranteed to not fall asleep?
According to http://pubs.opengroup.org/onlinepubs/007908799/xsh/pthread_cond_wait.html,
a null cond_ptr just means pthread_cond_wait() may (not an emphatic will) fail, so I guess threads can then fall asleep on null condition variables?
I can't see a valid use case for this and I'm wondering why it would ever matter. You shouldn't be calling pthread_cond_wait with an invalid condition variable.
If you're worried about it, just change your code from:
pthread_cond_wait (pcv, pmutex);
to something like:
if (pcv != NULL) pthread_cond_wait (pcv, pmutex);
and it won't be called with a NULL.
I suspect it was put in as "may" simply because there was an implementation of pthreads (perhaps even the original DEC threads itself) which didn't return a failure code for that circumstance.
But, since the alternative is almost certainly that the whole thing fell in a screaming heap, I wouldn't be relying on it :-)
If you're worried about the atomicity of that code, you needn't be. Simply use the same mutex that protects the condition variable to protect the CV pointer being held in your list:
claim mutex A
somenode->cv = NULL
release mutex A
and, in your looping code:
claim mutex A
if loopnode->cv != null:
    wait on condvar loopnode->cv using mutex A
    // mutex A is locked again
: : :
The fact that the mutex is locked both during the if and when calling pthread_cond_wait means that no race condition can exist. Nothing can change the node's condition variables until the looping thread releases the mutex within the pthread_cond_wait call. And by that time, the call is using its own local copy of the pointer, so changing the one in the list will have no effect.
And if the node-changing code grabs the mutex, the if and pthread_cond_wait can't execute until that mutex is released.
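Sketched in C (struct node, list_mutex, and the function names are illustrative, not from the answer):

#include <pthread.h>
#include <stddef.h>

struct node {
    pthread_cond_t *cv;    /* may be set to NULL, but only under list_mutex */
    struct node *next;
};

static pthread_mutex_t list_mutex = PTHREAD_MUTEX_INITIALIZER;

void clear_cv(struct node *somenode)
{
    pthread_mutex_lock(&list_mutex);   /* claim mutex A */
    somenode->cv = NULL;
    pthread_mutex_unlock(&list_mutex); /* release mutex A */
}

/* Looping side: the NULL check and the wait happen under the same mutex,
 * so no other thread can NULL the pointer between them. */
void wait_on_node(struct node *loopnode)
{
    pthread_mutex_lock(&list_mutex);
    if (loopnode->cv != NULL)
        pthread_cond_wait(loopnode->cv, &list_mutex); /* returns with mutex relocked */
    /* ... */
    pthread_mutex_unlock(&list_mutex);
}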
