pthread_mutex_timedlock and the deadlock - c

Typically, if task1 holds lock A and wants to take lock B, while task2 has taken lock B and is waiting for lock A (held by task1), this causes a deadlock.
But pthread_mutex_timedlock attempts to take the mutex and gives up once the specified timeout expires.
I hit a deadlock in a scenario where I was taking the lock with pthread_mutex_timedlock, which should eventually have timed out, and this puzzles me.
Edit: Deadlocks can be avoided with a better design, which is what I ended up doing: I made sure that the mutexes are always taken in the same order.
But the question remains open: shouldn't the deadlock have been avoided anyway, since I chose a timed lock?
Can someone explain this behaviour?
Edit: Attaching sample code to make the scenario clearer (the real tasks are fairly complicated and run into thousands of lines).
T1
pthread_mutex_lock(&lockA);
// call some API, which ends up taking lockB
pthread_mutex_lock(&lockB);
// unlock in reverse order
pthread_mutex_unlock(&lockB);
pthread_mutex_unlock(&lockA);
T2
pthread_mutex_lock(&lockB);
// call some API, which ends up taking lockA
pthread_mutex_timedlock(&lockA, <10 sec>);
The crash is seen in the context of T2, bt:
Program terminated with signal 6, Aborted.
#0 0x57edada0 in raise () from /lib/libc.so.6
(gdb) bt
#0 0x57edada0 in raise () from /lib/libc.so.6
#1 0x57edc307 in abort () from /lib/libc.so.6
#2 0x57ed4421 in __assert_fail () from /lib/libc.so.6
#3 0x57bb2a7c in pthread_mutex_timedlock () from /lib/libpthread.so.0
I traced the error to the following assertion failure:
pthread_mutex_timedlock: Assertion `(-(e)) != 35 || (kind != PTHREAD_MUTEX_ERRORCHECK_NP && kind != PTHREAD_MUTEX_RECURSIVE_NP)' failed.

In the glibc sources, this assert in pthread_mutex_timedlock() looks like this:
int e = INTERNAL_SYSCALL (futex, __err, 4, &mutex->__data.__lock,
                          __lll_private_flag (FUTEX_LOCK_PI,
                                              private), 1,
                          abstime);
if (INTERNAL_SYSCALL_ERROR_P (e, __err))
  {
    if (INTERNAL_SYSCALL_ERRNO (e, __err) == ETIMEDOUT)
      return ETIMEDOUT;

    if (INTERNAL_SYSCALL_ERRNO (e, __err) == ESRCH
        || INTERNAL_SYSCALL_ERRNO (e, __err) == EDEADLK)
      {
        assert (INTERNAL_SYSCALL_ERRNO (e, __err) != EDEADLK
                || (kind != PTHREAD_MUTEX_ERRORCHECK_NP
                    && kind != PTHREAD_MUTEX_RECURSIVE_NP));
        /* ESRCH can happen only for non-robust PI mutexes where
           the owner of the lock died.  */
        assert (INTERNAL_SYSCALL_ERRNO (e, __err) != ESRCH
                || !robust);
It is likely that e == EDEADLK and kind is either PTHREAD_MUTEX_ERRORCHECK_NP or PTHREAD_MUTEX_RECURSIVE_NP. The other thing to notice is that the timeout is handled before this check, i.e. you never reach the timeout.
In the kernel it is futex_lock_pi_atomic() that returns the EDEADLK code:
/*
 * Detect deadlocks.
 */
if ((unlikely((curval & FUTEX_TID_MASK) == vpid)))
        return -EDEADLK;
The above piece compares the TID of the thread that has locked the mutex with the TID of the thread that is trying to acquire it. If they are the same, the thread is trying to acquire a mutex that it has already acquired.
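The simplest way to see that deadlock detection takes precedence over the timeout is the self-relock case. Here is a minimal sketch (not your two-thread scenario) using an error-checking mutex: relocking it from the owning thread makes pthread_mutex_timedlock return EDEADLK immediately instead of waiting out the 10 seconds.
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutex_t m;
    struct timespec ts;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&m, &attr);

    pthread_mutex_lock(&m);

    clock_gettime(CLOCK_REALTIME, &ts);
    ts.tv_sec += 10; /* pthread_mutex_timedlock takes an absolute deadline */

    /* Relocking from the owning thread is detected up front; the call
       fails with EDEADLK without waiting for the timeout. */
    int e = pthread_mutex_timedlock(&m, &ts);
    printf("timedlock: %s\n", strerror(e));

    pthread_mutex_unlock(&m);
    pthread_mutex_destroy(&m);
    return 0;
}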

First of all, what timeout did you specify? Was it large?
pthread_mutex_timedlock fails in three conditions:
(1) A deadlock condition was detected or the current thread already owns the mutex (EDEADLK).
(2) The mutex could not be acquired because the maximum number of recursive locks for the mutex has been exceeded (EAGAIN).
(3) The value specified by mutex does not refer to an initialized mutex object (EINVAL).
Was your code subject to any of the above? A sketch of handling these return codes follows.
Also, a code snippet may help clear things up so we can see the problem.
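For reference, a hedged sketch of calling pthread_mutex_timedlock with those failure cases handled explicitly (the function name and messages are illustrative, not from the original code):
#include <pthread.h>
#include <errno.h>
#include <stdio.h>
#include <time.h>

int lock_with_timeout(pthread_mutex_t *m, int seconds)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts); /* the timeout is an absolute time */
    ts.tv_sec += seconds;

    int e = pthread_mutex_timedlock(m, &ts);
    switch (e) {
    case 0:         return 0;                                     /* acquired */
    case ETIMEDOUT: fprintf(stderr, "timed out\n");               break;
    case EDEADLK:   fprintf(stderr, "deadlock detected\n");       break;
    case EAGAIN:    fprintf(stderr, "recursion limit reached\n"); break;
    case EINVAL:    fprintf(stderr, "uninitialized mutex?\n");    break;
    }
    return e;
}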

Related

Implementation of a lock using test and set

Below is the code given in the OSTEP book for the implementation of a lock using the test-and-set instruction. My question is: in such an implementation, couldn't a thread that is not holding the lock call the unlock function and take away the lock?
typedef struct __lock_t {
    int flag;
} lock_t;

void init(lock_t *lock) {
    // 0 indicates that lock is available, 1 that it is held
    lock->flag = 0;
}

void lock(lock_t *lock) {
    while (TestAndSet(&lock->flag, 1) == 1)
        ; // spin-wait (do nothing)
}

void unlock(lock_t *lock) {
    lock->flag = 0;
}
It's assumed to be a coding error, so that case should not happen.
My question is that in such implementation, couldn't a thread that is not holding the lock call the unlock function and take away the lock?
There's nothing wrong with (e.g.) one thread acquiring a lock, notifying another thread that it can proceed (or maybe spawning a new thread), then the other thread releasing the lock.
Something is very wrong if a single thread acquires a lock and then releases it twice.
To detect bugs (while still supporting all legitimate scenarios), you'd want to detect "unlock() called when lock not held". You could do that by using an atomic TestAndClear() in the unlock().
If you want to enforce "only the thread that acquired the lock may release the lock", then you have to know which thread acquired the lock, so you have to store a thread ID in the lock when it's acquired (e.g. by using an atomic "compare, and exchange if it was equal"). Both checks are sketched below.
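A minimal sketch of both checks using C11 atomics; TestAndSet/TestAndClear are modeled with atomic_exchange, and the owner-tracking variant assumes, as on Linux, that pthread_t is an integer type and that 0 is never a valid thread ID:
#include <stdatomic.h>
#include <pthread.h>
#include <assert.h>

typedef struct {
    atomic_int flag; /* 0 = free, 1 = held */
} checked_lock_t;

void checked_lock(checked_lock_t *l) {
    while (atomic_exchange(&l->flag, 1) == 1)
        ; /* spin until we observe the old value 0 */
}

void checked_unlock(checked_lock_t *l) {
    /* TestAndClear: the old value must have been 1, otherwise the
       caller is unlocking a lock that was not held -- a bug. */
    int was_held = atomic_exchange(&l->flag, 0);
    assert(was_held == 1);
}

/* Owner-enforcing variant: store the holder's thread ID in the lock. */
typedef struct {
    _Atomic(pthread_t) owner; /* 0 = free (assumption: 0 is never a valid ID) */
} owner_lock_t;

void owner_lock(owner_lock_t *l) {
    pthread_t expected = 0;
    /* Acquire only if currently free; reset `expected` after a failed attempt. */
    while (!atomic_compare_exchange_weak(&l->owner, &expected, pthread_self()))
        expected = 0;
}

void owner_unlock(owner_lock_t *l) {
    pthread_t expected = pthread_self();
    /* Release only if we are the recorded owner. */
    int ok = atomic_compare_exchange_strong(&l->owner, &expected, (pthread_t)0);
    assert(ok && "unlock by a thread that does not hold the lock");
}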

pthread_mutex_trylock() used to wait until another thread releases the lock

In my code I am using pthread_mutex_trylock() to check whether thread 1 has completed its job and released the mutex lock or not. Please let me know whether this is a valid way to do it.
In Thread 1:
pthread_mutex_lock(&sync_wait);
// Waiting for return type.
pthread_mutex_unlock(&sync_wait);
In Thread 2:
while (pthread_mutex_trylock(&sync_wait) == 0) {
}; // Wait until the other thread has the lock
// Waiting till thread 1 has released the sync_wait lock.
pthread_mutex_unlock(&sync_wait);
From the manual page:
The pthread_mutex_trylock() function shall return zero if a lock on
the mutex object referenced by mutex is acquired. Otherwise, an error
number is returned to indicate the error.
// Note: pthread_mutex_trylock() returns zero only when it acquires the
// lock, so this loop exits as soon as the lock is unavailable, which is
// the opposite of waiting for the other thread to release it
while (pthread_mutex_trylock(&sync_wait) == 0) {
}; // Wait until other thread has lock
Edit:
This section of code should handle your scenario
int ret;
while ((ret = pthread_mutex_trylock(&sync_wait)) != 0)
{
    // Unable to get the mutex; probably some other thread acquired it.
    // Sleep for some time (usleep), or even better use pthread_mutex_timedlock.
    // Ideally the possible ret values should be handled, but for now
    // this will do.
}
And yes, call pthread_mutex_unlock() once done with the work.
Here is the manpage.
There is also a question on SO about the difference between pthread_mutex_lock and pthread_mutex_trylock here,
and another example of handling the various return values from pthread_mutex_trylock().
If you want to wake up a specific thread once another thread reaches a certain point in its execution, mutexes are typically not the appropriate synchronization primitive. Alternatives, with a condition-variable sketch following the list, are:
Barriers and the pthread_barrier_wait function
Condition variables and the pthread_cond_wait and pthread_cond_signal functions
Possibly semaphores and the sem_wait and sem_post functions
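For the scenario above, a minimal hedged sketch of the condition-variable alternative: thread 2 blocks until thread 1 announces that it is done, with no trylock spinning (names are illustrative):
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t c = PTHREAD_COND_INITIALIZER;
static bool done = false;

void *thread1(void *arg) {
    /* ... do the work thread 2 is waiting for ... */
    pthread_mutex_lock(&m);
    done = true;
    pthread_cond_signal(&c);
    pthread_mutex_unlock(&m);
    return NULL;
}

void *thread2(void *arg) {
    pthread_mutex_lock(&m);
    while (!done) /* the loop guards against spurious wakeups */
        pthread_cond_wait(&c, &m);
    pthread_mutex_unlock(&m);
    /* ... proceed: thread 1 has finished ... */
    return NULL;
}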

When will a thread woken via a condition variable run?

When I wake up a thread waiting on a condition variable while holding the corresponding mutex, can I assume that the woken thread will run after I release the mutex and before anyone else (myself included) can lock the mutex again? Or can I only be sure that it will run at some point in the future?
To be precise, assume that I have the following functions.
bool taken = false;
int waiting = 0;
pthread_mutex_t m; // properly initialised elsewhere
pthread_cond_t c;

void enter() {
    pthread_mutex_lock(&m);
    // Is `taken || (waiting == 0)` guaranteed here?
    while (taken) {
        ++waiting;
        pthread_cond_wait(&c, &m);
        --waiting;
    }
    taken = true;
    pthread_mutex_unlock(&m);
}

void leave() {
    pthread_mutex_lock(&m);
    taken = false;
    if (waiting > 0) {
        pthread_cond_signal(&c);
    }
    pthread_mutex_unlock(&m);
}
(This is a toy example, not meant to be useful.)
Also assume that all threads are using these functions as
enter()
// some work here
leave()
Can I then be sure that directly after acquiring the mutex in enter() (see the comment in the code), if taken is false, waiting has to be zero? [It seems to me that this should be the case, because the woken thread will expect to find the state that the waking thread left behind (if the wake-up was not spurious), and this could otherwise not be guaranteed, but I did not find it clearly worded anywhere.]
I am mainly interested in the behaviour on (modern) Linux, but of course knowing whether this is defined by POSIX would also be of interest.
Note: This may have already been asked in another question, but I hope that mine is clearer.
After t0 does pthread_cond_signal, t1 is no longer waiting for the condition variable, but it is also not running, because t0 still holds the mutex; t1 is instead waiting for the mutex. The thread t2 may also be waiting for the mutex, at the beginning of enter(). Now t0 releases the mutex. Both t1 and t2 are waiting for it. Is t1 handled in a special way and guaranteed to get it, or could t2 get it instead?
Thanks for your comment, Carsten, that cleared it up.
No, see this answer for more reference. Given your example, t2 could acquire the mutex before t1 and a race condition may occur leading to unexpected outcomes.
Reiterating: t0 may initially hold the mutex and be in the while loop, blocked on the line pthread_cond_wait(&c, &m); the mutex is released atomically (see the reference). t1 could call leave(), acquiring the mutex, signal the condition c, and then release the mutex. t0 will prepare to run the -- waiting line by trying to acquire the mutex just freed by t1, but it can be context-switched out by the OS. Some other thread t2 waiting on the mutex can grab it and run enter(), causing undesired outcomes (see non-reentrant). t2 then releases the mutex. t0 may be swapped back in only to see that values have been mutated.

possible causes of infinite wait in pthread_cond_wait

I am currently trying to analyse an issue in third-party source code where a thread (code snippet corresponding to THREAD-T1) is in an infinite wait state. The suspicion is that the thread is stuck in pthread_cond_wait. The details follow.
Code description
T1 does an asynchronous call to an API exposed by T2.
Hence T1 moves to a blocking wait on a condition variable (say cond_t).
The condition variable cond_t is signalled from the callback event generated by T2.
The above cycle is repeated n times until the API returns success.
To consolidate: the above series of steps makes the asynchronous call behave like a synchronous one through the use of condition variables.
Sample code
#define MAX_RETRY (3)

bool g_b_ret_val;
pthread_mutex_t g_cond_mutex;
pthread_mutex_t g_ret_val_mutex; /* Assume initialized in the main thread */
pthread_cond_t g_cond_t;         /* Assume initialized in the main thread */

void retry_async_call_routine() /* Thread-T1 */
{
    int retry = 0;
    while ((false == g_b_ret_val) && (retry < MAX_RETRY))
    {
        (void)invoke_async_api();
        pthread_mutex_init(&g_cond_mutex, NULL);
        pthread_mutex_lock(&g_cond_mutex);
        pthread_cond_wait(&g_cond_t, &g_cond_mutex);
        pthread_mutex_unlock(&g_cond_mutex);
        pthread_mutex_destroy(&g_cond_mutex);
        retry++;
    }
}

void callback_routine() /* Thread-T2 */
{
    pthread_mutex_lock(&g_ret_val_mutex);
    g_b_ret_val = true; /* May also be false, on failure */
    pthread_mutex_unlock(&g_ret_val_mutex);
    pthread_cond_signal(&g_cond_t);
}
Known issues that I see in the code
Missing retest of the condition in a while loop around pthread_cond_wait
Missing mutex lock while signalling
Questions
Please point out any more loopholes or possibilities of infinite wait (if any).
g_cond_t is not reset using pthread_cond_destroy between successive waits; what is the behaviour in that case? (Any references regarding this?)
This code seems absurd. You are not supposed to create and destroy a mutex just so you can wait on the condition variable. A mutex needs to be created before thread-shared data are used, then the mutex must be used to protect the shared data. In this case, that's g_ret_val_mutex which protects g_b_ret_val.
The condition variable itself is just used for waiting (with regular or timed wait) and signaling (signal or broadcast). It generally does not need its own lock, and in fact, having a separate one (as in the above loop) gets in the way of calling pthread_cond_wait, which takes only one mutex to unlock, not two. There's no need to destroy and re-create condition variables unless you need new/different attributes.
The key to "not getting stuck"—avoiding infinite wait—is to guarantee that, whenever a thread calls pthread_cond_wait, there is definitely some other thread that will, in the future, call pthread_cond_signal (or pthread_cond_broadcast). That is, the waiter tests "why to wait" first, with the "why" part locked, then waits only if the "why" part says "you should wait". The wake-up thread may use the same lock to determine that a wake-up is necessary, or—if the wake-up thread is "lazy", as in the above example—simply issues a "wake up call" every time.
The minimal change for correctness would thus seem to be to change the loop to read:
pthread_mutex_lock(&g_ret_val_mutex);
for (retry = 0; retry < MAX_RETRY && !g_b_ret_val; retry++) {
    (void)invoke_async_api();
    pthread_cond_wait(&g_cond_t, &g_ret_val_mutex);
}
success = g_b_ret_val; /* if false, we failed */
/* or: success = retry < MAX_RETRY; -- same result */
pthread_mutex_unlock(&g_ret_val_mutex);
(Aside: g_cond_t is a terrible name for a variable; the _t suffix is meant for types.)
It's sometimes wise to separate "some thread needs a wake-up" from "final result of that thread is success". If needed, I'd probably add that using a second boolean. Let's call it g_waiting, which we set true when callback_routine() is (supposedly) guaranteed to be called and it should do a wake-up event, and false when it's not guaranteed to be called or the wakeup is not required. This kind of coding allows you to switch to pthread_cond_timedwait, in case the asynchronous event might never occur for some reason.
Given that g_ret_val_mutex protects g_b_ret_val, it's appropriate to use that for the "waiting" flag as well—adding another mutex just offers more opportunities for problems, here. So now we get:
pthread_mutex_lock(&g_ret_val_mutex);
for (retry = 0; retry < MAX_RETRY && !g_b_ret_val; retry++) {
    (void)invoke_async_api();
    compute_wakeup_time(&abstime);
    g_waiting = true;
    pthread_cond_timedwait(&g_cond_t, &g_ret_val_mutex, &abstime);
    if (g_waiting) {
        /* timeout occurred, we never got our callback */
        /* may want something special for this case */
    } else {
        /* wakeup occurred, result is in g_b_ret_val */
    }
}
success = g_b_ret_val;
/* or: success = retry < MAX_RETRY; */
g_waiting = false;
pthread_mutex_unlock(&g_ret_val_mutex);
Meanwhile:
callback_routine() /* Thread-T2 */
{
    pthread_mutex_lock(&g_ret_val_mutex);
    g_b_ret_val = compute_success_or_failure();
    if (g_waiting) {
        g_waiting = false;
        pthread_cond_signal(&g_cond_t);
    }
    pthread_mutex_unlock(&g_ret_val_mutex);
}
I've moved the signal to "inside" the mutex, although it's OK either way, so that I can do it only if g_waiting is set, and clear g_waiting. Since we hold the mutex, it's OK to clear g_waiting either before or after calling pthread_cond_signal (as long as no other code will interrupt the sequence).
Note: if we do start using timedwait, we need to find out whether it is OK to call invoke_async_api when another earlier invoke was used but no result was returned before the timeout.

Pthread - setting scheduler parameters

I wanted to use reader-writer locks from the pthread library in such a way that writers have priority over readers. I read in my man pages that
If the Thread Execution Scheduling option is supported, and the threads involved in the lock are executing with the scheduling policies SCHED_FIFO or SCHED_RR, the calling thread shall not acquire the lock if a writer holds the lock or if writers of higher or equal priority are blocked on the lock; otherwise, the calling thread shall acquire the lock.
so I wrote a small function that sets up the thread scheduling options.
void thread_set_up(int _thread)
{
    struct sched_param *_param = malloc(sizeof(struct sched_param));
    int *c = malloc(sizeof(int));
    *c = sched_get_priority_min(SCHED_FIFO) + 1;
    _param->__sched_priority = *c;
    long *a = malloc(sizeof(long));
    *a = syscall(SYS_gettid);
    int *b = malloc(sizeof(int));
    *b = SCHED_FIFO;
    if (pthread_setschedparam(*a, *b, _param) == -1)
    {
        // depending on which thread calls this function, a few things can happen
        if (_thread == MAIN_THREAD)
            client_cleanup();
        else if (_thread == ACCEPT_THREAD)
        {
            pthread_kill(params.main_thread_id, SIGINT);
            pthread_exit(NULL);
        }
    }
}
Sorry for those a, b, c; I tried to malloc everything, but I still get SIGSEGV on the call to pthread_setschedparam, and I am wondering why.
I don't know if these are the exact causes of your problems, but they should help you home in on them.
(1) pthread_setschedparam returns a 0 on success and a positive number otherwise. So
if (pthread_setschedparam(*a, *b, _param) == -1)
will never be true, so the error handling never executes. It should be something like:
if ((ret = pthread_setschedparam(*a, *b, _param)) != 0)
{
    // yada yada
}
As an aside, it isn't 100% clear what you are doing but pthread_kill looks about as ugly a way to do it as possible.
(2) syscall(SYS_gettid) gets the OS thread ID. pthread_setschedparam expects the pthreads thread ID, which is different. The pthreads thread ID is returned by pthread_create and pthread_self in the datatype pthread_t. Change the pthread_setschedparam call to use this type and the proper values instead, and see if things improve; passing an OS TID where a pthread_t is expected is a likely cause of the SIGSEGV.
(3) You need to run as a privileged user to change the scheduling policy to SCHED_FIFO. Try running the program as root or with sudo.
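Putting (1)-(3) together, a hedged sketch of a corrected version; it keeps the MAIN_THREAD/ACCEPT_THREAD constants, client_cleanup() and params from the question, which are assumed to be defined elsewhere:
#include <pthread.h>
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>

void thread_set_up(int _thread)
{
    struct sched_param param;
    int ret;

    /* No mallocs needed; plain automatic variables are fine here. */
    param.sched_priority = sched_get_priority_min(SCHED_FIFO) + 1;

    /* pthread_setschedparam takes a pthread_t (use pthread_self(), not
       syscall(SYS_gettid)) and returns 0 on success or an errno value. */
    ret = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
    if (ret != 0)
    {
        fprintf(stderr, "pthread_setschedparam: %s\n", strerror(ret));
        if (_thread == MAIN_THREAD)
            client_cleanup();
        else if (_thread == ACCEPT_THREAD)
        {
            pthread_kill(params.main_thread_id, SIGINT);
            pthread_exit(NULL);
        }
    }
}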
