I'm coding in C on Ubuntu.
I need to write a thread, called for example "timeredThread", that does some operations in a critical section after N microseconds, like the following:
void *timeredThread(void *arg)
{
    usleep(TIMEOUT);                /* TIMEOUT in microseconds */
    pthread_mutex_lock(&mutex);
    /* operations */
    pthread_mutex_unlock(&mutex);
    return NULL;
}
Also, I need another thread, called for example "timerManager", that can reset the previous timer. My first idea was to create a "timerManager" that kills "timeredThread" and creates another one, but this does not work: if pthread_cancel() hits "timeredThread" after it has taken the mutex, the mutex stays locked and every later attempt to lock it deadlocks.
What can I do about it?
Thanks to all in advance.
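Instead of cancelling the thread, you can use a single watchdog thread that sleeps on a condition variable with pthread_cond_timedwait(); "resetting the timer" then just means signalling the condition variable and pushing the deadline back. Something along these lines (TIMEOUT is assumed to be in seconds here):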
pthread_mutex_t watchdog_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t watchdog_cond = PTHREAD_COND_INITIALIZER;
int watchdog_reset_requested = 0;   /* set when a reset is requested */

void reset_watchdog(void) {
    pthread_mutex_lock(&watchdog_mutex);
    watchdog_reset_requested = 1;
    pthread_cond_signal(&watchdog_cond);
    pthread_mutex_unlock(&watchdog_mutex);
}

void *watchdog(void *arg) {
    struct timespec sleep_until = { 0 };
    sleep_until.tv_sec = time(NULL) + TIMEOUT;
    pthread_mutex_lock(&watchdog_mutex);
    /* Loop until a timeout: pthread_cond_timedwait() returns 0 when
       signalled and ETIMEDOUT once the deadline passes. */
    while (!pthread_cond_timedwait(&watchdog_cond, &watchdog_mutex, &sleep_until)) {
        if (watchdog_reset_requested) {
            sleep_until.tv_sec = time(NULL) + TIMEOUT;  /* push the deadline back */
            watchdog_reset_requested = 0;
        }
    }
    pthread_mutex_unlock(&watchdog_mutex);
    /* ... timed-out work goes here ... */
    return NULL;
}
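A minimal sketch of how the two pieces could be wired together; main(), the 2-second delay, and the TIMEOUT value are my assumptions for illustration:

#include <pthread.h>
#include <time.h>
#include <unistd.h>

#define TIMEOUT 5   /* seconds */

/* ...the declarations and the two functions from above... */

int main(void)
{
    pthread_t wd;

    pthread_create(&wd, NULL, watchdog, NULL);

    sleep(2);
    reset_watchdog();       /* pushes the watchdog's deadline back by TIMEOUT */

    pthread_join(wd, NULL); /* returns once the watchdog finally times out */
    return 0;
}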
Some doubts when reading operating-system material on implementing locks
struct lock {
    int locked;
    struct queue q;
    int sync; /* Normally 0. */
};

void lock_acquire(struct lock *l) {
    intr_disable();
    while (swap(&l->sync, 1) != 0) {
        /* Do nothing */
    }
    if (!l->locked) {
        l->locked = 1;
        l->sync = 0;
    } else {
        queue_add(&l->q, thread_current());
        thread_block(&l->sync);
    }
    intr_enable();
}

void lock_release(struct lock *l) {
    intr_disable();
    while (swap(&l->sync, 1) != 0) {
        /* Do nothing */
    }
    if (queue_empty(&l->q)) {
        l->locked = 0;
    } else {
        thread_unblock(queue_remove(&l->q));
    }
    l->sync = 0;
    intr_enable();
}
What is the purpose of sync?
My gut feeling is that these solutions are all broken. For a lock to work correctly, lock_acquire needs acquire semantics and lock_release needs release semantics. That way the loads/stores inside the critical section can't move outside of it, and you get a happens-before edge between a lock release and a subsequent lock acquire on the same lock.
If you take a look at the spinning version:
struct lock {
    int locked;
};

void lock_acquire(struct lock *l) {
    while (swap(&l->locked, 1)) {
        /* Do nothing */
    }
}

void lock_release(struct lock *l) {
    l->locked = 0;
}
The assignment locked = 0 is just an ordinary store. This means it can be reordered with the loads and stores before it, and it doesn't provide a happens-before edge.
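For contrast, here is roughly what the spinning version looks like with the required ordering, written with C11 atomics; this is a minimal sketch of the acquire/release idea, not code from the course material:

#include <stdatomic.h>
#include <stdbool.h>

struct spinlock {
    atomic_bool locked;
};

void spin_acquire(struct spinlock *l) {
    /* Exchange with acquire ordering: the loads/stores of the critical
       section cannot move above the point where the lock was obtained. */
    while (atomic_exchange_explicit(&l->locked, true, memory_order_acquire)) {
        /* Spin. */
    }
}

void spin_release(struct spinlock *l) {
    /* Release ordering: the loads/stores of the critical section cannot
       move below the store that publishes the unlock. */
    atomic_store_explicit(&l->locked, false, memory_order_release);
}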
It seems to me that 'sync' is a way for the thread to let the OS know that the lock structure is in use, since both lock_acquire and lock_release wait for the 'sync' value to change before proceeding.
(It's a bit peculiar that interrupts are disabled before checking the 'sync' value.)
Let's suppose we have a client-server application based on TCP/IP communication and multiple threads. Let's suppose that, server-side, we have these three global variables:
char matrix[ROW][COLUMNS];
int isEmpty = 0;
float anotherDummyVariable;
If I declare a global pthread mutex as follows
pthread_mutex_t myMutex = PTHREAD_MUTEX_INITIALIZER;
can I use this mutex to lock and unlock any of these three variables, as follows:
...somewhere in the code...
pthread_mutex_lock(&myMutex);
isEmpty = 1;
pthread_mutex_unlock(&myMutex);
and somewhere else...
pthread_mutex_lock(&myMutex);
memset(matrix, 0, sizeof matrix);  /* zero the whole matrix */
pthread_mutex_unlock(&myMutex);
or should I declare three mutexes, one for each global variable to manage, as follows:
pthread_mutex_t matrixMutex = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t isEmptyMutex = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t anotherDummyVariableMutex = PTHREAD_MUTEX_INITIALIZER;
and, somewhere in the code...
pthread_mutex_lock(&isEmptyMutex);
isEmpty = 1;
pthread_mutex_unlock(&isEmptyMutex);
and somewhere else...
pthread_mutex_lock(&matrixMutex);
memset(matrix, 0, sizeof matrix);  /* zero the whole matrix */
pthread_mutex_unlock(&matrixMutex);
?
Using one mutex will work. However, it is not optimal: each time one variable is locked, all the others are locked as well.
Depending on the program, that may be necessary (if there are dependencies between the variables, so that one cannot be changed without the other). But if there are not, it is better to have one mutex per variable or per block of data.
So, depending on the performance impact, and also on the algorithm, you might prefer multiple mutexes.
Taking an example,
int data[N];
int count = 0; // number of items in data
here data and count are interdependent.
void set(int *arr, int size) { // add size items from arr to data
    ...
    for (i = 0; i < size; i++) data[count++] = arr[i];
    ...
}
In this case it would be better to lock the whole function with one mutex (making it a critical section) that protects both data and count:
pthread_mutex_t dataaccess = PTHREAD_MUTEX_INITIALIZER;

void set(int *arr, int size) { // add size items from arr to data
    pthread_mutex_lock(&dataaccess);
    ...
    for (i = 0; i < size; i++) data[count++] = arr[i];
    ...
    pthread_mutex_unlock(&dataaccess);
}
Somewhere else, a function that checks count and reads data[i]:
int read(int i) {
    pthread_mutex_lock(&dataaccess);
    ...
    if (i >= count) { /* ...throw error... */ }
    int res = data[i];
    ...
    pthread_mutex_unlock(&dataaccess);
    return res;
}
I want to make a POSIX thread finish its job after a certain amount of time has passed. You can see my solution in the simple C + Python pseudocode below. But I don't think that is an efficient or accurate solution. What is the best way to achieve this?
Mutex incrementLock
BigInteger n = 0
int milliToWork = 5000

Worker()
    int elapsedMilli = 0
    while elapsedMilli < milliToWork
        clock_t startClock = clock()
        Lock(incrementLock)
        n += 1
        Unlock(incrementLock)
        clock_t endClock = clock()
        elapsedMilli += (double)(endClock - startClock) / (double)CLOCKS_PER_SEC * 1000.0

main()
    int nThreads = 100
    Thread threads[nThreads]
    for i = 1 to nThreads
        ThreadCreate(threads[i], Worker)
    for i = 1 to nThreads
        ThreadJoin(threads[i])
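One thing to watch out for in the pseudocode: clock() measures CPU time consumed by the whole process, not wall time, so with 100 threads elapsedMilli will advance roughly 100 times faster than real time. If wall time is what you want, clock_gettime(CLOCK_MONOTONIC) is the usual tool; a sketch of the worker with that substitution (the 5000 ms budget is taken from the pseudocode):

#include <pthread.h>
#include <time.h>

/* Milliseconds of wall time elapsed since *start. */
static double elapsed_ms(const struct timespec *start)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - start->tv_sec) * 1000.0
         + (now.tv_nsec - start->tv_nsec) / 1e6;
}

void *worker(void *arg)
{
    struct timespec start;
    clock_gettime(CLOCK_MONOTONIC, &start);

    while (elapsed_ms(&start) < 5000.0) {
        /* Lock, increment n, unlock, as in the pseudocode. */
    }
    return NULL;
}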
You can set up a timer to send you a signal when your time is up. Pseudocode:
sig_action_handler()
{
    /* cleanup */
    pthread_exit();
}

worker()
{
    sigaction(sig_action_handler);
    timer_create();
    timer_settime();
    while (true)
    {
        /* do work */
    }
}
Alternatively, and for easier threading:
sig_action_handler(int, siginfo_t *t, void *)
{
    volatile sig_atomic_t *at = t->si_value.sival_ptr;
    *at = true;
}

worker()
{
    volatile sig_atomic_t at = 0;
    struct sigevent si = {/*...*/, .sigev_value.sival_ptr = &at};
    sigaction(sig_action_handler);
    timer_create(/*...*/, &si, /*...*/);
    timer_settime();
    while (!at)
    {
        /* do work */
    }
    /* cleanup */
}
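Filling in the blanks of that second sketch, a runnable version might look like the following; SIGRTMIN, CLOCK_MONOTONIC, and the 5-second timeout are my choices, and older glibc needs -lrt for the timer functions:

#include <pthread.h>
#include <signal.h>
#include <time.h>

static void on_timeout(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    /* The timer smuggles a pointer to this thread's flag in sival_ptr. */
    volatile sig_atomic_t *flag = info->si_value.sival_ptr;
    *flag = 1;
}

void *worker(void *arg)
{
    (void)arg;
    volatile sig_atomic_t done = 0;

    struct sigaction sa = { 0 };
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = on_timeout;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGRTMIN, &sa, NULL);

    struct sigevent sev = { 0 };
    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGRTMIN;
    sev.sigev_value.sival_ptr = (void *)&done;

    timer_t timerid;
    timer_create(CLOCK_MONOTONIC, &sev, &timerid);

    struct itimerspec its = { 0 };
    its.it_value.tv_sec = 5;    /* fire once, 5 seconds from now */
    timer_settime(timerid, 0, &its, NULL);

    while (!done) {
        /* do work */
    }

    timer_delete(timerid);
    /* cleanup */
    return NULL;
}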
I have a variable accessed under a mutex lock in multiple threads.
When I run Coverity static analysis on it, it gives the following error:
MISSING_LOCK (Accessing variable"g_atag"(g_atag) requires the osag_mutex.mutex lock.) [coverity]
Code snippet:
unsigned long g_atag = 0;
pthread_mutex_t g_atag_lock = PTHREAD_MUTEX_INITIALIZER;

void get_atag(unsigned long *atag)
{
    int ret = -1;

    ret = pthread_mutex_lock(&g_atag_lock);
    if (0 != ret) {
        return;
    }

    if (g_atag < 10000) {
        g_atag++;
    } else {
        g_atag = 0;
    }
    *atag = g_atag;

    pthread_mutex_unlock(&g_atag_lock);
}
Does anyone see any problem in this? I have added the locks, so why is it saying the locks are missing?
I am developing a userspace preemptive thread library (fibre) that uses context switching as the base approach. For this I wrote a scheduler. However, it's not performing as expected. Can I have any suggestions?
The structure of the thread_t used is :
typedef struct thread_t {
int thr_id;
int thr_usrpri;
int thr_cpupri;
int thr_totalcpu;
ucontext_t thr_context;
void * thr_stack;
int thr_stacksize;
struct thread_t *thr_next;
struct thread_t *thr_prev;
} thread_t;
The scheduling function is as follows:
void schedule(void)
{
    thread_t *t1, *t2;
    thread_t *newthr = NULL;
    int newpri = 127;
    struct itimerval tm;
    ucontext_t dummy;
    sigset_t sigt;

    t1 = ready_q;
    // Select the thread with the highest priority
    while (t1 != NULL)
    {
        if (newpri > t1->thr_usrpri + t1->thr_cpupri)
        {
            newpri = t1->thr_usrpri + t1->thr_cpupri;
            newthr = t1;
        }
        t1 = t1->thr_next;
    }

    if (newthr == NULL)
    {
        if (current_thread == NULL)
        {
            // No more threads? (stop itimer)
            tm.it_interval.tv_usec = 0;
            tm.it_interval.tv_sec = 0;
            tm.it_value.tv_usec = 0;    // ZERO disables the timer
            tm.it_value.tv_sec = 0;
            setitimer(ITIMER_PROF, &tm, NULL);
        }
        return;
    }
    else
    {
        // TODO: re-enabling of signals must be done.
        // Switch to the new thread
        if (current_thread != NULL)
        {
            t2 = current_thread;
            current_thread = newthr;
            timeq = 0;
            sigemptyset(&sigt);
            sigaddset(&sigt, SIGPROF);
            sigprocmask(SIG_UNBLOCK, &sigt, NULL);
            swapcontext(&(t2->thr_context), &(current_thread->thr_context));
        }
        else
        {
            // No current thread? It might have terminated
            current_thread = newthr;
            timeq = 0;
            sigemptyset(&sigt);
            sigaddset(&sigt, SIGPROF);
            sigprocmask(SIG_UNBLOCK, &sigt, NULL);
            swapcontext(&dummy, &(current_thread->thr_context));
        }
    }
}
It seems that ready_q (the head of the list of ready threads?) never changes, so the search for the highest-priority thread always finds the first suitable element. If two threads have the same priority, only the first one ever gets a chance at the CPU. There are many algorithms you can use: some are based on dynamically changing the priority, others use a sort of rotation inside the ready queue. In your example you could remove the selected thread from its place in the ready queue and put it at the last place (it's a doubly linked list, so the operation is trivial and quite inexpensive); see the sketch below.
Also, I'd suggest you consider the performance cost of the linear search through ready_q, since it may become a problem when the number of threads is big. In that case a more sophisticated structure may help, with separate lists of threads for different priority levels.
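A sketch of that rotation, assuming ready_q is a NULL-terminated doubly linked list of thread_t as shown in the question (a real implementation would keep a tail pointer to make this O(1)):

/* Move thread t to the tail of the ready queue so that threads of
   equal priority take turns (simple round-robin rotation). */
static void move_to_tail(thread_t **head, thread_t *t)
{
    if (t->thr_next == NULL)
        return;                     /* already at the tail (or the only element) */

    /* Unlink t from its current position. */
    if (t->thr_prev != NULL)
        t->thr_prev->thr_next = t->thr_next;
    else
        *head = t->thr_next;        /* t was the head */
    t->thr_next->thr_prev = t->thr_prev;

    /* Walk to the current tail and append t. */
    thread_t *tail = *head;
    while (tail->thr_next != NULL)
        tail = tail->thr_next;
    tail->thr_next = t;
    t->thr_prev = tail;
    t->thr_next = NULL;
}

Calling move_to_tail(&ready_q, newthr) just before switching to newthr in schedule() would give every thread of the winning priority a turn.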
Bye!