What happens to the thread if we free its pointer - C

What is the impact of freeing the struct that holds the pthread_t on the thread itself?
I have a struct that represents a thread:
typedef struct car {
    int cur_place;
    pthread_t car_thread;
} car;
and I have an array that holds these cars. After some time I want to free the struct from inside the thread, i.e.:
void *car_thread(void *number) {
    int num = *(int *)number;
    free(maze[num]);
    maze[num] = NULL;
    pthread_exit(NULL);
}
Is it possible? What will happen to the thread after I free the struct that holds its pthread_t? Will it run the next lines?
Thanks in advance.

Freeing car only releases the memory used to store those values. The thread will still be out there somewhere, running. Think of pthread_t as simply holding a number or address the system uses to refer to the thread, not the thread itself.
Just don't refer to the memory of car anywhere after it's freed.
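A minimal sketch of that point (the function name drive and the sleep() are just for illustration, not from your code): the pthread_t is only an identifier, so you can copy it out of the struct, free the struct, and the thread keeps running; you just must not touch the freed memory afterwards.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

typedef struct car {
    int cur_place;
    pthread_t car_thread;   /* just an ID the system uses to name the thread */
} car;

static void *drive(void *arg)
{
    (void)arg;              /* this thread never touches the car struct */
    sleep(1);               /* keep the thread alive past the free() in main */
    puts("thread still running after its struct was freed");
    return NULL;
}

int main(void)
{
    car *c = malloc(sizeof *c);
    c->cur_place = 0;
    pthread_create(&c->car_thread, NULL, drive, NULL);
    pthread_t saved = c->car_thread;   /* copy the ID out first */
    free(c);                           /* releases the struct; the thread keeps going */
    pthread_join(saved, NULL);         /* join via the saved copy, not via c */
    return 0;
}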

You have just freed the location storing the thread's ID; the data structure that stores the thread's attributes is released when the thread terminates and is joined or detached, not when you free your struct. Therefore, the answer to your question: the thread still exists.

The thread will not exit until pthread_exit() is called (or its start routine returns) or, in the case where you have signal handling built in, a signal telling it to exit has been received. The thread is attached to the process, but different threads are isolated entities until you tie them together in some kind of thread organization. Apologies for the vague phrasing, but this is the best way I can describe it.
If you intend to signal the cars to exit when you free the data structure, you need to have signal handling built in to notify each thread to exit, or call pthread_exit() in each thread somehow.

Related

C Pthread: Kill other threads gracefully through one thread

I am working on a pthread problem in C.
The background of the problem: There are 5 threads using the same function, and when the core data in the shared memory hits the upper bound, all five threads should terminate. I use a semaphore to make sure only one of them works on the core data at a time, which means only one of the five will get the end signal, and it will then tell the rest to terminate. The code I wrote to achieve this is:
#define THREAD_NUM 5
int thread[THREAD_NUM];
sem_t s;     /* semaphore to synchronize */
sem_t empty; /* keep track of the number of empty buffers */
sem_t full;  /* keep track of the number of full buffers */

void *com_thread(void *prt) { // I pass the id of the thread using prt
    while (1) {
        sem_wait(&full);
        sem_wait(&s);
        /* ...do something... */
        sem_post(&s);
        sem_post(&empty);
    }
}
The signal will come while the while loop is running, and I tried to accept the signal at the following position and then terminate all the threads.
To be honest, what I need to do is end all of the threads gracefully, and I need them to return to the main thread for pthread_join() and freeing memory, rather than simply exiting the program. That's why I did not use exit() here.
The main idea below is to terminate the other 4 threads when one of them gets the signal. After that, it terminates itself.
However, it does not work as I expected.
#define THREAD_NUM 5
int thread[THREAD_NUM];
sem_t s;     /* semaphore to synchronize */
sem_t empty; /* keep track of the number of empty buffers */
sem_t full;  /* keep track of the number of full buffers */

void *com_thread(void *prt) { // I pass the id of the thread using prt
    while (1) {
        sem_wait(&full);
        sem_wait(&s);
        if (signal) {
            int i;
            int id = *((int *)prt);
            for (i = 0; i < THREAD_NUM; i++) {
                if (i != id)
                    pthread_exit(&thread[i]);
            }
            pthread_exit(&thread[id]);
        }
        /* ...do something... */
        sem_post(&s);
        sem_post(&empty);
    }
}
Can anyone help me with that? Or is there a better way to achieve this? Thanks in advance :)
You can use pthread_kill() from one thread to signal the other threads you want to terminate. The man page is at http://linux.die.net/man/3/pthread_kill. You should choose the signal carefully if you want graceful termination; http://linux.die.net/man/7/signal has more details.
The easiest solution is probably to have a single global boolean variable, initially set to "false". All the threads check whether this variable is "false" or "true", and if it's "true" they terminate.
When a single thread notices that all threads should be terminated, it simply sets this flag to "true", and the other threads will notice that sooner or later.
You can check for this exit condition in multiple places in the thread function, especially where some thread may be waiting for a lock held by the currently active thread (the one that sets the exit condition). A sketch of this approach is below.
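A minimal sketch of that approach (UPPER_BOUND, core_data and the initial value of full are illustrative stand-ins, not taken from the question's code). The flag is only read and written while holding s, and each exiting thread re-posts full so that threads blocked in sem_wait(&full) also get a chance to see the flag and return for pthread_join():

#include <pthread.h>
#include <semaphore.h>
#include <stdbool.h>
#include <stdio.h>

#define THREAD_NUM 5
#define UPPER_BOUND 100          /* hypothetical limit on the core data */

static sem_t s;                  /* protects core_data and stop */
static sem_t empty, full;        /* buffer accounting, as in the question */
static bool stop = false;        /* the shared "everyone exit" flag */
static int core_data = 0;        /* stand-in for the real shared state */

static void *com_thread(void *prt)
{
    (void)prt;
    while (1) {
        sem_wait(&full);
        sem_wait(&s);
        if (stop) {              /* another thread already hit the bound */
            sem_post(&s);
            sem_post(&full);     /* cascade: wake the next blocked thread */
            break;
        }
        core_data++;             /* ...do something... */
        if (core_data >= UPPER_BOUND) {
            stop = true;         /* set the flag while still holding s */
            sem_post(&s);
            sem_post(&full);     /* start the wake-up cascade */
            break;
        }
        sem_post(&s);
        sem_post(&empty);
    }
    return NULL;                 /* back to main for pthread_join() */
}

int main(void)
{
    pthread_t tid[THREAD_NUM];
    sem_init(&s, 0, 1);
    sem_init(&empty, 0, 0);
    sem_init(&full, 0, UPPER_BOUND + THREAD_NUM);  /* stands in for a producer */
    for (int i = 0; i < THREAD_NUM; i++)
        pthread_create(&tid[i], NULL, com_thread, NULL);
    for (int i = 0; i < THREAD_NUM; i++)
        pthread_join(tid[i], NULL);                /* every worker comes back */
    printf("all workers joined, core_data = %d\n", core_data);
    return 0;
}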

Detach thread right after creation and memory leaks

I'm trying to create a detached thread so I won't need to free the memory allocated for it.
Valgrind is used to check for memory leaks.
I've used an IBM example and written:
void *threadfunc(void *parm)
{
    printf("Inside secondary thread\n");
    return NULL;
}

int main(int argc, char **argv)
{
    pthread_t thread;
    int rc = 0;
    rc = pthread_create(&thread, NULL, threadfunc, NULL);
    sleep(1);
    rc = pthread_detach(thread);
    return 0;
}
This works and doesn't create leaks, but a version without the sleep(1); does leak.
Why is this sleep(1) needed?
I'm trying to create a detached thread so I won't need to free the
memory allocated for it.
In this case, pthread_detach() is not required and hence should not be used. Additionally, in this code snippet you have not done any explicit memory allocation, so you should not worry about freeing the memory.
Why is this sleep(1) needed?
When you create the new thread, the parent and child threads can start executing in any order; it depends on the OS scheduler and other factors. In this case, if the parent thread gets scheduled first, it may exit the program before the child thread starts executing.
By adding the sleep in the parent context, the child thread gets time to start and finish executing before the process ends. But this is not a good idea, as we do not know how much time the child thread will take; pthread_join() should be used in the parent context instead. For detailed information please refer to the POSIX thread documentation and the great article at the link below:
https://computing.llnl.gov/tutorials/pthreads/
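A minimal sketch of the suggested fix, reusing the threadfunc from the question: replace the sleep(1) with pthread_join(), which blocks until the thread has finished.

#include <pthread.h>
#include <stdio.h>

static void *threadfunc(void *parm)
{
    (void)parm;
    printf("Inside secondary thread\n");
    return NULL;
}

int main(void)
{
    pthread_t thread;
    int rc = pthread_create(&thread, NULL, threadfunc, NULL);
    if (rc != 0)
        return 1;
    rc = pthread_join(thread, NULL);   /* no sleep needed; blocks until threadfunc returns */
    return rc;
}

If a detached thread is genuinely wanted, it can instead be created detached through an attribute set with pthread_attr_setdetachstate(), and main() can end with pthread_exit(NULL) so the process is not torn down before the detached thread has run.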

Implementing join function in a user level thread library

I am trying to implement a user-level thread library as part of a project. My focus is on the join function. Let's say that Thread1 calls the join function on Thread2. What I need to do is get the return value/argument supplied to pthread_exit() by Thread2 and store it in the memory location specified by the join function's argument.
But how do I get the return value of another thread?
Any help will be appreciated. Thanks.
Here is an example taken from the GNU Pth (pth_lib.c) user-level threading library, which shows the implementation of the exit and join functions respectively. I have simplified the code to highlight the return-value handling.
void pth_exit(void *value)
{
    pth_debug2("pth_exit: marking thread \"%s\" as dead", pth_current->name);
    /* the main thread is special, because its termination
       would terminate the whole process, so we have to delay
       its termination until it is really the last thread */
    /* execute cleanups */
    /*
     * Now mark the current thread as dead, explicitly switch into the
     * scheduler and let it reap the current thread structure; we can't
     * free it here, or we'd be running on a stack which malloc() regards
     * as free memory, which would be a somewhat perilous situation.
     */
    pth_current->join_arg = value;
    pth_current->state = PTH_STATE_DEAD;
    pth_debug2("pth_exit: switching from thread \"%s\" to scheduler", pth_current->name);
    // return (for ever) to the scheduler
}
and the corresponding pth_join:
/* waits for the termination of the specified thread */
int pth_join(pth_t tid, void **value)
{
    // validate thread situation
    // wait until thread death
    // save returned value for the caller
    if (value != NULL)
        *value = tid->join_arg;
    // remove thread from the thread queue
    // free its memory space
    return TRUE;
}
You could create storage for the return value in the application or process context, initialized when your threads library initializes. Your pthread_exit would fill the value and your join would use the threadid to retrieve it - I think this will only work for non-dynamically allocated return values though.
I'm interested in the meta-information your implementation keeps for each individual thread. The return value could be one of the items stored there for a thread, besides the ID, stack pointer and so on.
When pthread_exit() is called, that information should be written into one or more data structures by the scheduler, so that other threads can consult it when needed.
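To make that concrete, here is a rough, uncompilable sketch in the same spirit as the Pth excerpt above. The names are hypothetical: current_tcb(), make_runnable(), block_current_and_schedule(), schedule() and reap() stand for scheduler helpers your library would already have, not real APIs.

typedef enum { T_RUNNING, T_DEAD } t_state;

typedef struct tcb {                 /* per-thread control block */
    int          id;
    t_state      state;
    void        *stack;
    void        *retval;             /* filled in by my_thread_exit() */
    struct tcb  *joiner;             /* thread blocked in my_thread_join(), if any */
} tcb;

void my_thread_exit(void *value)
{
    tcb *self = current_tcb();
    self->retval = value;            /* keep the exit value until someone joins */
    self->state  = T_DEAD;
    if (self->joiner)
        make_runnable(self->joiner); /* wake a thread already waiting in join */
    schedule();                      /* switch away; never returns to this thread */
}

int my_thread_join(tcb *target, void **value)
{
    while (target->state != T_DEAD) {    /* block until the target exits */
        target->joiner = current_tcb();
        block_current_and_schedule();
    }
    if (value != NULL)
        *value = target->retval;         /* hand the exit value to the caller */
    reap(target);                        /* now it is safe to free stack and TCB */
    return 0;
}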

reusing pthread_t variable for currently running threads

I'm a bit uncertain whether the following code will lead to undefined behavior.
// global
pthread_t thread1;

void *worker(void *arg) {
    // do stuff
}

void spawnThread() {
    // init stuff
    int iret1 = pthread_create(&thread1, NULL, worker, (void *)p);
}
My spawnThread will make a new thread using the global thread1.
If I'm currently running a thread that is not finished, will I somehow cause undefined behaviour when starting a new thread using the thread1 variable?
If this is a problem, would it make sense to make my pthread_t variable local to a function? I think it might be a problem because it will use the stack, and as soon as I return from my function it will be removed.
If I make my pthread_t local to a function, I can't use pthread_join in another part of my program. Is the canonical solution to have a mutex-protected counter keeping track of how many threads are currently running?
Thanks.
The pthread_t is just an identifier. You can copy it round or destroy it at will. Of course, as you mention, if you destroy it (because it is local) then you cannot use it to call pthread_join.
If you reuse the same pthread_t variable for multiple threads then unless there is only one thread active at a time you are overwriting the older values with the new ones, and you will only be able to call pthread_join on the most recently started thread. Also, if you are starting your threads from inside multiple threads then you will need to protect the pthread_t variable with a mutex.
If you need to wait for your thread to finish, give it its own pthread_t variable, and call pthread_join at the point where you need to wait. If you do not need to wait for your thread to finish, call pthread_detach() after creation, or use the creation attributes to start the thread detached.
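For the second option mentioned above (starting the thread already detached via the creation attributes), a minimal sketch; spawnDetachedThread is just an illustrative name:

#include <pthread.h>

static void *worker(void *arg)
{
    (void)arg;
    /* do stuff */
    return NULL;
}

void spawnDetachedThread(void)
{
    pthread_t tid;                   /* can be local; it is never needed again */
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    pthread_create(&tid, &attr, worker, NULL);
    pthread_attr_destroy(&attr);
}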
pthread_t is just an identifier, and you can do whatever you like with it. Thread state is maintained internally by the C library (in the case of glibc/NPTL, in an internal struct pthread in thread-local storage, accessed on x86 via the GS register).
Problem is, your thread1 variable is the only way to refer to your first thread.
The solution I often use is an array of pthread_t in which I store the thread IDs I need to refer to. In this example it's a static array, but you can also use dynamically allocated memory.
static pthread_t running_threads[MAX_THREAD_RUNNING_LIMIT];
static unsigned int running_thread_count = 0;

// each time you create a new thread:
pthread_create(&running_threads[running_thread_count], /* ... */);
running_thread_count++;
// don't forget to check running_thread_count against the size
// of your array, MAX_THREAD_RUNNING_LIMIT
When you need to join() them, simply do it in a loop:
for (i = 0; i < running_thread_count; i++)
{
    pthread_join(running_threads[i], &return_value);
}

How can barriers be destroyable as soon as pthread_barrier_wait returns?

This question is based on:
When is it safe to destroy a pthread barrier?
and the recent glibc bug report:
http://sourceware.org/bugzilla/show_bug.cgi?id=12674
I'm not sure about the semaphores issue reported in glibc, but presumably it's supposed to be valid to destroy a barrier as soon as pthread_barrier_wait returns, as per the above linked question. (Normally, the thread that got PTHREAD_BARRIER_SERIAL_THREAD, or a "special" thread that already considered itself "responsible" for the barrier object, would be the one to destroy it.) The main use case I can think of is when a barrier is used to synchronize a new thread's use of data on the creating thread's stack, preventing the creating thread from returning until the new thread gets to use the data; other barriers probably have a lifetime equal to that of the whole program, or controlled by some other synchronization object.
In any case, how can an implementation ensure that destruction of the barrier (and possibly even unmapping of the memory it resides in) is safe as soon as pthread_barrier_wait returns in any thread? It seems the other threads that have not yet returned would need to examine at least some part of the barrier object to finish their work and return, much like how, in the glibc bug report cited above, sem_post has to examine the waiters count after having adjusted the semaphore value.
I'm going to take another crack at this with an example implementation of pthread_barrier_wait() that uses mutex and condition variable functionality as might be provided by a pthreads implementation. Note that this example doesn't try to deal with performance considerations (specifically, when the waiting threads are unblocked, they are all re-serialized when exiting the wait). I think that using something like Linux Futex objects could help with the performance issues, but Futexes are still pretty much out of my experience.
Also, I doubt that this example handles signals or errors correctly (if at all in the case of signals). But I think proper support for those things can be added as an exercise for the reader.
My main fear is that the example may have a race condition or deadlock (the mutex handling is more complex than I like). Also note that it is an example that hasn't even been compiled. Treat it as pseudo-code. Also keep in mind that my experience is mainly in Windows - I'm tackling this more as an educational opportunity than anything else. So the quality of the pseudo-code may well be pretty low.
However, disclaimers aside, I think it may give an idea of how the problem asked in the question could be handled (i.e., how the pthread_barrier_wait() function can allow the pthread_barrier_t object it uses to be destroyed by any of the released threads, without danger of the barrier object being used by one or more threads on their way out).
Here goes:
/*
* Since this is a part of the implementation of the pthread API, it uses
* reserved names that start with "__" for internal structures and functions
*
* Functions such as __mutex_lock() and __cond_wait() perform the same function
* as the corresponding pthread API.
*/
// struct __barrier_waitdata is intended to hold all the data
// that `pthread_barrier_wait()` will need after releasing
// waiting threads. This will allow the function to avoid
// touching the passed-in pthread_barrier_t object after
// the wait is satisfied (since any of the released threads
// can destroy it)
struct __barrier_waitdata {
    struct __mutex cond_mutex;
    struct __cond cond;
    unsigned waiter_count;
    int wait_complete;
};

struct __barrier {
    unsigned count;
    struct __mutex waitdata_mutex;
    struct __barrier_waitdata* pwaitdata;
};

typedef struct __barrier pthread_barrier_t;
int __barrier_waitdata_init( struct __barrier_waitdata* pwaitdata)
{
    int rc;

    pwaitdata->waiter_count = 0;
    pwaitdata->wait_complete = 0;

    rc = __mutex_init( &pwaitdata->cond_mutex, NULL);
    if (rc) {
        return rc;
    }

    rc = __cond_init( &pwaitdata->cond, NULL);
    if (rc) {
        __mutex_destroy( &pwaitdata->cond_mutex);
        return rc;
    }

    return 0;
}
int pthread_barrier_init(pthread_barrier_t *barrier, const pthread_barrierattr_t *attr, unsigned int count)
{
    int rc;

    rc = __mutex_init( &barrier->waitdata_mutex, NULL);
    if (rc) return rc;

    barrier->pwaitdata = NULL;
    barrier->count = count;

    //TODO: deal with attr

    return 0;
}
int pthread_barrier_wait(pthread_barrier_t *barrier)
{
    int rc;
    struct __barrier_waitdata* pwaitdata;
    unsigned target_count;

    // potential waitdata block (only one thread's will actually be used)
    struct __barrier_waitdata waitdata;

    // nothing to do if we only need to wait for one thread...
    if (barrier->count == 1) return PTHREAD_BARRIER_SERIAL_THREAD;

    rc = __mutex_lock( &barrier->waitdata_mutex);
    if (rc) return rc;

    if (!barrier->pwaitdata) {
        // no other thread has claimed the waitdata block yet -
        // we'll use this thread's
        rc = __barrier_waitdata_init( &waitdata);
        if (rc) {
            __mutex_unlock( &barrier->waitdata_mutex);
            return rc;
        }
        barrier->pwaitdata = &waitdata;
    }

    pwaitdata = barrier->pwaitdata;
    target_count = barrier->count;

    // all data necessary for handling the return from a wait is pointed to
    // by `pwaitdata`, and `pwaitdata` points to a block of data on the stack of
    // one of the waiting threads. We have to make sure that the thread that owns
    // that block waits until all others have finished with the information
    // pointed to by `pwaitdata` before it returns. However, after the 'big' wait
    // is completed, the `pthread_barrier_t` object that's passed into this
    // function isn't used. The last operation done to `*barrier` is to set
    // `barrier->pwaitdata = NULL` to satisfy the requirement that this function
    // leaves `*barrier` in a state as if `pthread_barrier_init()` had been called - and
    // that operation is done by the thread that signals the wait condition
    // completion before the completion is signaled.

    // note: we're still holding `barrier->waitdata_mutex`;

    rc = __mutex_lock( &pwaitdata->cond_mutex);
    pwaitdata->waiter_count += 1;

    if (pwaitdata->waiter_count < target_count) {
        // need to wait for other threads
        __mutex_unlock( &barrier->waitdata_mutex);
        do {
            // TODO: handle the return code from `__cond_wait()` to break out of this
            // if a signal makes that necessary
            __cond_wait( &pwaitdata->cond, &pwaitdata->cond_mutex);
        } while (!pwaitdata->wait_complete);
    }
    else {
        // this thread satisfies the wait - unblock all the other waiters
        pwaitdata->wait_complete = 1;

        // 'release' our use of the passed in pthread_barrier_t object
        barrier->pwaitdata = NULL;

        // unlock the barrier's waitdata_mutex - the barrier is
        // ready for use by another set of threads
        __mutex_unlock( &barrier->waitdata_mutex);

        // finally, unblock the waiting threads
        __cond_broadcast( &pwaitdata->cond);
    }

    // at this point, barrier->waitdata_mutex is unlocked, the
    // barrier->pwaitdata pointer has been cleared, and no further
    // use of `*barrier` is permitted...

    // however, each thread still has a valid `pwaitdata` pointer - the
    // thread that owns that block needs to wait until all others have
    // dropped the pwaitdata->waiter_count

    // also, at this point the `pwaitdata->cond_mutex` is locked, so
    // we're in a critical section

    rc = 0;
    pwaitdata->waiter_count--;
    if (pwaitdata == &waitdata) {
        // this thread owns the waitdata block - it needs to hang around until
        // all other threads are done

        // as a convenience, this thread will be the one that returns
        // PTHREAD_BARRIER_SERIAL_THREAD
        rc = PTHREAD_BARRIER_SERIAL_THREAD;

        while (pwaitdata->waiter_count != 0) {
            __cond_wait( &pwaitdata->cond, &pwaitdata->cond_mutex);
        }

        __mutex_unlock( &pwaitdata->cond_mutex);
        __cond_destroy( &pwaitdata->cond);
        __mutex_destroy( &pwaitdata->cond_mutex);
    }
    else {
        // every non-owner thread must release cond_mutex on its way out;
        // only the last one out needs to wake the owner
        if (pwaitdata->waiter_count == 0)
            __cond_signal( &pwaitdata->cond);
        __mutex_unlock( &pwaitdata->cond_mutex);
    }

    return rc;
}
17 July 2011: Update in response to a comment/question about process-shared barriers
I forgot completely about the situation with barriers that are shared between processes. And as you mention, the idea I outlined will fail horribly in that case. I don't really have experience with POSIX shared memory use, so any suggestions I make should be tempered with scepticism.
To summarize (for my benefit, if no one else's):
When any of the threads gets control after pthread_barrier_wait() returns, the barrier object needs to be in the 'init' state (whatever state the most recent pthread_barrier_init() on that object set it to). Also implied by the API is that once any of the threads return, one or more of the following things could occur:
another call to pthread_barrier_wait() to start a new round of synchronization of threads
pthread_barrier_destroy() on the barrier object
the memory allocated for the barrier object could be freed or unshared if it's in a shared memory region.
These things mean that before the pthread_barrier_wait() call allows any thread to return, it pretty much needs to ensure that all waiting threads are no longer using the barrier object in the context of that call. My first answer addressed this by creating a 'local' set of synchronization objects (a mutex and an associated condition variable) outside of the barrier object that would block all the threads. These local synchronization objects were allocated on the stack of the thread that happened to call pthread_barrier_wait() first.
I think that something similar would need to be done for barriers that are process-shared. However, in that case simply allocating those sync objects on a thread's stack isn't adequate (since the other processes would have no access). For a process-shared barrier, those objects would have to be allocated in process-shared memory. I think the technique I listed above could be applied similarly:
the waitdata_mutex that controls the 'allocation' of the local sync variables (the waitdata block) would be in process-shared memory already by virtue of being in the barrier struct. Of course, when the barrier is set to PTHREAD_PROCESS_SHARED, that attribute would also need to be applied to the waitdata_mutex
when __barrier_waitdata_init() is called to initialize the local mutex & condition variable, it would have to allocate those objects in shared memory instead of simply using the stack-based waitdata variable.
when the 'cleanup' thread destroys the mutex and the condition variable in the waitdata block, it would also need to clean up the process-shared memory allocation for the block.
in the case where shared memory is used, there needs to be some mechanism to ensure that the shared memory object is opened at least once in each process, and closed the correct number of times in each process (but not closed entirely before every thread in the process is finished using it). I haven't thought through exactly how that would be done...
I think these changes would allow the scheme to operate with process-shared barriers. The last bullet point above is a key item to figure out. Another is how to construct a name for the shared memory object that will hold the 'local' process-shared waitdata. There are certain attributes you'd want for that name:
you'd want the storage for the name to reside in the struct pthread_barrier_t structure so all process have access to it; that means a known limit to the length of the name
you'd want the name to be unique to each 'instance' of a set of calls to pthread_barrier_wait() because it might be possible for a second round of waiting to start before all threads have gotten all the way out of the first round waiting (so the process-shared memory block set up for the waitdata might not have been freed yet). So the name probably has to be based on things like process id, thread id, address of the barrier object, and an atomic counter.
I don't know whether or not there are security implications to having the name be 'guessable'. If so, some randomization needs to be added - no idea how much. Maybe you'd also need to hash the data mentioned above along with the random bits. Like I said, I really have no idea whether this is important or not.
As far as I can see there is no need for pthread_barrier_destroy to be an immediate operation. You could have it wait until all threads that are still in their wakeup phase have been woken up.
E.g., you could have an atomic counter, awakening, initially set to the number of threads that are woken up. It would then be decremented as the last action before pthread_barrier_wait returns, and pthread_barrier_destroy could simply spin until that counter falls to 0.
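A rough sketch of that idea, using C11 atomics for illustration (my_barrier_wait/my_barrier_destroy are hypothetical names, and the actual blocking machinery is elided):

#include <pthread.h>
#include <stdatomic.h>

struct my_barrier {
    unsigned    count;
    atomic_uint awakening;   /* threads still in their wake-up phase */
    /* ... the real wait machinery (futex, or mutex + condvar) ... */
};

int my_barrier_wait(struct my_barrier *b)
{
    /* ... block until `count` threads have arrived; the last arrival sets
       awakening = count and wakes everyone ... */
    int serial = 0;                       /* nonzero for exactly one thread */
    /* last action before returning: this thread is done with the barrier */
    atomic_fetch_sub(&b->awakening, 1);
    return serial ? PTHREAD_BARRIER_SERIAL_THREAD : 0;
}

int my_barrier_destroy(struct my_barrier *b)
{
    /* wait until every woken thread has actually left my_barrier_wait() */
    while (atomic_load(&b->awakening) != 0)
        ;                                 /* spin (or yield / futex-wait) */
    /* ... tear down the rest of the barrier state ... */
    return 0;
}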
