When are thread function local variables allocated with POSIX? - c

I know it's a very specific question, and it's not very interesting for a high-level programmer, but I would like to know exactly when the local variables of a thread function are allocated. In other words, after
pthread_create(&thread, NULL, &function, ...)
is executed, can I say that they exist in memory or not (considering that the scheduler might not have executed the thread yet)?
I tried to search the POSIX library code, but it's not easy to understand. I got as far as the clone function, written in assembly, but then I cannot find the code of the system call service routine sys_clone to understand exactly what it does. I do see in the clone code the invocation of the thread function, but I think this should happen only in the created thread (which might not have been run by the scheduler yet when pthread_create returns) and not in the creator.

in other words after
pthread_create(&thread, NULL, &function, ...)
is executed, can I say that they exist in memory or not (considering that the scheduler might not have executed the thread yet)?
POSIX does not give you any reason for confidence that the local variables of the initial call to function function() in the created thread will have been allocated by the time pthread_create() returns. They might or might not have been, and indeed, the answer might not even be well defined inasmuch as different threads do not necessarily have a consistent view of machine state.
There is no special significance to the local variables of a thread's start function relative to the local variables of any other function called in that thread. Moreover, although pthread_create() will not return successfully until the new thread has been created, that's a separate question from whether the start function has even been entered, much less whether its local variables have been allocated.
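If the creator needs to know that the start function is actually executing (and therefore that its locals exist), it has to synchronize explicitly. Here is a minimal sketch, with invented names rather than anything from the question, in which the creator waits on a condition variable that the new thread signals once its start function has been entered:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t started = PTHREAD_COND_INITIALIZER;
static int running = 0;

static void *function(void *arg)
{
    int local = 42;               /* exists only once this stack frame is live */
    (void)arg;

    pthread_mutex_lock(&lock);
    running = 1;                  /* tell the creator we got this far */
    pthread_cond_signal(&started);
    pthread_mutex_unlock(&lock);

    printf("local = %d\n", local);
    return NULL;
}

int main(void)
{
    pthread_t thread;
    pthread_create(&thread, NULL, function, NULL);

    pthread_mutex_lock(&lock);
    while (!running)              /* only now is the start function known to have been entered */
        pthread_cond_wait(&started, &lock);
    pthread_mutex_unlock(&lock);

    pthread_join(thread, NULL);
    return 0;
}

Without synchronization of this kind, nothing about the new thread's stack can be assumed at the moment pthread_create() returns.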

Related

Stack Memory and Multithreading in C

Usually, when I have two concurrently running threads, and each thread calls the same function, there are two instances of the function running in parallel, one in each thread's stack memory. No race condition.
My question is what if I have a global struct with a function pointer.
If I run that function in each thread, from the global struct, is that a race condition? Is there only one copy of the function's variables in the application stack?
Do I need a mutex/semaphore?
I suspect it is not a race condition, because the act of calling a function through a function pointer should be effectively the same as calling the function directly.
Your suspicion is correct. If thread A calls some function, then the activation record for that call (i.e., where the local variables for the call reside) will be on thread A's stack. If thread B simultaneously calls the same function, then that activation record will be on thread B's stack.
It does not matter how either of the two threads knew which function to call. It's the same regardless of whether the address of the function was hard-wired into the code, or whether they got the function address from a "function pointer" variable in a struct.
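As a small sketch of that point (the struct and function names below are invented for illustration), two threads calling through the same function pointer in a global struct each get their own local variables, and since the struct itself is only read, no locking is needed:

#include <pthread.h>
#include <stdio.h>

struct ops { int (*square)(int); };   /* global struct holding a function pointer */

static int square(int x)
{
    int result = x * x;               /* local: lives on the calling thread's stack */
    return result;
}

static struct ops g_ops = { square };

static void *worker(void *arg)
{
    int n = *(int *)arg;
    printf("%d squared is %d\n", n, g_ops.square(n));   /* call through the pointer */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    int x = 3, y = 4;
    pthread_create(&a, NULL, worker, &x);
    pthread_create(&b, NULL, worker, &y);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}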
If I run that function simultaneously in each thread, from the global
struct, is that a race condition?
The rule is: if two (or more) threads can both access the same data, and at least one of the threads might modify the data, then there is a race condition. In that situation, you can avoid the race condition by having each thread lock a mutex before accessing the data, and unlock the mutex afterwards, so that it is guaranteed that no other thread will modify the data while that thread is reading and/or modifying it.
If both threads are only reading the data and will never modify it, then no mutex is necessary.
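For instance (a minimal sketch with an invented shared counter, not code from the question), two threads that both modify shared data wrap every access in the same mutex:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                               /* shared, modified by both threads */
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);             /* lock before touching shared data */
        counter++;
        pthread_mutex_unlock(&counter_lock);           /* unlock afterwards */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("%ld\n", counter);                          /* always 200000 with the mutex */
    return 0;
}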
Is there only one copy of the function's variables in the application
stack?
I don't know what "the application stack" is, but if you are asking if there is only one copy of the global struct, the answer is yes.
If by "the function's variables" you mean local variables that are declared inside the function body -- those are located separately in each thread's stack and not shared across threads (although if the local variable is a pointer, the object it points to might be shared).

Clarifying how GNU C Library defines nonreentrant functions

Taken from: https://www.gnu.org/software/libc/manual/html_node/Nonreentrancy.html
For example, suppose that the signal handler uses gethostbyname. This function returns its value in a static object, reusing the same object each time. If the signal happens to arrive during a call to gethostbyname, or even after one (while the program is still using the value), it will clobber the value that the program asked for.
I fail to see how the above scenario is non-reentrant. It seems to me that gethostbyname is a (read-only) getter function that merely reads from memory (as opposed to modifying memory). Why is gethostbyname non-reentrant?
As the word says, reentrancy is the ability of a function to be called again while an earlier call to it is still in progress, whether from a signal handler or from another thread. The scenario you quote is exactly where reentrancy matters. Assume the function has some static or global variable, as gethostbyname(3) does: while the return buffer is being written by one call, another call can overwrite it and completely destroy the first result. When the interrupted instance of the function (not the interrupting one) gets control again, all its data has been overwritten and destroyed by the interrupting call.
A common solution to this problem with interruptions is to disable interrupts (or block the signal) while the function is executing, so that it cannot be interrupted by a new call to itself.
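In POSIX userland the equivalent of "disabling interrupts" is to block the signal around the non-reentrant call, for example with sigprocmask(). A rough sketch, assuming a handler for SIGALRM is the thing that might re-enter:

#include <signal.h>
#include <netdb.h>

/* Block SIGALRM while calling the non-reentrant gethostbyname(),
 * so a handler that also calls it cannot run in the middle. */
void lookup_safely(const char *name)
{
    sigset_t block, old;
    sigemptyset(&block);
    sigaddset(&block, SIGALRM);

    sigprocmask(SIG_BLOCK, &block, &old);   /* "disable" the interrupting signal */
    struct hostent *he = gethostbyname(name);
    (void)he;                               /* use the result while the signal is blocked */
    sigprocmask(SIG_SETMASK, &old, NULL);   /* restore the previous mask */
}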
If two threads call the same piece of code, and all the parameters and local variables are stored on the stack, each thread has its own copy of that data, so there's no problem calling it from both at the same time: the data they touch lives on different stacks. This does not hold for static variables, whether they have local scope, compilation-unit scope or global scope (remember that the problem arises from calling the same piece of code, so whatever one call has access to, the other has access to as well).
Static data, like the buffers used by the stdio package, generally means the routines will not be reentrant.
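Where the library offers one, the fix is to use a reentrant variant that takes caller-supplied storage instead of a static buffer. A sketch using glibc's gethostbyname_r() (a GNU extension; the buffer size here is an arbitrary choice):

#define _GNU_SOURCE          /* gethostbyname_r is a GNU extension in glibc */
#include <netdb.h>
#include <stdio.h>

/* Reentrant lookup: the caller owns all the storage, so concurrent or
 * signal-interrupted calls cannot clobber each other's results. */
void print_official_name(const char *name)
{
    struct hostent he, *result;
    char buf[2048];           /* scratch space for the strings and addresses */
    int herr;

    if (gethostbyname_r(name, &he, buf, sizeof buf, &result, &herr) == 0 && result)
        printf("%s -> %s\n", name, result->h_name);
}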

c - pthread_create identifier

The first argument of pthread_create() is a thread object which is used to identify the newly-created thread. However, I'm not sure I completely understand the implications of this.
For instance, I am writing a simple chat server and I plan on using threads. Threads will be coming and going at all times, so keeping track of thread objects could be complicated. However, I don't think I should need to identify individual threads. Could I simply use the same thread object for the first argument of pthread_create() over and over again, or are there other ramifications for this?
If you throw away the thread identifiers by overwriting the same variable with the ID of each thread you create, you'll not be able to use pthread_join() to collect the exit status of threads. So, you may as well make the threads detached (non-joinable) when you call pthread_create().
If you don't make the threads detached, threads that have exited will continue to hold some resources until they are joined, so continually creating non-detached threads that exit and are never joined will use up system resources: a memory leak.
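A minimal sketch of that approach (the helper name is invented): create each thread already detached, so its resources are released automatically when it exits and the thread ID can simply be discarded:

#include <pthread.h>

/* Create a thread that is already detached, so pthread_join() is never needed. */
int spawn_detached(void *(*start)(void *), void *arg)
{
    pthread_t tid;            /* we never need this ID again */
    pthread_attr_t attr;
    int rc;

    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    rc = pthread_create(&tid, &attr, start, arg);
    pthread_attr_destroy(&attr);
    return rc;
}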
Read the manual at http://pubs.opengroup.org/onlinepubs/009695399/functions/pthread_create.html
According to it:
"Upon successful completion, pthread_create() shall store the ID of the created thread in the location referenced by thread."
I think pthread_create just overwrites the value in the first argument. It does not read it and doesn't care what is inside it. So you get a new thread from each call to pthread_create, but you can't make it reuse an existing thread. If you would like to reuse your threads, that is more complicated.

Can two Threads use same Thread Procedure?

Is it possible for two threads to use a single function "ThreadProc" as their thread procedure when CreateThread() is used?
HANDLE thread1 = CreateThread(
    NULL,                                 // Choose default security
    0,                                    // Default stack size
    (LPTHREAD_START_ROUTINE)&ThreadProc,  // Routine to execute. I want this routine to be different each time, as I want each thread to perform a different functionality.
    (LPVOID)&i,                           // Thread parameter
    0,                                    // Immediately run the thread
    &dwThreadId);                         // Thread Id

HANDLE thread2 = CreateThread(
    NULL,                                 // Choose default security
    0,                                    // Default stack size
    (LPTHREAD_START_ROUTINE)&ThreadProc,  // Routine to execute. I want this routine to be different each time, as I want each thread to perform a different functionality.
    (LPVOID)&i,                           // Thread parameter
    0,                                    // Immediately run the thread
    &dwThreadId);                         // Thread Id
Would the above code create two threads, each with the same functionality (since the thread procedure for both threads is the same)? Am I doing it correctly?
If it is possible, would there be any synchronization issues, since both threads use the same thread procedure?
Please help me with this. I am really confused and could not find anything about it on the internet.
It is fine to use the same function as a thread entry point for multiple threads.
However, from the posted code the address of i is being passed to both threads. If either thread modifies this memory while the other reads it, there is a race condition on i. Without seeing the declaration of i, it is probably a local variable. That is dangerous, because the threads require that i exist for their entire lifetime; if it does not, the threads will have a dangling pointer. It is common practice to dynamically allocate thread arguments and have each thread free its own arguments, as sketched below.
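A sketch of that practice under the same Win32 API (names invented, not the question's code): each thread gets its own heap-allocated argument and frees it itself, so there is no shared i and no lifetime problem:

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Each thread receives its own heap-allocated copy of the argument. */
static DWORD WINAPI ThreadProc(LPVOID param)
{
    int value = *(int *)param;
    free(param);                        /* the thread owns and frees its argument */
    printf("got %d\n", value);
    return 0;
}

int main(void)
{
    HANDLE threads[2];
    for (int n = 0; n < 2; n++) {
        int *arg = malloc(sizeof *arg);
        *arg = n;
        threads[n] = CreateThread(NULL, 0, ThreadProc, arg, 0, NULL);
    }
    WaitForMultipleObjects(2, threads, TRUE, INFINITE);
    for (int n = 0; n < 2; n++)
        CloseHandle(threads[n]);
    return 0;
}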
Yes, it is very well possible to have multiple (concurrent) threads that start with the same entry point.
Apart from the fact that the OS/threading library specifies the signature and calls it, there is nothing special about a thread entry point function. It can be used to start off multiple threads with the same caveats as for calling any other function from multiple threads: you need synchronization to access non-atomic shared variables.
Each thread uses its own stack area, but that gets allocated by the OS before the Thread Procedure get invoked, so by the time the Thread Procedure gets called all the special actions that are needed to create and start a new thread have already taken place.
Whether the threads are using the same code or not is irrelevant. It has no effect whatsoever on synchronization. It behaves precisely the same as if they were different functions. The issues with potential races is the same.
You probably don't want to pass both threads the same pointers. That will likely lead to data races. (Though we'd have to see the code to know for sure.)
Your code is right. There are no synchronization issues between the two threads just because they use the same thread procedure. If they need synchronization, it is because they change the same global variable, not because they use the same thread procedure.

pthread_exit() with address of local variable sometimes works?

I once ran into trouble with pthread_exit(). I know there is no way to use pthread_exit() in a way like
pthread_exit(&some_local_variable);
We always need to use pthread_exit() like:
pthread_exit("Thread Exit Message or something necessary information");
I once wrote a simple program for testing purposes.
I made four thread functions for the addition, subtraction, multiplication and division of two integers, respectively. Then, while performing these operations on four different threads, I tried to return the result of each operation with pthread_exit(). What I mean is something like:
pthread_exit(&add_result);
When I ran the code on CentOS 6, I got the expected result (i.e., garbage values from all the threads), since pthread_exit() cannot be used like that. But I got confused, because the first time I ran that code, on Ubuntu 11.10, I got absolutely correct results (the correct result of the operation) from three threads and a garbage value from one thread. This confused me: why were three threads giving the correct result of the operation?
Moreover, I used different sleep times for those threads. I found that the thread with the least sleep time gave the garbage value.
As gcc is the compiler on both of these operating systems, why does one system have bugs like this?
It confuses novice programmers like me. If it is not a bug, can anyone explain to me why this is happening?
I think your answer is in the pthread_exit documentation. You say that you returned a pointer to add_result, which seems to be a local variable.
Here is the quote from the documentation that answers it:
After a thread has terminated, the result of access to local (auto) variables of the thread is undefined. Thus, references to local variables of the exiting thread should not be used for the pthread_exit() value_ptr parameter value.
You may use the void* argument of the thread function to pass in a structure, which should contain the actual result of your operation.
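A minimal sketch of that idea (the struct and names are invented for illustration): the caller owns the struct, the thread writes its result into it, and the caller reads it after pthread_join():

#include <pthread.h>
#include <stdio.h>

struct add_task {
    int a, b;
    int result;               /* filled in by the thread, read after pthread_join() */
};

static void *add(void *arg)
{
    struct add_task *t = arg;
    t->result = t->a + t->b;  /* store the result in caller-owned memory, not a local */
    return NULL;
}

int main(void)
{
    struct add_task task = { 2, 3, 0 };
    pthread_t tid;

    pthread_create(&tid, NULL, add, &task);
    pthread_join(tid, NULL);
    printf("%d + %d = %d\n", task.a, task.b, task.result);
    return 0;
}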
pthread_exit just takes a void pointer. If you pass the address of a variable local to the thread, sometimes that memory will already have been reused for something else, and sometimes it will still be there. There's no guarantee that after a thread exits, some part of the system will go and make sure that all of the memory it was using is set to garbage values.
It's not a bug - the system is doing exactly what you ask it.
Bonus related answer - Can a local variable's memory be accessed outside its scope?
The only requirement for pthread_exit(foo) is that foo points to something which lives long enough. Local variables don't, malloc'ed memory does.
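For completeness, a sketch of the malloc route (invented names): the thread passes heap memory to pthread_exit(), and the joiner reads and frees it:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static void *add(void *arg)
{
    (void)arg;
    int *result = malloc(sizeof *result);   /* lives long enough: heap, not stack */
    *result = 2 + 3;
    pthread_exit(result);                   /* OK: this memory outlives the thread */
}

int main(void)
{
    pthread_t tid;
    void *ret;

    pthread_create(&tid, NULL, add, NULL);
    pthread_join(tid, &ret);                /* receives the pointer passed to pthread_exit */
    printf("sum = %d\n", *(int *)ret);
    free(ret);
    return 0;
}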
