Does the function "pthread_create" start the thread (i.e., begin executing its thread function), or does it just create the thread and make it wait for the right moment to start?
pthread_create creates the thread (using the clone syscall internally) and returns the tid (thread ID, analogous to a pid). So, by the time pthread_create returns, the new thread has at least been created. But there is no guarantee about when it will start executing.
From the man page:
http://man7.org/linux/man-pages/man3/pthread_create.3.html
Unless real-time scheduling policies are being employed, after a call to pthread_create(), it is indeterminate which thread—the caller or the new thread—will next execute.
POSIX has a similar comment in the informative description of pthread_create: http://pubs.opengroup.org/onlinepubs/009695399/functions/pthread_create.html
There is no requirement on the implementation that the ID of the created thread be available before the newly created thread starts executing.
There is also a long "Rationale" section explaining why pthread_create is a single-step operation, without separate thread-creation and start-execution steps (as in the old Java API, where new Thread() and start() are separate calls):
A suggested alternative to pthread_create() would be to define two separate operations: create and start. Some applications would find such behavior more natural. Ada, in particular, separates the "creation" of a task from its "activation".
Splitting the operation was rejected by the standard developers for many reasons:
The number of calls required to start a thread would increase from one to two and thus place an additional burden on applications that do not require the additional synchronization. The second call, however, could be avoided by the additional complication of a start-up state attribute.
An extra state would be introduced: "created but not started". This would require the standard to specify the behavior of the thread operations when the target has not yet started executing.
For those applications that require such behavior, it is possible to simulate the two separate steps with the facilities that are currently provided. The start_routine() can synchronize by waiting on a condition variable that is signaled by the start operation.
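To make that last point concrete, here is a minimal sketch of simulating the two-step create/start with a condition variable, as the rationale suggests. All names are illustrative, not part of any standard API:

    #include <pthread.h>
    #include <stdio.h>

    /* Hypothetical "created but not started" gate: the thread function
     * parks on a condition variable until the creator flips a flag. */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  go   = PTHREAD_COND_INITIALIZER;
    static int started = 0;

    static void *start_routine(void *arg) {
        pthread_mutex_lock(&lock);
        while (!started)                  /* "created but not started" */
            pthread_cond_wait(&go, &lock);
        pthread_mutex_unlock(&lock);

        puts("thread is now running");    /* real work starts here */
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, start_routine, NULL);  /* "create" */

        /* ... arbitrary setup work; the new thread stays parked ... */

        pthread_mutex_lock(&lock);                        /* "start" */
        started = 1;
        pthread_cond_signal(&go);
        pthread_mutex_unlock(&lock);

        pthread_join(tid, NULL);
        return 0;
    }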
You may use RT scheduling, or just add some synchronization in the created thread to get exact information about its execution. In some cases it can also be useful to manually bind the thread to a specific CPU core using pthread_setaffinity_np.
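For example, a minimal sketch of such pinning (non-portable; glibc with _GNU_SOURCE), with the thread binding itself to core 0 to avoid any race with the creator:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    /* The thread pins itself to CPU core 0 and reports where it runs. */
    static void *worker(void *arg) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                 /* allow core 0 only */
        pthread_setaffinity_np(pthread_self(), sizeof set, &set);
        printf("pinned; now on CPU %d\n", sched_getcpu());
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        pthread_join(tid, NULL);
        return 0;
    }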
It creates the thread, and the new thread enters the ready queue. When it gets its time slice from the scheduler, it starts to run.
How early it gets to run depends on the thread's priority and the number of threads it is competing against, among other factors.
Related
I am trying to achieve the following:
Force the newly created thread to start running, immediately after pthread_create(). No real-time scheduling is being used.
From the pthread_create() man page:
Unless real-time scheduling policies are being employed, after a call to pthread_create(), it is indeterminate which thread—the caller or the new thread—will next execute.
Which of course makes sense. Thus, I thought that by calling pthread_yield() I would force the newly created thread to take over and, as a result, start. But this is not the case.
I could only achieve the desired result by sleeping after pthread_create(). But I don't want to rely on that solution at the moment.
Why can't I achieve my goal with pthread_yield()?
Is there some other way than using sleep?
Is the creation of new threads handled the same way as task switching, i.e., does it follow the scheduling policy? For example, under real-time (preemptive) scheduling, if the newly created thread has a higher priority, will it immediately preempt the current thread?
Related post:
Does pthread_create starting thread?
pthread_mutex not updating fast enough, so one thread "hogs" the lock.
Thanks!
If you are on a multi-core system, then it is possible that your new thread is scheduled on a core different from the one running the thread that created it. Calling pthread_yield() may not have the desired effect, since it may only affect scheduling on the caller's core, and not on any other core. The usual effect is to place the calling thread at the end of its runnable queue. (It is also worth noting that pthread_yield() is not standardized, so there is no standard reference regarding its intended behavior.)
Calling sleep() may yield a different result if the sleep time is non-zero. The thread is actually placed in a timer wake-up queue, and must be moved back to the runnable queue after the timer expires. This will make it more likely that a new thread on a different core will run before the creating thread wakes back up.
If a new thread has a higher priority than the thread that created it, it will preempt the creating thread.
As recommended in the comments, predictable behavior can be achieved by making the creating thread wait on a condition variable that is signaled by the newly created thread.
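A minimal sketch of that handshake (names are illustrative): the creator blocks until the new thread confirms it is running, instead of guessing with pthread_yield() or sleep():

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock    = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  running = PTHREAD_COND_INITIALIZER;
    static int thread_running = 0;

    static void *worker(void *arg) {
        pthread_mutex_lock(&lock);
        thread_running = 1;
        pthread_cond_signal(&running);    /* tell the creator we are live */
        pthread_mutex_unlock(&lock);

        /* ... real work ... */
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);

        pthread_mutex_lock(&lock);
        while (!thread_running)           /* creator parks itself here */
            pthread_cond_wait(&running, &lock);
        pthread_mutex_unlock(&lock);

        puts("new thread has definitely started");
        pthread_join(tid, NULL);
        return 0;
    }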
(Working with the Win32 API, in a C environment with VS2010)
I have a two-thread app. The first thread spawns the second, waits for a given interval ('TIMEOUT'), and then calls TerminateThread() on it.
Meanwhile, second thread calls NetServerEnum().
It appears that when the timeout is reached, whether NetServerEnum returned successfully or not, the first thread gets deadlocked.
I've already noticed that NetServerEnum creates worker threads of its own.
I ultimately end up with one of those threads deadlocked, typically in ntdll.dll!RtlInitializeExceptionChain, unable to exit my process gracefully.
As this is too long for a comment:
Verbatim from MSDN, allow me to use the answer form (emphasis by me):
TerminateThread is a dangerous function that should only be used in the most extreme cases. You should call TerminateThread only if you know exactly what the target thread is doing, and you control all of the code that the target thread could possibly be running at the time of the termination. For example, TerminateThread can result in the following problems:
If the target thread owns a critical section, the critical section will not be released.
If the target thread is allocating memory from the heap, the heap lock will not be released.
If the target thread is executing certain kernel32 calls when it is terminated, the kernel32 state for the thread's process could be inconsistent.
If the target thread is manipulating the global state of a shared DLL, the state of the DLL could be destroyed, affecting other users of the DLL.
From reading this it is easy to understand why it is a bad idea to cancel (terminate) a thread that is stuck in a system call.
A possible alternative approach to the OP's design might be to spawn off a thread calling NetServerEnum() and simply let it run until the system call returns.
In the meantime, the main thread could do other things, for example informing the user that scanning the net is taking longer than expected.
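A rough sketch of that approach (hypothetical structure, error handling trimmed; link against Netapi32.lib):

    #include <windows.h>
    #include <lm.h>        /* NetServerEnum, NetApiBufferFree */
    #include <stdio.h>

    /* Enumeration runs in its own thread and is simply allowed to
     * finish; it is never terminated from the outside. */
    static DWORD WINAPI enum_thread(LPVOID param) {
        SERVER_INFO_101 *buf = NULL;
        DWORD entries = 0, total = 0;
        NET_API_STATUS st;

        (void)param;
        st = NetServerEnum(NULL, 101, (LPBYTE *)&buf, MAX_PREFERRED_LENGTH,
                           &entries, &total, SV_TYPE_ALL, NULL, NULL);
        if (st == NERR_Success && buf != NULL)
            NetApiBufferFree(buf);
        return (DWORD)st;
    }

    int main(void) {
        HANDLE h = CreateThread(NULL, 0, enum_thread, NULL, 0, NULL);
        if (h == NULL) return 1;

        /* Wait in slices instead of calling TerminateThread() on timeout. */
        while (WaitForSingleObject(h, 2000 /* ms */) == WAIT_TIMEOUT)
            puts("still scanning the network, this can take a while...");

        CloseHandle(h);
        return 0;
    }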
I am programming using pthreads in C.
I have a parent thread which needs to create 4 child threads with id 0, 1, 2, 3.
When the parent thread gets data, it will split the data and assign it to 4 separate context variables, one for each sub-thread.
The sub-threads have to process this data, and in the meantime the parent thread should wait on these threads.
Once these sub-threads have done executing, they will set the output in their corresponding context variables and wait(for reuse).
Once the parent thread knows that all these sub-threads have completed this round, it computes the global output and prints it out.
Now it waits for new data (the sub-threads are not killed yet; they are just waiting).
If the parent thread gets more data, the above process is repeated, albeit with the already created 4 threads.
If the parent thread receives a kill command (assume a specific kind of data), it indicates to all the sub-threads and they terminate themselves. Now the parent thread can terminate.
I am a Masters research student and I am encountering the need for the above scenario. I know that this can be done using pthread_cond_wait and pthread_cond_signal. I have written the code, but it just runs indefinitely and I cannot figure out why.
My guess is that, the way I have coded it, I have over-complicated the scenario. It would be very helpful to know how this can be implemented. If there is a need, I can post a simplified version of my code to show what I am trying to do (even though I think that my approach is flawed!)...
Can you please give me any insights into how this scenario can be implemented using pthreads?
As far as can be seen from your description, there seems to be nothing wrong with the principle.
What you are trying to implement is a worker pool, I guess; there should be a lot of implementations out there. If the work that your threads are doing is a substantial computation (say at least a CPU second or so), such a scheme is complete overkill. Modern implementations of POSIX threads are efficient enough that they support the creation of a lot of threads, really a lot, and the overhead is not prohibitive.
The only thing that is important, if you have your workers communicate through shared variables, mutexes, etc. (and not via the return value of the thread), is that you start your threads detached, using the attribute parameter to pthread_create.
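For illustration, a minimal self-contained sketch of creating a detached thread via the attribute parameter (names are illustrative; the sleep at the end is just a crude way to keep the demo process alive long enough):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *worker(void *arg) {
        printf("worker %ld running\n", (long)arg);
        return NULL;   /* no join: resources are reclaimed automatically */
    }

    int main(void) {
        pthread_attr_t attr;
        pthread_t tid;

        pthread_attr_init(&attr);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
        pthread_create(&tid, &attr, worker, (void *)1L);
        pthread_attr_destroy(&attr);

        sleep(1);      /* crude: give the detached worker time to finish */
        return 0;
    }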
Once you have such an implementation for your task, measure. Only then, if your profiler tells you that you spend a substantial amount of time in the pthread routines, start thinking of implementing (or using) a worker pool to recycle your threads.
One producer-consumer queue with 4 threads hanging off it. The thread that wants to queue the four tasks assembles the four context structs containing, as well as all the other data, a function pointer to an 'OnComplete' function. Then it submits all four contexts to the queue, atomically incrementing a taskCount up to 4 as it does so, and waits on an event/condvar/semaphore.
The four threads get a context from the P-C queue and work away.
When done, the threads call the 'OnComplete' function pointer.
In OnComplete, the threads atomically count down taskCount. The thread that decrements it to zero signals the event/condvar/semaphore, and the originating thread runs on, knowing that all the tasks are done.
It's not that difficult to arrange it so that the assembly of the contexts and the synchro waiting is done in a task as well, so allowing the pool to process multiple 'ForkAndWait' operations at once for multiple requesting threads.
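A reduced sketch of the countdown part (hypothetical names; it creates a thread per task instead of a real producer-consumer queue, to keep the focus on the taskCount/OnComplete handshake):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define NTASKS 4

    typedef struct task_ctx task_ctx;
    struct task_ctx {
        int  input, output;
        void (*on_complete)(task_ctx *);
    };

    static pthread_mutex_t done_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  all_done  = PTHREAD_COND_INITIALIZER;
    static atomic_int      task_count;

    static void on_complete(task_ctx *ctx) {
        (void)ctx;
        /* the last finisher wakes the originating thread */
        if (atomic_fetch_sub(&task_count, 1) == 1) {
            pthread_mutex_lock(&done_lock);
            pthread_cond_signal(&all_done);
            pthread_mutex_unlock(&done_lock);
        }
    }

    static void *worker(void *arg) {
        task_ctx *ctx = arg;
        ctx->output = ctx->input * ctx->input;   /* stand-in for real work */
        ctx->on_complete(ctx);
        return NULL;
    }

    int main(void) {
        pthread_t tids[NTASKS];
        task_ctx  ctx[NTASKS];

        atomic_store(&task_count, NTASKS);
        pthread_mutex_lock(&done_lock);          /* lock before submitting */
        for (int i = 0; i < NTASKS; i++) {
            ctx[i] = (task_ctx){ .input = i, .on_complete = on_complete };
            pthread_create(&tids[i], NULL, worker, &ctx[i]);
        }
        while (atomic_load(&task_count) != 0)    /* wait for the countdown */
            pthread_cond_wait(&all_done, &done_lock);
        pthread_mutex_unlock(&done_lock);

        for (int i = 0; i < NTASKS; i++)
            pthread_join(tids[i], NULL);
        for (int i = 0; i < NTASKS; i++)
            printf("task %d -> %d\n", i, ctx[i].output);
        return 0;
    }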
I have to add that operations like this are a great deal easier in an OO language. Recent Java, for example, has a fork/join thread pool (ForkJoinPool) that does exactly this kind of stuff, but C++ (or even C#, if you're into serfdom) is better than plain C for this.
I am trying to implement a checkpointing scheme for multithreaded applications by using fork. I will take the checkpoint at a safe location such as a barrier. One thread will call fork to replicate the address space and signals will be sent to all other threads so that they can save their contexts and write it to a file.
The forked process will not run initially. Only when a restart from the checkpoint is required will a signal be sent to it so that it can start running. At that point, the threads that were not forked but whose contexts were saved will be recreated from the saved contexts.
My first question is whether it is enough to recreate the threads from the saved contexts and run them from there, assuming no lock was held, no signal was pending during the checkpoint, and so on. Lastly, how can a thread be created so that it runs from a known context?
What you want is not possible without major integration with the pthreads implementation. Internal thread structures will likely contain their own kernel-space thread ids, which will be different in the restored contexts.
It sounds to me like what you really want is forkall, which is non-trivial to implement. I don't think barriers are useful at all for what you're trying to accomplish. Asynchronous interruption and checkpointing is just as good as synchronized.
If you want to try hacking forkall into glibc, you should start out by looking at the setxid code NPTL uses for synchronizing setuid() calls between threads using signals. The same principle is what's needed to implement forkall, but you'd basically call setjmp instead of setuid in the signal handlers, and then longjmp back into them after making new threads in the child. After that you'd have to patch up the thread structures to have the right pid/tid values, free the excess new stacks that were created, etc.
Edit: Since the setxid code in glibc/NPTL is rather dense reading for someone not familiar with the codebase, you might instead look at the corresponding code I have in musl, called __synccall:
http://git.etalabs.net/cgi-bin/gitweb.cgi?p=musl;a=blob;f=src/thread/synccall.c;h=91ac5eb77322da7393f778da29d35fb3c2def15d;hb=HEAD
It uses a signal to synchronize all threads, then runs a callback sequentially in each thread one-by-one. To implement forkall, you'd want to do something like this prior to the fork, but instead of a callback, simply save jump buffers for each thread except the calling thread (you can't use a callback for this because the return would invalidate the jump buffer you just saved), then perform the fork from the calling thread. After that, you would make N new threads, and have them jump back to the old threads' saved jump buffers, and destroy their new (unneeded) stacks. You'd also need to make the right syscall to update their thread register (e.g. %gs on x86) and tid address.
Then you need to take these ideas and integrate them with glibc's thread allocation and thread stack cache framework. :-)
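To make the signal-synchronization half of this concrete, here is a heavily reduced, hypothetical sketch of only the capture phase: no fork, no siglongjmp restore, no tid fixups. Note that sigsetjmp inside a signal handler is not formally guaranteed to be async-signal-safe (and the atomic assumes a lock-free atomic_int), so treat this purely as an illustration of the mechanism:

    #include <pthread.h>
    #include <semaphore.h>
    #include <setjmp.h>
    #include <signal.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define NWORKERS 2

    static sigjmp_buf saved_ctx[NWORKERS];
    static atomic_int next_slot;
    static sem_t ctx_saved;            /* one post per captured thread */

    static void capture_handler(int sig) {
        int slot = atomic_fetch_add(&next_slot, 1);
        (void)sig;
        if (sigsetjmp(saved_ctx[slot], 1) == 0)
            sem_post(&ctx_saved);      /* async-signal-safe report */
        /* A nonzero return would mean a replacement thread in the
         * child jumped back here via siglongjmp. */
    }

    static void *worker(void *arg) {
        (void)arg;
        for (;;) pause();              /* idle; wakes for each signal */
        return NULL;
    }

    int main(void) {
        pthread_t tids[NWORKERS];
        struct sigaction sa;

        memset(&sa, 0, sizeof sa);
        sa.sa_handler = capture_handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGUSR1, &sa, NULL);

        sem_init(&ctx_saved, 0, 0);
        for (int i = 0; i < NWORKERS; i++)
            pthread_create(&tids[i], NULL, worker, NULL);

        for (int i = 0; i < NWORKERS; i++)
            pthread_kill(tids[i], SIGUSR1);     /* interrupt each thread */
        for (int i = 0; i < NWORKERS; i++)
            sem_wait(&ctx_saved);               /* all contexts captured */

        puts("all thread contexts saved; a forkall would fork here");
        return 0;                    /* process exit reaps the idle workers */
    }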
Is something like the following possible in C on Linux platform:
I have a thread, say A, reading (intercepting) system calls made by application processes. For each process, A creates a thread which performs the required system call and then sleeps until A wakes it up with another system call made by its corresponding application process. When a process exits, its worker thread ceases to exist.
So it's like a number of processes converging on one thread, which then fans out into many threads, with one worker thread per process.
Thanks
If you are looking for some kind of thread pool implementation and are not strictly limited to C, I would recommend threadpool (which is almost Boost). It's easy to use and quite lean. The only logic you now need is catching the system event and then spawning a new task thread that will execute the call. The threadpool will keep track of all created threads and assign work to them automatically.
EDIT
Since you are limited to C, try this implementation. It looks fairly complete and rather simple, and it will basically do the job.
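For reference, a hypothetical sketch of the fan-out design described in the question, in plain C with pthreads: one worker per process, each sleeping on its own condition variable until the dispatcher hands it another request. All names are made up, and a real version would queue requests rather than overwrite a single slot:

    #include <pthread.h>
    #include <stdio.h>

    typedef struct {
        pthread_t       tid;
        pthread_mutex_t lock;
        pthread_cond_t  wake;
        int             pending;   /* a request is waiting */
        int             request;   /* stand-in for the intercepted syscall */
        int             done;      /* dispatcher tells the worker to exit */
    } worker_t;

    static void *worker_main(void *arg) {
        worker_t *w = arg;
        pthread_mutex_lock(&w->lock);
        for (;;) {
            while (!w->pending && !w->done)
                pthread_cond_wait(&w->wake, &w->lock);  /* sleep till woken */
            if (w->done) break;
            printf("performing syscall %d\n", w->request);
            w->pending = 0;
        }
        pthread_mutex_unlock(&w->lock);
        return NULL;
    }

    /* Dispatcher side: hand a request to the worker and wake it.
     * (Simplification: a second submit before the first is consumed
     * would overwrite it; a real version needs a queue.) */
    static void submit(worker_t *w, int request) {
        pthread_mutex_lock(&w->lock);
        w->request = request;
        w->pending = 1;
        pthread_cond_signal(&w->wake);
        pthread_mutex_unlock(&w->lock);
    }

    int main(void) {
        worker_t w = { .lock = PTHREAD_MUTEX_INITIALIZER,
                       .wake = PTHREAD_COND_INITIALIZER };
        pthread_create(&w.tid, NULL, worker_main, &w);

        submit(&w, 42);                /* intercepted syscall arrives */
        submit(&w, 43);

        pthread_mutex_lock(&w.lock);   /* "process exited": retire worker */
        w.done = 1;
        pthread_cond_signal(&w.wake);
        pthread_mutex_unlock(&w.lock);
        pthread_join(w.tid, NULL);
        return 0;
    }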