Is something like the following possible in C on the Linux platform?
I have a thread, say A, reading system calls (intercepting system calls) made by application processes. For each process, A creates a thread, which performs the required system call and then sleeps until A wakes it up with another system call made by its corresponding application process. When a process exits, its worker thread ceases to exist.
So it's like a number of processes converging on a thread, which then fans out to many threads, with one thread per process.
Thanks
If you are looking for some kind of thread pool implementation and are not strictly limited to C, I would recommend threadpool (which is almost Boost). It's easy to use and quite lean. The only logic you now need is catching the system event and then spawning a new task thread that will execute the call. The threadpool will keep track of all created threads and assign work to them automatically.
EDIT
Since you are limited to C, try this implementation. It looks fairly complete and rather simple, and it should basically do the job.
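For reference, here is a minimal sketch of the one-worker-per-traced-process scheme described in the question: thread A fills in a request and signals the worker, which performs the call and goes back to sleep on a condition variable. struct syscall_req and perform_syscall are placeholders, not a real API.

    #include <pthread.h>
    #include <stdbool.h>

    struct syscall_req;                          /* placeholder request type */
    void perform_syscall(struct syscall_req *);  /* placeholder, defined elsewhere */

    struct worker {
        pthread_mutex_t     lock;
        pthread_cond_t      wake;
        bool                has_work;
        bool                exiting;   /* set by A when the traced process exits */
        struct syscall_req *pending;
    };

    /* One of these threads per traced process: A fills in `pending`,
     * sets has_work and signals `wake`. */
    static void *worker_main(void *arg)
    {
        struct worker *w = arg;
        pthread_mutex_lock(&w->lock);
        while (!w->exiting) {
            while (!w->has_work && !w->exiting)
                pthread_cond_wait(&w->wake, &w->lock);
            if (w->has_work) {
                perform_syscall(w->pending);
                w->has_work = false;
            }
        }
        pthread_mutex_unlock(&w->lock);
        return NULL;
    }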
Related
I have created 10 threads (pthreads, to be precise); each thread is registered with a callback function, say fn1, fn2, ... fn10. I am also assigning a different priority to each thread with the FIFO scheduling policy. The requirement of the application is that each of these functions has to be called periodically (the periodicity varies for each thread). To implement the periodicity, I got the idea from other questions to use the itimer and sigwait methods (I am not very sure this is a good way to implement this; any other suggestions are welcome).
My question is: how do I handle SIGALRM to repeatedly call these functions in their respective threads when the periodicity varies for each thread?
Thanks in advance.
Using "Do sleep functions sleep all threads or just the one who call it?" as a reference, my advice would be to avoid SIGALRM. Signals are normally delivered to a process.
IMHO you have two ways to do that:
implement a clever monitor that knows about every thread's periodicity. It computes the time at which it must wake the next thread, sleeps until that time, wakes the thread, and continuously iterates on that. Pro: threads only wait on a semaphore or other mutex; con: the monitor is too clever for me
each thread knows its periodicity and stores its last start time. When it finishes its job, it computes how long it should wait until its next activation time and sleeps for that duration. Pro: each thread is fully independent and the implementation looks easy; con: you must ensure that, in your implementation, sleep calls only block the calling thread
I would use the 2nd solution, because the first looks like a user-level implementation of sleep in a threaded environment.
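A minimal sketch of that second approach, assuming a CLOCK_MONOTONIC clock and a placeholder do_work callback; clock_nanosleep() with TIMER_ABSTIME blocks only the calling thread and, because the deadline is absolute, the period does not drift with the time the work itself takes.

    #include <pthread.h>
    #include <time.h>

    struct periodic_arg { long period_ms; void (*do_work)(void); };

    /* Thread body passed to pthread_create: run do_work, then sleep
     * until this thread's own next absolute deadline. */
    static void *periodic_thread(void *p)
    {
        struct periodic_arg *a = p;
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (;;) {
            a->do_work();                               /* fn1, fn2, ... */
            next.tv_nsec += (a->period_ms % 1000) * 1000000L;
            next.tv_sec  += a->period_ms / 1000 + next.tv_nsec / 1000000000L;
            next.tv_nsec %= 1000000000L;
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
        return NULL;
    }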
I'm going to write a program in which the main thread creates a new thread, and then the new thread creates a child process. Since I have a hard time keeping track of the new thread and the forked process, I'd like to get a wise answer from someone.
My question is
1. Does a process created in a thread start to execute code after pthread_create?
2. If not, where does the forked process start from if a call to fork occurs in a thread?
Thank you for reading my question.
Some of this is a bit OS-dependent, as different systems have different POSIX thread implementations and this can expose internals.
POSIX offers pthread_atfork as a somewhat blunt instrument for dealing with some of the issues, but it still looks pretty messy to me.
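For completeness, the usual pthread_atfork() idiom looks roughly like this; which locks actually need this treatment depends entirely on your code, so the single lib_lock here is only illustrative.

    #include <pthread.h>

    static pthread_mutex_t lib_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Take the lock before fork() so the child never inherits it in a
     * locked state; release it again on both sides afterwards. */
    static void before_fork(void) { pthread_mutex_lock(&lib_lock); }
    static void after_fork(void)  { pthread_mutex_unlock(&lib_lock); }

    static void install_fork_handlers(void)
    {
        pthread_atfork(before_fork, after_fork /* parent */, after_fork /* child */);
    }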
If your system uses a one-to-one map between "user land thread" and "kernel thread" using clone or rfork to achieve proper user-space sharing of data between threads, then fork will merely duplicate the (single) thread that calls it. However, if your system has a many-to-many style mapping (so that one user process is handling multiple threads, at least before they enter into blocking syscalls), fork may internally duplicate multiple threads. POSIX says it should look like it only duplicated one thread, so that's not supposed to be visible, but I'm not sure how well all systems implement this.
There's some general advice at http://www.linuxprogrammingblog.com/threads-and-fork-think-twice-before-using-them (Linux-centric, obviously, but still useful).
Is there some particular reason you want to fork inside a thread but not exec? In general, if you just want to run more code in parallel, you just spin off yet another thread (i.e., once you choose to run any threads, you do everything in threads, except if you have to fork for exec; if the exec fails, just _exit).
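If the goal really is to run an external program, a hedged sketch of that fork-for-exec pattern might look like the following (run_command is just an illustrative name):

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Fork only in order to exec; if the exec fails, leave with _exit()
     * so the child never runs the parent's atexit handlers or flushes
     * its copies of the stdio buffers. */
    static int run_command(char *const argv[])
    {
        pid_t pid = fork();
        if (pid < 0)
            return -1;
        if (pid == 0) {
            execvp(argv[0], argv);
            _exit(127);              /* exec failed */
        }
        int status;
        waitpid(pid, &status, 0);    /* or collect it later */
        return status;
    }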
I am programming using pthreads in C.
I have a parent thread which needs to create 4 child threads with id 0, 1, 2, 3.
When the parent thread gets data, it will split the data and assign it to 4 separate context variables, one for each sub-thread.
The sub-threads have to process this data, and in the meantime the parent thread should wait on these threads.
Once these sub-threads are done executing, they will set the output in their corresponding context variables and wait (for reuse).
Once the parent thread knows that all these sub-threads have completed this round, it computes the global output and prints it out.
Now it waits for new data (the sub-threads are not killed yet, they are just waiting).
If the parent thread gets more data the above process is repeated - albeit with the already created 4 threads.
If the parent thread receives a kill command (assume a specific kind of data), it indicates to all the sub-threads and they terminate themselves. Now the parent thread can terminate.
I am a Masters research student and I am encountering the need for the above scenario. I know that this can be done using pthread_cond_wait and pthread_cond_signal. I have written the code, but it just runs indefinitely and I cannot figure out why.
My guess is that, the way I have coded it, I have over-complicated the scenario. It would be very helpful to know how this can be implemented. If there is a need, I can post a simplified version of my code to show what I am trying to do (even though I think that my approach is flawed!)...
Can you please give me any insights into how this scenario can be implemented using pthreads?
As far as can be seen from your description, there seems to be nothing wrong with the principle.
What you are trying to implement is a worker pool, I guess; there should be a lot of implementations out there. If the work that your threads are doing is a substantial computation (say at least a CPU second or so), such a scheme is complete overkill. Modern implementations of POSIX threads are efficient enough that they support the creation of a lot of threads, really a lot, and the overhead is not prohibitive.
The only thing that would be important, if you have your workers communicate through shared variables, mutexes, etc. (and not via the return value of the thread), is that you start your threads detached, by using the attribute parameter to pthread_create.
Once you have such an implementation for your task, measure. Only then, if your profiler tells you that you spend a substantial amount of time in the pthread routines, start thinking of implementing (or using) a worker pool to recycle your threads.
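For reference, starting a worker detached through the attribute parameter might look something like this sketch (worker_fn and its argument are placeholders):

    #include <pthread.h>

    static void *worker_fn(void *arg)
    {
        /* ... do the work, publish results through shared state ... */
        (void)arg;
        return NULL;
    }

    static int start_detached_worker(void *arg)
    {
        pthread_t tid;
        pthread_attr_t attr;
        int rc;

        pthread_attr_init(&attr);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
        rc = pthread_create(&tid, &attr, worker_fn, arg);
        pthread_attr_destroy(&attr);
        return rc;                   /* 0 on success, an error number otherwise */
    }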
One producer-consumer queue with 4 threads hanging off it. The thread that wants to queue the four tasks assembles the four context structs containing, as well as all the other data, a function pointer to an 'OnComplete' func. Then it submits all four contexts to the queue, atomically incrementing a taskCount up to 4 as it does so, and waits on an event/condvar/semaphore.
The four threads get a context from the P-C queue and work away.
When done, the threads call the 'OnComplete' function pointer.
In OnComplete, the threads atomically count down taskCount. If a thread decrements it to zero, it signals the event/condvar/semaphore and the originating thread runs on, knowing that all the tasks are done.
It's not that difficult to arrange things so that the assembly of the contexts and the synchro waiting are done in a task as well, allowing the pool to process multiple 'ForkAndWait' operations at once for multiple requesting threads.
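A rough sketch of just the count-down-and-signal part of this scheme, using a mutex/condvar pair; the queue and the context structs are omitted and the names are illustrative.

    #include <pthread.h>

    struct fork_wait {
        pthread_mutex_t lock;
        pthread_cond_t  all_done;
        int             task_count;  /* set to 4 before submitting */
    };

    /* Called from each worker's OnComplete; the worker that brings the
     * count to zero wakes the submitting thread. */
    static void on_complete(struct fork_wait *fw)
    {
        pthread_mutex_lock(&fw->lock);
        if (--fw->task_count == 0)
            pthread_cond_signal(&fw->all_done);
        pthread_mutex_unlock(&fw->lock);
    }

    /* Called by the submitting thread after queuing the four contexts. */
    static void wait_for_all(struct fork_wait *fw)
    {
        pthread_mutex_lock(&fw->lock);
        while (fw->task_count > 0)
            pthread_cond_wait(&fw->all_done, &fw->lock);
        pthread_mutex_unlock(&fw->lock);
    }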
I have to add that operations like this are a huge pile easier in an OO language. The latest Java, for example, has a 'ForkAndWait' threadpool class that should do exactly this kind of stuff, but C++ (or even C#, if you're into serfdom) is better than plain C.
As you might know, all threads in the application die in a forked process, other than the thread doing the fork. However, I plan to resurrect those threads in the forked process by calling pthread_create and using pthread_attr_setstack, so as to assign the newly created threads the same stacks as the dead threads. Something like the following.
// stackAddr and stacksize taken from the dead thread
pthread_attr_t attr;
pthread_attr_init(&attr);
pthread_attr_setstack(&attr, stackAddr, stacksize);
rc = pthread_create(&thread, &attr, threadRoutine, NULL);
However, I would still need to get the CPU register values, such as stack pointer, base pointer, instruction pointer etc, to restart threads from the same point. How can I do that? And what else do I need to do to successfully achieve my goal?
Also note that I'm using a 64-bit architecture. What additional difficulties would it have as compared to 32-bit one?
I see two possible ways to shoot yourself in the foot and lose hair^W^W^W^W^W^W^W^Wtry to do this:
Try to force each thread into calling getcontext() before the fork(), and then restore the context of each thread via setcontext(). It probably won't work, but you can try for fun (a minimal sketch of the API's shape follows below).
Save ptrace(PTRACE_GETREGS), ptrace(PTRACE_GETFPREGS), and restore with ptrace(PTRACE_SETREGS), ptrace(PTRACE_SETFPREGS).
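For the first option, the shape of the ucontext API is roughly the following; as said above it is unlikely to survive a fork() in practice, so treat this as a self-contained toy rather than a recipe.

    #include <stdio.h>
    #include <ucontext.h>

    int main(void)
    {
        ucontext_t ctx;
        volatile int resumed = 0;

        getcontext(&ctx);            /* execution comes back here after setcontext() */
        if (!resumed) {
            resumed = 1;
            puts("first pass, jumping back via setcontext()");
            setcontext(&ctx);        /* does not return on success */
        }
        puts("second pass, after the context was restored");
        return 0;
    }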
The other threads in the current process aren't killed by a fork -- they're still there and running in the parent. The problem you seem to have is that fork only forks a SINGLE thread in the current process, creating a new process running one thread with a copy of all non-thread resources in the parent.
What you apparently want is a way of duplicating an entire multithreaded task, forking all the threads in it and creating a new process/task with the same number of threads.
In order to do THAT, you would need to find and pause all the other threads in the process, dump their current state (including all locks they hold), fork a new process, and then (re)create each of those other threads in the child, rewiring the lock state to refer to the new child threads where needed.
Unfortunately, the POSIX pthread interface is hopelessly underspecified, and provides no way of doing that. In particular, it lacks any sort of reflective interface allowing you to figure out what threads are actually running.
If you want to try to do this anyway, I can see two ways of trying to approach this:
poke around in /proc/self/task to figure out what threads are running in your process, effectively getting that reflective interface in a highly non-portable way (a sketch of the enumeration part follows below). You'll likely end up having to ptrace(2) the other threads to get their internal state. This will be very difficult.
wrap the pthreads library -- instead of using the library directly, intercept every call and keep track of all the threads/mutexes/locks that get created, so that you have that information available when you want to fork. This will work fine as long as you don't want to use any third-party libraries that use pthreads.
The second option is much easier (and somewhat portable), but only works well if you have access to all the source code of your entire application, and can modify it to use your wrappers properly.
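For the first option, the Linux-specific enumeration part might look like this sketch; each directory entry under /proc/self/task is the TID of one thread in the calling process.

    #include <dirent.h>
    #include <stdio.h>

    static void list_my_threads(void)
    {
        DIR *d = opendir("/proc/self/task");
        struct dirent *e;

        if (d == NULL)
            return;
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;            /* skip "." and ".." */
            printf("thread tid: %s\n", e->d_name);
        }
        closedir(d);
    }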
Just googling around, I found that Solaris has a forkall() call that does exactly what you want; see the documentation here:
http://download.oracle.com/docs/cd/E19963-01/html/821-1601/gen-1.html
I assume you're running on Linux, but it is possible to run Solaris on x86 hardware. So maybe that is an option for you.
Does a process have to have at least one thread in it? Is it possible for a process to be void of any threads, or does this not make sense?
A process usually has at least one thread. Wikipedia has the definition:
a thread of execution is the smallest unit of processing that can be scheduled by an operating system. The implementation of threads and processes differs from one operating system to another, but in most cases, a thread is contained inside a process.
The MSDN backs this up:
A processor executes threads, not processes, so each application has at least one process, and a process always has at least one thread of execution, known as the primary thread.
Though it does go on to say:
A process can have zero or more single-threaded apartments and zero or one multithreaded apartment.
Which implies that both the number of single-threaded apartments and the number of multithreaded apartments could be zero. However, such a process wouldn't do much :)
In Unix-like operating systems, it's possible to have a zombie process, where an entry still exists in the process table even though there are no longer any threads.
You can choose not to use an explicit threading library, or an operating system that has no concept of threads (and so doesn't call it a thread), but for most modern programming all programs have at least one thread of execution (generally referred to as a main thread or UI thread or similar). If that exits, so does the process.
Thought experiment: what would a process with zero threads of execution do?
In theory, I don't see why not. But it would be impossible with the popular operating systems.
A process typically consists of a few different parts:
Threads
Memory space
File descriptors
Environment (root directory, current directory, etc.)
Privileges (UID, etc.)
Et cetera
In theory, a process could exist with no threads as an RPC server. Other processes would make RPC calls which spawn threads in the server process, and then the threads disappear when the function returns. I don't know of any operating systems that work this way.
On most OSs, the process exits either when the last thread exits, or when the main thread exits.
Note: This ignores the "useless" cases such as zombie processes, which have no threads but don't do anything.
"main" itself is thread. Its a thread that gets executed. So, every process runs on at least one thread.