Call functions of the created threads periodically (Manual scheduling) - c

I have created 10 threads (pthreads to be precise); each thread is registered with a callback function, say fn1, fn2, ..., fn10. I am also assigning a different priority to each thread, with the FIFO scheduling policy. The requirement of the application is that each of these functions has to be called periodically (the periodicity varies per thread). To implement the periodicity, I got the idea from other questions to use itimer and sigwait (I am not sure this is a good way to implement it; any other suggestions are welcome).
My question is: how do I handle SIGALRM so that these functions are called repeatedly in their respective threads when the periodicity varies for each thread?
Thanks in advance.

Using "Do sleep functions sleep all threads or just the one who calls it?" as a reference, my advice would be to avoid SIGALRM. Signals are normally delivered to the process as a whole, not to a particular thread.
IMHO you have two ways to do that:
implement a clever monitor that knows the periodicity of every thread. It computes the time at which it must next wake a thread, sleeps until that time, wakes the thread, and iterates on that continuously. Pro: threads only wait on a semaphore or other mutex; con: the monitor is too clever for my taste.
each thread knows its own periodicity and stores its last start time. When it finishes its job, it computes how long it should wait until the next activation time and sleeps for that duration. Pro: each thread is fully independent and the implementation looks easy; con: you must ensure that, in your implementation, sleep calls block only the calling thread.
I would use the 2nd solution, because the first looks like a user-level implementation of sleep in a threaded environment.
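To illustrate the second option, here is a minimal sketch, assuming a Linux/POSIX target; periodic_thread, periodic_arg and the callback pointer are placeholder names, not part of the original question. clock_nanosleep with TIMER_ABSTIME blocks only the calling thread and keeps the period from drifting:

    #include <pthread.h>
    #include <time.h>

    struct periodic_arg {
        long period_ns;        /* this thread's period, in nanoseconds */
        void (*fn)(void);      /* the callback: fn1 ... fn10 */
    };

    static void timespec_add_ns(struct timespec *t, long ns)
    {
        t->tv_nsec += ns;
        while (t->tv_nsec >= 1000000000L) {
            t->tv_nsec -= 1000000000L;
            t->tv_sec++;
        }
    }

    /* Each thread remembers its next activation time and sleeps until that
     * absolute time, so scheduling jitter does not accumulate. */
    static void *periodic_thread(void *p)
    {
        struct periodic_arg *arg = p;
        struct timespec next;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
            arg->fn();                               /* do the periodic work */
            timespec_add_ns(&next, arg->period_ns);  /* compute the next deadline */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
        return NULL;
    }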

Related

sleeping on an event

I have a multi-threaded program in which I sleep in one thread (Thread A) unconditionally, for an infinite time. When an event happens in another thread (Thread B), it wakes up Thread A by signaling. Now I know there are multiple ways to do it.
When my program runs in a Windows environment, I use WaitForSingleObject in Thread A and SetEvent in Thread B. It is working without any issues.
I can also use a file-descriptor-based model where I do poll or select. There is more than one way to do it.
However, I am trying to find which is the most efficient way. I want to wake up Thread A as soon as possible whenever Thread B signals. What do you think is the best option?
I am ok to explore a driver based option.
Thanks
As said, calling SetEvent in thread B and WaitForSingleObject in thread A is fast.
However some conditions have to be taken into account:
Single core/processor: As Martin says, the waiting thread will preempt the signalling thread. With such a scheme you should take care that the signalling thread (B) goes idle right after the SetEvent. This can be done with a sleep(0), for example.
Multi core/processor: One might think there is an advantage in putting the two threads onto different cores/processors, but this is not really such a good idea. If both threads are on the same core/processor, the time span between calling SetEvent and the return of WaitForSingleObject is much shorter.
Handling both threads on one core (SetThreadAffinityMask) also lets you control their behavior by means of their priority settings (SetThreadPriority). You may run the waiting thread at a higher priority, or you have to ensure that the signalling thread really does nothing after it has set the event.
You also have to deal with another synchronization matter: when is the next event going to happen? Will thread A have completed its task? Most effectively, a second event can be used to solve this: when thread A is done, it sets an event to indicate that thread B is allowed to set its event again. Thread B will effectively first set the event and then wait for the feedback event; this also meets the requirement to go idle immediately.
If you want to allow thread B to set the event even when thread A is not finished and not yet in a wait state, you should consider using semaphores instead of events. This way the number of "calls/events" from thread B is kept, and the wait function in thread A can follow up, because it returns once for each time the semaphore has been released. Semaphore objects are about as fast as events.
Summary:
Have both threads on the same core/cpu by means of SetThreadAffinityMask.
Extend the SetEvent/WaitForSingleObject pair with another event to establish a handshake.
Depending on the details of the processing you may also consider semaphore objects.
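For illustration, a minimal sketch of the two-event handshake described above, written as Win32 C; the handle names evWork/evDone and the surrounding functions are made up, not taken from the question:

    #include <windows.h>

    static HANDLE evWork, evDone;   /* evWork: B -> A, evDone: A -> B */

    static DWORD WINAPI thread_a(LPVOID unused)
    {
        (void)unused;
        for (;;) {
            WaitForSingleObject(evWork, INFINITE);  /* sleep until B signals */
            /* ... process the work here ... */
            SetEvent(evDone);                       /* allow B to signal again */
        }
        return 0;
    }

    static void thread_b_submit(void)
    {
        SetEvent(evWork);                       /* wake thread A */
        WaitForSingleObject(evDone, INFINITE);  /* go idle until A is done */
    }

    /* Auto-reset events, so each SetEvent releases exactly one wait. */
    static void init_events(void)
    {
        evWork = CreateEvent(NULL, FALSE, FALSE, NULL);
        evDone = CreateEvent(NULL, FALSE, FALSE, NULL);
    }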

How to reuse threads - pthreads c

I am programming using pthreads in C.
I have a parent thread which needs to create 4 child threads with id 0, 1, 2, 3.
When the parent thread gets data, it will split the data and assign it to 4 separate context variables - one for each sub-thread.
The sub-threads have to process this data, and in the meantime the parent thread should wait on these threads.
Once these sub-threads have done executing, they will set the output in their corresponding context variables and wait(for reuse).
Once the parent thread knows that all these sub-threads have completed this round, it computes the global output and prints it out.
Now it waits for new data (the sub-threads are not killed yet, they are just waiting).
If the parent thread gets more data the above process is repeated - albeit with the already created 4 threads.
If the parent thread receives a kill command (assume a specific kind of data), it indicates to all the sub-threads and they terminate themselves. Now the parent thread can terminate.
I am a Masters research student and I am encountering the need for the above scenario. I know that this can be done using pthread_cond_wait and pthread_cond_signal. I have written the code, but it just runs indefinitely and I cannot figure out why.
My guess is that, the way I have coded it, I have over-complicated the scenario. It would be very helpful to know how this can be implemented. If there is a need, I can post a simplified version of my code to show what I am trying to do (even though I think that my approach is flawed!)...
Can you please give me any insights into how this scenario can be implemented using pthreads?
As far as can be seen from your description, there seems to be nothing wrong with the principle.
What you are trying to implement is a worker pool, I guess; there should be a lot of implementations out there. If the work that your threads are doing is a substantial computation (say at least a CPU second or so), such a scheme is complete overkill. Modern implementations of POSIX threads are efficient enough that they support the creation of a lot of threads, really a lot, and the overhead is not prohibitive.
The only thing that is important, if your workers communicate through shared variables, mutexes etc. (and not via the return value of the thread), is that you start your threads detached, by using the attribute parameter to pthread_create.
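For reference, starting a worker detached via the attribute parameter looks roughly like this; worker and arg are placeholders:

    #include <pthread.h>

    /* Hypothetical worker function and argument. */
    extern void *worker(void *arg);

    static int start_detached_worker(void *arg)
    {
        pthread_t tid;
        pthread_attr_t attr;
        int rc;

        pthread_attr_init(&attr);
        /* Detached: the thread's resources are released automatically when it
         * exits, so nobody has to call pthread_join on it. */
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
        rc = pthread_create(&tid, &attr, worker, arg);
        pthread_attr_destroy(&attr);
        return rc;
    }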
Once you have such an implementation for your task, measure. Only then, if your profiler tells you that you spend a substantial amount of time in the pthread routines, start thinking of implementing (or using) a worker pool to recycle your threads.
One producer-consumer queue with 4 threads hanging off it. The thread that wants to queue the four tasks assembles the four context structs containing, as well as all the other data stuff, a function pointer to an 'OnComplete' func. Then it submits all four contexts to the queue, atomically incrementing a taskCount up to 4 as it does so, and waits on an event/condvar/semaphore.
The four threads get a context from the P-C queue and work away.
When done, the threads call the 'OnComplete' function pointer.
In OnComplete, the threads atomically count down taskCount. If a thread decrements it to zero, it signals the event/condvar/semaphore and the originating thread runs on, knowing that all the tasks are done.
It's not that difficult to arrange it so that the assembly of the contexts and the synchro waiting is done in a task as well, allowing the pool to process multiple 'ForkAndWait' operations at once for multiple requesting threads.
I have to add that operations like this are a huge pile easier in an OO language. The latest Java, for example, has a 'ForkAndWait' threadpool class that should do exactly this kind of stuff, but C++ (or even C#, if you're into serfdom) is better than plain C.
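A minimal sketch of the taskCount countdown described above, using a mutex/condvar pair rather than bare atomics; the names completion, on_complete and wait_for_all are invented for illustration:

    #include <pthread.h>

    /* Shared by the parent and the 4 workers. */
    struct completion {
        pthread_mutex_t lock;
        pthread_cond_t  done;
        int             pending;   /* tasks still running */
    };

    /* Called by each worker when its task finishes (the 'OnComplete' hook). */
    static void on_complete(struct completion *c)
    {
        pthread_mutex_lock(&c->lock);
        if (--c->pending == 0)
            pthread_cond_signal(&c->done);   /* last worker wakes the parent */
        pthread_mutex_unlock(&c->lock);
    }

    /* Called by the parent after queuing the 4 contexts (pending was set to 4). */
    static void wait_for_all(struct completion *c)
    {
        pthread_mutex_lock(&c->lock);
        while (c->pending > 0)
            pthread_cond_wait(&c->done, &c->lock);
        pthread_mutex_unlock(&c->lock);
    }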

Many processes executed by one thread

Is something like the following possible in C on the Linux platform:
I have a thread, say A, reading (intercepting) system calls made by application processes. For each process, A creates a thread which performs the required system call and then sleeps until A wakes it up with another system call made by its corresponding application process. When a process exits, its worker thread ceases to exist.
So it's like a number of processes converging on one thread, which then fans out to many threads, one thread per process.
Thanks
If you are looking for some kind of threadpool implementation and are not strictly limited to C, I would recommend threadpool (which is almost Boost). It's easy to use and quite lean. The only logic you still need is catching the system event and then spawning a new task thread that will execute the call. The threadpool will keep track of all created threads and assign work to them automatically.
EDIT
Since you are limited to C, try this implementation. It looks fairly complete and rather simple, but it will basically do the job.

Gracefully (i.e eventually cooperatively) suspend thread execution

I have to develop an application that tries to emulate the execution flow of an embedded target. This target has 2 priority levels: the higher one preempts the lower one. The low priority level is managed with a round-robin scheduler which gives 1 ms of execution to each thread in turn.
My goal is to write a library that provides thread_create, thread_start, and all the system calls that are available on my target, and uses POSIX functions to reproduce the behavior natively on a standard PC.
Thus, when a high-priority thread executes, low-priority threads should be suspended, whatever they are doing at that very moment. It is the responsibility of the low-priority thread's implementation to ensure that it won't be perturbed.
I know it is usually unsafe to suspend a thread, which explains why I didn't find any "suspend(pid)" function.
I basically imagine two solutions to the problem:
-find a way to suspend the low-priority threads when a high-priority thread starts (and resume them when there is no more high-priority activity)
-periodically call a very small "suspend_if_necessary" function everywhere in my low-priority code; whenever a high-priority thread must start, wait for all low-priority threads to call that function and be suspended, execute the high-priority code alone, then resume them all.
Even if it is not-so-clean, I quite like the second solution, but still have one problem : how to call the function everywhere without changing all my code?
I wonder if there is an easy way of doing that, somewhat like debugging code does: add a hook call at every line executed that checks a flag and runs some specific code when that flag changes?
I'd be very happy if there is an easy solution to that problem, since I really need to be representative of the behavior of the target's execution flow...
Thanks in advance,
Goulou.
Unfortunately, it's not really possible to implement what you want with true threads - even if the high-prio thread is restarted, it can take arbitrarily long before the high-prio thread is scheduled back in and goes to suspend all the low-priority threads. Moreover, there is no reliable way to determine whether the high-priority thread is blocked or not using only POSIX threads; you could try tracking things manually, but this runs the risk of both false positives (the thread is blocked on something, but the low-prio threads think it's running and suspend themselves) and false negatives (you miss a resume notification, or there's lag between when the thread is actually resumed and when it marks itself as running).
If you want to implement a thread priority system with pure POSIX, one option is to not use threads, but rather use setcontext for cooperative multitasking. This would allow you to swap between threads at user level. However, you must explicitly yield the CPU in this case. It also doesn't help with blocking syscalls, which would then block all threads in your app; but since you're writing an emulator, this might not be an issue.
You may also be able to swap threads using setcontext within a signal handler; I've not tested this case myself, but it could be worth a try, scheduling with setcontext in a SIGALRM handler.
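A minimal sketch of cooperative switching with ucontext, assuming the task yields explicitly; the task body and the step count are invented for illustration:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[64 * 1024];

    /* The 'low-priority task': runs a step, then explicitly yields the CPU. */
    static void task(void)
    {
        for (int i = 0; i < 3; i++) {
            printf("task step %d\n", i);
            swapcontext(&task_ctx, &main_ctx);   /* yield to the scheduler */
        }
    }

    int main(void)
    {
        getcontext(&task_ctx);
        task_ctx.uc_stack.ss_sp = task_stack;
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link = &main_ctx;            /* where to go when task returns */
        makecontext(&task_ctx, task, 0);

        /* The 'scheduler': repeatedly hands the CPU to the task. */
        for (int i = 0; i < 3; i++)
            swapcontext(&main_ctx, &task_ctx);
        return 0;
    }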
To suspend a thread, you sleep it. If you want to be able to wake it on command, sleep it using sigwait, which puts the thread to sleep until it gets a signal. You can send a specific thread a signal with pthread_kill (crazy name, but it actually just sends signals to a thread). This is a very fast way to sleep and wake up threads - 40x faster than condition variables, and very easy.
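A minimal sketch of that sigwait/pthread_kill pattern, assuming SIGUSR1 is free for this purpose; worker and wake are placeholder names:

    #include <pthread.h>
    #include <signal.h>

    /* Block SIGUSR1 in main before creating any threads, so every thread
     * inherits the mask and the signal is only consumed by an explicit sigwait(). */
    static void block_wake_signal(void)
    {
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        pthread_sigmask(SIG_BLOCK, &set, NULL);
    }

    /* A worker 'suspends' itself by waiting for SIGUSR1. */
    static void *worker(void *arg)
    {
        sigset_t set;
        int sig;

        (void)arg;
        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        for (;;) {
            sigwait(&set, &sig);   /* sleep here until pthread_kill wakes us */
            /* ... do one unit of work ... */
        }
        return NULL;
    }

    /* The controlling thread 'continues' a specific worker like this: */
    static void wake(pthread_t worker_tid)
    {
        pthread_kill(worker_tid, SIGUSR1);
    }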

linux pthread_suspend

Looks like Linux doesn't implement pthread_suspend and pthread_continue, but I really need them.
I have tried cond_wait, but it is too slow. The work being threaded mostly executes in 50 us, but occasionally takes upwards of 500 ms. The problem with cond_wait is two-fold: the mutex locking takes a time comparable to the microsecond-scale executions, and I don't need locking. Second, I have many worker threads and I don't really want to make N condition variables when they need to be woken up.
I know exactly which thread is waiting for which work and could just pthread_continue that thread. A thread knows when there is no more work and can easily pthread_suspend itself. This would use no locking, avoid the stampede, and be faster. Problem is... there is no pthread_suspend or _continue.
Any ideas?
Make the thread wait for a specific signal.
Use pthread_sigmask and sigwait.
Have the threads block on a pipe read. Then dispatch the data through the pipe. The threads will awaken as a result of the arrival of the data they need to process. If the data is very large, just send a pointer through the pipe.
If specific data needs to go to specific threads, you need one pipe per thread. If any thread can process any data, then all threads can block on the same pipe and they will be awakened round-robin.
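A minimal sketch of the pipe-based dispatch, assuming pointers to work items are sent through the pipe; struct work and process() are placeholders:

    #include <pthread.h>
    #include <unistd.h>

    /* Hypothetical work item type and handler. */
    struct work { int id; };
    extern void process(struct work *w);

    static int work_pipe[2];   /* [0] = read end, [1] = write end; pipe(work_pipe) in main */

    /* Workers block in read(); each pointer written wakes exactly one of them. */
    static void *worker(void *arg)
    {
        struct work *w;

        (void)arg;
        while (read(work_pipe[0], &w, sizeof w) == (ssize_t)sizeof w)
            process(w);
        return NULL;
    }

    /* Dispatcher: write a pointer into the pipe to wake a worker. */
    static void dispatch(struct work *w)
    {
        write(work_pipe[1], &w, sizeof w);
    }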
It seems to me that such a solution (that is, using "pthread_suspend" and "pthread_continue") is inevitably racy.
An arbitrary amount of time can elapse between the worker thread finishing work and deciding to suspend itself, and the suspend actually happening. If the main thread decides during that time that that worker thread should be working again, the "continue" will have no effect and the worker thread will suspend itself regardless.
(Note that this doesn't apply to methods of suspending that allow the "continue" to be queued, like the sigwait() and read() methods mentioned in other answers).
Maybe try pthread_cancel, but be careful about any locks that have to be released; read the man page to understand the cancellation state.
Why do you care which thread does the work? It sounds like you designed yourself into a corner and now you need a trick to get yourself out of it. If you let whatever thread happened to already be running do the work, you wouldn't need this trick, and you would need fewer context switches as well.
