I have one sender thread and 40 worker threads sharing a single queue. All 40 worker threads write to the queue, and the sender thread periodically reads from the shared queue and sends the data over a TCP socket (say every 1 sec, the sender thread must read data from the queue and send it over the socket). I have a question here:
If one of the 40 threads is in the critical section and all the other threads are also waiting to enter it, and at that same moment the 1 sec timer fires, I want the requests of all the other threads to enter the critical section to be ignored, and the sender thread to be given priority and handed the critical section.
In other words, I want to give the sender thread priority 1, i.e. when the sender thread calls EnterCriticalSection(), all other threads waiting to enter the critical section must be passed over, and as soon as the critical section becomes free it must be given to the sender thread.
Is there any way to achieve this functionality?
You cannot achieve this with priority alone, because if one of the worker threads is holding the lock, priority cannot force it to release the lock. Here is one implementation I can think of:
As soon as the sender thread wakes up after the 1 sec interval, it sends a signal to the workers. In the signal handler, take the lock away from the workers (I think a binary semaphore would work well here, so set its value to 0 in the signal handler), so that any worker thread that tries to acquire it will block. On the sender side, send all the packets and at the end set the semaphore back to 1.
This is one implementation; you can devise your own along these lines, but eventually it should work. :)
You likely just want some variant of a reader-writer lock, and probably a plain Win32 critical section is all that is needed.
Here's why. The operations in the critical section, appending data to the queue (or reading from it), are non-blocking: no operation on the queue should take longer than a fraction of a millisecond. If you use the Windows critical section lock (EnterCriticalSection, LeaveCriticalSection), fairness is guaranteed to threads waiting to enter the CS (I'm fairly certain of this).
So even if all 40 writer threads need to enter the CS to append to the queue, it shouldn't take more than a millisecond or two for the reader thread to wait its turn to acquire the lock. This of course assumes that the writer threads only copy memory into the queue and do not perform any long blocking I/O operations while holding the lock.
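To make that concrete, here is a minimal sketch under those assumptions (the item/queue names are illustrative, not from the question): workers append under the lock in a few instructions, and the sender detaches the whole list once a second and does the TCP send outside the lock.

#include <windows.h>

struct item { struct item *next; /* payload would go here */ };

static struct item *g_queue;    /* singly linked list of pending items */
static CRITICAL_SECTION g_cs;   /* InitializeCriticalSection(&g_cs) at startup */

/* Worker thread: append one item; the lock is held only for a pointer swap. */
void enqueue_item(struct item *it)
{
    EnterCriticalSection(&g_cs);
    it->next = g_queue;
    g_queue = it;
    LeaveCriticalSection(&g_cs);
}

/* Sender thread (runs every second): detach the whole list, then send the
   items over the TCP socket without holding the lock. */
struct item *drain_queue(void)
{
    EnterCriticalSection(&g_cs);
    struct item *batch = g_queue;
    g_queue = NULL;
    LeaveCriticalSection(&g_cs);
    return batch;
}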
Hope this helps in solving your problem: http://man7.org/linux/man-pages/man3/pthread_getschedparam.3.html
One possible solution to your issue lies in the way threads are implemented in Linux. Have a mutex. Let your sender thread create a named FIFO (using the mkfifo() call), and when you create, say, 40 worker threads, have each of them create a named FIFO for receiving in its thread function. Whenever your sender thread wants to communicate with one of your worker threads, open() the worker's FIFO, write to it, and close it. When you have something like a user-client application, whenever you open a FIFO, take the mutex, do whatever you want (read/write), and unlock the mutex when you are done.
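A rough sketch of that idea, assuming illustrative FIFO paths and function names (none of them come from the question):

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

static pthread_mutex_t fifo_lock = PTHREAD_MUTEX_INITIALIZER;

/* Worker side: create its receiving FIFO once at startup. */
void create_worker_fifo(int worker_id)
{
    char path[64];
    snprintf(path, sizeof path, "/tmp/worker_fifo_%d", worker_id);
    mkfifo(path, 0600);   /* may fail with EEXIST if it already exists */
}

/* Sender side: open, write one message, close, all while holding the mutex. */
void send_to_worker(int worker_id, const char *msg)
{
    char path[64];
    snprintf(path, sizeof path, "/tmp/worker_fifo_%d", worker_id);

    pthread_mutex_lock(&fifo_lock);
    int fd = open(path, O_WRONLY);
    if (fd >= 0) {
        (void)write(fd, msg, strlen(msg));
        close(fd);
    }
    pthread_mutex_unlock(&fifo_lock);
}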
I'm trying to figure out a way to put some threads into a passive waiting state and wake them up as they arrive at the barrier. A fixed number of threads should arrive.
I first thought about a semaphore that I would initialise to 0 so it would block, but the threads would be released in a random order. I would like to implement a system that releases the threads in the order they arrived at the synchronisation barrier, like a FIFO.
I also thought about using two semaphores: one that would block, release a thread and sort it; if the thread is the right one it just goes on, and if not it is blocked by the second semaphore. However, this system seems rather long-winded and tedious.
Does anyone have an idea or suggestion that would help me?
Thank you very much :)
On Linux, you can just use a condition variable and a mutex to block and unblock threads in the same FIFO order.
This works because all waiters on a condition variable are appended to the futex wait queue in the kernel in order, and waking the waiters happens in the same FIFO order, as long as you keep the mutex locked while signalling the condition variable.
However, as commenters mentioned, it is a poor idea to depend on thread execution order.
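If you would rather not depend on that order, here is a minimal single-use sketch (assuming a fixed, known thread count; all names are illustrative) that makes the FIFO release explicit by handing out tickets on arrival:

#include <pthread.h>

#define NTHREADS 4                 /* illustrative fixed thread count */

static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
static unsigned next_ticket = 0;   /* arrival order numbers handed out */
static unsigned now_serving = 0;   /* tickets below this have been released */
static unsigned arrived = 0;       /* threads that have reached the barrier */

/* Single-use barrier: threads leave in the exact order they arrived. */
void barrier_wait_fifo(void)
{
    pthread_mutex_lock(&m);
    unsigned my_ticket = next_ticket++;
    if (++arrived == NTHREADS)
        pthread_cond_broadcast(&cv);   /* last arrival opens the barrier */
    while (arrived < NTHREADS || now_serving != my_ticket)
        pthread_cond_wait(&cv, &m);
    now_serving++;                     /* release the next ticket holder */
    pthread_cond_broadcast(&cv);
    pthread_mutex_unlock(&m);
}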
I have two threads that are communicating with a serial port. One thread continuously polls the serial port: the polling is done by first writing to the serial port, write("do you have a message"), and then reading the reply. These two steps have to happen consecutively.
I have another thread that is used to send commands over the serial port. All this thread does is write a message to the port and then read a confirmation that the message was sent.
So, while one thread is used to read and the other to write, they both do a read() and a write().
My issue is that my current implementation uses a read mutex and a write mutex, which of course means you can get a write, write, read, read ordering, which is not the behavior I want.
I tried using only one lock encompassing the write/read pair, but this causes one thread (the polling thread) to almost never let the other thread grab the lock, or to do so only randomly, even if I put sleeps after the unlock in one thread.
So, is there a way to pause one thread, say after it has done a read/write combo, and let the other thread do its thing? What's the best way of going about this issue? I do not want to use pthread_join because 1. it would pause my main code that creates these threads, and 2. the polling thread is never meant to exit unless there is an error.
Thanks
This is kind of a generic question; however, I have met this problem several times already and I still haven't found the best possible solution.
Let's imagine you have a program (e.g. an HTTP application server) that is multithreaded and communicates over sockets (TCP, Unix, ...). The main thread uses asynchronous I/O and the select() or poll() POSIX calls to dispatch traffic from/to the sockets. There are also worker threads that process requests and provide responses. To send a response back to the client, a worker thread synchronises with the main thread (the one that polls) 'somehow'. The core of the question is 'how', in terms of what is efficient. I can use pipe(), a file-descriptor-based IPC mechanism, but this seems like quite a lot of overhead. I tend to use pthread IPC techniques like mutexes, condition variables etc., but these will not work with select() or poll().
Is there a common technique in POSIX (and surroundings) that addresses this conflict?
I guess on Windows there is the WaitForMultipleObjects() function that allows this.
The example program is crafted to illustrate the issue; I know that I can design the master/worker pattern in a different way, but this is not what I'm asking about. I have other cases where I'm in the same situation.
You could use a signal to poke the worker thread, which will interrupt the select() call so that it fails with EINTR. This gets even easier to do with pselect().
For this to work:
decide on a signal (or allocate a real-time signal)
attach an empty handler function to it (if the signal were ignored, the system call would be automatically restarted)
block the signal, at least in the worker thread.
use the signal mask argument in pselect() to unblock the signal while waiting.
Between threads, you can use pthread_kill to deliver the signal to the worker thread specifically. When another process should send the signal, you can either make sure the signal is blocked in all but the worker thread (so it will be delivered there), or use the signal handler to find out whether the signal was sent to the worker thread, and use pthread_kill to forward it explicitly (the worker thread still doesn't need to do anything in the signal handler).
Due to laziness on my part, I don't have a source code viewer online, but you can clone the LibreVISA git tree, and take a look at src/messagepump.cpp, where this method is used to poke the worker thread after another thread added a file descriptor to the watch list.
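For reference, a hedged sketch of the recipe above; SIGUSR1 and the function names are illustrative choices, not taken from LibreVISA:

#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <sys/select.h>

static void wake_handler(int sig) { (void)sig; }   /* empty: its only job is to interrupt pselect() */

/* Worker thread: keep the signal blocked normally, unblock it only inside pselect(). */
void worker_loop(int watch_fd)
{
    struct sigaction sa;
    sa.sa_handler = wake_handler;   /* an empty handler, not SIG_IGN */
    sa.sa_flags = 0;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);

    sigset_t blocked, waitmask;
    sigemptyset(&blocked);
    sigaddset(&blocked, SIGUSR1);
    pthread_sigmask(SIG_BLOCK, &blocked, &waitmask);   /* waitmask = previous mask */
    sigdelset(&waitmask, SIGUSR1);                     /* unblocked only while waiting */

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(watch_fd, &rfds);

        int n = pselect(watch_fd + 1, &rfds, NULL, NULL, NULL, &waitmask);
        if (n < 0 && errno == EINTR)
            continue;   /* poked via pthread_kill(worker, SIGUSR1): re-check shared state */
        /* ... handle ready descriptors ... */
    }
}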
Simon Richter's answer is very good.
Another alternative might be to make the main thread responsible only for listening for new connections and starting up a worker thread with the connection information, so that the worker is responsible for all subsequent 'transactions' from that source.
My understanding is:
Main thread uses select().
Worker threads process requests forwarded to them by the main thread.
So you need to synchronize between the workers and the main thread, e.g. when a worker finishes a transaction it needs to send the response back to the main thread, which in turn forwards the response back to the source.
Why not remove the need to synchronize between the worker threads and the main thread by making each worker thread responsible for all transactions on a particular connection?
Thus the main thread is only responsible for listening for new connections and starting up a worker thread with the connection information, i.e. the file descriptor for the new connection.
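A minimal sketch of that approach, with illustrative names and the error handling cut to the bare minimum:

#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

/* Worker: owns the connection for its whole lifetime. */
static void *connection_worker(void *arg)
{
    int fd = *(int *)arg;
    free(arg);
    /* ... read requests from fd, process them, write responses to fd ... */
    close(fd);
    return NULL;
}

/* Main thread: only accept()s and hands the new descriptor to a worker. */
void accept_loop(int listen_fd)
{
    for (;;) {
        int client = accept(listen_fd, NULL, NULL);
        if (client < 0)
            continue;
        int *arg = malloc(sizeof *arg);
        *arg = client;
        pthread_t tid;
        if (pthread_create(&tid, NULL, connection_worker, arg) == 0) {
            pthread_detach(tid);   /* the worker owns the connection from here on */
        } else {
            free(arg);
            close(client);
        }
    }
}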
First of all, the way to wake another thread is to use the pthread_cond_wait / pthread_cond_timedwait calls in thread A to wait, and for thread B to use pthread_cond_broadcast / pthread_cond_signal to pick it up. So, for instance, if B is the producer and A is the consumer, the producer might add items to a linked list protected by a mutex. There would be an associated condition variable such that, after adding an item, it could wake thread A so that it goes to see whether any new items have arrived on the list and, if so, removes them. I say 'associated' because the same mutex that protects the list can then be associated with the condition variable.
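As a minimal sketch of that producer/consumer arrangement (the 'list' here is a simple LIFO stack to keep it short; the names are illustrative):

#include <pthread.h>
#include <stddef.h>

struct item { struct item *next; /* payload would go here */ };

static struct item *head = NULL;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  list_cond = PTHREAD_COND_INITIALIZER;

/* Producer (thread B): append an item and wake the consumer. */
void produce(struct item *it)
{
    pthread_mutex_lock(&list_lock);
    it->next = head;
    head = it;
    pthread_cond_signal(&list_cond);   /* the same mutex protects list and condition */
    pthread_mutex_unlock(&list_lock);
}

/* Consumer (thread A): sleep until an item is available, then take it. */
struct item *consume(void)
{
    pthread_mutex_lock(&list_lock);
    while (head == NULL)               /* re-check: wakeups can be spurious */
        pthread_cond_wait(&list_cond, &list_lock);
    struct item *it = head;
    head = it->next;
    pthread_mutex_unlock(&list_lock);
    return it;
}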
So far so good. Now you mention asynchronous I/O. What I've wanted to do several times is select() or poll() on a set of FDs and a set of condition variables, so that the select() or poll() is interrupted when a condition variable is broadcast to. There is no easy way of doing this directly; you cannot simply mix and match.
You thus need to do one of two things. Either:
work around the problem (for instance, use a self-connected pipe() and send one byte to it to wake the select() up, either instead of the condition variable, in addition to the condition variable, or from some additional thread waiting on the condition variable); or
convert to a more threaded model, i.e. use one thread for sending and one thread for receiving with a producer/consumer model, so the sender thread simply removes items from a list/buffer and sends them (blocking if necessary), and the receiver waits for I/O (blocking if necessary) and adds the data to the list (this is what you put in italics at the end).
The second is a major design change for those of us brought up on asynchronous I/O, and the first is ugly. You are not the first to be dismayed by this, but I've not found an easy way around it. As to the first being inefficient: if you only write one character to the self-pipe to wake the select loop, I don't think you are going to see much inefficiency.
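A sketch of the self-pipe workaround, assuming wake_pipe was created once with pipe(wake_pipe) at startup (all names illustrative):

#include <sys/select.h>
#include <unistd.h>

int wake_pipe[2];   /* created once with pipe(wake_pipe) */

/* Worker thread: after queuing a response, poke the main loop with one byte. */
void wake_main_loop(void)
{
    char c = 0;
    (void)write(wake_pipe[1], &c, 1);
}

/* Main thread: include the read end of the pipe in the select() set. */
void main_loop(int listen_fd)
{
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(listen_fd, &rfds);
        FD_SET(wake_pipe[0], &rfds);
        int maxfd = listen_fd > wake_pipe[0] ? listen_fd : wake_pipe[0];

        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) <= 0)
            continue;

        if (FD_ISSET(wake_pipe[0], &rfds)) {
            char buf[64];
            (void)read(wake_pipe[0], buf, sizeof buf);   /* drain the wakeup bytes */
            /* ... pick up responses queued by the workers ... */
        }
        /* ... handle listen_fd and the other sockets ... */
    }
}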
I wrote a simple program that implements a master/worker scheme where the master is the main thread and the workers are created by it.
The main thread writes something to a shared buffer and the worker threads read from it; writing to and reading from the shared buffer are coordinated by a read/write lock.
Unfortunately, this scheme definitely leads to starvation of the main thread, since a single write has to wait for several reads to complete. One possible solution is to increase the priority of the master thread, so that when it wants to write something it gets immediate access to the shared buffer.
According to a great post on a similar issue, I discovered that manipulating the priority of a thread under the SCHED_OTHER policy is probably not allowed; only the nice value can be changed.
I wrote a procedure to give the worker threads lower priority than the master thread, but it does not seem to work correctly.
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

void assignWorkerThreadPriority(pthread_t* worker)
{
    struct sched_param worker_sched_param;
    worker_sched_param.sched_priority = 0; // any value other than 0 gives an error?
    int policy = SCHED_OTHER;

    // pthread_setschedparam() returns the error number directly; it does not set errno
    int ret = pthread_setschedparam(*worker, policy, &worker_sched_param);
    printf("Result of changing priority is: %d - %s\n", ret, strerror(ret));
}
I have a two-fold question:
How can I set the nice value of the worker threads to avoid starvation of the main thread?
If that is not possible, how can I change the scheduling policy to one that allows changing the priority?
Edit: I managed to run the program with other policies, such as SCHED_FIFO; all I had to do was run the program as a superuser.
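For what it's worth, a hedged sketch of what the edit describes: under SCHED_FIFO a nonzero static priority is accepted, but the process needs elevated privileges (e.g. root or CAP_SYS_NICE).

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

void assignWorkerThreadPriorityFifo(pthread_t *worker)
{
    struct sched_param sp;
    sp.sched_priority = 1;   /* valid range given by sched_get_priority_min/max(SCHED_FIFO) */
    int ret = pthread_setschedparam(*worker, SCHED_FIFO, &sp);
    if (ret != 0)
        fprintf(stderr, "pthread_setschedparam: %s\n", strerror(ret));
}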
You cannot avoid problems with a read/write lock when the read and write usage is this evenly balanced. You need a different method: a lock-free message queue, independent work queues, or one of many other techniques.
Here is another way to do the job, the way I would do it. The worker can take the buffer away and work on it rather than keeping it shared:
Write thread:
Create work item.
Lock the mutex or CriticalSection protecting the current queue and pointer to queue.
Add work item to queue.
Release the lock.
Optionally signal a condition variable or Event. Another option is for worker threads to check for work on a timer.
Worker thread:
Create a new queue.
Wait for a condition variable or event or other signal, or wait on a timer.
Lock the mutex or CriticalSection protecting the current queue and pointer to queue.
Set the current queue pointer to the new queue.
Release the lock.
Proceed to work on the now private queue.
Delete the queue when all work items complete.
Now the write thread creates more work items. While the worker threads are busy with their own private queues, it will be able to write many items in peace.
You can modify this; for example, a worker thread may lock the queue and move a limited number of work items into its own internal queue instead of taking the whole thing.
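A rough sketch of the queue-swap idea (a singly linked list stands in for the queue; the names are illustrative):

#include <pthread.h>
#include <stddef.h>

struct work_item { struct work_item *next; /* payload would go here */ };

static struct work_item *current_queue = NULL;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  queue_cond = PTHREAD_COND_INITIALIZER;

/* Write thread: add a work item and signal a waiting worker. */
void push_work(struct work_item *item)
{
    pthread_mutex_lock(&queue_lock);
    item->next = current_queue;
    current_queue = item;
    pthread_cond_signal(&queue_cond);
    pthread_mutex_unlock(&queue_lock);
}

/* Worker thread: swap in an empty queue and work on the old one privately. */
struct work_item *take_queue(void)
{
    pthread_mutex_lock(&queue_lock);
    while (current_queue == NULL)
        pthread_cond_wait(&queue_cond, &queue_lock);
    struct work_item *mine = current_queue;
    current_queue = NULL;   /* the write thread now appends to a fresh queue */
    pthread_mutex_unlock(&queue_lock);
    return mine;            /* now private: no lock needed while processing */
}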
In Linux, if two threads are created and both of them are running, and one of them calls recv() or any I/O syscall that blocks when no data is available, what happens to the whole process?
Will the other thread block as well? I guess this depends on how threading is implemented. If the thread library lives in user space and the kernel is totally unaware of the threads within the process, then the process is the scheduling entity and thus both threads end up blocked.
Further, if the other thread doesn't block because of this, can it then send() data via the same socket that the recv thread is blocked on? Duplexing?
Any ideas?
You're absolutely right that the blocking behavior depends on whether the thread is implemented in kernel space or in user space. If threading is implemented purely in user space (that is, the kernel is completely uninvolved with the threading), then any blocking entry point into the kernel needs to be wrapped with some non-blocking variant that can simulate blocking semantics for its calling "thread" (e.g. using AIO to send / recv data instead of blocking, with the completion callback making the thread runnable again).
In Linux (and every other extant major OS I can think of), threading is implemented at the kernel level, or similar, and a blocking call into the kernel will not cause all other threads to block.
Yes, you can send() to a socket for which another thread is blocked on recv().
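A tiny illustration of that point, assuming sock_fd is a connected TCP socket set up elsewhere:

#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

static int sock_fd;   /* a connected socket, set up elsewhere */

/* Thread 1 (e.g. started with pthread_create): blocks in recv(). */
void *reader(void *arg)
{
    (void)arg;
    char buf[512];
    for (;;) {
        ssize_t n = recv(sock_fd, buf, sizeof buf, 0);   /* blocks this thread only */
        if (n <= 0)
            break;
        /* ... handle incoming data ... */
    }
    return NULL;
}

/* Thread 2: can still send while the reader is blocked. */
void *writer(void *arg)
{
    (void)arg;
    const char msg[] = "hello";
    send(sock_fd, msg, strlen(msg), 0);
    return NULL;
}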
Blocking calls in one thread should not affect other threads.
If the blocked thread locks a mutex prior to entering the blocking call and the second thread attempts to lock the same mutex, then the second thread needs to wait for the blocking call to finish and for the first thread to release the lock.
It's entirely possible to implement threads in user space such that one thread can proceed while another thread blocks on I/O.
The non-blocked thread should be able to send on the socket while the other thread is blocking on it (I've written such code).