I am using a POSIX message queue (mqueue.h) to communicate between threads (for a school project demonstration).
When my two pthreads are done using the queue and I want to close the message queue, what should I do?
Do mq_unlink and mq_close from both threads
Do mq_unlink and mq_close from one thread
Only do mq_unlink from one thread
Only do mq_unlink from two threads
Only do mq_close from one thread
Only do mq_close from two threads
Edit (in response to: "Closed. This question needs details or clarity"):
I am using a POSIX message queue defined in mqueue.h (C) to send messages between threads. This is similar to interprocess communication with a message queue. I could communicate using shared memory, but that is not what I want to do. I have created, opened, sent and received messages between threads successfully, but I need to know what to do when I am done. I have found mq_close and mq_unlink, but I have not found information about how they should be used and from where. That is what I am asking about.
mq_overview - overview of POSIX message queues.
It is similar to dealing with files.
Call mq_close once for each mq_open.
When a process has finished using the queue, it closes it using mq_close(3), and when the queue is no longer required, it can be deleted using mq_unlink(3).
Optionally, call mq_unlink once.
POSIX message queues have kernel persistence: if not removed by mq_unlink(3), a message queue will exist until the system is shut down.
mq_unlink() removes the specified message queue name. The message queue name is removed immediately. The queue itself is destroyed once any other processes that have the queue open close their descriptors referring to the queue.
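To make the above concrete, here is a minimal sketch (the queue name "/demo_queue" and the attribute values are assumptions, not from the question) of the pattern for two pthreads in one process: the queue is opened once, both threads share the descriptor, there is one mq_close for the one mq_open, and one mq_unlink when the queue is no longer needed.

#include <fcntl.h>
#include <mqueue.h>
#include <pthread.h>
#include <stdio.h>

#define QUEUE_NAME "/demo_queue"   /* assumed name; link with -lrt -pthread */

static void *sender(void *arg)
{
    mqd_t q = *(mqd_t *)arg;
    const char msg[] = "hello";
    mq_send(q, msg, sizeof msg, 0);                 /* send one message */
    return NULL;
}

static void *receiver(void *arg)
{
    mqd_t q = *(mqd_t *)arg;
    char buf[128];
    if (mq_receive(q, buf, sizeof buf, NULL) >= 0)  /* blocks until a message arrives */
        printf("got: %s\n", buf);
    return NULL;
}

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
    mqd_t q = mq_open(QUEUE_NAME, O_CREAT | O_RDWR, 0600, &attr);

    pthread_t t1, t2;
    pthread_create(&t1, NULL, sender, &q);
    pthread_create(&t2, NULL, receiver, &q);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    mq_close(q);              /* one close for the one mq_open */
    mq_unlink(QUEUE_NAME);    /* remove the name so the queue does not persist */
    return 0;
}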
Related
I have tried calling kill from process A to process B, and process B has successfully reacted to the signal. The problem is I don't want to send signals via the kill function directly, for two reasons:
1) Sometimes process A may not have the permissions, e.g. process B is run by another user
2) I want to be able to send signals from A to B through the message queue
I am creating a message queue through which I send "objects" of the following structure:
typedef struct msg {
long message_type;
char message_text[SIZE];
} message;
I want to know if it is possible for process A to signal B through IPC message passing. I know I can achieve this by sending the signal type in message_text from process A to B, then checking the type of the signal inside process B and acting accordingly, but I was wondering if there is another way.
Would it be possible to do this by passing sigaction objects as messages:
struct sigaction as;
//...
msgsnd(queue_id, &as, length, IPC_NOWAIT);
//...
I know this is completely infeasible but this is what I am trying to achieve. Thank you
Based upon your comments it seems that you want B to be able to receive messages, but when it receives a "signal" message it needs to act as if it had received a regular signal. You mentioned that B needs to react to SIGTERM or SIGINT from the "signal" message.
The way to achieve this depends on whether you are using POSIX message queues or System V message queues.
Either way, it doesn't seem that you want the main thread of B to poll the message queue, as that would add too much latency to responding to the "signal" message.
So with POSIX message queues you can use mq_notify() to either run a thread or raise a signal when a new message arrives. Otherwise B can use a thread (or even fork()) to poll the message queue.
After a "signal" message is received you have a couple of options: A) use kill or raise in B to send a signal of the correct type to itself (or to its parent in the fork case), or B) just call a function that does whatever the signal would have done.
Process A can send a "signal" message whenever it wants. But you need to understand that if you are using named queues they are persistent. That is that A can send a "signal" message before B even starts, and then when B starts that message is waiting there. Depending on how the message queue is made it can be N messages deep and have older messages in the queue. One way to deal with that is for B to empty the queue before processing any of the messages.
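As a rough sketch of option A with a POSIX queue (the queue name "/sig_queue" and the convention that each message carries the signal number as a NUL-terminated decimal string are assumptions), B registers with mq_notify() and raises the requested signal on itself when a "signal" message arrives:

#include <fcntl.h>
#include <mqueue.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static mqd_t q;

static void on_message(union sigval sv)
{
    (void)sv;
    char buf[128];
    struct sigevent sev = { .sigev_notify = SIGEV_THREAD,
                            .sigev_notify_function = on_message };

    mq_notify(q, &sev);                         /* re-arm: mq_notify is one-shot */

    if (mq_receive(q, buf, sizeof buf, NULL) >= 0)
        raise(atoi(buf));                       /* e.g. "15" -> B sends itself SIGTERM */
}

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
    struct sigevent sev = { .sigev_notify = SIGEV_THREAD,
                            .sigev_notify_function = on_message };

    q = mq_open("/sig_queue", O_CREAT | O_RDONLY, 0600, &attr);
    mq_notify(q, &sev);

    /* ... B's normal work goes here; install real SIGTERM/SIGINT handlers
       with sigaction() so the raise() above behaves like a regular signal ... */
    pause();
    return 0;
}

Note that mq_notify() only fires when a message lands on an empty queue, so a fuller version would open the queue with O_NONBLOCK and drain it inside on_message(), which also deals with the "old messages waiting in a persistent queue" issue mentioned above.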
Message queues cannot achieve what signals do. With a signal it is possible to asynchronously interrupt or kill a process, but with a message queue the receiving process only sees a message when it checks or waits on the queue; outside of that path of execution the message is effectively ignored (it is synchronous). It is, however, possible to achieve this with threads.
If you were using POSIX message queues (using mq_send/mq_receive), then process B could request (with mq_notify) to be sent a signal every time a message is sent to the message queue. However, your example seems to be using a SysV legacy message queue (msgsnd), which does not support any kind of notify.
Is it possible to send the same data from a sender thread to two different receiver threads using the message queue mechanism? Both receiver threads should receive the same data. As per my understanding, shared memory with a synchronization mechanism is the proper solution for such a scenario, but I was wondering if we could do it with a message queue, since once one thread pops the data the other thread won't have anything left to receive from the system message queue.
I am quite confused about the ways message queues are removed in a C/C++ program.
I saw here that
Removing a Message Queue
You can remove a message queue using the ipcrm command (see the ipcrm(1) reference page), or by calling msgctl() and passing the IPC_RMID command code. In many cases, a message queue is meant for use within the scope of one program only, and you do not want the queue to persist after the termination of that program. Call msgctl() to remove the queue as part of termination.
And then there is something else, mq_unlink.
I am confused about what the correct way is to completely remove a message queue.
Now let me describe the issue I am facing.
In my application I have created 2 message queues.
Now suddenly a signal comes and passes control to a signal handler. In the signal handler I restart the service, and there I am facing an error saying "Resource temporarily unavailable". In the signal handler I have closed one of the queues with mq_close(). Maybe the issue is that I am not closing the other one. But my doubts here are:
Do I need to close it?
Do I need to remove it?
If I have to remove it, do I need to use msgctl or mq_unlink?
Firstly, there are two unrelated message queue implementations, the old UNIX System V one which uses msgget(), msgsnd() and msgrcv() and the newer POSIX compliant one described here.
If you are using the POSIX version, to close it just within your program you use mq_close; to destroy it completely, for all programs that may have it open, you use mq_unlink.
If you use the System V version, to remove the queue you must use:
msgctl(MessageQueueIQ, IPC_RMID, NULL);
where MessageQueueIQ is the ID of your queue.
To answer your other questions: if you are using System V message queues, removing the queue with msgctl(IPC_RMID) is enough; if you are using the POSIX ones, you should mq_close your descriptor and also mq_unlink the name if you want the queue removed completely (the queue is only destroyed once the name is unlinked and every descriptor referring to it has been closed).
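For comparison, here is a minimal sketch of the System V removal path (the ftok() key derivation is just an assumption); the POSIX path is mq_close() on your descriptor plus mq_unlink() on the name, as shown earlier:

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <sys/types.h>

int main(void)
{
    key_t key = ftok("/tmp", 'Q');               /* however you derive your key */
    int qid = msgget(key, IPC_CREAT | 0600);     /* create or attach to the queue */

    /* ... msgsnd()/msgrcv() traffic ... */

    if (msgctl(qid, IPC_RMID, NULL) == -1)       /* destroy the queue system-wide */
        perror("msgctl(IPC_RMID)");
    return 0;
}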
This is kind of a generic question; however, I have run into this problem several times already and I still haven't found the best possible solution.
Let's imagine you have a program (e.g. an HTTP application server) that is multithreaded and that communicates over sockets (TCP, Unix, ...). The main thread uses asynchronous IO and the select() or poll() POSIX calls to dispatch traffic from/to sockets. There are also worker threads that process requests and provide responses. To send a response back to the client, a worker thread synchronises with the main thread (the one that polls) 'somehow'. The core of the question is 'how', in terms of what is efficient. I could use pipe(), a socket-based IPC mechanism, but that seems like quite a lot of overhead to me. I tend to use pthread IPC techniques like mutexes, condition variables etc., but these will not work with select() or poll().
Is there a common technique in POSIX (and surroundings) that address this conflict?
I guess on Windows there is the WaitForMultipleObjects() function that allows that.
The example program is crafted to illustrate the issue; I know that I can design the master/worker pattern in a different way, but that is not what I'm asking for. I have other cases where I'm in the same situation.
You could use a signal to poke the worker thread, which will interrupt the select() call and return EINTR. This gets even easier to do with pselect().
For this to work:
decide on a signal (or allocate a real-time signal)
attach an empty handler function to it (if the signal were ignored, the system call would be automatically restarted)
block the signal, at least in the worker thread.
use the signal mask argument in pselect() to unblock the signal while waiting.
Between threads, you can use pthread_kill to deliver the signal to the worker thread specifically. When another process should send the signal, you can either make sure the signal is blocked in all but the worker thread (so it will be delivered there), or use the signal handler to find out whether the signal was sent to the worker thread, and use pthread_kill to forward it explicitly (the worker thread still doesn't need to do anything in the signal handler).
Due to laziness on my part, I don't have a source code viewer online, but you can clone the LibreVISA git tree, and take a look at src/messagepump.cpp, where this method is used to poke the worker thread after another thread added a file descriptor to the watch list.
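A minimal sketch of those steps (the choice of SIGUSR1, the empty handler and the single listening descriptor are assumptions); another thread wakes this one with pthread_kill(worker_tid, SIGUSR1):

#include <pthread.h>
#include <signal.h>
#include <sys/select.h>

static void wakeup_handler(int sig) { (void)sig; }    /* empty: it only interrupts */

static void *worker(void *arg)
{
    int listen_fd = *(int *)arg;

    /* Attach an empty handler so the signal interrupts pselect() with EINTR
       instead of killing the process or being restarted. */
    struct sigaction sa;
    sa.sa_handler = wakeup_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;                                   /* no SA_RESTART */
    sigaction(SIGUSR1, &sa, NULL);

    /* Keep SIGUSR1 blocked except while waiting inside pselect(). */
    sigset_t blocked, waitmask;
    sigemptyset(&blocked);
    sigaddset(&blocked, SIGUSR1);
    pthread_sigmask(SIG_BLOCK, &blocked, &waitmask);
    sigdelset(&waitmask, SIGUSR1);

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(listen_fd, &rfds);
        if (pselect(listen_fd + 1, &rfds, NULL, NULL, NULL, &waitmask) == -1) {
            /* EINTR: another thread called pthread_kill(this_thread, SIGUSR1),
               e.g. because the watch list changed; rebuild the fd set and loop. */
            continue;
        }
        /* ... handle ready descriptors ... */
    }
    return NULL;
}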
Simon Richter's answer is very good.
Another alternative might be to make the main thread responsible only for listening for new connections and starting up a worker thread with the connection information, so that the worker is responsible for all subsequent 'transactions' from this source.
My understanding is:
Main thread uses select.
Worker threads process requests forwarded to them by the main thread.
So you need to synchronize between the workers and the main thread, e.g. when a worker finishes a transaction it needs to send the response back to the main thread, which in turn forwards the response back to the source.
Why don't you remove the problem of having to synchronize between the worker thread and the main thread by making the worker thread responsible for all transactions from a particular connection?
Thus the main thread is only responsible for listening for new connections and starting up a worker thread with the connection information i.e. the file descriptor for the new connection.
First of all, the way to wake another thread is to use the pthread_cond_wait / pthread_cond_timedwait calls in thread A to wait, and for thread B to use pthread_cond_broadcast / pthread_cond_signal to wake it up. So, for instance, if B is the producer and A is the consumer, the producer might add items to a linked list protected by a mutex. There would be an associated condition variable such that, after adding an item, it could wake thread A so that it went to see whether any new items had arrived on the list and, if so, removed them. I say 'associated' because the same mutex that protects the list can be associated with the condition variable.
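A compact sketch of that producer/consumer pattern (the item type and the names are purely illustrative):

#include <pthread.h>
#include <stdlib.h>

struct item { struct item *next; int payload; };

static struct item *head;                        /* list protected by 'lock' */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

void produce(int payload)                        /* thread B adds an item */
{
    struct item *it = malloc(sizeof *it);
    it->payload = payload;
    pthread_mutex_lock(&lock);
    it->next = head;
    head = it;
    pthread_cond_signal(&nonempty);              /* wake a waiting consumer */
    pthread_mutex_unlock(&lock);
}

struct item *consume(void)                       /* thread A waits for an item */
{
    pthread_mutex_lock(&lock);
    while (head == NULL)                         /* loop guards against spurious wakeups */
        pthread_cond_wait(&nonempty, &lock);
    struct item *it = head;
    head = it->next;
    pthread_mutex_unlock(&lock);
    return it;
}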
So far so good. Now you mention asynchronous I/O. What I've wanted to do several times is select() or poll() on a set of FDs and a set of condition variables, so that the select() or poll() is interrupted when a condition variable is broadcast. There is no easy way of doing this directly; you cannot simply mix and match.
You thus need to do one of two things. Either:
work around the problem (for instance, use a self-connected pipe() and send one byte down it to wake the select() up, either instead of the condition variable, as well as the condition variable, or from some additional thread waiting on the condition variable); or
convert to a more threaded model. I.e. use one thread for sending and one thread for receiving, with a producer / consumer model, so the sender thread simply removes items from a list / buffer and sends them (blocking if necessary), and the receiver waits for I/O (blocking if necessary) and adds it to the list (this is what you put in italics at the end).
The second is a major design change for those of us brought up on asynchronous I/O, and the first is ugly. You are not the first to be dismayed by this, but I've not found an easy way around it. Regarding the first being an inefficiency: if you only write one character to the self-pipe to wake the select loop, I don't think you are going to see too much inefficiency.
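For reference, a small sketch of the first (self-pipe) workaround; the wake_pipe naming and the single socket are my own illustration:

#include <sys/select.h>
#include <unistd.h>

static int wake_pipe[2];                          /* [0] = read end, [1] = write end */

void init_wakeup(void)  { pipe(wake_pipe); }

void wake_select(void)                            /* called by the worker thread */
{
    char c = 'x';
    write(wake_pipe[1], &c, 1);                   /* one byte is enough */
}

void io_loop(int sock_fd)                         /* the thread running select() */
{
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(sock_fd, &rfds);
        FD_SET(wake_pipe[0], &rfds);
        int maxfd = sock_fd > wake_pipe[0] ? sock_fd : wake_pipe[0];

        select(maxfd + 1, &rfds, NULL, NULL, NULL);

        if (FD_ISSET(wake_pipe[0], &rfds)) {
            char buf[64];
            read(wake_pipe[0], buf, sizeof buf);  /* drain, then check the
                                                     mutex-protected state */
        }
        if (FD_ISSET(sock_fd, &rfds)) {
            /* ... normal socket I/O ... */
        }
    }
}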
I am writing a simple multi-client server communication program using POSIX threads in C. I am creating a thread every time a new client is connected, i.e. after the accept(...) routine in main().
I have put the accept(...) and the pthread_create(...) inside a while(1) loop, so that the server continues to accept clients forever. Now, where should I call the pthread_join(...) routine after a thread exits?
More info: inside the thread's "start routine", I have used poll() and then recv(), again inside a while(1) loop, to continuously poll for availability of the client and receive data from the client, respectively. The thread exits in the following cases:
1) Either poll() returns some error event or client hangs up.
2) recv() returns a value <= 0.
Language: C
Platform: Suse Linux Enterprise Server 10.3 (x86_64)
First up, starting a new thread for each client is probably wasteful and surely won't scale very far. You should try a design where one thread handles more than one client (i.e. calls poll on more than one socket). Indeed, that's what poll(2), epoll, etc. were designed for.
That being said, in this design you likely needn't join the threads at all. You're not mentioning any reason why the main thread would need information from a thread that finished. Put another way, there's no need for joining.
Just set them as "detached" (pthread_detach or pthread_attr_setdetachstate) and they will be cleaned up automatically when their function returns.
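A short sketch of the detached variant (the accept loop and the client_thread body are placeholders, not your code):

#include <pthread.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

static void *client_thread(void *arg)
{
    int fd = (int)(intptr_t)arg;
    /* ... the poll()/recv() loop for this client ... */
    close(fd);
    return NULL;         /* a detached thread's resources are reclaimed here */
}

void accept_loop(int listen_fd)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

    for (;;) {
        int fd = accept(listen_fd, NULL, NULL);
        if (fd < 0)
            continue;
        pthread_t tid;
        pthread_create(&tid, &attr, client_thread, (void *)(intptr_t)fd);
        /* no pthread_join needed: the thread was created detached */
    }
}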
The problem is that pthread_join blocks the calling thread until the target thread exits. This means you can't really call it and just hope the thread has exited, because the main thread will not be able to do anything else until that thread has exited.
One solution is for each child thread to have a flag that is polled by the main thread, and for the child thread to set that flag just before exiting. When the main thread notices the flag being set, it can join the child thread.
Another possible solution is to use pthread_tryjoin_np, if you have it (which you should, since you're on a Linux system). Then the main thread, in its loop, can simply try to join all the child threads in a non-blocking way.
Yet another solution may be to detach the child threads. Then they will run by themselves and do not need to be joined.
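If you go the pthread_tryjoin_np route, a sketch might look like this (glibc-specific; the fixed-size bookkeeping arrays are just an illustration):

#define _GNU_SOURCE
#include <errno.h>
#include <pthread.h>

#define MAX_CLIENTS 64

static pthread_t clients[MAX_CLIENTS];
static int       active[MAX_CLIENTS];            /* 1 = thread not yet joined */

void reap_finished_clients(void)                 /* call this from main's loop */
{
    for (int i = 0; i < MAX_CLIENTS; i++) {
        if (!active[i])
            continue;
        int rc = pthread_tryjoin_np(clients[i], NULL);
        if (rc == 0)
            active[i] = 0;                       /* joined: slot can be reused */
        else if (rc == EBUSY)
            ;                                    /* still running, try again later */
    }
}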
Ah, the ol' clean shutdown problem.
Assuming that you may want to cleanly disconnect the server from all clients under some circumstance or other, your main thread will have to tell the client threads that they're to disconnect. So how could this be done?
One way would be to have a pipe (one per client thread) between the main thread and client thread. The client thread includes the file descriptor for that pipe in its call to poll(). That way the main thread can easily send a command to the client thread, telling it to terminate. The client thread reads the command when poll() tells it that the pipe has become ready for reading.
So your main thread can then send some sort of command through the pipe to the client thread and then call pthread_join() waiting for the client thread to tidy itself up and terminate.
Similarly another pipe (again one per client thread) can be used by the client thread to send information to the main thread. Instead of being stuck in a call to accept(), the main thread can be using poll() to wait for a new client connection and for messages from the existing client threads. A timeout on poll() also allows the main thread to do something periodically.
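A sketch of that per-client command pipe (the 'q' shutdown byte and the struct layout are assumptions):

#include <poll.h>
#include <pthread.h>
#include <unistd.h>

struct client {
    pthread_t tid;
    int sock_fd;          /* connection to the client */
    int cmd_fd[2];        /* cmd_fd[1]: main writes, cmd_fd[0]: client thread polls */
};

static void *client_thread(void *arg)
{
    struct client *c = arg;
    struct pollfd fds[2] = {
        { .fd = c->sock_fd,   .events = POLLIN },
        { .fd = c->cmd_fd[0], .events = POLLIN },
    };

    for (;;) {
        poll(fds, 2, -1);
        if (fds[1].revents & POLLIN) {           /* command from the main thread */
            char cmd;
            read(c->cmd_fd[0], &cmd, 1);
            if (cmd == 'q')
                break;                           /* tidy up and terminate */
        }
        if (fds[0].revents & (POLLIN | POLLHUP)) {
            /* ... recv() and handle client data ... */
        }
    }
    close(c->sock_fd);
    return NULL;
}

/* The main thread shuts a client down and reaps it: */
void stop_client(struct client *c)
{
    write(c->cmd_fd[1], "q", 1);
    pthread_join(c->tid, NULL);
}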
Welcome to the world of the actor model of concurrent programming.
Of course, if you don't need a clean shutdown then you can just let the threads terminate as and when they want to, and just Ctrl-C the program to close it...
As other people have said, getting the balance of work per thread right is important for efficient scaling.