How can I clean up the IPC message queue? - c

I am using the msgget() function in my IPC-based application. How can I clean up a queue that has filled up with old messages?

To delete a queue, call msgctl() with the IPC_RMID command:
msgctl(msgQID, IPC_RMID, NULL);
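For context, a minimal sketch of the full removal sequence (the key 0x1234 below is just an illustrative value; error handling kept short):

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>

int main(void)
{
    /* Obtain the ID of an existing queue; 0x1234 is an illustrative key. */
    int msqid = msgget(0x1234, 0666);
    if (msqid == -1) {
        perror("msgget");
        return 1;
    }

    /* IPC_RMID destroys the queue and discards any messages still in it. */
    if (msgctl(msqid, IPC_RMID, NULL) == -1) {
        perror("msgctl(IPC_RMID)");
        return 1;
    }
    return 0;
}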

A workaround is to increase MSGMNI, the system-wide maximum number of message queues. The limit is policy dependent; on Linux it can be read and modified via /proc/sys/kernel/msgmni.

You can set the O_NONBLOCK flag on the message queue descriptor using mq_setattr.
Then empty the queue by reading all of the messages, until the return value indicates the queue is empty.
Now set back the old attributes.
This method is not optimized for run time, but it avoids the need to close and reopen the message queue.
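A minimal sketch of that drain, assuming an already-open POSIX descriptor mqd (a hypothetical name) and a Linux system where you link with -lrt:

#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>

/* Drain all pending messages from mqd without blocking, then restore
   the original attributes. mqd is assumed to be an open mqd_t. */
static int drain_queue(mqd_t mqd)
{
    struct mq_attr old_attr, attr;

    if (mq_getattr(mqd, &old_attr) == -1)
        return -1;

    /* Switch the descriptor to non-blocking mode. */
    attr = old_attr;
    attr.mq_flags = O_NONBLOCK;
    if (mq_setattr(mqd, &attr, NULL) == -1)
        return -1;

    /* mq_receive needs a buffer of at least mq_msgsize bytes. */
    char buf[old_attr.mq_msgsize];
    while (mq_receive(mqd, buf, sizeof buf, NULL) >= 0)
        ;                     /* discard each old message */
    if (errno != EAGAIN)      /* EAGAIN means the queue is now empty */
        return -1;

    /* Restore the previous attributes. */
    return mq_setattr(mqd, &old_attr, NULL);
}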

These persistent resource allocation issues (there's a similar one with shared memory) are why the System V APIs are generally considered deprecated. In this case, have you considered using a unix domain socket or FIFO instead of a message queue? Those appear in the filesystem, and can be "cleaned up" when no longer used with tools like rm.
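For illustration, a FIFO endpoint under an assumed path /tmp/myapp.fifo would be created and cleaned up like this:

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/myapp.fifo";  /* assumed path for illustration */

    /* Create the FIFO; it appears in the filesystem like a regular file. */
    if (mkfifo(path, 0666) == -1) {
        perror("mkfifo");
        return 1;
    }

    /* ... open(path, ...) on both sides and exchange data ... */

    /* Cleanup is just an unlink(2) -- or "rm /tmp/myapp.fifo" from a shell. */
    if (unlink(path) == -1) {
        perror("unlink");
        return 1;
    }
    return 0;
}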

Related

IPC message queue overflow consequences

I am creating a C application, which will be executed on an OpenWrt router device. Because of the limited resources I'm a bit scared about the message queue. What if the "reader" application, which takes the messages from the queue, crashes and the "writer" still sends the messages? Should I be worried about the device's memory, or will the message queue clean itself up eventually?
EDIT: I realised that I wasn't clear enough about my task. One application will be sending messages and the other will be reading and processing them.
See the documentation for msgsnd:
The queue capacity is governed by the msg_qbytes field in the associated data structure for the message queue. During queue creation this field is initialized to MSGMNB bytes, but this limit can be modified using msgctl(2).
If insufficient space is available in the queue, then the default behavior of msgsnd() is to block until space becomes available. If IPC_NOWAIT is specified in msgflg, then the call instead fails with the error EAGAIN.
So the sender will wait for the receiver to process a message, unless you use IPC_NOWAIT, in which case it returns EAGAIN and the sender can check for this error code.
The default maximum buffer size is specified in a constant called MSGMNB. You can print this value to see what it is on your system. To change the maximum size for your queue, you can use the function msgctl.
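A hedged sketch of reading and raising that limit with msgctl(); msqid is assumed to be a valid queue ID, and raising msg_qbytes beyond the system maximum requires privilege:

#include <stdio.h>
#include <sys/msg.h>

/* Print the current capacity of the queue and try to double it.
   msqid is assumed to be a valid queue identifier. */
static int grow_queue(int msqid)
{
    struct msqid_ds ds;

    if (msgctl(msqid, IPC_STAT, &ds) == -1)
        return -1;
    printf("current msg_qbytes: %lu\n", (unsigned long) ds.msg_qbytes);

    ds.msg_qbytes *= 2;  /* exceeding the system limit needs CAP_SYS_RESOURCE */
    return msgctl(msqid, IPC_SET, &ds);
}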

Switching many threads using curl easy to single thread using curl multi

I use the libcurl easy interface and I create lots of threads in my C++ app to handle these HTTP requests. I would like to convert the code to use libcurl multi instead. Conceptually, the idea is clear: instead of calling the blocking curl_easy_perform on each curl easy handle from multiple threads, I'll call a blocking curl_multi_perform from a single thread, and this call will internally handle all attached curl easy handles.
Things that aren't clear to me:
How do I cancel any of the outstanding HTTP requests that are being handled by the blocking curl_multi_perform call (from another thread)? Similarly, would the same work with the easy interface: can I end/abort an HTTP request from another thread while another thread is running curl_easy_perform on that handle?
Is it OK to add new easy handles to a multi handle while another thread is calling curl_multi_perform on that multi handle?
If I use curl_multi_remove_handle to abort one of the outgoing HTTP requests while it is loading data (let's say it is doing a 1 GB file download), can I reuse the same handle right after that? Does curl close the TCP connection that was aborted in the middle? Otherwise, I don't see how that connection could possibly be reused without completely downloading the entire 1 GB body.
Is there a simple example that does multiple easy requests from different threads, and the same example converted to the multi interface?
(This is really several questions disguised as one, which is not a good fit for stackoverflow.)
curl_multi_perform() doesn't block. It does as much as it can do for now, then it returns and expects the program to call it again when it's time or when there's activity on one of its sockets.
Ideally you can mark which transfers to stop in the other threads and as soon as curl_multi_perform() returns you can remove said easy handles from the multi handle and they're no longer in the game. Alternatively, you can use the individual transfer's callbacks (write/read/progress) to return error when you want that transfer to end.
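A minimal sketch of the callback approach, using an assumed per-transfer cancel flag that another thread can set; returning non-zero from the progress callback makes libcurl abort that transfer with CURLE_ABORTED_BY_CALLBACK:

#include <curl/curl.h>

/* One flag per transfer; another thread sets it to request cancellation.
   (A real program should make this atomic or mutex-protected.) */
struct transfer_ctx {
    volatile int cancel;
};

static int xferinfo_cb(void *clientp, curl_off_t dltotal, curl_off_t dlnow,
                       curl_off_t ultotal, curl_off_t ulnow)
{
    struct transfer_ctx *ctx = clientp;
    (void) dltotal; (void) dlnow; (void) ultotal; (void) ulnow;
    return ctx->cancel ? 1 : 0;   /* non-zero return aborts the transfer */
}

/* When setting up each easy handle: */
static void install_cancel_hook(CURL *easy, struct transfer_ctx *ctx)
{
    curl_easy_setopt(easy, CURLOPT_XFERINFOFUNCTION, xferinfo_cb);
    curl_easy_setopt(easy, CURLOPT_XFERINFODATA, ctx);
    curl_easy_setopt(easy, CURLOPT_NOPROGRESS, 0L);   /* enable the callback */
}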
It is not OK to use the same libcurl handle in more than one thread at any given moment. If you really need to use the same handle from more than one thread, then you need to do careful mutexing. See the libcurl threading man page. It is usually better to put things into queues from the other threads and let the single libcurl-using thread read handles or actions from that queue when it can, which then ensures single-threaded access to the handles.
If you abort a transfer by removing the handle with curl_multi_remove_handle(), that transfer is aborted. Stopped. You can indeed reuse that handle immediately and if you just put it back in, it will be treated as a brand new transfer and unless you change any options in the easy handle it will simply start off from the beginning again with the same URL. Prematurely aborted transfers will of course be treated correctly, which might include closing the TCP connection if necessary.
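A sketch of that abort-and-reuse pattern; multi and easy are assumed to be valid handles, and the calls are made from the thread that drives curl_multi_perform():

#include <curl/curl.h>

void abort_and_restart(CURLM *multi, CURL *easy)
{
    /* Abort: after removal the easy handle is idle again. */
    curl_multi_remove_handle(multi, easy);

    /* Optionally change options (URL, range, ...) here; otherwise the
       next add restarts the same request from the beginning. */
    curl_multi_add_handle(multi, easy);
}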

Removing a message queue

I am quite confused by the ways message queues are removed in a C/C++ program.
I saw here that
Removing a Message Queue
You can remove a message queue using the ipcrm command (see the ipcrm(1)
reference page), or by calling msgctl() and passing the IPC_RMID command
code. In many cases, a message queue is meant for use within the scope of
one program only, and you do not want the queue to persist after the
termination of that program. Call msgctl() to remove the queue as part of
termination.
And then there is something else, mq_unlink.
I am confused about what the correct way is to completely remove the message queue.
Now let me describe the issue that I am facing.
I have in my application created 2 message queues
Now suddenly a signal arrives and passes control to a signal handler. In the signal handler I am restarting the service, and I am facing an error saying "Resource temporarily unavailable". In the signal handler I have closed one of the queues with mq_close(). Maybe the issue arises because I am not closing the other one. But my doubts here are:
Do I need to close it?
Do I need to remove it?
If I have to remove it, do I need to use msgctl or mq_unlink?
Firstly, there are two unrelated message queue implementations: the old UNIX System V one, which uses msgget(), msgsnd() and msgrcv(), and the newer POSIX-compliant one described here.
If you are using the POSIX version, to close the queue just in your program you use mq_close(), or to destroy it completely for all programs where it may be open you use mq_unlink().
If you are using the System V version, to remove the queue you must use:
msgctl(MessageQueueIQ, IPC_RMID, NULL);
where MessageQueueIQ is the ID of your queue.
To answer your other questions: if you are using the System V message queues, removing the queue with IPC_RMID is enough (there is no separate per-process close). If you are using the POSIX ones, you must unlink it; the name disappears immediately, and the queue itself is destroyed once every process that had it open has also closed it.
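A minimal sketch of the POSIX cleanup sequence, assuming a queue created earlier under the name /myqueue:

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    /* Open an existing queue; "/myqueue" is an assumed name. */
    mqd_t mqd = mq_open("/myqueue", O_RDONLY);
    if (mqd == (mqd_t) -1) {
        perror("mq_open");
        return 1;
    }

    /* mq_close() releases this process's descriptor only. */
    mq_close(mqd);

    /* mq_unlink() removes the name; the queue is destroyed once
       every process that had it open has closed it. */
    if (mq_unlink("/myqueue") == -1) {
        perror("mq_unlink");
        return 1;
    }
    return 0;
}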

How to use libuv for direct file descriptor reads?

As part of an investigation for a project I am working on, I've been looking into different event loop mechanisms/libraries to use for detection and reading of data from sockets. Specifically, what I need to do is simple:
Detect data from client connections
Pass the file descriptor to worker threads to read and process
Epoll edge triggering worked great for this purpose, and I like the edge-triggered behavior, so I only get notified once when data is available. I tried implementing this using libev with something like the pseudocode below, and it appears to work:
void read_cb(struct ev_loop *loop, struct ev_io *watcher, int revents) {
    /* 1. Check for errors */
    /* 2. ev_io_stop(loop, watcher) so I don't get constantly notified */
    /* 3. Assign the ev_io watcher pointer into a worker-thread-accessible data structure */
    /* 4. Signal the worker thread */
    /* 5. Worker thread begins reading from watcher->fd */
    /* 6. When the worker thread gets EAGAIN, start the watcher again */
}
Since libuv is intended for a similar purpose and is edge triggered, I am trying to do something similar but haven't been successful yet. With libuv, I understand that you can use uv_read_start for reading data from streams, but with this method the uv_read_cb returns a buffer already filled with the data. As I need to control the amount of data that is read, and to avoid an extra copy of the data from this buffer into a different structure, I'd like to be able to read directly from the socket.
Is this scenario something that libuv can be used for?
Thanks in advance!
This commit adds the possibility to get the file descriptor of an underlying stream: https://github.com/joyent/libuv/commit/4ca9a363897cfa60f4e2229e4f15ac5abd7fd103
You can use:
int uv_fileno(const uv_handle_t* handle, uv_os_fd_t* fd);
Then read from the FD however you see fit.
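A hedged sketch, assuming an already-connected uv_tcp_t named client and a Unix-like platform where uv_os_fd_t is a plain file descriptor:

#include <errno.h>
#include <unistd.h>
#include <uv.h>

/* Read everything currently available on the handle's descriptor.
   client is assumed to be a connected uv_tcp_t set up elsewhere. */
static void drain_fd(uv_tcp_t *client)
{
    uv_os_fd_t fd;
    if (uv_fileno((uv_handle_t *) client, &fd) != 0)
        return;

    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        /* process buf[0..n) directly -- no intermediate copy */
    }
    if (n == -1 && errno == EAGAIN) {
        /* socket drained; wait for the next readability notification */
    }
}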
I was finally able to find an example that does what I describe in my previous post. For those who'd be interested in how this is done, here is the link.
Testing this had yielded additional questions but I'll post those separately as they are related more to edge/level trigger behaviour rather than the library.

What key to put on the receiver side of Linux message queues?

I have created a message queue and the sender part successfully creates and sends the message to the message queue.
I have used IPC_PRIVATE as key in msgget() on the sender side.
Now my question is, what key to use in msgget() on the receiver side ?
Using IPC_PRIVATE as the key in msgget() on the receiver side does not receive the message and fails.
I should also mention that msgsnd() in the sender part indicates an error (returns -1), but when printing with perror() the output is Success, and the message is sent to the message queue successfully and can be seen using the ipcs -q command in the terminal. I don't know why this happens.
if (msgsnd(msqid, &msgp, 88, IPC_NOWAIT) == 0)
{
    perror("\nsend : msgsnd FAIL");
    msgctl(msqid, IPC_RMID, buf);
    return 1;
}
Output :
send : msgsnd FAIL: Success
You are going to have to use a common key value between your two independent processes. Using IPC_PRIVATE means you are not planning on sharing the queue between two processes, unless the secondary process has been forked from the first process. Because of the forking operation, the child will inherit the queue identifier from the parent process, so using IPC_PRIVATE in that scenario is okay. But because IPC_PRIVATE creates a unique key value for every call it is used in, for scenarios where you have two completely independent processes, such as a server/client relationship, you will need to create a common key. It can either be a "magic number" that you share between all the processes and that is not already in use by another queue, shared memory segment, etc., or you can create a key from a common file in the filesystem by using ftok().
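A minimal sketch of the ftok() approach; both processes run the same lines against the same existing file (the path and project ID below are purely illustrative):

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>

int main(void)
{
    /* Both processes derive the same key from the same file + project id. */
    key_t key = ftok("/etc/hostname", 'Q');  /* illustrative path and id */
    if (key == (key_t) -1) {
        perror("ftok");
        return 1;
    }

    /* The creator passes IPC_CREAT; the other side can omit it. */
    int msqid = msgget(key, IPC_CREAT | 0666);
    if (msqid == -1) {
        perror("msgget");
        return 1;
    }
    printf("queue id: %d\n", msqid);
    return 0;
}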
This question is the reason you should not use the ancient SysV message queues - there's simply no good way to get a key that's unique. Even with ftok, collisions are sufficiently likely that you must write code to try to work around them. Pretend you never saw the SysV IPC interfaces and use POSIX message queues instead; see man mq_open.
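For comparison, a minimal POSIX sketch; the name itself (the assumed /myqueue below) is the rendezvous point, so no key derivation is needed:

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    /* Create (or open, if it already exists) a queue by name. */
    mqd_t mqd = mq_open("/myqueue", O_CREAT | O_RDWR, 0644, NULL);
    if (mqd == (mqd_t) -1) {
        perror("mq_open");
        return 1;
    }

    /* ... mq_send()/mq_receive() ... */

    mq_close(mqd);
    return 0;
}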
