When I do a "dbus_connection_close", do I need to flush the message queue?
In other words, do I need to continue with "dbus_connection_read_write_dispatch" until I receive the "disconnected" indication or is it safe to stop dispatching?
Updated: I need to close the connection to DBus in a clean manner. From reading the documentation, all the clean-up must be done prior to "unreferencing" the connection and this process isn't very well documented IMO.
After some more digging, it appears that there are two types of connection: shared and private.
The shared connection must not be closed, only unreferenced. Furthermore, it appears the connection need not be flushed and dispatched unless outgoing messages still have to be delivered.
In my case, I just needed to end the communication over DBus as soon as possible without trying to salvage any outgoing messages.
Thus the short answer is: NO, no flushing or dispatching needs to be done prior to dbus_connection_unref.
Looking at the documentation for dbus_connection_close(), the only thing that may be invoked is the dispatch status function to indicate that the connection has been closed.
So ordering here is something you probably want to pay attention to, i.e. you may be notified of a closed/dropped connection before processing whatever is left in the message queue.
Looking at the source of the function, it appears the only thing it is going to do is return on failure (invalid connection / NULL pointer); otherwise it seems to just hang up.
This means yes, you probably should flush the message queue prior to hanging up.
Disclaimer: I've only had to talk to dbus a few times, I'm not by any means an authority on it.
I know that there are many questions like this here on SO. I've read through most of the similar ones and cannot find an answer for my case.
I use kqueue for a server/client socket echo application. The program uses exclusively the BSD socket API and is a work in progress. Right now I am at the point of handling EOF from a socket.
My setup follows.
Start the server, which waits for connections and accepts one socket.
Start the client, which connects.
No user data has been sent by this time. Close the client with SIGINT.
The server's kqueue gets an EOF flag with no errors.
The read system call returns zero with no errors.
The problem is that I get no indication that the connection was fully closed. I cannot determine whether I have to shut down the read end or completely close the socket. I get no EOF indication for the write end, and that is expected, since I did not register for the write event (no data has been sent so far).
How to properly tell, if the socket was fully closed?
Update
I know that what follows may belong to another post. I think this update is tightly connected with the question, and the question will benefit as a whole.
To the point: since I get a read EOF but not a write EOF (the socket is closed before any data comes in or goes out), can I somehow query the socket for its state?
From other network-related questions here on SO I learned that the network stack may receive certain packets on a socket, like FIN or RST. Being able to just query the socket state would be a sure win in this particular case.
As a second option, would it help to register a one-time write event after I get a read EOF, just to get a write EOF? Would the write EOF event trigger?
I know I will get a write error eventually, but until then the socket is dead weight.
It would be very convenient to have a getsockopt for the write end being closed, or at least to have an event queued for the read endpoint being shut down after read returns EOF.
I did not find any such getsockopt option, and I am not sure about queuing a write event. The source code for kevent, and the network stack in general, is too tough for me.
That is why I ask.
If read or recv returns 0 then that means the other end closed the connection. It's at least a half-close for writing (from the other peer), which means there's nothing more to be received from that connection.
Unless the protocol specifies that it's only a half-close and that you can continue to send data, it's generally best to simply do a full closing of the connection from your side.
So more recently, I have been developing some asynchronous algorithms in my research. I was doing some parallel performance studies and I have been suspicious that I am not properly understanding some details about the various non-blocking MPI functions.
I've seen some insightful posts on here, namely:
MPI: blocking vs non-blocking
MPI Non-blocking Irecv didn't receive data?
There are a few things I am uncertain about, or just want to clarify, related to working with non-blocking functionality; I think this will help me potentially increase the performance of my current software.
From the Nonblocking Communication part of the MPI 3.0 standard:
A nonblocking send start call initiates the send operation, but does not complete it. The send start call can return before the message was copied out of the send buffer. A separate send complete call is needed to complete the communication, i.e., to verify that the data has been copied out of the send buffer. With suitable hardware, the transfer of data out of the sender memory may proceed concurrently with computations done at the sender after the send was initiated and before it completed.
...
If the send mode is standard then the send-complete call may return before a matching receive is posted, if the message is buffered. On the other hand, the send-complete may not complete until a matching receive is posted, and the message was copied into the receive buffer.
So, as a first set of questions about MPI_Isend (and similarly MPI_Irecv): to ensure a non-blocking send finishes, I need some mechanism to check that it is complete, because in the worst case there may be no suitable hardware to transfer the data concurrently, right? So if I never use something like MPI_Test or MPI_Wait following the non-blocking send, MPI_Isend may never actually get its message out, right?
This question applies to some of my work because I send messages via MPI_Isend and do not actually test for completeness until I get the expected response message, since I want to avoid the overhead of MPI_Test calls. While this approach has been working, it seems faulty based on my reading.
Further, the second paragraph appears to say that for the standard non-blocking send, MPI_Isend, it may not even begin to send any of its data until the destination process has called a matching receive. Given the availability of MPI_Probe/MPI_Iprobe, does this mean an MPI_Isend call will at least send out some preliminary metadata of the message, such as size, source, and tag, so that the probe functions on the destination process can know a message wants to be sent there and so the destination process can actually post a corresponding receive?
Related is a question about the probe. In the Probe and Cancel section, the standard says that
MPI_IPROBE(source, tag, comm, flag, status) returns flag = true if there is a message that can be received and that matches the pattern specified by the arguments source, tag, and comm. The call matches the same message that would have been received by a call to MPI_RECV(..., source, tag, comm, status) executed at the same point in the program, and returns in status the same value that would have been returned by MPI_RECV(). Otherwise, the call returns flag = false, and leaves status undefined.
Going off of the above passage, it is clear the probing will tell you whether there's an available message you can receive corresponding to the specified source, tag, and comm. My question is, should you assume that the data for the corresponding send from a successful probing has not actually been transferred yet?
It seems reasonable to me now, after reading the standard, that indeed a message the probe is aware of need not be a message that the local process has actually fully received. Given the previous details about the standard non-blocking send, it seems you would need to post a receive after doing the probing to ensure the source non-blocking standard send will complete, because there might be times where the source is sending a large message that MPI does not want to copy into some internal buffer, right? And either way, it seems that posting the receive after a probing is how you ensure that you actually get the full data from the corresponding send to be sent. Is this correct?
This latter question relates to one instance in my code where I do an MPI_Iprobe call and, if it succeeds, perform an MPI_Recv call to get the message. However, I think this could be problematic, because I assumed that if the probe succeeds, the whole message has already arrived. That implied to me that MPI_Recv would run quickly, since the full message would already be in local memory somewhere. I now feel this was an incorrect assumption, and some clarification would be helpful.
The MPI standard does not mandate a progress thread. That means MPI_Isend() might do nothing at all until communications are progressed. Progress occurs under the hood in most MPI subroutines; MPI_Test(), MPI_Wait() and MPI_Probe() are the most obvious ones.
I am afraid you are mixing up progress and synchronous send (e.g. MPI_Ssend()).
MPI_Probe() is a local operation: it will not contact the sender to ask whether something was sent, nor progress it.
Performance-wise, you should avoid unexpected messages as much as possible; that means a receive should be posted on one end before the message is sent by the other end.
There is a trade-off between performance and portability here:
if you want to write portable code, then you cannot assume there is an MPI progress thread
if you want to optimize your application on a given system, you should try an MPI library that implements a progress thread on the interconnect you are using
Keep in mind that most MPI implementations send small messages in eager mode (note this is not mandated by the MPI standard, and you should not rely on it).
It means MPI_Send() will likely return immediately if the message is small enough (and "small enough" depends, among other things, on your MPI implementation, how it is tuned, and which interconnect is used).
I have a listening socket on a TCP port. The process itself uses setrlimit(RLIMIT_NOFILE,&...); to configure how many descriptors are allowed for the process.
For tests RLIMIT_NOFILE is set to 20; for production it will of course be set to a sanely bigger number. 20 makes it easy to reach the limit in a test environment.
The server itself has no issues like descriptor leaks, but trying to solve the problem by increasing RLIMIT_NOFILE obviously will not do, because in real life there is no guarantee that the limit will not be reached, no matter how high it is set.
The problem is that after reaching the limit, accept returns "Too many open files", and unless a file or socket is closed, the event loop starts spinning without delay, eating 100% of one core. Even if the client closes the connection (e.g. because of a timeout), the server will loop until a file descriptor becomes available to process and close the already dead incoming connection. EDIT: On the other hand, the client stalls and has no good way to know that the server is overloaded.
My question: is there some standard way to handle this situation by closing the incoming connection after accept returns "Too many open files"?
Several dirty approaches come to mind:
Close and reopen the listening socket in the hope that all pending connections will be closed (this is quite dirty because in a threaded server some other thread may grab the freed file descriptor)
Track the open file descriptor count (this cannot be done properly with external libraries that hold some untracked file descriptors)
Check whether the file descriptor number is near the limit and start closing incoming connections before the situation happens (this is rather implementation-specific, and while it will work on Linux, there is no guarantee that other OSes handle file descriptors the same way)
EDIT: One more dirty and ugly approach:
Keep one spare fd (e.g. dup(STDIN_FILENO) or open("/dev/null",...)) to be used when accept fails. The sequence would be:
... accept failed
// stop threads
close(sparefd);
newconnection = accept(...);
close(newconnection);
sparefd = open("/dev/null",...);
// release threads
The main drawback of this approach is the thread synchronization needed to prevent other threads from grabbing the just-freed spare fd.
You shouldn't use setrlimit to control how many simultaneous connections your process can handle. Your tiny little bit of socket code is saying to the whole rest of the application, "I only want to have N connections open at a time, and this is the only way I know how to do it, so... nothing else in the process can have any files!". What would happen if everybody did that?
The proper way to do what you want is easy -- keep track of how many connections you have open, and just don't call accept until you can handle another one.
I understand that your code is in a library. The library encounters a resource limit event. I would distinguish, generally, between events which are catastrophic (memory exhaustion, can't open listening socket) and those which are probably temporary. Catastrophic events are hard to handle: without memory, even logging or an orderly shutdown may be impossible.
"Too many open files", by contrast, is a condition which is probably temporary, not least because we are the resource hog. Temporary error conditions are luckily trivial to handle: by waiting. This is what you are not doing: you should wait for a spell after accept returns "Too many open files" before you call accept again. That will solve the 100% CPU load problem. (I assume that our server performs some work on each connection which is at some point finished, so that the file descriptors of the client connections which our library holds are eventually closed.)
There remains the problem that the library cannot know the requirements of the user code. (How long should the pause between accepts be?1 Is it at all acceptable (sic) to let connection requests wait at all? Do we give up at some point?) It is imperative to report errors back to the user code, so that the user code has a chance to see and fix the error.
If the user code gets the file descriptor back, that's easy: Return accept's error code (and make sure to document that possibility). So I assume that the user code never sees gritty details like file descriptors but instead gets some data, for example. It may even be that the library performs just side effects, possibly concurrently, so that user code never sees any return value which would be usable to communicate errors. Then the library must provide some other way to signal the error condition to the user code. This may impose restrictions on how the user code can use the library: Perhaps before or after certain function calls, or simply periodically, an error status must be actively checked.
1By the way, it is not clear to me, even after reading the accept man page, whether the client's connect fails (because the connection request has been de-queued on the server side but cannot be handled), or whether the request simply stays in the queue so that the client is oblivious of the server's problems, apart from a delay.
Notice that multiplexing syscalls such as poll(2) can wait (without busy spinning) on accept-ing sockets (and on connected sockets as well, or any other kind of stream file descriptor).
So just have your event loop handle them (probably along with other readable and writable file descriptors), and don't call accept(2) when you don't want to.
Thank you for reading. I'm currently implementing both the server and the client for a socket application in C on Linux. Currently I have a working "chat" system where both the server and the client can send unique messages, and the other end receives each message with the correct length.
example output:
Server side
You:Hello!
client:hi, how are you?
You: fine thanks.
client: blabla
..And the client side would look as follows:
server: Hello!
you:hi,how are you?
etc etc.
My question is, is there any way for the client/server to be able to send multiple messages before the other replies?
I currently have an endless while loop that waits for a receive and then proceeds to send, repeating until the connection is lost. Using this method I can only send one message before I am forced to wait for a receive. I'm not sure of the correct implementation, as I'm still quite new to both sockets and C! Thanks :)
Yes, it is possible.
The main body of your code should not block on the socket waiting for data; it should read the socket only when data is already available. This is possible using the select function: after the select call returns, you read the socket to display received messages, and you send the user's messages to the other peer if any are ready on input.
A generic solution: you can use threading, and I'd propose to run the receiving part in a separate thread.
Hence, you first code the main thread to manage only sending, just as if the application couldn't receive at all. Apparently you have an edit field somewhere (and a message loop somehow). Each time the user presses Enter, you Send from within the edit field's callback function.
Then you code a separate thread that calls (and blocks on) Receive(). Each time Receive "slips on" (i.e. data came in), you do something with the data and then jump back to the Receive entry point. This goes on until you terminate the socket, or by other means decide not to jump back to the Receive entry point.
The only situation where the two threads "touch" each other is when they both want to write text content to the same chat window. Both should do it immediately as the transmission happens, but they may try to access the chat window at exactly the same moment, causing a crash. Hence you must apply a locking mechanism here: whichever thread first tries to access the chat window "gets it", while the locking mechanism keeps the other on hold until the first releases the lock. Then the second one can do its job. The locking is, after all, only a matter of microseconds.
These are immediate actions, independent of each other. You don't need to queue multiple messages; each one gets processed "as it happens".
The CreateIoCompletionPort function allows the creation of a new I/O completion port and the registration of file handles to an existing I/O completion port.
Then I can use any function, like recv on a socket or ReadFile on a file, with an OVERLAPPED structure to start an asynchronous operation.
I have to check whether the function call returned synchronously, although it was called with an OVERLAPPED structure, and in that case handle it directly. Otherwise, when ERROR_IO_PENDING is returned, I can use the GetQueuedCompletionStatus function to be notified when the operation completes.
The questions which arise are:
How can I remove a handle from the I/O completion port? For example, when I add sockets to the IOCP, how can I remove closed ones? Should I just re-register another socket with the same completion key?
Also, is there a way to make the calls ALWAYS go over the I/O completion port and don't return synchronously?
And finally, is it possible for example to recv asynchronously but to send synchronously? For example when a simple echo service is implemented: Can I wait with an asynchronous recv for new data but send the response in a synchronous way so that code complexity is reduced? In my case, I wouldn't recv a second time anyways before the first request was processed.
What happens if an asynchronous ReadFile has been requested, but before it completes, a WriteFile to the same file should be processed? Will the ReadFile be cancelled with an error message, so that I have to restart the read process as soon as the write is complete? Or do I have to cancel the ReadFile manually before writing? This question arises in connection with a communication device, so concurrent writes and reads should not cause problems.
How can I remove a handle from the I/O completion port?
In my experience you can't disassociate a handle from a completion port. However, you may disable completion port notification by setting the low-order bit of your OVERLAPPED structure's hEvent field: See the documentation for GetQueuedCompletionStatus.
For example, when I add sockets to the IOCP, how can I remove closed ones? Should I just re-register another socket with the same completion key?
It is not necessary to explicitly disassociate a handle from an I/O completion port; closing the handle is sufficient. You may associate multiple handles with the same completion key; the best way to figure out which request is associated with the I/O completion is by using the OVERLAPPED structure. In fact, you may even extend OVERLAPPED to store additional data.
Also, is there a way to make the calls ALWAYS go over the I/O completion port and don't return synchronously?
That is the default behavior, even when ReadFile/WriteFile returns TRUE. You must explicitly call SetFileCompletionNotificationModes to tell Windows to not enqueue a completion packet when TRUE and ERROR_SUCCESS are returned.
is it possible for example to recv asynchronously but to send synchronously?
Not by using recv and send; you need to use functions that accept OVERLAPPED structures, such as WSARecv and WSASend, or alternatively ReadFile and WriteFile. It might be handier to use the latter if your code is meant to work with multiple types of I/O handles, such as both sockets and named pipes. Those functions also provide a synchronous mode, so if you use them you can mix asynchronous and synchronous calls.
What happens if an asynchronous ReadFile has been requested, but before it completes, a WriteFile to the same file should be processed?
There is no implicit cancellation. As long as you're using separate OVERLAPPED structures for each read/write to a full-duplex device, I see no reason why you can't do concurrent I/O operations.
As I've already pointed out there, the commonly held belief that it is impossible to remove handles from completion ports is wrong, probably caused by the absence of any hint on how to do this in nearly all the documentation I could find. Actually, it's pretty easy:
Call NtSetInformationFile with the FileReplaceCompletionInformation enumerator value for FileInformationClass and a pointer to a FILE_COMPLETION_INFORMATION structure for the FileInformation parameter. In this structure, set the Port member to NULL (or nullptr, in C++) to disassociate the file from the port it's currently attached to (I guess if it isn't attached to any port, nothing would happen),
or set Port to a valid HANDLE of another completion port to associate the file with that one instead.
First some important corrections.
In case the overlapped I/O operation completes immediately (ReadFile or similar I/O function returns success) - the I/O completion is already scheduled to the IOCP.
Also, judging by your questions, I think you are confusing the file/socket handles with the specific I/O operations issued on them.
Now, regarding your questions:
AFAIK there is no conventional way to remove a file/socket handle from the IOCP (usually you just don't have to do this). You talk about removing closed handles from the IOCP, which is absolutely incorrect. You can't remove a closed handle, because it does not reference a valid kernel object anymore!
A more correct question would be how the file/socket should be properly closed. The answer is: just close your handle. All the outstanding I/O operations (issued on this handle) will return soon with an error code (abortion). Then your completion routine (the one that calls GetQueuedCompletionStatus in a loop) should perform the needed per-I/O cleanup.
As I've already said, all the I/O completion arrives at IOCP in both synchronous and asynchronous cases. The only situation where it does not arrive at IOCP is when an I/O completes synchronously with an error. Anyway, if you want a unified processing - in such a case you may post an artificial completion data to IOCP (use PostQueuedCompletionStatus).
You should use WSASend and WSARecv (not recv and send) for overlapped I/O. Nevertheless, even if the socket was opened with the flag WSA_FLAG_OVERLAPPED, you are allowed to call the I/O functions without specifying an OVERLAPPED structure; in that case those functions work synchronously.
So you may decide on synchronous/asynchronous mode for every function call.
There is no problem mixing overlapped read/write requests. The only delicate point is what happens if you try to read data from the file position you are currently writing to. The result may depend on subtle things, such as the order in which the hardware completes the I/Os, some PC timing parameters, etc. Such a situation should be avoided.
How can I remove a handle from the I/O completion port? For example, when I add sockets to the IOCP, how can I remove closed ones? Should I just re-register another socket with the same completion key?
You've got it the wrong way around. You set the I/O completion port to be used by a file object - when the file object is deleted, you have nothing to worry about. The reason you're getting confused is because of the way Win32 exposes the underlying native API functionality (CreateIoCompletionPort does two very different things in one function).
Also, is there a way to make the calls ALWAYS go over the I/O completion port and don't return synchronously?
This is how it's always been. Only starting with Windows Vista can you customize how the completion notifications are handled.
What happens if an asynchronous ReadFile has been requested, but before it completes, a WriteFile to the same file should be processed. Will the ReadFile be cancelled with an error message and I have to restart the read process as soon as the write is complete?
I/O operations in Windows are asynchronous inherently, and requests are always queued. You may not think this is so because you have to specify FILE_FLAG_OVERLAPPED in CreateFile to turn on asynchronous I/O. However, at the native layer, synchronous I/O is really an add-on, convenience thing where the kernel keeps track of the file position for you and waits for the I/O to complete before returning.