SO_ERROR vs. errno - c

To get the error from a socket syscall (like recv), which is better in terms of performance?
Use plain old errno,
or use SO_ERROR as the getsockopt() optname?
I think errno (defined as __error() on my system) is faster because it doesn't require a system call. Am I right?
The advantages of SO_ERROR are: the error is automatically reset after it is read, and we can be sure the error concerns only our socket. It's safer.
Which one do you think is better? Is there really a difference in performance between the two?

Quoting Dan Bernstein:
Situation: You set up a non-blocking socket and do a connect() that returns -1/EINPROGRESS or -1/EWOULDBLOCK. You select() the socket for writability. This returns as soon as the connection succeeds or fails. (Exception: Under some old versions of Ultrix, select() wouldn't notice failure before the 75-second timeout.)
Question: What do you do after select() returns writability? Did the connection fail? If so, how did it fail?
If the connection failed, the reason is hidden away inside something called so_error in the socket. Modern systems let you see so_error with getsockopt(,,SO_ERROR,,) ...
He goes on to discuss the fact that getsockopt(,,SO_ERROR,,) is a modern invention that doesn't work on old systems, and how to get the error code on such systems. But you probably don't need to worry about that if you're programming for a Unix/Linux system released in the last 15 years.
The Linux man page for connect describes the same usage of SO_ERROR.
So, if you're performing asynchronous operations on sockets, you may need to use SO_ERROR. In any other case, just use errno.
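For illustration, a minimal sketch of that SO_ERROR pattern after a non-blocking connect() (the function name is made up, and address setup is omitted):

    #include <errno.h>
    #include <sys/select.h>
    #include <sys/socket.h>

    /* Assumes 'fd' is a non-blocking socket on which connect() just
     * returned -1 with errno == EINPROGRESS.  Returns 0 on success,
     * or the pending socket error otherwise. */
    int finish_connect(int fd)
    {
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);

        /* Wait until the connection either succeeds or fails. */
        if (select(fd + 1, NULL, &wfds, NULL, NULL) < 0)
            return errno;

        int err = 0;
        socklen_t len = sizeof(err);

        /* The result of the asynchronous connect is hidden in so_error;
         * getsockopt(SO_ERROR) retrieves it and clears it. */
        if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) < 0)
            return errno;

        return err;   /* 0 means the connection succeeded */
    }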

Quoting Unix Network Programming:
If so_error is nonzero when the process calls read and there is no
data to return, read returns –1 with errno set to the value of so_error
(p. 516 of TCPv2). The value of so_error is then reset to 0. If there
is data queued for the socket, that data is returned by read instead
of the error condition. If so_error is nonzero when the process calls
write, –1 is returned with errno set to the value of so_error (p. 495
of TCPv2) and so_error is reset to 0.
So, errno is the better choice, unless you want to see the error immediately, before any queued data has been read.
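In other words, for a plain blocking read the pending error simply shows up in errno once any queued data has been consumed; a minimal sketch (the helper name is made up):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Read from a connected socket; a pending so_error, if any, is
     * delivered through errno once queued data has been drained. */
    ssize_t read_or_report(int fd, void *buf, size_t len)
    {
        ssize_t n = read(fd, buf, len);
        if (n < 0)
            fprintf(stderr, "read: %s\n", strerror(errno));
        return n;
    }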

Related

Can socket() fail with EINPROGRESS

Is it possible for the socket() function to fail with EINPROGRESS in Linux? Note that I am specifically asking about socket(), not connect() or others.
POSIX does not list EINPROGRESS as a possible error code. However, the man page for socket() on Linux says:
Other errors may be generated by the underlying protocol modules.
Are there any circumstances in which this call can actually fail with EINPROGRESS?
EINPROGRESS means the operation is now in progress; the call would otherwise block for an external reason, such as waiting for a remote peer or a local device.
socket() only creates an entry in the system's memory: there is no remote action or device to wait for.
And even if it could return EINPROGRESS, you would have nothing to wait for.
With file descriptors and socket descriptors, you can use select() to wait until the system is ready. But if socket() itself has not returned a descriptor, there is nothing to wait on.
I see no reason for socket() to return EINPROGRESS, and it would be a bad idea anyway.
Maybe not the answer you were looking for:
You'll have to check the corresponding Linux kernel source code (kernel/net/socket.c) thoroughly to be 100% sure. Glancing through the code, it doesn't look like EINPROGRESS is returned anywhere. However, there are runtime-dependent calls in there, so it's difficult to say from static code analysis alone.
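In practice the caller just checks for -1 and reports errno, whatever its value turns out to be; a minimal sketch (the helper name is made up):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    int make_tcp_socket(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            /* EINPROGRESS is not among the documented errors here;
             * typical failures are EMFILE, ENFILE, EACCES, ENOBUFS. */
            fprintf(stderr, "socket: %s\n", strerror(errno));
            return -1;
        }
        return fd;
    }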

Can send() on a TCP socket return >=0 and <length?

I've seen a number of questions regarding send() that discuss the underlying protocol. I'm fully aware that for TCP any message may be broken up into parts as it's sent and there's no guarantee that the receiver will get the message in one atomic operation. In this question I'm talking solely about the behavior of the send() system call as it interacts with the networking layer of the local system.
According to the POSIX standard, and the send() documentation I've read, the length of the message to be sent is specified by the length argument. Note that: send() sends one message, of length length. Further:
If space is not available at the sending socket to hold the message to
be transmitted, and the socket file descriptor does not have
O_NONBLOCK set, send() shall block until space is available. If space
is not available at the sending socket to hold the message to be
transmitted, and the socket file descriptor does have O_NONBLOCK set,
send() shall fail.
I don't see any possibility in this definition for send() to ever return any value other than -1 (which means no data is queued in the kernel to be transmitted) or length, which means the entire message is queued in the kernel to be transmitted. I.e., it seems to me that send() must be atomic with respect to locally queuing the message for delivery in the kernel.
If there is enough room in the socket queue in the kernel for the entire message and no signal occurs (normal case), it's copied and returns length.
If a signal occurs during send(), then it must return -1. Obviously we cannot have queued part of the message in this case, since we don't know how much was sent. So nothing can be sent in this situation.
If there is not enough room in the socket queue in the kernel for the entire message and the socket is blocking, then according to the above statement send() must block until space becomes available. Then the message will be queued and send() returns length.
If there is not enough room in the socket queue in the kernel for the entire message and the socket is non-blocking, then send() must fail (return -1) and errno will be set to EAGAIN or EWOULDBLOCK. Again, since we return -1 it's clear that in this situation no part of the message can be queued.
Am I missing something? Is it possible for send() to return a value that is >= 0 && < length? In what situation? What about non-POSIX/UNIX systems? Does the Windows send() implementation conform to this?
Your point 2 is over-simplified. The normal condition under which send returns a value greater than zero but less than length (note that, as others have said, it can never return zero except possibly when the length argument is zero) is when the message is sufficiently long to cause blocking, and an interrupting signal arrives after some content has already been sent. In this case, send cannot fail with EINTR (because this would prevent the application from knowing it had already successfully sent some data) and it cannot re-block (since the signal is interrupting, and the whole point of that is to get out of blocking), so it has to return the number of bytes already sent, which is less than the total length requested.
According to the Posix specification and all the man 2 send pages I have ever seen in 30 years, yes, send() can return any value > 0 and <= length. Note that it cannot return zero.
According to a discussion a few years ago on news:comp.protocols.tcp-ip, where all the TCP implementors are, a blocking send() won't actually return until it has transferred all the data to the socket send buffer: in other words, the return value is either -1 or length. It was agreed that this was true of all known implementations, and also true of write(), writev(), and sendmsg().
I know how the thing works on Linux, with the GNU C Library. Point 4 of your question reads differently in this case. If you set the flag O_NONBLOCK for the file descriptor, and if it is not possible to queue the entire message in the kernel atomically, send() returns the number of bytes actually sent (it can be between 1 and length), and errno is set to EWOULDBLOCK.
(With a file descriptor working in the blocking mode, send() would block.)
It is possible for send() to return a value >= 0 && < length. This could happen if the send buffer has less room than the length of the message at the time send() is called. Similarly, if the current receiver window size known to the sender is smaller than the length of the message, only part of the message may be sent. Anecdotally, I've seen this happen on Linux over a localhost connection when the receiving process was slow to drain its receive buffer.
My sense is that one's actual experience will vary a good bit by implementation. From this Microsoft link, it's clear that a non-error return value less than the length can occur.
It is also possible to get a return value of zero (again, at least with some implementations) if a zero-length message is sent.
This answer is based on my experience, as well as drawing upon this SO answer particularly.
Edit: From this answer and its comments, evidently an EINTR failure may only result if the interruption comes before any data is sent, which would be another possible way to get such a return value.
On a 64-bit Linux system:
sendto(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4294967296, 0, NULL, 0) = 2147479552
So, even when asked to send a lowly 4 GB, Linux chickens out and sends less than 2 GB. So if you think you'll ask it to send 1 TB and it will patiently sit there, keep wishing.
Similarly, on an embedded system with just a few KB free, don't assume it will fail or wait for something: it will send as much as it can and tell you how much that was, letting you retry with the rest (or do something else in the meantime).
Everyone agrees that in case of EINTR, there can be a short send. But EINTR can happen at any time, so there can always be a short send.
And finally, POSIX says that the number of bytes sent is returned, period. The whole of Unix, and POSIX which formalizes it, is built on the concept of short reads and writes, which allows POSIX implementations to scale from the tiniest embedded systems to supercomputers with proverbial "big data". So there is no need to read between the lines looking for indulgences for the particular ad-hoc implementation you happen to have at hand. There are many more implementations out there, and as long as you follow the word of the standard, your app will be portable among them.
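The portable consequence is to wrap send() in a loop that tolerates short sends and EINTR; a minimal sketch for a blocking socket (the function name is made up):

    #include <errno.h>
    #include <sys/socket.h>

    /* Send the whole buffer on a blocking socket, tolerating short
     * sends and interruption by signals.  Returns 0 on success,
     * -1 on error with errno set by send(). */
    int send_all(int fd, const char *buf, size_t len)
    {
        while (len > 0) {
            ssize_t n = send(fd, buf, len, 0);
            if (n < 0) {
                if (errno == EINTR)
                    continue;      /* interrupted before anything was sent: retry */
                return -1;         /* real error */
            }
            buf += n;              /* short send: advance past what was queued */
            len -= (size_t)n;
        }
        return 0;
    }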
To clarify a little, where it says:
shall block until space is available.
there are several ways to wake up from that block/sleep:
Enough space becomes available.
A signal interrupts the current blocking operation.
SO_SNDTIMEO is set for the socket and the timeout expires.
Other, e.g. the socket is closed in another thread.
So things end up thus:
If there is enough room in the socket queue in the kernel for the entire message and no signal occurs (normal case), it's copied and returns length.
If a signal occurs during send(), then it must return -1. Obviously we cannot have queued part of the message in this case, since we don't know how much was sent. So nothing can be sent in this situation.
If there is not enough room in the socket queue in the kernel for the entire message and the socket is blocking, then according to the above statement send() must block until space becomes available. Then the message will be queued and send() returns length. But send() can be interrupted by a signal, or the send timeout can expire, causing a short send (partial write). Reasonable implementations will return -1 and set errno to an appropriate value if nothing at all was copied to the send buffer.
If there is not enough room in the socket queue in the kernel for the entire message and the socket is non-blocking, then send() must fail (return -1) and errno will be set to EAGAIN or EWOULDBLOCK. Again, since we return -1 it's clear that in this situation no part of the message can be queued.
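For a non-blocking socket the same idea as the blocking loop above applies, except that a short return or EAGAIN/EWOULDBLOCK means "send the rest later", typically after poll() reports the socket writable again; a rough sketch under those assumptions (the function name is made up):

    #include <errno.h>
    #include <poll.h>
    #include <sys/socket.h>

    /* Push the whole buffer through a non-blocking socket, waiting
     * with poll() whenever the send buffer fills up.
     * Returns 0 on success, -1 on error. */
    int send_all_nonblocking(int fd, const char *buf, size_t len)
    {
        while (len > 0) {
            ssize_t n = send(fd, buf, len, 0);
            if (n > 0) {
                buf += n;                  /* partial send is normal: keep the rest */
                len -= (size_t)n;
                continue;
            }
            if (n < 0 && errno == EINTR)
                continue;                  /* interrupted before anything was sent */
            if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                struct pollfd pfd = { .fd = fd, .events = POLLOUT, .revents = 0 };
                if (poll(&pfd, 1, -1) < 0 && errno != EINTR)
                    return -1;             /* wait until there is room again */
                continue;
            }
            return -1;                     /* n == 0 with len > 0, or another error */
        }
        return 0;
    }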

How do I get a specific error from g_poll?

The g_poll() function returns -1 "on error or if the call was interrupted". (See: https://developer.gnome.org/glib/2.28/glib-The-Main-Event-Loop.html#g-poll).
If g_poll returns -1 how do I determine if this was because the call was interrupted vs. if there was an error?
If it was an error, how do I determine the cause of the error? Is it sufficient to look at errno?
Yes. Check errno if g_poll() returns -1. The documentation also says
g_poll() polls fds, as with the poll() system call, but portably.
On systems that don't have poll(), it is emulated using select().
i.e. g_poll() calls poll() internally, falling back to select() where poll() is unavailable.
Hence, check the errno values that poll() and select() can set in their various failure scenarios; in particular, errno == EINTR tells you the call was merely interrupted by a signal rather than failing.
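A small sketch of that check, assuming a single descriptor and a hypothetical wrapper name (the timeout is arbitrary):

    #include <errno.h>
    #include <string.h>
    #include <glib.h>

    /* Poll one descriptor with g_poll() and distinguish interruption
     * from a genuine error by inspecting errno. */
    gboolean poll_once(gint fd)
    {
        GPollFD pfd = { fd, G_IO_IN | G_IO_ERR | G_IO_HUP, 0 };

        gint ret = g_poll(&pfd, 1, 1000 /* ms */);
        if (ret < 0) {
            if (errno == EINTR)
                return TRUE;   /* merely interrupted by a signal: retry later */
            g_warning("g_poll failed: %s", strerror(errno));
            return FALSE;      /* real error */
        }
        return TRUE;           /* ret == 0: timeout; ret > 0: check pfd.revents */
    }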

Read signaled by select(), but recv() returns no data and signals EAGAIN on non-blocking sockets

select() reported my socket as readable, but a subsequent recv() returned no data; instead it returned -1 with errno == EAGAIN.
I can guarantee that no other thread touches the socket.
I think this behavior is not correct. If the other side subsequently closes the connection, I would expect a return value of 0 (graceful close) or some other error code from recv, but not EAGAIN, because to my mind EAGAIN means that data may still arrive in the future.
I found a previous thread about the problem here, but without a solution.
This behavior happens to me on Ubuntu Linux Oneiric and other recent Linux distros, so the claim in the link posted here, that it will be fixed in the kernel, is not true for the 3.0.0 kernel or the latest 2.6.x.
Does anybody have an idea why it happens and how to avoid this unwanted behavior?
Select() reporting a socket as readable does not mean that there is something to read; it implies that a read will not block. The read could return -1 or 0, but it would not block.
UPDATE:
After select returns readable: if read() returns -1, check errno.
EAGAIN/EWOULDBLOCK and EINTR are the values to treat specially: mostly by reissuing the read(), though you can also rely on the select loop reporting the socket readable again the next time around.
If there are multiple threads involved, things may get more difficult.
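In code, that amounts to treating EAGAIN/EWOULDBLOCK (and EINTR) as "nothing there after all, go back to select()"; a minimal sketch with made-up names and return conventions:

    #include <errno.h>
    #include <sys/socket.h>

    /* Called after select() reported 'fd' readable.  Returns the number
     * of bytes read, 0 on orderly shutdown by the peer, -2 for "spurious
     * wakeup, just go back to select()", or -1 on a genuine error. */
    ssize_t read_after_select(int fd, char *buf, size_t len)
    {
        ssize_t n = recv(fd, buf, len, 0);
        if (n >= 0)
            return n;                      /* data, or 0 = peer closed */
        if (errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR)
            return -2;                     /* retry via the select() loop */
        return -1;                         /* genuine error */
    }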
I'm getting the same problem, but with epoll. I noticed that it happens whenever the system reuses the FD numbers of sockets that have already been closed.
After some research, I noticed that this behavior is caused by closing the sockets while polling on them. Try to avoid running select()/epoll on a socket while it is being closed; that may help.

reentrant function read()

I have a server based on select(), which I want to use to receive from several clients.
But I find (with gdb) that the server gets blocked in read().
So I thought of solving it by adding a SIGALRM, but
when the timeout occurs, it is still blocked in read().
This happens because system calls are automatically restarted: the read()
is not interrupted when the SIGALRM signal handler returns.
Is this interpretation correct?
The usual solution to this problem is to pass SOCK_NONBLOCK to socket(2), or O_NONBLOCK to fcntl(2)'s F_SETFL command. Once the socket is marked non-blocking, it will never block when you try to read from it, and you won't need to straddle the divide between blocking and non-blocking. Are you sure select(2) actually set the file descriptor? The select(2) man page does describe one reason why you might see what you're seeing, though it doesn't seem likely:
Under Linux, select() may report a socket file descriptor as
"ready for reading", while nevertheless a subsequent read
blocks. This could for example happen when data has arrived
but upon examination has wrong checksum and is discarded.
There may be other circumstances in which a file descriptor is
spuriously reported as ready. Thus it may be safer to use
O_NONBLOCK on sockets that should not block.
If you really just want to prevent the automatic restart, look into SA_RESTART in sigaction(2) to prevent restartable system calls from restarting.
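As a sketch of both options, marking the descriptor non-blocking with fcntl(), and installing the SIGALRM handler without SA_RESTART so a blocking read() is interrupted instead of restarted (the helper names are made up):

    #include <fcntl.h>
    #include <signal.h>
    #include <string.h>

    /* Option 1: make the descriptor non-blocking, so read() never hangs. */
    static int set_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        if (flags < 0)
            return -1;
        return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }

    static void on_alarm(int sig) { (void)sig; /* just interrupt the syscall */ }

    /* Option 2: install the SIGALRM handler WITHOUT SA_RESTART, so a
     * blocking read() returns -1/EINTR instead of being restarted. */
    static int install_alarm_handler(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_alarm;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;               /* note: no SA_RESTART here */
        return sigaction(SIGALRM, &sa, NULL);
    }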
