C socket: does send() wait for recv() to end?

I use blocking C sockets on Windows.
I use them to send updates of some data from the server to the client and vice versa. I send updates at a high frequency (every 100 ms). Will the send() function wait for the recipient's recv() to receive the data before returning?
I assume not, if I understand the man page correctly:
"Successful completion of send() does not guarantee delivery of the message."
So what will happen if one side executes 10 send() calls while the other has only completed 1 recv()?
Do I need to use some sort of acknowledgement system?

Let's assume you are using TCP. When you call send, the data you are sending is immediately placed on the outgoing queue and send completes successfully. If, however, send is unable to place the data on the outgoing queue, it returns with an error.
Since TCP is a guaranteed-delivery protocol, data on the outgoing queue can only be removed once an acknowledgement has been received from the remote end, because the data may need to be resent if no ACK arrives in time.
If the remote end is sluggish, the outgoing queue fills up with data, and send then blocks until there is space to place the new data on the outgoing queue.
The connection can, however, fail in such a way that no further data can be sent. Once a TCP connection has been closed, any further sends will result in an error, but the user has no way of knowing how much data actually made it to the other side. (I know of no way of retrieving TCP bookkeeping from a socket to the user application.) Therefore, if confirmation of receipt is required, you should probably implement it at the application level.
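As an illustration, here is a minimal sketch of such an application-level confirmation (POSIX-style calls; the one-byte 'A' acknowledgement is a made-up convention for this example, not part of any standard):

#include <sys/socket.h>
#include <sys/types.h>

/* Send one update, then block until the receiver confirms it
 * with a single 'A' byte. Returns 0 on success, -1 on failure. */
int send_update_acked(int fd, const char *buf, size_t len)
{
    if (send(fd, buf, len, 0) != (ssize_t)len)
        return -1;              /* send failed or was cut short */
    char ack;
    if (recv(fd, &ack, 1, 0) != 1 || ack != 'A')
        return -1;              /* connection lost or unexpected reply */
    return 0;
}

The receiving side would recv() the update and then send() the single 'A' byte back; only then does the sender know the remote application, not just the remote kernel, has the data.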
For UDP, I think it goes without saying that some way of reporting what has or has not been received is a must.

send() blocks until the operating system (kernel) has taken the data and put it into a buffer of outgoing data. It does not wait until the other end has received the data.
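Note also that send() is allowed to accept fewer bytes than you asked for. A common pattern (a sketch with POSIX names) is to loop until the kernel has taken the whole buffer:

#include <sys/socket.h>
#include <sys/types.h>

/* Hand every byte of buf to the kernel, looping across short sends.
 * Returns len on success, -1 on error. */
ssize_t send_all(int fd, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n <= 0)
            return -1;          /* check errno (or WSAGetLastError() on Windows) */
        sent += (size_t)n;
    }
    return (ssize_t)len;
}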

If you're sending via TCP, you get guaranteed delivery¹ and the other end will receive the data in the order it was sent. The data might, however, be coalesced, so what you sent as 10 separate updates could be received as a single large packet (or vice versa -- a single update could be broken up across an arbitrary number of packets). This means, among other things, that any ACK of any data implicitly acknowledges receipt of all previous data.
If you're using UDP, none of that is true -- data can arrive out of order, or be dropped and never delivered at all. If you care about all the data being received, you just about need to build some sort of acknowledgement system of your own on top of UDP itself.
¹ Of course, there's a limit on the guarantee -- if a network cable gets cut (or whatever), packets won't be delivered, but you'll at least get an error telling you that the connection was lost.

If you're using TCP, you get the acknowledgements for free, as that is part of what the protocol does under the hood. But it sounds like for this type of application you would probably want to use UDP. In either case, though, send() will not block until the client has successfully called recv().
If it's crucial that the client receive every message, then use TCP. If it's ok for the client to miss one or more messages, then use UDP.

TCP guarantees delivery at the TCP stack level, below your application. It retries delivery until the receiving peer acknowledges that the data was received, but your application may never know about that fact.
Let's say that you are sending chunks of data and you need to place those chunks somewhere according to some logic. If your application is not prepared to know where each individual chunk has to be placed, receiving it at the TCP level may be useless. The original post was about application-level logic.

Related

Can one send be broken up into multiple recvs?

I'm learning about C socket programming and I came across this piece of code in an online tutorial.
Server.c:
//some server code up here
recv(sock_fd, buf, 2048, 0);
//some server code below
Client.c:
//some client code up here
send(cl_sock_fd, buf, 2048, 0);
//some client code below
Will the server receive all 2048 bytes in a single recv call, or can the send be broken up into multiple receive calls?
TCP is a streaming protocol with no message boundaries. A single send might take multiple recv calls to consume, and multiple send calls could be combined into a single recv call.
You need to call recv in a loop until all data have been received.
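A minimal sketch of such a loop (POSIX names; it assumes you know how many bytes to expect, which in the tutorial's case is 2048):

#include <sys/socket.h>
#include <sys/types.h>

/* Read exactly len bytes, looping across short reads. Returns the
 * number of bytes read (less than len only if the peer closed),
 * or -1 on error. */
ssize_t recv_all(int fd, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, buf + got, len - got, 0);
        if (n == 0)
            break;              /* peer closed the connection */
        if (n < 0)
            return -1;
        got += (size_t)n;
    }
    return (ssize_t)got;
}

The server's single recv(sock_fd, buf, 2048, 0) would then become recv_all(sock_fd, buf, 2048).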
Technically, the data is ultimately handled by the operating system, which programs the physical network interface to send it across a wire, over the air, or however else is applicable. And since TCP/IP doesn't define particulars such as how many packets of which size should compose your data, the operating system is free to decide, which results in your 2048 bytes of data possibly being sent in fragments, over a period of time.
Practically, this means that by calling send you may merely be causing your 2048 bytes of data to be buffered for sending, much like an e-mail in a queue, except that your 2048 bytes aren't even a single piece of anything to the system that sends them -- they're just 2048 more bytes to chop into packets the network will accept, marked with a destination address and port, among other things. The job of TCP is only to make sure they're the same bytes when they arrive, in the same order relative to each other and to other data sent through the connection.
The important thing at the receiving end is that, again, the arriving data is merely queued, and no information is retained about how it was partitioned when it was sent. Everything that was ever sent through the connection is now either part of a consumable stream or has already been consumed and removed from the stream.
For a TCP connection a fitting analogy would be the connection holding an open water keg, which also has a spout (tap) at the bottom. The sender can pour water into the keg (as much as it can contain, anyway) and the receiver can open the spout to drain the water from the keg into say, a cup (which is an analogy to a buffer in an application that reads from a TCP socket). Both sender and receiver can be doing their thing at the same time, or either may be doing so alone. The sender will have to wait (send call will block) if the keg is full, and the receiver will have to wait (recv call will block) if the keg is empty.
Another, shorter analogy: the sender and receiver each sit at their own end of an opaque pipe, with the former pushing stuff in one end and the latter pulling pushed stuff out of the other.

Determine if peer has closed reading end of socket

I have a socket programming situation where the client shuts down the writing end of the socket to let the server know input is finished (via receiving EOF), but keeps the reading end open to read back a result (one line of text). It would be useful for the server to know that the client has successfully read the result and closed the socket (or at least shut down the reading end). Is there a good way to check/wait for such status?
No. All you can know is whether your sends succeeded, and some of them will succeed even after the peer's read shutdown, because of TCP buffering.
This is poor design. If the server needs to know that the client received the data, the client needs to acknowledge it, which means it can't shut down its write end. The client should:
send an in-band termination message, as data.
read and acknowledge all further responses until end of stream occurs.
close the socket.
The server should detect the in-band termination message and:
stop reading requests from the socket
send all outstanding responses and read the acknowledgements
close the socket.
OR, if the objective is only to ensure that client and server end at the same time, each end should shutdown its socket for output and then read input until end of stream occurs, then close the socket. That way the final closes will occur more or less simultaneously on both ends.
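A sketch of that symmetric shutdown sequence, run by both ends once each has sent everything it intends to (POSIX names):

#include <sys/socket.h>
#include <unistd.h>

/* Announce we will send no more, drain whatever the peer still
 * has in flight until end of stream, then close. */
void graceful_close(int fd)
{
    char buf[4096];
    shutdown(fd, SHUT_WR);
    while (recv(fd, buf, sizeof buf, 0) > 0)
        ;                       /* discard remaining input */
    close(fd);
}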
getsockopt with TCP_INFO seems the most obvious choice, but it's not cross-platform.
Here's an example for Linux:
import socket
import struct
import pprint

def tcp_info(s):
    rv = dict(zip("""
        state ca_state retransmits probes backoff options snd_rcv_wscale
        rto ato snd_mss rcv_mss unacked sacked lost retrans fackets
        last_data_sent last_ack_sent last_data_recv last_ack_recv
        pmtu rcv_ssthresh rtt rttvar snd_ssthresh snd_cwnd advmss reordering
        rcv_rtt rcv_space
        total_retrans
        pacing_rate max_pacing_rate bytes_acked bytes_received segs_out segs_in
        notsent_bytes min_rtt data_segs_in data_segs_out""".split(),
        struct.unpack("BBBBBBBIIIIIIIIIIIIIIIIIIIIIIIILLLLIIIIII",
                      s.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 160))))
    wscale = rv.pop("snd_rcv_wscale")
    # bit field layout is up to the compiler
    # FIXME: test the order of nibbles
    rv["snd_wscale"] = wscale >> 4
    rv["rcv_wscale"] = wscale & 0xf
    return rv

for i in range(100):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("localhost", 7878))
    s.recv(10)
    pprint.pprint(tcp_info(s))
I doubt a true cross-platform alternative exists.
Fundamentally there are quite a few states:
you wrote data to the socket, but it was not sent yet
data was sent, but not received
data was sent and lost (detection relies on a timer)
data was received, but not acknowledged yet
acknowledgement not received yet
acknowledgement lost (detection relies on a timer)
data was received by remote host but not read out by application
data was read out by application, but socket still alive
data was read out, and app crashed
data was read out, and app closed the socket
data was read out, and app called shutdown(WR) (almost same as closed)
FIN was not sent by remote yet
FIN was sent by remote but not received yet
FIN was sent and got lost
FIN received by your end
Obviously your OS can distinguish quite a few of these states, but not all of them. I can't think of an API that would be this verbose...
Some systems allow you to query the remaining send buffer space. Perhaps if you did that on a socket that was already shut down, you'd get a neat error?
The good news is that just because a socket is shut down doesn't mean you can't interrogate it. I can get all of TCP_INFO after shutdown, with state=7 (closed); in some cases it reports state=8 (close wait).
http://lxr.free-electrons.com/source/net/ipv4/tcp.c#L1961 has all the gory details of Linux TCP state machine.
TL;DR:
Don't rely on the socket state for this; it will trip you up in many error cases. You need to bake the acknowledgement/receipt facility into your communications protocol. Using the first character of each line for status/ack works really well for text-based protocols.
On many, but not all, Unix-like/POSIXy systems, one can use the TIOCOUTQ (also SIOCOUTQ) ioctl to determine how much data is left in the outgoing buffer.
For TCP sockets, even if the other end has shut down its write side (and therefore will send no more data to this end), all transmissions are acknowledged. The data in the outgoing buffer is only removed when the acknowledgement from the recipient kernel is received. Thus, when there is no more data in the outgoing buffer, we know that the kernel at the other end has received the data.
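As a sketch of what that query looks like on Linux (SIOCOUTQ/TIOCOUTQ is Linux-specific; other systems may spell it differently or lack it entirely):

#include <sys/ioctl.h>
#include <linux/sockios.h>      /* SIOCOUTQ; same value as TIOCOUTQ */

/* Returns the number of bytes still sitting in the socket's outgoing
 * buffer (i.e. not yet acknowledged by the peer's kernel), or -1. */
int bytes_unacked(int fd)
{
    int pending = 0;
    if (ioctl(fd, SIOCOUTQ, &pending) < 0)
        return -1;
    return pending;             /* 0 means the remote kernel has everything */
}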
Unfortunately, this does not mean that the application has received and processed the data. This same limitation applies to all methods that rely on socket state; this is also the reason why fundamentally, the acknowledgement of receipt/acceptance of the final status line must come from the other application, and cannot be automatically detected.
This, in turn, means that neither end can shut down its sending side before the very final receipt/acknowledge message. You cannot rely on TCP's -- or any other protocol's -- automatic socket state management. You must bake the critical receipts/acknowledgements into the stream protocol itself.
In OP's case, the stream protocol seems to be simple line-based text. This is quite useful and easy to parse. One robust way to "extend" such a protocol is to reserve the first character of each line for the status code (or alternatively, reserve certain one-character lines as acknowledgements).
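As a purely hypothetical illustration of that convention (the '+' and '-' codes are made up here, though real protocols such as POP3 use the same idea with +OK/-ERR):

+1764                    first character '+': success, result follows
-bad input               first character '-': error, message follows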
For large in-flight binary protocols (i.e., protocols where the sender and receiver are not really in sync), it is useful to label each data frame with an increasing (cyclic) integer, and to have the other end respond, occasionally, with an update letting the sender know which frames have been completely processed, which ones have been received, and whether additional frames should arrive soon or not very soon. This is very useful for network-based appliances that consume a lot of data, where the data provider wishes to be kept updated on progress and on the desired data rate (think 3D printers, CNC machines, and so on, where the content of the data changes the maximum acceptable data rate dynamically).
Okay, so I recall pulling my hair out trying to solve this very problem back in the late '90s. I finally found an obscure doc that stated that a read call on a disconnected socket will return a 0. I use this fact to this day.
You're probably better off using ZeroMQ. It will send a whole message or no message at all. If you set its send buffer length to 1 (the shortest it will go), you can test whether the send buffer is full. If not, the message was successfully transferred, probably. ZeroMQ is also really nice if you have an unreliable or intermittent network connection as part of your system.
That's still not entirely satisfactory. You're probably even better off implementing your own send-acknowledge mechanism on top of ZeroMQ. That way you have absolute proof that a message was received. You don't have proof that a message was not received (something can go wrong between emitting and receiving the ack, and you cannot solve the Two Generals Problem). But that's the best that can be achieved. What you'll have done then is implement a Communicating Sequential Processes architecture on top of ZeroMQ's Actor Model, which is itself implemented on top of TCP streams. Ultimately it's a bit slower, but your application has more certainty of knowing what's gone on.

Reading all available bytes via socket using blocking I/O

When reading from a socket using read(2) and blocking I/O, when do I know that the other side (the client) has no more data to send? (By "no more data to send" I mean, as an example, that the client is waiting for a response.) At first, I thought that this point is reached when fewer than count bytes are returned by read (as in read(fd, *buf, count)).
But what if the client sends the data fragmented? Reading until read returns 0 would be a solution, but as far as I know 0 is only returned when the client closes the connection; otherwise, read would just block until more data arrives. I thought of using non-blocking I/O and a timeout for select(2), but this does not seem like a tidy solution to me.
Are there any known best practices?
The concept of "the other side has no more data to send", without either a timeout or some semantics in the transmitted data, is quite pointless. Normally, code on the client/server will be able to process data faster than the network can transmit it. So if there's no data in the receive buffer when you're trying to read() it, this just means the network has not yet transmitted everything, but you have no way to tell if the next packet will arrive within a millisecond, a second, or a day. You'd probably consider the first case as "there is more data to send", the third as "no more data to send", and the second depends on your application.
If the other side doesn't close the connection, you probably don't know when it's ready to send the next data packet either.
So unless you have specific semantics and knowledge about what the client sends, using select() and non-blocking I/O is the best you can do.
In specific cases, there might be other ways. For example, if you know the client will send an XML tag, some data, and a closing tag every n seconds, you could start reading n seconds after the last packet you received, then just read on until you receive the closing tag. But as I said, this isn't a general approach, since it requires semantics on the channel.
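A minimal sketch of the select()-with-timeout approach (POSIX names; the 2-second timeout is an arbitrary choice that would depend on your protocol):

#include <sys/select.h>

/* Wait up to 2 seconds for fd to become readable. Returns 1 if data
 * (or end of stream) is available, 0 on timeout, -1 on error. */
int wait_readable(int fd)
{
    fd_set rfds;
    struct timeval tv = { .tv_sec = 2, .tv_usec = 0 };
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    return select(fd + 1, &rfds, NULL, NULL, &tv);
}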
TCP is a byte-stream protocol, not a message protocol. If you want messages you really have to implement them yourself, e.g. with a length-word prefix, lines, XML, etc. You can guess with the FIONREAD option of ioctl(), but guessing is all it is, as you can't know whether the client has paused in the middle of transmission of the message, or whether the network has done so for some reason.
The protocol needs to give you a way to know when the client has finished sending a message.
Common approaches are to send the length of each message before it, or to send a special terminator after each message (similar to the NUL character at the end of strings in C).
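A sketch of the length-prefix approach (the 4-byte big-endian header is an arbitrary choice; recv_all is the read-exactly loop shown earlier on this page):

#include <stdint.h>
#include <sys/types.h>
#include <arpa/inet.h>          /* ntohl */

ssize_t recv_all(int fd, char *buf, size_t len);   /* shown earlier */

/* Read one framed message: a 4-byte big-endian length, then that
 * many bytes of payload. Returns the payload length, or -1. */
ssize_t recv_message(int fd, char *buf, size_t bufsize)
{
    uint32_t netlen;
    if (recv_all(fd, (char *)&netlen, sizeof netlen) != (ssize_t)sizeof netlen)
        return -1;              /* stream ended or error */
    uint32_t len = ntohl(netlen);
    if (len > bufsize)
        return -1;              /* too large for the caller's buffer */
    if (recv_all(fd, buf, len) != (ssize_t)len)
        return -1;
    return (ssize_t)len;
}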

Maximum size of Receive Buffer/Queue in UDP socket

I have developed a UDP server/client application in which the server has one socket on which it continuously receives data from 40 clients.
Now I want to know what happens if all 40 clients send data at the same time.
According to my understanding, the data must be queued in the receive buffer, and each time I call recvfrom() the next datagram queued in the buffer is received, i.e. I shall have to call recvfrom() 40 times to receive the data of all 40 clients even if they all sent simultaneously.
Also, I want to know whether all of the data from the 40 clients will be queued in the receive buffer or whether some of it will be discarded.
Also, what is the maximum buffer size in which data can be queued in the receive buffer, and after what limit is data dropped?
I shall have to call recvfrom() 40 times to receive the data of all 40 clients even if they all sent simultaneously.
In other words, you're asking whether separate UDP datagrams can be combined by the network stack. The answer is no: they'll arrive as separate datagrams, requiring separate calls to recvfrom().
Also, I want to know whether all of the data from the 40 clients will be queued in the receive buffer or whether some of it will be discarded.
UDP does not guarantee delivery. Packets can be dropped anywhere along the route: by the sending host, by any devices along the route and by the receiving host.
Also, what is the maximum buffer size in which data can be queued in the receive buffer, and after what limit is data dropped?
This is OS-dependent and is usually configurable. Bear in mind that packets might get dropped before even reaching the receiving host.
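As a sketch of how to inspect and enlarge the per-socket buffer with the standard SO_RCVBUF option (the kernel is free to clamp or round the value you request; Linux, for instance, doubles it):

#include <sys/socket.h>

/* Request a bigger receive buffer, then read back the size the OS
 * actually granted. Returns that size, or -1 on error. */
int grow_rcvbuf(int fd, int bytes)
{
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof bytes) < 0)
        return -1;
    socklen_t len = sizeof bytes;
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, &len) < 0)
        return -1;
    return bytes;
}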
Indeed, it all depends on the size of your OS socket buffers. In Linux this is fairly easy to configure; all you probably need to do is look up how to change it for your Windows system.

What is happening when a TCP sequence number arrives that is not what is expected?

I am writing a program that uses libpcap to capture packets and reassemble a TCP stream. My program simply monitors the traffic and so I have no control over the reception and transmittal of packets. My program disregards all non TCP/IP traffic.
I calculate the next expected sequence number from the ISN and then from the successive SEQ numbers. I have it set up so that every TCP connection is uniquely identified by a tuple made up of the source IP, source port, dest IP, and dest port. Everything goes swimmingly until I receive a packet whose sequence number differs from what I am expecting. I have uploaded screenshots to help illustrate what I am describing here.
My questions are:
1. Where is the data that was in the "lost" packet?
2. How does the SEQ number order recover from this situation?
3. What can I do to handle these occurrences?
Please remember, however, that I am not writing a program that adheres to TCP. I am writing a program that passively monitors network traffic for TCP streams and attempts to save the raw data to disk, and I am confused as to why the situation described above happens and how I can program to handle it.
Thank you
Where is the data that was in the "lost" packet?
It got dropped by someone
It got lost on the way (wrong detour) and will arrive later
How does the SEQ number order recover from this situation?
The receiver notices the segment is out of sequence and doesn't send it to the application, thereby fulfilling its contract: in-order reliable byte stream. Now, what actually happens to get the missing piece is quite intricate and varies from stack to stack. In a nutshell the stack waits for the missing piece to arrive.
The receiver can throw away out-of-sequence segments or it can queue them in a reassembly queue
The receiver can wait for the missing segment to arrive or it can immediately send the ACK it already sent before. Duplicate ACKs will alert the peer something is wrong (look for Fast Retransmit)
When sending acknowledgments the TCP can inform the peer some segments arrived successfully - they're just out of sequence (SACK)
What can I do to handle these occurrences?
You can't do anything since you're only monitoring. You could probably get more insight into what is really happening if you also captured the response traffic.
Depending on the window size of the current TCP connection, if the new packet fits within the receive window (a multi-packet buffer), it will be entered into the receive queue (and reordered for in-order delivery to protocol clients).
If the sequence number is larger than the maximum for the current window, the packet gets rejected.
See also section 4.4.2 (INPUT PACKET HANDLER) in RFC 675
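For the passive-monitoring case, a common approach is to park out-of-order segments per connection and only write data out once the sequence-number gap has been filled. A rough sketch (the fixed-size table, the write_to_disk() sink, and all names here are made up for illustration; 32-bit sequence wraparound is ignored for brevity):

#include <stdint.h>
#include <string.h>

#define MAX_PARKED 64

struct parked_seg { uint32_t seq; uint32_t len; char data[1500]; };

struct stream {
    uint32_t next_seq;                   /* next byte offset we expect */
    struct parked_seg parked[MAX_PARKED];
    int nparked;
};

void write_to_disk(const char *data, uint32_t len);  /* assumed elsewhere */

void on_segment(struct stream *st, uint32_t seq, const char *data, uint32_t len)
{
    if (seq > st->next_seq) {            /* gap: park the segment for later */
        if (st->nparked < MAX_PARKED && len <= sizeof st->parked[0].data) {
            struct parked_seg *p = &st->parked[st->nparked++];
            p->seq = seq;
            p->len = len;
            memcpy(p->data, data, len);
        }
        return;
    }
    if (seq + len <= st->next_seq)
        return;                          /* pure retransmission: ignore */
    /* write only the new bytes, skipping any overlap we already have */
    write_to_disk(data + (st->next_seq - seq), len - (st->next_seq - seq));
    st->next_seq = seq + len;
    /* drain any parked segments the new data has made contiguous */
    for (int i = 0; i < st->nparked; i++)
        if (st->parked[i].seq <= st->next_seq) {
            struct parked_seg p = st->parked[i];
            st->parked[i] = st->parked[--st->nparked];
            on_segment(st, p.seq, p.data, p.len);
            i = -1;                      /* restart the scan */
        }
}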
