I'm learning about C socket programming and I came across this piece of code in an online tutorial.
Server.c:
//some server code up here
recv(sock_fd, buf, 2048, 0);
//some server code below
Client.c:
//some client code up here
send(cl_sock_fd, buf, 2048, 0);
//some client code below
Will the server receive all 2048 bytes in a single recv call, or can the send be broken up across multiple recv calls?
TCP is a stream protocol, with no message or packet boundaries. A single send might need multiple recv calls to read, or multiple send calls could be combined into a single recv call.
You need to call recv in a loop until all the data has been received.
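A minimal sketch of such a loop (error handling trimmed; sock_fd, buf, and the 2048-byte length are taken from the question):

#include <sys/types.h>
#include <sys/socket.h>

/* Read exactly len bytes into buf, looping until the stream has
   delivered them all. Returns 0 on success, -1 on error or if the
   peer closes the connection early. */
ssize_t recv_all(int fd, char *buf, size_t len)
{
    size_t total = 0;
    while (total < len) {
        ssize_t n = recv(fd, buf + total, len - total, 0);
        if (n <= 0)
            return -1;  /* n < 0: error (check errno); n == 0: EOF */
        total += (size_t)n;
    }
    return 0;
}

Called as recv_all(sock_fd, buf, 2048), this returns only once all 2048 bytes are in buf, however many segments they arrived in.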
Technically, the data is ultimately handled by the operating system, which programs the physical network interface to send it across a wire, over the air, or however else is applicable. And since TCP/IP doesn't define particulars such as how many packets should compose your data, or of what size, the operating system is free to decide, which means your 2048 bytes of data may be sent in fragments, over a period of time.
Practically, this means that by calling send you may merely be causing your 2048 bytes of data to be buffered for sending, much like an e-mail in a queue, except that your 2048 bytes aren't even a single piece of anything to the system that sends them -- they're just 2048 more bytes to chop into packets the network will accept, marked with a destination address and port, among other things. The job of TCP is only to make sure they're the same bytes when they arrive, in the same order relative to each other and to other data sent through the connection.
The important thing at the receiving end is that, again, the arriving data is merely queued, and no information is retained about how it was partitioned when it was submitted for sending. Everything that was ever sent through the connection is now either part of a consumable stream or has already been consumed and removed from the stream.
For a TCP connection, a fitting analogy is an open water keg with a spout (tap) at the bottom. The sender can pour water into the keg (as much as it can hold, anyway) and the receiver can open the spout to drain the water from the keg into, say, a cup (an analogy for the buffer of an application reading from a TCP socket). Both sender and receiver can be doing their thing at the same time, or either may be doing so alone. The sender will have to wait (the send call will block) if the keg is full, and the receiver will have to wait (the recv call will block) if the keg is empty.
Another, shorter analogy is that the sender and receiver each sit at their own end of an opaque pipe, with the former pushing stuff in at one end and the latter pulling it out of the other.
Related
I'm using a Raspberry Pi B+ and building a TCP server/client connection in C.
I have a few questions about the client side.
How long does Linux queue packets for the client? Once a packet has been received by Linux, what happens if the client is not ready to process it, or if the select/epoll function inside the loop sleeps for a minute? If there is a timeout, is there a way to adjust the timeout from code or a script?
What is the internal path inside Linux when it receives a packet? (i.e., Ethernet port -> kernel -> RAM -> application?)
The Raspberry Pi (with Linux), and any known Linux (or non-Linux) TCP/IP stack, works roughly like this:
You have a kernel buffer in which the kernel stores all the data arriving from the other side that has not yet been read by the user process. The kernel normally acknowledges all of this data to the other side (the acknowledgement states the last byte received and stored in that buffer). The sending side also has a buffer, where it stores all the sent data that has not yet been acknowledged by the receiver (this data must be resent in case of a timeout), plus data that is not yet inside the window admitted by the receiver. If this buffer fills, the sender is blocked, or a partial write is reported to the user process (depending on options).
That kernel buffer (the read buffer) lets the kernel hold the data for the user process while the process is not reading it. If the user process cannot read it, it stays there until the process does a read() system call.
The amount of buffering the kernel can still accept (known as the window size) is sent to the other end with each acknowledgement, so the sender knows the maximum amount of data it is authorized to send. When the buffer is full, the window size drops to zero and the receiver announces it cannot receive more data. This allows a slow receiver to stop a fast sender from filling the network with data that cannot be delivered.
From then on (the zero-window situation), the sender periodically sends a segment with no data at all (or with just one byte of data, depending on the implementation) to check whether some window has opened to let it send more data. The acknowledgement of that probe allows it to start communicating again.
Everything is stopped now, but no timeout happens. Both TCPs continue talking this way until some window becomes available (meaning the receiver has read() part of the buffer).
This situation can be maintained for days without any problem: the reading process is busy and cannot read the data, and the writing process stays blocked in the write call until the kernel on the sending side has buffer space to accommodate the data to be written.
When the reading process reads the data:
An ACK of the last byte received is sent, announcing a new window size larger than zero (larger by the amount freed by the reading process).
The sender receives this acknowledgement and sends that amount of data from its buffer. If this frees enough room to accommodate the data the writer has requested to write, the writer is awakened and allowed to continue sending data.
Again, timeouts normally only occur if data is lost in transit.
But...
If you are behind a NAT device, your connection state can be lost simply from not exercising the connection (the NAT device maintains a cache of local address/port pairs with connections to the outside). On the next data transfer coming from the remote device, the NAT device may (or may not) send an RST, because the packet refers to a connection it no longer knows about (the cache entry expired).
Or, if the next packet comes from the internal device, the connection can be re-cached and continue. What happens depends on which side is the first to send a packet.
Nothing specifies that an implementation must provide a timeout for data waiting to be sent, but some implementations do, aborting the connection with an error if some data remains unacknowledged for a long time. TCP itself specifies no timeout in this case, so it is the process's responsibility to cope with it.
TCP is specified in RFC 793, which must be obeyed by all implementations if they want communications to succeed. You can read it if you like; I think you'll get a better explanation there than the one I can give you here.
So, to answer your first question: the kernel will store the data in its buffer for as long as your process wants to wait for it. By default you just call write() on a socket, and the kernel keeps trying for as long as you (the user) don't decide to stop the process and abort the operation; in that case the kernel will probably close or reset the connection. The resources are tied to the life of the process, so as long as the process is alive and holding the connection, the kernel will wait for it.
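To the timeout part of the question: TCP itself will wait indefinitely here, but on Linux you can put a per-socket timeout on blocking reads and writes with the SO_RCVTIMEO and SO_SNDTIMEO socket options. A minimal sketch (the one-minute value mirrors the sleep mentioned in the question; fd is assumed to be the connected socket):

#include <sys/socket.h>
#include <sys/time.h>

/* Make blocking recv()/send() on fd give up after 60 seconds;
   they then fail with errno set to EAGAIN/EWOULDBLOCK. */
struct timeval tv = { .tv_sec = 60, .tv_usec = 0 };
setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);
setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof tv);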
I have a socket programming situation where the client shuts down the writing end of the socket to let the server know input is finished (via receiving EOF), but keeps the reading end open to read back a result (one line of text). It would be useful for the server to know that the client has successfully read the result and closed the socket (or at least shut down the reading end). Is there a good way to check/wait for such status?
No. All you can know is whether your sends succeeded, and some of them will succeed even after the peer's read shutdown, because of TCP buffering.
This is poor design. If the server needs to know that the client received the data, the client needs to acknowledge it, which means it can't shut down its write end. The client should:
send an in-band termination message, as data.
read and acknowledge all further responses until end of stream occurs.
close the socket.
The server should detect the in-band termination message and:
stop reading requests from the socket
send all outstanding responses and read the acknowledgements
close the socket.
OR, if the objective is only to ensure that client and server finish at the same time, each end should shut down its socket for output and then read input until end of stream occurs, then close the socket. That way the final closes will occur more or less simultaneously on both ends. A sketch of that follows.
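As run by either end (fd is the connected socket; error checks omitted):

#include <sys/socket.h>
#include <unistd.h>

/* Announce "no more data from me", then drain the incoming side
   until the peer does the same; only then release the socket. */
shutdown(fd, SHUT_WR);               /* our FIN goes out */
char tmp[512];
while (recv(fd, tmp, sizeof tmp, 0) > 0)
    ;                                /* discard (or process) remaining data */
close(fd);                           /* peer has finished too */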
getsockopt with TCP_INFO seems the most obvious choice, but it's not cross-platform.
Here's an example for Linux:
import socket
import struct
import pprint

def tcp_info(s):
    rv = dict(zip("""
        state ca_state retransmits probes backoff options snd_rcv_wscale
        rto ato snd_mss rcv_mss unacked sacked lost retrans fackets
        last_data_sent last_ack_sent last_data_recv last_ack_recv
        pmtu rcv_ssthresh rtt rttvar snd_ssthresh snd_cwnd advmss reordering
        rcv_rtt rcv_space
        total_retrans
        pacing_rate max_pacing_rate bytes_acked bytes_received segs_out segs_in
        notsent_bytes min_rtt data_segs_in data_segs_out""".split(),
        struct.unpack("BBBBBBBIIIIIIIIIIIIIIIIIIIIIIIILLLLIIIIII",
                      s.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 160))))
    wscale = rv.pop("snd_rcv_wscale")
    # bit field layout is up to the compiler
    # FIXME: test the order of nibbles
    rv["snd_wscale"] = wscale >> 4
    rv["rcv_wscale"] = wscale & 0xf
    return rv

for i in range(100):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("localhost", 7878))
    s.recv(10)
    pprint.pprint(tcp_info(s))
I doubt a true cross-platform alternative exists.
Fundamentally there are quite a few states:
you wrote data to socket, but it was not sent yet
data was sent, but not received
data was sent and lost (relies on timer)
data was received, but not acknowledged yet
acknowledgement not received yet
acknowledgement lost (relies on timer)
data was received by remote host but not read out by application
data was read out by application, but socket still alive
data was read out, and app crashed
data was read out, and app closed the socket
data was read out, and app called shutdown(WR) (almost same as closed)
FIN was not sent by remote yet
FIN was sent by remote but not received yet
FIN was sent and got lost
FIN received by your end
Obviously your OS can distinguish quite a few of these states, but not all of them. I can't think of an API that would be this verbose...
Some systems allow you to query the remaining send buffer space. Perhaps if you did that and the socket was already shut down, you'd get a neat error?
The good news is that just because a socket is shut down doesn't mean you can't interrogate it. I can get all of TCP_INFO after shutdown, with state=7 (closed). In some cases it reports state=8 (close wait).
http://lxr.free-electrons.com/source/net/ipv4/tcp.c#L1961 has all the gory details of Linux TCP state machine.
TL;DR:
Don't rely on the socket state for this; it can cut you in many error cases. You need to bake the acknowledgement/receipt facility into your communications protocol. Using the first character of each line for status/ack works really well for text-based protocols.
On many, but not all, Unix-like/POSIXy systems, one can use the TIOCOUTQ (also SIOCOUTQ) ioctl to determine how much data is left in the outgoing buffer.
For TCP sockets, even if the other end has shut down its write side (and therefore will send no more data to this end), all transmissions are acknowledged. The data in the outgoing buffer is only removed when the acknowledgement from the recipient kernel is received. Thus, when there is no more data in the outgoing buffer, we know that the kernel at the other end has received the data.
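A Linux sketch of that check (SIOCOUTQ comes from <linux/sockios.h> and is the same value as TIOCOUTQ; the 10 ms poll interval is an arbitrary choice):

#include <sys/ioctl.h>
#include <linux/sockios.h>  /* SIOCOUTQ */
#include <unistd.h>

/* Wait until the local kernel has nothing left to (re)transmit,
   i.e. the remote kernel has acknowledged everything we sent. */
int pending;
do {
    pending = 0;
    if (ioctl(fd, SIOCOUTQ, &pending) < 0)
        break;          /* not supported here, or socket is gone */
    if (pending > 0)
        usleep(10000);  /* poll every 10 ms */
} while (pending > 0);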
Unfortunately, this does not mean that the application has received and processed the data. This same limitation applies to all methods that rely on socket state; this is also the reason why fundamentally, the acknowledgement of receipt/acceptance of the final status line must come from the other application, and cannot be automatically detected.
This, in turn, means that neither end can shut down its sending side before the very final receipt/acknowledgement message. You cannot rely on TCP's -- or any other protocol's -- automatic socket state management. You must bake the critical receipts/acknowledgements into the stream protocol itself.
In OP's case, the stream protocol seems to be simple line-based text. This is quite useful and easy to parse. One robust way to "extend" such a protocol is to reserve the first character of each line for the status code (or alternatively, reserve certain one-character lines as acknowledgements).
For large in-flight binary protocols (i.e., protocols where the sender and receiver are not really in sync), it is useful to label each data frame with an increasing (cyclic) integer, and have the other end respond, occasionally, with an update letting the sender know which frames have been completely processed, which have been received, and whether additional frames should arrive soon or not very soon. This is very useful for network-based appliances that consume a lot of data, where the data provider wishes to be kept updated on the progress and the desired data rate (think 3D printers, CNC machines, and so on, where the content of the data changes the maximum acceptable data rate dynamically).
Okay, so I recall pulling my hair out trying to solve this very problem back in the late '90s. I finally found an obscure doc stating that a read call on a disconnected socket returns 0. I use this fact to this day.
You're probably better off using ZeroMQ. It will send a whole message or no message at all. If you set its send buffer length to 1 (the shortest it will go), you can test whether the send buffer is full; if not, the message was probably transferred successfully. ZeroMQ is also really nice if you have an unreliable or intermittent network connection as part of your system.
That's still not entirely satisfactory. You're probably even better off implementing your own send-acknowledge mechanism on top of ZeroMQ. That way you have absolute proof that a message was received. You don't have proof that a message was not received (something can go wrong between emitting and receiving the ACK, and you cannot solve the Two Generals Problem), but that's the best that can be achieved. What you'll have done then is implement a Communicating Sequential Processes architecture on top of ZeroMQ's Actor Model, which is itself implemented on top of TCP streams. Ultimately it's a bit slower, but your application has more certainty about what's gone on.
I'm attempting to write a simple server using C system calls that takes unknown byte streams from unknown clients and executes specific actions depending on client input. For example, the client will send a command "multiply 2 2" and the server will multiply the numbers and return the result.
In order to avoid errors where the server reads before the client has written, I have a blocking recv() call to wait for any data, using MSG_PEEK. When recv detects data to be read, I move on to non-blocking recv()s that read the stream byte by byte.
Everything works except in the corner case where the client sends no data (i.e. write(socket, "", 0); ). I was wondering how exactly I would detect that a message with no data is sent. In this case, recv() blocks forever.
Also, this post pretty much sums up my problem, but it doesn't suggest a way to detect a size-0 packet:
What value will recv() return if it receives a valid TCP packet with payload sized 0
When using TCP at the send/recv level you are not privy to the packet traffic that goes into making the stream. When you send a nonzero number of bytes over a TCP stream the sequence number increases by the number of bytes. That's how both sides know where the other is in terms of successful exchange of data. Sending multiple packets with the same sequence number doesn't mean that the client did anything (such as your write(s, "", 0) example), it just means that the client wants to communicate some other piece of information (for example, an ACK of data flowing the other way). You can't directly see things like retransmits, duplicate ACKs, or other anomalies like that when operating at the stream level.
The answer you linked says much the same thing.
Everything works except in the corner case where the client sends no data (i.e. write(socket, "", 0); ).
write(socket, "", 0) isn't even a send in the first place. It's just a local API call that does nothing on the network.
I was wondering how exactly I would detect that a message with no data is sent.
No message is sent, so there is nothing to detect.
In this case, recv() blocks forever.
I agree.
I have a blocking recv() call to wait for any data using MSG_PEEK. When recv detects data to be read, I move onto non-blocking recv()'s that read the stream byte by byte.
Instead of using recv(MSG_PEEK), you should be using select(), poll(), or epoll() to detect when data arrives, then call recv() to read it.
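A sketch of that pattern with select() (sock is assumed to be the connected socket; the wait here is indefinite, but a struct timeval can be passed as the last argument instead of NULL):

#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>

fd_set rfds;
FD_ZERO(&rfds);
FD_SET(sock, &rfds);

/* Sleep until the socket is readable: either data has arrived or
   the peer has closed (recv will then return 0). */
if (select(sock + 1, &rfds, NULL, NULL, NULL) > 0) {
    char buf[2048];
    ssize_t n = recv(sock, buf, sizeof buf, 0);
    if (n == 0) {
        /* orderly shutdown by the peer */
    } else if (n > 0) {
        /* process n bytes */
    }
}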
I have an application that sends data point-to-point from a sender to a receiver over a link that can operate in simplex (one-way transmission) or duplex (two-way) modes. In simplex mode the application sends data using UDP, and in duplex mode it uses TCP. Since a write on a TCP socket may block, we are using non-blocking I/O (ioctl with FIONBIO; O_NONBLOCK and fcntl are not supported on this distribution) and the select() system call to determine when data can be written. Non-blocking I/O is used so that we can abort out of the send early after a timeout, should network conditions deteriorate. I'd like to use the same basic code to do the sending but switch between TCP/UDP at a higher level of abstraction. This works great for TCP.
However, I am concerned about how non-blocking I/O works for a UDP socket. I may be reading the man pages incorrectly, but since write() may return having sent fewer bytes than requested, does that mean that the client will receive fewer bytes in its datagram? To send a given buffer of data, multiple writes may then be needed, which may well happen since I am using non-blocking I/O. I am concerned that this will translate into multiple UDP datagrams received by the client.
I am fairly new to socket programming, so please forgive me if I have some misconceptions here. Thank you.
Assuming a correct (not broken) UDP implementation, then each send/sendmsg/sendto will correspond to exactly one whole datagram sent and each recv/recvmsg/recvfrom will correspond to exactly one whole datagram received.
If a UDP message cannot be transmitted in its entirety, you should receive an EMSGSIZE error. A sent message might still fail due to size at some point in the network, in which case it will simply not arrive. But it will not be delivered in pieces (unless the IP stack is severely buggy).
A good rule of thumb is to keep your UDP payload size to at most 1400 bytes. That is very approximate and leaves a lot of room for various forms of tunneling so as to avoid fragmentation.
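A sketch of the one-call-one-datagram behaviour (dest is assumed to be an already-filled-in struct sockaddr_in; the 1400-byte size follows the rule of thumb above):

#include <sys/socket.h>
#include <netinet/in.h>
#include <errno.h>

/* Each successful sendto() emits exactly one datagram; the peer's
   recvfrom() will return exactly this payload, or nothing at all. */
char payload[1400];
ssize_t n = sendto(fd, payload, sizeof payload, 0,
                   (struct sockaddr *)&dest, sizeof dest);
if (n < 0 && errno == EMSGSIZE) {
    /* datagram too large for the local stack: shrink and retry */
}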
I have a client which sends data to a server with 2 consecutive send calls:
send(_sockfd,msg,150,0);
send(_sockfd,msg,150,0);
and the server receives it once it detects the first send (let's say I'm using select):
recv(_sockfd,buf,700,0);
Note that the buffer I'm receiving into is much bigger.
My question is: is there any chance that buf will contain both msgs? Or do I need 2 recv() calls to get both msgs?
Thank you!
TCP is a stream-oriented protocol, not a message/record/chunk-oriented one. All that is guaranteed is that if you send a stream of bytes, they will get to the other side in the order you sent them. There is no provision made by RFC 793 or any other document about the number of segments/packets involved.
This is in stark contrast with UDP. As @R.. correctly said, in UDP an entire message is sent in one operation (notice the change in terminology: message). Try to send a giant message (several times larger than the MTU) with TCP? That's fine; it will split it for you.
When running on local networks or on localhost you will certainly notice that (generally) one send == one recv. Don't assume that; there are factors that can change it dramatically. Among them:
Nagle
Underlying MTU
Memory usage (possibly)
Timers
Many others
Of course, not having a correspondence between a send and a recv is a nuisance, and you can't always fall back on UDP. That is one of the reasons for SCTP. SCTP is a really, really interesting protocol, and it is message-oriented.
Back to TCP, this is a common nuisance. An equally common solution is this:
Establish that every message begins with a fixed-length header (say, 32 bytes)
Those 32 bytes contain (possibly among other things) the size of the message that follows
When you read any amount of data from the socket, append it to a buffer specific to that connection. Once you have the 32-byte header, read out the length and keep reading until you have the full message (a sketch follows this list).
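A minimal sketch of the reading side, narrowing the 32-byte header above to just a 4-byte big-endian length field for brevity, and reusing the recv_all() helper sketched at the top of this page:

#include <stdint.h>
#include <sys/types.h>
#include <arpa/inet.h>  /* ntohl */

/* Read one length-prefixed message: the length header first, then
   exactly that many bytes of body. Returns the body length, or -1
   on error, EOF, or an oversized frame. */
ssize_t read_message(int fd, char *body, size_t max)
{
    uint32_t hdr;
    if (recv_all(fd, (char *)&hdr, sizeof hdr) < 0)
        return -1;
    uint32_t len = ntohl(hdr);
    if (len > max)
        return -1;      /* refuse frames larger than our buffer */
    if (recv_all(fd, body, len) < 0)
        return -1;
    return (ssize_t)len;
}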
It is really important to notice that there are really no messages on the wire, only bytes. Once you understand that, you will have made a giant leap towards writing network applications.
The answer depends on the socket type, but in general, yes, it's possible. For TCP it's the norm; for UDP I believe it cannot happen, but I'm not an expert on network protocols/programming.
Yes, it can and often does. There is no way of matching up send and recv calls when using TCP/IP. Your program logic should test the return values of both send and recv calls in a loop that terminates when everything has been sent or received.
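A matching send-side loop, sketched along the same lines as the recv loop near the top of this page:

#include <sys/types.h>
#include <sys/socket.h>

/* Keep calling send() until every byte of buf has been handed to
   the kernel; a short return just means its buffer was momentarily
   full. Returns 0 on success, -1 on error. */
ssize_t send_all(int fd, const char *buf, size_t len)
{
    size_t total = 0;
    while (total < len) {
        ssize_t n = send(fd, buf + total, len - total, 0);
        if (n < 0)
            return -1;  /* check errno; EINTR may warrant a retry */
        total += (size_t)n;
    }
    return 0;
}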