Stop-and-Wait Protocol - How does it handle last ACK dropped? - c

I am implementing a client/server echo program using the Stop-and-Wait protocol. Part of the implementation is to randomly introduce drops from the server. I am actually randomly dropping ACKs as well as FRAMEs. A side effect of this is the introduction of a corner case:
If the last ACK from the server is dropped, both client and server end up in a "send" mode. This happens because the server doesn't know the last ACK was dropped, and from its perspective, everything worked and it now should echo back everything the client sent to it.
The client, on the other hand, doesn't know if the last FRAME was dropped or if just the ACK was dropped, so it tries to resend the last FRAME.
I could see this being solved by the use of a FIN or something -- basically an extra ACK going the opposite direction when both sides think transmission is complete. However, I want to conform to the expected approach for Stop-and-Wait.
How should Stop-and-Wait handle this case?

One way to handle this situation is to have a one-bit sequence number included in every frame. Every time you send a frame and receive an ACK, you flip the bit for the next frame. If you send a frame, don't receive an ACK, and have to retransmit the frame, you resend it with the same sequence bit.
At the receiver, if you receive two (or more) frames in a row with the same sequence bit, you know it's a retransmission, so you can resend the ACK but ignore the frame, since you've already acted on it.
Don't bother trying to acknowledge acknowledgements. You just run into the same problem where the network might drop the ACK for the ACK.
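For illustration, here is a minimal sketch of that alternating-bit logic in C. The frame/ACK I/O helpers declared at the top are hypothetical stand-ins for your own socket code, not part of any library:

#include <stddef.h>

int    send_frame(int seq, const char *data, size_t len);    /* hypothetical */
int    recv_ack_timeout(int *acked_seq);  /* returns 0 on timeout (hypothetical) */
size_t recv_frame(int *seq, char *buf, size_t buflen);        /* hypothetical */
void   send_ack(int seq);                                     /* hypothetical */
void   deliver(const char *data, size_t len);                 /* hypothetical */

/* Sender: keep retransmitting with the same bit until the matching ACK
 * arrives, then flip the bit for the next frame. */
void sw_send(const char *data, size_t len)
{
    static int seq = 0;                 /* one-bit sequence number */
    int acked;

    do {
        send_frame(seq, data, len);     /* (re)transmit with the current bit */
    } while (!recv_ack_timeout(&acked) || acked != seq);

    seq ^= 1;                           /* next frame uses the other bit */
}

/* Receiver: always (re)send the ACK, but only act on a frame whose bit is
 * new; a repeated bit means a retransmission of a frame already processed. */
void sw_receive_loop(void)
{
    int expected = 0;
    char buf[1024];

    for (;;) {
        int seq;
        size_t len = recv_frame(&seq, buf, sizeof buf);
        send_ack(seq);                  /* re-ACK even duplicates: the old ACK may have been lost */
        if (seq == expected) {
            deliver(buf, len);          /* new data: act on it exactly once */
            expected ^= 1;
        }
    }
}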

Related

Determine if peer has closed reading end of socket

I have a socket programming situation where the client shuts down the writing end of the socket to let the server know input is finished (via receiving EOF), but keeps the reading end open to read back a result (one line of text). It would be useful for the server to know that the client has successfully read the result and closed the socket (or at least shut down the reading end). Is there a good way to check/wait for such status?
No. All you can know is whether your sends succeeded, and some of them will succeed even after the peer's read shutdown, because of TCP buffering.
This is poor design. If the server needs to know that the client received the data, the client needs to acknowledge it, which means it can't shutdown its write end. The client should:
send an in-band termination message, as data.
read and acknowledge all further responses until end of stream occurs.
close the socket.
The server should detect the in-band termination message and:
stop reading requests from the socket
send all outstanding responses and read the acknowledgements
close the socket.
Or, if the objective is only to ensure that the client and server end at the same time, each end should shut down its socket for output, read input until end of stream occurs, and then close the socket. That way the final closes will occur more or less simultaneously on both ends.
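A sketch of that "shut down output, then drain input" pattern for one end of a connected TCP socket might look like this:

#include <unistd.h>
#include <sys/socket.h>

void finish_and_close(int fd)
{
    char buf[4096];

    shutdown(fd, SHUT_WR);                  /* send FIN: "no more data from me" */
    while (read(fd, buf, sizeof buf) > 0)   /* drain until the peer's end of stream */
        ;                                   /* (process or discard as appropriate) */
    close(fd);                              /* both ends reach close at about the same time */
}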
getsockopt with TCP_INFO seems the most obvious choice, but it's not cross-platform.
Here's an example for Linux:
import socket
import time
import struct
import pprint
def tcp_info(s):
    rv = dict(zip("""
        state ca_state retransmits probes backoff options snd_rcv_wscale
        rto ato snd_mss rcv_mss unacked sacked lost retrans fackets
        last_data_sent last_ack_sent last_data_recv last_ack_recv
        pmtu rcv_ssthresh rtt rttvar snd_ssthresh snd_cwnd advmss reordering
        rcv_rtt rcv_space
        total_retrans
        pacing_rate max_pacing_rate bytes_acked bytes_received segs_out segs_in
        notsent_bytes min_rtt data_segs_in data_segs_out""".split(),
        struct.unpack("BBBBBBBIIIIIIIIIIIIIIIIIIIIIIIILLLLIIIIII",
                      s.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 160))))
    wscale = rv.pop("snd_rcv_wscale")
    # bit field layout is up to compiler
    # FIXME test the order of nibbles
    rv["snd_wscale"] = wscale >> 4
    rv["rcv_wscale"] = wscale & 0xf
    return rv

for i in range(100):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("localhost", 7878))
    s.recv(10)
    pprint.pprint(tcp_info(s))
I doubt a true cross-platform alternative exists.
Fundamentally there are quite a few states:
you wrote data to socket, but it was not sent yet
data was sent, but not received
data was sent and lost (relies on timer)
data was received, but not acknowledged yet
acknowledgement not received yet
acknowledgement lost (relies on timer)
data was received by remote host but not read out by application
data was read out by application, but socket still alive
data was read out, and app crashed
data was read out, and app closed the socket
data was read out, and app called shutdown(WR) (almost same as closed)
FIN was not sent by remote yet
FIN was sent by remote but not received yet
FIN was sent and got lost
FIN received by your end
Obviously your OS can distinguish quite a few of these states, but not all of them. I can't think of an API that would be this verbose...
Some systems allow you to query the remaining send buffer space. Perhaps if you did that, and the socket was already shut down, you'd get a useful error?
The good news is that just because a socket is shut down doesn't mean you can't interrogate it. I can still get all of TCP_INFO after shutdown, with state=7 (closed); in some cases it reports state=8 (close wait).
http://lxr.free-electrons.com/source/net/ipv4/tcp.c#L1961 has all the gory details of Linux TCP state machine.
TL;DR:
Don't rely on the socket state for this; it can fail you in many error cases. You need to bake the acknowledgement/receipt facility into your communications protocol. Using the first character of each line for status/ack works really well for text-based protocols.
On many, but not all, Unix-like/POSIXy systems, one can use the TIOCOUTQ (also SIOCOUTQ) ioctl to determine how much data is left in the outgoing buffer.
For TCP sockets, even if the other end has shut down its write side (and therefore will send no more data to this end), all transmissions are acknowledged. The data in the outgoing buffer is only removed when the acknowledgement from the recipient kernel is received. Thus, when there is no more data in the outgoing buffer, we know that the kernel at the other end has received the data.
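For illustration, a sketch of using TIOCOUTQ that way: poll until the kernel's outgoing buffer for the socket is empty, i.e. the peer's kernel has acknowledged everything we sent. This is Linux-flavoured; the header and ioctl name differ on other systems, and the 10 ms poll interval is an arbitrary choice.

#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>     /* TIOCOUTQ on Linux; SIOCOUTQ is the same value */

int wait_until_acked(int sockfd)
{
    int unsent;

    for (;;) {
        if (ioctl(sockfd, TIOCOUTQ, &unsent) == -1) {
            perror("ioctl(TIOCOUTQ)");
            return -1;
        }
        if (unsent == 0)
            return 0;      /* the peer's kernel has ACKed all data we sent */
        usleep(10000);     /* still un-ACKed bytes; check again in 10 ms */
    }
}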
Unfortunately, this does not mean that the application has received and processed the data. This same limitation applies to all methods that rely on socket state; this is also the reason why fundamentally, the acknowledgement of receipt/acceptance of the final status line must come from the other application, and cannot be automatically detected.
This, in turn, means that neither end can shut down their sending sides before the very final receipt/acknowledge message. You cannot rely on TCP -- or any other protocols' -- automatic socket state management. You must bake in the critical receipts/acknowledgements into the stream protocol itself.
In OP's case, the stream protocol seems to be simple line-based text. This is quite useful and easy to parse. One robust way to "extend" such a protocol is to reserve the first character of each line for the status code (or alternatively, reserve certain one-character lines as acknowledgements).
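As a small illustration of that convention, the first character of each line can carry the status and the rest the message. The particular characters ('+', '-', '=') below are arbitrary choices for this example, not a standard:

/* Classify a received line by its status character; *payload points at the
 * rest of the line. */
int parse_status_line(const char *line, const char **payload)
{
    *payload = line + 1;           /* everything after the status character */
    switch (line[0]) {
    case '+': return 0;            /* success / normal data line */
    case '-': return -1;           /* failure reported by the peer */
    case '=': return 1;            /* final receipt/acknowledgement line */
    default:  return -2;           /* malformed line */
    }
}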
For large in-flight binary protocols (i.e., protocols where the sender and receiver are not really in sync), it is useful to label each data frame with an increasing (cyclic) integer, and have the other end respond, occasionally, with an update to let the sender know which frames have been completely processed, and which ones received, and whether additional frames should arrive soon/not-very-soon. This is very useful for network-based appliances that consume a lot of data, with the data provider wishing to be kept updated on the progress and desired data rate (think 3D printers, CNC machines, and so on, where the contents of the data changes the maximum acceptable data rate dynamically).
Okay, so I recall pulling my hair out trying to solve this very problem back in the late '90s. I finally found an obscure doc stating that a read call on a disconnected socket will return 0. I use this fact to this day.
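That fact in code, for reference: on a connected TCP socket, recv() returning 0 means the peer has performed an orderly close (or shut down its writing side); errors are reported as -1 instead. A sketch:

#include <sys/types.h>
#include <sys/socket.h>

/* Returns 1 if the peer closed, 0 if data was read, -1 on error. */
int read_or_detect_close(int fd, char *buf, size_t buflen, ssize_t *nread)
{
    *nread = recv(fd, buf, buflen, 0);
    if (*nread == 0)
        return 1;      /* end of stream: peer closed or shut down its write side */
    return (*nread < 0) ? -1 : 0;
}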
You're probably better off using ZeroMQ. That will send a whole message, or no message at all. If you set its send buffer length to 1 (the shortest it will go), you can test whether the send buffer is full. If not, the message was successfully transferred, probably. ZeroMQ is also really nice if you have an unreliable or intermittent network connection as part of your system.
That's still not entirely satisfactory. You're probably even better off implementing your own send-acknowledge mechanism on top of ZeroMQ. That way you have absolute proof that a message was received. You don't have proof that a message was not received (something can go wrong between emitting and receiving the ack, and you cannot solve the Two Generals Problem). But that's the best that can be achieved. What you'll have done then is implement a Communicating Sequential Processes architecture on top of ZeroMQ's Actor Model, which is itself implemented on top of TCP streams. Ultimately it's a bit slower, but your application has more certainty of knowing what's gone on.

How to identify initial packet in TCP 3-way handshake?

Is it true that the Acknowledgment Number (please note I'm not talking about the ACK flag here) is set to 0 when a client initiates the 3-way TCP handshake by sending its initial packet?
I have a TCP trace file and I used the pcap library in C to print out specific information for every packet. What I noticed was that the Acknowledgment Number of the very first packet in a connection is always set to 0. Could I use that as criteria in identifying the first packet of a TCP session?
If not that, what other criteria can I use to identify a given packet as being the first one sent by the web client? Simply looking at the SYN flag won't work since when the server responds to the initial request by the client, it will also set its SYN flag.
The sequence number will have a random value, but it is completely normal behavior for the acknowledgement number field (which is what you are asking about) to contain 32 bits of zeroes.
That isn't to say it can't contain something else. You rightly distinguish the ACK flag from the acknowledgement number: the flag's actual meaning is to signal that the value in the acknowledgement number field is valid. Since the flag is cleared on the initial SYN, no such claim is made, so the field can contain absolutely anything; again, though, the normal behavior is zero.
As for distinguishing the initial SYN from the response: the response to a properly formed initial SYN from a normal IP stack will be a SYN-ACK, so while SYN is set, ACK will also be set. To distinguish one from the other, the better practice is to look at the TCP code bits (flags) field rather than at the sequence numbers, unless you are trying to do some kind of anomaly detection.
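A sketch of that check on packets captured with libpcap, assuming tcp points at the TCP header and using the BSD-style field names from <netinet/tcp.h> (on some platforms you may need the syn/ack bit-field names instead):

#include <netinet/tcp.h>

/* Client's opening packet of the handshake: SYN set, ACK clear. */
int is_initial_syn(const struct tcphdr *tcp)
{
    return (tcp->th_flags & (TH_SYN | TH_ACK)) == TH_SYN;
}

/* Server's reply in the handshake: SYN and ACK both set. */
int is_syn_ack(const struct tcphdr *tcp)
{
    return (tcp->th_flags & (TH_SYN | TH_ACK)) == (TH_SYN | TH_ACK);
}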

What is happening when a TCP sequence number arrives that is not what is expected?

I am writing a program that uses libpcap to capture packets and reassemble a TCP stream. My program simply monitors the traffic and so I have no control over the reception and transmittal of packets. My program disregards all non TCP/IP traffic.
I calculate the next expected sequence number from the ISN and then the successive SEQ numbers. I have it set up so that every TCP connection is uniquely identified by a tuple made up of the source IP, source port, dest IP, and dest port. Everything goes swimmingly until I receive a packet that has a sequence number different than what I am expecting. I have uploaded screen shots to help illustrate what I am describing here.
My questions are:
1. Where is the data that was in the "lost" packet?
2. How does the SEQ number order recover from this situation?
3. What can I do to handle these occurrences?
Please remember, however, that I am not writing a program that implements TCP. I am writing a program that passively monitors network traffic for TCP streams and attempts to save the raw data to disk, and I am confused as to why the situation described above happens and how I can program to handle it.
Thank you
Where is the data that was in the "lost" packet?
It got dropped by someone
It got lost on the way (wrong detour) and will arrive later
How does the SEQ number order recover from this situation
The receiver notices the segment is out of sequence and doesn't send it to the application, thereby fulfilling its contract: in-order reliable byte stream. Now, what actually happens to get the missing piece is quite intricate and varies from stack to stack. In a nutshell the stack waits for the missing piece to arrive.
The receiver can throw away out-of-sequence segments or it can queue them in a reassembly queue
The receiver can wait for the missing segment to arrive or it can immediately send the ACK it already sent before. Duplicate ACKs will alert the peer something is wrong (look for Fast Retransmit)
When sending acknowledgments the TCP can inform the peer some segments arrived successfully - they're just out of sequence (SACK)
What can I do to handle these occurrences
You can't do anything since you're only monitoring. You could probably get more insight into what is really happening if you also captured the response traffic.
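For what it's worth, a sketch of how a passive monitor might treat each captured segment, given the next sequence number it expects for that flow. The flow struct and the storage helpers here are hypothetical:

#include <stdint.h>
#include <stddef.h>

struct flow { uint32_t expected_seq; };

void write_to_disk(const uint8_t *data, size_t len);                    /* hypothetical */
void buffer_out_of_order(struct flow *f, uint32_t seq,
                         const uint8_t *data, size_t len);              /* hypothetical */
void flush_buffered_segments(struct flow *f);                           /* hypothetical */

void handle_segment(struct flow *f, uint32_t seq, const uint8_t *payload, uint32_t len)
{
    if (len == 0)
        return;                                    /* pure ACK or control segment */

    if (seq == f->expected_seq) {
        write_to_disk(payload, len);               /* in order: consume it */
        f->expected_seq += len;
        flush_buffered_segments(f);                /* queued segments may now be contiguous */
    } else if ((int32_t)(seq - f->expected_seq) > 0) {
        buffer_out_of_order(f, seq, payload, len); /* a hole: keep it until a retransmission fills the gap */
    } else {
        /* seq before expected: retransmission/overlap of data already saved; ignore or trim */
    }
}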
Depending on the window-size of the current TCP connection, if the new packet fits within the receiving window (multi-packet buffer) it will be entered into the receiving queue (and reordered for ordered delivery to protocol clients).
If the sequence number is larger than the maximum for the current window, the packet gets rejected.
See also section 4.4.2 (INPUT PACKET HANDLER) in RFC 675

C socket: does send wait for recv to end?

I use blocking C sockets on Windows.
I use them to send updates of some data from the server to the client and vice versa. I send updates at a high frequency (every 100 ms). Will the send() function wait for the recipient's recv() to receive the data before returning?
I assume not, if I understand the man page correctly:
"Successful completion of send() does not guarantee delivery of the message."
So what will happen if one side runs 10 send() calls while the other has only completed 1 recv()?
Do I need to use some sort of acknowledgement system?
Let's assume you are using TCP. When you call send, the data you are sending is immediately placed on the outgoing queue and send then completes successfully. If, however, send is unable to place the data on the outgoing queue, it will return with an error.
Since TCP is a guaranteed-delivery protocol, the data on the outgoing queue can only be removed once an acknowledgement has been received from the remote end. This is because the data may need to be resent if no ack arrives in time.
If the remote end is sluggish, the outgoing queue will fill up with data and send will then block until there is space to place the new data on the outgoing queue.
The connection can, however, fail in such a way that no further data can be sent. Although any further sends after a TCP connection has been closed will result in an error, the user has no way of knowing how much data actually made it to the other side. (I know of no way of retrieving TCP bookkeeping from a socket to the user application.) Therefore, if confirmation of receipt of the data is required, you should probably implement it at the application level.
For UDP, I think it goes without saying that some way of reporting what has or has not been received is a must.
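A sketch of such an application-level confirmation on top of TCP: the sender does not treat an update as delivered until the peer sends back a one-byte acknowledgement. The single byte 0x06 (ASCII ACK) is just a convention invented for this example:

#include <sys/types.h>
#include <sys/socket.h>

/* Send exactly len bytes, coping with short writes. */
static int send_all(int fd, const char *p, size_t len)
{
    while (len > 0) {
        ssize_t n = send(fd, p, len, 0);
        if (n <= 0)
            return -1;                 /* connection error or closed */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Send one update and block until the peer's acknowledgement byte arrives. */
int send_update_acked(int fd, const char *update, size_t len)
{
    char ack;

    if (send_all(fd, update, len) == -1)
        return -1;
    if (recv(fd, &ack, 1, MSG_WAITALL) != 1)
        return -1;                     /* peer closed or errored before acking */
    return (ack == 0x06) ? 0 : -1;     /* 0x06 = ASCII ACK, our chosen convention */
}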
send() blocks until the operating system (kernel) has taken the data and put it into a buffer of outgoing data. It does not wait until the other end has received the data.
If you're sending by TCP, you get guaranteed delivery1 and the other end will receive the data in the order sent. That might, however, be coalesced together so what you sent as 10 separate updates could be received as a single large packet (or vice versa -- a single update could be broken up across an arbitrary number of packets). This means, among other things, that any ACK of any data implicitly acknowledges receipt of all previous data.
If you're using UDP, none of that is true -- data can arrive out of order, or be dropped and never delivered at all. If you care about all the data being received, you just about need to build some sort of acknowledgement system of your own on top of UDP itself.
1 Of course, there's a limit on the guarantee -- if a network cable gets cut (or whatever) packets won't be delivered, but you'll at least get an error message telling you that the connection was lost.
If you're using TCP, you get the acknowledgements for free, as that is part of what the protocol does under the hood. But it sounds like, for this type of application, you would probably want to use UDP. In either case, though, send() does not block until the client has successfully called recv().
If it's crucial that the client receive every message, then use TCP. If it's ok for the client to miss one or more messages, then use UDP.
TCP guarantees delivery at a lower TCP stack level. It retries delivery until the receiving part acknowledges that the data was received, but your application may never know about that fact.
Let's say that you are sending chunks of data and you need to place those chunks of data somewhere according to some logic. If your application is not prepared to know where each individual block has to be placed, receiving it at the TCP level may be useless. The original post was about the application level logic.

implementing ack over UDP?

We have a system (built in C) in place that performs communication over UDP. Recently we have found a necessity to guarantee delivery of packets. My question is: what would be the minimum additions to a UDP based system to ensure delivery using ack packets? Also, ideally without having to manipulate the packet headers. We have application level control over the packets including sequence numbers and ack/nack flags. I am wondering if this is a lost cause and anything we attempt to do will basically be a flawed and broken version of TCP. Basically, is there a minimalist improvement we can make to achieve guaranteed delivery (we do not need many features of TCP such as congestion control etc.). Thanks!
TCP intertwines 3 services that might be relevant (okay TCP does a lot more, but I'm only going to talk about 3.)
In-order delivery
Reliable delivery
Flow control
You just said that you don't need flow control, so I won't even address that (how you would advertise a window size, and so on. Well, except that you'll probably need a window; I'll get to that.)
You did say that you need reliable delivery. That isn't too hard - you use ACKs to show the sender that a packet has been received. Basic reliable delivery looks like this:
Sender sends the packet
Receiver receives packet, and then sends an ack
If the sender doesn't get an ack (by way of a timer), he resends the packet.
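In code, that basic loop might look roughly like the sketch below. The connected UDP socket, the 4-byte network-order sequence-number prefix, the 1-second select() timeout, and the 5-attempt limit are all arbitrary choices made for illustration:

#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <arpa/inet.h>

/* Send one datagram and wait for its ACK, retransmitting on timeout. */
int send_with_ack(int fd, uint32_t seq, const void *data, size_t len)
{
    unsigned char pkt[1500];
    uint32_t net_seq = htonl(seq);

    if (len > sizeof pkt - 4)
        return -1;                                 /* too big for this sketch */
    memcpy(pkt, &net_seq, 4);                      /* sequence number header */
    memcpy(pkt + 4, data, len);

    for (int attempt = 0; attempt < 5; attempt++) {
        if (send(fd, pkt, len + 4, 0) < 0)
            return -1;

        fd_set rfds;
        struct timeval tv = { 1, 0 };              /* 1-second ACK timeout */
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);

        if (select(fd + 1, &rfds, NULL, NULL, &tv) > 0) {
            uint32_t ack;
            if (recv(fd, &ack, sizeof ack, 0) == (ssize_t)sizeof ack &&
                ntohl(ack) == seq)
                return 0;                          /* ACKed: done */
            /* stale or unrelated ACK: fall through and retransmit */
        }
        /* timeout: retransmit the same datagram with the same sequence number */
    }
    return -1;                                     /* gave up after 5 attempts */
}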
Those three steps don't address these issues:
What if the ACK gets lost?
What if packets arrive out of order?
So for your application, you said you only needed reliable delivery - but didn't say anything about needing them in order. This will affect the way you implement your protocol.
(example where in-order doesn't matter: you're copying employee records from one computer to another. doesn't matter if Alice's record is received before Bob's, as long as both get there.)
So going on the presumption that you only need reliable delivery (since that's what you said in your post), you could achieve this in several ways.
Your sender can keep track of unacknowledged packets. So if it sends #3, 4, 5, and 6, and doesn't get an ACK for 3 and 4, then the sender knows that it needs to retransmit. (Though the sender doesn't know whether packets 3 and 4 were lost, or whether their ACKs were lost. Either way, we have to retransmit.)
But then your receiver could do cumulative ACKs - so in the above example, it would only ACK #6 if it had received 3, 4, and 5. This means the receiver would drop packet 6 if it hadn't received the ones before it. If your network is very reliable, this might not be a bad option.
The protocols described above do, however, have a window - that is, how many packets does the sender send at once? That means you do need some sort of windowing, but not for the purpose of flow control. How will you transmit window sizes?
You could do it without a window by either having the window size constant, or by doing something like stop-and-wait. The former might be a better option.
Anyway, I haven't directly answered your question, but I hope I've pointed out some of the things that are worth considering when architecting this. The task of having "reliable transfer" without parts of flow control (like windowing) and without any regard to in-order is hard! (Let me know if I should give more details about some of this stuff!)
Good luck!
Take a look at Chapters 8 and 20 of Stevens' UNIX Network Programming, Volume 1. He covers a number of different approaches. Section 20.5, "Adding Reliability to a UDP Application", is probably the most interesting to you.
I have a question running here which is collecting answers to "What do you use when you need reliable UDP?". The answers are possibly much more than you want or need, but you might be able to take a look at some of the protocols that have been built on UDP and grab just the ACK part that you need.
From my work with the ENet protocol (a reliable UDP protocol), I expect that you need a sequence number in each UDP datagram, a way of sending an ACK for datagrams that you've received, a way of keeping hold of datagrams that you've sent until you get an ACK for them or they time out, and a way of timing the resending of datagrams for which you have yet to receive an ACK. I would also add an overall timeout for when you decide that you are never going to deliver a particular datagram and, I guess, a callback to your application layer to inform it of this failure to deliver.
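A sketch of the per-datagram bookkeeping that implies: what you might keep for each sent datagram until it is ACKed, retried, or finally given up on. Field names and sizes are purely illustrative:

#include <stdint.h>
#include <stddef.h>
#include <time.h>

struct pending_datagram {
    uint32_t         seq;             /* sequence number carried in the datagram */
    unsigned char    data[1500];      /* copy kept so it can be retransmitted */
    size_t           len;
    struct timespec  next_resend;     /* when to retransmit if still un-ACKed */
    int              attempts;        /* after N tries, give up and notify the app */
    struct pending_datagram *next;    /* simple list of un-ACKed datagrams */
};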
The best way to implement acks is to do it in the application layer. CoAP is an example of an application protocol that runs over UDP but provides reliable data transfer. It keeps a message ID for every Confirmable (CON) message, and the receiver sends back an ACK packet carrying the same message ID. The ACK and message ID fields all live in the application-layer part of the packet, so if the sender doesn't receive an ACK packet with the message ID it sent, it retransmits that packet. The application developer can adapt the protocol to whatever is needed for reliable data transfer.
Tough problem. I would say you won't be able to achieve the reliability of TCP. However, I do understand that sometimes you need reliable UDP.
Gamedev forum
RUDP (a bit more hardcore)
Old Thread about reliable UDP

Resources