We have a system (built in C) in place that performs communication over UDP. Recently we have found a necessity to guarantee delivery of packets. My question is: what would be the minimum additions to a UDP based system to ensure delivery using ack packets? Also, ideally without having to manipulate the packet headers. We have application level control over the packets including sequence numbers and ack/nack flags. I am wondering if this is a lost cause and anything we attempt to do will basically be a flawed and broken version of TCP. Basically, is there a minimalist improvement we can make to achieve guaranteed delivery (we do not need many features of TCP such as congestion control etc.). Thanks!
TCP intertwines 3 services that might be relevant (okay TCP does a lot more, but I'm only going to talk about 3.)
In-order delivery
Reliable delivery
Flow control
You just said that you don't need flow control, so I won't even address that (how you would advertise a window size, etc.), except to note that you'll probably still need a window of some sort; I'll get to it below.
You did say that you need reliable delivery. That isn't too hard: the receiver sends ACKs so the sender knows each packet arrived. Basic reliable delivery looks like this:
Sender sends the packet
Receiver receives packet, and then sends an ack
If the sender doesn't get an ACK within some timeout (tracked with a timer), it resends the packet.
Those three steps don't address these issues:
What if the ACK gets lost?
What if packets arrive out of order?
So for your application, you said you only needed reliable delivery - but didn't say anything about needing them in order. This will affect the way you implement your protocol.
(example where in-order doesn't matter: you're copying employee records from one computer to another. doesn't matter if Alice's record is received before Bob's, as long as both get there.)
So going on the presumption that you only need reliable (since that's what you said in your post), you could achieve this several ways.
Your sender can keep track of unacknowledged packets. So if it sends #3, 4, 5, and 6, and doesn't get an ACK for 3 and 4, then the sender knows that it needs to retransmit them. (The sender doesn't know whether packets 3 and 4 were lost or whether their ACKs were lost; either way, it has to retransmit.)
Alternatively, your receiver could send cumulative ACKs: in the above example, it would only ACK #6 once it had received 3, 4, and 5, and it would drop packet 6 if it hadn't received the ones before it. If your network is very reliable, this might not be a bad option.
The protocols described above, however, do involve a window: that is, a limit on how many packets the sender has outstanding at once. So you do need some sort of windowing, just not for the purpose of flow control. How will you communicate window sizes?
You could avoid that question entirely, either by fixing the window size as a constant or by doing something like stop-and-wait (a window of one). The former is probably the better option.
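To make the stop-and-wait option concrete, here is a rough sender-side sketch in C (my own illustration, not a complete protocol: the header layout, the 200 ms timeout, the retry limit, and the payload cap are all assumptions):

#include <arpa/inet.h>    /* htonl, ntohl */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

/* Application-level header carried in front of every datagram's payload. */
struct rudp_hdr {
    uint32_t seq;        /* sequence number, network byte order on the wire */
    uint8_t  is_ack;     /* nonzero: this datagram only acknowledges `seq` */
};

/* Stop-and-wait sender: send one datagram on a connect()ed UDP socket,
 * wait up to 200 ms for an ACK carrying the same sequence number, and
 * retransmit a few times before reporting failure to the caller. */
static int rudp_send(int fd, uint32_t seq, const void *payload, size_t len)
{
    uint8_t pkt[sizeof(struct rudp_hdr) + 1400];
    struct rudp_hdr hdr = { htonl(seq), 0 };

    if (len > 1400)
        return -1;                                   /* keep datagrams under a typical MTU */
    memcpy(pkt, &hdr, sizeof hdr);
    memcpy(pkt + sizeof hdr, payload, len);

    for (int attempt = 0; attempt < 5; attempt++) {
        send(fd, pkt, sizeof hdr + len, 0);

        struct timeval tv = { 0, 200 * 1000 };       /* 200 ms ACK timeout */
        setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

        struct rudp_hdr ack;
        ssize_t n = recv(fd, &ack, sizeof ack, 0);   /* a real protocol would also cope
                                                        with data packets arriving here */
        if (n == (ssize_t)sizeof ack && ack.is_ack && ntohl(ack.seq) == seq)
            return 0;                                /* delivered and acknowledged */
    }
    return -1;                                       /* give up; tell the application */
}

The receiving side would echo the header back with is_ack set; moving to a fixed window larger than one simply means keeping this state for several outstanding sequence numbers at once.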
Anyway, I haven't directly answered your question, but I hope I've pointed out some of the things that are worth considering when architecting this. Achieving "reliable transfer" without the rest of the flow-control machinery (like windowing) and without any in-order guarantee is trickier than it looks! (Let me know if I should give more details about any of this.)
Good luck!
Take a look at Chapter 8 and Chapter 20 of Stevens' UNIX Network Programming, Volume 1. He covers a number of different approaches. Section 20.5, "Adding Reliability to a UDP Application", is probably the most interesting to you.
I have a question running here which is collecting answers to "What do you use when you need reliable UDP?". The answers are possibly much more than you want or need, but you might be able to take a look at some of the protocols that have been built on UDP and grab just the ACK part that you need.
From my work with the ENet protocol (a reliable UDP protocol), I expect that you need:
a sequence number in each UDP datagram
a way of sending an ACK for datagrams that you've received
a way of keeping hold of datagrams that you've sent until you get an ACK for them or they time out
a way of timing the resending of datagrams for which you have yet to receive an ACK
I would also add an overall timeout after which you decide that you are never going to deliver a particular datagram, and, I guess, a callback to your application layer to inform it of that failure to deliver.
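As a hypothetical illustration of that bookkeeping (the field names and limits are mine, not ENet's), the state kept for each unacknowledged datagram might look roughly like this:

#include <stddef.h>
#include <stdint.h>
#include <time.h>

#define RUDP_MAX_PAYLOAD 1400

/* Bookkeeping kept for every datagram sent, until it is ACKed or abandoned. */
struct pending_datagram {
    uint32_t  seq;                        /* sequence number sent on the wire */
    size_t    len;
    uint8_t   payload[RUDP_MAX_PAYLOAD];  /* copy kept for retransmission */
    time_t    first_sent;                 /* drives the overall "give up" timeout */
    time_t    next_resend;                /* when to retransmit if still unACKed */
    unsigned  attempts;                   /* how many times it has been sent so far */
};

/* Callback to the application layer for the "never going to deliver this" case. */
typedef void (*delivery_failed_cb)(uint32_t seq, void *user_data);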
The best way to implement ACKs is to do it in the application layer. CoAP is an example of an application protocol that runs on UDP but provides reliable data transfer. It keeps a message ID for every Confirmable (CON) message, and the receiver sends back an ACK packet with the same message ID. The ACK and message ID fields are all kept in the application-layer part of the packet, so if the sender doesn't receive an ACK packet with the message ID it sent, it retransmits that packet. An application developer can adapt the protocol to suit the needs of the reliable data transfer required.
Tough problem. I would say you won't be able to achieve the reliability of TCP. However, I do understand that sometimes you need to have reliable UDP.
Gamedev forum
RUDP (a bit more hardcore)
Old Thread about reliable UDP
When will a TCP packet be fragmented at the application layer? When a TCP packet is sent from an application, will the recipient at the application layer ever receive it as two or more packets? If so, what conditions cause the packet to be divided? It seems like a packet won't be fragmented until it reaches the Ethernet limit of 1500 bytes at the link layer. But that fragmentation will be transparent to the recipient at the application layer, since the network layer will reassemble the fragments before sending the packet up to the next layer, right?
It will be split when it hits a network device with a lower MTU than the packet's size. Most Ethernet devices use an MTU of 1500, but it can often be smaller: 1492 if the Ethernet is carried over PPPoE (DSL) because of the extra encapsulation, and even lower if a second layer is added, such as Windows Internet Connection Sharing. And dial-up is normally 576!
In general, though, you should remember that TCP is not a packet protocol. It uses packets at the lowest level to transmit over IP, but as far as the interface of any TCP stack is concerned, it is a stream protocol and has no requirement to provide you with a 1:1 relationship to the physical packets sent or received (for example, most stacks will hold data until a certain period of time has expired, or until there is enough data to maximize the size of the IP packet for the given MTU).
As an example, if you send two "packets" (call your send function twice), the receiving program might only receive one "packet" (the receiving TCP stack might combine them). If you are implementing a message-type protocol over TCP, you should include a header at the beginning of each message (or some other header/footer mechanism) so that the receiving side can split the TCP stream back into individual messages, either when a message is received in two parts or when several messages are received as one chunk.
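A minimal sketch of such a framing scheme (my own illustration, assuming a 4-byte big-endian length prefix; a helper keeps calling recv() until the whole message has arrived):

#include <arpa/inet.h>   /* ntohl */
#include <stdint.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Keep calling recv() until exactly len bytes have arrived (or error/EOF). */
static int recv_all(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)
            return -1;          /* error, or the peer closed the connection */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Read one length-prefixed message: 4-byte big-endian length, then the body.
 * Returns a malloc'd buffer (caller frees) and stores the body length in *out_len. */
static void *recv_message(int fd, uint32_t *out_len)
{
    uint32_t netlen;
    if (recv_all(fd, &netlen, sizeof netlen) < 0)
        return NULL;
    uint32_t len = ntohl(netlen);
    void *body = malloc(len ? len : 1);
    if (body == NULL || recv_all(fd, body, len) < 0) {
        free(body);
        return NULL;
    }
    *out_len = len;
    return body;
}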
Fragmentation should be transparent to a TCP application. Keep in mind that TCP is a stream protocol: you get a stream of data, not packets! If you are building your application based on the idea of complete data packets then you will have problems unless you add an abstraction layer to assemble whole packets from the stream and then pass the packets up to the application.
The question makes an assumption that is not true -- TCP does not deliver packets to its endpoints, rather, it sends a stream of bytes (octets). If an application writes two strings into TCP, it may be delivered as one string on the other end; likewise, one string may be delivered as two (or more) strings on the other end.
RFC 793, Section 1.5:
"The TCP is able to transfer a
continuous stream of octets in each
direction between its users by
packaging some number of octets into
segments for transmission through the
internet system."
The key words being continuous stream of octets (bytes).
RFC 793, Section 2.8:
"There is no necessary relationship
between push functions and segment
boundaries. The data in any particular
segment may be the result of a single
SEND call, in whole or part, or of
multiple SEND calls."
The entirety of section 2.8 is relevant.
At the application layer there are any number of reasons why the whole 1500 bytes may not show up in one read. Various factors in the operating system and TCP stack may cause the application to get some bytes in one read call and the rest in the next. Yes, the TCP stack has to re-assemble the packet before passing the data up, but that doesn't mean your app is going to get it all in one shot (it is LIKELY that you will get it in one read, but it's not GUARANTEED).
TCP tries to guarantee in-order delivery of bytes, with error checking, automatic re-sends, etc happening behind your back. Think of it as a pipe at the app layer and don't get too bogged down in how the stack actually sends it over the network.
This page is a good source of information about some of the issues that others have brought up, namely the need for data encapsulation on an application-protocol-by-application-protocol basis. It's not quite authoritative in the sense you describe, but it has examples and is sourced to some pretty big names in network programming.
If a packet exceeds the MTU of a network device it will be broken up into multiple packets. (Note that most equipment is set to 1500 bytes, but this is not a necessity.)
The reconstruction of the packet should be entirely transparent to the applications.
Different network segments can have different MTU values. In that case fragmentation can occur. For more information see TCP Maximum segment size
This fragmentation and reassembly happens below the application layer, in the IP and TCP layers. At the application layer there are no packets any more; TCP presents a contiguous data stream to the application.
A the "application layer" a TCP packet (well, segment really; TCP at its own layer doesn't know from packets) is never fragmented, since it doesn't exist. The application layer is where you see the data as a stream of bytes, delivered reliably and in order.
If you're thinking about it otherwise, you're probably approaching something in the wrong way. However, this is not to say that there might not be a layer above this, say, a sequence of messages delivered over this reliable, in-order bytestream.
Correct - the most informative way to see this is with Wireshark, an invaluable tool. Take the time to figure it out; it has saved me several times and gives a good reality check.
If a 3000-byte packet enters an Ethernet network with the default MTU of 1500 bytes (for Ethernet), it will be fragmented into multiple packets of at most 1500 bytes each. That is the only time I can think of.
Wireshark is your best bet for checking this. I have been using it for a while and am totally impressed
I have a socket programming situation where the client shuts down the writing end of the socket to let the server know input is finished (via receiving EOF), but keeps the reading end open to read back a result (one line of text). It would be useful for the server to know that the client has successfully read the result and closed the socket (or at least shut down the reading end). Is there a good way to check/wait for such status?
No. All you can know is whether your sends succeeded, and some of them will succeed even after the peer has shut down its read side, because of TCP buffering.
This is poor design. If the server needs to know that the client received the data, the client needs to acknowledge it, which means it can't shutdown its write end. The client should:
send an in-band termination message, as data.
read and acknowledge all further responses until end of stream occurs.
close the socket.
The server should detect the in-band termination message and:
stop reading requests from the socket
send all outstanding responses and read the acknowledgements
close the socket.
OR, if the objective is only to ensure that client and server end at the same time, each end should shutdown its socket for output and then read input until end of stream occurs, then close the socket. That way the final closes will occur more or less simultaneously on both ends.
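A minimal sketch of that pattern, assuming a connected TCP socket fd (illustration only, with error handling omitted):

#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Shut down our sending side, drain the peer's remaining data until
 * end of stream, then close. Both ends doing this finish roughly together. */
static void graceful_close(int fd)
{
    char buf[4096];
    ssize_t n;

    shutdown(fd, SHUT_WR);                 /* we will send no more data */
    while ((n = read(fd, buf, sizeof buf)) > 0)
        ;                                  /* discard (or process) anything still in flight */
    close(fd);                             /* read returned 0: peer has finished sending */
}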
getsockopt with TCP_INFO seems the most obvious choice, but it's not cross-platform.
Here's an example for Linux:
import socket
import time
import struct
import pprint
def tcp_info(s):
    rv = dict(zip("""
        state ca_state retransmits probes backoff options snd_rcv_wscale
        rto ato snd_mss rcv_mss unacked sacked lost retrans fackets
        last_data_sent last_ack_sent last_data_recv last_ack_recv
        pmtu rcv_ssthresh rtt rttvar snd_ssthresh snd_cwnd advmss reordering
        rcv_rtt rcv_space
        total_retrans
        pacing_rate max_pacing_rate bytes_acked bytes_received segs_out segs_in
        notsent_bytes min_rtt data_segs_in data_segs_out""".split(),
        struct.unpack("BBBBBBBIIIIIIIIIIIIIIIIIIIIIIIILLLLIIIIII",
                      s.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 160))))
    wscale = rv.pop("snd_rcv_wscale")
    # bit field layout is up to compiler
    # FIXME test the order of nibbles
    rv["snd_wscale"] = wscale >> 4
    rv["rcv_wscale"] = wscale & 0xf
    return rv

for i in range(100):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("localhost", 7878))
    s.recv(10)
    pprint.pprint(tcp_info(s))
I doubt a true cross-platform alternative exists.
Fundamentally there are quite a few states:
you wrote data to socket, but it was not sent yet
data was sent, but not received
data was sent and lost (relies on timer)
data was received, but not acknowledged yet
acknowledgement not received yet
acknowledgement lost (relies on timer)
data was received by remote host but not read out by application
data was read out by application, but socket still alive
data was read out, and app crashed
data was read out, and app closed the socket
data was read out, and app called shutdown(WR) (almost same as closed)
FIN was not sent by remote yet
FIN was sent by remote but not received yet
FIN was sent and got lost
FIN received by your end
Obviously your OS can distinguish quite a few of these states, but not all of them. I can't think of an API that would be this verbose...
Some systems allow you to query remaining send buffer space. Perhaps if you did, and socket was already shut down, you'd get a neat error?
The good news is that just because a socket is shut down doesn't mean you can't interrogate it. I can get all of TCP_INFO after shutdown, with state=7 (closed); in some cases it reports state=8 (close wait).
http://lxr.free-electrons.com/source/net/ipv4/tcp.c#L1961 has all the gory details of Linux TCP state machine.
TL;DR:
Don't rely on the socket state for this; it will let you down in many error cases. You need to bake the acknowledgement/receipt facility into your communications protocol. Using the first character of each line for status/ack works really well for text-based protocols.
On many, but not all, Unix-like/POSIXy systems, one can use the TIOCOUTQ (also SIOCOUTQ) ioctl to determine how much data is left in the outgoing buffer.
For TCP sockets, even if the other end has shut down its write side (and therefore will send no more data to this end), all transmissions are acknowledged. The data in the outgoing buffer is only removed when the acknowledgement from the recipient kernel is received. Thus, when there is no more data in the outgoing buffer, we know that the kernel at the other end has received the data.
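As an illustration only (my own sketch, not from this answer), on Linux one could poll TIOCOUTQ until the send buffer drains:

#include <stdio.h>
#include <sys/ioctl.h>   /* TIOCOUTQ (Linux) */
#include <time.h>

/* Poll until the kernel reports no un-ACKed bytes left in the socket's
 * send buffer. Returns 0 once drained, -1 if the ioctl fails. */
static int wait_output_drained(int fd)
{
    int pending;
    const struct timespec pause = { 0, 10 * 1000 * 1000 };   /* 10 ms between polls */

    for (;;) {
        if (ioctl(fd, TIOCOUTQ, &pending) == -1) {
            perror("ioctl(TIOCOUTQ)");
            return -1;
        }
        if (pending == 0)
            return 0;            /* remote kernel has ACKed everything we sent */
        nanosleep(&pause, NULL);
    }
}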
Unfortunately, this does not mean that the application has received and processed the data. This same limitation applies to all methods that rely on socket state; this is also the reason why fundamentally, the acknowledgement of receipt/acceptance of the final status line must come from the other application, and cannot be automatically detected.
This, in turn, means that neither end can shut down their sending sides before the very final receipt/acknowledge message. You cannot rely on TCP -- or any other protocols' -- automatic socket state management. You must bake in the critical receipts/acknowledgements into the stream protocol itself.
In OP's case, the stream protocol seems to be simple line-based text. This is quite useful and easy to parse. One robust way to "extend" such a protocol is to reserve the first character of each line for the status code (or alternatively, reserve certain one-character lines as acknowledgements).
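A tiny hypothetical illustration of that convention (the status characters and handler name are mine, not from this answer):

#include <stdio.h>

/* Hypothetical convention: the first character of each line is the status.
 * 'R' = result line (payload follows), 'A' = acknowledgement from the peer.
 * Returns 1 once the peer's acknowledgement has been seen, 0 otherwise. */
static int handle_line(const char *line, FILE *tx)
{
    switch (line[0]) {
    case 'R':
        printf("result: %s", line + 1);  /* hand the payload to the application */
        fputs("A\n", tx);                /* acknowledge receipt explicitly */
        fflush(tx);
        return 0;
    case 'A':                            /* peer confirmed our final status line */
        return 1;
    default:                             /* unknown status character: ignore */
        return 0;
    }
}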
For large in-flight binary protocols (i.e., protocols where the sender and receiver are not really in sync), it is useful to label each data frame with an increasing (cyclic) integer, and have the other end respond, occasionally, with an update to let the sender know which frames have been completely processed, and which ones received, and whether additional frames should arrive soon/not-very-soon. This is very useful for network-based appliances that consume a lot of data, with the data provider wishing to be kept updated on the progress and desired data rate (think 3D printers, CNC machines, and so on, where the contents of the data changes the maximum acceptable data rate dynamically).
Okay so I recall pulling my hair out trying to solve this very problem back in the late 90's. I finally found an obscure doc that stated that a read call to a disconnected socket will return a 0. I use this fact to this day.
You're probably better off using ZeroMQ. That will send a whole message, or no message at all. If you set its send buffer length to 1 (the shortest it will go) you can test whether the send buffer is full. If not, the message was successfully transferred, probably. ZeroMQ is also really nice if you have an unreliable or intermittent network connection as part of your system.
That's still not entirely satisfactory. You're probably even better off implementing your own send-acknowledge mechanism on top of ZeroMQ. That way you have absolute proof that a message was received. You don't have proof that a message was not received (something can go wrong between emitting and receiving the ACK, and you cannot solve the Two Generals Problem). But that's the best that can be achieved. What you'll have done then is implement a Communicating Sequential Processes architecture on top of ZeroMQ's Actor Model, which is itself implemented on top of TCP streams. Ultimately it's a bit slower, but your application has more certainty of knowing what's gone on.
I'm writing a program with a client and a server, and I've almost got it working.
At the moment I can run the server on a port, and run the client against the same port, giving it the IP address and the name of the .wav file that I want to read.
Now what I'd like to do is add a delay between each sendto() so that the client receives the packets and reads them properly. Without that the client receives many packets at once and loses many of them.
So could someone tell me how this works in UDP, and how to do that?
add a delay between each sendto()
I believe that you are asking how to put a small delay between each sendto(). If you open a raw .wav file and just send its bytes, there is a good chance that the data will be getting to the client much faster than it can play it. If you want to stream data at the same rate as it is played, send data in chunks, then let the client request the next chunk.
If that is not an option, you can send a chunk of data (e.g. 20 ms worth), then let the thread sleep for a little less than 20 ms before sending the next chunk. Sleeps are kind of a hack; some sort of audio callback on the server would be best. The bottom line is that your client's buffer has to be big enough to consume the amount of data your server is sending.
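A rough sketch of that pacing approach (my own illustration; the chunk size and the 18 ms pause are assumptions, and a real server would pace from an audio clock rather than sleeping):

#include <stddef.h>
#include <sys/socket.h>
#include <time.h>

/* Send `len` bytes over a UDP socket in fixed-size chunks, sleeping a little
 * less than the chunk's playback duration between sendto() calls. */
static void send_paced(int fd, const struct sockaddr *dst, socklen_t dlen,
                       const unsigned char *data, size_t len)
{
    const size_t chunk = 1400;                               /* stay under a typical MTU */
    const struct timespec pause = { 0, 18 * 1000 * 1000 };   /* ~18 ms, in nanoseconds */

    for (size_t off = 0; off < len; off += chunk) {
        size_t n = (len - off < chunk) ? (len - off) : chunk;
        sendto(fd, data + off, n, 0, dst, dlen);             /* errors ignored in this sketch */
        nanosleep(&pause, NULL);
    }
}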
without that the client receives many packets at once and loses many of them
I believe that you are asking how to deal with the variation in packet inter-arrival times, with packet losses, and with packets received out of order. It sounds like you were simply sending packets at a faster rate than your client could handle. You might need a larger buffer on the client.
In any case, with UDP/IP, you have the following scenarios
lost packets
packets arriving out of order
packets arriving in bursts (each packet will not arrive exactly X ms apart)
To deal with this, you minimally need what is known as a dejitter buffer. This is a buffer that collects packets as they arrive and inserts them, typically into a ring buffer. The buffer has to be large enough to hold the packets your server is sending, because your client may be consuming packets from the buffer more slowly than the server is sending them (or vice versa). To get packets into the right order and deal with losses, you have to detect them; you can detect losses and out-of-order arrivals by simply numbering each packet that is sent. As packets arrive you can put them into the buffer at the correct location. If a packet is lost, you need to handle that with some sort of loss concealment (playing silence, estimating the lost packet, etc.), which is beyond the scope of this question.
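A bare-bones sketch of such a dejitter buffer, as an illustration only (the slot count and maximum packet size here are arbitrary assumptions):

#include <stdint.h>
#include <string.h>

#define SLOTS    64          /* ring capacity; must cover the expected jitter */
#define PKT_MAX  1500

struct slot {
    int      filled;         /* has this sequence number arrived yet? */
    uint16_t seq;
    size_t   len;
    uint8_t  data[PKT_MAX];
};

static struct slot ring[SLOTS];

/* Insert an arriving packet at the position given by its sequence number.
 * Out-of-order packets land in the right slot automatically; an unfilled
 * slot at playback time indicates a lost packet to conceal. */
static void dejitter_insert(uint16_t seq, const uint8_t *data, size_t len)
{
    struct slot *s = &ring[seq % SLOTS];
    if (len > PKT_MAX)
        len = PKT_MAX;
    s->filled = 1;
    s->seq = seq;
    s->len = len;
    memcpy(s->data, data, len);
}

/* At playback time the consumer takes ring[next_seq % SLOTS]; if it is not
 * filled (or holds an older seq), the packet was lost or is still in flight. */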
The RTP protocol is designed for streaming and is an application protocol that works over UDP.
Since you're using UDP, which is connectionless, you don't really have a way to control the flow of packets unless you implement some kind of acknowledgement mechanism... at which point you might as well be using TCP because it already has that built in.
Although I don't have much experience in network programming, this looks a bit more complicated than it might seem at first glance. So UDP is connectionless. That speeds things up a lot, but there is a price to pay -- off the top of my head, packets can get lost or arrive out of order.
Those are situations you need to handle on the client end. Your client needs to be designed so that it accepts packets as they arrive at an arbitrary rate, skips over those that fail to arrive within a certain time (this matters for live streaming; for buffered playback it doesn't), and takes order into consideration, which means that each packet needs to carry information about its place relative to previous packets.
I am writing a program that uses libpcap to capture packets and reassemble a TCP stream. My program simply monitors the traffic and so I have no control over the reception and transmittal of packets. My program disregards all non TCP/IP traffic.
I calculate the next expected sequence number from the ISN and then the successive SEQ numbers. I have it set up so that every TCP connection is uniquely identified by a tuple made up of the source IP, source port, dest IP, and dest port. Everything goes swimmingly until I receive a packet that has a sequence number different than what I am expecting. I have uploaded screen shots to help illustrate what I am describing here.
My questions are:
1. Where is the data that was in the "lost" packet?
2. How does the SEQ number order recover from this situation?
3. What can I do to handle these occurrences?
Please remember, however, that I am not writing a program that adheres to TCP. I am writing a program that passively monitors network traffic for TCP streams and attempts to save the raw data to disk, and I am confused as to why the situation described above happens and how I can program to handle it.
Thank you
Where is the data that was in the "lost" packet?
It got dropped by someone
It got lost on the way (wrong detour) and will arrive later
How does the SEQ number order recover from this situation
The receiver notices the segment is out of sequence and doesn't deliver it to the application, thereby fulfilling its contract: an in-order, reliable byte stream. Now, what actually happens to get the missing piece is quite intricate and varies from stack to stack. In a nutshell, the stack waits for the missing piece to arrive.
The receiver can throw away out-of-sequence segments or it can queue them in a reassembly queue
The receiver can wait for the missing segment to arrive or it can immediately send the ACK it already sent before. Duplicate ACKs will alert the peer something is wrong (look for Fast Retransmit)
When sending acknowledgments the TCP can inform the peer some segments arrived successfully - they're just out of sequence (SACK)
What can I do to handle these occurrences
You can't do anything since you're only monitoring. You could probably get more insight into what is really happening if you also captured the response traffic.
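If the goal is to keep saving the stream despite this, one option for the passive monitor (my own sketch, not from the answers above; it ignores overlapping or partially duplicate segments) is to park out-of-order segments until the gap is filled:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* One out-of-order TCP segment held back for a single monitored connection. */
struct held_seg {
    uint32_t seq;
    size_t   len;
    uint8_t *data;
    struct held_seg *next;
};

struct stream {
    uint32_t expected_seq;    /* sequence number of the next byte to write out */
    struct held_seg *held;    /* parked future segments (unsorted list) */
    FILE *out;                /* where reassembled payload is written */
};

static void write_payload(struct stream *st, const uint8_t *data, size_t len)
{
    fwrite(data, 1, len, st->out);
}

/* Feed one captured segment. In-order data is written immediately; data from
 * the "future" is parked until the missing bytes show up. */
static void feed_segment(struct stream *st, uint32_t seq,
                         const uint8_t *data, size_t len)
{
    if (seq != st->expected_seq) {
        if ((int32_t)(seq - st->expected_seq) < 0)
            return;                           /* old duplicate: ignore it */
        struct held_seg *h = malloc(sizeof *h);
        if (h == NULL)
            return;
        h->data = malloc(len);
        if (h->data == NULL) { free(h); return; }
        memcpy(h->data, data, len);
        h->seq = seq;
        h->len = len;
        h->next = st->held;
        st->held = h;
        return;
    }

    write_payload(st, data, len);
    st->expected_seq += (uint32_t)len;

    /* See whether any parked segment now continues the stream. */
    for (struct held_seg **pp = &st->held; *pp; ) {
        if ((*pp)->seq == st->expected_seq) {
            struct held_seg *h = *pp;
            *pp = h->next;
            write_payload(st, h->data, h->len);
            st->expected_seq += (uint32_t)h->len;
            free(h->data);
            free(h);
            pp = &st->held;                   /* rescan from the start */
        } else {
            pp = &(*pp)->next;
        }
    }
}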
Depending on the window-size of the current TCP connection, if the new packet fits within the receiving window (multi-packet buffer) it will be entered into the receiving queue (and reordered for ordered delivery to protocol clients).
If the sequence number is larger than the maximum for the current window, the packet gets rejected.
See also section 4.4.2 (INPUT PACKET HANDLER) in RFC 675
I need to perform data filtering based on the source unicast IPv4 address of datagrams arriving to a Linux UDP socket.
Of course, it is always possible to manually perform the filtering based on the information provided by recvfrom, but I am wondering if there could be another more intelligent/efficient approach (if possible, not using libpcap).
Any ideas?
If it's a single source you need to allow, then just use connect(2) and the kernel will do the filtering for you. As a bonus, connected UDP sockets are more efficient. This, of course, does not work for more than one source.
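A minimal sketch of that approach (my own illustration; note that a connected UDP socket filters on the peer's address and port together):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind to a local port, then connect() to the one allowed peer. The kernel
 * then discards datagrams from any other source, and plain recv()/send()
 * can be used instead of recvfrom()/sendto(). */
static int make_filtered_udp_socket(uint16_t local_port,
                                    const char *allowed_ip, uint16_t peer_port)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd == -1)
        return -1;

    struct sockaddr_in local, peer;
    memset(&local, 0, sizeof local);
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(local_port);

    memset(&peer, 0, sizeof peer);
    peer.sin_family = AF_INET;
    peer.sin_port = htons(peer_port);
    inet_pton(AF_INET, allowed_ip, &peer.sin_addr);

    if (bind(fd, (struct sockaddr *)&local, sizeof local) == -1 ||
        connect(fd, (struct sockaddr *)&peer, sizeof peer) == -1) {
        close(fd);
        return -1;
    }
    return fd;
}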
As already stated, NetFilter (the Linux firewall) can help you here.
You could also use the UDP options of xinetd and tcpd to perform filtering.
What proportion of datagrams are you expecting to discard? If it is very high, then you may want to review your application design (for example, to make the senders not send so many datagrams which are to be discarded). If it is not very high, then you don't really care about how much effort you spend discarding them.
Suppose discarding a packet takes the same amount of (runtime) effort as processing it normally; if you discard 1% of packets, you will only be spending 1% of time discarding. However, realistically, discarding is likely to be much easier than processing messages.