How to simulate UDP issues to develop a reliable application? [duplicate] - c

I am currently creating server software that communicates with multiple Arduino boards. Due to the hardware, I am using the UDP protocol. I have a pretty simple mechanism that will resend packets in most of the cases when they get lost. I have two questions now:
How probable is it that UDP packets get lost in a network with no Internet access and about 20 Arduinos and one computer? Is it even necessary to have a resend method?
Is there a way I can simulate UDP packet loss in this network to check if the resend mechanisms are working?

How probable is it that UDP packets get lost in a network with no Internet access and about 20 Arduinos and one computer?
The probability is 100% that sooner or later a packet will be dropped.
If you want a more detailed statistic, like the probability of a packet being dropped within any particular period of time, the only real way to know is to try it and find out (using e.g. sequence numbers in the packets so that the receiver(s) can detect when a packet has been dropped by noting the skipped sequence number). The probability will depend very much on the size of the packets, the speed at which the packets are being sent, the CPU speed of the receivers, what other tasks the receivers are spending CPU time on, the quality of your Ethernet switch, the quality of your Ethernet cables, the phase of the moon, etc etc.
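For illustration, here is a minimal sketch of that sequence-number idea, assuming POSIX UDP sockets and an invented 4-byte header at the front of every datagram (the function and counter names are made up):
/* Sketch: count dropped datagrams by prefixing each one with a sequence number.
 * Assumes POSIX sockets; the 4-byte big-endian header is an invented convention. */
#include <arpa/inet.h>   /* ntohl */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

static uint32_t next_expected = 0;   /* sequence number we expect to see next */
static uint32_t total_dropped = 0;   /* running count of missed datagrams */

void on_datagram(const unsigned char *buf, ssize_t len)
{
    if (len < (ssize_t)sizeof(uint32_t))
        return;                                /* too short to carry the header */

    uint32_t seq;
    memcpy(&seq, buf, sizeof seq);
    seq = ntohl(seq);                          /* sender wrote it in network order */

    if (seq > next_expected) {                 /* gap => that many datagrams lost */
        total_dropped += seq - next_expected;
        printf("lost %u datagram(s)\n", (unsigned)(seq - next_expected));
    }
    next_expected = seq + 1;
    /* payload starts at buf + 4 and is len - 4 bytes long; an out-of-order
     * arrival shows up as a false loss here, which is fine for a rough statistic */
}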
Is it even necessary to have a resend method?
That depends on what the consequences would be to having a packet dropped. For some applications (e.g. streaming audio or video, or audio-metering data), dropping a packet is no big deal; you just ignore the fact that some data was lost, and continue on with the next packet as usual. For other applications (e.g. file transmit/receive), the loss of a packet means the loss of data that the receiver needs, so you'll want to have some way to recover from that loss, e.g. by detecting it and triggering a resend, or else the entire transfer will fail (or at least the receiver will end up with only a partial file).
Is there a way I can simulate UDP packet loss in this network to check if the resend mechanisms are working?
Sure, just put some logic into the receivers so that they occasionally pretend to not have received a packet:
int numBytesReceived = recv(...);
if ((rand() % 100) == 0) // Simulate a 1% packet loss rate
{
    printf("Pretending to have dropped a packet!\n");
}
else
{
    // handle the incoming packet as usual
}

Related

Can a linux socket return data less than the underlying packet? [duplicate]

When will a TCP packet be fragmented at the application layer? When a TCP packet is sent from an application, will the recipient at the application layer ever receive the packet in two or more packets? If so, what conditions cause the packet to be divided? It seems like a packet won't be fragmented until it reaches the Ethernet (at the network layer) limit of 1500 bytes. But, that fragmentation will be transparent to the recipient at the application layer since the network layer will reassemble the fragments before sending the packet up to the next layer, right?
It will be split when it hits a network device with a lower MTU than the packet's size. Most ethernet devices are 1500, but it can often be smaller, like 1492 if that ethernet is going over PPPoE (DSL) because of the extra routing information, even lower if a second layer is added like Windows Internet Connection Sharing. And dialup is normally 576!
In general though you should remember that TCP is not a packet protocol. It uses packets at the lowest level to transmit over IP, but as far as the interface for any TCP stack is concerned, it is a stream protocol and has no requirement to provide you with a 1:1 relationship to the physical packets sent or received (for example most stacks will hold messages until a certain period of time has expired, or there are enough messages to maximize the size of the IP packet for the given MTU).
As an example, if you sent two "packets" (call your send function twice), the receiving program might only receive 1 "packet" (the receiving TCP stack might combine them together). If you are implementing a message type protocol over TCP, you should include a header at the beginning of each message (or some other header/footer mechanism) so that the receiving side can split the TCP stream back into individual messages, either when a message is received in two parts, or when several messages are received as a chunk.
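To make that header idea concrete, here is a rough sketch of a 4-byte length-prefix scheme over a TCP stream; the prefix convention and function names are assumptions, and error handling is minimal:
/* Sketch: length-prefixed messages over TCP. Each message is preceded by a
 * 4-byte big-endian length field (an assumed convention, not part of TCP). */
#include <arpa/inet.h>    /* ntohl */
#include <stddef.h>
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Read exactly `len` bytes, looping because recv() may return partial data. */
static int read_exact(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)
            return -1;                 /* error or peer closed the connection */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Receive one framed message into `buf` (capacity `cap`); returns its length. */
ssize_t recv_message(int fd, void *buf, size_t cap)
{
    uint32_t netlen;
    if (read_exact(fd, &netlen, sizeof netlen) < 0)
        return -1;
    uint32_t len = ntohl(netlen);
    if (len > cap)
        return -1;                     /* message larger than the caller's buffer */
    if (read_exact(fd, buf, len) < 0)
        return -1;
    return (ssize_t)len;
}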
Fragmentation should be transparent to a TCP application. Keep in mind that TCP is a stream protocol: you get a stream of data, not packets! If you are building your application based on the idea of complete data packets then you will have problems unless you add an abstraction layer to assemble whole packets from the stream and then pass the packets up to the application.
The question makes an assumption that is not true -- TCP does not deliver packets to its endpoints, rather, it sends a stream of bytes (octets). If an application writes two strings into TCP, it may be delivered as one string on the other end; likewise, one string may be delivered as two (or more) strings on the other end.
RFC 793, Section 1.5:
"The TCP is able to transfer a
continuous stream of octets in each
direction between its users by
packaging some number of octets into
segments for transmission through the
internet system."
The key words being continuous stream of octets (bytes).
RFC 793, Section 2.8:
"There is no necessary relationship
between push functions and segment
boundaries. The data in any particular
segment may be the result of a single
SEND call, in whole or part, or of
multiple SEND calls."
The entirety of section 2.8 is relevant.
At the application layer there are any number of reasons why the whole 1500 bytes may not show up in one read. Various factors in the internal operating system and TCP stack may cause the application to get some bytes in one read call, and some in the next. Yes, the TCP stack has to re-assemble the packet before sending it up, but that doesn't mean your app is going to get it all in one shot (it is LIKELY to arrive in one read, but it's not GUARANTEED).
TCP tries to guarantee in-order delivery of bytes, with error checking, automatic re-sends, etc happening behind your back. Think of it as a pipe at the app layer and don't get too bogged down in how the stack actually sends it over the network.
This page is a good source of information about some of the issues that others have brought up, namely the need for data encapsulation on an application-protocol-by-application-protocol basis. Not quite authoritative in the sense you describe, but it has examples and is sourced to some pretty big names in network programming.
If a packet exceeds the maximum MTU of a network device it will be broken up into multiple packets. (Note most equipment is set to 1500 bytes, but this is not a necessity.)
The reconstruction of the packet should be entirely transparent to the applications.
Different network segments can have different MTU values. In that case fragmentation can occur. For more information see TCP Maximum segment size
This (de)fragmentation happens in the TCP/IP stack. In the application layer there are no more packets; TCP presents a contiguous data stream to the application.
A the "application layer" a TCP packet (well, segment really; TCP at its own layer doesn't know from packets) is never fragmented, since it doesn't exist. The application layer is where you see the data as a stream of bytes, delivered reliably and in order.
If you're thinking about it otherwise, you're probably approaching something in the wrong way. However, this is not to say that there might not be a layer above this, say, a sequence of messages delivered over this reliable, in-order bytestream.
Correct - the most informative way to see this is using Wireshark, an invaluable tool. Take the time to figure it out - it has saved me several times, and it gives a good reality check.
If a 3000 byte packet enters an Ethernet network with a default MTU size of 1500 (for Ethernet), it will be fragmented into multiple packets, each at most 1500 bytes in length. That is the only time I can think of.
Wireshark is your best bet for checking this. I have been using it for a while and am totally impressed

C : Linux Sockets: Recvfrom() too slow in fetching the UDP Packets

I am receiving UDP packets at a rate of 10 Mbps. Each packet is around 1109 bytes.
So I am receiving more than one packet per millisecond on eth0. The recvfrom() in C receives the packet and passes it on to Java. Java does the filtering of the packets and the necessary processing.
The bottlenecks are:
recvfrom() is too slow: fetching takes more than 10 ms, possibly because it does not get the CPU.
Passing of the packet from C to Java through the interface(JNI) takes 1-2 ms.
The processing of packet in Java itself takes 0.5 to 1 second depending on if database insertion or image processing needs to be done.
So, the problem is many delays add up and more than half of the packets are lost.
The possible solutions could be:
Exclude the need for C's recvfrom() completely and implement UDP fetching directly in Java (Note: in the first place, C's recvfrom() was implemented to receive raw packets and not UDP). This solution may reduce the JNI transfer delay.
Implement multi-threading on the UDP receive function in Java. But then an ID would be required in the UDP packets for the sequence, because in multi-threading the order of incoming packets is not guaranteed. (However, in this particular program, there is a need for packets to be ordered.) This solution might be helpful in receiving all packets, but the protocol which sends the data needs to be modified to add a sequence identifier. Due to multi-threading, the receiver might have higher chances to get the CPU and packets can be fetched quickly.
In Java, a blocking queue can be implemented as a huge buffer which stores the incoming packets. The Java parser can then take packets from this queue and process them. However, it is not certain whether the receiver function will be fast enough to put all the received packets in the queue without dropping any.
I would like to know which of the solutions could be optimal or a combination of the above solutions will work. Any help or suggestions would be greatly appreciated.
How long is this burst going on? Is it continuous and will go on forever? Then you need beefier hardware that can handle the load. Possibly some load-balancing where multiple servers handle the incoming data.
Does the burst only last a short while, like at most a second or two? Then have the lower levels read all packets as fast as they can and put them in a queue, and let the upper levels get the messages from the queue in their own time.
It sounds like you may be calling recvfrom() with your socket in blocking mode, in which case it will not return until the next packet arrives. If the packets are being sent at 10ms intervals, perhaps due to some delay on the sending side, then a loop of blocking recvfrom() calls would appear to take 10ms each. Set the socket to non-blocking and use something like select() to decide when to call it. Then profile everything to see where the real bottleneck lies. My bet would be on one or more of the JNI passthroughs.
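A minimal sketch of that suggestion, assuming a POSIX UDP socket that is already created and bound (the buffer size and 1-second poll timeout are arbitrary):
/* Sketch: non-blocking recvfrom() driven by select(), so the loop never stalls
 * waiting for a packet. `sock` is assumed to be a bound UDP socket. */
#include <fcntl.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

void receive_loop(int sock)
{
    fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);

    char buf[2048];
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(sock, &readfds);

        struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };  /* 1 s poll timeout */
        int ready = select(sock + 1, &readfds, NULL, NULL, &tv);
        if (ready <= 0)
            continue;                       /* timeout or (ignored) error */

        /* Drain everything that is queued before going back to select(). */
        for (;;) {
            ssize_t n = recvfrom(sock, buf, sizeof buf, 0, NULL, NULL);
            if (n < 0)
                break;                      /* EWOULDBLOCK: queue is empty */
            /* hand the n-byte datagram to the rest of the application here */
        }
    }
}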
Note that recvfrom() is not a "C" function, it is a system call. Java's functions just add layers on top of it.

Strange lag when sending large message using tcp_nodelay

I'm writing a C program that sends the output of a bash shell over a TCP connection. To make my program more responsive, I used setsockopt() to enable TCP_NODELAY, which disables Nagle's buffering algorithm. This worked great, except that rarely there is a lag in large messages, as in messages of more than around 500 bytes (probably 512). The first 500 bytes will go through (quickly in small messages), then there'll be a 1-2 second delay before the rest is received all at once. This only happens once every 10-15 times a large message is received. On the server side, the message is being written to the socket one byte at a time, and all of the bytes are available, so this behavior is unexpected to me.
My best guess is that there's a 512 byte buffer somewhere in the socket that's causing a block? I did some time tests to see where the lag is, and I'm pretty sure it's the socket itself where the lag is occurring. All of the data on the server side is written without blocking, but the client receives the end of the message after a lag. However I used getsockopt() to find the socket's receive and send buffers, and they are well over 512 bytes - 66000 and 130000 respectively. On the client side, I'm using express js to receive the data in a handler (app.on('data', function(){})). But I read that this express function does not buffer data?
Would anyone have a guess why this is happening? Thanks!
Since TCP_NODELAY means send every piece of data as a packet as soon as possible without combining data together, it sounds like you are sending tons of packets. Since you are writing one byte at a time it could send packets with just one byte of payload and a much bigger frame.
This would work fine most of the time but as soon as the first packet drops for whatever reason the receiver would need to go into error-correction mode on the TCP socket to ask for retransmission of the dropped packet. That would incur at least one round-trip latency and perhaps several. It sounds like you are getting lucky for the first several hundred packets (500 bytes worth) and then typically hitting your first packet drop and slowing way down due to error correction.
One simple solution might be to write in larger chunks, say 10 bytes at a time, instead of 1 byte so that the chance of hitting a dropped packet is much less. Then you would expect to see this problem as often as you do only for messages around 5000 bytes or so.
In general setting TCP_NODELAY will cause things to go faster at first but wind up hitting the first dropped packet sooner simply because TCP_NODELAY will not decrease the number of packets you send per amount of data. So it increases or leaves the number of packets the same, which means your chance of hitting a dropped packet within a certain amount of data will go up. There is a tradeoff here between interactive feel and first hiccup. By avoiding TCP_NODELAY you can delay, on average, the typical amount of data that will be sent before the first error retransmission is hit.
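For reference, here is a rough sketch of both knobs discussed above: enabling TCP_NODELAY with setsockopt(), and coalescing single bytes into larger writes so fewer packets hit the wire. The buffer size and flush policy are arbitrary choices, not the poster's code:
/* Sketch: enable TCP_NODELAY, but coalesce output into larger chunks so the
 * number of packets on the wire stays reasonable. Sizes are arbitrary. */
#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_NODELAY */
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

static char outbuf[1024];
static size_t outlen = 0;

void enable_nodelay(int fd)
{
    int one = 1;
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}

/* Buffer a byte; flush once a chunk has accumulated instead of sending 1-byte packets. */
void send_byte(int fd, char c)
{
    outbuf[outlen++] = c;
    if (outlen == sizeof outbuf) {
        send(fd, outbuf, outlen, 0);   /* error handling omitted in this sketch */
        outlen = 0;
    }
}

/* Call when the shell output pauses so the tail of the buffer still goes out. */
void flush_output(int fd)
{
    if (outlen > 0) {
        send(fd, outbuf, outlen, 0);
        outlen = 0;
    }
}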
Get a network capture using tcpdump or Wireshark. Review the packet transmission timeline; this will help distinguish network problems from software implementation issues. If you see retransmissions you may have a network issue; if you see slow ACKs you might find it better to NOT use 'No Delay', since ACK delay can stall a 'No Delay' connection.

Streaming audio udp C

I'm writing a program with a client and a server, and I have almost achieved it.
At the moment I can run the server on a port, and run the client on the same port with the IP address and the name of the .wav file that I want to read.
Now what I'd like to do is add a timeout between each sendto() so that the client receives the packets and reads them properly. Without that, the client receives many packets at once and loses many of them.
So could someone tell me how this works in UDP, and how to do that?
add a timeout between each sendto()
I believe that you are asking how to put a small delay between each sendto(). If you open a raw wav file and send bytes, there is a good chance that the data will be getting to the client much faster than it can play it. If you want to stream data at the same rate as it is played, send data in chunks, then let the client request the next chunk.
If that is not an option, you can send a chunk of data (i.e. 20 ms), then let the thread sleep for a little less than 20 ms, then send the next chunk. Sleeps are kind of a hack; some sort of audio callback would be best on the server. Bottom line is that your client buffer has to be big enough to consume the amount of data your server is sending.
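As a rough sketch of that sleep-based pacing, assuming 44.1 kHz 16-bit stereo PCM so that a 20 ms chunk works out to a fixed byte count (the numbers are placeholders for whatever your .wav actually contains):
/* Sketch: send a wav file in ~20 ms chunks, sleeping between sendto() calls so
 * data leaves at roughly playback rate. Assumes 44100 Hz, 16-bit, stereo PCM. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <time.h>

#define BYTES_PER_20MS (44100 / 50 * 2 * 2)   /* 882 frames * 2 bytes * 2 channels */

void stream_file(int sock, FILE *wav,
                 const struct sockaddr *dest, socklen_t destlen)
{
    char chunk[BYTES_PER_20MS];   /* ~3.5 kB; a datagram this size gets IP-fragmented,
                                   * so smaller chunks may be preferable */
    size_t n;

    while ((n = fread(chunk, 1, sizeof chunk, wav)) > 0) {
        sendto(sock, chunk, n, 0, dest, destlen);

        /* Sleep slightly less than 20 ms; a real streamer would use a clock
         * or an audio callback instead of open-loop sleeps. */
        struct timespec ts = { 0, 18 * 1000 * 1000 };   /* 18 ms */
        nanosleep(&ts, NULL);
    }
}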
without that the client receives many packets at once and loses many of them
I believe that you are asking how to deal with the variety of packet inter-arrival rates and the packet losses and out-of-order packets received. It sounds like you were just sending packets at a faster rate than your client could handle. You might need a larger buffer on the client.
In any case, with UDP/IP, you have the following scenarios
lost packets
packets arriving out of order
packets arriving in bursts (each packet will not arrive exactly X ms apart)
To deal with this, you have to minimally have what is known as a dejitter buffer. This is a buffer that collects packets as they arrive and inserts them, typically into a ring buffer. The buffer will have to be large enough to buffer up the packets that your server is sending. Your client is potentially consuming the packets from the buffer slower than the server is sending them (or vice versa). In order to get packets in the right order and deal with losses, you have to detect them. You can detect losses and out-of-order arrivals by simply numbering each packet that is sent. As packets arrive you can put them into the buffer in the correct location. If a packet is lost, you need to deal with that with some sort of loss concealment (playing silence, estimating the lost packet, etc.), which is beyond the scope of this question.
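As a rough single-threaded illustration of such a dejitter buffer, here is a sketch of a small ring buffer indexed by sequence number; the sizes and packet layout are invented, and real code would need locking between the network and playback sides:
/* Sketch: a tiny dejitter buffer. Arriving packets are slotted by sequence
 * number; the consumer pulls them back out in order. Layout is an assumption. */
#include <stdint.h>
#include <string.h>

#define RING_SLOTS   64      /* must be a power of two for the mask below */
#define MAX_PAYLOAD  1500

struct slot {
    int      filled;
    uint32_t seq;
    size_t   len;
    unsigned char data[MAX_PAYLOAD];
};

static struct slot ring[RING_SLOTS];
static uint32_t play_seq = 0;          /* next sequence number to hand to playback */

/* Called from the network side: store the packet at the slot its seq maps to. */
void jitter_put(uint32_t seq, const unsigned char *payload, size_t len)
{
    if (seq < play_seq || seq >= play_seq + RING_SLOTS || len > MAX_PAYLOAD)
        return;                        /* too late, too far ahead, or too big */
    struct slot *s = &ring[seq & (RING_SLOTS - 1)];
    s->filled = 1;
    s->seq = seq;
    s->len = len;
    memcpy(s->data, payload, len);
}

/* Called from the playback side: returns payload length, or 0 if the packet
 * never arrived (caller conceals the loss, e.g. by playing silence). */
size_t jitter_get(unsigned char *out)
{
    struct slot *s = &ring[play_seq & (RING_SLOTS - 1)];
    size_t len = 0;
    if (s->filled && s->seq == play_seq) {
        memcpy(out, s->data, s->len);
        len = s->len;
        s->filled = 0;
    }
    play_seq++;                        /* advance even on loss */
    return len;
}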
The RTP protocol is designed for streaming and is an application protocol that works over UDP.
Since you're using UDP, which is connectionless, you don't really have a way to control the flow of packets unless you implement some kind of acknowledgement mechanism... at which point you might as well be using TCP because it already has that built in.
Although I don't have much experience in network programming, this looks a bit more complicated than it might seem at first glance. So UDP is connectionless. That speeds things up a lot, but there is a price to pay -- off the top of my head, packets can get lost or arrive out of order.
Those are situations you need to handle on the client end. Your client needs to be designed so that it accepts packets as they arrive at an arbitrary rate, skips over those that fail to arrive within a certain time (for live streaming; for buffered playback that doesn't matter), and takes order into consideration, which means that each packet needs to contain information about its place relative to previous packets.

Winsock UDP packets being dropped?

We have a client/server communication system over UDP set up in Windows. The problem we are facing is that when the throughput grows, packets are getting dropped. We suspect that this is due to the UDP receive buffer which is continuously being polled, causing the buffer to be blocked and dropping any incoming packets. Is it possible that reading this buffer will cause incoming packets to be dropped? If so, what are the options to correct this? The system is written in C. Please let me know if this is too vague and I can try to provide more info. Thanks!
The default socket buffer size in Windows sockets is 8k, or 8192 bytes. Use the setsockopt Windows function to increase the size of the buffer (refer to the SO_RCVBUF option).
But beyond that, increasing the size of your receive buffer will only delay the time until packets get dropped again if you are not reading the packets fast enough.
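A minimal sketch of that buffer-size change (the 1 MB figure is arbitrary, and the OS may clamp or round the request, so it is worth reading the value back):
/* Sketch: enlarge the UDP receive buffer on a Winsock socket. The 1 MB value
 * is arbitrary; check the result with getsockopt() since the OS may adjust it. */
#include <winsock2.h>
#include <stdio.h>

void grow_rcvbuf(SOCKET s)
{
    int size = 1 * 1024 * 1024;                       /* request 1 MB */
    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                   (const char *)&size, sizeof size) == SOCKET_ERROR) {
        fprintf(stderr, "setsockopt failed: %d\n", WSAGetLastError());
        return;
    }

    int actual = 0, len = sizeof actual;
    if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, (char *)&actual, &len) == 0)
        printf("receive buffer is now %d bytes\n", actual);
}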
Typically, you want two threads for this kind of situation.
The first thread exists solely to service the socket. In other words, the thread's sole purpose is to read a packet from the socket, add it to some kind of properly-synchronized shared data structure, signal that a packet has been received, and then read the next packet.
The second thread exists to process the received packets. It sits idle until the first thread signals a packet has been received. It then pulls the packet from the properly-synchronized shared data structure and processes it. It then waits to be signaled again.
As a test, try short-circuiting the full processing of your packets and just write a message to the console (or a file) each time a packet has been received. If you can successfully do this without dropping packets, then breaking your functionality into a "receiving" thread and a "processing" thread will help.
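Here is a bare-bones sketch of that two-thread layout using a mutex/condition-variable queue; it is shown with POSIX sockets and pthreads for brevity, but the same shape applies with Winsock and Windows threads (queue sizes are arbitrary):
/* Sketch: a receive thread that only reads from the socket and enqueues, plus
 * a worker thread that dequeues and processes. Fixed-size queue, pthreads. */
#include <pthread.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

#define QSIZE    256
#define MAXDGRAM 2048

struct pkt { size_t len; unsigned char data[MAXDGRAM]; };

static struct pkt queue[QSIZE];
static int qhead = 0, qtail = 0, qcount = 0;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;

/* Receive thread: do nothing but read the socket and hand packets over. */
void *recv_thread(void *arg)
{
    int sock = *(int *)arg;
    unsigned char buf[MAXDGRAM];
    for (;;) {
        ssize_t n = recvfrom(sock, buf, sizeof buf, 0, NULL, NULL);
        if (n <= 0)
            continue;
        pthread_mutex_lock(&qlock);
        if (qcount < QSIZE) {                     /* drop if the queue is full */
            queue[qtail].len = (size_t)n;
            memcpy(queue[qtail].data, buf, (size_t)n);
            qtail = (qtail + 1) % QSIZE;
            qcount++;
            pthread_cond_signal(&qcond);
        }
        pthread_mutex_unlock(&qlock);
    }
    return NULL;
}

/* Worker thread: wait for packets and do the slow processing here. */
void *work_thread(void *arg)
{
    (void)arg;
    struct pkt p;
    for (;;) {
        pthread_mutex_lock(&qlock);
        while (qcount == 0)
            pthread_cond_wait(&qcond, &qlock);
        p = queue[qhead];
        qhead = (qhead + 1) % QSIZE;
        qcount--;
        pthread_mutex_unlock(&qlock);
        /* process p.data / p.len here, outside the lock */
    }
    return NULL;
}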
Yes, the stack is allowed to drop packets — silently, even — when its buffers get too full. This is part of the nature of UDP, one of the bits of reliability you give up when you switch from TCP. You can either reinvent TCP — poorly — by adding retry logic, ACK packets, and such, or you can switch to something in-between like SCTP.
There are ways to increase the stack's buffer size, but that's largely missing the point. If you aren't reading fast enough to keep buffer space available already, making the buffers larger is only going to put off the time it takes you to run out of buffer space. The proper solution is to make larger buffers within your own code, and move data from the stack's buffers into your program's buffer ASAP, where it can wait to be processed for arbitrarily long times.
Is it possible that reading this buffer will cause incoming packets to be dropped?
Packets can be dropped if they're arriving faster than you read them.
If so, what are the options to correct this?
One option is to change the network protocol: use TCP, or implement some acknowledgement + 'flow control' using UDP.
Otherwise you need to see why you're not reading fast/often enough.
If the CPU is 100% utilitized then you need to do less work per packet or get a faster CPU (or use multithreading and more CPUs if you aren't already).
If the CPU is not 100%, then perhaps what's happening is:
You read a packet
You do some work, which takes x msec of real-time, some of which is spent blocked on some other I/O (so the CPU isn't busy, but it's not being used to read another packet)
During those x msec, a flood of packets arrive and some are dropped
A cure for this would be to change the threading.
Another possibility is to do several simultaneous reads from the socket (each of your reads provides a buffer into which a UDP packet can be received).
Another possibility is to see whether there's an (O/S-specific) configuration option to increase the number of received UDP packets which the network stack is willing to buffer until you try to read them.
First step, increase the receiver buffer size, Windows pretty much grants all reasonable size requests.
If that doesn't help, your consuming code seems to have some fairly slow areas. I would use threading, e.g. with pthreads, and utilize a producer-consumer pattern to put the incoming datagrams in a queue on another thread and then consume from there, so your receive calls don't block and the buffer does not run full.
3rd step, modify your application level protocol, allow for batched packets and batch packets at the sender to reduce UDP header overhead from sending a lot of small packets.
4th step, check your network gear; switches etc. can give you detailed output about their traffic statistics, buffer overflows, and so on - if that is an issue, get faster switches or possibly switch out a faulty one.
... just FYI, I'm running UDP multicast traffic on our backend continuously at avg. ~30 Mbit/s with peaks at 70 Mbit/s, and my drop rate is basically nil.
Not sure about this, but on Windows, it's not possible to poll the socket and cause a packet to drop. Windows collects the packets separately from your polling and it shouldn't cause any drops.
I am assuming you're using select() to poll the socket? As far as I know, that can't cause a drop.
The packets could be lost due to an increase in unrelated network traffic anywhere along the route, or full receive buffers. To mitigate this, you could increase the receive buffer size in Winsock.
Essentially, UDP is an unreliable protocol in the sense that packet delivery is not guaranteed and no error is returned to the sender on delivery failure. If you are worried about packet loss, it would be best to implement acknowledgment packets into your communication protocol, or to port it to a more reliable protocol like TCP. There really aren't any other truly reliable ways to prevent UDP packet loss.
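If you do go the acknowledgement route, the simplest shape is stop-and-wait: tag each datagram with a sequence number, wait for a matching ACK with a receive timeout, and resend on timeout. A rough sketch, where the header layout, timeout, and retry count are all arbitrary choices:
/* Sketch: stop-and-wait send with ACK and retry over UDP. The 4-byte
 * sequence header and the ACK format are invented conventions. */
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

#define MAX_RETRIES 5

int send_reliable(int sock, uint32_t seq, const void *payload, size_t len,
                  const struct sockaddr *dest, socklen_t destlen)
{
    unsigned char pkt[4 + 1500];
    if (len > sizeof pkt - 4)
        return -1;

    uint32_t netseq = htonl(seq);
    memcpy(pkt, &netseq, 4);
    memcpy(pkt + 4, payload, len);

    /* Give recvfrom() a 200 ms timeout so we can retry on silence. */
    struct timeval tv = { 0, 200 * 1000 };
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
        sendto(sock, pkt, 4 + len, 0, dest, destlen);

        uint32_t ack;
        ssize_t n = recvfrom(sock, &ack, sizeof ack, 0, NULL, NULL);
        if (n == (ssize_t)sizeof ack && ntohl(ack) == seq)
            return 0;                  /* acknowledged */
        /* timeout or wrong/duplicate ACK: fall through and resend */
    }
    return -1;                         /* gave up after MAX_RETRIES attempts */
}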
