We have a client/server communication system over UDP set up in Windows. The problem we are facing is that when the throughput grows, packets are getting dropped. We suspect that this is due to the UDP receive buffer, which is continuously being polled, causing the buffer to be blocked and incoming packets to be dropped. Is it possible that reading this buffer will cause incoming packets to be dropped? If so, what are the options to correct this? The system is written in C. Please let me know if this is too vague and I can try to provide more info. Thanks!
The default socket buffer size in Windows sockets is 8 KB, or 8192 bytes. Use the Windows setsockopt function to increase the size of the buffer (refer to the SO_RCVBUF option).
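As a rough illustration, here is a minimal sketch of that call on an already-created Winsock UDP socket; the 1 MB figure is an arbitrary example, and the read-back only shows what the stack actually granted.

```c
/* Minimal sketch: enlarge the receive buffer of an already-created Winsock
 * UDP socket. The 1 MB figure is an arbitrary example; pick what your
 * traffic actually needs. */
#include <winsock2.h>
#include <stdio.h>

int enlarge_rcvbuf(SOCKET s)
{
    int requested = 1 * 1024 * 1024;    /* requested size in bytes */

    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                   (const char *)&requested, sizeof(requested)) == SOCKET_ERROR) {
        fprintf(stderr, "setsockopt(SO_RCVBUF) failed: %d\n", WSAGetLastError());
        return -1;
    }

    /* Read the value back to see what the stack actually granted. */
    int granted = 0;
    int len = (int)sizeof(granted);
    if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, (char *)&granted, &len) == 0)
        printf("SO_RCVBUF is now %d bytes\n", granted);

    return 0;
}
```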
But beyond that, increasing the size of your receive buffer will only delay the time until packets get dropped again if you are not reading the packets fast enough.
Typically, you want two threads for this kind of situation.
The first thread exists solely to service the socket. In other words, the thread's sole purpose is to read a packet from the socket, add it to some kind of properly-synchronized shared data structure, signal that a packet has been received, and then read the next packet.
The second thread exists to process the received packets. It sits idle until the first thread signals a packet has been received. It then pulls the packet from the properly-synchronized shared data structure and processes it. It then waits to be signaled again.
As a test, try short-circuiting the full processing of your packets and just write a message to the console (or a file) each time a packet has been received. If you can successfully do this without dropping packets, then breaking your functionality into a "receiving" thread and a "processing" thread will help.
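As a rough sketch of that receive/process split, using Win32 threads: all names such as pkt_queue and QUEUE_SLOTS are invented for this illustration, and the queue-full case, error paths, and shutdown handling are deliberately omitted.

```c
/* Skeleton of the receive/process split: thread 1 drains the socket into a
 * shared ring of packet slots, thread 2 does the heavy work. Names such as
 * pkt_queue are invented; queue-full handling, errors and shutdown omitted. */
#include <winsock2.h>
#include <windows.h>

#define QUEUE_SLOTS 1024
#define MAX_DGRAM   2048

struct packet { int len; char data[MAX_DGRAM]; };

static struct packet pkt_queue[QUEUE_SLOTS];
static int q_head, q_tail;                   /* head = consumer, tail = producer */
static CRITICAL_SECTION q_lock;
static CONDITION_VARIABLE q_nonempty;

static DWORD WINAPI receiver_thread(LPVOID arg)   /* reads packets, nothing else */
{
    SOCKET s = (SOCKET)(ULONG_PTR)arg;
    for (;;) {
        struct packet p;
        p.len = recv(s, p.data, (int)sizeof(p.data), 0);
        if (p.len <= 0)
            continue;                         /* real code would handle errors */

        EnterCriticalSection(&q_lock);
        pkt_queue[q_tail] = p;
        q_tail = (q_tail + 1) % QUEUE_SLOTS;
        LeaveCriticalSection(&q_lock);
        WakeConditionVariable(&q_nonempty);   /* signal: a packet is available */
    }
}

static DWORD WINAPI processor_thread(LPVOID arg)  /* does the actual work */
{
    (void)arg;
    for (;;) {
        EnterCriticalSection(&q_lock);
        while (q_head == q_tail)              /* queue empty: wait for a signal */
            SleepConditionVariableCS(&q_nonempty, &q_lock, INFINITE);
        struct packet p = pkt_queue[q_head];
        q_head = (q_head + 1) % QUEUE_SLOTS;
        LeaveCriticalSection(&q_lock);

        /* ... process p.data / p.len here, as slowly as necessary ... */
    }
}

void start_threads(SOCKET s)
{
    InitializeCriticalSection(&q_lock);
    InitializeConditionVariable(&q_nonempty);
    CreateThread(NULL, 0, receiver_thread, (LPVOID)(ULONG_PTR)s, 0, NULL);
    CreateThread(NULL, 0, processor_thread, NULL, 0, NULL);
}
```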
Yes, the stack is allowed to drop packets — silently, even — when its buffers get too full. This is part of the nature of UDP, one of the bits of reliability you give up when you switch from TCP. You can either reinvent TCP — poorly — by adding retry logic, ACK packets, and such, or you can switch to something in-between like SCTP.
There are ways to increase the stack's buffer size, but that's largely missing the point. If you aren't reading fast enough to keep buffer space available already, making the buffers larger is only going to put off the time it takes you to run out of buffer space. The proper solution is to make larger buffers within your own code, and move data from the stack's buffers into your program's buffer ASAP, where it can wait to be processed for arbitrarily long times.
Is it possible that reading this buffer will cause incoming packets to be dropped?
Packets can be dropped if they're arriving faster than you read them.
If so, what are the options to correct this?
One option is to change the network protocol: use TCP, or implement some acknowledgement + 'flow control' using UDP.
Otherwise you need to see why you're not reading fast/often enough.
If the CPU is 100% utilized then you need to do less work per packet or get a faster CPU (or use multithreading and more CPUs if you aren't already).
If the CPU is not 100%, then perhaps what's happening is:
You read a packet
You do some work, which takes x msec of real-time, some of which is spent blocked on some other I/O (so the CPU isn't busy, but it's not being used to read another packet)
During those x msec, a flood of packets arrive and some are dropped
A cure for this would be to change the threading, so that one thread does nothing but read packets while another does the work.
Another possibility is to do several simultaneous reads from the socket (each of your reads provides a buffer into which a UDP packet can be received).
Another possibility is to see whether there's a (O/S-specific) configuration option to increase the number of received UDP packets which the network stack is willing to buffer until you try to read them.
First step: increase the receive buffer size; Windows grants pretty much all reasonable size requests.
If that doesn't help, your consuming code seems to have some fairly slow areas. I would use threading, e.g. with pthreads, and a producer-consumer pattern: put the incoming datagrams in a queue on another thread and consume from there, so your receive calls don't block and the buffer does not fill up.
3rd step: modify your application-level protocol to allow for batched packets, and batch packets at the sender to reduce the UDP header overhead from sending a lot of small packets (see the sketch below).
4th step: check your network gear; switches etc. can give you detailed output about their traffic statistics, buffer overflows, and so on. If that is an issue, get faster switches or possibly swap out a faulty one.
... just FYI, I'm running UDP multicast traffic on our backend continuously at an average of ~30 Mbit/s with peaks at 70 Mbit/s, and my drop rate is practically nil.
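For what it's worth, here is one shape the batching in the third step could take; the 2-byte length prefix per record and the 1400-byte budget are invented for this sketch, not part of any existing protocol.

```c
/* Illustration of sender-side batching: pack several small records into one
 * datagram using a 2-byte length prefix per record. The framing is invented
 * for this sketch; keep the total under the path MTU to avoid fragmentation. */
#include <string.h>
#include <stdint.h>
#include <stddef.h>

#define BATCH_MAX 1400      /* conservative payload budget */

struct batch {
    size_t used;
    unsigned char buf[BATCH_MAX];
};

/* Returns 0 on success, -1 if the record does not fit (send and reset first). */
int batch_append(struct batch *b, const void *rec, uint16_t rec_len)
{
    if (b->used + 2u + rec_len > BATCH_MAX)
        return -1;
    b->buf[b->used]     = (unsigned char)(rec_len >> 8);   /* big-endian length */
    b->buf[b->used + 1] = (unsigned char)(rec_len & 0xff);
    memcpy(b->buf + b->used + 2, rec, rec_len);
    b->used += 2u + rec_len;
    return 0;
}

/* When the batch is full (or a short flush timer expires), transmit b->buf
 * with a single sendto() and reset b->used to 0. */
```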
Not sure about this, but on Windows, it's not possible to poll the socket and cause a packet to drop. Windows collects the packets separately from your polling, and it shouldn't cause any drops.
I am assuming you're using select() to poll the socket? As far as I know, that can't cause a drop.
The packets could be lost due to an increase in unrelated network traffic anywhere along the route, or full receive buffers. To mitigate this, you could increase the receive buffer size in Winsock.
Essentially, UDP is an unreliable protocol in the sense that packet delivery is not guaranteed and no error is returned to the sender on delivery failure. If you are worried about packet loss, it would be best to implement acknowledgment packets into your communication protocol, or to port it to a more reliable protocol like TCP. There really aren't any other truly reliable ways to prevent UDP packet loss.
Say you have a process that receives a large file from a server.
If you do not perform a recv() call does it stay on a buffer of your ethernet controller forever?
If another process needs to receive data and the buffer is full from another process, does it need to wait until the other process performs a recv() or the buffer times out?
If you have multiple processes sending and receiving data, does each have to wait until the buffer is empty? Or can it multiplex them and keep track at the driver level or in some part of the socket library?
If you do not perform a recv() call does it stay on a buffer of your ethernet controller forever?
No, data never stays in the Ethernet controller's buffer for very long; the kernel will read the data out of the Ethernet controller's buffer and into your socket's buffer (in the computer's regular RAM) as quickly as it can. If your socket's buffer is full, then the incoming data will be discarded.
If another process needs to receive data and the buffer is full from another process, does it need to wait until the other process performs a recv() or the buffer times out?
Each socket has its own separate buffer in the computer's main RAM, and each process has its own socket(s), so processes do not have to wait for each others' buffers to empty.
If you have multiple processes sending and receiving data does it have to wait until the buffer is empty?
See the answer to question 2, as it answers this question also.
This is a bit of a "perfectly spherical chicken in a vacuum" type of answer. But your question is very broad and has a lot of what-ifs depending on the NIC, the OS, and many other things.
But let's assume you are on a modernish full-blown OS, with a modernish Ethernet controller.
No. That is all handled by the kernel and the protocol stack. The kernel can't let the buffer on the network controller fill up while it's waiting for you; otherwise it would block other processes from accessing the network. So it will buffer the data up until you are ready. For some protocols there are mechanisms where one device can tell the other device not to send any more data (e.g. the TCP receive window size: once the sender has sent that amount of data, it will stop until the receiver acknowledges some of it).
It's basically the same answer as above: the OS handles the details. From your point of view, your recv() will not block any other process's ability to recv().
This is more interesting. Modern NICs are queue-based: you have some number of transmit/receive queues, and in most cases filters can be attached to them. This allows the NIC to do a lot of the functionality that would normally have to be done by the OS (that's called offloading), but back to the point. With these NICs, you can have multiple I/O streams without multiplexing. Though generally, especially on consumer-grade NICs, the number of queues will be pretty low. Usually 4. So there will be some multiplexing involved.
I am receiving UDP packets at a rate of 10 Mbps. Each packet is around 1109 bytes.
So I am receiving more than one packet per millisecond on eth0. The recvfrom() in C receives the packet and passes it on to Java. Java does the filtering of the packets and the necessary processing.
The bottlenecks are:
recvfrom() is too slow: fetching takes more than 10 ms, possibly because it does not get the CPU.
Passing of the packet from C to Java through the interface(JNI) takes 1-2 ms.
The processing of packet in Java itself takes 0.5 to 1 second depending on if database insertion or image processing needs to be done.
So, the problem is many delays add up and more than half of the packets are lost.
The possible solutions could be:
Exclude the need for C's recvfrom() completely and implement UDP fetching directly in Java (note: in the first place, C's recvfrom() was implemented to receive raw packets and not UDP). This solution may reduce the JNI transfer delay.
Implement multi-threading on the UDP receive function in Java. But then an ID would be required in the UDP packets for sequencing, because with multi-threading the order of incoming packets is not guaranteed. (However, in this particular program, there is a need for packets to be ordered.) This solution might be helpful in receiving all packets, but the protocol which sends the data needs to be modified to add a sequence identifier. Due to multi-threading, the receiver might have a higher chance of getting the CPU and packets can be fetched quickly.
In Java, a blocking queue can be implemented as a huge buffer which stores the incoming packets. The Java parser can then take packets from this queue and process them. However, it is not certain that the receiver function will be fast enough to put all the received packets in the queue without dropping any.
I would like to know which of the solutions could be optimal or a combination of the above solutions will work. Any help or suggestions would be greatly appreciated.
How long is this burst going on? Is it continuous and will go on forever? Then you need beefier hardware that can handle the load. Possibly some load-balancing where multiple servers handle the incoming data.
Does the burst only last a short while, like at most a second or two? Then have the lower levels read all packets as fast as they can and put them in a queue, and let the upper levels get the messages from the queue in their own time.
It sounds like you may be calling recvfrom() with your socket in blocking mode, in which case it will not return until the next packet arrives. If the packets are being sent at 10ms intervals, perhaps due to some delay on the sending side, then a loop of blocking recvfrom() calls would appear to take 10ms each. Set the socket to non-blocking and use something like select() to decide when to call it. Then profile everything to see where the real bottleneck lies. My bet would be on one or more of the JNI passthroughs.
Note that recvfrom() is not a "C" function; it is a system call. Java's functions just add layers on top of it.
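A rough sketch of the select()-plus-non-blocking-recvfrom() loop suggested above (POSIX flavour); the 100 ms timeout and 2 KB buffer are arbitrary example values, and the hand-off to the processing side is left as a comment.

```c
/* Rough sketch of the suggested select() + non-blocking recvfrom() loop
 * (POSIX flavour). The 100 ms timeout and 2 KB buffer are arbitrary values;
 * the hand-off to the processing side is left as a comment. */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <fcntl.h>

void receive_loop(int sock)
{
    fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);

    char buf[2048];
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(sock, &readfds);
        struct timeval tv = { 0, 100 * 1000 };          /* 100 ms */

        if (select(sock + 1, &readfds, NULL, NULL, &tv) <= 0)
            continue;                                   /* timeout or error */

        /* Drain everything that is ready, not just one packet. */
        ssize_t n;
        while ((n = recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL)) > 0) {
            /* hand buf / n off to the processing side here */
        }
    }
}
```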
I'm writing a program with a client and a server, and I have almost got it working.
At the moment I can run the server on a port, and the client on the same port with the IP address and the name of the .wav file that I want to read.
Now what I'd like to do is make a timeout between each sendto() so that the client receives the packets and reads them properly. Without that, the client receives many packets at once and loses many of them.
So could someone tell me how this works in UDP, and how to do it?
making a timeout between each sendto()
I believe that you are asking how to put a small delay between each sendto(). If you open a raw wav file and send bytes, there is a good chance that the data will be getting to the client much faster than it can play it. If you want to stream data at the same rate as it is played, send data in chunks, then let the client request the next chunk.
If that is not an option, you can send a chunk of data (i.e. 20 ms), then let the thread sleep for a little less than 20 ms before sending the next chunk. Sleeps are kind of a hack; some sort of audio callback would be best on the server. The bottom line is that your client buffer has to be big enough to consume the amount of data your server is sending.
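For illustration only, a bare-bones sketch of that sleep-based pacing, assuming roughly 20 ms of audio per chunk; the chunk size and sleep length are made-up values, and a real streamer would pace off an audio clock instead.

```c
/* Bare-bones pacing sketch: send the wav data in fixed chunks and sleep a
 * little less than the chunk's play time between sends. The chunk size and
 * 18 ms delay are illustrative; a real streamer would pace off an audio clock. */
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>
#include <stddef.h>

void stream_buffer(int sock, const struct sockaddr *dst, socklen_t dstlen,
                   const char *data, size_t len, size_t chunk)
{
    for (size_t off = 0; off < len; off += chunk) {
        size_t n = (len - off < chunk) ? len - off : chunk;
        sendto(sock, data + off, n, 0, dst, dstlen);
        usleep(18 * 1000);              /* ~18 ms pause for a ~20 ms audio chunk */
    }
}
```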
without that the client receives many packets at once and it losts many of them
I believe that you are asking how to deal with the variety of packet inter-arrival rates, packet losses, and out-of-order packets. It sounds like you were just sending packets at a faster rate than your client could handle. You might need a larger buffer on the client.
In any case, with UDP/IP, you have the following scenarios
lost packets
packets arriving out of order
packets arriving in bursts: (each packet will not arrive exactly X ms apart)
To deal with this, you minimally have to have what is known as a dejitter buffer. This is a buffer that collects packets as they arrive and inserts them, typically into a ring buffer. The buffer will have to be large enough to buffer up the packets that your server is sending; your client is potentially consuming the packets from the buffer slower than the server is sending them (or vice versa). In order to get packets in the right order and deal with losses, you have to detect them. You can detect losses and out-of-order arrivals by simply numbering each packet that is sent. As packets arrive, you can put them into the buffer at the correct location. If a packet is lost, you need to deal with that with some sort of loss concealment (playing silence, estimating the lost packet, etc.), which is beyond the scope of this question.
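One possible shape for that numbering-plus-ring-buffer scheme, sketched in C; the 4-byte big-endian sequence header, ring size, and payload cap are invented for this illustration, and loss concealment is left out entirely.

```c
/* One possible shape for the sequence numbering and dejitter insertion.
 * The 4-byte big-endian sequence header, ring size and payload cap are all
 * invented for this sketch; loss concealment is left out entirely. */
#include <stdint.h>
#include <string.h>

#define RING_SLOTS  64
#define MAX_PAYLOAD 1400

struct slot { int present; uint32_t seq; int len; char data[MAX_PAYLOAD]; };
static struct slot ring[RING_SLOTS];

/* Sender side: prefix each chunk with a 32-bit sequence number. */
int build_packet(char *out, uint32_t seq, const char *payload, int len)
{
    out[0] = (char)(seq >> 24); out[1] = (char)(seq >> 16);
    out[2] = (char)(seq >> 8);  out[3] = (char)seq;
    memcpy(out + 4, payload, (size_t)len);
    return len + 4;
}

/* Receiver side: each arriving packet lands in the slot its sequence number
 * maps to, so out-of-order arrivals sort themselves out, and an empty slot
 * at play-out time indicates a lost packet. */
void dejitter_insert(const char *pkt, int len)
{
    if (len < 4 || len - 4 > MAX_PAYLOAD)
        return;                                   /* malformed or oversized */
    uint32_t seq = ((uint32_t)(unsigned char)pkt[0] << 24) |
                   ((uint32_t)(unsigned char)pkt[1] << 16) |
                   ((uint32_t)(unsigned char)pkt[2] << 8)  |
                    (uint32_t)(unsigned char)pkt[3];
    struct slot *s = &ring[seq % RING_SLOTS];
    s->present = 1;
    s->seq = seq;
    s->len = len - 4;
    memcpy(s->data, pkt + 4, (size_t)s->len);
}
```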
The RTP protocol is designed for streaming and is an application protocol that works over UDP.
Since you're using UDP, which is connectionless, you don't really have a way to control the flow of packets unless you implement some kind of acknowledgement mechanism... at which point you might as well be using TCP because it already has that built in.
Although I don't have much experience in network programming, this looks a bit more complicated than it might seem at first glance. So UDP is connectionless. That speeds things up a lot, but there is a price to pay -- off the top of my head, packets can get lost or arrive out of order.
Those are situations you need to handle on the client end. Your client needs to be designed so that it accepts packets as they arrive at an arbitrary rate, skips over those that fail to arrive within a certain time (that matters for live streaming; for buffered playback it doesn't), and takes order into consideration, which means that each packet needs to contain information about its place relative to previous packets.
Here is the scenario:
I have a select-based socket server in Linux which processes a single packet at a time. Let's say several packets are coming in at high speed and I hit a breakpoint while my process is at the processing stage of the current packet. My question is: what will happen to the packets that are being sent to my server process non-stop while it is stopped at a breakpoint? Will they get dropped, or will the OS buffer these packets and deliver them to my process when it comes out of the breakpoint?
Though I have some idea, I want to confirm it with the gurus here, and I will probably learn more about socket behavior.
Any help would be appreciated.
The incoming packets will be queued up by the OS kernel until its buffer gets full. Any more packets will simply be dropped, but depending on the type of connection the kernel may signal the other end to stop sending (TCP aka SOCK_STREAM should, UDP aka SOCK_DGRAM probably won't). The sender should be prepared to handle this scenario.
How big the buffer is depends on the system; you may be able to query the size and/or change it (how this is done is usually OS dependent).
It doesn't matter if your process is halted for debugging, just slow, busy waiting for other events, or being swapped in; if it does not read the data from the socket, the data will be queued.
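As a side note, here is a small sketch of querying (and optionally raising) that per-socket buffer on a POSIX system; on Linux the granted value is capped by the net.core.rmem_max sysctl, and the kernel reports roughly double the requested size to account for bookkeeping overhead.

```c
/* Side note: querying (and optionally raising) the per-socket receive buffer
 * on a POSIX system. On Linux the granted value is capped by the
 * net.core.rmem_max sysctl, and the kernel reports roughly double the
 * requested size to account for bookkeeping overhead. */
#include <sys/socket.h>
#include <stdio.h>

void show_and_grow_rcvbuf(int sock, int wanted)
{
    int cur = 0;
    socklen_t len = sizeof(cur);

    getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &cur, &len);
    printf("current SO_RCVBUF: %d bytes\n", cur);

    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &wanted, sizeof(wanted));
    getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &cur, &len);
    printf("after requesting %d: %d bytes\n", wanted, cur);
}
```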
I have an application that sends data point-to-point from a sender to a receiver over a link that can operate in simplex (one-way transmission) or duplex mode (two-way). In simplex mode, the application sends data using UDP, and in duplex it uses TCP. Since a write on a TCP socket may block, we are using non-blocking I/O (ioctl with FIONBIO - O_NONBLOCK and fcntl are not supported on this distribution) and the select() system call to determine when data can be written. Non-blocking I/O is used so that we can abort out of the send early after a timeout should network conditions deteriorate. I'd like to use the same basic code to do the sending but instead change between TCP/UDP at a higher abstraction. This works great for TCP.
However, I am concerned about how non-blocking I/O works for a UDP socket. I may be reading the man pages incorrectly, but since write() may return indicating fewer bytes sent than requested, does that mean that a client will receive fewer bytes in its datagram? To send a given buffer of data, multiple writes may be needed, which may be the case since I am using non-blocking I/O. I am concerned that this will translate into multiple UDP datagrams received by the client.
I am fairly new to socket programming, so please forgive me if I have some misconceptions here. Thank you.
Assuming a correct (not broken) UDP implementation, each send/sendmsg/sendto will correspond to exactly one whole datagram sent, and each recv/recvmsg/recvfrom will correspond to exactly one whole datagram received.
If a UDP message cannot be transmitted in its entirety, you should receive an EMSGSIZE error. A sent message might still fail due to size at some point in the network, in which case it will simply not arrive. But it will not be delivered in pieces (unless the IP stack is severely buggy).
A good rule of thumb is to keep your UDP payload size to at most 1400 bytes. That is very approximate and leaves a lot of room for various forms of tunneling so as to avoid fragmentation.
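To make the datagram-boundary point concrete, a tiny sketch (POSIX flavour, names invented): the whole buffer handed to sendto() either leaves as a single datagram or the call fails, and the 1400-byte check simply applies the rule of thumb above.

```c
/* Tiny illustration of the one-call-one-datagram rule: the whole buffer below
 * either leaves as a single datagram or the call fails (e.g. with EMSGSIZE);
 * the stack never splits it into several datagrams. The 1400-byte check just
 * applies the rule of thumb above. */
#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

int send_record(int sock, const struct sockaddr *dst, socklen_t dstlen,
                const void *payload, size_t len)
{
    if (len > 1400) {                 /* stay well clear of fragmentation */
        fprintf(stderr, "record too large for one datagram; split it yourself\n");
        return -1;
    }
    if (sendto(sock, payload, len, 0, dst, dstlen) < 0) {
        fprintf(stderr, "sendto failed: %s\n", strerror(errno));
        return -1;
    }
    return 0;                          /* the receiver gets exactly one datagram */
}
```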