I am receiving UDP packets at a rate of 10 Mbps. Each packet is around 1109 bytes.
So I am receiving more than one packet per millisecond on eth0. The recvfrom() in C receives the packet and passes it on to Java. Java does the filtering of the packets and the necessary processing.
The bottlenecks are:
recvfrom() is too slow: fetching a packet takes more than 10 ms, possibly because it does not get the CPU.
Passing the packet from C to Java through the interface (JNI) takes 1-2 ms.
The processing of a packet in Java itself takes 0.5 to 1 second, depending on whether database insertion or image processing needs to be done.
So, the problem is that these delays add up and more than half of the packets are lost.
The possible solutions could be:
Exclude the need for C's recvfrom() completely and implement UDP fetching directly in Java. (Note: in the first place, C's recvfrom() was implemented to receive raw packets, not UDP.) This solution may reduce the JNI transfer delay.
Implement multi-threading on the UDP receive function in Java. But then an ID would be required in the UDP packets to preserve the sequence, because with multi-threading the order of incoming packets is not guaranteed (and in this particular program, the packets do need to be ordered). This solution might help in receiving all packets, but the protocol that sends the data would need to be modified to add a sequence identifier. With multi-threading, the receiver has a better chance of getting the CPU, so packets can be fetched more quickly.
In Java, a blocking queue could be used as a large buffer that stores the incoming packets. The Java parser can then take packets from this queue and process them. However, it is not certain that the receive function would be fast enough to put all the received packets into the queue without dropping any.
I would like to know which of these solutions would be optimal, or whether a combination of them would work. Any help or suggestions would be greatly appreciated.
How long does this burst go on? Is it continuous, and will it go on forever? Then you need beefier hardware that can handle the load, possibly with some load-balancing where multiple servers handle the incoming data.
Does the burst only last a short while, like at most a second or two? Then have the lower levels read packets as fast as they can and put them in a queue, and let the upper levels take the messages from the queue in their own time.
It sounds like you may be calling recvfrom() with your socket in blocking mode, in which case it will not return until the next packet arrives. If the packets are being sent at 10ms intervals, perhaps due to some delay on the sending side, then a loop of blocking recvfrom() calls would appear to take 10ms each. Set the socket to non-blocking and use something like select() to decide when to call it. Then profile everything to see where the real bottleneck lies. My bet would be on one or more of the JNI passthroughs.
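To show the shape of that receive loop, here is a rough, untested sketch. It assumes a plain IPv4 UDP socket bound to an arbitrary port 5000 (your raw-socket setup would differ in the socket() call), and the buffer size is just an example:

/* Minimal sketch of a select()-driven, non-blocking UDP receive loop.
 * The port (5000) and buffer size are assumptions for the example. */
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <sys/select.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(5000);            /* assumed port */
    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return 1;
    }

    /* Switch the socket to non-blocking mode. */
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);

    char buf[2048];
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);

        /* Sleep until at least one datagram is ready. */
        if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0) {
            if (errno == EINTR) continue;
            perror("select");
            break;
        }

        /* Drain everything already queued, so one wakeup absorbs a burst. */
        for (;;) {
            ssize_t n = recvfrom(fd, buf, sizeof buf, 0, NULL, NULL);
            if (n < 0) {
                if (errno == EAGAIN || errno == EWOULDBLOCK) break;
                perror("recvfrom");
                break;
            }
            /* hand the n-byte datagram to the next stage here */
        }
    }
    close(fd);
    return 0;
}

The inner loop keeps calling recvfrom() until the kernel queue is empty, which is usually cheaper than waking up once per packet.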
Note that recvfrom() is not a "C" function, it is a system call. Java's functions just add layers on top of it.
Related
I'm writing a C program that sends the output of a bash shell over a TCP connection. To make my program more responsive, I used setsockopt() to enable TCP_NODELAY, which disables Nagle's buffering algorithm. This worked great, except that occasionally there is a lag with large messages, i.e. when the message is more than around 500 bytes (probably 512). The first 500 bytes go through (quickly, as with small messages), then there is a 1-2 second delay before the rest is received all at once. This only happens once every 10-15 times a large message is received. On the server side, the message is being written to the socket one byte at a time, and all of the bytes are available, so this behavior is unexpected to me.
My best guess is that there's a 512-byte buffer somewhere in the socket that's causing a block? I did some timing tests to see where the lag is, and I'm pretty sure it's in the socket itself. All of the data on the server side is written without blocking, but the client receives the end of the message after a lag. However, I used getsockopt() to check the socket's receive and send buffer sizes, and they are well over 512 bytes: 66000 and 130000 respectively. On the client side, I'm using express js to receive the data in a handler (app.on('data', function(){})), but I read that this express function does not buffer data?
Would anyone have a guess why this is happening? Thanks!
Since TCP_NODELAY means send every piece of data as a packet as soon as possible, without combining data together, it sounds like you are sending tons of packets. Since you are writing one byte at a time, it could be sending packets with just one byte of payload and a much bigger frame.
This works fine most of the time, but as soon as the first packet drops for whatever reason, the receiver has to go into error-correction mode on the TCP socket and ask for retransmission of the dropped packet. That incurs at least one round-trip of latency, and perhaps several. It sounds like you are getting lucky for the first several hundred packets (500 bytes' worth) and then typically hitting your first packet drop and slowing way down due to error correction.
One simple solution is to write in larger chunks, say 10 bytes at a time instead of 1 byte, so that the chance of hitting a dropped packet is much smaller. Then you would expect to see this problem only for messages around 5000 bytes or so.
In general, setting TCP_NODELAY makes things go faster at first, but you wind up hitting the first dropped packet sooner, simply because TCP_NODELAY does not decrease the number of packets you send per amount of data. So it increases (or at best leaves unchanged) the number of packets, which means your chance of hitting a dropped packet within a given amount of data goes up. There is a trade-off here between interactive feel and the first hiccup: by avoiding TCP_NODELAY you can, on average, increase the amount of data sent before the first retransmission is hit.
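If you want to try the larger-chunk approach, here is a rough sketch of coalescing the single-byte writes before they hit the socket. The 64-byte chunk size, struct, and helper names are invented for the example, and a real implementation would also loop on partial send() results:

/* Sketch: coalesce single-byte writes into larger chunks before send(). */
#include <stddef.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>      /* TCP_NODELAY */

#define CHUNK 64              /* arbitrary size, just to show the shape */

struct out_buf {
    char   data[CHUNK];
    size_t used;
};

void enable_nodelay(int fd)
{
    int one = 1;
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}

/* Queue one byte; flush only when the chunk is full. */
int buffered_put(int fd, struct out_buf *b, char c)
{
    b->data[b->used++] = c;
    if (b->used < CHUNK)
        return 0;
    ssize_t n = send(fd, b->data, b->used, 0);   /* real code: handle short sends */
    b->used = 0;
    return n < 0 ? -1 : 0;
}

/* Call at the end of a message so the tail isn't held back. */
int buffered_flush(int fd, struct out_buf *b)
{
    if (b->used == 0)
        return 0;
    ssize_t n = send(fd, b->data, b->used, 0);
    b->used = 0;
    return n < 0 ? -1 : 0;
}

This is orthogonal to Nagle: you keep TCP_NODELAY for responsiveness, but you stop paying a whole packet per byte.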
Get a network capture using tcpdump or Wireshark. Review the packet transmission timeline; this will help distinguish network problems from software implementation issues. If you see retransmissions you may have a network issue; if you see slow ACKs you might find it better NOT to use 'No Delay', since ACK delay can stall a 'No Delay' connection.
I'm writing a program with a client and a server, and I have almost got it working.
At the moment I can run the server on a port, and the client on the same port with the IP address and the name of the .wav file that I want to read.
Now what I'd like to do is make a timeout between each sendto() so that the client receives the packets and reads them properly. Without that, the client receives many packets at once and loses many of them.
So could someone tell me how this works with UDP, and how to do it?
making a timeout between each sendto()
I believe that you are asking how to put a small delay between each sendto(). If you open a raw wav file and send the bytes, there is a good chance that the data will reach the client much faster than it can play it. If you want to stream data at the same rate as it is played, send the data in chunks and let the client request the next chunk.
If that is not an option, you can send a chunk of data (e.g. 20 ms worth), let the thread sleep for a little less than 20 ms, and then send the next chunk. Sleeps are kind of a hack; some sort of audio callback would be best on the server. The bottom line is that your client buffer has to be big enough to consume the amount of data your server is sending.
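To show the shape of the sleep-based pacing, here is a small sketch. The 8 kHz / 16-bit mono assumption (320 bytes per 20 ms), the 18 ms sleep, and the function name are all illustrative, not taken from your code:

/* Sketch of sleep-based pacing on the sender side. */
#include <stdio.h>
#include <time.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define CHUNK_BYTES 320                 /* 20 ms at 8000 Hz * 2 bytes/sample */

int stream_file(int sock, const struct sockaddr_in *dst, FILE *wav)
{
    char buf[CHUNK_BYTES];
    size_t n;
    struct timespec pause = { 0, 18 * 1000 * 1000 };   /* a bit under 20 ms */

    while ((n = fread(buf, 1, sizeof buf, wav)) > 0) {
        if (sendto(sock, buf, n, 0,
                   (const struct sockaddr *)dst, sizeof *dst) < 0)
            return -1;
        nanosleep(&pause, NULL);        /* crude pacing; an audio clock is better */
    }
    return 0;
}

Sleeping slightly less than the chunk duration compensates a little for the time spent in fread() and sendto(), but it is still a hack compared to driving the sends from an audio callback, as mentioned above.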
without that the client receives many packets at once and loses many of them
I believe that you are asking how to deal with the variety of packet inter-arrival rates, packet losses, and out-of-order packets. It sounds like you were just sending packets at a faster rate than your client could handle. You might need a larger buffer on the client.
In any case, with UDP/IP, you have the following scenarios
lost packets
packets arriving out of order
packets arriving in bursts (each packet will not arrive exactly X ms apart)
To deal with this, you minimally have to have what is known as a dejitter buffer. This is a buffer that collects packets as they arrive and inserts them, typically, into a ring buffer. The buffer has to be large enough to hold the packets that your server is sending, since your client may be consuming packets from the buffer more slowly than the server is sending them (or vice versa). In order to get packets in the right order and deal with losses, you have to detect them. You can detect losses and out-of-order arrivals by simply numbering each packet that is sent. As packets arrive, you can put them into the buffer at the correct location. If a packet is lost, you need to handle that with some sort of loss concealment (playing silence, estimating the lost packet, etc.), which is beyond the scope of this question.
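To make that concrete, here is a very small sketch of a sequence-numbered ring buffer. The structure names, depth, and payload size are arbitrary, and a real dejitter buffer would also track arrival times and a playout deadline:

/* Sketch of a tiny dejitter buffer: packets carry a 32-bit sequence number
 * and are slotted into a ring indexed by (seq % DEPTH). */
#include <stdint.h>
#include <string.h>

#define DEPTH    64          /* how many packets we are willing to hold */
#define MAX_PAY  1500

struct slot {
    int      filled;
    uint32_t seq;
    size_t   len;
    uint8_t  data[MAX_PAY];
};

struct jitter_buf {
    struct slot ring[DEPTH];
    uint32_t    next_seq;    /* next sequence number the consumer expects */
};

/* Producer side: store an arriving packet in its slot, whatever order it came in. */
void jb_put(struct jitter_buf *jb, uint32_t seq, const uint8_t *p, size_t len)
{
    struct slot *s = &jb->ring[seq % DEPTH];
    s->filled = 1;
    s->seq    = seq;
    s->len    = len;
    memcpy(s->data, p, len);
}

/* Consumer side: return the next in-order packet, or -1 if it has not arrived
 * yet (the caller then decides to wait or to conceal the loss). */
int jb_get(struct jitter_buf *jb, uint8_t *out, size_t *len)
{
    struct slot *s = &jb->ring[jb->next_seq % DEPTH];
    if (!s->filled || s->seq != jb->next_seq)
        return -1;                       /* missing or stale: treat as a gap */
    memcpy(out, s->data, s->len);
    *len = s->len;
    s->filled = 0;
    jb->next_seq++;
    return 0;
}

The consumer asks for packets strictly in sequence order, so a gap shows up as jb_get() failing for the expected number; that is your cue to wait a little longer or apply loss concealment.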
The RTP protocol is designed for streaming and is an application protocol that works over UDP.
Since you're using UDP, which is connectionless, you don't really have a way to control the flow of packets unless you implement some kind of acknowledgement mechanism... at which point you might as well be using TCP because it already has that built in.
Although I don't have much experience in network programming, this looks a bit more complicated than it might seem at first glance. So UDP is connectionless. That speeds things up a lot, but there is a price to pay -- off the top of my head, packets can get lost or arrive out of order.
Those are situations you need to handle on the client end. Your client needs to be designed so that it accepts packets as they arrive, at an arbitrary rate; skips over those that fail to arrive within a certain time (this matters for live streaming, not for buffered playback); and takes order into consideration, which means that each packet needs to contain information about its place relative to previous packets.
I have an application that sends data point-to-point from a sender to a receiver over a link that can operate in simplex mode (one-way transmission) or duplex mode (two-way). In simplex mode, the application sends data using UDP, and in duplex mode it uses TCP. Since a write on a TCP socket may block, we are using non-blocking I/O (ioctl with FIONBIO, since O_NONBLOCK and fcntl are not supported on this distribution) and the select() system call to determine when data can be written. Non-blocking I/O is used so that we can abort out of a send early, after a timeout, should network conditions deteriorate. I'd like to use the same basic code to do the sending but switch between TCP and UDP at a higher abstraction. This works great for TCP.
However, I am concerned about how non-blocking I/O works for a UDP socket. I may be reading the man pages incorrectly, but since write() may return indicating that fewer bytes were sent than requested, does that mean the client will receive fewer bytes in its datagram? To send a given buffer of data, multiple writes may be needed (which may well happen since I am using non-blocking I/O), and I am concerned that this will translate into multiple UDP datagrams received by the client.
I am fairly new to socket programming, so please forgive me if I have some misconceptions here. Thank you.
Assuming a correct (not broken) UDP implementation, then each send/sendmsg/sendto will correspond to exactly one whole datagram sent and each recv/recvmsg/recvfrom will correspond to exactly one whole datagram received.
If a UDP message cannot be transmitted in its entirety, you should receive an EMSGSIZE error. A sent message might still fail due to size at some point in the network, in which case it will simply not arrive. But it will not be delivered in pieces (unless the IP stack is severely buggy).
A good rule of thumb is to keep your UDP payload size to at most 1400 bytes. That is very approximate and leaves a lot of room for various forms of tunneling so as to avoid fragmentation.
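To illustrate the all-or-nothing behaviour, here is a small sketch; the function name and the error-handling policy are just examples, not anything from your code:

/* Sketch: one sendto() is one datagram, delivered whole or not at all. */
#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>

int send_datagram(int sock, const struct sockaddr *dst, socklen_t dlen,
                  const void *payload, size_t len)
{
    ssize_t n = sendto(sock, payload, len, 0, dst, dlen);
    if (n < 0) {
        if (errno == EMSGSIZE) {
            fprintf(stderr, "%zu bytes is too large for one datagram\n", len);
            return -1;
        }
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return 0;     /* non-blocking socket: nothing was sent, try again later */
        return -1;
    }
    /* For UDP, n == len here: there is no "short" datagram send. */
    return 1;
}

With a non-blocking UDP socket you either get the whole datagram queued (and n equals len) or an error such as EAGAIN, in which case nothing at all was sent and you simply retry later.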
I need to perform data filtering based on the source unicast IPv4 address of datagrams arriving to a Linux UDP socket.
Of course, it is always possible to manually perform the filtering based on the information provided by recvfrom, but I am wondering if there could be another more intelligent/efficient approach (if possible, not using libpcap).
Any ideas?
If it's a single source you need to allow, then just use connect(2) and the kernel will do the filtering for you. As a bonus, connected UDP sockets are more efficient. This, of course, does not work for more than one source.
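For example (untested sketch; the address and port arguments are placeholders you would fill in):

/* Sketch: connect() a UDP socket to one peer so the kernel discards
 * datagrams from any other source. */
#include <stdint.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int make_filtered_udp_socket(const char *peer_ip, uint16_t peer_port,
                             uint16_t local_port)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in local = {0};
    local.sin_family      = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port        = htons(local_port);
    if (bind(fd, (struct sockaddr *)&local, sizeof local) < 0) {
        close(fd);
        return -1;
    }

    struct sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(peer_port);
    if (inet_pton(AF_INET, peer_ip, &peer.sin_addr) != 1) {
        close(fd);
        return -1;
    }

    /* From here on, recv() only returns datagrams from peer_ip:peer_port. */
    if (connect(fd, (struct sockaddr *)&peer, sizeof peer) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

After connect() you can also use plain send()/recv() instead of sendto()/recvfrom().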
As already stated, NetFilter (the Linux firewall) can help you here.
You could also use the UDP options of xinetd and tcpd to perform filtering.
What proportion of datagrams are you expecting to discard? If it is very high, then you may want to review your application design (for example, to make the senders not send so many datagrams which are to be discarded). If it is not very high, then you don't really care about how much effort you spend discarding them.
Suppose discarding a packet takes the same amount of (runtime) effort as processing it normally; if you discard 1% of packets, you will only be spending 1% of time discarding. However, realistically, discarding is likely to be much easier than processing messages.
We have a client/server communication system over UDP set up in Windows. The problem we are facing is that when the throughput grows, packets start getting dropped. We suspect that this is due to the UDP receive buffer being continuously polled, causing the buffer to be blocked and incoming packets to be dropped. Is it possible that reading this buffer will cause incoming packets to be dropped? If so, what are the options to correct this? The system is written in C. Please let me know if this is too vague and I can try to provide more info. Thanks!
The default socket buffer size in Windows sockets is 8k, or 8192 bytes. Use the setsockopt Windows function to increase the size of the buffer (refer to the SO_RCVBUF option).
But beyond that, increasing the size of your receive buffer will only delay the time until packets get dropped again if you are not reading the packets fast enough.
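For example (Winsock sketch; the 4 MB figure is arbitrary, and a real program also needs WSAStartup() and proper error handling around it):

/* Sketch: request a larger receive buffer and read back what was granted.
 * Winsock may give you less than you asked for. */
#include <winsock2.h>
#include <stdio.h>

void grow_receive_buffer(SOCKET s)
{
    int requested = 4 * 1024 * 1024;        /* arbitrary example value */
    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                   (const char *)&requested, sizeof requested) != 0) {
        fprintf(stderr, "setsockopt(SO_RCVBUF) failed: %d\n", WSAGetLastError());
        return;
    }

    int granted = 0;
    int len = sizeof granted;
    if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, (char *)&granted, &len) == 0)
        printf("receive buffer is now %d bytes\n", granted);
}

Reading the option back with getsockopt() tells you what the stack actually granted, which is worth logging when you are tuning this.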
Typically, you want two threads for this kind of situation.
The first thread exists solely to service the socket. In other words, the thread's sole purpose is to read a packet from the socket, add it to some kind of properly-synchronized shared data structure, signal that a packet has been received, and then read the next packet.
The second thread exists to process the received packets. It sits idle until the first thread signals a packet has been received. It then pulls the packet from the properly-synchronized shared data structure and processes it. It then waits to be signaled again.
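A bare-bones sketch of that split, written with POSIX threads for brevity (the queue depth, packet size, and function names are invented for the example; on Windows the same structure works with Win32 threads and condition variables):

/* Sketch: receiver thread enqueues raw datagrams, worker thread consumes them. */
#include <pthread.h>
#include <stddef.h>
#include <sys/socket.h>

#define QLEN 1024
#define PKT  2048

struct pkt { size_t len; char data[PKT]; };

static struct pkt       queue[QLEN];
static size_t           head, tail;              /* head: next write, tail: next read */
static pthread_mutex_t  lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t   avail = PTHREAD_COND_INITIALIZER;

/* Receiver thread: do nothing but recvfrom() and enqueue. */
void *receiver(void *arg)
{
    int sock = *(int *)arg;
    struct pkt p;
    for (;;) {
        ssize_t n = recvfrom(sock, p.data, sizeof p.data, 0, NULL, NULL);
        if (n < 0) continue;
        p.len = (size_t)n;

        pthread_mutex_lock(&lock);
        if ((head + 1) % QLEN != tail) {         /* drop if the ring is full */
            queue[head] = p;
            head = (head + 1) % QLEN;
            pthread_cond_signal(&avail);
        }
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Worker thread: block until a packet is queued, then do the slow processing. */
void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        struct pkt p;
        pthread_mutex_lock(&lock);
        while (head == tail)
            pthread_cond_wait(&avail, &lock);
        p = queue[tail];
        tail = (tail + 1) % QLEN;
        pthread_mutex_unlock(&lock);

        /* process the packet here: database insert, parsing, and so on */
    }
    return NULL;
}

The point is that the receiver thread never does anything slower than a memcpy, so the socket buffer is drained as fast as the datagrams arrive.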
As a test, try short-circuiting the full processing of your packets and just write a message to the console (or a file) each time a packet has been received. If you can successfully do this without dropping packets, then breaking your functionality into a "receiving" thread and a "processing" thread will help.
Yes, the stack is allowed to drop packets — silently, even — when its buffers get too full. This is part of the nature of UDP, one of the bits of reliability you give up when you switch from TCP. You can either reinvent TCP — poorly — by adding retry logic, ACK packets, and such, or you can switch to something in-between like SCTP.
There are ways to increase the stack's buffer size, but that's largely missing the point. If you aren't reading fast enough to keep buffer space available already, making the buffers larger is only going to put off the time it takes you to run out of buffer space. The proper solution is to make larger buffers within your own code, and move data from the stack's buffers into your program's buffer ASAP, where it can wait to be processed for arbitrarily long times.
Is it possible that reading this buffer will cause incoming packets to be dropped?
Packets can be dropped if they're arriving faster than you read them.
If so, what are the options to correct this?
One option is to change the network protocol: use TCP, or implement some acknowledgement + 'flow control' using UDP.
Otherwise you need to see why you're not reading fast/often enough.
If the CPU is 100% utilized then you need to do less work per packet or get a faster CPU (or use multithreading and more CPUs if you aren't already).
If the CPU is not 100%, then perhaps what's happening is:
You read a packet
You do some work, which takes x msec of real-time, some of which is spent blocked on some other I/O (so the CPU isn't busy, but it's not being used to read another packet)
During those x msec, a flood of packets arrive and some are dropped
A cure for this would be to change the threading.
Another possibility is to do several simultaneous reads from the socket (each of your reads provides a buffer into which a UDP packet can be received).
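On Linux, one way to post several receive buffers at once is recvmmsg(); on Windows the closest analogue is issuing multiple overlapped WSARecvFrom() calls. A minimal Linux-flavoured sketch (the batch size, buffer size, and function name are arbitrary):

/* Sketch: fetch a batch of datagrams in one system call with recvmmsg(). */
#define _GNU_SOURCE
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

#define BATCH 16
#define PKT   2048

int drain_batch(int sock)
{
    static char    bufs[BATCH][PKT];
    struct iovec   iov[BATCH];
    struct mmsghdr msgs[BATCH];

    memset(msgs, 0, sizeof msgs);
    for (int i = 0; i < BATCH; i++) {
        iov[i].iov_base            = bufs[i];
        iov[i].iov_len             = PKT;
        msgs[i].msg_hdr.msg_iov    = &iov[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    int n = recvmmsg(sock, msgs, BATCH, MSG_DONTWAIT, NULL);
    if (n < 0) return -1;                 /* EAGAIN when nothing is queued */

    for (int i = 0; i < n; i++) {
        /* msgs[i].msg_len bytes arrived in bufs[i]; hand them off here */
    }
    return n;
}

Batching reads cuts the per-packet system-call overhead, which helps most when packets arrive in bursts.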
Another possibility is to see whether there's a (O/S-specific) configuration option to increase the number of received UDP packets which the network stack is willing to buffer until you try to read them.
First step, increase the receive buffer size; Windows pretty much grants all reasonable size requests.
If that doesn't help, your consumer code seems to have some fairly slow areas. I would use threading, e.g. with pthreads, and a producer-consumer pattern: put the incoming datagrams in a queue on another thread and consume from there, so your receive calls don't block and the buffer doesn't fill up.
Third step, modify your application-level protocol: allow for batched packets, and batch packets at the sender to reduce the UDP header overhead of sending a lot of small packets.
Fourth step, check your network gear; switches etc. can give you detailed output about their traffic statistics, buffer overflows, and so on. If that is an issue, get faster switches or possibly swap out a faulty one.
... just FYI, I'm running UDP multicast traffic on our backend continuously at an average of ~30 Mbit/s with peaks at 70 Mbit/s, and my drop rate is basically nil.
Not sure about this, but on Windows it's not possible for polling the socket to cause a packet to drop. Windows collects the packets separately from your polling, so the polling itself shouldn't cause any drops.
I am assuming you're using select() to poll the socket? As far as I know, that can't cause a drop.
The packets could be lost due to an increase in unrelated network traffic anywhere along the route, or full receive buffers. To mitigate this, you could increase the receive buffer size in Winsock.
Essentially, UDP is an unreliable protocol in the sense that packet delivery is not guaranteed and no error is returned to the sender on delivery failure. If you are worried about packet loss, it would be best to implement acknowledgment packets into your communication protocol, or to port it to a more reliable protocol like TCP. There really aren't any other truly reliable ways to prevent UDP packet loss.
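If you do go the acknowledgement route, even a crude stop-and-wait scheme changes the picture. Here is a sketch; the 4-byte sequence framing, the 200 ms timeout, and the retry count are all assumptions, and a connected UDP socket is assumed:

/* Sketch of stop-and-wait over UDP: tag each datagram with a sequence number
 * and resend until a matching 4-byte ACK comes back. */
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <sys/time.h>

#define RETRIES 5

int send_reliable(int sock, uint32_t seq, const void *payload, size_t len)
{
    char pkt[4 + 1500];
    if (len > 1500)
        return -1;                                /* keep the datagram small */
    uint32_t net_seq = htonl(seq);
    memcpy(pkt, &net_seq, 4);
    memcpy(pkt + 4, payload, len);

    for (int attempt = 0; attempt < RETRIES; attempt++) {
        if (send(sock, pkt, 4 + len, 0) < 0)      /* connected UDP socket assumed */
            return -1;

        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(sock, &rfds);
        struct timeval tv = { 0, 200 * 1000 };    /* 200 ms before retransmitting */

        if (select(sock + 1, &rfds, NULL, NULL, &tv) > 0) {
            uint32_t ack;
            if (recv(sock, &ack, sizeof ack, 0) == sizeof ack &&
                ntohl(ack) == seq)
                return 0;                         /* acknowledged */
        }
        /* timeout or wrong ack: loop and resend */
    }
    return -1;                                    /* gave up after RETRIES attempts */
}

Anything fancier (sliding windows, selective acks) is essentially rebuilding parts of TCP, which is why porting to TCP is often the simpler answer.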