RTP packet drop issue(?) - c

I have a client and a server, where the server sends audio data as RTP packets encapsulated in UDP and the client receives them. Since UDP provides no delivery or ordering guarantees, the client checks the sequence number of each packet and rearranges packets that arrive out of order.
My question is that the client never receives packets with certain sequence numbers, as shown below in the Wireshark capture -
When this happens, the audio played on the client side is (obviously) distorted. How do I avoid it? What factors affect this? Should I set the socket receive buffer to a larger value?
Thanks in advance for any replies.
EDIT 1: This issue occurs on QNX and not on Linux.
I looked at the output of "netstat -p udp" to see whether it gives any hint about why packets are being dropped on QNX but not on Linux.
QNX:
SOCK=/dev/d_usb3/ netstat -p udp
udp:
8673 datagrams received
0 with incomplete header
60 with bad data length field
0 with bad checksum
0 dropped due to no socket
2 broadcast/multicast datagrams dropped due to no socket
0 dropped due to full socket buffers
8611 delivered
8592 PCB hash misses
On Linux, netstat shows no packet drops with the same server and the same audio!
Any leads on why this might be? A driver issue? The networking stack?
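For reference, the receive-buffer change I am considering would look roughly like this (a sketch only; the 1 MB value is an arbitrary guess, and the stack may clamp it to its own maximum):
```c
#include <stdio.h>
#include <sys/socket.h>

/* Try to enlarge the UDP receive buffer on an already-created socket.
 * Returns 0 on success, -1 on failure. */
static int grow_rcvbuf(int sock)
{
    int requested = 1 * 1024 * 1024;   /* arbitrary 1 MB guess */
    int actual = 0;
    socklen_t len = sizeof(actual);

    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                   &requested, sizeof(requested)) < 0) {
        perror("setsockopt(SO_RCVBUF)");
        return -1;
    }

    /* Read back what the stack actually granted; it may be clamped. */
    if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &actual, &len) == 0)
        printf("SO_RCVBUF is now %d bytes\n", actual);

    return 0;
}
```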

You need to specify how you are handling lost packets in your client.
If you lose packets, that means you have missing data in your audio stream, so your client has to "do something" where data is missing. Some options are:
- play silence (makes a cracking noise due to the sharp envelope transition to 0)
- fade to silence
- estimate waveform by examining adjacent data
- play noise
You cannot simply misalign the packets or play them back-to-back with data missing. For example, suppose you get packets 1, 2, 3, 4 and 6. You are missing packet 5. You cannot play packet 4 and then immediately play packet 6; something has to fill the space where packet 5 would have been. A rough sketch of the simplest option (substituting silence) is shown below.
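A minimal sketch of the first option, assuming 16-bit PCM and a fixed number of samples per RTP packet (both assumptions, not something given in the question); real players usually fade or interpolate rather than insert raw silence:
```c
#include <stdint.h>
#include <string.h>

#define SAMPLES_PER_PACKET 160          /* assumption: e.g. 20 ms of 8 kHz audio */

/* Called once per expected sequence number, in order.
 * If the packet is missing, write silence so the timing stays intact. */
static void queue_for_playback(const int16_t *payload, int present,
                               int16_t *out /* SAMPLES_PER_PACKET samples */)
{
    if (present) {
        memcpy(out, payload, SAMPLES_PER_PACKET * sizeof(int16_t));
    } else {
        /* Missing packet: fill its slot with silence (or fade/interpolate). */
        memset(out, 0, SAMPLES_PER_PACKET * sizeof(int16_t));
    }
}
```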
See this post for more info.

Related

Can a linux socket return data less than the underlying packet? [duplicate]

When will a TCP packet be fragmented at the application layer?

When a TCP packet is sent from an application, will the recipient at the application layer ever receive it as two or more packets? If so, what conditions cause the packet to be divided? It seems like a packet won't be fragmented until it reaches the Ethernet limit of 1500 bytes at the link layer. But that fragmentation should be transparent to the recipient at the application layer, since the network layer will reassemble the fragments before passing the packet up to the next layer, right?
It will be split when it hits a network device with an MTU lower than the packet's size. Most Ethernet links use an MTU of 1500 bytes, but it can be smaller: 1492 if the Ethernet traffic is carried over PPPoE (DSL), because of the extra encapsulation overhead, and lower still if another layer is added, such as Windows Internet Connection Sharing. Dial-up links are typically 576!
In general, though, you should remember that TCP is not a packet protocol. It uses packets at the lowest level to transmit over IP, but as far as the interface of any TCP stack is concerned, it is a stream protocol with no requirement to give you a 1:1 relationship to the physical packets sent or received (for example, most stacks will hold data until a timer expires or enough is buffered to fill an IP packet for the given MTU).
As an example, if you send two "packets" (call your send function twice), the receiving program might only receive one "packet" (the receiving TCP stack might combine them). If you are implementing a message-type protocol over TCP, you should include a header at the beginning of each message (or some other header/footer mechanism) so that the receiving side can split the TCP stream back into individual messages, whether one message arrives in two parts or several messages arrive as one chunk. A sketch of the sending side of such framing follows.
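A minimal sketch of length-prefix framing on the sending side, assuming a 4-byte big-endian length header (the header format is an arbitrary choice for illustration):
```c
#include <arpa/inet.h>
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Keep calling send() until every byte has been handed to the stack. */
static int send_all(int sock, const void *data, size_t len)
{
    const char *p = data;
    while (len > 0) {
        ssize_t n = send(sock, p, len, 0);
        if (n <= 0)
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Frame one application message with a 4-byte big-endian length prefix. */
static int send_message(int sock, const void *msg, uint32_t len)
{
    uint32_t header = htonl(len);          /* length in network byte order */
    if (send_all(sock, &header, sizeof(header)) < 0)
        return -1;
    return send_all(sock, msg, len);
}
```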
Fragmentation should be transparent to a TCP application. Keep in mind that TCP is a stream protocol: you get a stream of data, not packets! If you are building your application based on the idea of complete data packets then you will have problems unless you add an abstraction layer to assemble whole packets from the stream and then pass the packets up to the application.
The question makes an assumption that is not true: TCP does not deliver packets to its endpoints; rather, it delivers a stream of bytes (octets). If an application writes two strings into TCP, they may be delivered as one string on the other end; likewise, one string may be delivered as two (or more) strings on the other end.
RFC 793, Section 1.5: "The TCP is able to transfer a continuous stream of octets in each direction between its users by packaging some number of octets into segments for transmission through the internet system."
The key words being continuous stream of octets (bytes).
RFC 793, Section 2.8: "There is no necessary relationship between push functions and segment boundaries. The data in any particular segment may be the result of a single SEND call, in whole or part, or of multiple SEND calls."
The entirety of section 2.8 is relevant.
At the application layer there are any number of reasons why the whole 1500 bytes may not show up in one read. Various factors in the operating system and TCP stack may cause the application to get some bytes in one read call and the rest in the next. Yes, the TCP stack has to reassemble the packet before handing it up, but that doesn't mean your app will get it all in one shot (it will LIKELY arrive in one read, but it is not GUARANTEED to).
TCP tries to guarantee in-order delivery of bytes, with error checking, automatic re-sends, etc happening behind your back. Think of it as a pipe at the app layer and don't get too bogged down in how the stack actually sends it over the network.
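A minimal sketch of the usual defensive pattern on the receiving side: keep calling recv() until the number of bytes you expect has actually arrived (in practice the expected length would come from a message header, as discussed above):
```c
#include <sys/socket.h>
#include <sys/types.h>

/* Read exactly `len` bytes from a connected TCP socket.
 * Returns 0 on success, -1 on error or if the peer closed early. */
static int recv_exact(int sock, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(sock, p, len, 0);
        if (n <= 0)            /* 0 = orderly close, <0 = error */
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}
```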
This page is a good source of information about some of the issues others have brought up, namely the need for data encapsulation on a per-application-protocol basis. It is not quite authoritative in the sense you describe, but it has examples and is sourced to some pretty big names in network programming.
If a packet exceeds the MTU of a network device, it will be broken up into multiple packets. (Note that most equipment uses 1500 bytes, but this is not a requirement.)
The reconstruction of the packet should be entirely transparent to the applications.
Different network segments can have different MTU values, and in that case fragmentation can occur. For more information, see TCP maximum segment size.
This segmentation and reassembly happens below the application, in the TCP/IP stack. At the application layer there are no packets any more; TCP presents a contiguous data stream to the application.
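If you want to see which MTU applies on a local interface, one way on Linux is the SIOCGIFMTU ioctl (the interface name here is just an example):
```c
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct ifreq ifr;
    int sock = socket(AF_INET, SOCK_DGRAM, 0);   /* any socket works for the ioctl */
    if (sock < 0)
        return 1;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* example interface name */

    if (ioctl(sock, SIOCGIFMTU, &ifr) == 0)
        printf("MTU of %s is %d\n", ifr.ifr_name, ifr.ifr_mtu);
    else
        perror("ioctl(SIOCGIFMTU)");

    close(sock);
    return 0;
}
```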
A the "application layer" a TCP packet (well, segment really; TCP at its own layer doesn't know from packets) is never fragmented, since it doesn't exist. The application layer is where you see the data as a stream of bytes, delivered reliably and in order.
If you're thinking about it otherwise, you're probably approaching something in the wrong way. However, this is not to say that there might not be a layer above this, say, a sequence of messages delivered over this reliable, in-order bytestream.
Correct - the most informative way to see this is with Wireshark, an invaluable tool. Take the time to figure it out; it has saved me several times and gives a good reality check.
If a 3000-byte packet enters an Ethernet network with the default Ethernet MTU of 1500, it will be fragmented into two packets of roughly 1500 bytes each. That is the only case I can think of.
Wireshark is your best bet for checking this. I have been using it for a while and am totally impressed.

Path of the UDP packet from kernel to user-space in Linux

I'm maintaining a network driver and I have a problem with loss of data: when I send, for example, ICMP or UDP pings using ping or nping, some of the UDP/ICMP packets are lost.
I'm sure that on the ping/nping side of the transmission the reply is received by my driver and the kernel (tcpdump shows the incoming UDP or ICMP reply packets).
But the ping/nping application sometimes reports that, for example, 80% of the packets are lost. I suspect those packets are being lost somewhere between the kernel and user space.
I know that for UDP there is the function udp_rcv() that handles incoming UDP packets, but I don't know which function comes next on the path that delivers the packet to the user-space application.
The Linux kernel version is 3.3.8.
My question is: how can I trace the whole path of a packet from my driver to the user-space socket buffer?
udp_rcv() is a callback that is registered in struct net_protocol as the .handler.
You can either look at how that handler field is used, or check whether an error path is being taken: there is also an err_handler callback, and the loss may be happening at a point where that error handler gets called.
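For reference, the registration referred to above lives in net/ipv4/af_inet.c and looks roughly like this in 3.x-era kernels (abridged from memory, so treat it as a sketch). From udp_rcv() the delivery path continues roughly udp_rcv() -> __udp4_lib_rcv() -> udp_queue_rcv_skb() -> sock_queue_rcv_skb() -> the socket's sk_receive_queue, which udp_recvmsg() drains when the application calls recvfrom(); treat the exact function names as approximate for your kernel version.
```c
/* net/ipv4/af_inet.c (abridged): how udp_rcv() is hooked into the IP layer. */
static const struct net_protocol udp_protocol = {
        .handler     = udp_rcv,   /* called for every incoming UDP datagram */
        .err_handler = udp_err,   /* called for ICMP errors related to UDP  */
        /* ... other fields omitted ... */
};
```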
P. S. Remember that UDP does not guarantee 100% transmit success, and one lost packet out of 100 might be expected behavior. (:

Can LoadRunner Receive Data By UDP Packets?

We want to receive packets from UDP sockets. The UDP packets have variable length, and we don't know how long they really are until we receive them (more precisely, part of them: the length is written in the sixth byte).
We tried the function lrs_set_receive_option with MarkerEnd, only to find it does not help with this issue. The reason we want to receive packet by packet is that we need to respond to some packets by sending back user-defined UDP packets.
Does anybody know how to achieve that?
UPDATE
The LR version seems to be v10 or v11.
We need to respond to an incoming UDP packet by sending back a UDP packet immediately.
The udp packet may be like this
| orc code | packet length | Real DATA |
The issue is that we can't make LoadRunner return the data packet by packet: sometimes it returns many packets in one buffer, and sometimes it waits until a timeout even though an incoming packet is already sitting in the socket buffer. In the C world, calling recvfrom() on a UDP socket returns exactly one UDP datagram per call, which is what we really want.
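In plain C, the behaviour we are after looks roughly like this (a sketch; the offset of the length byte follows our packet layout above, and the "ACK" reply is just a placeholder):
```c
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

#define MAX_DGRAM 65535

/* Receive exactly one UDP datagram per call and reply to its sender. */
static void handle_one_packet(int sock)
{
    uint8_t buf[MAX_DGRAM];
    struct sockaddr_storage peer;
    socklen_t peer_len = sizeof(peer);

    ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                         (struct sockaddr *)&peer, &peer_len);
    if (n < 6) {
        /* error, or datagram too short to carry the length field */
        return;
    }

    uint8_t declared_len = buf[5];      /* length written in the sixth byte */
    printf("got %zd bytes, declared payload length %u\n", n, declared_len);

    /* Immediately send a user-defined reply back to the sender. */
    const char reply[] = "ACK";         /* placeholder payload */
    sendto(sock, reply, sizeof(reply) - 1, 0,
           (struct sockaddr *)&peer, peer_len);
}
```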
If you need raw socket support to intercept at the packet level then you are likely going to have to jump to a DLL virtual user in Visual Studio with the raw socket support.
As to your question on UDP support: yes, a Winsock user supports both core transport types, UDP and TCP, TCP being the more common variant since it is connection-oriented. However, packet examination happens at layer 3 of the OSI model for the carrier protocol, IP. The ACK should come before you receive the data flow for use in your script. By the time you are at the TCP or UDP level you are looking at assembled data flows in data.ws.
You are also likely receiving a warning about a receive-buffer size mismatch against the recorded size, which is what is taking you down this path. There is an easy way to address it: if you construct your send buffer using the lrs_set_send_buffer() function, then whatever comes back will be accepted as correct, ignoring the previously recorded buffer size, so the script does not have to wait for a match or a timeout before continuing.

What is happening when a TCP sequence number arrives that is not what is expected?

I am writing a program that uses libpcap to capture packets and reassemble a TCP stream. My program simply monitors the traffic and so I have no control over the reception and transmittal of packets. My program disregards all non TCP/IP traffic.
I calculate the next expected sequence number from the ISN and then the successive SEQ numbers. I have it set up so that every TCP connection is uniquely identified by a tuple made up of the source IP, source port, dest IP, and dest port. Everything goes swimmingly until I receive a packet that has a sequence number different than what I am expecting. I have uploaded screen shots to help illustrate what I am describing here.
My questions are:
1. Where is the data that was in the "lost" packet?
2. How does the SEQ number order recover from this situation?
3. What can I do to handle these occurrences?
Please remember, however, that I am not writing a program that implements TCP. I am writing a program that passively monitors network traffic for TCP streams and attempts to save the raw data to disk, and I am confused about why the situation described above happens and how I can handle it in my program.
Thank you
Where is the data that was in the "lost" packet?
- It got dropped by someone
- It got lost on the way (took a wrong detour) and will arrive later
How does the SEQ number order recover from this situation?
The receiver notices the segment is out of sequence and doesn't pass it to the application, thereby fulfilling its contract: an in-order, reliable byte stream. What actually happens to recover the missing piece is quite intricate and varies from stack to stack. In a nutshell, the stack waits for the missing piece to arrive.
- The receiver can throw away out-of-sequence segments, or it can queue them in a reassembly queue.
- The receiver can wait for the missing segment to arrive, or it can immediately re-send the ACK it already sent before. Duplicate ACKs alert the peer that something is wrong (look up Fast Retransmit).
- When sending acknowledgments, the TCP can inform the peer that some segments arrived successfully but out of sequence (SACK).
What can I do to handle these occurrences?
You can't do anything to recover the data, since you're only monitoring. You could probably get more insight into what is really happening if you also captured the response traffic. If you want the monitor to cope with gaps gracefully, see the sketch below.
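If the monitor should tolerate gaps rather than stall, one common approach is to buffer segments that arrive ahead of the expected sequence number and flush them once the gap is filled (or give up after a timeout). A minimal sketch of the sequence-number bookkeeping, with invented helper names standing in for the rest of the monitor:
```c
#include <stdint.h>

/* Per-connection state kept by the passive monitor (simplified). */
struct stream_state {
    uint32_t expected_seq;     /* next byte we expect to write to disk */
};

/* Hypothetical helpers provided elsewhere in the monitor. */
void write_to_disk(const uint8_t *data, uint32_t len);
void stash_out_of_order(uint32_t seq, const uint8_t *data, uint32_t len);
void flush_stashed_if_contiguous(struct stream_state *st);

/* Called for every captured TCP segment that belongs to this connection. */
void on_segment(struct stream_state *st, uint32_t seq,
                const uint8_t *payload, uint32_t len)
{
    int32_t diff = (int32_t)(seq - st->expected_seq);  /* wrap-safe compare */

    if (diff == 0) {
        /* Exactly what we expected: consume it and advance. */
        write_to_disk(payload, len);
        st->expected_seq += len;
        flush_stashed_if_contiguous(st);
    } else if (diff > 0) {
        /* A gap: an earlier segment was lost or is still in flight. */
        stash_out_of_order(seq, payload, len);
    } else {
        /* diff < 0: retransmission or overlap of data we already have. */
    }
}
```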
Depending on the window size of the current TCP connection, if the new packet fits within the receive window (a multi-packet buffer), it will be entered into the receive queue (and reordered for in-order delivery to protocol clients).
If the sequence number falls outside the current window, the packet gets rejected.
See also section 4.4.2 (INPUT PACKET HANDLER) in RFC 675

Echo client with over 65% packet losses

I'm just wondering why my UDP server/client pair is seeing about 65% packet loss. It's just an echo service: the client sends an empty packet when it reaches EOF, and the server resets itself when it gets that empty packet.
It's not really homework, I'm just frustrated because I can't get it right :(
Thanks!
Have you looked at UDP buffer overflows?
Here is some information on packet-loss myths and on how to detect UDP packet loss rates on several platforms, and, last but by no means least, how to mess with (err... I mean change) the kernel UDP buffer sizes on a few Unix platforms (caution is advisable).
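On Linux, a quick way to check whether the receive buffer is overflowing is to watch the Udp counters in /proc/net/snmp (the RcvbufErrors column, where the kernel provides it) before and after a test run. A tiny sketch that just prints those lines:
```c
#include <stdio.h>
#include <string.h>

/* Print the UDP statistics lines from /proc/net/snmp (Linux-specific). */
int main(void)
{
    char line[512];
    FILE *f = fopen("/proc/net/snmp", "r");
    if (!f) {
        perror("/proc/net/snmp");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "Udp:", 4) == 0)
            fputs(line, stdout);   /* header line, then counter line */
    }
    fclose(f);
    return 0;
}
```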
There's no promise that UDP packets will ever be delivered. On congested networks, UDP packets may just be dropped in transit.
There are also routers that are configured to just drop empty UDP packets. See this random example of someone wanting this.
