I'm maintaining a network driver and I have a problem with loss of data. The effect is that when I send, for example, an ICMP or UDP ping using ping or nping, some of the UDP/ICMP packets are lost.
I'm sure that on the ping/nping side of the transmission the reply is received by my driver and the kernel (tcpdump shows the incoming UDP or ICMP reply packets).
But the ping/nping application sometimes reports that, for example, 80% of the packets are lost. I suspect those packets are lost somewhere between the kernel and user space.
I know that for UDP the function udp_rcv() handles incoming UDP packets, but I don't know which function comes next on the path that delivers the packet to the user-space application.
The Linux kernel version is 3.3.8.
My question is: how can I trace the whole path a packet takes from my driver to the user-space socket buffer?
udp_rcv() is a callback that is registered in struct net_protocol as its .handler.
You can either follow where that handler field is used, or check whether some error occurs: there is also an err_handler callback, and the packet loss may be happening there, in which case the error handler will be called.
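For orientation, the registration lives in net/ipv4/af_inet.c; in the 3.3 sources it looks roughly like this (excerpt quoted from memory, so verify against your own tree):

    /* net/ipv4/af_inet.c (kernel 3.3, excerpt) */
    static const struct net_protocol udp_protocol = {
            .handler     = udp_rcv,   /* normal receive path */
            .err_handler = udp_err,   /* invoked for ICMP errors */
            .no_policy   = 1,
            .netns_ok    = 1,
    };

From udp_rcv() the packet normally goes through __udp4_lib_rcv() and udp_queue_rcv_skb() down to sock_queue_rcv_skb(), which appends it to the socket's receive queue for udp_recvmsg() to pick up. If the socket's receive buffer is full at that point, the packet is dropped and the socket's drop counter is incremented (visible in the drops column of /proc/net/udp), which is the first place I'd look for your 80% loss.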
P.S. Remember that UDP does not guarantee delivery, so one lost packet out of 100 might be expected behavior. :)
Related
When I receive a packet with recv() on Linux, has the kernel already done the defragmentation, so that I get the reassembled data? Or should I take care of it in user space?
When receiving UDP data via a socket of type SOCK_DGRAM, you'll only ever receive complete datagrams (assuming your input buffer is large enough to hold one).
Any IP fragmentation is handled transparently from userspace.
If you're using raw sockets, then you need to handle defragmentation yourself.
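A minimal sketch of what that looks like in practice (port 5000 and the buffer size are arbitrary choices; error checking is omitted for brevity):

    /* Each recv() on a SOCK_DGRAM socket returns one complete,
     * already-reassembled datagram, provided the buffer is big enough. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);         /* arbitrary example port */
        bind(s, (struct sockaddr *)&addr, sizeof(addr));

        char buf[65536];                     /* max UDP payload: nothing gets truncated */
        ssize_t n = recv(s, buf, sizeof(buf), 0);
        printf("got %zd bytes (one whole datagram)\n", n);
        close(s);
        return 0;
    }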
IP fragmentation takes place at the IP level of the protocol stack. The TCP and UDP layers already receive reassembled packets, so TCP receives full segments (even though TCP tries never to send segments that would result in fragmented packets) and UDP receives full datagrams, preserving datagram boundaries.
Only if you use raw sockets will you receive raw packets (meaning, possibly, fragments of IP packets). TCP gives you a reliable data stream (preserving sequence, and handling repetition and retransmission of missing/faulty packets), and UDP gives you each datagram as it was sent, or nothing at all.
Of course, all these modules are in the kernel, so it's the kernel that reassembles packets. It's worth noting that once a packet has been fragmented, the pieces are not recombined until they reach their destination, so it's the IP layer at the destination that is responsible for reassembling the fragments.
I have a client and a server, where the server sends audio data as RTP packets encapsulated in UDP, and the client receives the packets. As UDP has no flow control, the client checks the sequence number of each packet and rearranges packets that arrive out of order.
My question is this: I can see that the client never receives packets with certain sequence numbers, as shown in the Wireshark capture below.
When this happens, the audio played on the client side is distorted (obviously). How do I avoid it? What factors affect this? Should I set the socket buffer size to a larger value?
Thanks in advance for any replies.
EDIT 1: This issue occurs on the QNX platform, not on Linux.
I looked at the output of "netstat -p udp" to see whether it gives any hint about why packets are being dropped on QNX and not on Linux.
QNX:
SOCK=/dev/d_usb3/ netstat -p udp
udp:
8673 datagrams received
0 with incomplete header
60 with bad data length field
0 with bad checksum
0 dropped due to no socket
2 broadcast/multicast datagrams dropped due to no socket
0 dropped due to full socket buffers
8611 delivered
8592 PCB hash misses
On Linux, netstat shows no packet drops with the same server and the same audio!
Any leads? Why might this be? A driver issue? The networking stack?
You need to specify how you are handling lost packets in your client.
If you lose packets, that means you have missing data in your audio stream, so your client has to "do something" where data is missing. Some options are:
- play silence (makes a cracking noise due to sharp envelop to 0)
- fade to silence
- estimate waveform by examining adjacent data
- play noise
You cannot misalign packets or play the stream as though the missing packets never existed. For example, suppose you get packets 1, 2, 3, 4, and 6; you are missing packet 5. You cannot play packet 4 and then immediately play packet 6: something has to happen to fill the space of packet 5.
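As a rough illustration of the first option (a minimal sketch: the 16-bit PCM format, the packet size, and the raw-PCM-to-stdout "playback" are all assumptions, not taken from the question):

    /* Sketch: substitute silence for missing packets so later packets
     * stay aligned in time. Late/duplicate packets and sequence-number
     * wraparound are ignored for brevity. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define SAMPLES_PER_PACKET 160   /* e.g. 20 ms of audio at 8 kHz */

    static void play_samples(const int16_t *buf, int n)
    {
        fwrite(buf, sizeof(int16_t), (size_t)n, stdout);  /* stand-in for the audio device */
    }

    static void play_packet(const int16_t *payload, uint16_t seq, uint16_t *expected)
    {
        int16_t silence[SAMPLES_PER_PACKET];
        memset(silence, 0, sizeof(silence));

        /* Fill the gap left by each missing sequence number with silence. */
        while (*expected != seq) {
            play_samples(silence, SAMPLES_PER_PACKET);
            (*expected)++;
        }
        play_samples(payload, SAMPLES_PER_PACKET);
        (*expected)++;
    }

    int main(void)
    {
        int16_t pkt[SAMPLES_PER_PACKET] = {0};
        uint16_t expected = 0;
        play_packet(pkt, 0, &expected);  /* seq 0 arrives */
        play_packet(pkt, 2, &expected);  /* seq 1 was lost: silence is inserted */
        return 0;
    }

Note that this crude version jumps abruptly to silence, which causes exactly the cracking noise mentioned above; that is why fading or waveform estimation sounds better.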
See this post for more info.
We want to receive packets from UDP sockets. The UDP packets have variable length, and we don't know how long they really are until we receive them (or rather, part of them: the length is written in the sixth byte).
We tried the function lrs_set_receive_option() with MarkerEnd, only to find that it doesn't help with this issue. The reason we want to receive packet by packet is that we need to respond to some packets by sending back user-defined UDP packets.
Does anybody know how to achieve that?
UPDATE
The LR version seems to be v10 or v11.
We need to respond to an incoming UDP packet by sending back a UDP packet immediately.
The UDP packet layout is something like this:
| orc code | packet length | Real DATA |
The issue is that we can't get LoadRunner to return the data one packet at a time: sometimes it returns many packets in one buffer, and sometimes it waits until a timeout even though an incoming packet is already sitting in the socket buffer. In the C programming world, by contrast, each call to recvfrom() on a UDP socket returns exactly one UDP packet, which is what we really want.
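For reference, this is the plain-C behaviour the question is after (a minimal sketch: the port number is arbitrary, and the length-in-the-sixth-byte layout is taken from the question above; error checking is omitted):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);             /* arbitrary example port */
        bind(sock, (struct sockaddr *)&addr, sizeof(addr));

        for (;;) {
            unsigned char buf[65536];
            struct sockaddr_in peer;
            socklen_t peerlen = sizeof(peer);
            /* One call == one datagram, no matter how many are queued. */
            ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                                 (struct sockaddr *)&peer, &peerlen);
            if (n >= 6) {
                unsigned len = buf[5];           /* "packet length" in the sixth byte */
                printf("datagram of %zd bytes, declared length %u\n", n, len);
                /* ... build the user-defined reply and sendto() the peer here ... */
            }
        }
    }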
If you need raw-socket support to intercept traffic at the packet level, then you will likely have to jump to a DLL virtual user built in Visual Studio with raw-socket support.
As to your question on UDP support: yes, a Winsock user supports both core transport types, UDP and TCP, TCP being the more common variant as it is connection-oriented. However, packet examination happens at layer 3 of the OSI model, for the carrier protocol IP. The ACK should arrive before you receive the data flow for use in your script. What you see in data.ws at the TCP and UDP level are assembled data flows.
Now, you are likely receiving a warning about a receive-buffer size mismatch against the recorded size, which is what has taken you down this path. There is an easy way to address it: if you construct your send buffer with the lrs_set_send_buffer() function, then whatever comes back will be accepted as correct, ignoring the previously recorded buffer size, without waiting for a match or a timeout before continuing.
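A hedged sketch of that approach (the lrs_* calls are quoted from memory of the Winsock-replay function reference, so verify them against your LR version's documentation; "socket0" and "buf0" are the usual recorder-generated descriptors):

    /* In the Action section of a Winsock virtual user: construct the
     * send buffer at runtime instead of replaying the recorded one, so
     * whatever comes back is accepted regardless of size. */
    char reply[3] = { 0x01, 0x02, 0x03 };            /* illustrative payload */
    lrs_set_send_buffer("socket0", reply, sizeof(reply));
    lrs_send("socket0", "buf0", LrsLastArg);         /* sends the buffer set above */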
My program contains a thread that waits for UDP messages; when a message is received, it runs some functions before going back to listening. I am worried about missing a message, so my question is something along the lines of: how long after a message has been sent is it still possible to read it? For example, if the message was sent while the thread was running the functions, could it still be read if the functions are short enough? I am looking for guidelines here, but an answer in microseconds would also be appreciated.
When your computer receives a UDP packet (and there is at least one program listening on the UDP port specified in that packet), the networking stack adds that packet's data to a fixed-size buffer that is associated with that socket and kept in the kernel's memory space. The packet's data will stay in that buffer until your program calls recv() to retrieve it.
The gotcha is that if your computer receives the UDP packet and there isn't enough free space left inside the buffer to fit the new UDP packet's data, the computer will simply throw the UDP packet away -- it's allowed to do that, since UDP doesn't make any guarantees that a packet will arrive.
So the amount of time your program has to call recv() before packets start getting thrown away will depend on the size of the socket's in-kernel packet buffer, the size of the packets, and the rate at which the packets are being received.
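To put rough, purely illustrative numbers on that: with a 64 kB receive buffer and 1 kB datagrams arriving at 1,000 packets per second, the buffer can absorb about 64 packets, so your program has on the order of 64 ms to get back to recv() before drops begin.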
Note that you can ask the kernel to make its receive-buffer size larger by calling something like this:
int bufSize = 64*1024; // Dear kernel: I'd like the buffer to be 64kB please! (note: SO_RCVBUF takes an int, not a size_t)
if (setsockopt(mySock, SOL_SOCKET, SO_RCVBUF, &bufSize, sizeof(bufSize)) != 0)
    perror("setsockopt(SO_RCVBUF)");
… and that might help you avoid dropped packets. If that's not sufficient, you'll need to either make sure your program goes back to recv() quickly, or possibly do your network I/O in a separate thread that doesn't get held off by processing.
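A minimal sketch of that separate-thread idea, assuming POSIX threads and an arbitrary example port (compile with -lpthread; the real hand-off to a worker queue is reduced to a printf here):

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Receiver thread: does nothing but recv() and hand packets off,
     * so slow processing elsewhere never stops the socket buffer from
     * being drained. */
    static void *rx_loop(void *arg)
    {
        int sock = *(int *)arg;
        char buf[65536];
        for (;;) {
            ssize_t n = recv(sock, buf, sizeof(buf), 0);
            if (n < 0) { perror("recv"); break; }
            /* Real code would enqueue the packet for a worker thread;
             * keep this loop as short as possible. */
            printf("received %zd bytes\n", n);
        }
        return NULL;
    }

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(7000);  /* arbitrary example port */
        bind(sock, (struct sockaddr *)&addr, sizeof(addr));

        pthread_t t;
        pthread_create(&t, NULL, rx_loop, &sock);
        pthread_join(t, NULL);
        return 0;
    }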
I'm just wondering why my UDP server/client pair is seeing 65% packet loss. It's just an echo service in which the client sends an empty packet when it reaches EOF, and the server resets itself when it gets that empty packet.
Not really homework, just frustrated because I can't get it right :(
Thanks!
Have you looked at UDP buffer overflows?
Here is some information on packet-loss myths, on how to detect UDP packet-loss rates on several platforms, and, last but by no means least, on how to mess with (err... I mean change) the kernel UDP buffer sizes on a few Unix platforms (caution is advisable).
There's no promise that UDP packets will ever be delivered. On congested networks, UDP packets may just be dropped in transit.
There are also routers that are configured to just drop empty UDP packets. See this random example of someone wanting this.
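If that turns out to be the culprit, one workaround (my suggestion, not from the linked post) is to signal EOF with a one-byte sentinel instead of a zero-length datagram:

    /* Sketch: signal end-of-stream with a 1-byte sentinel rather than
     * an empty datagram, which some routers drop. The value 0xFF and
     * the already-connected socket are assumptions for illustration. */
    #include <sys/socket.h>

    int send_eof(int sock)
    {
        const unsigned char eof_marker = 0xFF;  /* illustrative sentinel value */
        return send(sock, &eof_marker, 1, 0);   /* instead of send(sock, "", 0, 0) */
    }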