Echo client with over 65% packet loss - C

I'm just wondering why my UDP server/client pair is seeing 65% packet loss. It's just an echo service: the client sends an empty packet when it reaches EOF, and the server resets itself when it gets that empty packet.
Not really homework, just frustrated because I can't get it right :(
Thanks!

Have you looked at UDP buffer overflows?
Here is information on packet loss myths, on how to detect UDP packet loss rates on several platforms, and, last but by no means least, on how to mess with (err... I mean change) the kernel UDP buffer sizes on a few Unix platforms (caution is advisable).
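On the application side, here is a minimal sketch (the helper name raise_rcvbuf is mine, not from the linked material) of requesting a larger receive buffer and checking what the kernel actually granted:

#include <stdio.h>
#include <sys/socket.h>

// Sketch: ask the kernel for a bigger UDP receive buffer and verify what
// it actually granted. On Linux the request is capped by the
// net.core.rmem_max sysctl, so always check the value you get back.
int raise_rcvbuf(int sock, int bytes)
{
    int granted;
    socklen_t optlen = sizeof(granted);

    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) < 0) {
        perror("setsockopt(SO_RCVBUF)");
        return -1;
    }
    if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &granted, &optlen) < 0) {
        perror("getsockopt(SO_RCVBUF)");
        return -1;
    }
    printf("requested %d bytes, kernel granted %d\n", bytes, granted);
    return granted;
}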

There's no promise that UDP packets will ever be delivered. On congested networks, UDP packets may just be dropped in transit.
There are also routers that are configured to simply drop empty UDP packets; see this example of someone asking for exactly that behavior.
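Incidentally, a zero-length datagram is legal UDP: on the receiving side, recvfrom() returning 0 means an empty datagram arrived, not end-of-stream as it would on TCP. A minimal sketch, assuming sock, buf, and the address variables are already set up:

// Client: signal EOF with an empty datagram.
sendto(sock, "", 0, 0, (struct sockaddr *)&server_addr, sizeof(server_addr));

// Server: a return value of 0 from recvfrom() on a UDP socket is a valid
// empty datagram, not end-of-stream as it would be on TCP.
ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                     (struct sockaddr *)&client_addr, &addrlen);
if (n == 0) {
    // empty packet received: reset the server state here
}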

Related

How to simulate UDP issues to develop a reliable application? [duplicate]

I am currently writing server software that communicates with multiple Arduino boards. Due to the hardware, I am using the UDP protocol. I have a fairly simple mechanism that resends packets in most of the cases where they get lost. I have two questions now:
How probable is it that UDP packets get lost in a network with no Internet access, about 20 Arduinos, and one computer? Is it even necessary to have a resend mechanism?
Is there a way I can simulate UDP packet loss in this network to check that the resend mechanism is working?
How probable is it that UDP packets get lost in a network with no Internet access, about 20 Arduinos, and one computer?
The probability is 100% that sooner or later a packet will be dropped.
If you want a more detailed statistic, such as the probability of a packet being dropped within a particular period of time, the only real way to know is to try it and find out (e.g. by putting sequence numbers in the packets so that the receivers can detect a dropped packet by noting the skipped sequence number). The probability will depend very much on the size of the packets, the rate at which they are sent, the CPU speed of the receivers, what other tasks the receivers are spending CPU time on, the quality of your Ethernet switch, the quality of your Ethernet cables, the phase of the moon, etc.
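A minimal sketch of that sequence-number bookkeeping on the receiver (the 32-bit header and all names are illustrative; it assumes the sender prepends a counter in network byte order with htonl()):

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   // ntohl()

static uint32_t expected_seq = 0;   // next sequence number we expect
static uint32_t dropped      = 0;   // running count of missed packets

void note_packet(const unsigned char *pkt, size_t len)
{
    uint32_t seq;
    if (len < sizeof(seq))
        return;                          // too short to carry the header
    memcpy(&seq, pkt, sizeof(seq));
    seq = ntohl(seq);
    if (seq > expected_seq)
        dropped += seq - expected_seq;   // gap: that many packets were lost
    expected_seq = seq + 1;              // (ignores reordering for brevity)
}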
Is it even necessary to have a resend mechanism?
That depends on what the consequences of a dropped packet would be. For some applications (e.g. streaming audio or video, or audio-metering data), dropping a packet is no big deal: you just ignore the fact that some data was lost and continue with the next packet as usual. For other applications (e.g. file transfer), the loss of a packet means the loss of data the receiver needs, so you'll want some way to recover from that loss, e.g. by detecting it and triggering a resend; otherwise the entire transfer will fail (or at least the receiver will end up with only a partial file).
Is there a way I can simulate UDP packet loss in this network to check that the resend mechanism is working?
Sure, just put some logic into the receivers so that they occasionally pretend to not have received a packet:
int numBytesReceived = recv(...);
if ((rand() % 100) == 0)   // Simulate a 1% packet loss rate
{
    printf("Pretending to have dropped a packet!\n");
}
else
{
    // Handle the incoming packet as usual
}
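One caveat with the sketch above: rand() produces the same sequence every run unless you seed it, so call srand() once at startup (e.g. srand(time(NULL))) if you don't want each test run to drop exactly the same packets.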

RTP packet drop issue(?)

I have a client and a server, where the server sends audio data in RTP packets encapsulated inside UDP, and the client receives the packets. As UDP has no flow control, the client checks the sequence number of each packet and reorders packets that arrive out of order.
My question is: I see the client never receives packets with certain sequence numbers, as shown in the Wireshark capture below.
If this is the case, the audio is (obviously) distorted when I play it on the client side. How do I avoid that? What factors affect this? Should I set the socket buffer size to a large value?
Thanks in advance.
EDIT 1: This issue occurs on the QNX platform, not on Linux.
I looked at the output of "netstat -p udp" to see whether it gives any hint about why packets are getting dropped on QNX but not on Linux.
QNX:
SOCK=/dev/d_usb3/ netstat -p udp
udp:
        8673 datagrams received
        0 with incomplete header
        60 with bad data length field
        0 with bad checksum
        0 dropped due to no socket
        2 broadcast/multicast datagrams dropped due to no socket
        0 dropped due to full socket buffers
        8611 delivered
        8592 PCB hash misses
On Linux, netstat shows no packet drops with the same server and the same audio!
Any leads on why this might be? A driver issue? The networking stack?
You need to specify how you are handling lost packets in your client.
If you lose packets, that means you have missing data in your audio stream, so your client has to "do something" where the data is missing. Some options are:
- play silence (this makes a crackling noise due to the sharp envelope down to 0)
- fade to silence
- estimate waveform by examining adjacent data
- play noise
You cannot misalign packets or simply skip the missing ones. For example, suppose you get packets 1, 2, 3, 4, and 6. You are missing packet 5. You cannot play packet 4 and then play packet 6; something has to happen to fill the space of packet 5.
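A minimal sketch of the first option, playing silence for the missing packet (get_packet(), play_samples(), and FRAME_SAMPLES are illustrative names, and as noted above, raw zeros will click, so fading toward silence is the usual refinement):

// Playout loop: when a sequence number is missing, substitute zeroed
// samples of the same duration so the timing stays intact.
int16_t silence[FRAME_SAMPLES] = {0};
uint32_t seq;

for (seq = first_seq; seq <= last_seq; seq++) {
    const struct packet *p = get_packet(seq);   // NULL if packet 'seq' was lost
    if (p != NULL)
        play_samples(p->samples, FRAME_SAMPLES);
    else
        play_samples(silence, FRAME_SAMPLES);   // fill the hole; never skip it
}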
See this post for more info.

Path of the UDP packet from kernel to user-space in Linux

I'm maintaining a network driver and I have a problem with data loss. The effect is that when I send, for example, ICMP or UDP pings using ping or nping, some of the UDP/ICMP packets are lost.
I'm sure that on the ping/nping side of the transmission the reply is received by my driver and the kernel (tcpdump shows the incoming UDP or ICMP reply packets).
But the ping/nping application sometimes reports that, for example, 80% of the packets were lost. I suspect those packets are lost somewhere between the kernel and user space.
I know that for UDP there is the udp_rcv() function that handles UDP packets, but I don't know which function comes next on the path that delivers a packet to the user-space application.
The Linux kernel is version 3.3.8.
My question is: how do I trace the whole path of a packet from my driver to the user-space socket buffer?
udp_rcv() is a callback that is registered in struct net_protocol as the .handler.
You can either look at how this handler field is used in the structure, or check whether some error occurs: there is also an err_handler callback, and maybe the packet loss happens there and the error handler is called.
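For orientation, the registration in net/ipv4/af_inet.c of 3.x-era kernels looks roughly like this (abbreviated):

static const struct net_protocol udp_protocol = {
    .handler     = udp_rcv,      // entry point for incoming UDP over IPv4
    .err_handler = udp_err,      // invoked for ICMP errors, not normal loss
    /* ... */
};

From udp_rcv() the usual 3.x path is roughly __udp4_lib_rcv() -> udp_queue_rcv_skb() -> sock_queue_rcv_skb(), which places the skb on the socket's receive queue; the application's recvfrom() then pulls it off via udp_recvmsg(). A drop caused by a full socket receive buffer increments the RcvbufErrors counter in /proc/net/snmp (also shown by netstat -su), which is worth checking in your case.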
P.S. Remember that UDP does not guarantee 100% transmission success, and one lost packet out of 100 might be expected behavior. (:

Streaming audio over UDP in C

I'm writing a program with a client and a server, and I've almost got it working.
At the moment I can run the server on a port, and the client on the same port with the server's IP address and the name of the .wav file that I want to play.
Now what I'd like to do is add a delay between each sendto() so that the client receives the packets and reads them properly. Without that, the client receives many packets at once and loses many of them.
So could someone tell me how this works in UDP, and how to do that?
add a delay between each sendto()
I believe that you are asking how to put a small delay between each sendto(). If you open a raw WAV file and send bytes, there is a good chance that the data will reach the client much faster than it can play it. If you want to stream data at the same rate as it is played, send the data in chunks, then let the client request the next chunk.
If that is not an option, you can send a chunk of data (e.g. 20 ms of audio), then let the thread sleep for a little less than 20 ms before sending the next chunk. Sleeps are kind of a hack; some sort of audio callback would be better on the server. The bottom line is that your client's buffer has to be big enough to consume the amount of data your server is sending.
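Here is a minimal sketch of that sleep-based pacing, assuming fp, sock, and dest are already set up (CHUNK_BYTES must match 20 ms of your audio format, e.g. 1764 bytes for 44.1 kHz 16-bit mono):

#include <stdio.h>
#include <time.h>
#include <sys/socket.h>

#define CHUNK_BYTES 1764   // 20 ms at 44.1 kHz, 16-bit mono; adjust to your format

// Pacing loop: read ~20 ms of audio, send it, sleep a bit less than 20 ms.
unsigned char chunk[CHUNK_BYTES];
size_t n;
struct timespec pause = { 0, 18 * 1000 * 1000 };   // ~18 ms in nanoseconds

while ((n = fread(chunk, 1, sizeof(chunk), fp)) > 0) {
    sendto(sock, chunk, n, 0, (struct sockaddr *)&dest, sizeof(dest));
    nanosleep(&pause, NULL);   // crude pacing; a real audio clock is better
}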
without that the client receives many packets at once and loses many of them
I believe that you are asking how to deal with varying packet inter-arrival rates, packet losses, and out-of-order packets. It sounds like you were simply sending packets at a faster rate than your client could handle. You might need a larger buffer on the client.
In any case, with UDP/IP you have the following scenarios:
- lost packets
- packets arriving out of order
- packets arriving in bursts (each packet will not arrive exactly X ms apart)
To deal with this, you minimally need what is known as a dejitter buffer: a buffer that collects packets as they arrive and inserts them, typically, into a ring buffer. The buffer has to be large enough to hold the packets your server is sending, since your client may consume packets from the buffer more slowly than the server sends them (or vice versa). To get packets into the right order and deal with losses, you have to detect them; you can detect losses and out-of-order arrivals by simply numbering each packet that is sent. As packets arrive, you can put them into the buffer at the correct location. If a packet is lost, you need to handle that with some sort of loss concealment (playing silence, estimating the lost packet, etc.), which is beyond the scope of this question.
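A minimal sketch of such a dejitter buffer, as a ring indexed by sequence number (every name here is illustrative, not from the question):

#include <stdint.h>
#include <string.h>

#define RING_SLOTS  64     // must cover the worst-case jitter window
#define PAYLOAD_MAX 1472   // typical max UDP payload over Ethernet

struct slot {
    int           filled;
    uint32_t      seq;
    size_t        len;
    unsigned char data[PAYLOAD_MAX];
};

static struct slot ring[RING_SLOTS];

// Arriving packets are slotted by sequence number, so out-of-order
// arrivals land in the right place automatically.
void ring_put(uint32_t seq, const unsigned char *buf, size_t len)
{
    struct slot *s = &ring[seq % RING_SLOTS];
    s->filled = 1;   // may overwrite a very late packet
    s->seq = seq;
    s->len = len;
    memcpy(s->data, buf, len < PAYLOAD_MAX ? len : PAYLOAD_MAX);
}

// Returns 1 and copies the payload if packet 'seq' arrived, 0 if it is
// (still) missing, in which case the caller should conceal the loss.
int ring_get(uint32_t seq, unsigned char *out, size_t *len)
{
    struct slot *s = &ring[seq % RING_SLOTS];
    if (!s->filled || s->seq != seq)
        return 0;
    memcpy(out, s->data, s->len);
    *len = s->len;
    s->filled = 0;
    return 1;
}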
The RTP protocol is designed for streaming; it is an application protocol that works over UDP.
Since you're using UDP, which is connectionless, you don't really have a way to control the flow of packets unless you implement some kind of acknowledgement mechanism... at which point you might as well use TCP, because it already has that built in.
Although I don't have much experience in network programming, this looks a bit more complicated than it might seem at first glance. UDP is connectionless; that speeds things up a lot, but there is a price to pay: off the top of my head, packets can get lost or arrive out of order.
Those are situations you need to handle on the client end. Your client needs to be designed so that it accepts packets as they arrive at an arbitrary rate, skips over those that fail to arrive within a certain time (for live streaming; for buffered playback that doesn't matter), and takes order into account, which means that each packet needs to carry information about its position relative to the previous packets.

Can LoadRunner Receive Data via UDP Packets?

We want to receive packets from UDP sockets. The UDP packets have variable length, and we don't know how long they really are until we receive them (more precisely, until we receive part of them: the length is written in the sixth byte).
We tried the function lrs_set_receive_option with MarkerEnd, only to find that it doesn't help with this issue. The reason we want to receive whole packets is that we need to respond to some packets by sending back user-defined UDP packets.
Does anybody know how to achieve that?
UPDATE
The LoadRunner version seems to be v10 or v11.
We need to respond to an incoming UDP packet by sending back a UDP packet immediately.
The UDP packet may look like this:
| orc code | packet length | Real DATA |
The issue is that we can't make LoadRunner return data for each packet: sometimes it returns many packets in one buffer, and sometimes it waits until a timeout even though an incoming packet is already sitting in the socket buffer. In the C world, by contrast, a recvfrom() call on a UDP socket returns exactly one UDP packet per call, which is what we really want.
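For illustration, the plain-C behavior we are after is something like this (peer, reply, and replylen are placeholders):

unsigned char pkt[65535];
struct sockaddr_in peer;
socklen_t peerlen = sizeof(peer);

// One call, one datagram: recvfrom() never merges or splits UDP packets.
ssize_t n = recvfrom(sock, pkt, sizeof(pkt), 0,
                     (struct sockaddr *)&peer, &peerlen);
if (n >= 6) {
    unsigned len = pkt[5];   // "packet length ... in the sixth byte"
    // build the user-defined reply and send it straight back
    sendto(sock, reply, replylen, 0, (struct sockaddr *)&peer, peerlen);
}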
If you need raw socket support to intercept at the packet level, then you will likely have to jump to a DLL virtual user in Visual Studio with raw socket support.
As to your question on UDP support: yes, a Winsock user supports both core transport types, UDP and TCP, TCP being the more common variant as it is connection-oriented. However, packet examination happens at layer 3 of the OSI model, for the carrier protocol IP. The ACK should come before you receive the data flow for use in your script. When you drop to the TCP and UDP level, you are looking at assembled data flows in data.ws.
Now, you are likely receiving a warning about a receive-buffer size mismatch against the recorded size, which is what is taking you down this path. There is an easy way to address this: if you construct your send buffer using the lrs_set_send_buffer() function, then whatever comes back will be taken as correct, ignoring the previously recorded buffer size, so the script does not have to wait for a match or a timeout before continuing.
