Is it normal to see no packet loss between a UDP client and server if both are on the same machine? I'm currently calculating packet loss by taking the difference between the byte counts returned by the sendto and recvfrom functions on the client side. Am I doing something wrong?
I would be very surprised to see any packet loss in that case. On the other hand, you are measuring loss the wrong way.
Remember that UDP is a packet oriented protocol, which means that what you send will be a packet, and what you receive will be a packet, and there will be no difference in the size of what you send and receive. If you send a 512-byte packet, the receiver will always receive the full 512-byte packet, or nothing at all.
That means you should count the number of times you call sendto and compare it with the number of times recvfrom returns a packet.
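For example, here is a sketch of counting datagrams rather than bytes in an echo-style test (the function name, the packet count, and the 2-second timeout are illustrative assumptions, not a prescription):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>

/* Send a batch of datagrams, then count how many echoes come back.
   One successful recvfrom() return equals exactly one packet. */
int count_loss(int sock, const struct sockaddr *srv, socklen_t slen)
{
    char buf[512];
    int sent = 0, received = 0;
    struct timeval tv = { 2, 0 };   /* give up waiting after 2 seconds */

    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
    memset(buf, 0, sizeof(buf));

    for (int i = 0; i < 1000; i++)
        if (sendto(sock, buf, sizeof(buf), 0, srv, slen) == sizeof(buf))
            sent++;

    while (recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL) > 0)
        received++;

    printf("sent %d, received %d, lost %d\n", sent, received, sent - received);
    return sent - received;
}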
Related
When I receive a packet with recv on Linux, does the kernel do the defragmentation, so that I get defragmented data? Or should I take care of it in user space?
When receiving UDP data via a socket of type SOCK_DGRAM, you'll only receive the complete datagram (assuming your input buffer is large enough to receive it).
Any IP fragmentation is handled transparently from userspace.
If you're using raw sockets, then you need to handle defragmentation yourself.
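On Linux you can also detect the case where your buffer was too small for the datagram: passing MSG_TRUNC makes recvfrom() report the full datagram length even when it didn't fit. A sketch (the 2048-byte buffer is an arbitrary choice):

#include <stdio.h>
#include <sys/socket.h>

/* Receive one datagram; with MSG_TRUNC, a return value larger than
   the buffer means the excess bytes were discarded (Linux-specific). */
ssize_t recv_checked(int fd)
{
    char buf[2048];
    ssize_t ret = recvfrom(fd, buf, sizeof(buf), MSG_TRUNC, NULL, NULL);
    if (ret > (ssize_t)sizeof(buf))
        fprintf(stderr, "datagram truncated: %zd bytes arrived\n", ret);
    return ret;
}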
IP fragmentation takes place at the IP level of the protocol stack. The TCP and UDP layers already receive packets reassembled from any fragments, so TCP can receive full segments (even though TCP tries never to send segments that would result in fragmented packets) and UDP receives full datagrams, preserving datagram boundaries.
Only if you use raw sockets will you receive raw packets (meaning fragments of IP packets). TCP gives you a reliable data stream (preserving data sequence, discarding duplicates, and retransmitting missing/faulty packets), and UDP gives you the datagram as it was sent, or nothing at all.
Of course, all these modules are in the kernel, so it's the kernel that reassembles packets. It's worth noting that once a packet is fragmented into pieces, it is not reassembled until it reaches its destination, so it's the IP layer at the destination that is responsible for reassembling the fragments.
When using recvfrom(2) to get a packet from the network, I get exactly one packet per call.
What is the maximum length of a TCP/UDP packet that I can get with this function?
There's no fixed limit in TCP, because it's a stream protocol, not a datagram protocol.
In UDP over IPv4, the limit is 65,507 bytes. From Wikipedia:
Length
This field specifies the length in bytes of the UDP header and UDP data. The minimum length is 8 bytes, the length of the header. The field size sets a theoretical limit of 65,535 bytes (8 byte header + 65,527 bytes of data) for a UDP datagram. However the actual limit for the data length, which is imposed by the underlying IPv4 protocol, is 65,507 bytes (65,535 − 8 byte UDP header − 20 byte IP header).
Using IPv6 jumbograms it is possible to have UDP datagrams of size greater than 65,535 bytes. RFC 2675 specifies that the length field is set to zero if the length of the UDP header plus UDP data is greater than 65,535.
Note that using extremely large UDP datagrams can be problematic. Few network links have such large MTUs, so the datagram will likely be fragmented. If any fragment is lost, the entire datagram will have to be resent by the application layer (if the application requires and implements reliability). TCP normally uses Path MTU Discovery to send the stream in segments that fit in the minimum MTU of all the links in the path; if a segment is lost, TCP can just retransmit the segments after that (or just the lost segment if Selective Acknowledgement is implemented, which most TCP implementations now offer).
recvfrom will always return exactly one packet for UDP. UDP packets can be up to 64 KB in size, give or take a few header bytes (65,507 bytes of data over IPv4). In practice, most UDP protocols never send that much data in a single packet, so the buffer size you pass to recvfrom can be much smaller, depending on what your protocol dictates.
For TCP, you typically use recv, not recvfrom, to read incoming data from a connected socket. As many will point out, TCP is a stream protocol, not a message/packet protocol like UDP. As such, recv will give you back a non-deterministic number of bytes, anywhere between 1 and the size of the buffer passed to the recv call. Always check the return value of recv; it's not guaranteed to give you any particular byte count.
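To make that concrete, here is the usual pattern for TCP: loop on recv until the byte count your protocol expects has accumulated (recv_all is a hypothetical helper name, not a library function):

#include <sys/types.h>
#include <sys/socket.h>

/* Read exactly `want` bytes from a connected TCP socket. recv() may
   return anything from 1 up to the space remaining, 0 on orderly
   shutdown, or -1 on error, so we must loop. */
ssize_t recv_all(int fd, char *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t n = recv(fd, buf + got, want - got, 0);
        if (n == 0)
            return (ssize_t)got;   /* peer closed the connection */
        if (n < 0)
            return -1;             /* error: inspect errno       */
        got += (size_t)n;
    }
    return (ssize_t)got;
}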
We want to receive packets from UDP sockets. The UDP packets have variable length, and we don't know how long they really are until we receive them (parts of them, exactly: the length is written in the sixth byte).
We tried the function lrs_set_receive_option with MarkerEnd, only to find it was no help on this issue. The reason we want to receive packet by packet is that we need to respond to some packets by sending back user-defined UDP packets.
Does anybody know how to achieve that?
UPDATE
The LR version seems to be v10 or v11.
We need to respond to an incoming UDP packet by sending back a UDP packet immediately.
The UDP packet may look like this:
| orc code | packet length | Real DATA |
The issue is that we can't get LoadRunner to return data for each packet: sometimes it returns many packets in one buffer, sometimes it waits until a timeout even though there is already an incoming packet in the socket buffer. In the C programming world, calling recvfrom on a UDP socket returns exactly one UDP packet per call, which is what we really want.
If you need raw socket support to intercept at the packet level then you are likely going to have to jump to a DLL virtual user in Visual Studio with the raw socket support.
As to your question on UDP support: yes, a Winsock user supports both core transport types, UDP and TCP, TCP being the more common variant as it is connection oriented. However, packet examination happens at layer 3 of the OSI model, for the carrier protocol IP. The ACK should come before you receive the data flow for your use in the script. You are looking at assembled data flows in data.ws when you work at the TCP and UDP level.
Now, you are likely receiving a warning about a receive-buffer size mismatch against the recorded size, which is what's taking you down this path. There is an easy way to address this: if you construct your send buffer using the lrs_set_send_buffer() function, then anything that comes back will be taken as correct, ignoring the previously recorded buffer size, so the script won't have to wait for a match or a timeout before continuing.
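A rough sketch of that idea in a Winsock vuser action; the socket name, buffer descriptor names, and packet bytes below are placeholders, and the exact lrs_* argument conventions should be checked against the documentation for your LR version:

/* Hand-build the send buffer so whatever comes back is accepted,
   instead of being compared against the recorded buffer size.
   The packet contents here are made up for illustration. */
char packet[8] = { 0x01, 0x00, 0x00, 0x08, 'D', 'A', 'T', 'A' };
lrs_set_send_buffer("socket0", packet, sizeof(packet));
lrs_send("socket0", "buf0", LrsLastArg);
lrs_receive("socket0", "buf1", LrsLastArg);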
I'm writing a UDP server/client application in which the server sends data and the client receives it. When a packet is lost, the client should send a NACK to the server. I set the socket to O_NONBLOCK so that I can notice when the client does not receive a packet:
if ((bytes = recvfrom(....)) != -1) {
    // do something
} else {
    // send nack
}
My problem is that if the server has not started sending packets, the client behaves as if a packet were lost and starts sending NACKs to the server (recvfrom fails when no data is available). I'd like some advice on how to distinguish between the two cases: the server not having started to send packets yet, versus the server sending but a packet really being lost.
You are using UDP. For this protocol it's perfectly OK to throw away packets if there is a need to do so, so it's not reliable in terms of "what is sent will arrive". What you have to do in your client is check whether all the packets you need have arrived, and if not, talk politely to your server to resend those packets you did not receive. Implementing this is not that easy.
If you have to use UDP to transfer a largish chunk of data, then design a small application-level protocol that would handle possible packet loss and re-ordering (that's part of what TCP does for you). I would go with something like this:
Datagrams small enough that, together with the IP and UDP headers, they fit within the MTU (say, 1024 bytes), to avoid IP fragmentation.
A fixed-length header for each datagram that includes the data length and a sequence number, so you can stitch the data back together and detect missed, duplicated, and re-ordered parts (a sketch of such a header follows this list).
Acknowledgements from the receiving side of what has been successfully received and put together.
Timeout and retransmission on the sending side when these acks don't come within appropriate time.
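A minimal sketch of such a header, assuming 4-byte sequence numbers and a 1024-byte datagram budget (both are illustrative choices, not requirements):

#include <stdint.h>

/* Hypothetical fixed-length header prepended to every datagram.
   Multi-byte fields should travel in network byte order (htonl/htons). */
struct msg_hdr {
    uint32_t seq;     /* sequence number: ordering, dedup, gap detection */
    uint16_t len;     /* payload bytes that follow this header           */
    uint16_t flags;   /* e.g. an ACK bit or a last-chunk bit             */
};

#define PAYLOAD_MAX (1024 - sizeof(struct msg_hdr))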
You have a loop calling either select() or poll() to determine if data has arrived; if so, you then call recvfrom() to read the data.
You can set a timeout for receiving data as follows:
#include <errno.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>

/* config.idletimeout and log_message() are assumed to come from the
   surrounding application. */
ssize_t
recv_timeout(int fd, void *buf, size_t len, int flags)
{
    ssize_t ret;
    struct timeval tv;
    fd_set rset;

    FD_ZERO(&rset);       /* init the descriptor set */
    FD_SET(fd, &rset);    /* add our socket to it    */

    tv.tv_sec = config.idletimeout;   /* e.g. 60 seconds */
    tv.tv_usec = 0;

    /* Returns as soon as fd becomes readable, or after the timeout.
       Note the first argument must be the highest-numbered fd plus one. */
    ret = select(fd + 1, &rset, NULL, NULL, &tv);
    if (ret == 0) {
        log_message(LOG_INFO, "Idle Timeout (after select)");
        return 0;
    } else if (ret < 0) {
        log_message(LOG_ERR,
                    "recv_timeout: select() error \"%s\". Closing connection (fd:%d)",
                    strerror(errno), fd);
        return -1;        /* the original had a bare `return;` here */
    }

    return recvfrom(fd, buf, len, flags, NULL, NULL);
}
When select() reports data ready, read() should normally return up to the maximum number of bytes you've specified, possibly including zero bytes (this is actually a valid thing to happen!), and it should never block after previously having reported readiness. The select(2) man page does add a caveat, though:
Under Linux, select() may report a socket file descriptor as "ready for reading", while nevertheless a subsequent read blocks. This could for example happen when data has arrived but upon examination has wrong checksum and is discarded. There may be other circumstances in which a file descriptor is spuriously reported as ready. Thus it may be safer to use O_NONBLOCK on sockets that should not block.
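Setting that flag is a couple of lines with fcntl; a minimal helper might look like this:

#include <fcntl.h>

/* Switch fd to non-blocking mode, preserving its existing flags. */
int set_nonblock(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}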
Look up the sliding window protocol.
The idea is that you divide your payload into chunks that each fit in a physical UDP packet, then number them. You can visualize the buffers as a ring of slots, numbered sequentially in some fashion, e.g. clockwise.
Then you start sending from 12 o'clock, moving to 1, 2, 3... In the process, you may (or may not) receive ACK packets from the server that contain the slot number of a packet you sent.
If you receive an ACK, then you can remove that packet from the ring and place the next unsent packet (one not already in the ring) in its slot.
If you receive a NAK for a packet you sent, it means that packet was received by the server with data corruption, and you then resend it from the ring slot reported in the NAK.
This class of protocol allows transmission over channels with data or packet loss (like RS-232, UDP, etc.). If your underlying transmission protocol does not provide checksums, then you need to add a checksum to each ring packet you send, so the server can check its integrity and report back to you.
ACK and NAK packets from the server can also be lost. To handle this, you need to associate a timer with each ring slot; if you receive neither an ACK nor a NAK for a slot before the timer reaches a timeout limit you set, then you retransmit the packet and reset the timer.
Finally, to detect fatal connection loss (i.e. the server went down), you can establish a maximum timeout count for the ring as a whole: just count how many consecutive single-slot timeouts you get, and if this value exceeds the maximum you have set, consider the connection lost.
Obviously, this protocol class requires dataset assembly on both sides based on packet numbers, since packets may not be sent or received in sequence. The 'ring' helps with this, since packets are removed only after successful transmission, and on the receiving side, only when the previous packet number has already been removed and appended to the growing dataset. However, this is only one strategy; there are others.
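A bare-bones sketch of one such ring (the slot count, payload size, and field names are all assumptions):

#include <stdint.h>
#include <time.h>

#define RING_SLOTS  16
#define PAYLOAD_MAX 1024

/* One "clock position" on the ring. The payload is kept until the
   slot is ACKed, so it can be retransmitted on a NAK or a timeout. */
struct ring_slot {
    uint32_t seq;                /* packet sequence number       */
    size_t   len;                /* payload bytes actually used  */
    char     data[PAYLOAD_MAX];  /* copy kept for retransmission */
    time_t   sent_at;            /* drives the per-slot timeout  */
    int      in_use;             /* 1 while awaiting ACK/NAK     */
};

struct ring_slot ring[RING_SLOTS];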
Hope this helps.
I'm just wondering why my UDP server/client pair is getting 65% packet loss. It's just an echo service: the client sends an empty packet when it reaches EOF, and the server resets itself when it gets that empty packet.
Not really homework, just frustrated because I can't get it right :(
Thanks!
Have you looked at UDP buffer overflows?
Here is information on packet loss myths, on how to detect UDP packet loss rates on several platforms, and, last but by no means least, on how to mess with (err... I mean change) the kernel UDP buffer sizes on a few Unix platforms (caution is advisable).
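From inside the program you can also grow the socket's own receive buffer and verify what the kernel actually granted; a small sketch (sock is assumed to be an open UDP socket, and the 1 MiB request is arbitrary):

#include <stdio.h>
#include <sys/socket.h>

/* Request a 1 MiB receive buffer; the kernel clamps the request to its
   configured maximum, so read the value back to see what you got. */
void grow_rcvbuf(int sock)
{
    int want = 1 << 20, got = 0;
    socklen_t len = sizeof(got);
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &want, sizeof(want));
    getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &got, &len);
    printf("receive buffer is now %d bytes\n", got);
}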
There's no promise that UDP packets will ever be delivered. On congested networks, UDP packets may just be dropped in transit.
There are also routers that are configured to just drop empty UDP packets. See this random example of someone wanting this.