C UDP recvfrom client side

I'm writing a UDP server/client application in which the server sends data and the client
receives it. When a packet is lost, the client should send a NACK to the server. I set the socket to
O_NONBLOCK so that I can notice when the client does not receive a packet:
if ((bytes = recvfrom(....)) != -1) {
    /* do something */
} else {
    /* send nack */
}
My problem is that if the server has not started sending packets, the client behaves as if a
packet were lost and starts sending NACKs to the server (recvfrom() fails when no data is available). I would like some advice on how to distinguish between the two cases: the server has not started sending yet, versus the server is sending but a packet has really been lost.

You are using UDP. For this protocol it's perfectly OK to throw away packets if there is a need to do so, so it's not reliable in the sense of "what is sent will arrive". What you have to do in your client is check whether all the packets you need have arrived, and if not, politely ask your server to resend the ones you did not receive. Implementing this is not that easy.
If you have to use UDP to transfer a largish chunk of data, then design a small application-level protocol that would handle possible packet loss and re-ordering (that's part of what TCP does for you). I would go with something like this:
Datagrams less than the MTU (including IP and UDP headers) in size (say 1024 bytes), to avoid IP fragmentation.
A fixed-length header for each datagram that includes the data length and a sequence number, so you can stitch the data back together and detect missed, duplicate, and re-ordered parts (a sketch of such a header follows this list).
Acknowledgements from the receiving side of what has been successfully received and put together.
Timeout and retransmission on the sending side when these acks don't come within appropriate time.
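A minimal sketch of such a header, assuming illustrative field names and a 1024-byte payload limit (neither is prescribed by the answer above):

#include <stdint.h>

#define MAX_PAYLOAD 1024            /* keep each datagram well under the MTU */

/* Hypothetical application-level header; field names are illustrative. */
struct app_header {
    uint32_t seq;       /* sequence number, sent in network byte order */
    uint16_t len;       /* payload length in bytes, network byte order */
    uint16_t type;      /* e.g. DATA, ACK, NACK */
};

struct app_packet {
    struct app_header hdr;
    char payload[MAX_PAYLOAD];
};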
You have a loop calling either select() or poll() to determine whether data has arrived; if so, you then call recvfrom() to read the data.
You can set a timeout for receiving data as follows:
ssize_t
recv_timeout(int fd, void *buf, size_t len, int flags)
{
    ssize_t ret;
    struct timeval tv;
    fd_set rset;

    // init set
    FD_ZERO(&rset);
    // add to set
    FD_SET(fd, &rset);

    // this is set to 60 seconds
    tv.tv_sec = config.idletimeout;
    tv.tv_usec = 0;

    // returns 0 if the timeout expires before any data arrives
    ret = select(fd + 1, &rset, NULL, NULL, &tv);
    if (ret == 0) {
        log_message(LOG_INFO, "Idle Timeout (after select)");
        return 0;
    } else if (ret < 0) {
        log_message(LOG_ERR,
                    "recv_timeout: select() error \"%s\". Closing connection (fd:%d)",
                    strerror(errno), fd);
        return -1;
    }

    ret = recv(fd, buf, len, flags);
    return ret;
}
This means that if select() reports data ready, read() should normally return up to the maximum number of bytes you specified, possibly including zero bytes (this is actually a valid thing to happen!), but it should never block after readiness has previously been reported. Note, however, the following caveat:
Under Linux, select() may report a socket file descriptor as "ready for reading", while nevertheless a subsequent read blocks. This could for example happen when data has arrived but upon examination has the wrong checksum and is discarded. There may be other circumstances in which a file descriptor is spuriously reported as ready. Thus it may be safer to use O_NONBLOCK on sockets that should not block.
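If you do take the O_NONBLOCK route, here is a minimal sketch of switching a socket into non-blocking mode with fcntl() (the helper name is illustrative):

#include <fcntl.h>

/* Put an existing socket into non-blocking mode; returns 0 on success, -1 on error. */
int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

With O_NONBLOCK set, recvfrom() returns -1 with errno set to EAGAIN or EWOULDBLOCK when no data is available yet, which is the case the question needs to distinguish from a real error.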

Look up the sliding window protocol.
The idea is that you divide your payload into packets that fit in a physical UDP packet, then number them. You can visualize the buffers as a ring of slots, numbered sequentially in some fashion, e.g. clockwise.
Then you start sending from 12 o'clock, moving to 1, 2, 3... In the process, you may (or may not) receive ACK packets from the server that contain the slot number of a packet you sent.
If you receive an ACK, then you can remove that packet from the ring and place in that slot the next unsent packet which is not already in the ring.
If you receive a NAK for a packet you sent, it means that packet was received by the server with data corruption, and you resend it from the ring slot reported in the NAK.
This protocol class allows transmission over channels with data or packet loss (like RS232, UDP, etc). If your underlying data transmission protocol does not provide checksums, then you need to add a checksum for each ring packet you send, so the server can check its integrity, and report back to you.
ACK and NAK packets from the server can also be lost. To handle this, you need to associate a timer with each ring slot, and if you don't receive either an ACK or a NAK for a slot before the timer reaches a timeout limit you set, then you retransmit the packet and reset the timer.
Finally, to detect fatal connection loss (i.e. server went down), you can establish a maximum timeout value for all your packets in the ring. To evaluate this, you just count how many consecutive timeouts you have for single slots. If this value exceeds the maximum you have set, then you can consider the connection lost.
Obviously, this protocol class requires dataset assembly on both sides based on packet numbers, since packets may not be sent or received in sequence. The 'ring' helps with this, since packets are removed only after successful transmission, and on the receiving side, only when the previous packet number has already been removed and appended to the growing dataset. However, this is only one strategy, there are others.
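A rough sketch of the per-slot bookkeeping and the timeout scan described above, assuming illustrative names, a 16-slot ring, and a fixed 2-second retransmit timeout:

#include <stdint.h>
#include <sys/socket.h>
#include <time.h>

#define RING_SLOTS       16   /* window size; illustrative */
#define SLOT_TIMEOUT_SEC  2   /* per-slot retransmit timeout; illustrative */

struct ring_slot {
    uint32_t seq;             /* sequence number of the packet in this slot */
    size_t   len;             /* payload length */
    char     data[1024];      /* payload kept until the slot is ACKed */
    time_t   sent_at;         /* time of the last (re)transmission */
    int      in_use;          /* 1 while waiting for an ACK or NAK */
};

/* Walk the ring and retransmit anything whose timer has expired. */
void retransmit_expired(int sock, struct ring_slot ring[],
                        const struct sockaddr *dst, socklen_t dstlen)
{
    time_t now = time(NULL);
    for (int i = 0; i < RING_SLOTS; i++) {
        if (ring[i].in_use && now - ring[i].sent_at >= SLOT_TIMEOUT_SEC) {
            sendto(sock, ring[i].data, ring[i].len, 0, dst, dstlen);
            ring[i].sent_at = now;   /* restart the slot timer */
        }
    }
}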
Hope this helps.

Related

Who takes care of TCP message order?

I listen on a TCP socket in Linux with recv or recvfrom.
Who takes care that I get the TCP packets in the right order?
Does the kernel take care of it, so that if packet 1 arrives after packet 2 the kernel will drop both / hold packet 2 until packet 1 arrives?
Or do I need to take care of the ordering of TCP packets in user space?
On Linux based systems in any normal scenario, this is handled by the kernel.
You can find the source code here, and here's an abridged version:
/* Queue data for delivery to the user.
 * Packets in sequence go to the receive queue.
 * Out of sequence packets to the out_of_order_queue.
 */
if (TCP_SKB_CB(skb)->seq == tp->rcv_nxt) {
    /* packet is in order */
}

if (!after(TCP_SKB_CB(skb)->end_seq, tp->rcv_nxt)) {
    /* already received */
}

if (!before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt + tcp_receive_window(tp)))
    goto out_of_window;

if (before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt)) {
    /* Partial packet, seq < rcv_next < end_seq */
}

/* append to out-of-order queue */
tcp_data_queue_ofo(sk, skb);
along with the actual implementation that does the reordering using RB trees.
To quote Wikipedia,
At the lower levels of the protocol stack, due to network congestion, traffic load balancing, or unpredictable network behaviour, IP packets may be lost, duplicated, or delivered out of order. TCP detects these problems, requests re-transmission of lost data, rearranges out-of-order data, [...]
It's an inherent property of the protocol that you will receive the data in the correct order (or not at all).
Note that TCP is a stream protocol, so you can't even detect packet boundaries. A call to recv/recvfrom may return a portion of a packet, and it may return bytes that came from more than one packet.
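Because of that, an application that needs record boundaries has to frame messages itself, for example with a length prefix. A minimal sketch, assuming a 4-byte big-endian length prefix and hypothetical helper names:

#include <arpa/inet.h>
#include <stdint.h>
#include <sys/socket.h>

/* Read exactly `len` bytes, looping over partial recv() results.
 * Returns 0 on success, -1 on error or premature EOF. */
static int read_exact(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)
            return -1;
        p   += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Receive one length-prefixed message: 4-byte big-endian length, then payload. */
static int read_message(int fd, char *buf, size_t bufsize, size_t *msglen)
{
    uint32_t len_net;
    if (read_exact(fd, &len_net, sizeof(len_net)) < 0)
        return -1;
    uint32_t len = ntohl(len_net);
    if (len > bufsize)
        return -1;              /* message too large for the caller's buffer */
    if (read_exact(fd, buf, len) < 0)
        return -1;
    *msglen = len;
    return 0;
}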

How does TCP datagram arrive at a client computer?

I am trying to understand some of the concepts regarding TCP data transmission.
Suppose we are using a socket (in C) to send and receive HTTP GET requests for a website. Regarding the receiving end, what I have seen is the implementation below. Basically, a response buffer is created and filled up iteratively.
memset(response, 0, sizeof(response));
total = sizeof(response) - 1;
received = 0;
while (received < total) {
    bytes = recv(sockfd, response + received, total - received, 0);
    if (bytes < 0)
        error("ERROR reading response from socket");
    if (bytes == 0)
        break;
    received += bytes;
}
The following two things weren't very clear to me:
Where does the response get data from, cache from the OS or data directly from the website? I haven't learnt operating systems, which makes it difficult for me to comprehend the buffering process.
Also, theoretically TCP would check for transmission loss, where does this happen? When I do the socket programming, I didn't see any handlers regarding transmission loss, is it automatically handled by socket?
Where does the response get data from, cache from the OS or data directly from the website? I haven't learnt operating systems, which makes it difficult for me to comprehend the buffering process.
The TCP packets are received from the network device (Ethernet card, Wi-Fi adapter, etc), and their payload-data is placed into a temporary buffer inside the TCP/IP stack. When your program calls recv(), some or all of that data is copied from the temporary buffer into your program's buffer (response).
The TCP/IP stack will not do any caching of data other than what is described above. (e.g. if a web browser wants to cache a local copy of the web page so that it won't have to download it a second time, it will be up to the web browser itself to do that at the application level; the TCP/IP stack and the OS will not do that on their own)
Also, theoretically TCP would check for transmission loss, where does this happen? When I do the socket programming, I didn't see any handlers regarding transmission loss, is it automatically handled by socket?
It is handled transparently inside the TCP stack. In particular, each TCP packet has a checksum and a sequence-number included in its header, and the TCP stack checks the sequence number of each packet it receives, to make sure that it matches the next number in the sequence (relative to the previous packet it received from the same TCP stream). If it is not the expected next-packet-number, then the TCP stack knows that a packet was lost somehow, and it responds by sending a request to the remote computer that the dropped packet(s) be resent. Note that the TCP stack may have to drop subsequent packets as necessary until the originally-expected sequence can be resumed, because it is required to deliver the payload data to your application in the exact order in which it was sent (i.e. it isn't allowed to deliver "later" bytes before "earlier" bytes, even if some of the "earlier" bytes got lost and had to be retransmitted).
Where does the response get data from, cache from the OS? I haven't learnt operating systems, which makes it difficult for me to comprehend the buffering process.
Maybe, maybe not; as the user of the feature you don't need to care. All of this is guaranteed by the TCP socket protocol. However, if you forget to read from the socket, its receive buffer will fill up; there is a limit to how much data can be queued.
Also, theoretically TCP would check for transmission loss, where does this happen? When I do the socket programming, I didn't see any handlers regarding transmission loss, is it automatically handled by socket?
Yes. You don't need to worry about that. Beautiful, isn't it?
With respect to your code, I anticipate some problems:
// not necessary, as you should only read the bytes affected by recv()
// memset(response, 0, sizeof(response));

// should declare total and received here
size_t total = sizeof(response) - 1;
size_t received = 0;

while (received < total) {
    // should declare bytes only here
    ssize_t bytes = recv(sockfd, response + received, total - received, 0);
    if (bytes < 0)
        error("ERROR reading response from socket");
    if (bytes == 0)
        break;
    received += (size_t)bytes;
}

// as you want to read a string, don't forget
response[received] = '\0';

Is TCP not reliable?

I believe TCP is reliable. If write(socket, buf, buf_len) and close(socket) return without error, the receiver will receive exactly the same data as buf, with length buf_len.
But this article says TCP is not reliable.
A:
sock = socket(AF_INET, SOCK_STREAM, 0);
connect(sock, &remote, sizeof(remote));
write(sock, buffer, 1000000); // returns 1000000
close(sock);
B:
int sock = socket(AF_INET, SOCK_STREAM, 0);
bind(sock, &local, sizeof(local));
listen(sock, 128);
int client = accept(sock, &local, locallen);
write(client, "220 Welcome\r\n", 13);

int bytesRead = 0, res;
for (;;) {
    res = read(client, buffer, 4096);
    if (res < 0) {
        perror("read");
        exit(1);
    }
    if (!res)
        break;
    bytesRead += res;
}
printf("%d\n", bytesRead);
Quiz question - what will program B print on completion?
A) 1000000
B) something less than 1000000
C) it will exit reporting an error
D) could be any of the above
The right answer, sadly, is 'D'. But how could this happen? Program A reported that all data had been sent correctly!
If this article is true, I have to change my mind. But I am not sure if this article is true or not.
Is this article true?
TCP is reliable (at least when the lower-level protocols are), but programmers may use it in an unreliable way.
The problem here is that a socket should not be closed before all sent data has been correctly received at the other side: the close may tear down the connection while the last data is still in transit.
The correct way to ensure proper reception by the peer is a graceful shutdown.
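A minimal sketch of such a graceful shutdown on the sending side (the helper name is illustrative; it assumes the peer closes its end once it has read everything):

#include <sys/socket.h>
#include <unistd.h>

/* Call after the last write(): announce EOF, then wait for the peer to finish. */
void close_gracefully(int sock)
{
    char buf[256];

    shutdown(sock, SHUT_WR);              /* send FIN: "no more data from me" */

    /* Drain until read() returns 0, i.e. the peer has closed its side,
     * which implies it has received everything we sent. */
    while (read(sock, buf, sizeof(buf)) > 0)
        ;

    close(sock);
}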
TCP/IP is reliable, for a very particular (and limited) meaning of the word "reliable".
Specifically, when write() returns 1000000, it is making you the following promises:
Your 1000000 bytes of data have been copied into the TCP socket's outgoing-data-buffer, and from now on, the TCP/IP stack is responsible for delivering those bytes to the remote program.
If those bytes can be delivered (with a reasonable amount of effort), then they will be delivered, even if some of the transmitted TCP packets get dropped during delivery.
If the bytes do get delivered, they will be delivered accurately and in-order (relative to each other and also relative to the data passed to previous and subsequent calls to write() on that same socket).
But there are also some guarantees that write() doesn't (and, in general, can't) provide. In particular:
write() cannot guarantee that the receiving application won't exit or crash before it calls recv() to get all 1000000 of those bytes.
write() cannot guarantee that the receiving application will do the right thing with the bytes it does receive (i.e. it might just ignore them or mishandle them, rather than acting on them)
write() cannot guarantee that the receiving application ever will call recv() to recv() those bytes at all (i.e. it might just keep the socket open but never call recv() on it)
write() cannot guarantee that the network infrastructure between your computer and the remote computer will work to deliver the bytes -- i.e. if some packets are dropped, no problem, the TCP layer will resend them, but e.g. if someone has pulled the power cable out of your cable modem, then there's simply no way for the bytes to get to their destination, and after a few minutes of trying and failing to get an ACK, the TCP layer will give up and error out the connection.
write() can't guarantee that the receiving application handles socket-shutdown issues 100% correctly (if it doesn't, then it's possible for any data you send just before the remote program closes the socket to be silently dropped, since there's no receiver left to receive it)
write() doesn't guarantee that the bytes will be received by the receiving application in the same segment-sizes that you sent them in. i.e. just because you sent 1000000 bytes in a single call to send() doesn't mean that the receiving app can receive 1000000 bytes in a single call to recv(). It might instead receive the data in any-sized chunks, e.g. 1-byte-per-recv()-call, or 1000-bytes-per-recv()-call, or any other size the TCP layer feels like providing.
Note that in all of the cases listed above, the problems occur after write() has returned, which is why write() can't just return an error code when these happen. (A later call to write() might well return an error code, of course, but that won't particularly help you know where the line between delivered-bytes and non-delivered bytes was located)
TLDR: TCP/IP is more reliable than UDP, but not a 100% iron-clad guarantee that nothing will ever go wrong. If you really want to make sure your bytes got processed on the receiving end, you'll want to program your receiving application to send back some sort of higher-level acknowledgement that it received (and successfully handled!) the bytes you sent it.
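A sketch of what such a higher-level acknowledgement could look like on the sending side, assuming the peer replies with a single hypothetical status byte once it has processed the request (a real implementation would also loop over partial send() results):

#include <sys/socket.h>

/* Send a request and block until the peer confirms it processed it.
 * Returns 0 if the peer acknowledged success, -1 otherwise. */
int send_and_wait_ack(int sock, const void *data, size_t len)
{
    if (send(sock, data, len, 0) != (ssize_t)len)
        return -1;                     /* short send treated as failure here */

    char status;
    if (recv(sock, &status, 1, MSG_WAITALL) != 1)
        return -1;                     /* connection died before confirmation */

    return status == 1 ? 0 : -1;       /* peer signals success with a 1 byte */
}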
TCP/IP offers reliability, which means it allows for the retransmission of lost packets, thereby making sure that all data transmitted is (eventually) received or you get a timeout error: either your data gets delivered or you are presented with an error.
What is the correct way of reading from a TCP socket in C/C++?
And by the way, TCP/IP is reliable only to some extent: it guarantees delivery assuming the network is working... if you unplug the cable, TCP/IP will not deliver your packet.
P.S. You should at least advance the pointer inside the buffer:
void ReadXBytes(int socket, unsigned int x, void* buffer)
{
    unsigned int bytesRead = 0;
    ssize_t result;
    while (bytesRead < x)
    {
        // cast to char* so the pointer arithmetic is well-defined
        result = read(socket, (char*)buffer + bytesRead, x - bytesRead);
        if (result < 1)
        {
            // Throw your error.
        }
        bytesRead += result;
    }
}
Program A reported that all data had been sent correctly!
No. The return value of write in program A only means that the returned number of bytes has been handed over to the operating system kernel. It does not claim that the data has been sent by the local system, has been received by the remote system, or has been processed by the remote application.
I believe TCP is reliable.
TCP provides reliability at the transport layer only. This means only that it will make sure that data loss gets detected (and packets re-transmitted), data duplication gets detected (and duplicates discarded), and packet reordering gets detected and fixed.
TCP does not claim any reliability at higher layers. If an application needs this, it has to provide it itself.

Receive continuous stream of varying length packets over sockets in C at a quick rate?

I've been working with sockets (in C, with no prior experience in socket programming) for the past few days.
Actually I have to collect WiFi packets on a Raspberry Pi, do some processing, and send the formatted information to another device over sockets (both devices are connected to the same network).
The challenge I'm facing is when receiving the data over the sockets.
While sending, the data is sent successfully over the sockets from the sending side, but on the receiving side sometimes junk or previous data is received.
On Sending Side (client):
int server_socket = socket(AF_INET, SOCK_STREAM, 0);
//connecting to the server with connect function
send(server_socket, &datalength, sizeof(datalength),0); //datalength is an integer containing the number of bytes that are going to be sent next
send(server_socket, actual_data, sizeof(actual_data),0); //actual data is a char array containing the actual character string data
On Receiving Side (Server Side):
int server_socket = socket(AF_INET, SOCK_STREAM, 0);
//bind the socket to the ip and port with bind function
//listen to the socket for any clients
//int client_socket = accept(server_socket, NULL, NULL);
int bytes;
recv(client_socket, &bytes, sizeof(bytes),0);
char* actual_message = malloc(bytes);
int rec_bytes = recv(client_socket, actual_message, bytes,0);
*The above lines of code are not the actual lines of code, but the flow and procedure would be similar (with exception handling and comments).
Sometimes, I can get the actual data for all the packets quickly (without any errors or packet loss). But sometimes the bytes value (the integer sent to tell the size of the byte stream for the next transaction) is received as a junk value, so my code breaks at that point.
Also, sometimes the number of bytes that I receive on the receiving side is less than the number of bytes expected (known from the received integer bytes). In that case, I check for that condition and retrieve the remaining bytes.
Actually the rate at which packets arrive is very high (around 1000 packets in less than a second, and I have to dissect, format and send them over sockets). I'm trying different ideas (using SOCK_DGRAM, but there is some packet loss; inserting some delay between transactions; opening and closing a new socket for each packet; adding an acknowledgement after receiving a packet) but none of them meets my requirement (quick transfer of packets with zero packet loss).
Kindly, suggest a way to send and receive varying length of packets at a quick rate over sockets.
I see a few main issues:
I think your code ignores the possibility of a full buffer in the send function.
It also seems to me that your code ignores the possibility of partial data being collected by recv. (nevermind, I just saw the new comment on that)
In other words, you need to manage a user-land buffer for send and handle fragmentation in recv.
The code uses sizeof(int) which might be a different length on different machines (maybe use uint32_t instead?).
The code doesn't translate to and from network byte order. This means that you're sending the memory structure of the int instead of an integer that can be read by different machines (some machines store the bytes backwards, some forward, some mix and match).
Notice that when you send larger data using TCP/IP, it will be fragmented into smaller packets.
This depends, among other things, on the network MTU value (which often runs at ~500 bytes in the wild and usually around ~1500 bytes in your home network).
To handle these cases you should probably use an evented network design rather than blocking sockets.
Consider routing the send through something similar to this (if you're going to use blocking sockets):
int send_complete(int fd, void *data, size_t len) {
    size_t act = 0;
    while (act < len) {
        ssize_t tmp = send(fd, (char *)data + act, len - act, 0);
        if (tmp < 0) {
            if (errno == EWOULDBLOCK || errno == EAGAIN || errno == EINTR) {
                // add `select` here to poll the socket before retrying
                continue;
            }
            return -1; // connection error
        }
        act += (size_t)tmp;
    }
    return (int)act;
}
As for the sizeof issues, I would replace the int with a specific byte length integer type, such as int32_t.
A few more details
Please notice that sending the integer separately doesn't guarantee that it will be received separately or that the integer itself won't be fragmented.
The send function writes to the system's buffer for the socket, not to the network (just like recv reads from the available buffer and not from the wire).
You can't control where fragmentation occurs or how the TCP packets are packed (unless you implement your own TCP/IP stack).
I'm sure it's clear to you that the "junk" value is data that was sent by the other side. This means that the code isn't reading the integer you send, but reading another piece of data.
It's probably a question of alignment to the message boundaries, caused by an incomplete read or an incomplete send.
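One way to side-step both the byte-order and the alignment issues is to send a fixed-width length prefix in network byte order and let the receiver reassemble from it. A rough sketch, reusing the send_complete() helper above (error handling kept minimal):

#include <arpa/inet.h>
#include <stdint.h>

/* Send a 4-byte network-byte-order length prefix, then the payload itself. */
int send_message(int fd, const void *data, uint32_t len)
{
    uint32_t len_net = htonl(len);          /* portable across endianness */
    if (send_complete(fd, &len_net, sizeof(len_net)) < 0)
        return -1;
    if (send_complete(fd, (void *)data, len) < 0)
        return -1;
    return 0;
}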
P.S.
I would consider using the Websocket protocol on top of the TCP/IP layer.
This guarantees a binary packet header that works across different CPU architectures (endianness) as well as offering a wider variety of client connectivity (such as connecting from a browser, etc.).
It will also solve the packet alignment issue you're experiencing (not because it won't exist, but because it was resolved in whatever Websocket parser you will adopt).

First UDP message to a specific remote ip gets lost

I am working on a LAN-based solution with a "server" that has to control a number of "players".
My protocol of choice is UDP because it's easy: I do not need connections, my traffic consists only of short commands from time to time, and I want to use a mix of broadcast messages for syncing and single-target messages for player-individual commands.
Multicast TCP would be an alternative, but it's more complicated, not exactly suited for the task and often not well supported by hardware.
Unfortunately I am running into a strange problem:
The first datagram which is sent to a specific ip using "sendto" is lost.
Any datagram sent short time afterwards to the same ip is received.
But if I wait some time (a few minutes), the first "sendto" is lost again.
Broadcast datagrams always work.
Local sends (to the same computer) always work.
I presume the operating system or the router/switch has some translation table from IP to MAC addresses which gets forgotten when not being used for some minutes and that unfortunately causes datagrams to be lost.
I could observe that behaviour with different router/switch hardware, so my suspect is the windows networking layer.
I know that UDP is by definition "unreliable" but I cannot believe that this goes so far that even if the physical connection is working and everything is well defined packets can get lost. Then it would be literally worthless.
Technically I am opening a UDP socket, binding it to a port and INADDR_ANY.
Then I am using "sendto" and "recvfrom".
I never call connect - I don't want to, because I have several players. As far as I know, UDP should work without connect.
My current workaround is that I regularly send dummy datagrams to all specific player IPs - that solves the problem, but it's somehow "unsatisfying".
Question: Does anybody know that problem? Where does it come from? How can I solve it?
Edit:
I boiled it down to the following test program:
int _tmain(int argc, _TCHAR* argv[])
{
    WSADATA wsaData;
    WSAStartup(MAKEWORD(2, 2), &wsaData);

    SOCKET Sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    SOCKADDR_IN Local = {0};
    Local.sin_family = AF_INET;
    Local.sin_addr.S_un.S_addr = htonl(INADDR_ANY);
    Local.sin_port = htons(1234);
    bind(Sock, (SOCKADDR*)&Local, sizeof(Local));

    printf("Press any key to send...\n");

    int Ret, i = 0;
    char Buf[4096];

    SOCKADDR_IN Remote = {0};
    Remote.sin_family = AF_INET;
    Remote.sin_addr.S_un.S_addr = inet_addr("192.168.1.12"); // Replace this with a valid LAN IP which is not the host's one
    Remote.sin_port = htons(1235);

    while (true) {
        _getch();
        sprintf(Buf, "ping %d", ++i);
        printf("Multiple sending \"%s\"\n", Buf);

        // Ret = connect(Sock, (SOCKADDR*)&Remote, sizeof(Remote));
        // if (Ret == SOCKET_ERROR) printf("Connect Error!\n");

        Ret = sendto(Sock, Buf, strlen(Buf), 0, (SOCKADDR*)&Remote, sizeof(Remote));
        if (Ret != (int)strlen(Buf)) printf("Send Error!\n");
        Ret = sendto(Sock, Buf, strlen(Buf), 0, (SOCKADDR*)&Remote, sizeof(Remote));
        if (Ret != (int)strlen(Buf)) printf("Send Error!\n");
        Ret = sendto(Sock, Buf, strlen(Buf), 0, (SOCKADDR*)&Remote, sizeof(Remote));
        if (Ret != (int)strlen(Buf)) printf("Send Error!\n");
    }

    return 0;
}
The program opens a UDP socket and, on every keystroke, sends 3 datagrams in a row to a specific IP.
Run that with Wireshark observing your UDP traffic, press a key, wait a while and press a key again.
You do not need a receiver on the remote IP; it makes no difference, except that you won't get the black-marked "not reachable" packets.
This is what you get in the Wireshark capture:
As you can see, the first send initiated an ARP search for the IP. While that search was pending, the first 2 of the 3 successive sends were lost.
The second keystroke (after the IP search was complete) properly sent 3 messages.
You may now repeat sending messages and it will work until you wait (about a minute, until the address translation is forgotten again); then you will see dropouts again.
That means: there is no send buffer for UDP messages while an ARP request is pending! All messages get lost except the last one.
Also, "sendto" does not block until it has successfully delivered, and there is no error return!
Well, that surprises me and makes me a little bit sad, because it means that I have to live with my current workaround or implement an ACK system that only sends one message at a time and then waits for a reply - which would no longer be easy and would imply many difficulties.
I'm posting this long after it's been answered by others, but it's directly related.
Winsock drops UDP packets if there's no ARP entry for the destination address (or the gateway for the destination).
Thus it's quite likely that the first UDP packet gets dropped, as at that time there's no ARP entry - and unlike most other operating systems, Winsock only queues 1 packet while the ARP request completes.
This is documented here:
ARP queues only one outbound IP datagram for a specified destination address while that IP address is being resolved to a MAC address. If a UDP-based application sends multiple IP datagrams to a single destination address without any pauses between them, some of the datagrams may be dropped if there is no ARP cache entry already present. An application can compensate for this by calling the Iphlpapi.dll routine SendArp() to establish an ARP cache entry, before sending the stream of packets.
The same behavior can be observed on Mac OS X and FreeBSD:
When an interface requests a mapping for an address not in the cache, ARP queues the message which requires the mapping and broadcasts a message on the associated network requesting the address mapping. If a response is provided, the new mapping is cached and any pending message is transmitted. ARP will queue at most one packet while waiting for a response to a mapping request; only the most recently ``transmitted'' packet is kept.
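On Windows, following the documentation's suggestion quoted above, one could prime the ARP cache with SendARP() from Iphlpapi.dll before the first datagram goes out; a minimal sketch (error handling omitted, link against Iphlpapi.lib):

#include <winsock2.h>
#include <iphlpapi.h>
#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

/* Resolve the destination's MAC address up front so the first sendto()
 * is not silently dropped while ARP resolution is still pending. */
void prime_arp_cache(const char *dest_ip)
{
    IPAddr dst = inet_addr(dest_ip);
    ULONG  mac[2];                    /* receives the 6-byte MAC address */
    ULONG  mac_len = 6;
    SendARP(dst, 0, mac, &mac_len);   /* 0 = let the stack choose the source IP */
}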
UDP packets are supposed to be buffered on receipt, but a UDP packet (or the ethernet frame holding it) can be dropped at several points on a given machine:
network card does not have enough space to accept it,
OS network stack does not have enough buffer memory to copy it to,
firewall/packet filtering drop-rule match,
no application is listening on destination IP and port,
receive buffer of the listening application socket is full.
The first two points are about too much traffic, which is not likely the case here. I trust that point 4 is not applicable and your software is waiting for the data. Point 5 is about your application not processing network data fast enough - that also does not seem to be the case.
Translation between MAC and IP addresses is done via Address Resolution Protocol. This does not cause packet drop if your network is properly configured.
I would disable the Windows firewall and any anti-virus/deep-packet-inspection software and check what's on the wire with Wireshark. This will most likely point you in the right direction - if you can sniff those first packets on the "sent-to" machines, then check the local configuration (firewall, etc.); if you don't, then check your network - something in the path is interfering with your traffic.
Hope this helps.
Erm... it's your computer doing an ARP request. When you first start sending, your computer does not know the receiver's MAC address, hence it's unable to send any packets. It uses the IP address of the receiver to do an ARP request to get the MAC address. During this process, any UDP packets you try to send cannot be sent out, as the destination MAC address is still not known.
Once your computer receives the MAC address it can start sending. However, the MAC address will only remain in your computer's ARP cache for 2 minutes (if no further activity is detected between you and the receiver) or 10 minutes (a full clear of the ARP cache, even if the connection is active). That is why you experience this problem every few minutes.
Does anybody know that problem?
The real problem is that you assumed that UDP packet sending is reliable. It isn't.
The fact that you lose the first packet with your current network configuration is really a secondary issue. You might be able to fix that, but at any point you are still vulnerable to packet losses.
If packet loss is a problem for you, then you really should use TCP. You can build a reliable protocol on top of UDP, but unless you have a very good reason to do so, it's not recommended.
