I'm currently using netmap to send and receive packets. It's working like a charm, but there is one thing causing me problems.
When I pull out the network cable, netmap keeps filling up the available slots in the TX ring. When I reconnect the cable, all the old packets are immediately transmitted. I don't want that to happen.
Is there any way to prevent those stale packets from being sent?
I'm programming in C and the program is running on FreeBSD.
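The best workaround I can think of so far is to check the link state myself and simply not queue anything into the TX ring while the cable is out, so nothing stale accumulates. A rough sketch of that guard ("em0" is a placeholder for my interface):

    /* Guard: only queue into the netmap TX ring while the link is up.
     * FreeBSD-specific; "em0" is a placeholder interface name. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/ioctl.h>
    #include <net/if.h>
    #include <net/if_media.h>

    static int link_is_up(int sock, const char *ifname)
    {
        struct ifmediareq ifmr;

        memset(&ifmr, 0, sizeof(ifmr));
        strlcpy(ifmr.ifm_name, ifname, sizeof(ifmr.ifm_name));
        if (ioctl(sock, SIOCGIFMEDIA, &ifmr) == -1)
            return -1;                          /* no media info available */
        if ((ifmr.ifm_status & IFM_AVALID) == 0)
            return -1;                          /* link state not reported */
        return (ifmr.ifm_status & IFM_ACTIVE) != 0;
    }

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0); /* used only for the ioctl */

        printf("link is %s\n", link_is_up(sock, "em0") == 1 ? "up" : "down");
        return 0;
    }

In the send loop I would call link_is_up() before writing slots, so the ring head is only advanced while the link is active. That avoids the backlog, but I'd still prefer a way to discard packets already queued.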
Setup
I have a TCP server running on a microcontroller, and a client running on a Linux desktop.
What I'm trying to do
I am establishing a TCP connection, and then periodically sending messages with lengths varying from 18 bytes all the way up to 512 bytes, from the microcontroller (server) to the desktop (client).
The messages must each be sent in one packet, so the TCP stack on the microcontroller end is configured not to split up the data.
On the Linux desktop, I am using the TCP_NODELAY option, which turns off the buffering (Nagle) algorithm so messages are sent immediately. Good, I want this.
The issue I'm facing
I also set the SO_RCVLOWAT option to 18, the size of the smallest message I'll receive. My understanding is that recv() will then only return once at least 18 bytes are available.
This didn't work as expected, and I don't know whether it's because I'm misusing the API or because what I want cannot be achieved with the Linux kernel's TCP/IP stack.
The problem occurs mostly with the small messages: recv() won't return when only a small amount of data has arrived.
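For reference, this is roughly how I'm applying the two options (a sketch; sockfd is an already-connected TCP socket):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /* Apply both options to an already-connected TCP socket. */
    static void tune_socket(int sockfd)
    {
        int one = 1;
        int lowat = 18;                         /* smallest message I expect */

        /* Disable Nagle's algorithm so small writes go out immediately. */
        if (setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) == -1)
            perror("TCP_NODELAY");

        /* Ask recv() not to return before at least 18 bytes are available. */
        if (setsockopt(sockfd, SOL_SOCKET, SO_RCVLOWAT, &lowat, sizeof(lowat)) == -1)
            perror("SO_RCVLOWAT");
    }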
One last note
When running the client on another microcontroller with the same TCP/IP setup, I can send and receive the data exactly how I want, with no problems.
How can I achieve what I want to do under Linux?
It's for a real-time application, so I need the behaviour of UDP but with the reliability of TCP. Thanks in advance!
I'm working on a Linux system that uses Modbus RTU over RS485 to send and receive data. My device is the master, and it just sends a "request latest data" message (8 bytes including a 2-byte CRC) to the slave (for now there is only one slave) every second. When the slave receives the request, it prepares the data (71 bytes including CRC) and sends it back to the master. I can't see the slave's source code because it is a commercial product.
Both the master and the slave use the same baud rate, 38400.
Result:
When checking the communication between master and slave, sometimes (on average every 1-2 hours) the data from the slave is missing some of its first bytes, and the first byte that is received has a value different from what the slave sent (sometimes only the first few bytes are lost).
Sometimes the data from the slave doesn't arrive at all (a timeout, without receiving any data). I tried increasing the timeout by 500 ms or 1 s, but the problem still occurs unchanged.
I tested the slave by communicating with Tera Term, and there was no error like the above; the data sent and received were OK. I also tested the master with Tera Term without any errors.
When I capture the data while the master and slave are communicating, the problem (no data received, or the first bytes lost) shows up both on the master side and on my PC (capturing with Tera Term).
I believe the problem is on the master side, maybe in the serial port settings, but I don't know what is wrong. Please help.
Sorry for my poor English!
I used to work with the RS485 bus a lot, and one problem that sometimes appeared was very similar to yours. Because RS485 is a half-duplex bus, there is a mechanism that switches the RS485 bus driver between receive and transmit mode, and this was exactly the cause of my problems.
When the master device sent some data, the slave was ready to reply (and replied) before the bus driver on the master side had switched back to receive mode. This behaviour ended in data loss.
May I suggest that you check with an oscilloscope whether the slave sends its data correctly? If it does, you probably don't have many options; the possible solutions are (see also the sketch after this list):
Have the slave wait some time before sending its reply to the master.
Change the hardware: use an RS485 driver that switches modes faster, or use a different bus.
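On the software side, if the master's serial driver supports the Linux kernel's RS485 mode, you may be able to tune the RTS turnaround delays instead of changing hardware. A sketch (/dev/ttyS0 is a placeholder; not every driver implements TIOCSRS485, so treat this as something to try, not a guaranteed fix):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/serial.h>

    int main(void)
    {
        struct serial_rs485 rs485;
        int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);

        if (fd == -1) {
            perror("open");
            return 1;
        }
        memset(&rs485, 0, sizeof(rs485));
        rs485.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND;
        rs485.delay_rts_before_send = 0;  /* ms to wait before transmitting */
        rs485.delay_rts_after_send = 0;   /* ms to hold the bus after the last
                                             byte; keep this small so the master
                                             releases the bus before the reply */
        if (ioctl(fd, TIOCSRS485, &rs485) == -1)
            perror("TIOCSRS485 (driver may not support RS485 mode)");
        close(fd);
        return 0;
    }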
The implicit question is: If Linux blocks the send() call when the socket's send buffer is full, why should there be any lost packets?
More details:
I wrote a little utility in C to send UDP packets as fast as possible to a unicast address and port. I send a UDP payload of 1450 bytes each time, and the first bytes are a counter which increments by 1 for every packet. I run it on a Fedora 20 inside VirtualBox on a desktop PC with a 1 Gb NIC (= quite slow).
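A stripped-down version of the sender looks roughly like this (the address and port are placeholders, and error handling is omitted):

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst = {0};
        unsigned char payload[1450] = {0};
        uint32_t counter = 0;

        dst.sin_family = AF_INET;
        dst.sin_port = htons(9999);                        /* placeholder */
        inet_pton(AF_INET, "192.168.1.50", &dst.sin_addr); /* placeholder */

        for (;;) {
            uint32_t net = htonl(counter++);
            memcpy(payload, &net, sizeof(net));  /* counter in the first bytes */
            sendto(fd, payload, sizeof(payload), 0,
                   (struct sockaddr *)&dst, sizeof(dst));
        }
    }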
Then I wrote a little utility to read UDP packets from a given port; it checks each packet's counter against its own counter and prints a message when they differ (i.e. one or more packets were lost). I run it on a Fedora 20 dual-Xeon server with a 1 Gb Ethernet NIC (= super fast). It does show many lost packets.
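And the receiver's check, roughly (same placeholder port; error handling omitted):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = {0};
        unsigned char payload[1450];
        uint32_t expected = 0;

        addr.sin_family = AF_INET;
        addr.sin_port = htons(9999);                       /* placeholder */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));

        for (;;) {
            uint32_t got;

            if (recv(fd, payload, sizeof(payload), 0) < (ssize_t)sizeof(got))
                continue;                        /* too short, ignore */
            memcpy(&got, payload, sizeof(got));
            got = ntohl(got);
            if (got != expected)                 /* gap => packets lost */
                printf("lost %u packet(s) before #%u\n", got - expected, got);
            expected = got + 1;
        }
    }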
Both machines are on a local network. I don't know exactly how many hops there are between them, but I don't think there are more than two routers.
Things I tried:
Add a delay after each send(). With a delay of 1 ms, no packets are lost anymore; with a delay of 100 µs, packets start getting lost.
Increase the receive socket buffer size to 4 MiB using setsockopt(), as sketched below. That does not make any difference...
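For completeness, the buffer increase looked roughly like this. I read the value back with getsockopt() because, as far as I know, Linux silently caps the request at net.core.rmem_max (and doubles the stored value for bookkeeping):

    #include <stdio.h>
    #include <sys/socket.h>

    /* Request a bigger receive buffer and report what the kernel granted. */
    static void grow_rcvbuf(int fd)
    {
        int size = 4 * 1024 * 1024;             /* ask for 4 MiB */
        socklen_t len = sizeof(size);

        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));
        getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, &len);
        printf("effective receive buffer: %d bytes\n", size);
    }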
Please enlighten me!
For UDP, the SO_SNDBUF socket option only limits the size of the datagram you can send. There is no explicit send-buffer throttling as with TCP. There is, of course, in-kernel queuing of frames to the network card.
In other words, send(2) might drop your datagram without returning an error (see the description of ENOBUFS at the bottom of the manual page).
Then the packet might be dropped pretty much anywhere on the path:
the sending network card does not have free hardware resources to service the request, so the frame is discarded;
an intermediate routing device has no buffer space available or applies some congestion-avoidance algorithm and drops the packet;
the receiving network card cannot accept Ethernet frames at the given rate, so some frames are simply ignored;
the reader application does not have enough socket receive-buffer space to absorb traffic spikes, so the kernel drops datagrams.
From what you said, though, it sounds very probable that the VM is not able to send the packets at a high rate. Sniff the wire with tcpdump(1) or wireshark(1) as close to the source as possible and check your sequence numbers; that will tell you whether it's the sender that is to blame.
Even if send() blocks when the send buffer is full (provided you didn't set SOCK_NONBLOCK on the socket to put it in non-blocking mode), the receiver must still be fast enough to handle all incoming packets. If the receiver or any intermediate system is slower than the sender, packets will get lost when using UDP. Note that "slower" applies not only to the speed of the network interface but to the whole network stack plus the userspace application.
In your case it is quite possible that the receiver is getting all the packets but can't handle them fast enough in userspace. You can check that by recording and analyzing your traffic with tcpdump or wireshark.
If you don't want to lose packets, switch to TCP.
Either of the two routers you mentioned might drop packets if there is an overload, and the receiving PC might drop or miss packets as well under certain circumstances, such as overload.
As one of the above posters said, UDP is a simple datagram protocol that does not guarantee delivery: packets can be lost by the local machine, by equipment on the network, and so on. That is the reason why many developers will recommend that, if you want reliability, you switch to TCP. However, if you really want to stick with UDP, and there are many valid reasons to do so, you will need a library that helps you guarantee delivery. Look at SS7 projects, especially telephony APIs where UDP is used to transmit voice, data, and signalling information. For your purposes, may I suggest the ENet UDP library: http://enet.bespin.org/
I need to write C code that checks the status of a network device's TX queue, to know whether there are still unsent packets and how many packets are queued.
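The only approach I've found so far is to parse the qdisc statistics that tc(8) prints: they include a "backlog <bytes>b <packets>p" line counting what is still queued for transmission. A sketch of that (eth0 is a placeholder; I'm not sure this is the cleanest way):

    #include <stdio.h>
    #include <string.h>

    /* Return the number of packets currently queued in the device's qdisc,
     * or -1 on error. Parses the "backlog <bytes>b <packets>p" line printed
     * by `tc -s qdisc show dev <ifname>`. */
    static long queued_packets(const char *ifname)
    {
        char cmd[128], line[256];
        long pkts = -1;
        FILE *fp;

        snprintf(cmd, sizeof(cmd), "tc -s qdisc show dev %s", ifname);
        fp = popen(cmd, "r");
        if (fp == NULL)
            return -1;
        while (fgets(line, sizeof(line), fp) != NULL) {
            unsigned long bytes, packets;
            char *p = strstr(line, "backlog");

            if (p != NULL && sscanf(p, "backlog %lub %lup", &bytes, &packets) == 2) {
                pkts = (long)packets;
                break;
            }
        }
        pclose(fp);
        return pkts;
    }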
I'm working with non-blocking TCP sockets in C on a Linux system. I've read that in non-blocking mode, send() immediately returns the number of bytes sent if there is no error. I'm guessing this return value does not actually mean that the data has been delivered to the destination, but rather that the data has been handed to the kernel, which will handle the actual transmission.
If that is the case, how would my application know which packet has really been sent out by the kernel to the other end, assuming the network connection has problems and the kernel decides to give up after several retries spanning a few minutes?
I'm asking because I would want my application to resend the failed data at a later time.
If that is the case, how would my application know which packet has really been sent out by the kernel to the other end, assuming the network connection has problems and the kernel decides to give up after several retries spanning a few minutes?
Your application won't know, unless it is able to recontact the receiving application and ask it what data it has previously received.
Keep in mind that even with blocking I/O your application doesn't block until the data is received by the remote application -- it only blocks until there is some room in the kernel's outgoing-data buffer to hold the bytes you asked the TCP stack to send(). So even with blocking I/O you would face the same issue.
Also keep in mind that the byte arrays you pass to send() do not have a guaranteed 1-to-1 correspondence to the TCP packets the TCP stack sends out. The TCP stack is free to pack your bytes into TCP packets any way it likes (e.g. the data from multiple send() calls can end up in a single TCP packet, or the data from a single send() call can end up in multiple TCP packets, or any other combination you can think of). Depending on network conditions, TCP stacks can and do pack things in various ways; their only promise is that the bytes will be received in FIFO order (if they are received at all).
Anyway, the answer to your question is: you can't know, unless you later ask the receiving program about what it got (or didn't get).
TCP takes care of retrying internally; the application doesn't need to do any special handling for it. If you wish to confirm that a packet was received at the other end of the TCP stack, you can set the send socket buffer (setsockopt(SOL_SOCKET, SO_SNDBUF)) to zero. In that case, the kernel uses your application's buffer to send the data, and it is only released after TCP receives the acknowledgement for this data. This way you can confirm that the data has been pushed to the receiver's end of the TCP stack. It does not confirm that the application has received the data; you need an application-layer acknowledgement in your protocol to confirm that the data reached the receiving application.
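To illustrate that last point, an application-layer acknowledgement can be as simple as numbering each message and having the receiver echo the number back once it has fully processed the message. A rough sketch of such a framing (the names and layout are mine, not from any standard):

    #include <stdint.h>

    /* Hypothetical wire format: a small header in front of each payload. */
    struct msg_hdr {
        uint32_t seq;   /* sender-assigned sequence number (network order) */
        uint32_t len;   /* payload length in bytes (network order) */
    };

    /* The receiver replies with the highest seq it has fully processed. */
    struct msg_ack {
        uint32_t seq;
    };

The sender keeps each message buffered until the matching ack arrives; whatever is still unacknowledged when the connection dies is exactly what needs to be resent after reconnecting.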