I am working with Ethernet communication based on the lwIP echo server example. I would like to transfer samples from DMA to the host over Ethernet; the system currently captures samples via UART.
I am not able to make lwIP send more than 2 packets of roughly 1500 bytes without waiting for an ACK. My application sends packets to the client continuously. The client receives each packet without any delay but sends the ACK only after 200 ms (see the attached Wireshark capture image). lwIP always gets stuck waiting for the ACK before it sends the next packet, so it never has more than 2 TCP segments outstanding. This network delay drags performance down.
Is there any configuration that makes lwIP send packets without waiting for the ACK? Do you have any suggestions?
If you don't want to wait, how about using UDP instead of TCP? TCP is a stream protocol and is going to ensure that everything arrives and is in order (as long as there are no errors). "Echo" usually makes me think of a situation where you don't care about ordering, only whether a particular packet makes it and how long it took.
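If you do need to stay with TCP, the amount of unacknowledged data lwIP will put on the wire is bounded by its send buffer and the peer's advertised window, which are configured in lwipopts.h. A minimal sketch with illustrative values (the specific numbers are assumptions, not a tested configuration for any particular board):

    /* lwipopts.h -- illustrative values only */
    #define TCP_MSS            1460
    #define TCP_WND            (8 * TCP_MSS)               /* receive window this host advertises */
    #define TCP_SND_BUF        (8 * TCP_MSS)               /* unacked data lwIP may keep in flight */
    #define TCP_SND_QUEUELEN   (4 * TCP_SND_BUF / TCP_MSS) /* pbufs available for the send queue */

With the raw API you can also call tcp_nagle_disable(pcb) so small writes are not held back by the Nagle algorithm. Note that the 200 ms you see is the client's delayed-ACK timer, which cannot be changed from the lwIP side; a larger send buffer simply lets more segments go out before an ACK is needed.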
I have a scenario where I construct and send a raw ethernet packet to a bespoke hardware device. The packet triggers a response packet to be sent back by the device.
My code performs a sendto() call to transmit the outgoing packet, and after that a recvfrom() call to try and receive the response. The recvfrom() is in a while loop so that it keeps trying to receive a packet until recvfrom() indicates it has received > 0 bytes.
I've run Wireshark to confirm that my outgoing packet is constructed correctly and sent, I can also see the response packet from the device and it is correctly formed.
What I can't do is receive it with my recvfrom() call. The while loop effectively hangs as recvfrom() never returns a value other than 0.
I'm thinking that maybe I need the recvfrom() call in a separate 'listener' thread that is running prior to the sendto() call. Currently my application isn't threaded. I just run the recvfrom() after the sendto().
Is it possible that I am simply missing the packet with my non-threaded approach, or are packets buffered in some way such that they can be received even if they are sent before a corresponding recvfrom() is executed?
I would be very interested to know if there is a de facto way of doing this sort of thing. It would be great not to have to make my app multithreaded, but I can if need be.
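For what it's worth, with a Linux AF_PACKET socket the kernel queues matching frames in the socket's receive buffer from the moment the socket exists, so a reply arriving between your sendto() and recvfrom() is not lost and a separate listener thread is not strictly required. What usually bites people is binding to the wrong protocol or interface, or treating 0 as "no data" when a blocking recvfrom() on a packet socket normally returns the frame length or -1. A rough single-threaded sketch (the interface name, EtherType and frame length are made-up placeholders):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <linux/if_packet.h>
    #include <linux/if_ether.h>
    #include <net/if.h>

    #define MY_ETHERTYPE 0x88B5              /* placeholder: whatever your device uses */

    int main(void)
    {
        /* Needs CAP_NET_RAW (run as root). Use htons(ETH_P_ALL) to see every frame while debugging. */
        int s = socket(AF_PACKET, SOCK_RAW, htons(MY_ETHERTYPE));
        if (s < 0) { perror("socket"); return 1; }

        struct sockaddr_ll addr;
        memset(&addr, 0, sizeof(addr));
        addr.sll_family   = AF_PACKET;
        addr.sll_protocol = htons(MY_ETHERTYPE);
        addr.sll_ifindex  = if_nametoindex("eth0");      /* placeholder interface */
        if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }

        unsigned char frame[ETH_FRAME_LEN];
        memset(frame, 0, sizeof(frame));
        /* ... fill in destination MAC, source MAC, EtherType and payload here ... */
        size_t frame_len = 64;                           /* placeholder length */

        if (sendto(s, frame, frame_len, 0, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            perror("sendto");

        unsigned char reply[ETH_FRAME_LEN];
        ssize_t n = recvfrom(s, reply, sizeof(reply), 0, NULL, NULL);  /* blocks until a frame arrives */
        if (n < 0)
            perror("recvfrom");
        else
            printf("received %zd bytes\n", n);

        close(s);
        return 0;
    }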
I want to send a UDP broadcast datagram to multiple devices on the network, including the sender device itself. The goal is to have all devices receive the data at the EXACT same time (well, +/- 5ms is OK).
The problem is that the network interface on the sending device loops the data back, so it is received immediately (in contrast to the other devices, where network latency comes into play - quite a bit for Wi-Fi, for instance).
Any idea how I can stop my network interface from looping the data back directly?
Another idea I had: Is it possible to create a virtual network interface to send the broadcast packet and listen on another interface which only receives it via the network?
I am trying to do that in C on a Linux machine. Any help would be greatly appreciated!
UDP datagrams are sent as IP payload, and the routing of IP packets is the domain of the IP stack: it decides how a packet is transferred to the destination. When your IP stack detects that the destination is the local host, it enqueues the packet in the receive queue and the packet is available immediately. Only if your adapter's send queue is full will you see a delay. So you can't achieve synchronization with this concept.
If you need hard synchronization, you should use NTP or SNTP to synchronize the clocks and define a common start time for your desired common operation.
Edit:
The (S)NTP protocol is designed to synchronize at the millisecond level. You will get a precision that you can't achieve with any transmission of UDP packets, for the reason I described above.
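To illustrate the common-start-time idea (an addition to the answer above, and it assumes the clocks are already (S)NTP-synchronized): the sender picks an absolute wall-clock instant slightly in the future, larger than the worst expected network latency, puts it into the datagram, and every receiver, including the sender itself, sleeps until that instant before acting.

    #include <errno.h>
    #include <stdint.h>
    #include <time.h>

    /* Sleep until an absolute CLOCK_REALTIME instant (seconds + nanoseconds).
     * Only meaningful if all machines keep their clocks in sync, e.g. via (S)NTP. */
    static void wait_until(int64_t sec, int64_t nsec)
    {
        struct timespec target = { .tv_sec = (time_t)sec, .tv_nsec = (long)nsec };
        int rc;
        do {
            /* TIMER_ABSTIME: sleep until the given absolute time, restart if interrupted. */
            rc = clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, &target, NULL);
        } while (rc == EINTR);
    }

The sender would call clock_gettime(CLOCK_REALTIME, ...), add a margin of, say, 50 ms (an arbitrary assumption), put the resulting seconds/nanoseconds into the payload, and all parties call wait_until() with those values.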
The implicit question is: If Linux blocks the send() call when the socket's send buffer is full, why should there be any lost packets?
More details:
I wrote a little utility in C to send UDP packets as fast as possible to a unicast address and port. Each packet carries a 1450-byte UDP payload whose first bytes are a counter that increments by 1 for every packet. I run it on Fedora 20 inside VirtualBox on a desktop PC with a 1 Gb NIC (= quite slow).
Then I wrote a little utility to read UDP packets from a given port; it checks each packet's counter against its own counter and prints a message if they differ (i.e. 1 or more packets have been lost). I run it on a Fedora 20 dual-Xeon server with a 1 Gb Ethernet NIC (= super fast). It does show many lost packets.
Both machines are on a local network. I don't know exactly the number of hops between them, but I don't think there are more than 2 routers between them.
Things I tried:
Add a delay after each send(). With a delay of 1 ms, no packets are lost any more; with a delay of 100 µs, packets start getting lost.
Increase the receiving socket buffer size to 4MiB using setsockopt(). That does not make any difference...
Please enlighten me!
For UDP, the SO_SNDBUF socket option only limits the size of the datagram you can send. There is no explicit throttling of the send socket buffer as with TCP. There is, of course, in-kernel queuing of frames to the network card.
In other words, send(2) might drop your datagram without returning an error (see the description of ENOBUFS at the bottom of the manual page).
Then the packet might be dropped pretty much anywhere on the path:
- the sending network card does not have free hardware resources to service the request, so the frame is discarded;
- an intermediate routing device has no available buffer space or implements some congestion-avoidance algorithm, so it drops the packet;
- the receiving network card cannot accept Ethernet frames at the given rate, so some frames are simply ignored;
- the reader application does not have enough socket receive buffer space to absorb traffic spikes, so the kernel drops datagrams.
From what you said, though, it sounds very probable that the VM is not able to send the packets at a high rate. Sniff the wire with tcpdump(1) or wireshark(1) as close to the source as possible and check your sequence numbers - that will tell you whether the sender is to blame.
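One extra check worth making (not part of the answer above): on Linux, setsockopt(SO_RCVBUF) is silently capped by the net.core.rmem_max sysctl, so the 4 MiB you requested may never have been applied. Reading the value back shows what the kernel actually granted (the kernel reports roughly double the requested size to account for bookkeeping overhead):

    #include <stdio.h>
    #include <sys/socket.h>

    /* 'sock' is the already-created UDP receive socket. */
    static void set_and_check_rcvbuf(int sock)
    {
        int requested = 4 * 1024 * 1024;                 /* 4 MiB */
        setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested));

        int granted = 0;
        socklen_t len = sizeof(granted);
        getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &granted, &len);

        /* If this prints far less than requested, raise net.core.rmem_max. */
        printf("SO_RCVBUF: requested %d, kernel granted %d\n", requested, granted);
    }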
Even if send() blocks when the send buffer is full (provided that you didn't set SOCK_NONBLOCK on the socket to put it in non-blocking mode) the receiver must still be fast enough to handle all incoming packets. If the receiver or any intermediate system is slower than the sender, packets will get lost when using UDP. Note that slower does not only apply to the speed of the network interface but to the whole network stack plus the userspace application.
In your case it is quite possible that the receiver is receiving all packets but can't handle them fast enough in userspace. You can check that by recording and analyzing your traffic via tcpdump or Wireshark.
If you don't want to lose packets, then switch to TCP.
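If the bottleneck turns out to be per-packet overhead in userspace, one Linux-specific mitigation (an addition, not something the answer above recommends) is to pull several datagrams out of the socket per system call with recvmmsg(2):

    #define _GNU_SOURCE
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    #define BATCH    32
    #define PKT_SIZE 1500

    /* Read up to BATCH datagrams from 'sock' with a single system call.
     * Returns the number of datagrams received (payloads land in bufs,
     * their lengths in lens), or -1 on error. */
    static int read_batch(int sock, char bufs[BATCH][PKT_SIZE], int lens[BATCH])
    {
        struct mmsghdr msgs[BATCH];
        struct iovec iovs[BATCH];
        memset(msgs, 0, sizeof(msgs));

        for (int i = 0; i < BATCH; i++) {
            iovs[i].iov_base = bufs[i];
            iovs[i].iov_len  = PKT_SIZE;
            msgs[i].msg_hdr.msg_iov    = &iovs[i];
            msgs[i].msg_hdr.msg_iovlen = 1;
        }

        int n = recvmmsg(sock, msgs, BATCH, 0, NULL);
        for (int i = 0; i < n; i++)
            lens[i] = (int)msgs[i].msg_len;
        return n;
    }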
Either of the two routers you mentioned might drop packets if there is an overload, and the receiving PC might drop or miss packets as well under certain circumstances, such as overload.
As one of the above posters said, UDP is a simple datagram protocol that does not guarantee delivery; packets can be lost by the local machine, by equipment on the network, etc. That is the reason why many developers will recommend switching to TCP if you want reliability. However, if you really want to stick with the UDP protocol, and there are many valid reasons to do so, you will need a library that helps you guarantee delivery. Look at SS7 projects, especially in telephony APIs where UDP is used to transmit voice, data and signalling information. For your purpose, may I suggest the ENet UDP library: http://enet.bespin.org/
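To give a feel for what that looks like with ENet, here is a rough client-side sketch against the ENet 1.3 API as I remember it (the address and port are placeholders; check the headers of the version you install):

    #include <stdio.h>
    #include <enet/enet.h>

    int main(void)
    {
        if (enet_initialize() != 0) { fprintf(stderr, "enet_initialize failed\n"); return 1; }

        /* Client host: 1 outgoing connection, 1 channel, no bandwidth limits. */
        ENetHost *client = enet_host_create(NULL, 1, 1, 0, 0);

        ENetAddress address;
        enet_address_set_host(&address, "192.168.1.10");   /* placeholder */
        address.port = 7777;                               /* placeholder */

        ENetPeer *peer = enet_host_connect(client, &address, 1, 0);

        ENetEvent event;
        if (enet_host_service(client, &event, 5000) > 0 &&
            event.type == ENET_EVENT_TYPE_CONNECT) {
            /* ENET_PACKET_FLAG_RELIABLE makes ENet acknowledge and retransmit this packet. */
            const char msg[] = "hello";
            ENetPacket *packet = enet_packet_create(msg, sizeof(msg), ENET_PACKET_FLAG_RELIABLE);
            enet_peer_send(peer, 0, packet);
            enet_host_flush(client);
        }

        enet_host_destroy(client);
        enet_deinitialize();
        return 0;
    }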
I want to test my UDP server-client application. One of the things I want to check is the time delay between data sent by the server and received by the client, and vice versa.
I figured out a way: the server sends a message to the client and notes the time. When the client receives this message, it sends the same message back to the server. The server gets the echoed message back and again notes the time. The difference between the time the message was sent and the time the echoed message was received tells me the delay between data sent by the server and received by the client.
Is this approach correct?
Because I also foresee a lot of other delays involved using this approach. What could be a possible way to calculate more accurate delays?
Waiting for help.
Yes, this is the most traditional way of doing it; you can use this approach.
You can use a sniffer and look at the relative time between the sender's UDP packet and the receiver's UDP packet. If you need more accurate results, you have to go deeper into the Windows stack, where it checks whether a UDP packet has been received or not. For the timers you can use a real-time clock, which gives resolution down to a microsecond. Also, you are using UDP, which has a high chance of packets getting lost, unlike TCP, which is much more reliable.
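As a concrete illustration of the echo approach (an addition; the socket setup is omitted and the payload layout is just an assumption): stamp the outgoing datagram with a monotonic clock on the server and, when the echo comes back, compute the round-trip time. The one-way delay is then roughly RTT/2 if the path is symmetric, and because both timestamps come from the same machine the two hosts' clocks do not need to be synchronized.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <sys/socket.h>

    /* 'sock' is a connected UDP socket; the client simply echoes whatever it receives. */
    static void measure_rtt(int sock)
    {
        struct timespec t0, t1;
        char buf[64];

        clock_gettime(CLOCK_MONOTONIC, &t0);        /* monotonic: immune to NTP clock jumps */
        memcpy(buf, &t0, sizeof(t0));               /* payload carries the send timestamp */
        send(sock, buf, sizeof(buf), 0);

        if (recv(sock, buf, sizeof(buf), 0) > 0) {
            clock_gettime(CLOCK_MONOTONIC, &t1);
            int64_t rtt_us = (t1.tv_sec - t0.tv_sec) * 1000000LL +
                             (t1.tv_nsec - t0.tv_nsec) / 1000;
            printf("RTT %lld us, one-way approx %lld us\n",
                   (long long)rtt_us, (long long)rtt_us / 2);
        }
    }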
What stack are you using? lwIP?
I have a scenario where multiple clients connect to a TCP server. When any of the clients sends a packet to the server, the server is supposed to have a retransmission timer and keep sending that packet to another server until it receives a reply. How do I go about setting up this retransmission mechanism? I'm doing this on Linux in C.
If you use a TCP socket, retransmission will happen automatically. However, if you want more control, you'll need to use UDP and handle the retransmission yourself.
I'm guessing this is an assignment. I had something similar where our channel was purposefully being corrupted.
I would suggest you follow something similar; a rough sketch follows the steps below.
1. Send the packet.
2. Start a timer.
3. If an ACK (acknowledgment) is not received within a certain amount of time, go back to step 1.
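Over UDP that loop might look like the following (the 1-second timeout and the idea that any reply counts as the ACK are illustrative assumptions):

    #include <stddef.h>
    #include <sys/select.h>
    #include <sys/socket.h>

    /* Send 'data' on a connected UDP socket and retransmit until a reply
     * (treated here as the ACK) arrives. Illustrative only: a real client
     * would also cap the number of retries and check the ACK contents. */
    static int send_with_retransmit(int sock, const void *data, size_t len)
    {
        char ack[16];

        for (;;) {
            send(sock, data, len, 0);                           /* step 1: send the packet */

            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(sock, &rfds);
            struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };  /* step 2: start a timer   */

            int ready = select(sock + 1, &rfds, NULL, NULL, &tv);
            if (ready > 0 && recv(sock, ack, sizeof(ack), 0) > 0)
                return 0;                                       /* ACK received, done      */
            /* step 3: timeout (or error) -> go back to step 1 and retransmit */
        }
    }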
IIRC, the location of the files that contain these TCP config parameters is distro-dependent; they are in different folders on Red Hat and Ubuntu.