Sending & receiving small TCP messages in Linux (C)

Setup
I have a TCP server running on a microcontroller, and a client running on a Linux desktop.
What I'm trying to do
I am establishing a TCP connection, and then periodically sending messages with lengths varying from 18 bytes all the way up to 512 bytes, from the microcontroller (server) to the desktop (client).
The messages must each be sent in one packet, so the TCP stack on the microcontroller end is configured not to split up the data.
On the Linux desktop I am using the TCP_NODELAY option, which disables Nagle's buffering algorithm so messages are sent immediately. Good, I want this.
The issue I'm facing
I also try to set the SO_RCVLOWAT option to 18, the size of the smallest message I'll receive. My understanding is that recv() will then return once at least 18 bytes have been received.
This didn't work as expected, and I don't know whether it's because I'm misusing the API or because what I want cannot be achieved with the Linux kernel's TCP/IP stack.
The problem is mainly with the small messages: recv() won't return when only a small amount of data has arrived.
One last note
When running the client on another microcontroller with the same TCP/IP setup, I can send and receive the data exactly as I want, with no problems.
How can I achieve what I want to do under Linux?
It's for a real-time application, so I need the immediacy of UDP but with the reliability of TCP. Thanks in advance!
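
For reference, a minimal sketch of the client-side setup described above (the port and address are placeholders, error handling is trimmed):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* Disable Nagle's algorithm on this socket (affects data this side sends). */
        int one = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);

        /* Ask recv() not to wake up before at least 18 bytes are queued:
           the smallest message in the protocol described above. */
        int lowat = 18;
        setsockopt(fd, SOL_SOCKET, SO_RCVLOWAT, &lowat, sizeof lowat);

        struct sockaddr_in srv = {0};
        srv.sin_family = AF_INET;
        srv.sin_port = htons(5000);                   /* placeholder port */
        srv.sin_addr.s_addr = htonl(INADDR_LOOPBACK); /* placeholder address */
        if (connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {
            perror("connect");
            return 1;
        }

        char buf[512];
        for (;;) {
            ssize_t n = recv(fd, buf, sizeof buf, 0);
            if (n <= 0)
                break;
            printf("got %zd bytes\n", n);
        }
        close(fd);
        return 0;
    }

Note that even with SO_RCVLOWAT set, TCP remains a byte stream: one recv() call may return the 18 bytes of one message plus part of the next, so message boundaries still have to be recovered from the application-level framing.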

Related

Deny a client's TCP connect request before accept()

I'm trying to write a TCP server in C. I just noticed that accept() returns only once a connection is already established.
Some clients flood the server with random data; others send random data just once. After that, I want to close their current connection and deny future connections for a few minutes (or more, depending on how much load the program is under).
I can save bad clients' IP addresses in an array, and I can save timestamps too, but I can't find any function to abort the current connection or to deny future connections from bad clients.
I found a Windows function called WSAAccept that lets you deny connections conditionally, but I don't use Windows.
I tried writing a raw TCP server, which gives you access to each TCP packet from the beginning, including the whole TCP header, and which doesn't accept connections automatically. I handled connections on the program side, including SYN, ACK, and the other TCP signals. It worked, but then I noticed that the raw server receives every packet on my network interface, so when other programs generate heavy traffic my program gets laggy too.
I also tried libnetfilter, which lets you filter all the traffic on your network interface. It works too, but like the raw TCP server it receives the whole interface's packets, which makes it slow when there is a lot of traffic. I also compared libnetfilter with iptables; libnetfilter is slower than iptables.
So, in summary: how can I abort a client's current and future connections without hurting other clients' connections?
I'm running Linux (Debian 10).
Once you do blacklisting at the packet level, you can very quickly become vulnerable to trivial attacks based on IP spoofing. Against a naive implementation, an attacker could abuse your packet-level blacklisting to blacklist anyone he wants simply by sending you many packets with a forged source IP address. Usually you don't want to touch this kind of filtering (unless you really know what you are doing); just trust your firewall etc.
So I recommend simply closing the file descriptor immediately after getting it from accept().
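
A minimal sketch of that approach, assuming a hypothetical is_blacklisted() helper backed by the array of addresses and timestamps described in the question:

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Hypothetical helper: look the peer address up in your ban list (the
       array of addresses and timestamps described above); stubbed out here. */
    static int is_blacklisted(const struct in_addr *addr)
    {
        (void)addr;
        return 0;
    }

    void serve(int listen_fd)
    {
        for (;;) {
            struct sockaddr_in peer;
            socklen_t len = sizeof peer;
            int client = accept(listen_fd, (struct sockaddr *)&peer, &len);
            if (client < 0)
                continue;

            if (is_blacklisted(&peer.sin_addr)) {
                /* The kernel has already completed the handshake, but the
                   client gets nothing further from us. */
                close(client);
                continue;
            }

            /* ... hand the descriptor to the normal request handling ... */
        }
    }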

Possible causes for lack of data loss over my localhost UDP protocol?

I just implemented my first UDP server/client. The server is on localhost.
I'm sending 64 KB of data from client to server, which the server is supposed to send back. Then the client checks how much of the 64 KB is still intact, and all of it is. Always.
What are the possible causes for this behaviour? I was expecting at least -some- data loss.
client code: http://pastebin.com/5HLkfcqS
server code: http://pastebin.com/YrhfJAGb
PS: I'm a newbie in network programming, so please don't be too harsh. I couldn't find an answer to my problem.
The reason you are not seeing any lost datagrams is that your network stack is simply not running into any trouble.
Your localhost connection can easily cope with what you are sending; a localhost connection can move several hundred megabytes of data per second on a decent CPU.
To see dropped datagrams you should increase the probability of interference. You have several options (a sketch of a deliberately slow receiver follows at the end of this answer):
increase the load on the network
keep your CPU busy with other tasks
use a "real" network and transfer data between real machines
run your code over a DSL line
set up a virtual machine and simulate network outages (VMware Workstation can do this)
And this might be an interesting read: What would cause UDP packets to be dropped when being sent to localhost?
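
As a concrete illustration of the "keep your CPU busy" idea above (not part of the original answer): a receiver that drains slowly lets its kernel receive buffer overflow, at which point the kernel silently discards datagrams. A minimal sketch; the port is a placeholder:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9999);                  /* placeholder port */
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("bind");
            return 1;
        }

        char buf[65536];
        for (;;) {
            ssize_t n = recv(fd, buf, sizeof buf, 0);
            if (n < 0)
                break;
            /* Simulate a busy receiver: while we sleep, the socket's receive
               buffer fills up and further datagrams are silently dropped. */
            usleep(10000);
        }
        return 0;
    }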

In Linux, why do I lose UDP packets if I call send() as fast as possible?

The implicit question is: if Linux blocks the send() call when the socket's send buffer is full, why should any packets be lost?
More details:
I wrote a little utility in C to send UDP packets as fast as possible to a unicast address and port. I send a UDP payload of 1450 bytes each time, and the first bytes are a counter which increments by 1 for every packet. I run it on Fedora 20 inside VirtualBox on a desktop PC with a 1 Gb NIC (= quite slow).
Then I wrote a little utility to read UDP packets from a given port, which checks each packet's counter against its own counter and prints a message if they differ (i.e. one or more packets have been lost). I run it on a Fedora 20 bi-Xeon server with a 1 Gb Ethernet NIC (= super fast). It does show many lost packets.
Both machines are on a local network. I don't know exactly the number of hops between them, but I don't think there are more than 2 routers between them.
Things I tried (both are sketched in code after this question):
Add a delay after each send(). With a delay of 1 ms, no packets are lost any more; with a delay of 100 µs, packets start getting lost.
Increase the receiving socket buffer size to 4 MiB using setsockopt(). That does not make any difference...
Please enlighten me!
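
For reference, the two mitigations described above look roughly like this (a sketch; a connected UDP socket descriptor is assumed):

    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Sender side: 1450-byte payload, counter in the first bytes, and the
       1 ms pause after each send() that made the loss disappear. */
    void blast(int fd)                 /* fd: a connected UDP socket */
    {
        unsigned char payload[1450] = {0};
        uint32_t counter = 0;
        for (;;) {
            memcpy(payload, &counter, sizeof counter);
            send(fd, payload, sizeof payload, 0);
            counter++;
            usleep(1000);              /* 1 ms: no loss; 100 us: loss starts */
        }
    }

    /* Receiver side: enlarge the kernel receive buffer to 4 MiB. */
    void grow_rcvbuf(int fd)
    {
        int rcvbuf = 4 * 1024 * 1024;
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof rcvbuf);
    }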
For UDP the SO_SNDBUF socket option only limits the size of the datagram you can send. There is no explicit send-buffer throttling as with TCP. There is, of course, in-kernel queuing of frames to the network card.
In other words, send(2) might drop your datagram without returning an error (see the description of ENOBUFS at the bottom of the manual page; a sketch of checking for it appears at the end of this answer).
Then the packet might be dropped pretty much anywhere along the path:
the sending network card does not have free hardware resources to service the request, so the frame is discarded;
an intermediate routing device has no available buffer space or implements some congestion-avoidance algorithm, and drops the packet;
the receiving network card cannot accept Ethernet frames at the given rate, so some frames are simply ignored;
the reader application does not have enough socket receive-buffer space to accommodate traffic spikes, so the kernel drops datagrams.
From what you said, though, it sounds very probable that the VM is not able to send packets at a high rate. Sniff the wire with tcpdump(1) or wireshark(1) as close to the source as possible, and check your sequence numbers; that will tell you whether it's the sender that is to blame.
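
As a follow-up to the ENOBUFS remark, a sketch of checking for it on the sender; note that, per the send(2) caveat, Linux usually drops the packet silently instead, so the absence of this error proves nothing:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Send one datagram and report ENOBUFS if the kernel ran out of buffer
       space. On Linux the packet is normally dropped silently instead, so
       not seeing this error does not mean nothing was dropped. */
    ssize_t send_checked(int fd, const void *buf, size_t len)
    {
        ssize_t n = send(fd, buf, len, 0);
        if (n < 0 && errno == ENOBUFS)
            fprintf(stderr, "send: out of kernel buffer space (%s)\n",
                    strerror(errno));
        return n;
    }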
Even if send() blocks when the send buffer is full (provided that you didn't set SOCK_NONBLOCK on the socket to put it in non-blocking mode), the receiver must still be fast enough to handle all incoming packets. If the receiver or any intermediate system is slower than the sender, packets will get lost when using UDP. Note that "slower" applies not only to the speed of the network interface but to the whole network stack plus the userspace application.
In your case it is quite possible that the receiver is receiving all packets but can't handle them fast enough in userspace. You can check that by recording and analyzing your traffic via tcpdump or wireshark.
If you don't want to lose packets, switch to TCP.
Either of the two routers you mentioned might drop packets if there is an overload, and the receiving PC might drop or miss packets as well under certain circumstances such as overload.
As one of the posters above said, UDP is a simple datagram protocol that does not guarantee delivery; packets can be lost at the local machine, at equipment on the network, and so on. That is why many developers recommend switching to TCP if you want reliability. However, if you really want to stick with the UDP protocol, and there are many valid reasons to do so, you will need a library that helps you guarantee delivery. Look at SS7 projects, especially telephony APIs where UDP is used to transmit voice, data, and signalling information. For your purposes, may I suggest the ENet UDP library: http://enet.bespin.org/

How does a non-blocking TCP socket notify the application of packets that fail to get sent?

I'm working with non-blocking C TCP sockets on a Linux system. I've read that in non-blocking mode, send() returns the number of bytes sent immediately if there is no error. I'm guessing this return value does not actually mean that the data has been delivered to the destination, but rather that the data has been passed to kernel memory for the kernel to handle and send further.
If that is the case, how would my application know which packets have really been sent out by the kernel to the other end, assuming the network connection has problems and the kernel decides to give up only after several retries spanning a few minutes?
I'm asking because I would want my application to resend those failed packets again at a later time.
Your application won't know, unless it is able to recontact the receiving application and ask the receiving application about what data it had previously received.
Keep in mind that even with blocking I/O your application doesn't block until the data is received by the remote application -- it only blocks until there is some room in the kernel's outgoing-data buffer to hold the bytes you asked the TCP stack to send(). So even with blocking I/O you would face the same issue.
Also keep in mind that the byte arrays you pass to send() do not have a guaranteed 1-to-1 correspondence to the TCP packets that the TCP stack sends out. The TCP stack is free to pack your bytes into TCP packets any way it likes (e.g. the data from multiple send() calls can end up in a single TCP packet, or the data from a single send() call can end up in multiple TCP packets, or any other combination you can think of). Depending on network conditions, TCP stacks can and do pack things various different ways, their only promise is that the bytes will be received in FIFO order (if they get received at all).
Anyway, the answer to your question is: you can't know, unless you later ask the receiving program about what it got (or didn't get).
TCP internally takes care of retrying; the application doesn't need any special handling for it. If you wish to confirm that data reached the other end of the TCP stack, you can set the send socket buffer (setsockopt(SOL_SOCKET, SO_SNDBUF)) to zero; on stacks that support this, the kernel sends directly from your application buffer and only releases it after the TCP acknowledgement for that data has been received. This confirms that the data was pushed to the receiving end of the TCP stack, but it does not confirm that the application received it. You need an application-layer acknowledgement in your protocol (sketched below) to confirm that the data reached the receiving application.
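
A minimal sketch of such an application-layer acknowledgement over the TCP connection; the one-byte ACK value is an arbitrary illustration, not a standard:

    #include <stdint.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Sender: write one message, then block until the peer's application
       explicitly confirms it processed the data. */
    int send_with_ack(int fd, const void *msg, size_t len)
    {
        if (send(fd, msg, len, 0) != (ssize_t)len)
            return -1;

        uint8_t ack;
        if (recv(fd, &ack, 1, MSG_WAITALL) != 1 || ack != 0x06)
            return -1;             /* no confirmation; resend later */
        return 0;
    }

    /* Receiver: after fully processing a message, send the ACK byte back. */
    int confirm(int fd)
    {
        uint8_t ack = 0x06;        /* ASCII ACK, an arbitrary choice */
        return send(fd, &ack, 1, 0) == 1 ? 0 : -1;
    }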

Maximizing performance on UDP

I'm working on a project with two clients, one for sending and the other for receiving UDP datagrams, between two machines wired directly to each other.
Each datagram is 1024 bytes in size, and it is sent using Winsock (blocking).
They are both running on very fast, separate machines, with 16 GB RAM and 8 CPUs, with RAID 0 drives.
I'm looking for tips to maximize my throughput; tips should be at the Winsock level, but if you have some other tips, that would be great also.
Currently I'm getting 250-400 Mbit transfer speed; I'm looking for more.
Thanks.
Since I don't know what else your applications do besides sending and receiving, it's difficult to say what might be limiting the throughput, but here are a few things to try. I'm assuming that you're using IPv4, and I'm not a Windows programmer.
Maximize the packet size that you are sending when you are using a reliable connection. For 100 Mb/s Ethernet the maximum frame is 1518 bytes; Ethernet uses 18 of that, IPv4 uses 20-60 (usually 20, though), and UDP uses 8 bytes. That means that typically you should be able to send 1472 bytes of UDP payload per packet.
If you are using gigabit Ethernet equipment that supports it, your frame size increases to 9000 bytes (jumbo frames), so sending something closer to that size should speed things up.
If you send any acknowledgments from your listener to your sender, try to make sure they are sent rarely and can acknowledge more than one packet at a time. Try to keep the listener from having to say much, and keep the sender from having to wait on the listener for permission to keep sending.
On the computer the sender application lives on, consider setting up a static ARP entry for the computer the receiver lives on. Without this, every few seconds there may be a pause while a new ARP request is made to make sure the ARP cache is up to date. Some ARP implementations make this request well before the entry expires, which decreases the impact, but some do not.
Turn off as many other users of the network as possible. If you are using an Ethernet switch, concentrate on the things that introduce traffic to or from the computers and network devices your applications run on (this includes broadcast messages, like many ARP requests). If it's a hub, you may want to quiet down the entire network. Windows tends to send a constant stream of junk onto networks, which in many cases isn't useful.
There may be limits on how much of the network bandwidth one application or user can have, or on how much network bandwidth the OS will let itself use. If such limits exist, they can probably be changed in the registry.
It is not uncommon for network interface chips to not actually support the maximum bandwidth of the network all the time. There are chips that miss packets because they are busy handling a previous packet, and chips that just can't send packets as close together as the Ethernet specification allows. Additionally, the rest of the system might not be able to keep up even if the NIC can.
Some things to look at:
Connected UDP sockets (some info) shortcut several operations in the kernel, so they are faster (see Stevens' UNP book for details, and the sketch after this list).
Socket send and receive buffers: play with the SO_SNDBUF and SO_RCVBUF socket options to smooth out spikes and packet drops.
See if you can bump up link MTU and use jumbo frames.
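
A sketch of the first two items in POSIX C (the question uses Winsock, but the calls are near-identical); the buffer size is an arbitrary illustration:

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int make_fast_udp_socket(const struct sockaddr_in *peer)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
            return -1;

        /* connect() on a UDP socket fixes the peer once, so the kernel can
           skip per-datagram destination work and plain send()/recv() can be
           used instead of sendto()/recvfrom(). */
        if (connect(fd, (const struct sockaddr *)peer, sizeof *peer) < 0) {
            close(fd);
            return -1;
        }

        /* Larger kernel buffers absorb bursts on both ends; 4 MiB is an
           arbitrary illustration, tune it for your traffic. */
        int sz = 4 * 1024 * 1024;
        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sz, sizeof sz);
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &sz, sizeof sz);

        return fd;
    }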
use 1Gbps network and upgrade your network hardware...
Test the packet limit of your hardware with an already proven piece of code such as iperf:
http://www.noc.ucf.edu/Tools/Iperf/
I'm linking a Windows build; it might be a good idea to boot off a Linux LiveCD and try a Linux build for comparison of IP stacks.
More likely your NIC isn't performing well; try an Intel Gigabit Server Adapter:
http://www.intel.com/network/connectivity/products/server_adapters.htm
For TCP connections it has been shown that using multiple parallel connections better utilizes the data connection. I'm not sure whether that applies to UDP, but it might help with some of the latency issues of packet processing.
So you might want to try multiple threads of blocking calls.
In addition to Nikolai's suggestion about send and receive buffers: if you can, switch to overlapped I/O and keep many recvs pending; this also helps to minimise the number of datagrams dropped by the stack due to lack of buffer space.
If you're looking for reliable data transfer, consider UDT.
