What is the default size of datagram queue length in Unix Domain Sockets (AF_UNIX)? Is it configurable? - c

I know the maximum datagram queue length can be found using
"cat /proc/sys/net/unix/max_dgram_qlen".
I wanted to know how to find the default value that is set at boot (as with /proc/sys/net/core/wmem_default for the send buffer size).
Is it possible to increase the value of max_dgram_qlen? What is its upper limit?
My kernel version is 2.6.27.7. I'm new to Unix Domain Socket programming (AF_UNIX).
Thanks in advance for any comments / solutions!

The previous answers/comments failed to understand that the OP was asking about the maximum queue length in datagrams (max_dgram_qlen), not in bytes. The OS provides settings for both.
You can set max_dgram_qlen using the following command:
sysctl net.unix.max_dgram_qlen=128
You may need to run it with sudo, and you may also need to put double quotes around net.unix.max_dgram_qlen=128, depending on your shell.
Also, see What's the practical limit on the size of single packet transmitted over domain socket?.
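If you want to read the current limit from a program rather than the shell, a minimal sketch along these lines should work; it simply parses the /proc file mentioned in the question:

#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch: read the current AF_UNIX datagram queue limit from /proc. */
int main(void)
{
    FILE *f = fopen("/proc/sys/net/unix/max_dgram_qlen", "r");
    int qlen;

    if (f == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    if (fscanf(f, "%d", &qlen) == 1)
        printf("max_dgram_qlen = %d\n", qlen);
    fclose(f);
    return EXIT_SUCCESS;
}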

man unix(7):
The SO_SNDBUF socket option does have an effect for UNIX domain sockets, but the SO_RCVBUF option does not. For datagram sockets, the SO_SNDBUF value imposes an upper limit on the size of outgoing datagrams. This limit is calculated as the doubled (see socket(7)) option value less 32 bytes used for overhead.
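As a hedged illustration of that paragraph, the following sketch requests a larger SO_SNDBUF on an AF_UNIX datagram socket and reads back the effective value; the 262144-byte request is an arbitrary example:

#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Sketch: request a larger send buffer for an AF_UNIX datagram socket.
 * The kernel doubles the requested value (see socket(7)); the usable
 * datagram size is roughly the doubled value minus 32 bytes of overhead. */
int main(void)
{
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
    int sndbuf = 262144;          /* requested size in bytes (example value) */
    socklen_t len = sizeof(sndbuf);

    if (fd < 0) { perror("socket"); return 1; }
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0)
        perror("setsockopt(SO_SNDBUF)");
    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) == 0)
        printf("effective SO_SNDBUF: %d bytes\n", sndbuf);
    return 0;
}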

Related

How to count bytes using send() in C, including protocol size?

I'm trying to count the total bytes sent by my program, but I can't get an accurate value.
All my functions call a single function that sends data to my server using the send() function.
In this function, I take the return value of send() and add it to a global counter. This is working fine.
But when I compare against the 'iftop' utility (sudo iftop -f 'port 33755'), I see more data in iftop than in my app, and my guess is that this is because of TCP headers/protocol data. I really don't know how to calculate this. I'm sending packets using send() with variable data lengths, so I'm not sure if it's possible to detect/calculate the TCP packet size from there. I know that each TCP packet carries a TCP header, but I'm not sure how many packets are sent.
May I assume that every call to send(), if the data length is less than 1518 (the TCP packet size limit?), produces only one TCP packet, so I just need to add one TCP header length? Even if I send one byte? If so, how many extra bytes does the TCP structure add?
For information: I'm using GCC on Linux as the compiler.
Thanks!
How to count bytes using send() in C, including protocol size?
There is no reliable way to do so from within your program. You can compute a minimum total number of bytes required to transmit data of the total payload size you count, subject to a few assumptions, but you would need to monitor from the kernel side to determine the exact number of bytes.
May I assume that every call to send(), if the data length is less than 1518 (the TCP packet size limit?), produces only one TCP packet, so I just need to add one TCP header length?
No, that would not be a safe assumption. The main problem is that the kernel does not necessarily match the data transferred by each send() call to its own sequence of packets. It may combine data from multiple send()s into a smaller number of packets. It may also use an MTU that is smaller or larger than Ethernet's default of 1500 bytes, depending on various factors, and the packet headers have to fit inside the chosen MTU, so the payload carried by one packet is smaller than that.
I suspect you're making this too hard. If this is a task that has been assigned to you -- a homework problem, for example -- then my first guess would be that it is intended that you count only the total payload size, not the protocol overhead. Alternatively, if you do need to account for the overhead, then my guess would be that you are supposed to estimate, based on measured or assumed characteristics of the network. If you've set this problem for yourself, then I can only say that people generally make one of the two computations I just described, not the one you asked about.
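If a rough estimate is all you need, a back-of-the-envelope calculation along these lines is common; the MSS and header sizes below are assumptions, and TCP options, ACKs and retransmissions are ignored:

#include <stdio.h>

/* Rough estimate only: assumes a fixed MSS, minimal 20-byte IPv4 and
 * 20-byte TCP headers plus a 14-byte Ethernet header, and ignores
 * TCP options, ACKs and retransmissions. */
static unsigned long estimate_wire_bytes(unsigned long payload_bytes)
{
    const unsigned long mss = 1460;                         /* assumed TCP payload per segment */
    const unsigned long per_packet_overhead = 20 + 20 + 14; /* IPv4 + TCP + Ethernet */
    unsigned long packets = (payload_bytes + mss - 1) / mss;

    return payload_bytes + packets * per_packet_overhead;
}

int main(void)
{
    printf("~%lu bytes on the wire for 100000 payload bytes\n",
           estimate_wire_bytes(100000));
    return 0;
}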

What is the maximum buffer length allowed by the sendto function in C

I am implementing a simple network stack using UDP sockets, and I wish to send about 1 MB of string data from the client to the server. However, I am not aware of any limit on the length parameter of the UDP sendto() API in C. If there is a limit and sendto() won't handle packetization beyond it, then I would have to manually split the string into smaller blocks and send them.
Is there a limit on the length of the buffer? Or does the sendto() API handle packetization by itself?
Any insight is appreciated.
There's no API limit on sendto -- it can handle any size the underlying protocol can.
There IS a packet limit for UDP -- 64K; if you exceed it, the sendto call will fail with an EMSGSIZE error code. There are also packet size limits for IP, which differ between IPv4 and IPv6. Finally, the low-level transport has an MTU size, which may or may not be an issue: IP packets can be fragmented into multiple lower-level packets and automatically reassembled, unless you've disabled fragmentation via a setsockopt call (on Linux, IP_MTU_DISCOVER with IP_PMTUDISC_DO).
The easiest way to deal with all this complexity is to make your code flexible -- detect EMSGSIZE errors from sendto and switch to smaller messages when you get them. This also works well if you want to do path MTU discovery, which will generally accept larger messages at first, but will cut down the maximum message size when you send one that ends up exceeding the path MTU.
If you just want to avoid worrying about it, a send of 1452 bytes or less is likely to always be fine (that's the 1500 byte normal ethernet payload max minus 40 for a normal IPv6 header and 8 for a UDP header), unless you're using a VPN (in which case you need to worry about encapsulation overhead).
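A hedged sketch of that "shrink on EMSGSIZE" approach might look like this; the starting chunk size and the halving strategy are arbitrary choices, not part of any API:

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Sketch: send a large buffer as UDP datagrams, halving the chunk size
 * whenever the kernel rejects a datagram with EMSGSIZE. Returns the total
 * number of payload bytes sent, or -1 on any other error. */
ssize_t send_all_udp(int fd, const char *buf, size_t len,
                     const struct sockaddr *dst, socklen_t dstlen)
{
    size_t chunk = 65507;          /* assumed start: maximum UDP payload over IPv4 */
    size_t off = 0;

    while (off < len) {
        size_t n = (len - off < chunk) ? len - off : chunk;
        ssize_t sent = sendto(fd, buf + off, n, 0, dst, dstlen);

        if (sent < 0) {
            if (errno == EMSGSIZE && chunk > 1) {
                chunk /= 2;        /* too big for this path; try a smaller datagram */
                continue;
            }
            return -1;
        }
        off += (size_t)sent;
    }
    return (ssize_t)off;
}

Note that the receiver still has to reorder and reassemble the chunks itself, since UDP gives no such guarantees.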

Isn't recv() in C socket programming blocking?

In the receiver, I have
ssize_t nbytes;                 /* bytes actually received */
recvfd = accept(sockfd, &other_side, &len);
while (1)
{
    nbytes = recv(recvfd, buf, MAX_BYTES - 1, 0);
    if (nbytes <= 0)
        break;                  /* error or connection closed */
    buf[nbytes] = '\0';         /* terminate at the bytes actually read */
    printf("\n Number %d contents :%s\n", counter, buf);
    counter++;
}
In the sender, I have
send(sockfd,mesg,(size_t)length,0);
send(sockfd,mesg,(size_t)length,0);
send(sockfd,mesg,(size_t)length,0);
MAX_BYTES is 1024 and the length of mesg is 15. Currently, recv is called only once. I want recv to be called three times, once for each corresponding send. How do I achieve that?
In short: yes, it is blocking. But not in the way you think.
recv() blocks until any data is readable. But you don't know the size in advance.
In your scenario, you could do the following (a rough sketch in C follows these steps):
call select() and put the socket where you want to read from into the READ FD set
when select() returns with a positive number, your socket has data ready to be read
then, check if you could receive length bytes from the socket:
recv(recvfd, buf, MAX_BYTES-1, MSG_PEEK), see man recv(2) for the MSG_PEEK param or look at MSDN, they have it as well
now you know how much data is available
if there's less than length available, return and do nothing
if there's at least length available, read length bytes and return (if there's more than length available, we'll continue with step 2, since a new READ event will be signalled by select())
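Here is a minimal sketch of those steps, meant to be called only after select() has reported the socket readable; the name try_read_message is illustrative, not from any API:

#include <sys/socket.h>
#include <sys/types.h>

/* Sketch: called after select() reports fd readable. Returns the number of
 * bytes consumed, 0 if a full message is not yet available, or -1 on
 * error/EOF. Assumes a stream socket and a fixed message size 'length'. */
ssize_t try_read_message(int fd, char *buf, size_t length)
{
    ssize_t peeked = recv(fd, buf, length, MSG_PEEK);  /* look, don't consume */

    if (peeked <= 0)
        return -1;                    /* peer closed the connection, or error */
    if ((size_t)peeked < length)
        return 0;                     /* not a full message yet; go back to select() */

    return recv(fd, buf, length, 0);  /* consume exactly 'length' bytes */
}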
To send discrete messages over a byte stream protocol, you have to encode messages into some kind of framing language. The network can chop up the protocol into arbitrarily sized packets, and so the receives do not correlate with your messages in any way. The receiver has to implement a state machine which recognizes frames.
A simple framing protocol is to have some length field (say two octets: 16 bits, for a maximum frame length of 65535 bytes). The length field is followed by exactly that many bytes.
You must not even assume that the length field itself is received all at once. You might ask for two bytes, but recv could return just one. This won't happen for the very first message received from the socket, because network (or local IPC pipe, for that matter) segments are never just one byte long. But somewhere in the middle of the stream, it is possible that the first byte of the 16 bit length field lands on the last position of one network frame.
An easy way to deal with this is to use a buffered I/O library instead of raw operating system file handles. In a POSIX environment, you can take an open socket handle, and use the fdopen function to associate it with a FILE * stream. Then you can use functions like getc and fread to simplify the input handling (somewhat).
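As a sketch of that idea, assuming the two-octet big-endian length prefix described above and a FILE * obtained with fdopen(fd, "r"); read_frame is just an illustrative name:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch: read one length-prefixed frame from a FILE * wrapped around a
 * connected socket. Assumes a 2-byte big-endian length prefix. Returns a
 * malloc'd buffer (caller frees) and stores the payload length in *out_len,
 * or returns NULL on EOF/error. */
unsigned char *read_frame(FILE *in, size_t *out_len)
{
    unsigned char hdr[2];
    unsigned char *payload;
    size_t len;

    if (fread(hdr, 1, 2, in) != 2)
        return NULL;                         /* EOF or error */
    len = ((size_t)hdr[0] << 8) | hdr[1];

    payload = malloc(len ? len : 1);
    if (payload == NULL)
        return NULL;
    if (fread(payload, 1, len, in) != len) { /* fread keeps reading until len bytes or EOF */
        free(payload);
        return NULL;
    }
    *out_len = len;
    return payload;
}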
If in-band framing is not acceptable, then you have to use a protocol which supports framing, namely datagram type sockets. The main disadvantage of this is that the principal datagram-based protocol used over IP is UDP, and UDP is unreliable. This brings in a lot of complexity in your application to deal with out of order and missing frames. The size of the frames is also restricted by the maximum IP datagram size which is about 64 kilobytes, including all the protocol headers.
Large UDP datagrams get fragmented, which, if there is unreliability in the network, adds up to greater unreliability: if any IP fragment is lost, the entire packet is lost. All of it must be retransmitted; there is no way to just get a repetition of the fragment that was lost. The TCP protocol performs "path MTU discovery" to adjust its segment size so that IP fragmentation is avoided, and TCP has selective retransmission to recover missing segments.
I bet you've created a TCP socket using SOCK_STREAM, which would cause the three messages to be read into your buffer during the first recv call. If you want to read the messages one by one, create a UDP socket using SOCK_DGRAM, or devise some kind of message format that lets you parse your messages as they arrive in a stream (assuming your messages will not always be fixed length).
First send the length to be received, encoded in a fixed number of bytes so the receiver knows how much to expect, then loop on recv() until length bytes have been received.
Note (as other answers have already mentioned) that the size and number of chunks received do not necessarily match what was sent. Only the total number of bytes received will equal the total number of bytes sent.
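A minimal sketch of such a receive loop, with recv_exact as an illustrative helper name:

#include <sys/socket.h>
#include <sys/types.h>

/* Sketch: keep calling recv() until exactly 'length' bytes have arrived.
 * Returns 0 on success, -1 on error or if the peer closes early. */
int recv_exact(int fd, void *buf, size_t length)
{
    size_t got = 0;

    while (got < length) {
        ssize_t n = recv(fd, (char *)buf + got, length - got, 0);
        if (n <= 0)
            return -1;      /* error, or connection closed before 'length' bytes */
        got += (size_t)n;
    }
    return 0;
}

The length prefix itself would be read the same way (for example, a fixed-size network-order integer passed through recv_exact and then ntohl) before reading the payload.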
Read the man pages for recv and send. In particular, read the sections on what those functions RETURN.
recv will block until some data is available or the socket is closed; it does not wait for the whole buffer to fill unless you pass the MSG_WAITALL flag.
If you want to read exactly length bytes and then return, pass recv a buffer of size length together with MSG_WAITALL (or loop until length bytes have been received).
You can use select to determine whether
there are any bytes waiting to be read, then
check how many are available (for example with ioctl(FIONREAD), or with recv and MSG_PEEK), and
read only those bytes.
This can keep recv from blocking.
Edit:
After re-reading the docs, the following may be true: your three "messages" may be being read all-at-once since length + length + length < MAX_BYTES - 1.
Another possibility, if recv is never returning, is that the data is still sitting in a buffer on the sender side and has not actually been transmitted to the receiver yet (for example because of Nagle's algorithm, which the TCP_NODELAY option disables).

Optimal SNAPLEN for PCAP live capture

When using pcap_open_live to sniff on an interface, I have seen a lot of examples using various numbers as the SNAPLEN value, ranging from BUFSIZ (<stdio.h>) to "magic numbers".
Wouldn't it make more sense to set the SNAPLEN to the MTU of the interface we are capturing from?
That way, we could fit more packets at once into the PCAP buffer. Is it safe to assume that the MRU is equal to the MTU?
Otherwise, is there a non-exotic way to set the SNAPLEN value?
Thanks
The MTU is the largest payload size that could be handed to the link layer; it does not include any link-layer headers, so, for example, on Ethernet it would be 1500, not 1514 or 1518, and wouldn't be large enough to capture a full-sized Ethernet packet.
In addition, it doesn't include any metadata headers such as the radiotap header for 802.11 radio information.
And if the adapter is doing any form of fragmentation/segmentation/reassembly offloading, the packets handed to the adapter or received from the adapter might not yet be fragmented or segmented, or might have been reassembled, and, as such, might be much larger than the MTU.
As for fitting more packets in the PCAP buffer, that only applies to the memory-mapped TPACKET_V1 and TPACKET_V2 capture mechanisms in Linux, which have fixed-size packet slots; other capture mechanisms do not reserve a maximum-sized slot for every packet, so a shorter snapshot length won't matter. For TPACKET_V1 and TPACKET_V2, a smaller snapshot length could make a difference, although, at least for Ethernet, libpcap 1.2.1 attempts, as best it can, to choose an appropriate buffer slot size for Ethernet. (TPACKET_V3 doesn't appear to have the fixed-size per-packet slots, in which case it wouldn't have this problem, but it only appeared in officially-released kernels recently, and no support for it exists yet in libpcap.)
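For reference, a minimal libpcap sketch with a deliberately generous snaplen; the device name "eth0" and the 65535-byte value are just examples, not requirements:

#include <pcap/pcap.h>
#include <stdio.h>

/* Sketch: open a live capture with a snaplen large enough for full packets. */
int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_open_live("eth0", 65535, 1 /* promiscuous */,
                                    1000 /* read timeout, ms */, errbuf);

    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }
    /* ... pcap_loop() or pcap_next_ex() would go here ... */
    pcap_close(handle);
    return 0;
}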

Sending TCP frames of fixed length

I need to send some data over the subnet with fixed non-standard MTU (for example, 1560) using TCP.
All the Ethernet frames transfered through this subnet should be manually padded with 0's, if the frame's length is less than MTU.
So, the data size should be
(1560 - sizeof( IP header ) - sizeof( TCP header ) ).
This is the way I am going to do it:
I set the TCP_CORK option to reduce the fragmenting of data. It is not reliable, because there is a 200 millisecond ceiling, but it works.
I know size of IP header (20 bytes), so data length should be equal to (1540 - sizeof( TCP header )).
That's the problem. I don't know the TCP header size. The size of its "Options" field varies.
So, the question is: how to get the size of TCP header? Or maybe there is some way to send TCP frames with headers of fixed length?
Trying to control the size of frames when using TCP from the user application is wrong. You are working at the wrong abstraction level. It's also impossible.
What you should be doing is either considering a replacement for TCP (UDP?) or, less likely but possible, rewriting your Ethernet driver to set the non-standard MTU and do the padding you need.
This isn't possible using the TCP stack of the host simply because a TCP stack that follows RFC 793 isn't supposed to offer this kind of access to an application.
That is, there isn't (and there shouldn't be) a way to influence what the lower layers do with your data. Of course, there are ways to influence what TCP does (Nagle for example) but that is against the spirit of the protocol. TCP should be used for what it's best at: transferring a continuous, ordered stream of bytes. Nothing more, nothing less. No messages, packets, frames.
If after all you do need to control such details, you need to look at lower-level APIs. You could use SOCK_RAW and PF_PACKET.
Packet sockets are used to receive or send raw packets at the device driver (OSI Layer 2) level.
#gby mentioned UDP and that is (partially) a good idea: UDP has a fixed size. But keep in mind that you will have to deal with IP fragmentation (or use IP_DONTFRAG).
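If you do go down the packet-socket road, a hedged sketch of sending one manually zero-padded frame might look like the following; the interface name, destination MAC, and FRAME_LEN are placeholders, the code needs CAP_NET_RAW, and building valid Ethernet/IP/TCP headers inside the frame is left entirely to the caller:

#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define FRAME_LEN 1560   /* placeholder: the fixed frame size required by the subnet */

/* Sketch: send one raw frame on 'ifname', zero-padded up to FRAME_LEN.
 * 'data' must already contain valid Ethernet/IP/TCP headers plus payload. */
int send_padded_frame(const unsigned char *data, size_t datalen,
                      const unsigned char dst_mac[ETH_ALEN], const char *ifname)
{
    unsigned char frame[FRAME_LEN];
    struct sockaddr_ll addr;
    int fd;

    if (datalen > FRAME_LEN)
        return -1;

    fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0)
        return -1;

    memset(frame, 0, sizeof(frame));           /* the manual zero padding */
    memcpy(frame, data, datalen);

    memset(&addr, 0, sizeof(addr));
    addr.sll_family  = AF_PACKET;
    addr.sll_ifindex = if_nametoindex(ifname);
    addr.sll_halen   = ETH_ALEN;
    memcpy(addr.sll_addr, dst_mac, ETH_ALEN);

    if (sendto(fd, frame, sizeof(frame), 0,
               (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    close(fd);
    return 0;
}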
In addition to my comments below the OP's question, this quote from the original RFC outlining how to send TCP/IP over ethernet is relevant:
RFC 894 (emphasis mine):
If necessary, the data field should be padded (with octets of zero) to meet the Ethernet minimum frame size.
If they wanted all ethernet frames to be at maximum size, they would have said so. They did not.
Maybe what was meant by padding is that the TCP header padding used to align it to 32 bits should be all zeros: http://freesoft.org/CIE/Course/Section4/8.htm
