Optimal SNAPLEN for PCAP live capture

When using pcap_open_live to sniff from an interface, I have seen a lot of examples using various numbers as the SNAPLEN value, ranging from BUFSIZ (<stdio.h>) to "magic numbers".
Wouldn't it make more sense to set the SNAPLEN to the MTU of the interface we are capturing from?
That way, we could fit more packets at once in the PCAP buffer. Is it safe to assume that the MRU is equal to the MTU?
Otherwise, is there a non-exotic way to choose the SNAPLEN value?
Thanks

The MTU is the largest payload size that could be handed to the link layer; it does not include any link-layer headers, so, for example, on Ethernet it would be 1500, not 1514 or 1518, and wouldn't be large enough to capture a full-sized Ethernet packet.
In addition, it doesn't include any metadata headers such as the radiotap header for 802.11 radio information.
And if the adapter is doing any form of fragmentation/segmentation/reassembly offloading, the packets handed to the adapter or received from the adapter might not yet be fragmented or segmented, or might have been reassembled, and, as such, might be much larger than the MTU.
As for fitting more packets in the PCAP buffer, that only applies to the memory-mapped TPACKET_V1 and TPACKET_V2 capture mechanisms in Linux, which have fixed-size packet slots; other capture mechanisms do not reserve a maximum-sized slot for every packet, so a shorter snapshot length won't matter. For TPACKET_V1 and TPACKET_V2, a smaller snapshot length could make a difference, although, at least for Ethernet, libpcap 1.2.1 attempts, as best it can, to choose an appropriate buffer slot size for Ethernet. (TPACKET_V3 doesn't appear to have the fixed-size per-packet slots, in which case it wouldn't have this problem, but it only appeared in officially-released kernels recently, and no support for it exists yet in libpcap.)
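For what it's worth, here is a minimal sketch of what that advice looks like in practice, opening a live capture with a generous snapshot length rather than the interface MTU. The device name, timeout, and promiscuous-mode flag are just placeholders:

#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* 65535 is large enough for a full packet on most link layers;
       offloaded/reassembled "super-packets" may need an even larger value. */
    pcap_t *handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live failed: %s\n", errbuf);
        return 1;
    }

    /* ... pcap_loop() / pcap_next_ex() would go here ... */

    pcap_close(handle);
    return 0;
}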

Related

How to count bytes using send() in C, including protocol size?

I'm trying to count the total bytes sent by my program, but I can't get an accurate value.
All my functions call a single function that sends data to my server using the send() function.
In this function, I take the return value of send() and add it to a global counter. This is working fine.
But when I compare with the 'iftop' utility (sudo iftop -f 'port 33755'), I'm seeing more data in iftop than in my app... and my guess is that it's because of TCP headers/protocol data. I really don't know how to calculate this. I'm sending packets using send() with variable data lengths, so I'm not sure if it's possible to detect/calculate the TCP packet size from there. I know that each TCP packet carries a TCP header, but I'm not sure how many packets are sent.
May I assume that for every call to send(), if the data length is less than 1518 (the TCP packet size limit?), it is only one TCP packet and I just need to add the TCP header length? Even if I send one byte? If so, how many extra bytes does the TCP structure add?!
For information: I'm using GCC on Linux.
Thanks!
How to count bytes using send() in C, including protocol size?
There is no reliable way to do so from within your program. You can compute a minimum total number of bytes required to transmit data of the total payload size you count, subject to a few assumptions, but you would need to monitor from the kernel side to determine the exact number of bytes.
May I assume that for every call to send(), if the data length is less than 1518 (the TCP packet size limit?), it is only one TCP packet and I need to add the TCP header length?
No, that would not be a safe assumption. The main problem is that the kernel does not necessarily match the data transferred by each send() call to its own sequence of packets. It may combine data from multiple send()s into a smaller number of packets. Additionally, however, it may use either a smaller MTU or a larger one than Ethernet's default of 1500 bytes, depending on various factors, and, furthermore, you need to fit packet headers into the chosen MTU, so the payload carried by one packet is smaller than that.
I suspect you're making this too hard. If this is a task that has been assigned to you -- a homework problem, for example -- then my first guess would be that it is intended that you count only the total payload size, not the protocol overhead. Alternatively, if you do need to account for the overhead, then my guess would be that you are supposed to estimate, based on measured or assumed characteristics of the network. If you've set this problem for yourself, then I can only say that people generally make one of the two computations I just described, not the one you asked about.
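For the payload-only count, here is a minimal sketch of the kind of wrapper the question describes. The function name and global counter are invented for the example; TCP/IP header overhead and retransmissions are invisible at this level:

#include <sys/socket.h>
#include <sys/types.h>

/* Running total of application payload bytes handed to send().
   Protocol headers and retransmissions are not counted here. */
static unsigned long long total_payload_bytes = 0;

ssize_t counted_send(int sockfd, const void *buf, size_t len, int flags)
{
    ssize_t n = send(sockfd, buf, len, flags);
    if (n > 0)
        total_payload_bytes += (unsigned long long)n;
    return n;
}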

Can a linux socket return data less than the underlying packet? [duplicate]

When will a TCP packet be fragmented at the application layer? When a TCP packet is sent from an application, will the recipient at the application layer ever receive the packet in two or more packets? If so, what conditions cause the packet to be divided. It seems like a packet won't be fragmented until it reaches the Ethernet (at the network layer) limit of 1500 bytes. But, that fragmentation will be transparent to the recipient at the application layer since the network layer will reassemble the fragments before sending the packet up to the next layer, right?
It will be split when it hits a network device with a lower MTU than the packet's size. Most Ethernet devices use an MTU of 1500, but it can often be smaller: 1492 if that Ethernet is going over PPPoE (DSL) because of the extra routing information, and even lower if a second layer is added, like Windows Internet Connection Sharing. And dialup is normally 576!
In general, though, you should remember that TCP is not a packet protocol. It uses packets at the lowest level to transmit over IP, but as far as the interface for any TCP stack is concerned, it is a stream protocol and has no requirement to provide you with a 1:1 relationship to the physical packets sent or received (for example, most stacks will hold messages until a certain period of time has expired, or until there are enough messages to maximize the size of the IP packet for the given MTU).
As an example, if you sent two "packets" (i.e., called your send function twice), the receiving program might only receive one "packet" (the receiving TCP stack might combine them). If you are implementing a message-type protocol over TCP, you should include a header at the beginning of each message (or some other header/footer mechanism) so that the receiving side can split the TCP stream back into individual messages, whether a message is received in two parts or several messages are received as a chunk.
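As a concrete illustration of that header mechanism, here is a hedged sketch of length-prefixed framing over a TCP socket. The helper names are invented for the example, and a production version would also loop over partial send()s:

#include <arpa/inet.h>   /* htonl, ntohl */
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Read exactly len bytes, looping over partial recv()s. */
static int recv_all(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)
            return -1;            /* error or peer closed the connection */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Send one message as a 4-byte big-endian length prefix plus the payload. */
static int send_message(int fd, const void *payload, uint32_t len)
{
    uint32_t hdr = htonl(len);
    if (send(fd, &hdr, sizeof hdr, 0) != (ssize_t)sizeof hdr)
        return -1;
    if (send(fd, payload, len, 0) != (ssize_t)len)
        return -1;
    return 0;
}

/* Receive one message: read the length prefix, then exactly that many bytes. */
static int recv_message(int fd, void *buf, uint32_t maxlen, uint32_t *outlen)
{
    uint32_t hdr;
    if (recv_all(fd, &hdr, sizeof hdr) < 0)
        return -1;
    *outlen = ntohl(hdr);
    if (*outlen > maxlen)
        return -1;                /* message too large for the caller's buffer */
    return recv_all(fd, buf, *outlen);
}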
Fragmentation should be transparent to a TCP application. Keep in mind that TCP is a stream protocol: you get a stream of data, not packets! If you are building your application based on the idea of complete data packets then you will have problems unless you add an abstraction layer to assemble whole packets from the stream and then pass the packets up to the application.
The question makes an assumption that is not true -- TCP does not deliver packets to its endpoints, rather, it sends a stream of bytes (octets). If an application writes two strings into TCP, it may be delivered as one string on the other end; likewise, one string may be delivered as two (or more) strings on the other end.
RFC 793, Section 1.5:
"The TCP is able to transfer a
continuous stream of octets in each
direction between its users by
packaging some number of octets into
segments for transmission through the
internet system."
The key words being continuous stream of octets (bytes).
RFC 793, Section 2.8:
"There is no necessary relationship
between push functions and segment
boundaries. The data in any particular
segment may be the result of a single
SEND call, in whole or part, or of
multiple SEND calls."
The entirety of section 2.8 is relevant.
At the application layer there are any number of reasons why the whole 1500 bytes may not show up in one read. Various factors in the operating system internals and the TCP stack may cause the application to get some bytes in one read call, and some in the next. Yes, the TCP stack has to re-assemble the packet before sending it up, but that doesn't mean your app is going to get it all in one shot (it will LIKELY get it in one read, but it's not GUARANTEED to).
TCP tries to guarantee in-order delivery of bytes, with error checking, automatic re-sends, etc happening behind your back. Think of it as a pipe at the app layer and don't get too bogged down in how the stack actually sends it over the network.
This page is a good source of information about some of the issues that others have brought up, namely the need for data encapsulation on an application-protocol-by-application-protocol basis. It's not quite authoritative in the sense you describe, but it has examples and is sourced to some pretty big names in network programming.
If a packet exceeds the MTU of a network device it will be broken up into multiple packets. (Note that most equipment is set to 1500 bytes, but this is not a necessity.)
The reconstruction of the packet should be entirely transparent to the applications.
Different network segments can have different MTU values. In that case fragmentation can occur. For more information see TCP Maximum segment size
This (de)fragmentation happens in the TCP layer. In the application layer there are no more packets. TCP presents a contiguous data stream to the application.
A the "application layer" a TCP packet (well, segment really; TCP at its own layer doesn't know from packets) is never fragmented, since it doesn't exist. The application layer is where you see the data as a stream of bytes, delivered reliably and in order.
If you're thinking about it otherwise, you're probably approaching something in the wrong way. However, this is not to say that there might not be a layer above this, say, a sequence of messages delivered over this reliable, in-order bytestream.
Correct - the most informative way to see this is with Wireshark, an invaluable tool. Take the time to figure it out - it has saved me several times, and it gives a good reality check.
If a 3000-byte packet enters an Ethernet network with the default MTU of 1500 (for Ethernet), it will be fragmented into two packets of roughly 1500 bytes each. That is the only time I can think of.
Wireshark is your best bet for checking this. I have been using it for a while and am totally impressed.

recommended chunk size for sending in C?

I have a server which needs to transfer a large amount of data (~1-2 gigs) per request. What is the recommended amount of data that each call to send should have? Does it matter?
The TCP/IP stack takes care of not sending packets larger than the path MTU, and you would have to work rather hard to make it send packets smaller than the MTU, in ways that wouldn't help throughput; and there is no way of discovering what the path MTU actually is for any given connection. So leave all consideration of the path MTU out of it.
You can and should make every send() as big as possible.
Your packet size is constrained by the Maximum Transmission Unit (MTU) of the protocol. If you send a packet bigger than the MTU then you get packet fragmentation.
You can do some math: http://www.speedguide.net/articles/mtu-what-difference-does-it-make--111
More References: http://en.wikipedia.org/wiki/Maximum_transmission_unit
But ultimately my suggestion is to aim for "good enough" and just let the OS network layer do its work; unless you are programming raw sockets or passing some funky options, it is pretty common that the OS will have a network buffer and will try to do its best.
In that case, send() is nothing more than a memory transfer from your user process's memory to the kernel's private memory. The rule of thumb is to not exceed the virtual memory page size (http://en.wikipedia.org/wiki/Page_(computer_memory)), which is usually 4 KB.
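Either way, here is a short sketch of the usual pattern for pushing a large buffer through send() while letting the stack handle segmentation. The helper name is invented, and whether you hand it page-sized chunks or the whole buffer is up to you:

#include <sys/socket.h>
#include <sys/types.h>

/* Send the whole buffer, looping over partial send()s.
   The kernel segments the data according to the path MTU on its own. */
static ssize_t send_all(int sockfd, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(sockfd, buf + sent, len - sent, 0);
        if (n < 0)
            return -1;            /* caller inspects errno */
        sent += (size_t)n;
    }
    return (ssize_t)sent;
}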

libpcap format - packet header - incl_len / orig_len

The libpcap packet header structure has 2 length fields:
typedef struct pcaprec_hdr_s {
    guint32 ts_sec;   /* timestamp seconds */
    guint32 ts_usec;  /* timestamp microseconds */
    guint32 incl_len; /* number of octets of packet saved in file */
    guint32 orig_len; /* actual length of packet */
} pcaprec_hdr_t;
incl_len: the number of bytes of packet data actually captured and saved in the file. This value should never become larger than orig_len or the snaplen value of the global header.
orig_len: the length of the packet as it appeared on the network when it was captured. If incl_len and orig_len differ, the actually saved packet size was limited by snaplen.
Can anyone tell me what the difference between the two length fields is? If we are saving the packet in its entirety, how can the two differ?
Reading through the documentation at the Wireshark wiki ( http://wiki.wireshark.org/Development/LibpcapFileFormat ) and studying an example pcap file, it looks like incl_len and orig_len are usually the same quantity. The only time they will differ is if the length of the packet exceeded the size of snaplen, which is specified in the global header for the file.
I'm just guessing here, but I imagine that snaplen specifies the size of the static buffer used for capturing. In the event that a packet was too large for the capture buffer, this is the format's method for signaling that fact. snaplen is documented to "usually" be 65535, which is large enough for most packets. But the documentation stipulates that the size might be limited by the user.
Can anyone tell me what the difference between the two length fields is? If we are saving the packet in its entirety, how can the two differ?
If you're saving the entire packet, the 2 shouldn't differ.
However, if, for example, you run tcpdump or TShark or dumpcap or a capture-from-the-command-line Wireshark and specify a small value with the "-s n" flag, or specify a small value in the "Limit each packet to [n] bytes" option in the Wireshark GUI, then libpcap/WinPcap will be passed that value and will only supply the first n bytes of each packet to the program, and the entire packet won't be saved.
A limited "snapshot length" means you don't see all the packet data, so some analysis might not be possible, but means that less memory is needed in the OS to buffer packets (so fewer packets might be dropped), and less CPU bandwidth is needed to copy packet data to the application and less disk bandwidth is needed to save packets to disk if the application is saving them (which might also reduce the number of packets dropped), and less disk space is needed for the saved packets.

Sending TCP frames of fixed length

I need to send some data over the subnet with fixed non-standard MTU (for example, 1560) using TCP.
All the Ethernet frames transfered through this subnet should be manually padded with 0's, if the frame's length is less than MTU.
So, the data size should be
(1560 - sizeof( IP header ) - sizeof( TCP header ) ).
This is the way I am going to do it:
I set the TCP_CORK option to decrease the fragmenting of data. It is not reliable, because there is a 200 millisecond ceiling, but it works.
I know the size of the IP header (20 bytes), so the data length should be equal to (1540 - sizeof( TCP header )).
That's the problem. I don't know the TCP header size. The size of its "Options" field is variable.
So, the question is: how to get the size of TCP header? Or maybe there is some way to send TCP frames with headers of fixed length?
Trying to control the size of frames when using TCP from the user application is wrong. You are working at the wrong abstraction level. It's also impossible.
What you should be doing is either consider replacing TCP with something else (UDP?) or, less likely but possible, rewrite your Ethernet driver to set the non-standard MTU and do the padding you need.
This isn't possible using the TCP stack of the host simply because a TCP stack that follows RFC 793 isn't supposed to offer this kind of access to an application.
That is, there isn't (and there shouldn't be) a way to influence what the lower layers do with your data. Of course, there are ways to influence what TCP does (Nagle for example) but that is against the spirit of the protocol. TCP should be used for what it's best at: transferring a continuous, ordered stream of bytes. Nothing more, nothing less. No messages, packets, frames.
If, after all, you do need to control such details, you need to look at lower-level APIs. You could use SOCK_RAW and PF_PACKET (see the sketch at the end of this answer).
Packet sockets are used to receive or send raw packets at the device driver (OSI Layer 2) level.
#gby mentioned UDP and that is (partially) a good idea: UDP has a fixed size. But keep in mind that you will have to deal with IP fragmentation (or use IP_DONTFRAG).
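Along the lines of the SOCK_RAW/PF_PACKET suggestion above, here is a hedged, Linux-specific sketch of sending one hand-built Ethernet frame zero-padded to a fixed length. The function name, buffer size, and interface handling are invented for the example; the caller is assumed to build the full Ethernet/IP/TCP headers itself, which bypasses the kernel's TCP stack entirely and requires CAP_NET_RAW:

#include <arpa/inet.h>        /* htons */
#include <net/ethernet.h>     /* ETH_P_ALL, ETH_ALEN */
#include <net/if.h>           /* if_nametoindex */
#include <netpacket/packet.h> /* struct sockaddr_ll */
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send one Ethernet frame, zero-padded to pad_len bytes. */
static int send_padded_frame(const char *ifname,
                             const unsigned char *frame, size_t frame_len,
                             size_t pad_len)
{
    unsigned char buf[2048];
    if (frame_len > pad_len || pad_len > sizeof buf)
        return -1;

    memset(buf, 0, pad_len);           /* pad with 0's up to pad_len */
    memcpy(buf, frame, frame_len);

    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0)
        return -1;

    struct sockaddr_ll addr;
    memset(&addr, 0, sizeof addr);
    addr.sll_family = AF_PACKET;
    addr.sll_ifindex = (int)if_nametoindex(ifname);
    addr.sll_halen = ETH_ALEN;         /* destination MAC is already in frame[] */

    ssize_t n = sendto(fd, buf, pad_len, 0,
                       (struct sockaddr *)&addr, sizeof addr);
    close(fd);
    return n == (ssize_t)pad_len ? 0 : -1;
}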
In addition to my comments below the OP's question, this quote from the original RFC outlining how to send TCP/IP over Ethernet is relevant:
RFC 894 (emphasis mine):
If necessary, the data field should be padded (with octets of zero) to meet the Ethernet minimum frame size.
If they wanted all ethernet frames to be at maximum size, they would have said so. They did not.
Maybe what was meant by padding is that the TCP header padding used to align it to 32 bits should be all zeros: http://freesoft.org/CIE/Course/Section4/8.htm
