OSI Layers on localhost - C

I wrote a small application to display the protocol headers of captured packets. All my packets are captured with libpcap's pcap_loop. My program works as follows: I wrote my own headers based on the structures defined in if_ether.h, ip.h, and tcp.h. pcap_loop sets a char pointer to the beginning of the packet, and I then step through the packet, casting to the appropriate structure each time and incrementing the pointer by the size of the header (sketched after my questions below). It's important to note that my question isn't code-specific; my code works, but there are logical flaws I don't understand. Keep in mind that my packets are sent over the same machine, on different ports (I wrote a tiny Python server that I send data to with telnet):
1. The Ethernet header doesn't display anything that looks correct when packets are sent over localhost (when I use my program on internet packets, MAC addresses are displayed correctly, though).
2. Through trial and error, I've determined that the iphdr structure starts exactly 16 bytes after the start of the packet buffer, as opposed to the expected 14 bytes, the size of the Ethernet header.
Those observations lead me to ask the following questions:
When packets are sent over localhost, do we use another protocol at layer 2?
Is there anything at all that separates the packet headers?
Are the iphdr and tcphdr structures defined in ip.h and tcp.h obsolete?
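
For reference, my stepping approach looks roughly like this (a sketch only; dissect() is an illustrative name, and it assumes the capture really starts with a 14-byte Ethernet header, which is exactly the assumption in question):

    #include <linux/if_ether.h>   /* struct ethhdr */
    #include <netinet/ip.h>       /* struct iphdr */
    #include <netinet/tcp.h>      /* struct tcphdr */

    /* Sketch: step through a captured packet by casting and advancing.
       Assumes a 14-byte Ethernet header at the start of the buffer. */
    void dissect(const unsigned char *packet)
    {
        const struct ethhdr *eth = (const struct ethhdr *)packet;
        const struct iphdr  *ip  =
            (const struct iphdr *)(packet + sizeof(struct ethhdr));
        /* The IP header is variable-length: advance by ihl * 4, not sizeof. */
        const struct tcphdr *tcp =
            (const struct tcphdr *)((const unsigned char *)ip + ip->ihl * 4);
        (void)eth; (void)tcp;     /* print header fields here */
    }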

When packets are sent over localhost, do we use another protocol at layer 2?
There really isn't a layer 2 protocol, as there's no real network adapter.
However, there are fake layer 2 headers provided to programs that capture traffic. Which fake headers are provided is operating-system-dependent.
On Linux, the fake layer 2 headers are fake Ethernet headers.
On *BSD, OS X, iOS, and, I think, Solaris 11, they're either DLT_NULL or DLT_LOOP headers, as described in the list of libpcap/WinPcap/pcap/pcap-ng link-layer header types.
However:
Through trial and error, I've determined that the iphdr structure starts exactly 16 bytes after the start of the packet buffer
If you're capturing on the "any" device, the headers are DLT_LINUX_SLL headers, which are 16 bytes long.
If you are using pcap or any pcap wrapper, you MUST, without exception, call pcap_datalink(), or the wrapper's equivalent, before trying to parse any packets you capture or read from a savefile. You MUST NOT assume the packets will have ANY particular link-layer header type.
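
A minimal sketch of that check, covering only the link-layer header types mentioned above (link_header_len() is an illustrative name; a real program should handle, or explicitly reject, every DLT_ value it might see):

    #include <pcap/pcap.h>

    /* Map the capture's link-layer type to its header length.
       Returns -1 for types this sketch doesn't know about. */
    int link_header_len(pcap_t *p)
    {
        switch (pcap_datalink(p)) {
        case DLT_EN10MB:    return 14;  /* Ethernet */
        case DLT_NULL:
        case DLT_LOOP:      return 4;   /* BSD/OS X loopback encapsulation */
        case DLT_LINUX_SLL: return 16;  /* Linux cooked capture ("any" device) */
        case DLT_RAW:       return 0;   /* packet begins with the IP header */
        default:            return -1;  /* unknown: don't guess */
        }
    }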

Related

Can an IPv4 packet with an additional header be segmented with GSO?

I'm having trouble with packet segmentation. I've already read from many sources about GSO, which is a generalized way of segmenting a packet larger than the Ethernet MTU (1500 B). However, I have not found answers to some doubts I have.
If we add a new set of bytes (e.g. a new header by the name 'NH') between the L2 and L3 layers, the kernel must be able to step past NH and adjust the sk_buff pointer to the beginning of L3 in order to offload the packet according to the 'policy' of the L3 protocol type (e.g. IPv4 fragmentation). My thought was to modify the skb_network_protocol() function. This function, if I'm not wrong, enables skb_mac_gso_segment() to properly call the GSO function for different L3 protocol types. However, I'm not able to segment my packets properly.
I have a kernel module that forwards packets through the network (OVS, Open vSwitch). In the tests I've been running (h1 --ping-- h2), the host generates large ICMP packets and then sends packets that are less than or equal to the MTU size. Those packets are received by the first switch, which attaches the new header NH, so a packet that was 1500 B becomes 1500 B + NH length. Here is the problem: the switch has already received a fragmented packet from the host, and it then adds more bytes to the packet (much as VLAN tagging does).
Therefore, at first, I tried pinging with large packets, but it didn't work. In OVS, before calling dev_queue_xmit(), a packet can be segmented by calling skb_gso_segment(). However, the packet first needs to pass a condition checked by netif_needs_gso(). I'm not sure whether I have to use skb_gso_segment() to properly segment the packet.
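
For reference, the usual pattern I've seen around skb_gso_segment() before dev_queue_xmit() looks roughly like this (a sketch only; xmit_maybe_segmented() is an illustrative name, and the signatures of netif_needs_gso() and skb_gso_segment() vary across kernel versions):

    #include <linux/err.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Sketch: segment an oversized skb, then transmit each segment.
       Exact in-kernel signatures differ between kernel versions. */
    static int xmit_maybe_segmented(struct sk_buff *skb)
    {
        netdev_features_t features = netif_skb_features(skb);

        if (netif_needs_gso(skb, features)) {
            struct sk_buff *segs = skb_gso_segment(skb, features);

            if (IS_ERR(segs))
                return PTR_ERR(segs);

            consume_skb(skb);           /* original skb no longer needed */
            while (segs) {
                struct sk_buff *next = segs->next;

                segs->next = NULL;
                dev_queue_xmit(segs);
                segs = next;
            }
            return 0;
        }
        return dev_queue_xmit(skb);
    }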
I also noticed that, for netif_needs_gso() to return true, skb_shinfo(skb)->gso_size has to be nonzero. However, gso_size is always zero for all the received packets. So, I ran a test assigning an arbitrary value to gso_size (e.g. 1448 B). Now I was able to ping from h1 to h2, but the first 2 packets were lost. In another test, TCP had extremely poor performance. And since then, I've been getting a kernel warning: "[ 5212.694418] [c1642e50] ? skb_warn_bad_offload+0xd0/0xd8"
For small packets (< MTU) I have no trouble and ping works fine. TCP works too, but only with a small window size.
Does anyone have any idea what's happening? Should I always use GSO when I get large packets? Is it possible to fragment an already-fragmented IPv4 packet?
Since the new header lies between L2 and L3, I guess that enlarging an IPv4 packet with the additional header is similar to what happens with VLAN tagging. How does VLAN handle the segmentation problem?

Send IPv6 jumbograms in C (Linux): how to change packet headers

I am sorry if the question is too naive, but I am confused. I want to send IPv6 jumbograms (to be able to multicast packets of size > 64 KB). I have been able to multicast normal IPv6 UDP packets successfully.
For sending jumbograms, from RFC 2675, I gather that I have to make the following changes:
set payload length to 0
set next header to hop-by-hop
But I don't get how to implement these in C socket programming (which function calls to make, etc.). Do I have to create a custom header, or are there functions like sendto available to send jumbograms?
You could use raw sockets if you are making your own headers; for more information, see raw(7) (man -s7 raw). Note that you will effectively need to implement your own IP stack that way.
However, my understanding is that Linux itself supports IPv6 jumbograms natively, so you don't need to bother. Try ifconfig lo mtu 100000 and do some tests over the loopback device to check.
I suspect the issue might be that your network adapter and everything on the path (end to end) needs to support jumbograms too.
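
If you do go the raw-socket route, the RFC 2675 hop-by-hop header is small enough to build by hand. Here is a sketch of its layout (struct jumbo_hbh and fill_jumbo_hbh are illustrative names; the field values come from RFC 2675 - the base IPv6 header's payload length is the field you set to 0, and its next header field points at this extension header):

    #include <stdint.h>
    #include <arpa/inet.h>   /* htonl */

    /* RFC 2675: hop-by-hop extension header carrying the Jumbo Payload
       option. Exactly 8 bytes, so no padding options are needed. */
    struct jumbo_hbh {
        uint8_t  nexthdr;    /* header that follows, e.g. 17 for UDP */
        uint8_t  hdrlen;     /* length in 8-byte units beyond the first 8: 0 */
        uint8_t  opt_type;   /* 0xC2 = Jumbo Payload */
        uint8_t  opt_len;    /* 4: the option data is a 32-bit length */
        uint32_t jumbo_len;  /* length of everything after the IPv6 header */
    };

    static void fill_jumbo_hbh(struct jumbo_hbh *h, uint8_t nexthdr,
                               uint32_t len)
    {
        h->nexthdr   = nexthdr;
        h->hdrlen    = 0;
        h->opt_type  = 0xC2;
        h->opt_len   = 4;
        h->jumbo_len = htonl(len);   /* must exceed 65535 per RFC 2675 */
    }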

Why is the IP layer not removed by the kernel in the ping program

I am looking into the standard ping implementation. There, the ICMP structure is created and the data is filled in; the IP layer is added by the kernel. However, when receiving a message using recvfrom (http://linux.die.net/man/2/recvfrom), I observe that they first parse the IP packet and then parse the ICMP packet. Why is this happening? The code I am referring to is the standard ping implementation available online.
It's because the header is always included when receiving an IPv4 packet on a raw socket. Notice the following in raw(7) (emphasis mine):
The IPv4 layer generates an IP header when sending a packet unless the IP_HDRINCL socket option is enabled on the socket. When it is enabled, the packet must contain an IP header. For receiving the IP header is always included in the packet.
Since the header is always included and has variable length (for IPv4), it must be parsed to figure out where the ICMP data starts.
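That parsing step looks roughly like this (a sketch, assuming Linux's struct iphdr and struct icmphdr; parse() is an illustrative name):

    #include <stddef.h>
    #include <netinet/ip.h>       /* struct iphdr */
    #include <netinet/ip_icmp.h>  /* struct icmphdr */

    /* buf holds n bytes read with recvfrom() on a SOCK_RAW/IPPROTO_ICMP
       socket; the IPv4 header is still present and its length varies. */
    void parse(const unsigned char *buf, size_t n)
    {
        const struct iphdr *ip;
        size_t hlen;

        if (n < sizeof(struct iphdr))
            return;                   /* truncated */
        ip = (const struct iphdr *)buf;
        hlen = ip->ihl * 4;           /* IHL counts 32-bit words */

        if (n < hlen + sizeof(struct icmphdr))
            return;
        const struct icmphdr *icmp = (const struct icmphdr *)(buf + hlen);
        /* inspect icmp->type, icmp->code, ... */
        (void)icmp;
    }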
As for why the header isn't removed (sorry if that was the only thing you were wondering), I don't know. My wild guess is that enough programs that deal in raw IPv4 want to look at the header that it didn't seem worthwhile to include stripping it as an option. From a quick look it seems the header is stripped for IPv6.
The standard ping and ping6 come from iputils by the way, where ping_common.c, ping.c, and ping6.c are the most relevant source files.

Is it possible to write a packet read by libpcap with libnet, in C?

I'm trying to get libpcap to read a pcap file, have the user select a packet, and write that packet using libnet, in C.
I got the reading-from-file part done. Libpcap puts that packet into a const unsigned char. I have worked with libnet before, but never with libnet's advanced functions. I would just create packets using libnet's build functions, then send them on their way. I realize there is a function, libnet_adv_write_link(), that takes the libnet context, a pointer to a packet to inject (const uint8_t), and the size of the packet. I tried passing the 'packet' I got from libpcap, and it compiled and executed without errors. However, I am not seeing anything in Wireshark.
Would this be the right way to tackle this problem, or should I read from libpcap and build a separate packet with libnet, based on what libpcap read?
EDIT: I believe I have somewhat solved the problem. I read the packet with libpcap, put all the bytes after the 16th byte into another uchar buffer, and wrote that onto the wire using libnet_adv_write_raw_ipv4(), with libnet initialized with LIBNET_RAW4_ADV. I believe, maybe because of the driver, that I don't have much power over the ETH layer, so basically I just let it be written automatically this way, and the new uchar packet is just whatever is left after the ETH layer in the original packet. Works fine so far.
I'm the current libnet maintainer.
You should call libnet_write_link() to write a packet. If you aren't seeing it, it's possible you haven't opened the correct device, that you lack permissions (you checked the return value of libnet_write_link(), I hope), and also possible that the packet injected was invalid.
If you don't need to build the packet, though, it sounds like you should be using pcap to send it; see http://www.tcpdump.org/manpages/pcap_inject.3pcap.html
Also, your statement "Libpcap puts that packet into a const unsigned char" is odd. A packet doesn't fit in a single char; what pcap does, depending on the API, is return a pointer into the packet data. It's worth including a snippet of code showing how you get the packet data and how you pass it to libnet. It's possible you aren't using the pointers correctly.
If you are using libpcap, why not use libpcap to send the packet? No, it's not well known, but yes, it does work. See the function pcap_sendpacket.
The packet libpcap returns is simply an array of bytes. Anything that takes an array of bytes (including the Ethernet frame) should work. However, note that your OS and/or hardware may stop you from sending packets with incorrect or malformed source MAC addresses.
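
A minimal sketch of that approach, reading one packet from a savefile and retransmitting it verbatim ("in.pcap" and "eth0" are placeholders for your file and device):

    #include <stdio.h>
    #include <pcap/pcap.h>

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *rd = pcap_open_offline("in.pcap", errbuf);
        pcap_t *wr = pcap_open_live("eth0", 65535, 0, 1000, errbuf);
        struct pcap_pkthdr *hdr;
        const u_char *data;

        if (!rd || !wr) {
            fprintf(stderr, "pcap: %s\n", errbuf);
            return 1;
        }
        /* Read one packet and send its bytes, Ethernet header included. */
        if (pcap_next_ex(rd, &hdr, &data) == 1 &&
            pcap_sendpacket(wr, data, hdr->caplen) != 0)
            fprintf(stderr, "send failed: %s\n", pcap_geterr(wr));

        pcap_close(rd);
        pcap_close(wr);
        return 0;
    }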

pcap_open_dead to simulate full UDP packet capture

Following up on my question about pcap file creation, I would now like to simulate saving a full UDP packet, including the Ethernet, IP, and UDP headers.
Which DLT_XXX type should I use? I believe pcap_dump() skips the Ethernet header when using pcap_open_dead(DLT_RAW, 65535).
If you want to simulate a full UDP-over-IP-over-Ethernet packet, you want DLT_EN10MB (the "10MB" in the name is historical; DLT_EN10MB really means "all types of Ethernet").
(DLT_RAW is for packets where the lowest-level headers are for IP; it doesn't skip the Ethernet header, it means that you don't have to provide an Ethernet header and, in fact, it requires that you don't provide one - if you do provide one, it'll be written to the file, which will confuse programs reading the file, as they'll expect the packets to begin with an IPv4 or IPv6 header, not an Ethernet header.)
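
A minimal sketch of writing one hand-built Ethernet frame to a savefile this way ("out.pcap" and the zeroed frame[] are placeholders; a real frame would carry valid Ethernet, IP, and UDP headers):

    #include <string.h>
    #include <time.h>
    #include <pcap/pcap.h>

    int main(void)
    {
        u_char frame[64];
        struct pcap_pkthdr hdr;
        pcap_t *p = pcap_open_dead(DLT_EN10MB, 65535);
        pcap_dumper_t *d = pcap_dump_open(p, "out.pcap");

        if (!d)
            return 1;

        memset(frame, 0, sizeof(frame));   /* build real headers here */
        hdr.ts.tv_sec  = time(NULL);
        hdr.ts.tv_usec = 0;
        hdr.caplen = hdr.len = sizeof(frame);

        pcap_dump((u_char *)d, &hdr, frame);
        pcap_dump_close(d);
        pcap_close(p);
        return 0;
    }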
