I'm writing a network analyzer and I need to filter packets saved in a file. I have written some code to filter HTTP packets, but I'm not sure it works as it should, because when I use my code on a pcap dump the result is 5 packets, while typing http into the Wireshark filter gives me 2 packets, and if I use:
tcpdump port http -r trace-1.pcap
it gives me 11 packets.
Well, 3 different results, that's a little confusing.
The filter and the packet processing in my code are:
...
if (pcap_compile(handle, &fcode, "tcp port 80", 1, netmask) < 0)
...
while ((packet = pcap_next(handle,&header))) {
u_char *pkt_ptr = (u_char *)packet;
//parse the first (ethernet) header, grabbing the type field
int ether_type = ((int)(pkt_ptr[12]) << 8) | (int)pkt_ptr[13];
int ether_offset = 0;
if (ether_type == ETHER_TYPE_IP) // ethernet II
ether_offset = 14;
else if (ether_type == ETHER_TYPE_8021Q) // 802
ether_offset = 18;
else {
    fprintf(stderr, "Unknown ethernet type, %04X, skipping...\n", ether_type);
    continue; // actually skip this packet instead of parsing it with a zero offset
}
//parse the IP header
pkt_ptr += ether_offset; //skip past the Ethernet II header
struct ip_header *ip_hdr = (struct ip_header *)pkt_ptr;
int packet_length = ntohs(ip_hdr->tlen);
printf("\n%d - packet length: %d, and the capture lenght: %d\n", cnt++,packet_length, header.caplen);
}
My question is: why are there 3 different results when filtering for HTTP? And/or, if I'm filtering it wrong, how can I do it right? Also, is there a way to filter HTTP (or SSH, FTP, Telnet, ...) packets using something other than the port numbers?
Thanks
So I have figured it out. It took a little searching and understanding, but I did it.
With the Wireshark filter set to http, Wireshark shows the packets that are on TCP port 80 and also have the PSH and ACK flags set. After realizing this, the tcpdump parameters that produce the same number of packets were easy to write; one possible expression is shown below.
So now Wireshark and tcpdump give the same results.
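For example, something along these lines should match port-80 packets that also have both PSH and ACK set, at least with a reasonably recent tcpdump (tcp-push and tcp-ack are standard pcap-filter flag constants; adjust the file name to your trace):
tcpdump -r trace-1.pcap 'tcp port 80 and (tcp[tcpflags] & (tcp-push|tcp-ack)) == (tcp-push|tcp-ack)'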
What about my code? Well, I figured out that I actually had an error in my question: the filter
if (pcap_compile(handle, &fcode, "tcp port 80", 1, netmask) < 0)
indeed gives 11 packets (src or dst port equal to 80, no matter what the TCP flags are).
Now, filtering out the desired packets is a question of understanding the filter syntax well,
or of setting the filter to just the port (80, 21, 22, ...) and then, in the callback function or in the while loop, getting the TCP header, reading the flags from it, and using a mask to check whether it is the packet you want (PSH, ACK, SYN, ...). The flag values are listed, for example, here. A sketch of that check is below.
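For the second approach, a minimal sketch of the flag check inside the capture loop (using the BSD-style struct tcphdr and the TH_* masks from <netinet/tcp.h>; the IP header length is taken from its IHL field, and pkt_ptr is assumed to already point at the IP header, as in my loop above):
#include <netinet/tcp.h>   // struct tcphdr, TH_PUSH, TH_ACK (BSD-style field names)

int ip_hdr_len = (pkt_ptr[0] & 0x0F) * 4;                         // IHL field, in 32-bit words
struct tcphdr *tcp_hdr = (struct tcphdr *)(pkt_ptr + ip_hdr_len);

if ((tcp_hdr->th_flags & (TH_PUSH | TH_ACK)) == (TH_PUSH | TH_ACK)) {
    // a port-80 packet with both PSH and ACK set
    printf("HTTP data packet\n");
}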
I am using DPDK 19.11.10 on CentOS.
The application works fine with HW offloading if I send only IPv4 packets without a VLAN header.
If I add a VLAN header to the IPv4 packet, HW offloading does not work.
If I capture a pcap on the Ubuntu gateway, the IP header is corrupted and shows up as a fragmented IP packet, even though we are not fragmenting IP packets.
We verified the capabilities like this:
if (!(dev->tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT)) {
rte_panic(" VLAN offload not supported");
}
Below is my code:
.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
DEV_TX_OFFLOAD_UDP_CKSUM | DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_VLAN_INSERT),
m->l2_len = L2_HDR_SIZE;
m->l3_len = L3_IPV4_HDR_SIZE;
ip_hdr->check = 0;
m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM;
ip_hdr = rte_pktmbuf_mtod(m, struct iphdr *);
vlan1_hdr = (struct vlan1_hdr *) rte_pktmbuf_prepend(m, sizeof(struct vlan1_hdr));
eth_hdr = (struct ethernet_hdr *) rte_pktmbuf_prepend(m, (uint16_t)sizeof(struct ethernet_hdr));
Once the packet is received on the Ubuntu gateway, the IP packet is corrupted and appears as a fragmented IP packet.
The same code works fine if I remove the VLAN header.
Does anything else need to be added here?
By the sound of it,
you might misunderstand how HW Tx VLAN offload is supposed to work;
your code does not update m->l2_len when it inserts a VLAN header.
First of all, your code enables support for HW Tx VLAN offload, but, oddly enough, it does not actually attempt to use it. If one wants to use hardware Tx VLAN offload, they should set PKT_TX_VLAN in m->ol_flags and fill out m->vlan_tci. The VLAN header will be added by the hardware.
However, your code prepends the header itself, as if there were no intent to use a hardware offload in the first place. Your code does m->l2_len = L2_HDR_SIZE;, which, I presume, only accounts for the Ethernet header. When your code prepends a VLAN header, this variable has to be updated accordingly:
m->l2_len += sizeof(struct rte_vlan_hdr);
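For reference, a sketch of the hardware path described above (vlan_id here is a placeholder for whatever TCI you need; L2_HDR_SIZE and L3_IPV4_HDR_SIZE are your existing macros), with no manual prepend at all:
/* Hardware Tx VLAN insertion: do not prepend the VLAN header yourself.
 * The PMD reads vlan_tci from the mbuf and the NIC inserts the tag on transmit. */
m->ol_flags |= PKT_TX_VLAN | PKT_TX_IPV4 | PKT_TX_IP_CKSUM;
m->vlan_tci  = vlan_id;          /* desired TCI; the mbuf field is in CPU byte order */
m->l2_len    = L2_HDR_SIZE;      /* plain Ethernet header only; the hardware adds the tag */
m->l3_len    = L3_IPV4_HDR_SIZE;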
Most DPDK NIC PMDs support HW VLAN offload in the RX direction, but only a limited number of PMDs support the DEV_TX_OFFLOAD_VLAN_INSERT feature, namely:
Aquantia Atlantic
Marvell OCTEON CN9K/CN10K SoC
Cisco VIC adapter
Pensando NIC
OCTEON TX2
Wangxun 10 Gigabit Ethernet NIC and
Intel NIC - i40e, ice, iavf, ixgbe, igb
To enable HW VLAN INSERT one needs to:
check that DEV_TX_OFFLOAD_VLAN_INSERT is reported in the capabilities returned by rte_eth_dev_info_get
configure the TX offloads for the port with DEV_TX_OFFLOAD_VLAN_INSERT
set the mbuf descriptor's ol_flags to PKT_TX_VLAN and vlan_tci to the desired TCI (the mbuf field is in CPU byte order)
This allows the driver code in the xmit function to check the mbuf's ol_flags for PKT_TX_VLAN and enable the VLAN insert offload in hardware by setting the appropriate command in the packet descriptor before DMA; a minimal configuration sketch is shown below.
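A rough sketch of those steps (port_id, nb_rxq and nb_txq are assumed to be defined by the rest of the init code):
struct rte_eth_dev_info dev_info;
rte_eth_dev_info_get(port_id, &dev_info);

/* 1. capability check */
if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT))
    rte_panic("VLAN insert offload not supported by this PMD\n");

/* 2. request the offload when configuring the port */
struct rte_eth_conf port_conf = { 0 };
port_conf.txmode.offloads = DEV_TX_OFFLOAD_VLAN_INSERT;
rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);

/* 3. per packet: mark the mbuf and let the NIC insert the tag */
m->ol_flags |= PKT_TX_VLAN;
m->vlan_tci = 100;               /* example TCI value */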
On the DPDK side, the following conditions have to be satisfied:
at any given instant, only one thread should be accessing and updating the mbuf;
no modification is to be done to the original mbuf (the one carrying the payload).
If the intention is to perform the VLAN insert in SW (especially if the HW or virtual NIC PMD does not support it), in DPDK one has to do the following:
ensure refcnt is 1, to prevent multiple threads from accessing and modifying the intended buffer;
ensure there is enough headroom to shift the packet by 4 bytes to accommodate the EtherType and VLAN values;
ensure pkt_len and data_len stay in bounds (greater than 60 bytes and at least 4 bytes below the MTU);
make sure the mbuf offload flags do not have PKT_TX_VLAN enabled;
increase data_len of the modified mbuf by 4 bytes;
increase the total pkt_len by 4;
(optional, for performance) prefetch the few bytes just before the mbuf's mtod data address.
Note: all of the above is easily achieved by using the DPDK function rte_vlan_insert. To use it, follow these steps:
Do not configure the port with DEV_TX_OFFLOAD_VLAN_INSERT.
Update ol_flags with PKT_TX_VLAN and set vlan_tci to the desired value.
Invoke rte_vlan_insert with (a pointer to) the mbuf.
Sample code:
/* Get burst of RX packets, from first port of pair. */
struct rte_mbuf *bufs[BURST_SIZE];
const uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
if (unlikely(nb_rx == 0))
continue;
for (int i = 0; i < nb_rx; i++) {
    bufs[i]->ol_flags |= PKT_TX_VLAN;   /* mark the mbuf, per the steps above */
    bufs[i]->vlan_tci = 0x10;           /* desired TCI */
    rte_vlan_insert(&bufs[i]);
}
/* Send burst of TX packets, to second port of pair. */
const uint16_t nb_tx = rte_eth_tx_burst(port, 0,
bufs, nb_rx);
/* Free any unsent packets. */
if (unlikely(nb_tx < nb_rx)) {
uint16_t buf;
for (buf = nb_tx; buf < nb_rx; buf++)
rte_pktmbuf_free(bufs[buf]);
}
I’m writing a C program which builds an Ethernet/IPv4/TCP network packet, then writes the packet into a PCAP file for inspection. I based my code on the SO post here. The first version of my code worked perfectly, but it was one big main() function, and that is not easy to port into larger programs.
So I reorganized the code so I could port it into another program. I don’t want to get into the differences between Version 1 and Version 2 in this post. But needless to say, Version 2 works great, except for one annoying quirk. When Wireshark opened a Version 1 PCAP file, it saw that my Layer 2 was Ethernet II:
Frame 1: 154 bytes on wire (1232 bits), 154 bytes captured (1232 bits)
Ethernet II, Src: 64:96:c8:fa:fc:ff (64:96:c8:fa:fc:ff), Dst: Woonsang_04:05:06 (01:02:03:04:05:06)
Destination: Woonsang_04:05:06 (01:02:03:04:05:06)
Source: 64:96:c8:fa:fc:ff (64:96:c8:fa:fc:ff)
Type: IPv4 (0x0800)
Internet Protocol Version 4, Src: 10.10.10.10, Dst: 20.20.20.20
Transmission Control Protocol, Src Port: 22, Dst Port: 55206, Seq: 1, Ack: 1, Len: 100
SSH Protocol
But in Version 2, the Layer 2 header became 802.3 Ethernet:
Frame 1: 154 bytes on wire (1232 bits), 134 bytes captured (1072 bits)
IEEE 802.3 Ethernet
Destination: Vibratio_1c:08:00 (00:09:70:1c:08:00)
Source: 45:00:23:28:06:cf (45:00:23:28:06:cf)
Length: 64
Trailer: 050401040204000001020506040400070602040704060202…
Logical-Link Control
Data (61 bytes)
[Packet size limited during capture: Ethernet truncated]
I’m no expert in networking, but I’m guessing my Version 2 PCAP file is malformed somewhere. I should not have a Logical-Link Control header in there; my code thinks it is writing Ethernet II / IPv4 / TCP headers. At this point, my instinct is that either the PCAP packet header (which must precede every packet in a PCAP file) or my Ethernet header is incorrect, somehow. What would tell Wireshark "the next X bytes are an Ethernet II header"?
Here’s my code, in excerpts:
The structs for the PCAP header and Ethernet frames were cribbed directly from the aforementioned SO post. The solution in that post was to use the pcap_sf_pkthdr struct for the PCAP packet header:
// struct for PCAP Packet Header - Timestamp
struct pcap_timeval {
bpf_int32 tv_sec; // seconds
bpf_int32 tv_usec; // microseconds
};
// struct for PCAP Packet Header
struct pcap_sf_pkthdr {
struct pcap_timeval ts; // time stamp
bpf_u_int32 caplen; // length of portion present
bpf_u_int32 len; // length this packet (off wire)
};
And the Ethernet header is from the original post:
// struct for the Ethernet header
struct ethernet {
u_char mac1[6];
u_char mac2[6];
u_short protocol; // will be ETHERTYPE_IP, for IPv4
};
There’s not much to either struct, right? I don’t really understand how Wireshark looks at this and knows the first 14 bytes of the packet are Ethernet.
Here’s the actual code, slightly abridged:
#include <netinet/in.h> // for ETHERTYPE_IP
struct pcap_sf_pkthdr* allocatePCAPPacketHdr(struct pcap_sf_pkthdr* pcapPacketHdr ){
pcapPacketHdr = malloc( sizeof(struct pcap_sf_pkthdr) );
if( pcapPacketHdr == NULL ){
return NULL;
}
uint32_t frameSize = sizeof( struct ethernet) + …correctly computed here
bzero( pcapPacketHdr, sizeof( struct pcap_sf_pkthdr ) );
pcapPacketHdr->ts.tv_sec = 0; // for now
pcapPacketHdr->ts.tv_usec = 0; // for now
pcapPacketHdr->caplen = frameSize;
pcapPacketHdr->len = frameSize;
return pcapPacketHdr;
}
void* allocateL2Hdr( packetChecklist* pc, void* l2header ){
l2header = malloc( sizeof( struct ethernet ) );
if( l2header == NULL ){
return NULL;
}
bzero( ((struct ethernet*)l2header)->mac1, 6 );
bzero( ((struct ethernet*)l2header)->mac2, 6 );
// …MAC addresses filled in later…
((struct ethernet*)l2header)->protocol = ETHERTYPE_IP; // This is correctly set
return l2header;
}
...and the code which uses the above functions...
struct pcap_sf_pkthdr* pcapPacketHdr;
pcapPacketHdr = allocatePCAPPacketHdr( pcapPacketHdr );
struct ethernet* l2header;
l2header = allocateL2Hdr( l2header );
Later, the code populates these structs and writes them into a file, along with an IPv4 header, a TCP header, and so on.
But I think my problem is that I don’t really understand how Wireshark is supposed to know that my Ethernet header is Ethernet II and not 802.3 Ethernet with a Logical-Link Control header. Is that communicated in the PCAP Packet Header? Or in the Ethernet frame somewhere? I’m hoping for advice. Thank you
Wireshark is supposed to know that my Ethernet header is Ethernet II and not 802.3 Ethernet with a Logical-Link Control header. Is that communicated in the PCAP Packet Header?
No.
Or in the ethernet frame somewhere?
Yes.
If you want the details, see, for example, the "Types" section of the Wikipedia "Ethernet frame" page.
However, the problem appears to be that the packet you're writing to the file doesn't have the full 6-byte destination and source addresses in it - the last two bytes of the destination address are 0x08 0x00, which are the first two bytes of a big-endian value of ETHERTYPE_IP (0x0800), and the first byte of the source address is 0x45, which is the first byte of an IPv4 header for an IPv4 packet with no IP options.
Somehow, Version 1 of your program put the destination and source addresses into the data part of the pcap record, but Version 2 didn't.
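For illustration only (the MAC addresses are placeholders), a record whose data begins with a complete 14-byte Ethernet II header looks like this; Wireshark treats the 2-byte field after the two addresses as an EtherType (Ethernet II) when it is 0x0600 or greater, and as an 802.3 length when it is 1500 or less:
#include <stdint.h>
#include <string.h>        // memcpy
#include <arpa/inet.h>     // htons

unsigned char frame[1514];
const unsigned char dst_mac[6] = { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06 };
const unsigned char src_mac[6] = { 0x64, 0x96, 0xc8, 0xfa, 0xfc, 0xff };
uint16_t ethertype = htons(0x0800);         // ETHERTYPE_IP, big-endian on the wire

memcpy(frame,      dst_mac,    6);          // bytes  0..5:  destination MAC
memcpy(frame + 6,  src_mac,    6);          // bytes  6..11: source MAC
memcpy(frame + 12, &ethertype, 2);          // bytes 12..13: EtherType >= 0x0600 => Ethernet II
// bytes 14 onward: IPv4 header, TCP header, payload -- the whole frame goes into the
// pcap record data, with caplen/len in the record header covering all of it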
I am pretty new to networking and I have been trying to understand ARP requests. I've been using mininet and wireshark in order to test what I'm doing.
When I use mininet to generate 2 hosts (h1 and h2) and a switch, my ARP broadcast is immediately answered with an ARP reply and everything works correctly.
When I use a given router.py script that generates the following on mininet -
*** Creating network
*** Adding controller
*** Adding hosts:
h1x1 h1x2 h2x1 h2x2 h3x1 h3x2 r0
*** Adding switches:
s1 s2 s3
*** Adding links:
(h1x1, s1) (h1x2, s1) (h2x1, s2) (h2x2, s2) (h3x1, s3) (h3x2, s3) (s1, r0) (s2, r0) (s3, r0)
*** Configuring hosts
h1x1 h1x2 h2x1 h2x2 h3x1 h3x2 r0
*** Starting controller
c0
*** Starting 3 switches
s1 s2 s3 ...
*** Routing Table on Router:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 r0-eth3
172.16.0.0 0.0.0.0 255.240.0.0 U 0 0 0 r0-eth2
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 r0-eth1
// ./a.out Send <InterfaceName> <DestIP> <RouterIP> <Message>
mininet> h1x1 ./a.out Send h1x1-eth0 10.0.0.1 192.168.1.100 'This is a test'
This is the command I run in mininet to send the ARP request.
When I try to run the ARP request using destination IP 10.0.0.1 and the router IP 192.168.1.100, my ARP request broadcasts normally, but I do not get an ARP reply; instead I get a series of ICMPv6 packets.
Here is how I am creating my ARP header
struct arp_hdr construstArpRequest(char if_name[], int sockfd, struct in_addr dst, struct ifreq if_hwaddr) {
printf("Constructing ARP request --\n");
struct arp_hdr arphdr;
arphdr.ar_hrd = htons(0x0001);
arphdr.ar_pro = htons(0x0800);
arphdr.ar_hln = 6;
arphdr.ar_pln = 4;
arphdr.ar_op = htons(0x0001);
unsigned long sip = get_ip_saddr(if_name, sockfd); // source IP
memcpy(arphdr.ar_sip, &sip, 4); // source IP
memcpy(arphdr.ar_tip, &dst.s_addr, 4); // target IP
memset(arphdr.ar_tha, 0, 6); // target HA
memcpy(arphdr.ar_sha, if_hwaddr.ifr_hwaddr.sa_data, 6); // source HA
return arphdr;
}
And I create my ARP request
int sockfd = -1;
if((sockfd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL))) < 0){
perror("socket() failed!");
}
// connect to an internet frame
struct ifreq if_hwaddr;
memset(&if_hwaddr, 0, sizeof(struct ifreq));
strncpy(if_hwaddr.ifr_name, interfaceName, IFNAMSIZ-1);
if(ioctl(sockfd, SIOCGIFHWADDR, &if_hwaddr) < 0){
perror("SIOCGIFHWADDR");
}
struct arp_hdr arpRequest;
arpRequest = construstArpRequest(interfaceName, sockfd, router_ip, if_hwaddr);
If I need to include code showing how I actually send the request, I can, but I'm not sure it is necessary. Throughout my research I have come across answers saying that you will not get a response to the broadcast because you are sending it across networks; if that's the case, how do you get the target MAC address?
ARP requests are for IPv4 only, and use broadcast (IPv6 does not have broadcast, and it uses NDP, not ARP), but routers do not forward broadcasts to a different network.
A source host will mask the destination address with its configured mask to determine if the destination address is on the same network. If the destination is on the same network, it will use ARP (either in the ARP table, or send a new ARP request) to determine the destination host data-link address and use that to build the data-link frame. If the destination is on a different network, the source host will use ARP (either in the ARP table, or send a new ARP request) to determine the data-link address of its configured gateway, and it will use the gateway data-link address to build the data-link frame.
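To make that concrete, a sketch of the decision the source host makes (the source address and gateway below are placeholders consistent with the question's 192.168.1.0/24 segment and the router address from the question):
#include <arpa/inet.h>     // inet_addr, in_addr_t

in_addr_t src  = inet_addr("192.168.1.10");    // hypothetical h1x1 address
in_addr_t mask = inet_addr("255.255.255.0");   // configured subnet mask
in_addr_t dst  = inet_addr("10.0.0.1");        // destination from the question

in_addr_t arp_target;
if ((dst & mask) == (src & mask))
    arp_target = dst;                          // same network: ARP for the destination itself
else
    arp_target = inet_addr("192.168.1.100");   // different network: ARP for the configured gateway

// the ARP request then asks for arp_target's MAC, and that MAC goes into the frame header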
You are trying to use an ARP request for a host on a different network, and that will not work. Trying to send an ARP request for a destination on a different network will get no response, and you are seeing that (you need to implement a timeout for your ARP requests, and send an error message up the network stack to the requesting process when it times out).
The IPv6 traffic you see is normal IPv6 maintenance traffic that periodically happens on a LAN where IPv6 is configured.
I'm using unicast (one-to-one) UDP communication, and the set of packets received is not the same each time; if I send 1000 packets at an interval of 500 ms, about 9 packets are missed. I am working on the Windows platform (VCC 6.0), using the sendto system call to send the Ethernet packets. On the host side, the packets are missed because of checksum or header errors.
Please let me know if you need more details. My goal is that no packets should be missed on the target side.
Any help regarding this issue will be highly appreciated.
{
//Initialize local variables
MAINAPP(pAppPtr);
int iResult = 0;
int sRetVal = 0;
static char cTransmitBuffer[1024];
unsigned long ulTxPacketLength =0;
int in_usTimeOut = 0;
unsigned short usTimeout = 0;
S_QJB_POWER_CNTRL S_Out_QJB_Power_Cntrl = {0};
pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.m_ucHeader[0] = QJB_TCP_HEADER_BYTE1;
pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.m_ucHeader[1] = QJB_TCP_HEADER_BYTE2;
pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.m_usCmdID = QJB_ETH_POWER_ON;
pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.m_usCmdResults = 0;
pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.m_usDataSize = sizeof(S_QJB_POWER_CNTRL);
//Fill the controls & delay
sRetVal = PowerCntrlStructFill(&pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.U_Tcp_Msg.S_QJB_PowerCntrl,&usTimeout);
if(sRetVal)
{
return sRetVal;
}
pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.m_usReserved = 0;
pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.m_usChecksum = 0;
//Perform Endian Swap
pAppPtr->objEndianConv.EndianSwap(&pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.U_Tcp_Msg.S_QJB_PowerCntrl, &S_Out_QJB_Power_Cntrl);
//Frame the transmission packet
QJB_Frame_TXBuffer(cTransmitBuffer, &(pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg), &ulTxPacketLength,(void *)&S_Out_QJB_Power_Cntrl);
//Send the data to the target
iResult = sendto(pAppPtr->sktConnectSocket,cTransmitBuffer,ulTxPacketLength,0,(struct sockaddr *)&pAppPtr->g_dest_sin, sizeof(pAppPtr->g_dest_sin));
if(iResult == SOCKET_ERROR)
{
return QJB_TARGET_DISCONNECTED;
}
memset(&pAppPtr->S_Tcp_Handle.Tcp_Rx_Msg,0,sizeof(S_QJB_ETHERNET_PKT));// 1336
//Send the Command and obtain the response
sRetVal = QJB_ETHResRev(pAppPtr->sktConnectSocket,&pAppPtr->S_Tcp_Handle.Tcp_Rx_Msg,3);
return sRetVal;
}
Sathishkumar.
Unfortunately, since UDP makes no guarantees about delivery, the network stack can drop your sent packets at any time, for any reason. It is also worth noting that there is no guarantee about the order in which the packets will arrive.
If ordering and delivery are critical to your application, which I think they are, consider switching to TCP; a rough sketch is below.
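As a very rough sketch (placeholder address and port, minimal error handling), the switch mostly means creating a stream socket, connecting it, and replacing sendto() with send(); your existing framing code (cTransmitBuffer / ulTxPacketLength) can stay as it is:
#include <winsock2.h>      // link with ws2_32.lib
#include <string.h>

WSADATA wsaData;
WSAStartup(MAKEWORD(2, 2), &wsaData);

SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);   // stream socket instead of datagram

struct sockaddr_in dest;
memset(&dest, 0, sizeof(dest));
dest.sin_family = AF_INET;
dest.sin_addr.s_addr = inet_addr("192.168.0.10");       // placeholder target address
dest.sin_port = htons(5000);                            // placeholder target port

if (connect(s, (struct sockaddr *)&dest, sizeof(dest)) == SOCKET_ERROR) {
    // handle connection failure
}

// TCP retransmits lost segments and preserves ordering, so each framed
// message is handed to send() instead of sendto()
int sent = send(s, cTransmitBuffer, (int)ulTxPacketLength, 0);

closesocket(s);
WSACleanup();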
I am trying to view HTTP traffic going to and from my loopback network adapter using libpcap. I'm just beginning with network programming and completely new to this library. Thanks to an answer I received previously, I have been successful at detecting the link-layer type of my machine's "lo0" adapter (Mac OS X).
//lookup link-layer header type
link_layer_type = pcap_datalink(handle);
if(link_layer_type == DLT_NULL){
printf("DLT_NULL"); // this true in the case of "lo0"
}
The Programming with Pcap guide makes the assumption that each packet will contain an ethernet header. So the logic used to find a packet's payload is as follows:
ethernet = (struct sniff_ethernet*)(packet);
ip = (struct sniff_ip*)(packet + SIZE_ETHERNET);
size_ip = IP_HL(ip)*4;
if (size_ip < 20) {
printf(" * Invalid IP header length: %u bytes\n", size_ip);
return;
}
tcp = (struct sniff_tcp*)(packet + SIZE_ETHERNET + size_ip);
size_tcp = TH_OFF(tcp)*4;
if (size_tcp < 20) {
printf(" * Invalid TCP header length: %u bytes\n", size_tcp);
return;
}
payload = (u_char *)(packet + SIZE_ETHERNET + size_ip + size_tcp);
This logic is clearly not going to work when inspecting the contents of packets originating from the loopback interface, where an Ethernet header does not exist. The Link-Layer Header Types documentation states that the link-layer type DLT_NULL has a 4-byte header consisting of a PF_ value that identifies the network-layer protocol (I'm guessing IPv4 in my case).
Given the above information.. how can I properly locate the packet's payload location?
Any guidance or information would be very appreciated. Thanks!
Given the above information.. how can I properly locate the packet's payload location?
For DLT_NULL, your program should extract the first 4 bytes of the packet data as a 32-bit number. If you're doing a live capture, you can extract it in the host's byte order and compare it against your OS's values of AF_INET and AF_INET6 (if it has an AF_INET6 definition; these days, most current OS versions should, as they should support IPv6). If you're reading a capture file, you'd need to byte-swap the value if pcap_is_swapped() returns a non-zero value (you can also call it for live captures; it always returns zero there), and you'll need to compare against several different "IPv6" values (24, 28, and 30), each of which means "IPv6" on some particular OS. Fortunately, AF_INET is 2 on all OSes that support DLT_NULL, as they all took that value from 4.2BSD.
If the value is the IPv4 value (2, as per the above), then after those 4 bytes you have the IPv4 header for the packet. If it's one of the IPv6 values, then after those 4 bytes you have the IPv6 header for the packet. If it's not any of those values, it's some other protocol.
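As a sketch of that logic inside a pcap callback (assuming a live capture on lo0, so no byte-swapping is needed; the sniff_ip/sniff_tcp structs from the tutorial could equally be used past the 4-byte header):
#include <pcap.h>
#include <string.h>
#include <stdint.h>
#include <sys/socket.h>    // AF_INET, AF_INET6
#include <netinet/in.h>    // IPPROTO_TCP

void got_packet(u_char *args, const struct pcap_pkthdr *header, const u_char *packet)
{
    (void)args; (void)header;

    // DLT_NULL: the first 4 bytes are the address family, in host byte order for live captures
    uint32_t family;
    memcpy(&family, packet, sizeof(family));

    if (family == AF_INET) {
        const u_char *ip = packet + 4;                     // IPv4 header follows the 4-byte header
        int size_ip      = (ip[0] & 0x0f) * 4;             // IHL field
        if (ip[9] == IPPROTO_TCP) {                        // protocol field
            const u_char *tcp = ip + size_ip;
            int size_tcp = ((tcp[12] & 0xf0) >> 4) * 4;    // TCP data-offset field
            const u_char *payload = tcp + size_tcp;
            (void)payload;                                 // TCP payload starts here
        }
    } else if (family == AF_INET6) {
        // IPv6 header follows; when reading a capture file, also compare against 24, 28 and 30
    }
}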