I'm writing a packet sniffer program in c. Now it can only find HTTP packets but I want to make it in a way to get also DNS packets. I know DNS packets are UDP but I don't know how to identify DNS ones. Is there a specific thing in DNS packets to check to find them? I know port 53 is default port for DNS requests, but is it a reliable way to find them?
There is no foolproof way to tell whether a UDP packet contains DNS data: nothing in the UDP header or IP header directly tells you the payload is DNS. However, what you can do is first check whether the source or destination port in the UDP header is 53 (DNS's standard UDP port), and second check whether the data fits the structure you're using to decode the header (most likely a struct).
To fit the packet to a struct you can use the following code. This is the actual structure of a DNS header laid out in a struct in C:
#pragma pack(push, 1)
typedef struct
{
uint16_t id; // identification number
uint8_t rd : 1; // recursion desired
uint8_t tc : 1; // truncated message
uint8_t aa : 1; // authoritative answer
uint8_t opcode : 4; // purpose of message
uint8_t qr : 1; // query/response flag
uint8_t rcode : 4; // response code
uint8_t cd : 1; // checking disabled
uint8_t ad : 1; // authenticated data
uint8_t z : 1; // reserved
uint8_t ra : 1; // recursion available
uint16_t q_count; // number of question entries
uint16_t ans_count; // number of answer entries
uint16_t auth_count; // number of authority entries
uint16_t add_count; // number of additional resource entries
}Dns_Header, *Dns_Header_P;
#pragma pack(pop)
To test this you can do this:
Dns_Header_P header = (Dns_Header_P)capture;
capture being a byte array holding your DNS header.
Depending on how you're capturing the packets and how you're storing them, you might need to convert multi-byte fields from network to host byte order (e.g. with ntohs()). If you test this with your program and the data looks wrong or byte-swapped, let me know.
I am using DPDK 19.11.10 on CentOS.
The application works fine with HW offloading if I send only the IPv4 packet without the VLAN header.
If I add the VLAN header to the IPv4 packet, HW offloading is not working.
If I capture a pcap on the Ubuntu gateway, the IP header is corrupted and shows up as a fragmented IP packet, even though we are not fragmenting the IP packet.
We verified the capabilities like this:
if (!(dev->tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT)) {
rte_panic(" VLAN offload not supported");
}
Below is my code:
.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
DEV_TX_OFFLOAD_UDP_CKSUM | DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_VLAN_INSERT),
m->l2_len = L2_HDR_SIZE;
m->l3_len = L3_IPV4_HDR_SIZE;
ip_hdr->check = 0;
m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM;
ip_hdr = rte_pktmbuf_mtod(m, struct iphdr *);
vlan1_hdr = (struct vlan1_hdr *) rte_pktmbuf_prepend(m, sizeof(struct vlan1_hdr));
eth_hdr = (struct ethernet_hdr *) rte_pktmbuf_prepend(m, (uint16_t)sizeof(struct ethernet_hdr));
Once I receive the packet on the Ubuntu gateway, the IP packet is corrupted and shows up as a fragmented IP packet.
The same code works fine if I remove the VLAN header.
Does anything else need to be added here?
By the sound of it,
You might misunderstand how HW Tx VLAN offload is supposed to work;
Your code does not update m->l2_len when it inserts a VLAN header.
First of all, your code enables support for HW Tx VLAN offload but, oddly enough, does not actually attempt to use it. If one wants to use hardware Tx VLAN offload, they should set PKT_TX_VLAN in m->ol_flags and fill out m->vlan_tci; the VLAN header will then be added by the hardware.
However, your code prepends the header itself, as if there were no intent to use a hardware offload in the first place. Your code does m->l2_len = L2_HDR_SIZE;, which, I presume, accounts only for the Ethernet header. When your code prepends a VLAN header, this variable has to be updated accordingly:
m->l2_len += sizeof(struct rte_vlan_hdr);
Most DPDK NIC PMDs support HW VLAN offload in the RX direction, but only a limited number of PMDs support the DEV_TX_OFFLOAD_VLAN_INSERT feature, namely:
Aquantia Atlantic
Marvell OCTEON CN9K/CN10K SoC
Cisco VIC adapter
Pensando NIC
OCTEON TX2
Wangxun 10 Gigabit Ethernet NIC and
Intel NIC - i40e, ice, iavf, ixgbe, igb
To enable HW VLAN INSERT one needs to:
check whether DEV_TX_OFFLOAD_VLAN_INSERT is reported by get_dev_info;
configure the TX offloads for the port with DEV_TX_OFFLOAD_VLAN_INSERT;
set the MBUF descriptor with ol_flags = PKT_TX_VLAN and vlan_tci = [desired TCI in big-endian format].
This allows the driver code in the xmit function to check the mbuf descriptor's ol_flags for PKT_TX_VLAN and enable the VLAN insert offload in hardware by registering the appropriate command in the packet descriptor before DMA.
From the DPDK side, the following conditions must be satisfied:
at any given instant, only one thread may access and update the mbuf;
no modification is to be done to the original mbuf (with payload).
If the intention is to perform VLAN insert in SW (especially if the HW or virtual NIC PMD does not support it), in DPDK one has to do the following:
ensure refcnt is 1, to prevent multiple threads accessing and modifying the intended buffer;
ensure there is enough headroom to shift the packet 4 bytes to accommodate the EtherType and VLAN values;
ensure pkt_len and data_len are in bounds (greater than 60 bytes and at least 4 bytes below the MTU);
do not enable the PKT_TX_VLAN offload flag in the MBUF descriptor;
update data_len of the modified MBUF by 4 bytes;
update the total pkt_len by 4;
(optional, for performance) prefetch the 4 bytes prior to the mtod memory address of the mbuf.
Note: all of the above is easily achieved by using the DPDK function rte_vlan_insert. To use it, follow these steps:
Do not configure the port with DEV_TX_OFFLOAD_VLAN_INSERT.
Update ol_flags with PKT_TX_VLAN and set vlan_tci to the desired value.
Invoke rte_vlan_insert with the mbuf.
Sample code:
/* Get burst of RX packets, from first port of pair. */
struct rte_mbuf *bufs[BURST_SIZE];
const uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
if (unlikely(nb_rx == 0))
continue;
for (int i = 0; i < nb_rx; i++) {
bufs[i]->ol_flags = PKT_TX_VLAN;
bufs[i]->vlan_tci = 0x10;
rte_vlan_insert(&bufs[i]);
}
/* Send burst of TX packets, to second port of pair. */
const uint16_t nb_tx = rte_eth_tx_burst(port, 0,
bufs, nb_rx);
/* Free any unsent packets. */
if (unlikely(nb_tx < nb_rx)) {
uint16_t buf;
for (buf = nb_tx; buf < nb_rx; buf++)
rte_pktmbuf_free(bufs[buf]);
}
I’m writing a C program which builds an Ethernet/IPv4/TCP network packet, then writes the packet into a PCAP file for inspection. I build my code off the SO post here. The first version of my code worked perfectly, but it was one big main() function, and that is not portable into larger programs.
So I reorganized the code so I could port it into another program. I don't want to get into the differences between Version 1 and Version 2 in this post; suffice it to say Version 2 works great, except for one annoying quirk. When Wireshark opened a Version 1 PCAP file, it saw that my Layer 2 was Ethernet II:
Frame 1: 154 bytes on wire (1232 bits), 154 bytes captured (1232 bits)
Ethernet II, Src: 64:96:c8:fa:fc:ff (64:96:c8:fa:fc:ff), Dst: Woonsang_04:05:06 (01:02:03:04:05:06)
Destination: Woonsang_04:05:06 (01:02:03:04:05:06)
Source: 64:96:c8:fa:fc:ff (64:96:c8:fa:fc:ff)
Type: IPv4 (0x0800)
Internet Protocol Version 4, Src: 10.10.10.10, Dst: 20.20.20.20
Transmission Control Protocol, Src Port: 22, Dst Port: 55206, Seq: 1, Ack: 1, Len: 100
SSH Protocol
But in Version 2, the Layer 2 header became 802.3 Ethernet:
Frame 1: 154 bytes on wire (1232 bits), 134 bytes captured (1072 bits)
IEEE 802.3 Ethernet
Destination: Vibratio_1c:08:00 (00:09:70:1c:08:00)
Source: 45:00:23:28:06:cf (45:00:23:28:06:cf)
Length: 64
Trailer: 050401040204000001020506040400070602040704060202…
Logical-Link Control
Data (61 bytes)
[Packet size limited during capture: Ethernet truncated]
I'm no expert in networking, but I'm guessing my Version 2 PCAP file is malformed somewhere. I should not have a Logical-Link Control header in there; my code thinks it is writing Ethernet II / IPv4 / TCP headers. At this point, my instinct is that either the PCAP packet header (which must precede every packet in a PCAP file) or my Ethernet header is incorrect, somehow. Which of them would tell Wireshark "the next X bytes are an Ethernet II header"?
Here’s my code, in excerpts:
The structs for the PCAP header and Ethernet frames were cribbed directly from the aforementioned SO post. The solution in that post was to use the pcap_sf_pkthdr struct for the PCAP packet header:
// struct for PCAP Packet Header - Timestamp
struct pcap_timeval {
bpf_int32 tv_sec; // seconds
bpf_int32 tv_usec; // microseconds
};
// struct for PCAP Packet Header
struct pcap_sf_pkthdr {
struct pcap_timeval ts; // time stamp
bpf_u_int32 caplen; // length of portion present
bpf_u_int32 len; // length this packet (off wire)
};
And the Ethernet header is from the original post:
// struct for the Ethernet header
struct ethernet {
u_char mac1[6];
u_char mac2[6];
u_short protocol; // will be ETHERTYPE_IP, for IPv4
};
There's not much to either struct, right? I don't really understand how Wireshark looks at this and knows the first 14 bytes of the packet are an Ethernet header.
Here’s the actual code, slightly abridged:
#include <netinet/in.h> // for ETHERTYPE_IP
struct pcap_sf_pkthdr* allocatePCAPPacketHdr(struct pcap_sf_pkthdr* pcapPacketHdr ){
pcapPacketHdr = malloc( sizeof(struct pcap_sf_pkthdr) );
if( pcapPacketHdr == NULL ){
return NULL;
}
uint32_t frameSize = sizeof( struct ethernet) + …correctly computed here
bzero( pcapPacketHdr, sizeof( struct pcap_sf_pkthdr ) );
pcapPacketHdr->ts.tv_sec = 0; // for now
pcapPacketHdr->ts.tv_usec = 0; // for now
pcapPacketHdr->caplen = frameSize;
pcapPacketHdr->len = frameSize;
return pcapPacketHdr;
}
void* allocateL2Hdr( packetChecklist* pc, void* l2header ){
l2header = malloc( sizeof( struct ethernet ) );
if( l2header == NULL ){
return NULL;
}
bzero( ((struct ethernet*)l2header)->mac1, 6 );
bzero( ((struct ethernet*)l2header)->mac2, 6 );
// …MAC addresses filled in later…
((struct ethernet*)l2header)->protocol = ETHERTYPE_IP; // This is correctly set
return l2header;
}
...and the code which uses the above functions...
struct pcap_sf_pkthdr* pcapPacketHdr;
pcapPacketHdr = allocatePCAPPacketHdr( pcapPacketHdr );
struct ethernet* l2header;
l2header = allocateL2Hdr( l2header );
Later, the code populates these structs and writes them into a file, along with an IPv4 header, a TCP header, and so on.
But I think my problem is that I don't really understand how Wireshark is supposed to know that my Ethernet header is Ethernet II and not 802.3 Ethernet with a Logical-Link Control header. Is that communicated in the PCAP packet header? Or somewhere in the Ethernet frame? I'm hoping for advice. Thank you.
Wireshark is supposed to know that my Ethernet header is Ethernet II and not 802.3 Ethernet with a Logical-Link Control header. Is that communicated in the PCAP packet header?
No.
Or somewhere in the Ethernet frame?
Yes.
If you want the details, see, for example, the "Types" section of the Wikipedia "Ethernet frame" page.
However, the problem appears to be that the packet you're writing to the file doesn't have the full 6-byte destination and source addresses in it - the last two bytes of the destination address are 0x08 0x00, which are the first two bytes of a big-endian value of ETHERTYPE_IP (0x0800), and the first byte of the source address is 0x45, which is the first byte of an IPv4 header for an IPv4 packet with no IP options.
Somehow, Version 1 of your program put the destination and source addresses into the data part of the pcap record, but Version 2 didn't.
Hi there, for fun I'm developing a tiny DNS client on a Unix system.
I've read the documentation about the DNS protocol and wrote a tiny function:
int makeQuestion(char* dns_addr,char *name){
int s = socket(AF_INET,SOCK_DGRAM,IPPROTO_UDP);
register int len_name = strlen(name);
if(s<0)
return errno;
struct sockaddr_in address;
bzero(&address,sizeof(address));
address.sin_family = AF_INET;
address.sin_port = htons(53);
address.sin_addr.s_addr = inet_addr(dns_addr);
dns_header header;
memset(&header,0,sizeof(dns_header));
header.id = htons(getpid());
header.q_count = htons(1);
dns_question quest = {
.qclass = htons(IN),
.qtype = htons(A)
};
register int pack_size = sizeof(dns_header)+len_name+2+sizeof(dns_question);
char *packet = malloc(pack_size);
memcpy(packet,&header,sizeof(dns_header));
for(int i = 0;i<len_name;i++)
*(packet +i +sizeof(dns_header)) = name[i];
packet[len_name+sizeof(dns_header)] = '.';
packet[len_name+sizeof(dns_header)+1] = '\0';
memcpy(packet+sizeof(dns_header)+len_name+2,&quest,sizeof(dns_question));
sendto(s,packet,pack_size,0,(struct sockaddr *)&address,sizeof(address));
return OK;
}
The structure for the dns header and dns query are declared like:
//DNS header structures
typedef struct dns_header
{
uint16_t id; // identification number
uint8_t rd :1; // recursion desired
uint8_t tc :1; // truncated message
uint8_t aa :1; // authoritive answer
uint8_t opcode :4; // purpose of message
uint8_t qr :1; // query/response flag
uint8_t rcode :4; // response code
uint8_t cd :1; // checking disabled
uint8_t ad :1; // authenticated data
uint8_t z :1; // its z! reserved
uint8_t ra :1; // recursion available
uint16_t q_count; // number of question entries
uint16_t ans_count; // number of answer entries
uint16_t auth_count; // number of authority entries
uint16_t add_count; // number of resource entries
}dns_header;
typedef struct dns_question
{
uint16_t qtype;
uint16_t qclass;
}dns_question;
Now I executed the code while Wireshark was running and I saw the packet, which seemed to be correct, but in the query section Wireshark said:
Name: <Unknown extended label>
So the question is: is there a particular way I have to store the DNS name of the queried host in the packet, or is there something wrong in the implementation? Sorry for taking your time and sorry for my English. Thanks indeed.
I finally solved it. Studying the protocol (the Domain Name System) better, where the reference is at this link, the wrong part was the section called QNAME (the name of the host, whose size the protocol wasn't able to determine in my case).
As the document says, QNAME is:
a domain name represented as a sequence of labels, where
each label consists of a length octet followed by that
number of octets. The domain name terminates with the
zero length octet for the null label of the root. Note
that this field may be an odd number of octets; no
padding is used.
So I changed my code to transform www.example.com into 3www7example3com followed by a terminating zero octet,
and everything works.
Today I was investing a little more time to learn about ARP packets. To understand their structure I tried to build one myself in C using libpcap. I structured a simple ARP request packet and used the pcap_inject function to send the packet. This function returns the number of bytes sent.
When I debugged my code I saw that my packet was 42 bytes long. I searched the Internet a bit and couldn't find an answer that tells me whether this is the appropriate size for an ARP request or not. Even the Wikipedia entry confused me a little. And then I discovered this post. From the answer provided by the user:
If the ARP message is to be sent in an untagged frame then the frame overhead itself is 18 bytes. That would result in a frame of
28+18=46 bytes without padding. Additional 18 bytes of padding are
necessary in this case to bloat the frame to the 64 byte length.
If the ARP message is to be sent in an 802.1Q-tagged frame then the frame overhead is 22 bytes, resulting in the total frame size of
28+22=50 bytes. In this case, the padding needs to be 14 bytes long.
If the ARP message is to be sent in a double-tagged frame then the frame overhead is 26 bytes, resulting in the total frame size of 54
bytes. In this case, the padding needs to be 10 bytes long.
My question is what I have to do in this situation. Do I have to use padding or not?
Below I post the structure of my packet.
#define ETH_P_ARP 0x0806 /* Address Resolution packet */
#define ARP_HTYPE_ETHER 1 /* Ethernet ARP type */
#define ARP_PTYPE_IPv4 0x0800 /* Internet Protocol packet */
/* Ethernet frame header */
typedef struct {
uint8_t dest_addr[ETH_ALEN]; /* Destination hardware address */
uint8_t src_addr[ETH_ALEN]; /* Source hardware address */
uint16_t frame_type; /* Ethernet frame type */
} ether_hdr;
/* Ethernet ARP packet from RFC 826 */
typedef struct {
uint16_t htype; /* Format of hardware address */
uint16_t ptype; /* Format of protocol address */
uint8_t hlen; /* Length of hardware address */
uint8_t plen; /* Length of protocol address */
uint16_t op; /* ARP opcode (command) */
uint8_t sha[ETH_ALEN]; /* Sender hardware address */
uint32_t spa; /* Sender IP address */
uint8_t tha[ETH_ALEN]; /* Target hardware address */
uint32_t tpa; /* Target IP address */
} arp_ether_ipv4;
In the end I just copy each structure member in the bellow order and send the packet:
void packageARP(unsigned char *buffer, ether_hdr *frameHeader, arp_ether_ipv4 *arp_packet, size_t *bufferSize) {
unsigned char *cp;
size_t packet_size;
cp = buffer;
packet_size = sizeof(frameHeader->dest_addr)
+ sizeof(frameHeader->src_addr)
+ sizeof(frameHeader->frame_type)
+ sizeof(arp_packet->htype)
+ sizeof(arp_packet->ptype)
+ sizeof(arp_packet->hlen)
+ sizeof(arp_packet->plen)
+ sizeof(arp_packet->op)
+ sizeof(arp_packet->sha)
+ sizeof(arp_packet->spa)
+ sizeof(arp_packet->tha)
+ sizeof(arp_packet->tpa);
/*
* Copy the Ethernet frame header to the buffer.
*/
memcpy(cp, &(frameHeader->dest_addr), sizeof(frameHeader->dest_addr));
cp += sizeof(frameHeader->dest_addr);
memcpy(cp, &(frameHeader->src_addr), sizeof(frameHeader->src_addr));
cp += sizeof(frameHeader->src_addr);
/* Normal Ethernet-II framing */
memcpy(cp, &(frameHeader->frame_type), sizeof(frameHeader->frame_type));
cp += sizeof(frameHeader->frame_type);
/*
* Add the ARP data.
*/
memcpy(cp, &(arp_packet->htype), sizeof(arp_packet->htype));
cp += sizeof(arp_packet->htype);
memcpy(cp, &(arp_packet->ptype), sizeof(arp_packet->ptype));
cp += sizeof(arp_packet->ptype);
memcpy(cp, &(arp_packet->hlen), sizeof(arp_packet->hlen));
cp += sizeof(arp_packet->hlen);
memcpy(cp, &(arp_packet->plen), sizeof(arp_packet->plen));
cp += sizeof(arp_packet->plen);
memcpy(cp, &(arp_packet->op), sizeof(arp_packet->op));
cp += sizeof(arp_packet->op);
memcpy(cp, &(arp_packet->sha), sizeof(arp_packet->sha));
cp += sizeof(arp_packet->sha);
memcpy(cp, &(arp_packet->spa), sizeof(arp_packet->spa));
cp += sizeof(arp_packet->spa);
memcpy(cp, &(arp_packet->tha), sizeof(arp_packet->tha));
cp += sizeof(arp_packet->tha);
memcpy(cp, &(arp_packet->tpa), sizeof(arp_packet->tpa));
cp += sizeof(arp_packet->tpa);
*bufferSize = packet_size;
}
Is this the correct way of structuring an ARP request packet?
That's the correct structure -- except that the C compiler is free to insert padding in order to ensure structure members are placed at the most efficient boundaries. In particular, spa and tpa are not at natural 32-bit boundaries (due to the preceding 6-byte MAC address fields) and so the compiler might want to insert two bytes of padding before each.
If you are using gcc, you can ensure that doesn't happen with __attribute__((packed)):
struct {
[fields]
} __attribute__((packed)) arp_ether_ipv4;
Other compilers might have a different but equivalent mechanism (a #pragma directive for example).
The ARP payload should be 28 bytes. Adding the 14-byte ethernet header, that gives 42 total bytes. As your cite said, an 802.1Q (VLAN) header inserts an additional 4 bytes and a "double-tagged" frame (not common outside of Internet service providers) will add 2 X 4 = 8 bytes. If you're on an ordinary endpoint machine, you wouldn't typically add these headers anyway. The IT department will have configured your switches to automatically insert/remove these headers as needed.
The 42 bytes will get padded automatically to 64 bytes (Ethernet minimum packet size) by your network driver. 64 is actually 60 + the 4-byte Ethernet FCS [frame checksum]. (The post you cited is apparently including the 4-byte FCS in their calculations, which is why their numbers seem whack.)
Also, don't forget to use network byte order for all uint16_t and uint32_t fields: (ntohs and ntohl)
I'm writing a custom protocol in the linux kernel. I'm using the following structures
struct syn {
__be32 id;
__be64 cookie;
};
struct ack {
__be32 id; // Right now, setting it to 14 (Just a random choice)
__be32 sequence;
};
struct hdr {
............
__be32 type; //last element
};
When I send and receive packets, I map the structures syn and ack (for different packets) to the address of hdr->type.
This should ideally mean that the id (in syn and ack structures) should be mapped to the hdr->type and whatever follows the struct hdr should be mapped to either syn->cookie or ack->sequence, depending on which struct I'm mapping on to the hdr->type.
But on printing out the memory addresses for these variables, I get the following
//For struct syn
hdr->type at ffff880059f55444
syn->id at ffff880059f55444
syn->cookie at ffff880059f5544c //see the last hex digits
//For struct ack_frame
hdr->type at ffff880059f55044
ack->id at ffff880059f55044
ack->sequence at ffff880059f55048 //see the last hex digits
So why do syn->cookie and ack->sequence start at different offsets relative to hdr->type when ack->id and syn->id have the same size?
EDIT 1: I map these structures using
char *ptr = (char *)&hdr->type;
//For syn
struct syn *syn1 = (struct syn *)ptr
//For ack
struct ack *ack1 = (struct ack *)ptr
Since you're working on a 64-bit system, the compiler lays out the struct as follows:
struct syn {
uint32_t id
uint32_t hole -- the compiler must add a hole here because the 64-bit member must be aligned
uint64_t cookie
}
I guess the on-wire data doesn't have holes, so to fix it you will need to make cookie a uint32_t (or a pair of them) and cast/reassemble it later.
https://gcc.gnu.org/onlinedocs/gcc/Common-Type-Attributes.html#Common-Type-Attributes
Look at packed. For whatever reason, GCC doesn't let me link directly to that section.
You should define your structure as follows. The packed attribute means alignment will be 1 byte.
The following structure will be 12 bytes long. If you don't use the attribute, your structure will be 16 bytes long.
struct yours{
__be32 id;
__be64 cookie;
}__attribute__((__packed__));