IP checksum calculation in C

I'm trying to calculate the checksum of an IP header (without options), following this algorithm: divide the header into 16-bit words, sum all the words, then apply the NOT operator to the result to obtain the checksum. But I still get wrong results; sniffing the packets with Wireshark I can see that they are wrong. For example, this is my method:
void compute_ip_checksum(struct ip_hdr* ip){
    unsigned short* begin = (unsigned short*)ip;
    unsigned short* end = begin + (IP_NOPT_HEADER_LENGTH / 2);
    unsigned short checksum = 0;
    ip->checksum = 0;
    for (; begin != end; begin++){
        checksum += *begin;
    }
    ip->checksum = htons(~checksum);
}
The IP header I build is:
ip.version_and_length = (IPV4 << 4) | (IP_NOPT_HEADER_LENGTH/4);
ip.type_of_service = 0;
ip.total_length = htons(IP_NOPT_HEADER_LENGTH + TCP_NOPT_HEADER_LENGTH);
ip.frag_id = 0;
ip.flags_and_frag_offset = htons(DONT_FRAGMENT << 13);
ip.time_to_live = 128;
ip.protocol = TCP_PAYLOAD;
ip.src_ip = inet_addr("1.1.1.1");
ip.dst_ip = inet_addr("1.1.1.2");
Since I'm converting all the values to network byte order, I'm not doing any conversion during the checksum sum, only after the NOT operation, because I'm almost sure my Windows machine is little-endian, and if that's the case the result will be stored in that byte order. The result of my function is 0x7a17 and the Wireshark result is 0x7917 for this header. Can someone explain what is wrong here? My references are RFC 791 and How to Calculate IpHeader Checksum.

After reading the Wikipedia article, I could see that the checksum is a little trickier than expected: the carries out of the 16-bit sum must be folded back in before the NOT. This is the code that now works for me:
void compute_ip_checksum(struct ip_hdr* ip, struct ip_options* opt){
    unsigned short* begin = (unsigned short*)ip;
    unsigned short* end = begin + IP_NOPT_HEADER_LENGTH / 2;
    unsigned int checksum = 0, first_half, second_half;
    ip->checksum = 0;
    for (; begin != end; begin++){
        checksum += *begin;
    }
    first_half = (unsigned short)(checksum >> 16);
    while (first_half){
        second_half = (unsigned short)((checksum << 16) >> 16);
        checksum = first_half + second_half;
        first_half = (unsigned short)(checksum >> 16);
    }
    ip->checksum = ~checksum;
}
As you can see, there is no need for a conversion after the NOT operation. I've put the carry calculation in a loop because I don't know in advance how many times this step has to be repeated; in my case I think it never exceeds one iteration.

Related

What if my headers, such as TCP and ICMP headers, are not a multiple of 2 bytes (16 bits) in size? If I have one byte left over at the end, how do I include that single byte in the checksum?

My ICMP header is 9 bytes long including the ping data. I used an algorithm like the following to calculate the checksum, but it's not working:
int calculate_checksum(void *vdata, size_t size)
{
    uint16_t *ptr = vdata;
    int i = 0;
    uint16_t sum = -0xffff;
    printf("size = %zu\n", size);
    while (i < (size/2))
    {
        //printf("%d = %x \n", i, *(ptr + i));
        sum += ntohs(*(ptr+i));
        printf("%x %x\n", ntohs(*(ptr+i)), sum);
        if (sum > 0xffff)
        {
            sum -= 0xffff;
        }
        i++;
    }
    printf("checksum = %x\n", ~sum & 0x0000FFFF);
    sum = ~sum;
    return htons(sum);
}
The problem seems to be that my 9th byte is not being included. It's just one byte with a hex value of 0x63, so how do I turn it into a 16-bit number? Is there a solution for this?
As far as I know, the checksum is computed as a sum over each 16 bits of the header.

CRC32C - appending 0s/CRC to message

I am trying to get a better understanding of CRC, however I am stuck a bit.
There are a few sample vectors here which I can calculate correctly; however, I am stuck on verifying that the calculated CRC is correct.
For example, given a message of 32 bytes:
000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f
my understanding is that you first append 32 bits of 0's to get a payload:
000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f00000000
and calculate the CRC of that message to obtain 0x73c2a486.
To verify that the CRC is correct, you should then append the CRC value to the original message, in this case:
000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f73c2a486
And this should return 0; however, I don't get that.
I would greatly appreciate it if anyone could point out where I am going wrong.
Edit:
Sample code that I am using:
static uint32_t crc32c_table_small[256] =
{
0x00000000, 0xF26B8303, 0xE13B70F7, 0x1350F3F4, 0xC79A971F, 0x35F1141C, 0x26A1E7E8, 0xD4CA64EB,
0x8AD958CF, 0x78B2DBCC, 0x6BE22838, 0x9989AB3B, 0x4D43CFD0, 0xBF284CD3, 0xAC78BF27, 0x5E133C24,
0x105EC76F, 0xE235446C, 0xF165B798, 0x030E349B, 0xD7C45070, 0x25AFD373, 0x36FF2087, 0xC494A384,
0x9A879FA0, 0x68EC1CA3, 0x7BBCEF57, 0x89D76C54, 0x5D1D08BF, 0xAF768BBC, 0xBC267848, 0x4E4DFB4B,
0x20BD8EDE, 0xD2D60DDD, 0xC186FE29, 0x33ED7D2A, 0xE72719C1, 0x154C9AC2, 0x061C6936, 0xF477EA35,
0xAA64D611, 0x580F5512, 0x4B5FA6E6, 0xB93425E5, 0x6DFE410E, 0x9F95C20D, 0x8CC531F9, 0x7EAEB2FA,
0x30E349B1, 0xC288CAB2, 0xD1D83946, 0x23B3BA45, 0xF779DEAE, 0x05125DAD, 0x1642AE59, 0xE4292D5A,
0xBA3A117E, 0x4851927D, 0x5B016189, 0xA96AE28A, 0x7DA08661, 0x8FCB0562, 0x9C9BF696, 0x6EF07595,
0x417B1DBC, 0xB3109EBF, 0xA0406D4B, 0x522BEE48, 0x86E18AA3, 0x748A09A0, 0x67DAFA54, 0x95B17957,
0xCBA24573, 0x39C9C670, 0x2A993584, 0xD8F2B687, 0x0C38D26C, 0xFE53516F, 0xED03A29B, 0x1F682198,
0x5125DAD3, 0xA34E59D0, 0xB01EAA24, 0x42752927, 0x96BF4DCC, 0x64D4CECF, 0x77843D3B, 0x85EFBE38,
0xDBFC821C, 0x2997011F, 0x3AC7F2EB, 0xC8AC71E8, 0x1C661503, 0xEE0D9600, 0xFD5D65F4, 0x0F36E6F7,
0x61C69362, 0x93AD1061, 0x80FDE395, 0x72966096, 0xA65C047D, 0x5437877E, 0x4767748A, 0xB50CF789,
0xEB1FCBAD, 0x197448AE, 0x0A24BB5A, 0xF84F3859, 0x2C855CB2, 0xDEEEDFB1, 0xCDBE2C45, 0x3FD5AF46,
0x7198540D, 0x83F3D70E, 0x90A324FA, 0x62C8A7F9, 0xB602C312, 0x44694011, 0x5739B3E5, 0xA55230E6,
0xFB410CC2, 0x092A8FC1, 0x1A7A7C35, 0xE811FF36, 0x3CDB9BDD, 0xCEB018DE, 0xDDE0EB2A, 0x2F8B6829,
0x82F63B78, 0x709DB87B, 0x63CD4B8F, 0x91A6C88C, 0x456CAC67, 0xB7072F64, 0xA457DC90, 0x563C5F93,
0x082F63B7, 0xFA44E0B4, 0xE9141340, 0x1B7F9043, 0xCFB5F4A8, 0x3DDE77AB, 0x2E8E845F, 0xDCE5075C,
0x92A8FC17, 0x60C37F14, 0x73938CE0, 0x81F80FE3, 0x55326B08, 0xA759E80B, 0xB4091BFF, 0x466298FC,
0x1871A4D8, 0xEA1A27DB, 0xF94AD42F, 0x0B21572C, 0xDFEB33C7, 0x2D80B0C4, 0x3ED04330, 0xCCBBC033,
0xA24BB5A6, 0x502036A5, 0x4370C551, 0xB11B4652, 0x65D122B9, 0x97BAA1BA, 0x84EA524E, 0x7681D14D,
0x2892ED69, 0xDAF96E6A, 0xC9A99D9E, 0x3BC21E9D, 0xEF087A76, 0x1D63F975, 0x0E330A81, 0xFC588982,
0xB21572C9, 0x407EF1CA, 0x532E023E, 0xA145813D, 0x758FE5D6, 0x87E466D5, 0x94B49521, 0x66DF1622,
0x38CC2A06, 0xCAA7A905, 0xD9F75AF1, 0x2B9CD9F2, 0xFF56BD19, 0x0D3D3E1A, 0x1E6DCDEE, 0xEC064EED,
0xC38D26C4, 0x31E6A5C7, 0x22B65633, 0xD0DDD530, 0x0417B1DB, 0xF67C32D8, 0xE52CC12C, 0x1747422F,
0x49547E0B, 0xBB3FFD08, 0xA86F0EFC, 0x5A048DFF, 0x8ECEE914, 0x7CA56A17, 0x6FF599E3, 0x9D9E1AE0,
0xD3D3E1AB, 0x21B862A8, 0x32E8915C, 0xC083125F, 0x144976B4, 0xE622F5B7, 0xF5720643, 0x07198540,
0x590AB964, 0xAB613A67, 0xB831C993, 0x4A5A4A90, 0x9E902E7B, 0x6CFBAD78, 0x7FAB5E8C, 0x8DC0DD8F,
0xE330A81A, 0x115B2B19, 0x020BD8ED, 0xF0605BEE, 0x24AA3F05, 0xD6C1BC06, 0xC5914FF2, 0x37FACCF1,
0x69E9F0D5, 0x9B8273D6, 0x88D28022, 0x7AB90321, 0xAE7367CA, 0x5C18E4C9, 0x4F48173D, 0xBD23943E,
0xF36E6F75, 0x0105EC76, 0x12551F82, 0xE03E9C81, 0x34F4F86A, 0xC69F7B69, 0xD5CF889D, 0x27A40B9E,
0x79B737BA, 0x8BDCB4B9, 0x988C474D, 0x6AE7C44E, 0xBE2DA0A5, 0x4C4623A6, 0x5F16D052, 0xAD7D5351
};
static inline uint32_t crc32c_software_simple(uint32_t crc, const uint8_t * data, size_t num_bytes)
{
    while (num_bytes--)
    {
        crc = (crc >> 8) ^ crc32c_table_small[(crc & 0xFF) ^ *data++];
    }
    return crc;
}
uint32_t num_bytes = 32;
uint32_t num_bytes_padded = num_bytes + sizeof(uint32_t);
uint8_t * test_data = (uint8_t*) malloc(num_bytes_padded);
for(uint32_t i = num_bytes; i < num_bytes_padded; i++) test_data[i] = 0;
for(uint32_t i = 0; i < num_bytes; i++)
{
    test_data[i] = i;
}
binary(num_bytes_padded, test_data);
hex(num_bytes_padded, test_data);
uint32_t crc = 0xFFFFFFFF;
crc = ~crc32c_software_simple(crc, test_data, num_bytes_padded);
for(uint32_t i = 0; i < sizeof(uint32_t); i++) test_data[num_bytes + i] = ((uint8_t*)&crc)[i];
crc = 0xFFFFFFFF;
crc = ~crc32c_software_simple(crc, test_data, num_bytes_padded);
What you have there is a complete calculation of a CRC that does not require appending zeros to the end. The way it is used is to simply compute the CRC on the message (with nothing appended), and then append the computed CRC. On the other end, compute the CRC on just the message (not including the CRC) and compare the computed CRC with the one that followed the message in the transmission, as opposed to looking for a zero. Super simple, and the way you would do it for any hash value.
It is true however that if you compute the CRC on the message and the appended CRC, assuming that the CRC is encoded in the proper bit and byte order, then the mathematics assures that the result will be the same constant, the "residual" for that CRC, for all correct message/CRC combinations. The residual in this case is not all zeros, because the CRC is exclusive-ored with a non-zero constant. You could do it by checking for the residual if you like, but it seems like a waste of time to compute the CRC on four more bytes, as well as adding some obscurity to the code, when you could just compare.
The example code does a post complement of the CRC. This will cause the verify CRC to be a constant non-zero value, in this case verify CRC == 0x48674bc7 if there are no errors (regardless of message size). The calling code needed a fix on the first call to crc32c_software_simple, to use num_bytes instead of num_bytes_padded, as noted in the comment below.
If there was no post complement of the CRC, then the verify would produce a zero CRC.
The code also does a pre-complement of the CRC, but this will not affect the verify.
int main()
{
    uint32_t num_bytes = 32;
    uint32_t num_bytes_padded = num_bytes + sizeof(uint32_t);
    uint8_t * test_data = (uint8_t*) malloc(num_bytes_padded);
    for(uint32_t i = num_bytes; i < num_bytes_padded; i++) test_data[i] = 0;
    for(uint32_t i = 0; i < num_bytes; i++)
    {
        test_data[i] = i;
    }
    uint32_t crc = 0xFFFFFFFF;
    crc = ~crc32c_software_simple(crc, test_data, num_bytes); // num_bytes fix
    for(uint32_t i = 0; i < sizeof(uint32_t); i++) test_data[num_bytes + i] = ((uint8_t*)&crc)[i];
    crc = 0xFFFFFFFF;
    crc = ~crc32c_software_simple(crc, test_data, num_bytes_padded);
    // if no errors, crc == 0x48674bc7
    return 0;
}

RFC 1071 - Calculating IP header checksum confusion in C

I'm trying to calculate a proper IP header checksum using the example C code from RFC 1071, but I have a problem that is best described with code:
Setting up the IP Header:
#include <linux/ip.h>

typedef struct iphdr tipheader;

int main(int argc, char **argv)
{
    tipheader * iphead = (tipheader *) malloc(sizeof(tipheader));
    iphead->ihl = 5;
    iphead->version = 4;
    iphead->tos = 0;
    iphead->tot_len = 60;
    ....
    unsigned short checksum = getChecksum((unsigned short *) iphead, 20);
}
Checksum function:
unsigned short getChecksum(unsigned short * iphead, int count)
{
    unsigned long int sum = 0;
    unsigned short checksum = 0;
    printf("\nStarting address: %p\n", iphead);
    while(count > 1) {
        sum += * (unsigned short *) (iphead);
        count -= 2;
        printf("a: %p, content is: %d, new sum: %ld\n", iphead, (unsigned short) *(iphead), sum);
        iphead++;
    }
    if(count > 0) {
        sum += * (unsigned short *) (iphead);
    }
    while(sum >> 16) {
        sum = (sum & 0xffff) + (sum >> 16);
    }
    checksum = ~sum;
    return checksum;
}
Iterating over the memory pointed to by iphead with an unsigned short pointer displays the following output after the first two iterations:
Starting address: 0x603090
a: 0x603090, content is: 69, new sum: 69
a: 0x603092, content is: 60, new sum: 129
So the pointer works 'as expected' and increases by 2 for every iteration.
But why is the content of the first two bytes interpreted as 69 (0x45) when it should be 0x4500?
Thanks for the clarification.
The version and IHL fields are only 4 bits each, so together they occupy the first byte, and the first 16-bit word in memory is 0x4500. However, due to endianness on a little-endian host, that word is read back as 0x0045. When printing, unless forced otherwise, leading zeros are suppressed, so the result shows as 0x45 = 69.

Is there a pre-existing function or code I can use to compute a TCP segment checksum in a POSIX program

I am writing a little POSIX program and I need to compute the checksum of a TCP segment. I would like to use an existing function in order to avoid writing one myself.
Something like (pseudocode) :
char *data = ....
u16_integer = computeChecksum(data);
I searched the web but did not find the right answer; any suggestions?
Here is one, taken more or less directly from the RFC:
uint16_t ip_calc_csum(int len, uint16_t * ptr)
{
    int sum = 0;
    unsigned short answer = 0;
    unsigned short *w = ptr;
    int nleft = len;

    while (nleft > 1) {
        sum += *w++;
        nleft -= 2;
    }
    if (nleft == 1) {   /* mop up an odd trailing byte, as in RFC 1071 */
        *(unsigned char *) &answer = *(unsigned char *) w;
        sum += answer;
    }
    sum = (sum >> 16) + (sum & 0xFFFF);
    sum += (sum >> 16);
    answer = ~sum;
    return (answer);
}

How to encode a numeric value as bytes

I need to be able to send a numeric value to a remote socket server, so I need to encode these numbers as bytes.
The numbers are up to 64 bits, i.e. requiring up to 8 bytes. The very first byte is the type, which is always a number under 255, so it fits in 1 byte.
For example, if the number was 8 and the type was a 32-bit unsigned integer, then the type would be 7, which would be copied to the first (leftmost) byte, and the next 4 bytes would be encoded with the actual number (8 in this case).
So in terms of bytes:
byte1: 7
byte2: 0
byte3: 0
byte4: 0
byte5: 8
I hope this is making sense.
Does this code to perform this encoding look like a reasonable approach?
int type = 7;
uint32_t number = 8;
unsigned char* msg7 = (unsigned char*)malloc(5);
unsigned char* p = msg7;
*p++ = type;
for (int i = sizeof(uint32_t) - 1; i >= 0; --i)
    *p++ = number & 0xFF << (i * 8);
You'll want to explicitly cast type to avoid a warning:
*p++ = (unsigned char) type;
You want to encode the number with most significant byte first, but you're shifting in the wrong direction. The loop should be:
for (int i = sizeof(uint32_t) - 1; i >= 0; --i)
    *p++ = (unsigned char) ((number >> (i * 8)) & 0xFF);
It looks good otherwise.
Your code is reasonable (although I'd use uint8_t, since you are not using the bytes as "characters", and Peter is of course right about the shift typo), and unlike the commonly found alternatives like
uint32_t number = 8;
uint8_t* p = (uint8_t *) &number;
or
union {
    uint32_t number;
    uint8_t bytes[4];
} val;

val.number = 8;
// access val.bytes[0] .. val.bytes[3]
is even guaranteed to work. The first alternative will probably work in a debug build, but more and more compilers might break it when optimizing, while the second one tends to work in practice just about everywhere, but is explicitly marked as a bad thing™ by the language standard.
I would drop the loop and use a "caller allocates" interface, like
int convert_32 (unsigned char *target, size_t size, uint32_t val)
{
    if (size < 5) return -1;
    target[0] = 7;
    target[1] = (val >> 24) & 0xff;
    target[2] = (val >> 16) & 0xff;
    target[3] = (val >>  8) & 0xff;
    target[4] = (val      ) & 0xff;
    return 5;
}
This makes it easier for the caller to concatenate multiple fragments into one big binary packet and keep track of the used/needed buffer size.
Do you mean:
for (int i = sizeof(uint32_t) - 1; i >= 0; --i)
    *p++ = (number >> (i * 8)) & 0xFF;
Another option might be:
// This would only work on big-endian systems, e.g. SPARC, and it
// also assumes the compiler inserts no padding between the members
// (most ABIs would pad here, so the struct would not be 5 bytes).
struct unsignedMsg {
    unsigned char type;
    uint32_t value;
};

struct unsignedMsg msg;
msg.type = 7;
msg.value = number;
unsigned char *p = (unsigned char *) &msg;
or
unsigned char *p = ...; /* buffer of at least 5 bytes */
p[0] = 7;
*((uint32_t *) &(p[1])) = number; /* note: unaligned and aliasing-unsafe */
