I've been trying to print unsigned short int values in C with no luck. As far as I know, it's a 16-bit value, so I've tried several different methods to print the 2 bytes together, but I've only been able to print them correctly when doing it byte by byte.
Note that I want to print these 16 bits in both decimal and hexadecimal format: for example, 00 01 as 1, or 00 ff as 255.
I have this struct:
struct arphdr {
    unsigned short int ar_hrd;
    unsigned short int ar_pro;
    unsigned char ar_hln;
    unsigned char ar_pln;
    unsigned short int ar_op;

    // I'm commenting out the following part because I won't need it
    // for this explanation
    /* unsigned char __ar_sha[ETH_ALEN];
       unsigned char __ar_sip[4];
       unsigned char __ar_tha[ETH_ALEN];
       unsigned char __ar_tip[4]; */
};
And this function:
void print_ARP_msg(struct arphdr arp_hdr) {
    // I need to print arp_hdr.ar_hrd in both decimal and hex.
    printf("Format HW: %04x\n", arp_hdr.ar_hrd);
    printf("Format Proto: %04x\n", arp_hdr.ar_pro);
    printf("HW Len: %x\n", arp_hdr.ar_hln);
    printf("Proto Len: %x\n", arp_hdr.ar_pln);
    printf("Command: %04x\n", arp_hdr.ar_op);
}
The print_ARP_msg function prints this:
Format HW: 0100
Format Proto: 0008
HW Len: 6
Proto Len: 4
Command: 0100
The hex values of the struct are "00 01 08 00 06 04 00 01", so I don't know why it prints 0100 for the arp_hdr.ar_hrd value.
Also, I made a function that prints the struct in hex to make sure I'm doing it right, and I was able to check that all the fields were correctly assigned.
PS: before sending this question, I realized that it's printing the correct hex values, just in the wrong order. Could it be related to the little/big endian "difference"?
Could it be related to the little/big endian "difference"?
Yes. If you're dealing with packets that have arrived over a network - and you're printing fields of an ARP packet, so that's exactly what you're doing - you may have to convert the fields from the byte order in which they were sent over the network to the byte order of the machine you're running on.
For example:
printf("Format HW: %04x\n", ntohs(arp_hdr.ar_hrd));
In that particular case, you can get away without the ntohs() call on a big-endian machine (SPARC, System/3x0, z/Architecture, PowerPC/Power ISA running AIX, PowerPC/Power ISA running Mac OS X, possibly PowerPC/Power ISA running Linux, etc.), but you can't get away without it on a little-endian machine (anything with an x86 processor, including x86-64 processors, probably most ARM, etc.).
You can safely use it on both types of processor; on a big-endian machine it simply does nothing.
The hex values of the struct are "00 01 08 00 06 04 00 01", so I don't know why it prints 0100 for the arp_hdr.ar_hrd value.
Looks like your platform uses little-endian byte order.
What is stored in memory as 00 01 is read by a little-endian machine as 01 x 2^8 + 00; in other words, the hex representation of the number is 0100.
ARP packets, like all internet protocol packets, are transmitted in "network order", which is big-endian. The client must use, for example, ntohs (network to host short) to convert from network order to the machine's local byte order, and htons to go the other way.
See man byteorder for details.
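Putting the two answers together, the question's function could print each field in both decimal and hex like this (a sketch, assuming the question's struct arphdr definition is in scope and was filled from a raw network packet):

#include <stdio.h>
#include <arpa/inet.h>  /* ntohs() */

void print_ARP_msg(struct arphdr arp_hdr) {
    /* 16-bit fields need network-to-host conversion;
       single-byte fields do not. */
    unsigned hrd = ntohs(arp_hdr.ar_hrd);
    unsigned pro = ntohs(arp_hdr.ar_pro);
    unsigned op  = ntohs(arp_hdr.ar_op);

    printf("Format HW: %u (%04x)\n", hrd, hrd);
    printf("Format Proto: %u (%04x)\n", pro, pro);
    printf("HW Len: %u (%x)\n", (unsigned)arp_hdr.ar_hln, (unsigned)arp_hdr.ar_hln);
    printf("Proto Len: %u (%x)\n", (unsigned)arp_hdr.ar_pln, (unsigned)arp_hdr.ar_pln);
    printf("Command: %u (%04x)\n", op, op);
}

With the sample bytes from the question, this should print "Format HW: 1 (0001)" rather than 0100.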
Related
I have a program that reads a binary file of 20 bytes, modifies the data and then writes it to another binary file. I am trying to check whether the source port is bigger than 32767 (0x7FFF).
If it is bigger, then I must zero out its most significant two bits. I am only allowed to use bitwise operators. Does anybody have an idea how I could do it? Thanks.
Data: 32 27 A9 49 03 8C BD 89 01 87 9B 8D 50 13 00 00 FF FF 00 00
Source port: 32 27 (16 bits)
void modify(const unsigned char oldData[], unsigned char newData[]){
    /* Accessing the source port: oldData[0] and oldData[1]. */
}
Just pick out the port value from the data. How you do that depends on endianness, little or big.
You will need to know how your data is organized - whether the 32 is the high or the low part.
Assuming your data is little-endian, you can read it like this:
#include <endian.h>
#include <stdint.h>

void modify(const unsigned char oldData[], unsigned char newData[])
{
    uint16_t port = le16toh( *((const uint16_t *)oldData) );
    if (port > 32767)
    { /*whatever*/ }
}
For the particular case of checking against >= 32768, which is >= 0x8000, you can test the high bit directly, as already noted in the comments:
if(oldData[1]&0x80) // [1] is highbyte if using little-endian data.
{/*whatever*/}
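To finish the question's task with bitwise operators only, clear the top two bits of the 16-bit port, which both live in the high byte (a sketch, keeping this answer's little-endian assumption that oldData[1] is the high byte, and the question's 20-byte buffer size):

void modify(const unsigned char oldData[], unsigned char newData[])
{
    /* copy all 20 bytes through unchanged first */
    for (int i = 0; i < 20; i++)
        newData[i] = oldData[i];

    /* port > 32767 means bit 15 is set, i.e. bit 7 of the high byte */
    if (newData[1] & 0x80)
        newData[1] &= 0x3F;  /* 0x3F = 0011 1111: zeroes bits 15 and 14 */
}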
I am trying to convert a char array into an integer, and then I have to increment that integer (on both little and big endian machines).
Example:
char ary[6] = {01, 02, 03, 04, 05, 06};
long int b = 0; // 64 bits
This char array will be stored in memory as:
address:  0  1  2  3  4  5
value:   01 02 03 04 05 06 (big endian)
Edit: value 01 02 03 04 05 06 (little endian)

memcpy(&b, ary, 6); // will copy left to right, as if big-endian
This is how it could end up stored in memory:
01 02 03 04 05 06 00 00 // big endian: the increment happens at the MSByte
01 02 03 04 05 06 00 00 // little endian: the increment happens at the LSByte
So if we increment the 64-bit integer, the expected value is 01 02 03 04 05 07. But endianness is a big problem here, since if we directly increment the value of the integer, it will produce a wrong number. For big endian we need to shift the value in b first, then increment it.
For little endian we CAN'T increment directly. (Edit: reverse and increment.)
Can we do the copy with respect to endianness, so we don't need to worry about shift operations at all?
Is there any other solution for incrementing the char array's value after copying it into an integer?
Is there any API in the Linux kernel to copy with respect to endianness?
Unless you want the byte array to represent a larger integer, which doesn't seem to be the case here, endianness does not matter. Endianness only applies to integer values of 16 bits or larger. If the character array is an array of 8-bit integers, endianness does not apply. So your assumptions are incorrect; the char array will always be stored as
address  0  1  2  3  4  5
value   01 02 03 04 05 06
no matter the endianness.
However, if you memcpy the array into a uint64_t, endianness does apply. For a big-endian machine, simply memcpy() and you'll get everything in the expected format. For little endian, you'll have to assemble the value yourself, most significant byte first, for example:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main (void)
{
    uint8_t array[6] = {1,2,3,4,5,6};
    uint64_t x = 0;

    for(size_t i=0; i<sizeof array; i++)  /* only the 6 array bytes, to avoid reading out of bounds */
    {
        const uint8_t bit_shifts = ( sizeof array - 1 - i ) * 8;
        x |= (uint64_t)array[i] << bit_shifts;
    }
    printf("%.16" PRIX64 "\n", x);  /* prints 0000010203040506 */
    return 0;
}
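If the goal is to increment the 6-byte value in place, the same shifting pattern works in both directions; a minimal sketch (increment_be is a hypothetical helper name):

#include <stdint.h>
#include <stddef.h>

/* increment a 6-byte big-endian value in place */
void increment_be(uint8_t array[6])
{
    uint64_t x = 0;
    for (size_t i = 0; i < 6; i++)
        x = (x << 8) | array[i];                 /* assemble, MSB first */
    x++;                                         /* 0x010203040506 -> ...0507 */
    for (size_t i = 0; i < 6; i++)
        array[5 - i] = (uint8_t)(x >> (i * 8));  /* write back, LSB last */
}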
You need to read up on the documentation. This page lists the following:
__u64 le64_to_cpup(const __le64 *);
__le64 cpu_to_le64p(const __u64 *);
__u64 be64_to_cpup(const __be64 *);
__be64 cpu_to_be64p(const __u64 *);
I believe they are sufficient to do what you want to do. Convert the number to CPU format, increment it, then convert back.
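In ordinary userspace code, glibc's <endian.h> offers the same pattern with be64toh()/htobe64(); a minimal sketch of convert-increment-convert-back (the 6 data bytes are left-padded with two zero bytes to fill a uint64_t):

#include <endian.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint8_t ary[8] = {0, 0, 1, 2, 3, 4, 5, 6};  /* value stored MSB first */
    uint64_t b;

    memcpy(&b, ary, sizeof b);  /* raw copy; bytes still big-endian */
    b = be64toh(b);             /* big-endian -> CPU order */
    b++;                        /* now ordinary arithmetic is safe */
    b = htobe64(b);             /* CPU order -> big-endian */
    memcpy(ary, &b, sizeof b);  /* last byte of ary is now 07 */
    return 0;
}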
I was trying to parse the header from an SQLite database file, using this (fragment of the actual) code:
struct Header_info {
    char *filename;
    char *sql_string;
    uint16_t page_size;
};

int read_header(FILE *db, struct Header_info *header)
{
    assert(db);

    uint8_t sql_buf[100] = {0};

    /* load the header */
    if(fread(sql_buf, 100, 1, db) != 1) {
        return ERR_SIZE;
    }

    /* copy the string */
    header->sql_string = strdup((char *)sql_buf);

    /* verify that we have a proper header */
    if(strcmp(header->sql_string, "SQLite format 3") != 0) {
        return ERR_NOT_HEADER;
    }

    memcpy(&header->page_size, (sql_buf + 16), 2);

    return 0;
}
Here are the relevant bytes of the file I'm testing it on:
0000000: 5351 4c69 7465 2066 6f72 6d61 7420 3300 SQLite format 3.
0000010: 1000 0101 0040 2020 0000 c698 0000 1a8e .....@  ........
Following this spec, the code looks correct to me.
Later I print header->page_size with this line:
printf("\tPage size: %"PRIu16"\n", header->page_size);
But that line prints out 16, instead of the expected 4096. Why? I'm almost certain it's some basic thing that I've just overlooked.
It's an endianness problem. x86 is little-endian, that is, in memory, the least significant byte is stored first. When you load 10 00 into memory on a little-endian architecture, you therefore get 00 10 in human-readable form, which is 16 instead of 4096.
Your problem is therefore that memcpy is not an appropriate tool to read the value.
See the following section of the SQLite file format spec:
1.2.2 Page Size
The two-byte value beginning at offset 16 determines the page size of
the database. For SQLite versions 3.7.0.1 and earlier, this value is
interpreted as a big-endian integer and must be a power of two between
512 and 32768, inclusive. Beginning with SQLite version 3.7.1, a page
size of 65536 bytes is supported. The value 65536 will not fit in a
two-byte integer, so to specify a 65536-byte page size, the value at
offset 16 is 0x00 0x01. This value can be interpreted as a big-endian
1 and thought of as a magic number to represent the 65536 page size.
Or one can view the two-byte field as a little-endian number and say
that it represents the page size divided by 256. These two
interpretations of the page-size field are equivalent.
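Following that spec, a big-endian read with the 65536 special case handled might look like this (a sketch reusing the question's sql_buf; note that page_size would need a type wider than uint16_t to hold 65536):

uint32_t raw = ((uint32_t)sql_buf[16] << 8) | sql_buf[17];  /* big-endian */
uint32_t page_size = (raw == 1) ? 65536 : raw;  /* 0x00 0x01 means 65536 */

For the question's bytes (10 00), raw is 0x1000, i.e. the expected 4096.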
It seems to be an endianness issue. If you are on a little-endian machine, this line:
memcpy(&header->page_size, (sql_buf + 16), 2);
copies the two bytes 10 00 into a uint16_t, which will have the low-order byte at the lower address.
You can do this instead:
header->page_size = sql_buf[17] | (sql_buf[16] << 8);
Update
For the record, note that the solution I propose works regardless of the endianness of the machine (see this article by Rob Pike).
The variable 'value' is a uint32_t:
value = htonl(value);
printf("after htonl is %ld\n\n",value);
This prints -201261056
value = htons(value);
printf("after htons is %ld\n\n",value);
This prints 62465
Can you suggest what the reason could be?
I guess your input is 500, isn't it?
500 is 2**8+2**7+2**6+2**5+2**4+2**2, i.e. 0x00 0x00 0x01 0xF4 written most significant byte first.
TCP/IP uses big endian. So after the htonl on a little-endian machine, the bytes are swapped and the resulting integer value is 0xF4 0x01 0x00 0x00.
If you print it as a signed integer, the most significant bit is 1, so it is negative. Negative numbers are stored as two's complement; the value is -(2**27 + 2**25+2**24+2**23+2**22+2**21+2**20+2**19+2**18+2**17+2**16) == -201261056.
Host order is the byte order in which your machine interprets data correctly (assuming your machine is little endian). Network order is big endian, which your system cannot interpret directly. This is the reason for your so-called garbage values.
So, basically, there is nothing wrong with the code. : )
Google "Endianness" to get all the details about big endian and little endian.
To provide some more info: in big endian, the first byte (the lowest address) holds the most significant byte, while in little endian the same place holds the least significant byte. So when you use htonl, your first byte will contain the most significant byte, but your system will treat it as the least significant byte.
Consider the Wikipedia example of decimal 1000 (hex 3E8): in big endian it is stored as 03 E8, in little endian as E8 03. Now, if you hand the bytes 03 E8 to a little-endian machine as a 16-bit value, it will read them as 0xE803, which is decimal 59395.
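You can reproduce that on your own machine by dumping the bytes of the value (a minimal sketch):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint16_t v = 1000;  /* 0x03E8 */
    uint8_t bytes[sizeof v];

    memcpy(bytes, &v, sizeof v);
    /* prints "e8 03" on a little-endian machine, "03 e8" on big-endian */
    printf("%02x %02x\n", bytes[0], bytes[1]);
    return 0;
}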
htonl() and htons() are functions used to convert data from the host's endianness to the network's endianness.
The network uses big endian, so if your system is x86, it is little-endian and a real conversion takes place.
Host to network byte order (long data) is htonl(), i.e. it converts a 32-bit value to network byte order.
Host to network byte order (short data) is htons(), i.e. it converts a 16-bit value to network byte order.
Here is a sample program that shows how htonl() works, as well as the effect of passing a 32-bit value to htons():
#include <stdio.h>
#include <arpa/inet.h>

int main()
{
    long data = 0x12345678;
    printf("\n After htonl():0x%x , 0x%x\n", htonl(data), htons(data));
    return 0;
}
It will print After htonl():0x78563412 , 0x7856 on X86_64.
Reference:
http://en.wikipedia.org/wiki/Endianess
http://msdn.microsoft.com/en-us/library/windows/desktop/ms738557%28v=vs.85%29.aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/ms738556%28v=vs.85%29.aspx
@halfelf: I just want to add my findings. I tried the below program with the same value, 500. I guess you have mistakenly labeled the LE output as BE and vice versa.
The actual output I got is 0xf4 0x01 0x00 0x00 in little-endian format; my machine is LE.
#include <stdio.h>
#include <netinet/in.h>

/* function to show bytes in memory, from location start to start+n */
void show_mem_rep(unsigned char *start, int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf(" %.2x-%p", start[i], start + i);
    printf("\n");
}

/* main function to call the above function for 500 */
int main()
{
    int i = 500;
    int y = htonl(i);
    printf("i--%d , y---%d,ntohl(y):%d\n", i, y, ntohl(ntohl(y)));
    printf("_------LITTLE ENDIAN-------\n");
    show_mem_rep((unsigned char *)&i, sizeof(i));
    printf("-----BIG ENDIAN-----\n");
    /* I used y = htonl(i) to reverse 500, so that I can print
       as if I were on a BE machine. */
    show_mem_rep((unsigned char *)&y, sizeof(i));
    getchar();
    return 0;
}
output is
i--500 , y----201261056,ntohl(y):-201261056
_------LITTLE ENDIAN-------
f4-0xbfda8f9c 01-0xbfda8f9d 00-0xbfda8f9e 00-0xbfda8f9f
-----BIG ENDIAN-----
00-0xbfda8f98 00-0xbfda8f99 01-0xbfda8f9a f4-0xbfda8f9b
I'm trying to parse a bmp file with fread(), and when I begin to parse, it reverses the order of my bytes.
typedef struct {
    short magic_number;
    int file_size;
    short reserved_bytes[2];
    int data_offset;
} BMPHeader;
...
BMPHeader header;
...
The hex data is 42 4D 36 00 03 00 00 00 00 00 36 00 00 00.
I am loading the hex data into the struct with fread(&header, 14, 1, fileIn);
My problem is that where the magic number should be 0x424d ('BM'), fread() flips the bytes to 0x4d42 ('MB').
Why does fread() do this, and how can I fix it?
EDIT: If I wasn't specific enough: I need to read the whole chunk of hex data into the struct, not just the magic number. I only picked the magic number as an example.
This is not the fault of fread, but of your CPU, which is (apparently) little-endian. That is, your CPU treats the first byte in a short value as the low 8 bits, rather than (as you seem to have expected) the high 8 bits.
Whenever you read a binary file format, you must explicitly convert from the file format's endianness to the CPU's native endianness. You do that with functions like these:
/* CHAR_BIT == 8 assumed */
uint16_t le16_to_cpu(const uint8_t *buf)
{
    return ((uint16_t)buf[0]) | (((uint16_t)buf[1]) << 8);
}

uint16_t be16_to_cpu(const uint8_t *buf)
{
    return ((uint16_t)buf[1]) | (((uint16_t)buf[0]) << 8);
}
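The read_bmp_header() code below also calls le32_to_cpu(), which follows the same pattern; a sketch of it:

uint32_t le32_to_cpu(const uint8_t *buf)
{
    return ((uint32_t)buf[0])         |
           (((uint32_t)buf[1]) << 8)  |
           (((uint32_t)buf[2]) << 16) |
           (((uint32_t)buf[3]) << 24);
}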
You do your fread into a uint8_t buffer of the appropriate size, and then you manually copy all the data bytes over to your BMPHeader struct, converting as necessary. That would look something like this:
/* note adjustments to type definition */
typedef struct BMPHeader
{
    uint8_t  magic_number[2];
    uint32_t file_size;
    uint8_t  reserved[4];
    uint32_t data_offset;
} BMPHeader;

/* in general this is _not_ equal to sizeof(BMPHeader) */
#define BMP_WIRE_HDR_LEN (2 + 4 + 4 + 4)

/* returns 0=success, -1=error */
int read_bmp_header(BMPHeader *hdr, FILE *fp)
{
    uint8_t buf[BMP_WIRE_HDR_LEN];

    if (fread(buf, 1, sizeof buf, fp) != sizeof buf)
        return -1;

    hdr->magic_number[0] = buf[0];
    hdr->magic_number[1] = buf[1];

    hdr->file_size = le32_to_cpu(buf+2);

    hdr->reserved[0] = buf[6];
    hdr->reserved[1] = buf[7];
    hdr->reserved[2] = buf[8];
    hdr->reserved[3] = buf[9];

    hdr->data_offset = le32_to_cpu(buf+10);

    return 0;
}
You do not assume that the CPU's endianness is the same as the file format's even if you know for a fact that right now they are the same; you write the conversions anyway, so that in the future your code will work without modification on a CPU with the opposite endianness.
You can make life easier for yourself by using the fixed-width <stdint.h> types, by using unsigned types unless being able to represent negative numbers is absolutely required, and by not using integers when character arrays will do. I've done all these things in the above example. You can see that you need not bother endian-converting the magic number, because the only thing you need to do with it is test magic_number[0]=='B' && magic_number[1]=='M'.
Conversion in the opposite direction, btw, looks like this:
void cpu_to_le16(uint8_t *buf, uint16_t val)
{
    buf[0] = (val & 0x00FF);
    buf[1] = (val & 0xFF00) >> 8;
}

void cpu_to_be16(uint8_t *buf, uint16_t val)
{
    buf[0] = (val & 0xFF00) >> 8;
    buf[1] = (val & 0x00FF);
}
Conversion of 32-/64-bit quantities left as an exercise.
I assume this is an endian issue, i.e. you are putting the bytes 42 and 4D into your short value, but your system is little endian (I could have the name wrong), which stores the bytes of a multi-byte integer type in the opposite order from what you expected.
Demonstrated in this code:
#include <stdio.h>

int main()
{
    union {
        short sval;
        unsigned char bval[2];
    } udata;

    udata.sval = 1;
    printf( "DEC[%5hu] HEX[%04hx] BYTES[%02hhx][%02hhx]\n"
          , udata.sval, udata.sval, udata.bval[0], udata.bval[1] );
    udata.sval = 0x424d;
    printf( "DEC[%5hu] HEX[%04hx] BYTES[%02hhx][%02hhx]\n"
          , udata.sval, udata.sval, udata.bval[0], udata.bval[1] );
    udata.sval = 0x4d42;
    printf( "DEC[%5hu] HEX[%04hx] BYTES[%02hhx][%02hhx]\n"
          , udata.sval, udata.sval, udata.bval[0], udata.bval[1] );
    return 0;
}
It gives the following output:
DEC[ 1] HEX[0001] BYTES[01][00]
DEC[16973] HEX[424d] BYTES[4d][42]
DEC[19778] HEX[4d42] BYTES[42][4d]
So if you want to be portable, you will need to detect the endianness of your system and then do a byte shuffle if required. There are plenty of examples around the internet of swapping the bytes around.
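For instance, a 16-bit byte swap needs nothing more than shifts and an OR (a sketch; swap16 is a hypothetical helper name):

#include <stdint.h>

/* swap the two bytes of a 16-bit value */
uint16_t swap16(uint16_t v)
{
    return (uint16_t)((v << 8) | (v >> 8));
}

/* swap16(0x4d42) == 0x424d, turning 'MB' back into 'BM' */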
Subsequent question:
I ask only because my file size is 3 instead of 196662
This is due to memory alignment issues. 196662 is the bytes 36 00 03 00 and 3 is the bytes 03 00 00 00. Most systems need types like int not to be split over multiple memory words. So intuitively you would think your struct is laid out in memory like:
                         Offset
short magic_number;      00 - 01
int file_size;           02 - 05
short reserved_bytes[2]; 06 - 09
int data_offset;         0A - 0D
BUT on a 32-bit system that means file_size would have two bytes in the same word as magic_number and two bytes in the next word. Most compilers will not stand for this, so the structure is actually laid out in memory like:
short magic_number;      00 - 01
<<unused padding>>       02 - 03
int file_size;           04 - 07
short reserved_bytes[2]; 08 - 0B
int data_offset;         0C - 0F
So when you read your byte stream in, the 36 00 goes into the padding area, which leaves your file_size getting 03 00 00 00. Now if you had used fwrite to create this data it should have been OK, as the padding bytes would have been written out. But if your input is always going to be in the format you have specified, it is not appropriate to read the whole struct in one go with fread. Instead you will need to read each of the elements individually.
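You can verify this layout on your own compiler with sizeof and offsetof (a sketch; the exact numbers depend on the platform's alignment rules):

#include <stdio.h>
#include <stddef.h>

typedef struct {
    short magic_number;
    int file_size;
    short reserved_bytes[2];
    int data_offset;
} BMPHeader;

int main(void)
{
    /* on a typical platform with 4-byte int alignment this prints
       size=16 offsets=0,4,8,12, revealing 2 padding bytes after
       magic_number */
    printf("size=%zu offsets=%zu,%zu,%zu,%zu\n",
           sizeof(BMPHeader),
           offsetof(BMPHeader, magic_number),
           offsetof(BMPHeader, file_size),
           offsetof(BMPHeader, reserved_bytes),
           offsetof(BMPHeader, data_offset));
    return 0;
}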
Writing a struct to a file is highly non-portable -- it's safest to just not try to do it at all. Using a struct like this is guaranteed to work only if a) the struct is both written and read as a struct (never a sequence of bytes) and b) it's always both written and read on the same (type of) machine. Not only are there "endian" issues with different CPUs (which is what it seems you've run into), there are also "alignment" issues. Different hardware implementations have different rules about placing integers only on even 2-byte or even 4-byte or even 8-byte boundaries. The compiler is fully aware of all this, and inserts hidden padding bytes into your struct so it always works right.
But as a result of the hidden padding bytes, it's not at all safe to assume a struct's bytes are laid out in memory like you think they are. If you're very lucky, you work on a computer that uses big-endian byte order and has no alignment restrictions at all, so you can lay structs directly over files and have it work. But you're probably not that lucky -- certainly programs that need to be "portable" to different machines have to avoid trying to lay structs directly over any part of any file.