C 40-bit byte swap (endian)

I'm reading/writing a binary file in little-endian format from big-endian using C and bswap_{16,32,64} macros from byteswap.h for byte-swapping.
All values are read and written correctly, except a bit-field of 40 bits.
The bswap_40 macro doesn't exist and I don't know how to do it, or whether a better solution is possible.
Here is a small program showing the problem:
#include <stdio.h>
#include <inttypes.h>
#include <byteswap.h>

#define bswap_40(x) bswap_64(x)

struct tIndex {
    uint64_t val_64;
    uint64_t val_40:40;
} s1 = { 5294967296, 5294967296 };

int main(void)
{
    // write swapped values
    struct tIndex s2 = { bswap_64(s1.val_64), bswap_40(s1.val_40) };
    FILE *fp = fopen("index.bin", "w");
    fwrite(&s2, sizeof(s2), 1, fp);
    fclose(fp);

    // read swapped values
    struct tIndex s3;
    fp = fopen("index.bin", "r");
    fread(&s3, sizeof(s3), 1, fp);
    fclose(fp);

    s3.val_64 = bswap_64(s3.val_64);
    s3.val_40 = bswap_40(s3.val_40);

    printf("val_64: %" PRIu64 " -> %s\n", s3.val_64, (s1.val_64 == s3.val_64 ? "OK" : "Error"));
    printf("val_40: %" PRIu64 " -> %s\n", s3.val_40, (s1.val_40 == s3.val_40 ? "OK" : "Error"));
    return 0;
}
That code is compiled with:
gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE
swap_40.c -o swap_40
How can I define a bswap_40 macro to read and write these 40-bit values with byte swapping?

By defining bswap_40 to be the same as bswap_64, you're swapping 8 bytes instead of 5. So if you start with this:
00 00 00 01 02 03 04 05
You end up with this:
05 04 03 02 01 00 00 00
Instead of this:
00 00 00 05 04 03 02 01
The simplest way to handle this is to take the result of bswap_64 and right shift it by 24:
#define bswap_40(x) (bswap_64(x) >> 24)
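For example, a quick round-trip check of that macro (a sketch, not part of the original answer) behaves as expected on a 40-bit value:

#include <assert.h>
#include <stdint.h>
#include <byteswap.h>

#define bswap_40(x) (bswap_64(x) >> 24)

int main(void)
{
    uint64_t v = 0x0102030405;              /* 40-bit value: bytes 01 02 03 04 05 */
    uint64_t swapped = bswap_40(v);

    assert(swapped == 0x0504030201);        /* the five bytes are reversed */
    assert(bswap_40(swapped) == v);         /* swapping twice restores the original */
    return 0;
}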

EDIT
I got better performance writing this macro (compared with my initial code, it produced fewer assembly instructions):
#define bswap40(s) \
((((s)&0xFF) << 32) | (((s)&0xFF00) << 16) | (((s)&0xFF0000)) | \
(((s)&0xFF000000) >> 16) | (((s)&0xFF00000000) >> 32))
use:
s3.val_40 = bswap40(s3.val_40);
... but it might be an optimizer issue. I think they should be optimized to the same thing.
Original Post
I like dbush's answer better... I was about to write this:
static inline void bswap40(void* s) {
    uint8_t* bytes = s;
    /* XOR-swap the outer byte pairs of the 5-byte value: 0<->4 and 1<->3; byte 2 stays put */
    bytes[0] ^= bytes[4];
    bytes[1] ^= bytes[3];
    bytes[4] ^= bytes[0];
    bytes[3] ^= bytes[1];
    bytes[0] ^= bytes[4];
    bytes[1] ^= bytes[3];
}
It's a destructive inline function that reverses the five bytes in place...
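For example, applied to a 5-byte buffer (a hypothetical usage snippet, not part of the original answer; it assumes the bswap40() function above):

#include <stdint.h>
#include <stdio.h>

/* assumes the bswap40() function defined above */
int main(void)
{
    uint8_t buf[5] = { 0x01, 0x02, 0x03, 0x04, 0x05 };
    bswap40(buf);
    for (int i = 0; i < 5; i++)
        printf("%02X ", buf[i]);   /* prints: 05 04 03 02 01 */
    printf("\n");
    return 0;
}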

I'm reading/writing a binary file in little-endian format from big-endian using C and bswap_{16,32,64} macros from byteswap.h for byte-swapping.
I suggest a different way of approaching this problem: far more often, code needs to read a file in a known endian format and then convert to the code's endianness. This may involve a byte swap, or it may not; the trick is to write code that works under all conditions.
unsigned char file_data[5];

// file data is in big endian
fread(file_data, sizeof file_data, 1, fp);

uint64_t y = 0;
for (size_t i = 0; i < sizeof file_data; i++) {
    y <<= 8;
    y |= file_data[i];
}
printf("val_40: %" PRIu64 "\n", y);
uint64_t val_40:40; is not portable. Bit-fields on types other than int, signed int, and unsigned int are not portable and have implementation-defined behavior.
BTW: Open the file in binary mode:
// FILE *fp = fopen("index.bin", "w");
FILE *fp = fopen("index.bin", "wb");

Related

Store C structs for multiple platform use - would this approach work?

Compiler: GNU GCC
Application type: console application
Language: C
Platforms: Win7 and Linux Mint
I wrote a program that I want to run under Win7 and Linux. The program writes C structs to a file and I want to be able to create the file under Win7 and read it back in Linux and vice versa.
By now, I have learned that writing complete structs with fwrite() will give almost 100% assurance that it won't be read back correctly by the other platform. This is due to padding and maybe other causes.
I defined all structs myself and they (now, after my previous question on this forum) all have members of type int32_t, int64_t and char. I am thinking about writing a WriteStructname() function for each struct that will write the individual members as int32_t, int64_t and char to the output file. Likewise, a ReadStructname() function to read the individual struct members from the file and copy them to an empty struct again.
Would this approach work? I prefer to have maximum control over my sourcecode, so I'm not looking for libraries or other dependencies to achieve this unless I really have to.
Thanks for reading
Element-wise writing of data to a file is your best approach, since structs will differ due to alignment and packing differences between compilers.
However, even with the approach you're planning on using, there are still potential pitfalls, such as different endianness between systems, or different encoding schemes (e.g., two's complement versus one's complement encoding of signed numbers).
If you're going to do this, you should consider something like a JSON parser to encode and decode your data so you don't corrupt it due to the issues mentioned above.
Good luck!
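For instance, a minimal sketch of one such per-struct writer, assuming a hypothetical struct with one int32_t and one int64_t member and a big-endian file layout (the names here are placeholders, not from the question):

#include <stdint.h>
#include <stdio.h>

struct Record {
    int32_t id;
    int64_t timestamp;
};

/* Write each member as fixed-width big-endian bytes so either platform can read it back. */
static int WriteRecord(FILE *fp, const struct Record *r)
{
    uint8_t buf[12];
    uint32_t u32 = (uint32_t)r->id;
    uint64_t u64 = (uint64_t)r->timestamp;

    for (int i = 0; i < 4; i++)
        buf[i] = (uint8_t)(u32 >> (8 * (3 - i)));      /* most significant byte first */
    for (int i = 0; i < 8; i++)
        buf[4 + i] = (uint8_t)(u64 >> (8 * (7 - i)));

    return fwrite(buf, sizeof buf, 1, fp) == 1 ? 0 : -1;
}

The matching ReadRecord() would reverse the process, assembling each value byte by byte before storing it into the struct.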
If you use GCC or any other compiler that supports "packed" structs, then as long as you restrict yourself to [u]intX_t types in the struct and apply an endianness fix to every field wider than 8 bits, you are platform safe :)
Here is example code that is portable between platforms; do not forget to set UIP_BYTE_ORDER manually for your target's endianness.
#include <stdint.h>
#include <stdio.h>

/* These macros are set manually, you should use some automated detection methodology */
#define UIP_BIG_ENDIAN 1
#define UIP_LITTLE_ENDIAN 2
#define UIP_BYTE_ORDER UIP_LITTLE_ENDIAN

/* Borrowed from uIP */
#ifndef UIP_HTONS
#  if UIP_BYTE_ORDER == UIP_BIG_ENDIAN
#    define UIP_HTONS(n) (n)
#    define UIP_HTONL(n) (n)
#    define UIP_HTONLL(n) (n)
#  else /* UIP_BYTE_ORDER == UIP_BIG_ENDIAN */
#    define UIP_HTONS(n) (uint16_t)((((uint16_t) (n)) << 8) | (((uint16_t) (n)) >> 8))
#    define UIP_HTONL(n) (((uint32_t)UIP_HTONS(n) << 16) | UIP_HTONS((uint32_t)(n) >> 16))
#    define UIP_HTONLL(n) (((uint64_t)UIP_HTONL(n) << 32) | UIP_HTONL((uint64_t)(n) >> 32))
#  endif /* UIP_BYTE_ORDER == UIP_BIG_ENDIAN */
#else
#error "UIP_HTONS already defined!"
#endif /* UIP_HTONS */

struct __attribute__((__packed__)) s_test
{
    uint32_t a;
    uint8_t b;
    uint64_t c;
    uint16_t d;
    int8_t string[13];
};

struct s_test my_data =
{
    .a = 0xABCDEF09,
    .b = 0xFF,
    .c = 0xDEADBEEFDEADBEEF,
    .d = 0x9876,
    .string = "bla bla bla"
};

void save()
{
    FILE * f;
    f = fopen("test.bin", "wb+");   /* binary mode, so Windows does not translate the bytes */

    /* Fix endianness */
    my_data.a = UIP_HTONL(my_data.a);
    my_data.c = UIP_HTONLL(my_data.c);
    my_data.d = UIP_HTONS(my_data.d);

    fwrite(&my_data, sizeof(my_data), 1, f);
    fclose(f);
}

void read()
{
    FILE * f;
    f = fopen("test.bin", "rb");    /* binary mode here as well */
    fread(&my_data, sizeof(my_data), 1, f);
    fclose(f);

    /* Fix endianness */
    my_data.a = UIP_HTONL(my_data.a);
    my_data.c = UIP_HTONLL(my_data.c);
    my_data.d = UIP_HTONS(my_data.d);
}

int main(int argc, char ** argv)
{
    save();
    return 0;
}
That's the saved file dump:
fanl#fanl-ultrabook:~/workspace-tmp/test3$ hexdump -v -C test.bin
00000000 ab cd ef 09 ff de ad be ef de ad be ef 98 76 62 |..............vb|
00000010 6c 61 20 62 6c 61 20 62 6c 61 00 00 |la bla bla..|
0000001c
This is a good approach. If all fields are integer types of a specific size such as int32_t, int64_t, or char, and you read/write the appropriate number of them to/from arrays, you should be fine.
The one thing you need to watch out for is endianness. Any integer type should be written in a known byte order and read back in the proper byte order for the system in question. The simplest way to do this is with the ntohs and htons functions for 16-bit ints and the ntohl and htonl functions for 32-bit ints. There are no corresponding standard functions for 64-bit ints, but those shouldn't be too difficult to write.
Here's a sample of how you could write these functions for 64-bit values:
uint64_t htonll(uint64_t val)
{
    uint8_t v[8];
    uint64_t result;
    int i;
    for (i = 0; i < 8; i++) {
        v[i] = (uint8_t)(val >> ((7 - i) * 8));
    }
    /* memcpy (from <string.h>) avoids the strict-aliasing and alignment problems
       of casting the byte array to a uint64_t pointer */
    memcpy(&result, v, sizeof result);
    return result;
}

uint64_t ntohll(uint64_t val)
{
    uint8_t *v = (uint8_t *)&val;
    uint64_t result = 0;
    int i;
    for (i = 0; i < 8; i++) {
        result |= (uint64_t)v[i] << ((7 - i) * 8);
    }
    return result;
}
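Typical use when writing to and reading from a file might then look like this (a sketch; it assumes fp is a FILE* opened in binary mode and the functions above are in scope):

uint64_t value = 5294967296;          /* host-order value */
uint64_t wire  = htonll(value);       /* big-endian (network) byte order */
fwrite(&wire, sizeof wire, 1, fp);    /* the bytes land in the file MSB first */

/* reading it back: */
fread(&wire, sizeof wire, 1, fp);
value = ntohll(wire);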

How would you read from a text file in C if you know the character coding, then display it on the console?

Consider this example in Java:
public final class Meh
{
    private static final String HELLO = "Hello world";
    private static final Charset UTF32 = Charset.forName("UTF-32");

    public static void main(final String... args)
        throws IOException
    {
        final Path tmpfile = Files.createTempFile("test", "txt");

        try (
            final Writer writer = Files.newBufferedWriter(tmpfile, UTF32);
        ) {
            writer.write(HELLO);
        }

        final String readBackFromFile;

        try (
            final Reader reader = Files.newBufferedReader(tmpfile, UTF32);
        ) {
            readBackFromFile = CharStreams.toString(reader);
        }

        Files.delete(tmpfile);

        System.out.println(HELLO.equals(readBackFromFile));
    }
}
This program prints true. Now, some notes:
a Charset in Java is a class wrapping a character coding, both ways; you can get a CharsetDecoder to decode a stream of bytes to a stream of characters, or a CharsetEncoder to encode a stream of characters into a stream of bytes;
this is why Java has char vs byte;
for historical reasons however, a char is only a 16-bit unsigned number: this is because when Java was born, Unicode did not define code points outside of what is now known as the BMP (Basic Multilingual Plane; that is, any code point in the range U+0000-U+FFFF, inclusive).
With all this out of the way, the code above performs the following:
given some "text", represented here as a String, it first applies a transformation of this text into a byte sequence before writing it to a file;
then it reads back that file: it is only a sequence of bytes, but then it applies the reverse transformation to find back the "original text" stored in it;
note that CharStreams.toString() is not in the standard JDK; this is a class from Guava.
Now, as to C... My question is as follows:
discussing the matter on the C chat room, I have learned that the C11 standard has, with <uchar.h>, what seems to be appropriate to store a Unicode code point, regardless of the encoding;
however, there doesn't seem to be the equivalent of Java's Charset; another comment on the chat room is that with C you're SOL but that C++ has codecvt...
And yes, I'm aware that UTF-32 is endianness-dependent; with Java, that is BE by default.
But basically: how would I program the above in C? Let's say I want to program the writing side or reading side in C, how would I do it?
In C, you'd typically use a library like libiconv, libunistring, or ICU.
If you only want to process UTF-32, you can directly write and read an array of 32-bit integers containing the Unicode code points, either in little or big endian. Unlike UTF-8 or UTF-16, a UTF-32 string doesn't need any special encoding and decoding. You can use any 32-bit integer type. I'd prefer C99's uint32_t over C11's char32_t. For example:
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main() {
    // Could also contain non-ASCII code points.
    static const uint32_t hello[] = {
        'H', 'e', 'l', 'l', 'o', ' ',
        'w', 'o', 'r', 'l', 'd'
    };
    static size_t num_chars = sizeof(hello) / sizeof(uint32_t);
    const char *path = "test.txt";

    FILE *outstream = fopen(path, "wb");
    // Write big endian 32-bit integers
    for (size_t i = 0; i < num_chars; i++) {
        uint32_t code_point = hello[i];
        for (int j = 0; j < 4; j++) {
            int c = (code_point >> ((3 - j) * 8)) & 0xFF;
            fputc(c, outstream);
        }
    }
    fclose(outstream);

    FILE *instream = fopen(path, "rb");
    // Get file size.
    fseek(instream, 0, SEEK_END);
    long file_size = ftell(instream);
    rewind(instream);
    if (file_size % 4) {
        fprintf(stderr, "File contains partial UTF-32");
        exit(1);
    }
    if (file_size > SIZE_MAX) {
        fprintf(stderr, "File too large");
        exit(1);
    }
    size_t num_chars_in = file_size / sizeof(uint32_t);
    uint32_t *read_back = malloc(file_size);
    // Read big endian 32-bit integers
    for (size_t i = 0; i < num_chars_in; i++) {
        uint32_t code_point = 0;
        for (int j = 0; j < 4; j++) {
            int c = fgetc(instream);
            code_point |= (uint32_t)c << ((3 - j) * 8);  /* cast avoids signed overflow for c >= 0x80 */
        }
        read_back[i] = code_point;
    }
    fclose(instream);

    bool equal = num_chars == num_chars_in
        && memcmp(hello, read_back, file_size) == 0;
    printf("%s\n", equal ? "true" : "false");
    free(read_back);
    return 0;
}
(Most error checks omitted for brevity.)
Compiling and running this program:
$ gcc -std=c99 -Wall so.c -o so
$ ./so
true
$ hexdump -C test.txt
00000000 00 00 00 48 00 00 00 65 00 00 00 6c 00 00 00 6c |...H...e...l...l|
00000010 00 00 00 6f 00 00 00 20 00 00 00 77 00 00 00 6f |...o... ...w...o|
00000020 00 00 00 72 00 00 00 6c 00 00 00 64 |...r...l...d|
0000002c

Reading double from binary file (byte order?)

I have a binary file, and I want to read a double from it.
In hex representation, I have these 8 bytes in a file (and then some more after that):
40 28 25 c8 9b 77 27 c9 40 28 98 8a 8b 80 2b d5 40 ...
This should correspond to a double value of around 10 (based on what that entry means).
I have used
#include <stdio.h>
#include <assert.h>

int main(int argc, char ** argv) {
    FILE * f = fopen(argv[1], "rb");
    assert(f != NULL);
    double a;
    fread(&a, sizeof(a), 1, f);
    printf("value: %f\n", a);
}
However, that prints
value: -261668255698743527401808385063734961309220864.000000
So clearly, the bytes are not converted into a double correctly. What is going on?
Using ftell, I could confirm that 8 bytes are being read.
Just like integer types, floating point types are subject to platform endianness. When I run this program on a little-endian machine:
#include <stdio.h>
#include <stdint.h>
#include <string.h>

uint64_t byteswap64(uint64_t input)
{
    uint64_t output = input;
    output = (output & 0x00000000FFFFFFFF) << 32 | (output & 0xFFFFFFFF00000000) >> 32;
    output = (output & 0x0000FFFF0000FFFF) << 16 | (output & 0xFFFF0000FFFF0000) >> 16;
    output = (output & 0x00FF00FF00FF00FF) << 8  | (output & 0xFF00FF00FF00FF00) >> 8;
    return output;
}

int main()
{
    uint64_t bytes = 0x402825c89b7727c9;
    double a;

    /* memcpy reinterprets the bit pattern without the aliasing problems of *(double*)&bytes */
    memcpy(&a, &bytes, sizeof a);
    printf("%f\n", a);

    bytes = byteswap64(bytes);
    memcpy(&a, &bytes, sizeof a);
    printf("%f\n", a);
    return 0;
}
Then the output is
12.073796
-261668255698743530000000000000000000000000000.000000
This shows that your data is stored in the file in big-endian format, but your platform is little-endian. So, you need to perform a byte swap after reading the value. The code above shows how to do that.
Endianness is convention. Reader and writer should agree on what endianness to use and stick to it.
You should read your number as a 64-bit integer, convert the endianness, and then reinterpret those bits as a double (with memcpy or a union), rather than converting the value with a cast.
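A sketch of that approach, assuming the file stores the 8 bytes big-endian as in the question (the file name here is a placeholder):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("data.bin", "rb");
    if (f == NULL)
        return 1;

    unsigned char b[8];
    if (fread(b, sizeof b, 1, f) != 1) {
        fclose(f);
        return 1;
    }
    fclose(f);

    uint64_t bits = 0;
    for (int i = 0; i < 8; i++)
        bits = (bits << 8) | b[i];    /* first byte in the file is the most significant */

    double d;
    memcpy(&d, &bits, sizeof d);      /* reinterpret the bits; a value cast would convert instead */
    printf("value: %f\n", d);
    return 0;
}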

How to randomly access word aligned data on ARM processors?

ARM CPUs at least up to ARMv5 do not allow random access to memory addresses which are not word aligned. The problem is described at length here: http://lecs.cs.ucla.edu/wiki/index.php/XScale_alignment – One solution is to rewrite your code or to consider this alignment in the first place; however, it is not said how. Given a byte stream where I have 2- or 4-byte integers which are not word aligned in the stream, how do I access this data in a smart way without losing too much performance?
I have a code snippet which illustrates the problem:
#include <stdio.h>
#include <stdlib.h>

#define BUF_LEN 17

int main( int argc, char *argv[] ) {
    unsigned char buf[BUF_LEN];
    int i;
    unsigned short *p_short;
    unsigned long *p_long;

    /* fill array */
    (void) printf( "filling buffer:" );
    for ( i = 0; i < BUF_LEN; i++ ) {
        /* buf[i] = 1 << ( i % 8 ); */
        buf[i] = i;
        (void) printf( " %02hhX", buf[i] );
    }
    (void) printf( "\n" );

    /* testing with short */
    (void) printf( "accessing with short:" );
    for ( i = 0; i < BUF_LEN - sizeof(unsigned short); i++ ) {
        p_short = (unsigned short *) &buf[i];
        (void) printf( " %04hX", *p_short );
    }
    (void) printf( "\n" );

    /* testing with long */
    (void) printf( "accessing with long:" );
    for ( i = 0; i < BUF_LEN - sizeof(unsigned long); i++ ) {
        p_long = (unsigned long *) &buf[i];
        (void) printf( " %08lX", *p_long );
    }
    (void) printf( "\n" );

    return EXIT_SUCCESS;
}
On a x86 CPU this is the output:
filling buffer: 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10
accessing with short: 0100 0201 0302 0403 0504 0605 0706 0807 0908 0A09 0B0A 0C0B 0D0C 0E0D 0F0E
accessing with long: 03020100 04030201 05040302 06050403 07060504 08070605 09080706 0A090807 0B0A0908 0C0B0A09 0D0C0B0A 0E0D0C0B 0F0E0D0C
On an ATMEL AT91SAM9G20 ARMv5 core I get (note: this is the expected behaviour of this CPU!):
filling buffer: 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10
accessing with short: 0100 0100 0302 0302 0504 0504 0706 0706 0908 0908 0B0A 0B0A 0D0C 0D0C 0F0E
accessing with long: 03020100 00030201 01000302 02010003 07060504 04070605 05040706 06050407 0B0A0908 080B0A09 09080B0A 0A09080B 0F0E0D0C
So, given that I want or have to access the byte stream at unaligned addresses: how would I do that efficiently on ARM?
You write your own packing/unpacking functions, which translate between aligned variables and the unaligned byte stream. For example,
void unpack_uint32(uint8_t* unaligned_stream, uint32_t* aligned_var)
{
    /* copy byte-by-byte from stream to var (shown here assuming a little-endian stream) */
    *aligned_var = (uint32_t)unaligned_stream[0]
                 | ((uint32_t)unaligned_stream[1] << 8)
                 | ((uint32_t)unaligned_stream[2] << 16)
                 | ((uint32_t)unaligned_stream[3] << 24);
}
Your example will demonstrate problems on any platform. The simple fix, of course:
unsigned char *buf;
int i;
unsigned short *p_short;
unsigned long p_long[BUF_LEN>>2];
If you cannot organize the data with better alignment (more bytes can at times mean better performance), then do the obvious: address everything as 32 bits and chop out portions from there; the optimizer will take care of a lot of it for the shorts and bytes within a word. (Actually, using bytes and shorts at all, be they structure members or bytes picked out of memory, can be more costly, as there will be extra instructions compared with passing everything around as words; you have to do your system engineering.)
An example of extracting an unaligned word (you have to manage your endianness, of course):
a = (lptr[offset]<<16)|(lptr[offset+1]>>16);
All ARM cores from the ARMv4 to the present allow unaligned access; most have the alignment exception turned on by default, but you can turn it off. The older ones rotate within the word, but the others can grab other byte lanes, if I am not mistaken.
Do your system engineering, do your performance analysis and determine if moving everything as words is faster or slower. The actual moving of data will have some overhead, but code on both sides will run much faster if everything is aligned. Can you suffer some number X times slower data move to have a 2x to 4x improvement on generation and reception of that data?
This function always uses aligned 32-bit accesses:
uint32_t fetch_unaligned_uint32 (uint8_t *unaligned_stream)
{
    /* Read the two aligned words that contain the value and combine them.
       This version assumes a little-endian byte order in memory. */
    uintptr_t addr = (uintptr_t)unaligned_stream;
    unsigned shift = (unsigned)(addr & 3u) * 8u;
    const uint32_t *word = (const uint32_t *)(addr & ~(uintptr_t)3u);

    if (shift == 0u)
        return word[0];

    return (word[0] >> shift) | (word[1] << (32u - shift));
}
It may be faster than reading and shifting all 4 bytes separately.
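For comparison, the byte-by-byte fetch it competes with looks like this (a sketch, assuming the stream stores values little-endian):

#include <stdint.h>

uint32_t fetch_uint32_bytewise(const uint8_t *p)
{
    /* four single-byte loads, no alignment requirement */
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}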

Why does fread mess with my byte order?

I'm trying to parse a BMP file with fread(), and when I begin to parse, it reverses the order of my bytes.
typedef struct {
    short magic_number;
    int file_size;
    short reserved_bytes[2];
    int data_offset;
} BMPHeader;
...
BMPHeader header;
...
The hex data is 42 4D 36 00 03 00 00 00 00 00 36 00 00 00;
I am loading the hex data into the struct by fread(&header,14,1,fileIn);
My problem is that where the magic number should be 0x424d ('BM'), fread() flips the bytes to 0x4d42 ('MB').
Why does fread() do this, and how can I fix it?
EDIT: If I wasn't specific enough, I need to read the whole chunk of hex data into the struct not just the magic number. I only picked the magic number as an example.
This is not the fault of fread, but of your CPU, which is (apparently) little-endian. That is, your CPU treats the first byte in a short value as the low 8 bits, rather than (as you seem to have expected) the high 8 bits.
Whenever you read a binary file format, you must explicitly convert from the file format's endianness to the CPU's native endianness. You do that with functions like these:
/* CHAR_BIT == 8 assumed */
uint16_t le16_to_cpu(const uint8_t *buf)
{
    return ((uint16_t)buf[0]) | (((uint16_t)buf[1]) << 8);
}

uint16_t be16_to_cpu(const uint8_t *buf)
{
    return ((uint16_t)buf[1]) | (((uint16_t)buf[0]) << 8);
}
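The 32-bit helper used below, le32_to_cpu, is not spelled out here; in the same style it would look something like this (a sketch):

uint32_t le32_to_cpu(const uint8_t *buf)
{
    return ((uint32_t)buf[0])
         | (((uint32_t)buf[1]) << 8)
         | (((uint32_t)buf[2]) << 16)
         | (((uint32_t)buf[3]) << 24);
}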
You do your fread into an uint8_t buffer of the appropriate size, and then you manually copy all the data bytes over to your BMPHeader struct, converting as necessary. That would look something like this:
/* note adjustments to type definition */
typedef struct BMPHeader
{
    uint8_t magic_number[2];
    uint32_t file_size;
    uint8_t reserved[4];
    uint32_t data_offset;
} BMPHeader;

/* in general this is _not_ equal to sizeof(BMPHeader) */
#define BMP_WIRE_HDR_LEN (2 + 4 + 4 + 4)

/* returns 0=success, -1=error */
int read_bmp_header(BMPHeader *hdr, FILE *fp)
{
    uint8_t buf[BMP_WIRE_HDR_LEN];

    if (fread(buf, 1, sizeof buf, fp) != sizeof buf)
        return -1;

    hdr->magic_number[0] = buf[0];
    hdr->magic_number[1] = buf[1];

    hdr->file_size = le32_to_cpu(buf+2);

    hdr->reserved[0] = buf[6];
    hdr->reserved[1] = buf[7];
    hdr->reserved[2] = buf[8];
    hdr->reserved[3] = buf[9];

    hdr->data_offset = le32_to_cpu(buf+10);

    return 0;
}
You do not assume that the CPU's endianness is the same as the file format's even if you know for a fact that right now they are the same; you write the conversions anyway, so that in the future your code will work without modification on a CPU with the opposite endianness.
You can make life easier for yourself by using the fixed-width <stdint.h> types, by using unsigned types unless being able to represent negative numbers is absolutely required, and by not using integers when character arrays will do. I've done all these things in the above example. You can see that you need not bother endian-converting the magic number, because the only thing you need to do with it is test magic_number[0]=='B' && magic_number[1]=='M'.
Conversion in the opposite direction, btw, looks like this:
void cpu_to_le16(uint8_t *buf, uint16_t val)
{
    buf[0] = (val & 0x00FF);
    buf[1] = (val & 0xFF00) >> 8;
}

void cpu_to_be16(uint8_t *buf, uint16_t val)
{
    buf[0] = (val & 0xFF00) >> 8;
    buf[1] = (val & 0x00FF);
}
Conversion of 32-/64-bit quantities left as an exercise.
I assume this is an endianness issue, i.e. you are putting the bytes 42 and 4D into your short value, but your system is little-endian (I could have the wrong name), which stores the least significant byte of a multi-byte integer at the lowest address rather than the highest.
Demonstrated in this code:
#include <stdio.h>

int main()
{
    union {
        short sval;
        unsigned char bval[2];
    } udata;

    udata.sval = 1;
    printf( "DEC[%5hu] HEX[%04hx] BYTES[%02hhx][%02hhx]\n"
          , udata.sval, udata.sval, udata.bval[0], udata.bval[1] );

    udata.sval = 0x424d;
    printf( "DEC[%5hu] HEX[%04hx] BYTES[%02hhx][%02hhx]\n"
          , udata.sval, udata.sval, udata.bval[0], udata.bval[1] );

    udata.sval = 0x4d42;
    printf( "DEC[%5hu] HEX[%04hx] BYTES[%02hhx][%02hhx]\n"
          , udata.sval, udata.sval, udata.bval[0], udata.bval[1] );

    return 0;
}
Gives the following output
DEC[ 1] HEX[0001] BYTES[01][00]
DEC[16973] HEX[424d] BYTES[4d][42]
DEC[19778] HEX[4d42] BYTES[42][4d]
So if you want to be portable you will need to detect the endianness of your system and then do a byte shuffle if required. There are plenty of examples around the internet of swapping the bytes around.
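For example, a minimal runtime check plus a 16-bit swap might look like this (a sketch, not from the answer above):

#include <stdint.h>
#include <stdio.h>

static int is_little_endian(void)
{
    uint16_t probe = 1;
    return *(uint8_t *)&probe == 1;   /* low byte stored first on little-endian machines */
}

static uint16_t swap16(uint16_t v)
{
    return (uint16_t)((v << 8) | (v >> 8));
}

int main(void)
{
    printf("little-endian: %d\n", is_little_endian());
    printf("0x424d swapped: 0x%04x\n", swap16(0x424d));   /* prints 0x4d42 */
    return 0;
}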
Subsequent question:
I ask only because my file size is 3 instead of 196662
This is due to memory alignment issues. 196662 is the bytes 36 00 03 00 and 3 is the bytes 03 00 00 00. Most systems need types like int etc. not to be split over multiple memory words. So intuitively you think your struct is laid out in memory like:

                              Offset
short magic_number;           00 - 01
int file_size;                02 - 05
short reserved_bytes[2];      06 - 09
int data_offset;              0A - 0D
BUT on a 32-bit system that means file_size has 2 bytes in the same word as magic_number and two bytes in the next word. Most compilers will not stand for this, so the way the structure is laid out in memory is actually like:
short magic_number;           00 - 01
<<unused padding>>            02 - 03
int file_size;                04 - 07
short reserved_bytes[2];      08 - 0B
int data_offset;              0C - 0F
So when you read your byte stream in, the 36 00 goes into the padding area, which leaves your file_size with 03 00 00 00. Now if you had used fwrite to create this data it would have been OK, since the padding bytes would have been written out. But if your input is always going to be in the format you have specified, it is not appropriate to read the whole struct as one with fread. Instead you will need to read each of the elements individually.
Writing a struct to a file is highly non-portable -- it's safest to just not try to do it at all. Using a struct like this is guaranteed to work only if a) the struct is both written and read as a struct (never a sequence of bytes) and b) it's always both written and read on the same (type of) machine. Not only are there "endian" issues with different CPUs (which is what it seems you've run into), there are also "alignment" issues. Different hardware implementations have different rules about placing integers only on even 2-byte or even 4-byte or even 8-byte boundaries. The compiler is fully aware of all this, and inserts hidden padding bytes into your struct so it always works right. But as a result of the hidden padding bytes, it's not at all safe to assume a struct's bytes are laid out in memory like you think they are. If you're very lucky, you work on a computer that uses big-endian byte order and has no alignment restrictions at all, so you can lay structs directly over files and have it work. But you're probably not that lucky -- certainly programs that need to be "portable" to different machines have to avoid trying to lay structs directly over any part of any file.
