How to randomly access word aligned data on ARM processors?

ARM CPUs, at least up to ARMv5, do not allow random access to memory addresses which are not word aligned. The problem is described at length here: http://lecs.cs.ucla.edu/wiki/index.php/XScale_alignment – one suggested solution is to rewrite your code or consider this alignment in the first place, but it's not said how. Given a byte stream in which I have 2- or 4-byte integers that are not word aligned: how do I access this data in a smart way without losing too much performance?
I have a code snippet which illustrates the problem:
#include <stdio.h>
#include <stdlib.h>
#define BUF_LEN 17
int main( int argc, char *argv[] ) {
    unsigned char buf[BUF_LEN];
    int i;
    unsigned short *p_short;
    unsigned long *p_long;
    /* fill array */
    (void) printf( "filling buffer:" );
    for ( i = 0; i < BUF_LEN; i++ ) {
        /* buf[i] = 1 << ( i % 8 ); */
        buf[i] = i;
        (void) printf( " %02hhX", buf[i] );
    }
    (void) printf( "\n" );
    /* testing with short */
    (void) printf( "accessing with short:" );
    for ( i = 0; i < BUF_LEN - sizeof(unsigned short); i++ ) {
        p_short = (unsigned short *) &buf[i];
        (void) printf( " %04hX", *p_short );
    }
    (void) printf( "\n" );
    /* testing with long */
    (void) printf( "accessing with long:" );
    for ( i = 0; i < BUF_LEN - sizeof(unsigned long); i++ ) {
        p_long = (unsigned long *) &buf[i];
        (void) printf( " %08lX", *p_long );
    }
    (void) printf( "\n" );
    return EXIT_SUCCESS;
}
On an x86 CPU this is the output:
filling buffer: 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10
accessing with short: 0100 0201 0302 0403 0504 0605 0706 0807 0908 0A09 0B0A 0C0B 0D0C 0E0D 0F0E
accessing with long: 03020100 04030201 05040302 06050403 07060504 08070605 09080706 0A090807 0B0A0908 0C0B0A09 0D0C0B0A 0E0D0C0B 0F0E0D0C
On an ATMEL AT91SAM9G20 (ARMv5 core) I get (note: this is the expected behaviour of this CPU!):
filling buffer: 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10
accessing with short: 0100 0100 0302 0302 0504 0504 0706 0706 0908 0908 0B0A 0B0A 0D0C 0D0C 0F0E
accessing with long: 03020100 00030201 01000302 02010003 07060504 04070605 05040706 06050407 0B0A0908 080B0A09 09080B0A 0A09080B 0F0E0D0C
So, given that I want or have to access the byte stream at unaligned addresses: how would I do that efficiently on ARM?

You write your own packing/unpacking functions, which translate between aligned variables and the unaligned byte stream. For example,
void unpack_uint32(uint8_t* unaligned_stream, uint32_t* aligned_var)
{
    /* byte-by-byte copy; this sketch assumes a little-endian stream */
    *aligned_var =  (uint32_t)unaligned_stream[0]
                 | ((uint32_t)unaligned_stream[1] << 8)
                 | ((uint32_t)unaligned_stream[2] << 16)
                 | ((uint32_t)unaligned_stream[3] << 24);
}
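Used on the buffer from the question, that would look something like this (a sketch; buf and i are the names from the question's snippet):
uint32_t value;
unpack_uint32(&buf[i], &value); /* safe for any i, aligned or not */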

Your example will demonstrate problems on any platform. The simple fix, of course:
unsigned char *buf;                        /* point this at the aligned storage below */
int i;
unsigned short *p_short;
unsigned long p_long[(BUF_LEN + 3) >> 2];  /* word-aligned backing store, rounded up */
If you cannot organize the data with better alignment (more bytes can at times equal better performance), then do the obvious: address everything as 32 bits and chop out portions from there. The optimizer will take care of a lot of it for the shorts and bytes within a word. (Actually, including bytes and shorts in your structures, be they structures or bytes picked out of memory, can be more costly, as there will be extra instructions compared with passing everything around as words; you have to do your system engineering.)
An example that extracts an unaligned word (you have to manage your endianness, of course):
a = (lptr[offset]<<16)|(lptr[offset+1]>>16);
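Spelled out with declarations, the same idea looks like this (a sketch; it assumes a big-endian word layout and a target word that starts 2 bytes into an aligned word, with aligned_buf and byte_offset as illustrative names):
const uint32_t *lptr = (const uint32_t *)aligned_buf;  /* word-aligned base  */
size_t offset = byte_offset / 4;                       /* first aligned word */
uint32_t a;
/* low half of word[offset] becomes the high half of the result,
   high half of word[offset+1] becomes the low half */
a = (lptr[offset] << 16) | (lptr[offset + 1] >> 16);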
All ARM cores from the ARMv4 to the present allow unaligned access; most have the alignment exception turned on by default, but you can turn it off. The older ones rotate the bytes within the loaded word, but others can grab other byte lanes, if I am not mistaken.
Do your system engineering, do your performance analysis, and determine whether moving everything as words is faster or slower. The actual moving of data will have some overhead, but the code on both sides will run much faster if everything is aligned. Can you suffer a data move that is some factor X slower in exchange for a 2x to 4x improvement in the generation and reception of that data?

This function always uses aligned 32-bit accesses:
uint32_t fetch_unaligned_uint32 (uint8_t *unaligned_stream)
{
    /* the shift directions below assume a big-endian CPU (mirror them
       for little-endian); note the aligned reads may touch up to three
       bytes before or after the four bytes of interest */
    switch (((uintptr_t)unaligned_stream) & 3u)
    {
    case 3u:
        return ((*(uint32_t *)(unaligned_stream - 3)) << 24)
             | ((*(uint32_t *)(unaligned_stream + 1)) >>  8);
    case 2u:
        return ((*(uint32_t *)(unaligned_stream - 2)) << 16)
             | ((*(uint32_t *)(unaligned_stream + 2)) >> 16);
    case 1u:
        return ((*(uint32_t *)(unaligned_stream - 1)) <<  8)
             | ((*(uint32_t *)(unaligned_stream + 3)) >> 24);
    case 0u:
    default:
        return *(uint32_t *)unaligned_stream;
    }
}
It may be faster than reading and shifting all 4 bytes separately.
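For comparison, the portable byte-by-byte version it competes with would look something like this (a sketch; it assembles the same big-endian value without reading any bytes outside the four of interest):
uint32_t fetch_unaligned_uint32_bytewise (const uint8_t *p)
{
    /* endian-independent: builds the big-endian value byte by byte */
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
}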

Related

C 40-bit byte swap (endian)

I'm reading/writing a binary file in little-endian format on a big-endian machine, using C and the bswap_{16,32,64} macros from byteswap.h for byte-swapping.
All values are read and written correctly, except a bit-field of 40 bits.
A bswap_40 macro doesn't exist, and I don't know how to write it or whether a better solution is possible.
Here is a small code showing this problem:
#include <stdio.h>
#include <inttypes.h>
#include <byteswap.h>
#define bswap_40(x) bswap_64(x)
struct tIndex {
    uint64_t val_64;
    uint64_t val_40:40;
} s1 = { 5294967296, 5294967296 };
int main(void)
{
    // write swapped values
    struct tIndex s2 = { bswap_64(s1.val_64), bswap_40(s1.val_40) };
    FILE *fp = fopen("index.bin", "w");
    fwrite(&s2, sizeof(s2), 1, fp);
    fclose(fp);
    // read swapped values
    struct tIndex s3;
    fp = fopen("index.bin", "r");
    fread(&s3, sizeof(s3), 1, fp);
    fclose(fp);
    s3.val_64 = bswap_64(s3.val_64);
    s3.val_40 = bswap_40(s3.val_40);
    printf("val_64: %" PRIu64 " -> %s\n", s3.val_64, (s1.val_64 == s3.val_64 ? "OK" : "Error"));
    printf("val_40: %" PRIu64 " -> %s\n", s3.val_40, (s1.val_40 == s3.val_40 ? "OK" : "Error"));
    return 0;
}
That code is compiled with:
gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE
swap_40.c -o swap_40
How can I define a bswap_40 macro to read and write these 40-bit values with byte-swapping?
By defining bswap_40 to be the same as bswap_64, you're swapping 8 bytes instead of 5. So if you start with this:
00 00 00 01 02 03 04 05
You end up with this:
05 04 03 02 01 00 00 00
Instead of this:
00 00 00 05 04 03 02 01
The simplest way to handle this is to take the result of bswap_64 and right shift it by 24:
#define bswap_40(x) (bswap_64(x) >> 24)
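A quick sanity check of the macro against the byte patterns above (this assumes glibc's byteswap.h):
#include <assert.h>
#include <stdint.h>
#include <byteswap.h>
#define bswap_40(x) (bswap_64(x) >> 24)
int main(void)
{
    uint64_t x = UINT64_C(0x0000000102030405);            /* 00 00 00 01 02 03 04 05 */
    assert(bswap_40(x) == UINT64_C(0x0000000504030201));  /* 00 00 00 05 04 03 02 01 */
    return 0;
}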
EDIT
I got better performance writing this macro (compared with my initial code, this produced fewer assembly instructions):
#define bswap40(s) \
((((s)&0xFF) << 32) | (((s)&0xFF00) << 16) | (((s)&0xFF0000)) | \
(((s)&0xFF000000) >> 16) | (((s)&0xFF00000000) >> 32))
use:
s3.val_40 = bswap40(s3.val_40);
... but it might be an optimizer issue. I think they should be optimized to the same thing.
Original Post
I love dbush's answer better... I was about to write this:
static inline void bswap40(void* s) {
    uint8_t* bytes = s;
    /* swap bytes 0<->4 and 1<->3 with XOR swaps; byte 2 stays put */
    bytes[0] ^= bytes[4];
    bytes[1] ^= bytes[3];
    bytes[4] ^= bytes[0];
    bytes[3] ^= bytes[1];
    bytes[0] ^= bytes[4];
    bytes[1] ^= bytes[3];
}
It's a destructive inline function that swaps the five bytes in place. Note that you can't point it at the bit-field directly (taking the address of a bit-field is illegal); run it over a plain 5-byte buffer instead.
Regarding "I'm reading/writing a binary file in little-endian format from big-endian using C and bswap_{16,32,64} macros from byteswap.h for byte-swapping":
I suggest a different way of approaching this problem: far more often, code needs to read a file in a known endian format and then convert to the code's endianness. This may involve a byte swap, it may not; the trick is to write code that works under all conditions.
unsigned char file_data[5];
size_t i;
// file data is in big endian
fread(file_data, sizeof file_data, 1, fp);
uint64_t y = 0;
for (i = 0; i < sizeof file_data; i++) {
    y <<= 8;
    y |= file_data[i];
}
printf("val_64: %" PRIu64 "\n", y);
uint64_t val_40:40; is not portable. Bit-fields on types other than int, signed int, and unsigned int are not portable and have implementation-defined behavior.
BTW: Open the file in binary mode:
// FILE *fp = fopen("index.bin", "w");
FILE *fp = fopen("index.bin", "wb");

Convert a char array into an integer (big and little endian)

I am trying to convert a char array into an integer, and then I have to increment that integer (on both little- and big-endian machines).
Example:
char ary[6] = { 01, 02, 03, 04, 05, 06 };
long int b = 0; // 64 bits
This char array will be stored in memory as:
address 0 1 2 3 4 5
value 01 02 03 04 05 06 (big endian)
Edit: value 01 02 03 04 05 06 (little endian)
memcpy(&b, ary, 6); // will do the copy in big-endian order, L->R
This is how it can end up stored in the integer's memory:
01 02 03 04 05 06 00 00 // big endian: the increment happens at the MSByte
01 02 03 04 05 06 00 00 // little endian: the increment happens at the LSByte
So if we increment the 64-bit integer, the expected value is 01 02 03 04 05 07. But endianness is a big problem here: if we directly increment the value of the integer, it will produce wrong numbers. For big endian we need to shift the value in b, then do the increment on it.
For little endian we CAN'T increment directly. (Edit: reverse and increment.)
Can we do the copy with respect to endianness, so we don't need to worry about shift operations at all?
Any other solution for incrementing char array values after copying them into an integer?
Is there any API in the Linux kernel to copy with respect to endianness?
Unless you want the byte array to represent a larger integer, which doesn't seem to be the case here, endianness does not matter. Endianness only applies to integer values of 16 bits or larger. If the character array is an array of 8-bit integers, endianness does not apply. So all your assumptions are incorrect; the char array will always be stored as
address 0 1 2 3 4 5
value 01 02 03 04 05 06
no matter the endianness.
However, if you memcpy the array into a uint64_t, endianness does apply. For a big-endian machine, simply memcpy() and you'll get everything in the expected format. For little endian, you'll have to copy the array in reverse, for example:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
int main (void)
{
    uint8_t array[6] = {1,2,3,4,5,6};
    uint64_t x = 0;
    for(size_t i = 0; i < sizeof(array); i++)
    {
        const uint8_t bit_shifts = ( sizeof(array)-1-i ) * 8;
        x |= (uint64_t)array[i] << bit_shifts;
    }
    /* x == 0x0000010203040506, so x + 1 now increments the last byte */
    printf("%.16" PRIX64 "\n", x);
    return 0;
}
You need to read up on the documentation. This page lists the following:
__u64 le64_to_cpup(const __le64 *);
__le64 cpu_to_le64p(const __u64 *);
__u64 be64_to_cpup(const __be64 *);
__be64 cpu_to_be64p(const __u64 *);
I believe they are sufficient to do what you want to do. Convert the number to CPU format, increment it, then convert back.
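So the pattern in kernel code would be roughly this (a sketch; it assumes kernel context where <asm/byteorder.h> is available, and increment_be64 is an illustrative name):
#include <asm/byteorder.h>
/* increment a big-endian 64-bit value stored in memory */
static void increment_be64(__be64 *p)
{
    __u64 v = be64_to_cpup(p);   /* wire order -> CPU order */
    v++;                         /* do the arithmetic natively */
    *p = cpu_to_be64p(&v);       /* CPU order -> wire order */
}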

Why do I get incorrect results "ffff..." when inspecting the bytes that make up a compiled function stored in memory?

I've been delving deeper into Linux and C, and I'm curious how functions are stored in memory.
I have the following function:
void test(){
    printf( "test\n" );
}
Simple enough. When I run objdump on the executable that has this function, I get the following:
08048464 <test>:
8048464: 55 push %ebp
8048465: 89 e5 mov %esp,%ebp
8048467: 83 ec 18 sub $0x18,%esp
804846a: b8 20 86 04 08 mov $0x8048620,%eax
804846f: 89 04 24 mov %eax,(%esp)
8048472: e8 11 ff ff ff call 8048388 <printf@plt>
8048477: c9 leave
8048478: c3 ret
Which all looks right.
The interesting part is when I run the following piece of code:
int main( void ) {
    char data[20];
    int i;
    memset( data, 0, sizeof( data ) );
    memcpy( data, test, 20 * sizeof( char ) );
    for( i = 0; i < 20; ++i ) {
        printf( "%x\n", data[i] );
    }
    return 0;
}
I get the following (which is incorrect):
55
ffffff89
ffffffe5
ffffff83
ffffffec
18
ffffffc7
4
24
10
ffffff86
4
8
ffffffe8
22
ffffffff
ffffffff
ffffffff
ffffffc9
ffffffc3
If I opt to leave out the memset( data, 0, sizeof( data ) ); line, then the right-most byte is correct, but some of them still have the leading 1s.
Does anyone have an explanation for:
1. why using memset to clear my array results in an incorrect (or inaccurate) representation of the function, and
2. what these bytes are stored as in memory? ints? chars? I don't quite understand what's going on here. (Clarification: what type of pointer would I use to traverse such data in memory?)
My immediate thought is that this is a result of x86 having instructions that don't end on a byte or half-byte boundary. But that doesn't make a whole lot of sense, and shouldn't cause any problems.
I believe your chars are being sign-extended to the width of an integer. You might get results closer to what you want by explicitly casting the value when you print it.
Here is a much simpler case of the code you tried to do:
int main( void ) {
    unsigned char *data = (unsigned char *)test;
    int i;
    for( i = 0; i < 20; ++i ) {
        printf( "%02x\n", data[i] );
    }
    return 0;
}
The changes I made are: remove the superfluous buffer and use a pointer to test instead, use unsigned char instead of char, and change the printf to use %02x so that it always prints two characters. (The %02x alone wouldn't fix the 'negative' numbers coming out as ffffff89 or so; that's fixed by the unsigned on the data pointer.)
All instructions in x86 end on byte boundaries, and the compiler will often insert extra "padding-instructions" to make sure branch-targets are aligned to 4, 8 or 16-byte boundaries for efficiency.
The problem is in your code to print.
One byte is loaded from the data array. (one byte == one char)
The byte is converted to an 'int' since that's what the compiler knows 'printf' wants. To do so it sign extends the byte to a 32 bit double-word. That's what gets printed out as hex. (This means a byte with the high bit of one will get converted to a 32 bit value with bits 8-31 all set. That's the ffffffxx values you see.)
What I do in this case is to convert it myself:
printf( "%x\n", ((int)data[i] && 0xFF) );
Then it will print correctly. (If you were loading 16 bit values you'd AND with 0xffff.)
Answer to 2: a byte is stored as a byte in memory; each memory location contains exactly one byte (a byte is an unsigned char).
Hint: pick up a good book on computer organization (my favorite is the one by Carl Hamacher) and understand a good deal about how memory is internally represented.
In your code:
memset( data, 0, sizeof( data ) );  // fine here: data is an array, so sizeof( data ) == 20
memcpy( data, test, 20 * sizeof( char ) );
for( i = 0; i < 20; ++i ) {
    printf( "%x\n", data[i] );  // prints a char promoted to a signed int in hex, hence the extra 0xFFFFFF on negative bytes
}
The printing looks odd because you're printing signed values, so they're being sign extended.
However, the function being printed is also slightly different. It looks like, instead of loading up EAX with the address of the string and stuffing it onto the stack, it just stores the address directly.
push ebp
mov ebp,esp
sub esp,18h
mov dword ptr [esp],8048610h
call <printf>
leave
ret
As to why it changes when you make seemingly benign changes elsewhere in the code - well, it's allowed to. That's why it's good not to rely on undefined behaviour.

c get data from BMP

I find myself writing a simple program to extract data from a bmp file. I just got started and I am at one of those WTF moments.
When I run the program and supply this image: http://www.hack4fun.org/h4f/sites/default/files/bindump/lena.bmp
I get the output:
type: 19778
size: 12
res1: 0
res2: 54
offset: 2621440
The actual image size is 786,486 bytes. Why is my code reporting 12 bytes?
The header format specified at http://en.wikipedia.org/wiki/BMP_file_format matches my BMP_FILE_HEADER structure. So why is it getting filled with the wrong information?
The image file doesn't appear to be corrupt and other images are giving equally wrong outputs. What am I missing?
#include <stdio.h>
#include <stdlib.h>
typedef struct {
    unsigned short type;
    unsigned int size;
    unsigned short res1;
    unsigned short res2;
    unsigned int offset;
} BMP_FILE_HEADER;
int main (int args, char ** argv) {
    char *file_name = argv[1];
    FILE *fp = fopen(file_name, "rb");
    BMP_FILE_HEADER file_header;
    fread(&file_header, sizeof(BMP_FILE_HEADER), 1, fp);
    if (file_header.type != 'MB') {
        printf("ERROR: not a .bmp");
        return 1;
    }
    printf("type: %i\nsize: %i\nres1: %i\nres2: %i\noffset: %i\n", file_header.type, file_header.size, file_header.res1, file_header.res2, file_header.offset);
    fclose(fp);
    return 0;
}
Here is the header in hex:
0000000 42 4d 36 00 0c 00 00 00 00 00 36 00 00 00 28 00
0000020 00 00 00 02 00 00 00 02 00 00 01 00 18 00 00 00
The length field is the bytes 36 00 0c 00, which is in Intel (little-endian) order; handled as a 32-bit value, it is 0x000c0036, or decimal 786,486 (which matches the saved file size).
Probably your C compiler is aligning each field to a 32-bit boundary. Enable a pack structure option, pragma, or directive.
There are two mistakes I could find in your code.
First mistake: you have to pack the structure to 1, so every member is exactly the size it's meant to be and the compiler doesn't align members to, for example, 4-byte boundaries. So in your code, the short, instead of occupying 2 bytes, occupied 4 (2 bytes of data plus 2 bytes of padding). The trick for this is to use a compiler directive for packing the nearest struct:
#pragma pack(1)
typedef struct {
    unsigned short type;
    unsigned int size;
    unsigned short res1;
    unsigned short res2;
    unsigned int offset;
} BMP_FILE_HEADER;
Now it should be aligned properly.
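One caveat: a bare #pragma pack(1) stays in effect for everything declared after it. The push/pop form (supported by GCC, Clang, and MSVC) scopes the packing to just this header:
#pragma pack(push, 1)
typedef struct {
    unsigned short type;
    unsigned int size;
    unsigned short res1;
    unsigned short res2;
    unsigned int offset;
} BMP_FILE_HEADER;
#pragma pack(pop)   /* restore the previous packing */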
The other mistake is in here:
if (file_header.type != 'MB')
You are trying to compare a short type, which is 2 bytes, with a character constant (using ''), which is canonically 1 character of 1-byte size. Probably the compiler is giving you a warning about that: a multi-character constant like 'MB' has an implementation-defined int value, so it is not a portable way to spell a two-byte magic number.
To get around this, you can split the 2 bytes into two 1-byte characters, which are known (M and B), and put them together into a word. For example:
if (file_header.type != (('M' << 8) | 'B'))
If you look at this expression, this is what happens:
'M' (which is 0x4D in ASCII), shifted 8 bits to the left, results in 0x4D00; now you can just add or OR the next character into the zeroed low byte: 0x4D00 | 0x42 = 0x4D42 (where 0x42 is 'B' in ASCII). Thinking like this, you could just write:
if (file_header.type != 0x4D42)
Then your code should work.

Why does fread mess with my byte order?

I'm trying to parse a bmp file with fread(), and when I begin to parse, it reverses the order of my bytes.
typedef struct{
    short magic_number;
    int file_size;
    short reserved_bytes[2];
    int data_offset;
}BMPHeader;
...
BMPHeader header;
...
The hex data is 42 4D 36 00 03 00 00 00 00 00 36 00 00 00;
I am loading the hex data into the struct by fread(&header,14,1,fileIn);
My problem is that where the magic number should be 0x424D ('BM'), fread() flips the bytes to 0x4D42 ('MB').
Why does fread() do this, and how can I fix it?
EDIT: If I wasn't specific enough, I need to read the whole chunk of hex data into the struct not just the magic number. I only picked the magic number as an example.
This is not the fault of fread, but of your CPU, which is (apparently) little-endian. That is, your CPU treats the first byte in a short value as the low 8 bits, rather than (as you seem to have expected) the high 8 bits.
Whenever you read a binary file format, you must explicitly convert from the file format's endianness to the CPU's native endianness. You do that with functions like these:
/* CHAR_BIT == 8 assumed */
uint16_t le16_to_cpu(const uint8_t *buf)
{
    return ((uint16_t)buf[0]) | (((uint16_t)buf[1]) << 8);
}
uint16_t be16_to_cpu(const uint8_t *buf)
{
    return ((uint16_t)buf[1]) | (((uint16_t)buf[0]) << 8);
}
You do your fread into a uint8_t buffer of the appropriate size, and then you manually copy all the data bytes over to your BMPHeader struct, converting as necessary. That would look something like this:
/* note adjustments to type definition */
typedef struct BMPHeader
{
    uint8_t magic_number[2];
    uint32_t file_size;
    uint8_t reserved[4];
    uint32_t data_offset;
} BMPHeader;
/* in general this is _not_ equal to sizeof(BMPHeader) */
#define BMP_WIRE_HDR_LEN (2 + 4 + 4 + 4)
/* returns 0=success, -1=error */
int read_bmp_header(BMPHeader *hdr, FILE *fp)
{
    uint8_t buf[BMP_WIRE_HDR_LEN];
    if (fread(buf, 1, sizeof buf, fp) != sizeof buf)
        return -1;
    hdr->magic_number[0] = buf[0];
    hdr->magic_number[1] = buf[1];
    hdr->file_size = le32_to_cpu(buf+2);
    hdr->reserved[0] = buf[6];
    hdr->reserved[1] = buf[7];
    hdr->reserved[2] = buf[8];
    hdr->reserved[3] = buf[9];
    hdr->data_offset = le32_to_cpu(buf+10);
    return 0;
}
You do not assume that the CPU's endianness is the same as the file format's even if you know for a fact that right now they are the same; you write the conversions anyway, so that in the future your code will work without modification on a CPU with the opposite endianness.
You can make life easier for yourself by using the fixed-width <stdint.h> types, by using unsigned types unless being able to represent negative numbers is absolutely required, and by not using integers when character arrays will do. I've done all these things in the above example. You can see that you need not bother endian-converting the magic number, because the only thing you need to do with it is test magic_number[0]=='B' && magic_number[1]=='M'.
Conversion in the opposite direction, btw, looks like this:
void cpu_to_le16(uint8_t *buf, uint16_t val)
{
    buf[0] = (val & 0x00FF);
    buf[1] = (val & 0xFF00) >> 8;
}
void cpu_to_be16(uint8_t *buf, uint16_t val)
{
    buf[0] = (val & 0xFF00) >> 8;
    buf[1] = (val & 0x00FF);
}
Conversion of 32-/64-bit quantities left as an exercise.
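For reference, the le32_to_cpu that read_bmp_header above relies on follows exactly the same pattern:
uint32_t le32_to_cpu(const uint8_t *buf)
{
    return  ((uint32_t)buf[0])
         | (((uint32_t)buf[1]) <<  8)
         | (((uint32_t)buf[2]) << 16)
         | (((uint32_t)buf[3]) << 24);
}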
I assume this is an endian issue, i.e. you are putting the bytes 42 and 4D into your short value, but your system is little-endian (I could have the name wrong), which stores a multi-byte integer's bytes least-significant-first rather than most-significant-first.
Demonstrated in this code:
#include <stdio.h>
int main()
{
    union {
        short sval;
        unsigned char bval[2];
    } udata;
    udata.sval = 1;
    printf( "DEC[%5hu] HEX[%04hx] BYTES[%02hhx][%02hhx]\n"
          , udata.sval, udata.sval, udata.bval[0], udata.bval[1] );
    udata.sval = 0x424d;
    printf( "DEC[%5hu] HEX[%04hx] BYTES[%02hhx][%02hhx]\n"
          , udata.sval, udata.sval, udata.bval[0], udata.bval[1] );
    udata.sval = 0x4d42;
    printf( "DEC[%5hu] HEX[%04hx] BYTES[%02hhx][%02hhx]\n"
          , udata.sval, udata.sval, udata.bval[0], udata.bval[1] );
    return 0;
}
Gives the following output
DEC[ 1] HEX[0001] BYTES[01][00]
DEC[16973] HEX[424d] BYTES[4d][42]
DEC[19778] HEX[4d42] BYTES[42][4d]
So if you want to be portable, you will need to detect the endianness of your system and then do a byte shuffle if required. There are plenty of examples around the internet of swapping the bytes around.
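One common runtime probe plus a 16-bit shuffle might look like this (a sketch; is_little_endian and swap16 are illustrative names):
#include <stdint.h>
/* returns nonzero on a little-endian CPU */
static int is_little_endian(void)
{
    const uint16_t probe = 1;
    return *(const uint8_t *)&probe == 1;
}
/* swap the two bytes of a 16-bit value */
static uint16_t swap16(uint16_t v)
{
    return (uint16_t)((v >> 8) | (v << 8));
}
With those, fixing up the magic number read from the big-endian file becomes: if (is_little_endian()) header.magic_number = swap16(header.magic_number);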
Subsequent question:
I ask only because my file size is 3 instead of 196662
This is due to memory alignment issues. 196662 is the bytes 36 00 03 00, and 3 is the bytes 03 00 00 00. Most systems need types like int etc. not to be split over multiple memory words. So intuitively you think your struct is laid out in memory like:
Offset
short magic_number; 00 - 01
int file_size; 02 - 05
short reserved_bytes[2]; 06 - 09
int data_offset; 0A - 0D
BUT on a 32-bit system that means file_size would have 2 bytes in the same word as magic_number and two bytes in the next word. Most compilers will not stand for this, so the way the structure is actually laid out in memory is like:
short magic_number; 00 - 01
<<unused padding>> 02 - 03
int file_size; 04 - 07
short reserved_bytes[2]; 08 - 0B
int data_offset; 0C - 0F
So when you read your byte stream in, the 36 00 goes into the padding area, which leaves your file_size getting the 03 00 00 00. Now, if you had used fwrite to create this data it would have been OK, as the padding bytes would have been written out. But if your input is always going to be in the format you have specified, it is not appropriate to read the whole struct in one go with fread. Instead you will need to read each of the elements individually.
Writing a struct to a file is highly non-portable -- it's safest to just not try to do it at all. Using a struct like this is guaranteed to work only if a) the struct is both written and read as a struct (never a sequence of bytes) and b) it's always both written and read on the same (type of) machine.
Not only are there "endian" issues with different CPUs (which is what it seems you've run into), there are also "alignment" issues. Different hardware implementations have different rules about placing integers only on even 2-byte or even 4-byte or even 8-byte boundaries. The compiler is fully aware of all this, and inserts hidden padding bytes into your struct so it always works right. But as a result of the hidden padding bytes, it's not at all safe to assume a struct's bytes are laid out in memory like you think they are.
If you're very lucky, you work on a computer that uses big-endian byte order and has no alignment restrictions at all, so you can lay structs directly over files and have it work. But you're probably not that lucky -- certainly programs that need to be "portable" to different machines have to avoid trying to lay structs directly over any part of any file.
