Problems with converting byte array to float/long in C

Long:
char long_num[8];
for (i = 0; i < 8; i++)
    long_num[i] = data[pos++];
memcpy(&res, long_num, 8);
The values in long_num are as follows:
127 -1 -1 -1 -1 -1 -1 -1
res should be the maximum value of signed long, but is -129 instead.
EDIT: This one is taken care of. It was a result of a communication problem: for the person providing the data, a long is eight bytes; for my C compiler it's four.
Float:
float *res;
/* ... */
char float_num[4];
for (i = 0; i < 4; i++)
    float_num[i] = data[pos++];
res = (float *)float_num;
It's zero. Array values:
62 -1 24 50
I also tried memcpy(), but it yields zero as well. What am I doing wrong?
My system: Linux 2.6.31-16-generic i686 GNU/Linux

You are running the code on a little-endian system. Reverse the order of bytes in the array and try again:
signed char long_num[] = {-1, -1, -1, -1, -1, -1, -1, 127};
// ...
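If the bytes arrive at runtime rather than in an initializer, a small helper can do the reversal before the memcpy. A minimal sketch (reverse_copy is an invented name, not a standard function):
#include <stddef.h>

/* Copy n bytes from src into dst in reversed order, converting between
 * big-endian wire order and little-endian host order. */
static void reverse_copy(void *dst, const unsigned char *src, size_t n)
{
    unsigned char *d = dst;
    size_t i;
    for (i = 0; i < n; i++)
        d[i] = src[n - 1 - i];
}

/* Usage, assuming an 8-byte long long on the receiving side:
 *     long long res;
 *     reverse_copy(&res, (const unsigned char *)long_num, 8); */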

These are two questions, quite unrelated.
In the first one, your computer is little-endian. The sign bit is set in the long that you piece together, so the result is negative. It is close to zero because many of the "most significant bits" are set.
In the second example, violating the strict aliasing rules could be an explanation for the weird behavior; I am not sure. If you are using gcc, try a union instead: gcc guarantees what happens when you convert data this way through a union.
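A minimal sketch of the union approach, using the asker's four float bytes (this assumes the bytes are already in the machine's own byte order):
#include <stdio.h>

union f_bytes {
    float flt;
    char data[sizeof(float)];
};

int main(void)
{
    union f_bytes u;
    u.data[0] = 62;   /* the four bytes from the question */
    u.data[1] = -1;
    u.data[2] = 24;
    u.data[3] = 50;
    printf("%.14e\n", u.flt);  /* read the same bytes back as a float */
    return 0;
}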

Given this code:
#include <stdio.h>
#include <string.h>

int main(void)
{
    {
        long res;
        char long_num[8] = { 0x7F, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
        memcpy(&res, long_num, 8);
        printf("%ld = 0x%lX\n", res, res);
    }
    {
        float res;
        char float_num[4] = { 62, 0xFF, 24, 50 };
        memcpy(&res, float_num, 4);
        printf("%f = %19.14e\n", res, res);
    }
    return 0;
}
Compiling in 64-bit mode on MacOS X 10.6.4 with GCC 4.5.1 gives:
-129 = 0xFFFFFFFFFFFFFF7F
0.000000 = 8.90559981314709e-09
This is correct for a little-endian Intel machine (well, the 'long' value is correct).
What you are trying to do is a little unusual and not recommended. It is not portable, not least because of endianness issues.
I previously wrote some related code on a SPARC machine (which is a big-endian machine):
union u_double
{
    double dbl;
    char data[sizeof(double)];
};

union u_float
{
    float flt;
    char data[sizeof(float)];
};

static void dump_float(union u_float f)
{
    int exp;
    long mant;

    printf("32-bit float: sign: %d, ", (f.data[0] & 0x80) >> 7);
    exp = ((f.data[0] & 0x7F) << 1) | ((f.data[1] & 0x80) >> 7);
    printf("expt: %4d (unbiassed %5d), ", exp, exp - 127);
    mant = ((((f.data[1] & 0x7F) << 8) | (f.data[2] & 0xFF)) << 8) | (f.data[3] & 0xFF);
    printf("mant: %16ld (0x%06lX)\n", mant, mant);
}

static void dump_double(union u_double d)
{
    int exp;
    long long mant;

    printf("64-bit float: sign: %d, ", (d.data[0] & 0x80) >> 7);
    exp = ((d.data[0] & 0x7F) << 4) | ((d.data[1] & 0xF0) >> 4);
    printf("expt: %4d (unbiassed %5d), ", exp, exp - 1023);
    mant = ((((d.data[1] & 0x0F) << 8) | (d.data[2] & 0xFF)) << 8) | (d.data[3] & 0xFF);
    mant = (mant << 32) | ((((((d.data[4] & 0xFF) << 8) | (d.data[5] & 0xFF)) << 8) | (d.data[6] & 0xFF)) << 8) | (d.data[7] & 0xFF);
    printf("mant: %16lld (0x%013llX)\n", mant, mant);
}

static void print_value(double v)
{
    union u_double d;
    union u_float f;

    f.flt = v;
    d.dbl = v;
    printf("SPARC: float/double of %g\n", v);
    /* image_print() is a local hex-dump helper (not shown here). */
    image_print(stdout, 0, f.data, sizeof(f.data));
    image_print(stdout, 0, d.data, sizeof(d.data));
    dump_float(f);
    dump_double(d);
}

int main(void)
{
    print_value(+1.0);
    print_value(+2.0);
    print_value(+3.0);
    print_value( 0.0);
    print_value(-3.0);
    print_value(+3.1415926535897932);
    print_value(+1e126);
    return(0);
}
This is what I got on that platform. Note that there is an implicit '1' bit in the mantissa, so the value of '3' only has a single bit set because the other 1-bit is implied.
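For example, for 3.0 the dump below shows an unbiased exponent of 1 and a mantissa field of 0x400000 (only the top mantissa bit set); with the implicit leading 1 restored, the value works out as (1 + 0x400000 / 2^23) * 2^1 = 1.5 * 2 = 3.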
SPARC: float/double of 1
0x0000: 3F 80 00 00 ?...
0x0000: 3F F0 00 00 00 00 00 00 ?.......
32-bit float: sign: 0, expt: 127 (unbiassed 0), mant: 0 (0x000000)
64-bit float: sign: 0, expt: 1023 (unbiassed 0), mant: 0 (0x0000000000000)
SPARC: float/double of 2
0x0000: 40 00 00 00 #...
0x0000: 40 00 00 00 00 00 00 00 #.......
32-bit float: sign: 0, expt: 128 (unbiassed 1), mant: 0 (0x000000)
64-bit float: sign: 0, expt: 1024 (unbiassed 1), mant: 0 (0x0000000000000)
SPARC: float/double of 3
0x0000: 40 40 00 00 ##..
0x0000: 40 08 00 00 00 00 00 00 #.......
32-bit float: sign: 0, expt: 128 (unbiassed 1), mant: 4194304 (0x400000)
64-bit float: sign: 0, expt: 1024 (unbiassed 1), mant: 2251799813685248 (0x8000000000000)
SPARC: float/double of 0
0x0000: 00 00 00 00 ....
0x0000: 00 00 00 00 00 00 00 00 ........
32-bit float: sign: 0, expt: 0 (unbiassed -127), mant: 0 (0x000000)
64-bit float: sign: 0, expt: 0 (unbiassed -1023), mant: 0 (0x0000000000000)
SPARC: float/double of -3
0x0000: C0 40 00 00 .#..
0x0000: C0 08 00 00 00 00 00 00 ........
32-bit float: sign: 1, expt: 128 (unbiassed 1), mant: 4194304 (0x400000)
64-bit float: sign: 1, expt: 1024 (unbiassed 1), mant: 2251799813685248 (0x8000000000000)
SPARC: float/double of 3.14159
0x0000: 40 49 0F DB #I..
0x0000: 40 09 21 FB 54 44 2D 18 #.!.TD-.
32-bit float: sign: 0, expt: 128 (unbiassed 1), mant: 4788187 (0x490FDB)
64-bit float: sign: 0, expt: 1024 (unbiassed 1), mant: 2570638124657944 (0x921FB54442D18)
SPARC: float/double of 1e+126
0x0000: 7F 80 00 00 ....
0x0000: 5A 17 A2 EC C4 14 A0 3F Z......?
32-bit float: sign: 0, expt: 255 (unbiassed 128), mant: 0 (0x000000)
64-bit float: sign: 0, expt: 1441 (unbiassed 418), mant: -1005281217 (0xFFFFFFFFC414A03F)
You'd have to do some diddling to the code to make it work sanely on a little-endian machine like an Intel machine.

If you are communicating over a network between different machines (as the update implies), you have to define your protocol to ensure that both ends know how to get the data accurately to the other end. It is not necessarily trivial - there are many complex systems available around the world.
One standard method is to define a canonical ordering for the bytes - and a canonical size for the types. This is often called 'network byte order' when dealing with IPv4 addresses, for example. It is partially defining the endianness of the data; it is also about defining that the value is sent as a 4-byte value rather than as an 8-byte value - or vice versa.
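As a rough sketch of the canonical-ordering idea (htonl/ntohl are the standard POSIX helpers; the fixed 4-byte wire size is the part both ends must agree on):
#include <arpa/inet.h>  /* htonl, ntohl */
#include <stdint.h>
#include <string.h>

/* Sender: place a 32-bit value into the buffer in network byte order. */
void put_u32(unsigned char *buf, uint32_t value)
{
    uint32_t wire = htonl(value);
    memcpy(buf, &wire, 4);
}

/* Receiver: recover the value regardless of local endianness. */
uint32_t get_u32(const unsigned char *buf)
{
    uint32_t wire;
    memcpy(&wire, buf, 4);
    return ntohl(wire);
}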
Another technique is based on ASN.1, which encodes the data with a type, a length, and a value (TLV encoding). Each piece of data is sent with information that identifies what is being sent.
The DRDA protocol used by the IBM DB2 DBMS has a different policy - 'receiver makes right'. The sender identifies what sort of machine it is when the session starts, and then sends the data in its own most convenient format. The receiver is responsible for fixing up what was sent. (This applies to both the DB server and the DB client: the client sends in its preferred notation and the server fixes what it receives, while the server sends in its preferred notation and the client fixes what it receives.)
Another extremely effective way of dealing with the problems is to use a textual protocol. The data is transmitted as the text version of the data, with a clear mechanism for identifying the different fields. This is much easier to debug than the various binary-encoding mechanisms because you can dump the data and see what is going on. It is not necessarily much less efficient than a binary protocol - especially if you typically send 8-byte integers that actually contain single-digit integer values.
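A minimal sketch of the textual idea (the field names and layout here are invented for illustration):
#include <stdio.h>

int main(void)
{
    /* Sender side: encode an id and a reading as one line of text. */
    char line[64];
    int id;
    long temp;

    snprintf(line, sizeof(line), "id=%d temp=%ld\n", 7, 23L);

    /* Receiver side: parse it back; sscanf reports how many fields matched. */
    if (sscanf(line, "id=%d temp=%ld", &id, &temp) == 2)
        printf("decoded id=%d temp=%ld\n", id, temp);
    return 0;
}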

Related

Construct 4-byte data from int

I have a device (hardware I cannot modify) that I have to communicate with using data bytes. The problem I am experiencing is that I cannot simply send a value converted to a byte. As an example, I have a value of 23 that has to be converted into the 4-byte command:
byte[0] -> MSB
byte[1] -> Mid Hi
byte[2] -> Mid Lo
byte[3] -> LSB
I cannot simply convert 25 -> 0x19 and use it like that. I am not a person who understands much about lower-level languages; I am truly trying to learn, but I know I have to shift bytes. My assumption is something like this:
int value = 23;
byte[0] = (value >> 24) & 0xFF;
byte[1] = (value >> 16) & 0xFF;
byte[2] = (value >> 8) & 0xFF;
byte[3] = value & 0xFF;
Is this correct? Will this produce the outcome I am looking for? As an example, how would you translate FF FF FF FB back to an int (based on the description above)?
MSB: ff
Mid Hi: ff
Mid Lo: ff
LSB: fb
What does this represent?
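A sketch of the reverse direction, reassembling the four big-endian bytes and letting two's complement supply the sign (0xFFFFFFFB comes out as -5 on the usual two's-complement targets):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t byte[4] = { 0xFF, 0xFF, 0xFF, 0xFB };  /* MSB, Mid Hi, Mid Lo, LSB */

    uint32_t u = ((uint32_t)byte[0] << 24)
               | ((uint32_t)byte[1] << 16)
               | ((uint32_t)byte[2] << 8)
               |  (uint32_t)byte[3];
    int32_t value = (int32_t)u;  /* reinterpret as signed: 0xFFFFFFFB -> -5 */

    printf("%ld\n", (long)value);
    return 0;
}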

CRC16 CCITT code - how to adapt manufacturer sample source

I am trying to create code that reads data from an RFID reader module. In order to do this I need to do a CRC16 CCITT calculation.
I have found C source code for the CRC16 checksum calculation in the reader manufacturer's technical datasheet: http://www.card-sys.com/manuals/framer_eng.pdf
Unfortunately this is just a fragment of code, not a full working example.
When the RFID reader is put in automatic mode, it automatically sends 11 bytes every time it reads a tag. The CRC is calculated using all of the bytes except the last two, which are the CRCH (CRC high byte) and CRCL (CRC low byte).
When I read an RFID tag, I get 11 bytes transferred, e.g. (hex) 01 0B 03 01 06 87 DB C7 FF E5 68. The last two bytes, E5 68, are the CRC16 checksum for the message. In order to confirm the data is OK, I need to calculate the same CRC16 against 01 0B 03 01 06 87 DB C7 FF at the destination point.
I tried putting everything together in one piece, but I do not have much experience with C programming and my code does not work.
Here is the source code:
#include <stdio.h>
#include <stdlib.h>

// CRC16 from Netronix datasheet
void CRC16(unsigned char * Data, unsigned short * CRC, unsigned char Bytes)
{
    int i, byte;
    unsigned short C;

    *CRC = 0;
    for (byte = 1; byte <= Bytes; byte++, Data++)
    {
        C = ((*CRC >> 8) ^ *Data) << 8;
        for (i = 0; i < 8; i++)
        {
            if (C & 0x8000)
                C = (C << 1) ^ 0x1021;
            else
                C = C << 1;
        }
        *CRC = C ^ (*CRC << 8);
    }
}

int main(void)
{
    puts("Test...");
    unsigned char * Data16 = "10ac0501ff";
    unsigned short * CRC = 0;
    unsigned char Bytes16 = 4;
    CRC16(Data16, CRC, Bytes16);
    puts(CRC);
    return EXIT_SUCCESS;
}
What I would like to do is learn how to use the manufacturer's code in a working example - that is, how to get the CRC16 calculated.
Could you please help me with this? Thanks.
Using your source code I created the following program.
#include <stdio.h>
#include <stdlib.h>

// CRC16 from Netronix datasheet
void CRC16(unsigned char * Data, unsigned short * CRC, unsigned char Bytes)
{
    int i, byte;
    unsigned short C;

    *CRC = 0;
    for (byte = 1; byte <= Bytes; byte++, Data++)
    {
        C = ((*CRC >> 8) ^ *Data) << 8;
        for (i = 0; i < 8; i++)
        {
            if (C & 0x8000)
                C = (C << 1) ^ 0x1021;
            else
                C = C << 1;
        }
        *CRC = C ^ (*CRC << 8);
    }
}

int main(void)
{
    // When I read RFID tag from a reader I got 11 bytes transferred... i.e.
    // (hex) 01 0B 03 01 06 87 DB C7 FF E5 68.
    // Last two bytes E5 68 are crc16.
    // In order to confirm the data is OK I need to calculate the same crc16
    // against 01 0B 03 01 06 87 DB C7 FF at the destination point.
    unsigned char Data16[] = { 0x01, 0x0B, 0x03, 0x01, 0x06, 0x87, 0xDB, 0xC7, 0xFF };
    unsigned short CRC = 0;
    unsigned char Bytes16 = 9;

    CRC16(Data16, &CRC, Bytes16);
    printf(" CRC calculated is %x\n", CRC);
    return EXIT_SUCCESS;
}
The output is CRC calculated is e568.
There are a couple of changes I made.
First, the data I used is from your comment about the RFID tag reader output:
When I read RFID tag from a reader I got 11 bytes transferred... i.e.
(hex) 01 0B 03 01 06 87 DB C7 FF E5 68. Last two bytes E5 68 are
crc16. In order to confirm the data is OK I need to calculate the same
crc16 against 01 0B 03 01 06 87 DB C7 FF at the destination point. You
are probably right about the Data16[]... I will change this later
today and let you know what current status is. Thanks for helping :)
I used a length of the data that excludes the checksum. So the length in the frame data is 0x0B or 11 and since the checksum is 2 bytes, I used 11 - 2 or 9 for the length.
Finally, I changed the definition of the variable CRC to unsigned short CRC = 0; and, when calling the CRC function, I used the address-of operator, as in CRC16(Data16, &CRC, Bytes16);.
Frame format for serial transmission
From the documentation you referenced there are two types of frames or messages whose formats are as follows:
Command frame:
module address (1 byte) unique address of each module in network
frame length (1 byte) full length of frame (includes 2 byte checksum)
command (1 byte) command code which is an even value
parameters (variable length) optional parameters depending on command
CRCH (1 byte) upper byte of the CRC16
CRCL (1 byte) lower byte of the CRC16
Answer frame:
module address (1 byte) unique address of each module in network
frame length (1 byte) full length of frame (includes 2 byte checksum)
answer (1 byte) answer code which is an odd value
parameters (variable length) optional parameters depending on command
operation code (1 byte) command execution status
CRCH (1 byte) upper byte of the CRC16
CRCL (1 byte) lower byte of the CRC16
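Putting the frame format and the CRC routine together, a sketch of receiver-side validation (the function name is invented for illustration; CRC16 is the routine above):
/* Returns 1 if the trailing CRCH/CRCL match a CRC16 computed over
 * everything before them. frame[1] is the full frame length,
 * including the two checksum bytes. */
int frame_crc_ok(unsigned char *frame)
{
    unsigned char len = frame[1];
    unsigned short crc;

    CRC16(frame, &crc, len - 2);                   /* CRC over all but last 2 bytes */
    return ((crc >> 8) & 0xFF) == frame[len - 2]   /* CRCH */
        &&  (crc       & 0xFF) == frame[len - 1];  /* CRCL */
}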

uint64_t setting a range of bits to 1 [duplicate]

This question already has answers here:
bit shifting with unsigned long type produces wrong results
(4 answers)
Closed 5 years ago.
I am trying to create a method that changes a range of bits to all 1s, given a high bit, a low bit, and a source value. The code works for high bits 0 to 30, then it outputs incorrect numbers. The correct result for setBits(0, 31, 0) should be ffffffff instead of 0.
What is causing my code to reset to zero?
setBits(0,0,0): 1
setBits(0,1,0): 3
setBits(0,2,0): 7
setBits(0,3,0): f
setBits(0,4,0): 1f
setBits(0,5,0): 3f
setBits(0,6,0): 7f
setBits(0,7,0): ff
setBits(0,8,0): 1ff
setBits(0,9,0): 3ff
setBits(0,10,0): 7ff
setBits(0,11,0): fff
setBits(0,12,0): 1fff
setBits(0,13,0): 3fff
setBits(0,14,0): 7fff
setBits(0,15,0): ffff
setBits(0,16,0): 1ffff
setBits(0,17,0): 3ffff
setBits(0,18,0): 7ffff
setBits(0,19,0): fffff
setBits(0,20,0): 1fffff
setBits(0,21,0): 3fffff
setBits(0,22,0): 7fffff
setBits(0,23,0): ffffff
setBits(0,24,0): 1ffffff
setBits(0,25,0): 3ffffff
setBits(0,26,0): 7ffffff
setBits(0,27,0): fffffff
setBits(0,28,0): 1fffffff
setBits(0,29,0): 3fffffff
setBits(0,30,0): 7fffffff
setBits(0,31,0): 0
uint64_t setBits(unsigned low, unsigned high, uint64_t source)
{
    assert(high < 64 && (low <= high));
    uint64_t mask;
    mask = ((1 << (high-low + 1))-1) << low;
    uint64_t extracted = mask | source;
    return extracted;
}
You need to make the shifted constant 1 have type unsigned long long (or uint64_t) so that it doesn't overflow when bit-shifted.
mask = ((1ULL << (high - low + 1)) - 1) << low;
^^^
For the number 1 of type int, left-shifting by 32 bits overflows (formally, shifting a 32-bit int by 32 is undefined behavior; in practice it produced 0 here):
((1 << (high-low + 1))-1) // Where (high-low + 1) == 31 - 0 + 1 == 32
 ^
00000000 00000000 00000000 00000001 = 1
 v <-- Left shift by 32 bits --------<
(1) 00000000 00000000 00000000 00000000 = 0
But that would work for a 64-bit integer type. So change it to 1ULL and the problem is gone.
unsigned is unsigned int, a 32-bit type here, and the constant 1 is a signed int, so when you shift 1 << (high-low + 1) you are doing it on 32-bit integers.
Use the ull suffix to make your constants unsigned 64-bit ints during the shifts:
mask = ((1ull << (high-low + 1ull)) - 1ull) << low;
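Folding the fix back into the original function, a corrected sketch (note that high - low + 1 can reach 64, which would again be an out-of-range shift, so the all-bits case is special-cased here as an extra precaution the answers above don't mention):
#include <assert.h>
#include <stdint.h>

uint64_t setBits(unsigned low, unsigned high, uint64_t source)
{
    assert(high < 64 && low <= high);
    unsigned width = high - low + 1;
    /* 1ULL keeps the shift in 64-bit arithmetic; a shift by 64 is still
     * undefined, so the full 0..63 range gets all bits set directly. */
    uint64_t mask = (width == 64) ? ~(uint64_t)0
                                  : ((1ULL << width) - 1) << low;
    return source | mask;
}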

How is the carry flag being set in this assembly code?

Given the following assembly code for a 16-bit PRNG function,
$80/8111 E2 20 SEP #$20 ; set 8-bit mode accumulator
$80/8113 AD E5 05 LDA $05E5 ; load low byte of last random number
$80/8116 8D 02 42 STA $4202
$80/8119 A9 05 LDA #$05 ; multiply it by 5
$80/811B 8D 03 42 STA $4203
$80/811E EA NOP
$80/811F C2 20 REP #$20 ; set 16-bit mode accumulator
$80/8121 AD 16 42 LDA $4216 ; load the resultant product
$80/8124 48 PHA ; push it onto the stack
$80/8125 E2 20 SEP #$20 ; 8-bit
$80/8127 AD E6 05 LDA $05E6 ; load high byte of last random number
$80/812A 8D 02 42 STA $4202
$80/812D A9 05 LDA #$05 ; multiply by 5
$80/812F 8D 03 42 STA $4203
$80/8132 EB XBA ; exchange high and low bytes of accumulator
$80/8133 EA NOP
$80/8134 AD 16 42 LDA $4216 ; load low byte of product
$80/8137 38 SEC
$80/8138 63 02 ADC $02,s ; add to it the high byte of the original product
$80/813A 83 02 STA $02,s ; save it to the high byte of the original product
$80/813C C2 20 REP #$20 ; 16-bit
$80/813E 68 PLA ; pull it from the stack
$80/813F 69 11 00 ADC #$0011 ; add 11
$80/8142 8D E5 05 STA $05E5 ; save as new random number
$80/8145 6B RTL
a user by the name of #sagara translated the code to C:
#include <stdint.h>

#define LOW(exp) ((exp) & 0x00FF)
#define HIGH(exp) (((exp) & 0xFF00) >> 8)

uint16_t prng(uint16_t v) {
    uint16_t low = LOW(v);
    uint16_t high = HIGH(v);
    uint16_t mul_low = low * 5;
    uint16_t mul_high = high * 5;
    // need to check for overflow, since final addition is adc as well
    uint16_t v1 = LOW(mul_high) + HIGH(mul_low) + 1;
    uint8_t carry = HIGH(v1) ? 1 : 0;
    uint16_t v2 = (LOW(v1) << 8) + LOW(mul_low);
    return (v2 + 0x11 + carry);
}
I'm confused by two things.
In this line...
uint16_t v1 = LOW(mul_high) + HIGH(mul_low) + 1;
Why is there a + 1? I think it's because of the ADC operation, but how can we be sure that the carry flag is set to 1? What previous operation would guarantee this? The XBA? I read a few posts such as Assembly ADC (Add with carry) to C++ and Overflow and Carry flags on Z80, but it's not clear to me because the instruction set is different and I'm not familiar with 65C816 assembly. (This is from a popular 1994 SNES game whose NA release anniversary recently passed; free upvote to the correct guess :-)
In the next line...
uint8_t carry = HIGH(v1) ? 1 : 0;
Why would it work this way? I read this as, "Set the carry flag if and only if the high byte is non-zero." But wouldn't the indication of an overflow be only if the high byte is zero? (I'm probably misinterpreting what the line is doing.)
Thanks in advance for any insights.
but how can we be sure that the carry flag is set to 1? What previous operation would guarantee this?
$80/8137 38 SEC ; SEt Carry flag
uint8_t carry = HIGH(v1) ? 1 : 0;
Why would it work this way? I read this as, "Set the carry flag if and only if the high byte is non-zero." But wouldn't the indication of an overflow be only if the high byte is zero?
The addition ADC #$0011 is using the carry from ADC $02,s. When ADC $02,s is performed, the accumulator is set to 8-bit (because of SEP #$20), so the carry flag will be set if the result of ADC $02,s would've exceeded 8 bits (i.e. if you would've got something >= $100 in 16-bit mode).
In the C version you've got a 16-bit variable (v1) to hold the result, so your carry will be in bit 8 of v1, which you can test with HIGH(v1) == 1, or simply HIGH(v1) since it will either be 1 or 0.
1) The line
$80/8137 38 SEC
is explicitly setting the carry just before an ADC add-with-carry instruction; that's why there is a +1 in the C code.
2) The processor has an 8-bit accumulator, and the addition will overflow to the carry, ready for the next ADC instruction. However, the C code is using a 16-bit variable v1, and the carry remains in the upper 8 bits. Hence the test of those upper 8 bits to extract the so-called "carry".

Binary 0000 to FFFF using C

I am trying to use C to write binary data to a .bin file, iterating from 0000 to FFFF. I figured I would use fopen with the "wb" mode and then be able to write binary data, but I'm unsure how to iterate from 0000 to FFFF in C. Thanks for any help.
Here's my code now:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    FILE *f = fopen("binary.bin", "wb");
    unsigned long i;

    //if(f == NULL) { ...error handling... }
    for (i = 0x0000; i <= 0xFFFF; i++) {
        // Write something to the file, e.g. the 16-bit (2 byte) value of "i"
        unsigned short someData = i;
        fwrite(&someData, 1, 2, f);
    }
    fclose(f);
    return 0;
    //printf("Hello World\n");
    getchar();
}
This will output 00 00 01 00 02 00 ...
Here's my question now. Isn't this supposed to read out 00 00 00 01 00 02...? Shouldn't there be an extra '00' at the beginning?
Also, I've been trying to see how I could copy and extend it, making it 0000 0000 0001 0001, etc.
[Update: I just copied the fwrite line and did it again and it solved this problem]
This is a simple example of writing some binary numbers to a file.
FILE *f = fopen("yourfile", "wb");
if (f == NULL) { /* ...error handling... */ }
for (unsigned long i = 0x0000; i <= 0xFFFF; ++i)
{
    // Write something to the file, e.g. the 16-bit (2 byte) value of "i"
    unsigned short someData = i;
    fwrite(&someData, 1, 2, f);
}
fclose(f);
Note that the variable i here must be bigger than 16 bits so that it does not wrap around (see my comments on the other answers). The long type guarantees a size of at least 32 bits.
for (int i = 0x0000; i <= 0xffff; ++i)
To loop from 0 to 0xffff, both inclusive, you do:
for (i=0; i <= 0xffff; ++i)
Now, the first interesting question is: what should be the type of i? In C, unsigned int is only guaranteed to hold values in the range [0, 0xffff]. That means that if UINT_MAX is 0xffff, the condition i <= 0xffff is always true for an unsigned int i and the loop never terminates. So i can't be of a type whose size is smaller than or equal to unsigned int. long or unsigned long is the smallest type guaranteed to be able to store 0xffff + 1 portably, so we need i to be of type long or unsigned long. In C99, you can make things easier by including stdint.h and then using the uint32_t type.
The second interesting question is, what do you want to write? Is your file's layout going to be:
00 00 00 01 00 02 00 03 00 04 00 05 00 06 00 07
...
FF F8 FF F9 FF FA FF FB FF FC FF FD FF FE FF FF
or do you want to write values to a file using your favorite data type above and then be able to read them back again quickly? For example, if int is 32 bits, and your system is little-endian, writing those values will give you a file such as:
00 00 00 00 01 00 00 00 02 00 00 00 03 00 00 00 ...
If you want the first, you have to make sure you write two bytes per number, in the correct order, and that endian-ness of your OS doesn't affect the output. The easiest way to do so is probably something like this:
for (i = 0; i <= 0xffff; ++i) {
    unsigned char values[2];
    values[0] = (i & 0xff00) >> 8;
    values[1] = i & 0xff;
    fwrite(values, 1, 2, fp);
}
If you want the second, your life is easier, particularly if you don't care about endian-ness:
for (i = 0; i <= 0xffff; ++i) {
    fwrite(&i, sizeof i, 1, fp);
}
will write your values so you can read them back on the same system with the same kind of variable.
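For completeness, a sketch of reading those values back on the same system (same caveats about type size and endianness apply; the file name matches the asker's program):
#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("binary.bin", "rb");
    unsigned long i;

    if (fp == NULL)
        return 1;
    /* Read back with the same type the values were written with. */
    while (fread(&i, sizeof i, 1, fp) == 1)
        printf("%lu\n", i);
    fclose(fp);
    return 0;
}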
for (i = 0x0000; i <= 0xFFFF; ++i)
To control the endianness of your output, you will have to write the bytes (octets) yourself:
for (unsigned int i = 0; // Same as 0x0000
     i <= 0xFFFF;
     ++i)
{
    unsigned char c;

    c = i / 256;   // In Big Endian, output the Most Significant Byte (MSB) first.
    fputc(c, fp);  // Assuming FILE *fp is open for writing.
    c = i % 256;   // Then the Least Significant Byte (LSB).
    fputc(c, fp);
}
This is a preferred method when the file must be Big Endian. It will ensure the byte ordering regardless of the processor's endianness. It can be adjusted to output in Little Endian as well.
Alternate method for portably writing bytes in big endian style: check out htons and htonl (and their inverses).
These convert from whatever format your machine uses (Intel chips are little endian, as several people have pointed out) into "Network" order (big endian). htons does this in 16-bit words; htonl in 32-bit words. As an added benefit, if your program is on a Big Endian machine, these compile out to no-ops. They're defined in <arpa/inet.h> or <netinet/in.h>, depending on the system.
BSD and Linux also provide a collection of routines with names like htobe16 (host to big-endian, 16-bit) in <endian.h>.
These also help save the overhead of writing one byte at a time.
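A sketch of that approach for this task (htons is POSIX; writing the converted value with fwrite emits big-endian byte pairs regardless of host order):
#include <arpa/inet.h>  /* htons */
#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("binary.bin", "wb");
    unsigned long i;

    if (fp == NULL)
        return 1;
    for (i = 0; i <= 0xFFFF; ++i) {
        unsigned short be = htons((unsigned short)i);  /* big-endian 16 bits */
        fwrite(&be, sizeof be, 1, fp);
    }
    fclose(fp);
    return 0;
}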
If you do want to extract high bytes / low bytes yourself, you probably should also use bit masking to do it. Your compiler might be smart enough to convert divide/modulo into bit masks, but if it doesn't, you'll have deplorable performance (division is slow).
{
    unsigned int x = 0xdead;
    unsigned char hi = (x & 0xff00) >> 8;
    unsigned char lo = (x & 0x00ff);
}

{
    unsigned long int x = 0xdeadbeef;
    unsigned char by0 = (x & 0xff000000) >> 24;
    unsigned char by1 = (x & 0x00ff0000) >> 16;
    unsigned char by2 = (x & 0x0000ff00) >> 8;
    unsigned char by3 = (x & 0x000000ff);
}
(It looks like gcc is smart enough to optimize the division away, though… nice.)
