Why doesn't my bit left shift? - c

I'm using embedded C on Keil. I'm trying to program it so that it stores a bit, shifts left, stores the next bit, and repeats until all eight bits are stored.
However, when I debug (maybe I'm debugging wrongly), the value only shows "01 00 00 00 00 00 00...". When it stores logic '1' and then shifts left, it shows "02 00 00 00 00 00 00...". When the loop repeats, it shows the same thing over and over again. What I expected was "01 01 01 01 01 01 01..." (say all the input bits were '1'). How do I solve this problem?
#include <reg51.h>

sbit Tsignal = P1^2;
unsigned char xdata x[500];

for(u=0; u<8; u++)
{
    x[i] = x[i] << 1;
    x[i] = Tsignal; //Store Tsignal in x
}
Ah, I have solved it already.
unsigned int u;
unsigned char p;
unsigned char xdata x[500];

for(u=0; u<8; u++)    //Bit Shift Loop
{
    x[i] = x[i] << 1; //Left Bit Shift by 1
    p = Tsignal;      //Store Tsignal to Buffer p
    x[i] |= p;
}                     //End Bitshift loop

I think you want to do something like this:
for(u=0; u<8; u++)
{
    // Update Tsignal.
    //Tsignal = GetBitValue();
    // Store it to x.
    x = (x << 1) | (Tsignal & 0x1);
}
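For reference, the whole pattern can be wrapped up in one routine. This is a minimal sketch, not Keil-specific: ReadTsignal() is a hypothetical stand-in for sampling the pin (on the 8051 target it would just read Tsignal), and the first bit sampled ends up as the most significant bit:

/* Hypothetical pin read; replace with the real Tsignal access. */
extern unsigned char ReadTsignal(void);

unsigned char SampleByte(void)
{
    unsigned char value = 0;
    unsigned char n;
    for (n = 0; n < 8; n++)
    {
        value <<= 1;                /* make room for the next bit   */
        value |= ReadTsignal() & 1; /* OR the sampled bit into bit 0 */
    }
    return value; /* first bit sampled is now the MSB */
}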

Why is the final pointer being aligned to the size of int?

Here's the code under consideration:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
char buffer[512];
int pos;
int posf;
int i;
struct timeval *tv;
int main(int argc, char **argv)
{
    pos = 0;
    for (i = 0; i < 512; i++) buffer[i] = 0;
    for (i = 0; i < 4; i++)
    {
        printf("pos = %d\n", pos);
        *(int *)(buffer + pos + 4) = 0x12345678;
        pos += 9;
    }
    for (i = 0; i < 9 * 4; i++)
    {
        printf(" %02X", (int)(unsigned char)*(buffer + i));
        if ((i % 9) == 8) printf("\n");
    }
    printf("\n");
    // ---
    pos = 0;
    for (i = 0; i < 512; i++) buffer[i] = 0;
    *(int *)(buffer + 4) = 0x12345678;
    *(int *)(buffer + 9 + 4) = 0x12345678;
    *(int *)(buffer + 18 + 4) = 0x12345678;
    *(int *)(buffer + 27 + 4) = 0x12345678;
    for (i = 0; i < 9 * 4; i++)
    {
        printf(" %02X", (int)(unsigned char)*(buffer + i));
        if ((i % 9) == 8) printf("\n");
    }
    printf("\n");
    return 0;
}
And the output of the code is
pos = 0
pos = 9
pos = 18
pos = 27
00 00 00 00 78 56 34 12 00
00 00 00 78 56 34 12 00 00
00 00 78 56 34 12 00 00 00
00 78 56 34 12 00 00 00 00
00 00 00 00 78 56 34 12 00
00 00 00 00 78 56 34 12 00
00 00 00 00 78 56 34 12 00
00 00 00 00 78 56 34 12 00
I cannot see why
*(int *)(buffer + pos + 4) = 0x12345678;
is being placed at an address aligned to the size of int (4 bytes). I expected the following actions during the execution of this statement:
the pointer to buffer, which is char*, is increased by the value of pos (0, 9, 18, 27) and then by 4; the resulting pointer is a char* pointing to char array index [pos + 4];
the char* pointer in the parentheses is converted to int*, so the resulting pointer addresses a 4-byte integer at base location (buffer + pos + 4), integer array index [0];
the resulting int* location is stored with the bytes 78 56 34 12, in this order (little-endian system).
Instead I see the pointer in the parentheses being aligned to the size of int (4 bytes), although direct addressing using constants (see the second piece of code) works as expected.
target CPU is i.MX287 (ARM9);
target operating system is OpenWrt Linux [...] 3.18.29 #431 Fri Feb 11 15:57:31 2022 armv5tejl GNU/Linux;
compiled on Linux [...] 4.15.0-142-generic #146~16.04.1-Ubuntu SMP Tue Apr 13 09:27:15 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux, installed in Virtual machine;
GCC compiler version gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
I compile as a part of whole system image compilation, flags are CFLAGS = -Os -Wall -Wmissing-declarations -g3.
Update: thanks to Andrew Henle, I now replace
*(int*)(buffer + pos + 4) = 0x12345678;
with
buffer[pos + 4] = value & 0xff;
buffer[pos + 5] = (value >> 8) & 0xff;
buffer[pos + 6] = (value >> 16) & 0xff;
buffer[pos + 7] = (value >> 24) & 0xff;
and I can't believe I must do this on a 32-bit microprocessor system, whatever its architecture, and that GCC is not able to properly slice an int into bytes or partial words and perform a read-modify-write for those parts.
the char* pointer in the parentheses is converted to int*, so the resulting pointer addresses a 4-byte integer at base location (buffer + pos + 4), integer array index [0]
This incurs undefined behavior (UB) when the alignment requirements of int * are not met. (On ARMv5-class cores, an unaligned word store simply ignores the low address bits, which matches the "snapping" to aligned addresses observed above.)
Instead, copy with memcpy(). A good compiler will emit valid optimized code.
// *(int*)(buffer + pos + 4) = 0x12345678;
memcpy(buffer + pos + 4, &(int){0x12345678}, sizeof (int));
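If the file or wire format additionally pins down the byte order (not just alignment), the byte-at-a-time stores from the update generalize into a small helper. A minimal sketch, assuming a little-endian layout is wanted; store_le32 is a name invented here:

#include <stdint.h>

/* Store a 32-bit value at any (possibly unaligned) address,
   least significant byte first, regardless of host endianness. */
static void store_le32(unsigned char *p, uint32_t value)
{
    p[0] = (unsigned char)(value & 0xff);
    p[1] = (unsigned char)((value >> 8) & 0xff);
    p[2] = (unsigned char)((value >> 16) & 0xff);
    p[3] = (unsigned char)((value >> 24) & 0xff);
}

Used as store_le32((unsigned char *)buffer + pos + 4, 0x12345678); it has no alignment requirement, and compilers typically recognize the pattern and emit a single store where the target allows it.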

How is the OSPF checksum calculated?

I'm having trouble getting an accurate checksum calculated. My router is rejecting my Hello messages because of it. Here's the packet in hex (starting from the OSPF header):
                                         vv vv <-- my program's checksum
0000 02 01 00 30 c0 a8 03 0a 00 00 00 00 f3 84 00 00   ...0............
0010 00 00 00 00 00 00 00 00 ff ff ff 00 00 0a 00 01   ................
0020 00 00 00 28 c0 a8 03 0a c0 a8 03 0a               ...(........
Wireshark calls 0xf384 bogus and says the expected value is 0xb382. I've confirmed that my algorithm picks the bytes in pairs, in the right order, and adds them together:
0201 + 0030 + c0a8 + ... + 030a
There is no authentication to worry about. We weren't even taught how to set it up. Lastly, here's my shot at calculating the checksum:
OspfHeader result;
result.Type = type;
result.VersionNumber = 0x02;
result.PacketLength = htons(24 + content_len);
result.AuthType = htons(auth_type);
result.AuthValue = auth_val;
result.AreaId = area_id;
result.RouterId = router_id;

uint16_t header_buffer[12];
memcpy(header_buffer, &result, 24);

uint32_t checksum;
for(int i = 0; i < 8; i++)
{
    checksum += ntohs(header_buffer[i]);
    checksum = (checksum & 0xFFFF) + (checksum >> 16);
}
for(int i = 0; i + 1 < content_len; i += 2)
{
    checksum += (content_buffer[i] << 8) + content_buffer[i+1];
    checksum = (checksum & 0xFFF) + (checksum >> 16);
}
result.Checksum = htons(checksum xor 0xFFFF);
return result;
I'm not sure where exactly I messed up here. Any clues?
Mac_3.2.57$cat ospfCheckSum2.c
#include <stdio.h>

int main(void){
    // array was wrong len
    unsigned short header_buffer[22] = {
        0x0201, 0x0030, 0xc0a8, 0x030a,
        0x0000, 0x0000, 0x0000, 0x0000,
        0x0000, 0x0000, 0x0000, 0x0000,
        0xffff, 0xff00, 0x000a, 0x0001,
        0x0000, 0x0028, 0xc0a8, 0x030a,
        0xc0a8, 0x030a
    };
    // note ospf len is also wrong, BTW
    // was not init'd
    unsigned int checksum = 0;
    // was only scanning 8 vs 22
    for(int i = 0; i < 22; i++)
    {
        checksum += header_buffer[i];
        checksum = (checksum & 0xFFFF) + (checksum >> 16);
    }
    // len not odd, so this is not needed
    //for(int i = 0; i + 1 < content_len; i += 2)
    //{
    //    checksum += (content_buffer[i] << 8) + content_buffer[i+1];
    //    checksum = (checksum & 0xFFF) + (checksum >> 16);
    //}
    printf("result is %04x\n", checksum ^ 0xFFFF); // no "xor" for me (why not do ~ instead?)
    // also: where/what is: content_len, content_buffer, result, OspfHeader
    return(0);
}
Mac_3.2.57$cc ospfCheckSum2.c
Mac_3.2.57$./a.out
result is b382
Mac_3.2.57$
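For reuse, the same fold-and-add can be packaged as one routine over the raw packet bytes, with the 16-bit checksum field zeroed before summing. A minimal, untested sketch of the ones' complement sum used above (the function name is invented here; note that OSPF proper also excludes the 8-byte authentication field from the sum, which is all zeros in this Hello):

#include <stddef.h>
#include <stdint.h>

/* Ones' complement sum over big-endian 16-bit words, folded to 16 bits.
   The packet's checksum field must hold zero when this is computed. */
static uint16_t ones_complement_checksum(const unsigned char *data, size_t len)
{
    uint32_t sum = 0;
    size_t i;
    for (i = 0; i + 1 < len; i += 2)
    {
        sum += ((uint32_t)data[i] << 8) | data[i + 1];
        sum = (sum & 0xFFFF) + (sum >> 16); /* fold the carry back in */
    }
    if (len & 1) /* odd trailing byte, treated as padded with zero */
    {
        sum += (uint32_t)data[len - 1] << 8;
        sum = (sum & 0xFFFF) + (sum >> 16);
    }
    return (uint16_t)(~sum & 0xFFFF);
}

Running it over the 44 bytes dumped above (with the checksum field zeroed, as in the answer's array) should reproduce 0xb382.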

Converting a floating point number in C to IEEE standard

I am trying to write code in C that generates random integers, performs a simple calculation, and then prints the values to a file in IEEE format, but I am unable to do so. Please help.
I am unable to print in hexadecimal/binary, which is very important.
If I typecast the values in fprintf, I get this error: expected expression before 'double'.
int main (int argc, char *argv) {
    int limit = 20;
    double a[limit], b[limit]; //Inputs
    double result[limit];      //Outputs
    int i, k;
    printf("limit = %d", limit);
    double q;
    for (i = 0; i < limit; i++)
    {
        a[i] = rand();
        b[i] = rand();
        printf ("A= %x B = %x\n", a[i], b[i]);
    }
    char op;
    printf("Enter the operand used : add,subtract,multiply,divide\n");
    scanf ("%c", &op);
    switch (op) {
    case '+': {
        for (k = 0; k < limit; k++)
        {
            result[k] = a[k] + b[k];
            printf ("result= %f\n", result[k]);
        }
    }
    break;
    case '*': {
        for (k = 0; k < limit; k++)
        {
            result[k] = a[k] * b[k];
        }
    }
    break;
    case '/': {
        for (k = 0; k < limit; k++)
        {
            result[k] = a[k] / b[k];
        }
    }
    break;
    case '-': {
        for (k = 0; k < limit; k++)
        {
            result[k] = a[k] - b[k];
        }
    }
    break;
    }
    FILE *file;
    file = fopen("tb.txt", "w");
    for (k = 0; k < limit; k++) {
        fprintf (file, "%x\n%x\n%x\n\n", double(a[k]), double(b[k]), double(result[k]));
    }
    fclose(file); /*done!*/
}
If your C compiler supports IEEE-754 floating point format directly (because the CPU supports it) or fully emulates it, you may be able to print doubles simply as bytes. And that is the case for the x86/64 platform.
Here's an example:
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <math.h>
#include <float.h>

void PrintDoubleAsCBytes(double d, FILE* f)
{
    unsigned char a[sizeof(d)];
    unsigned i;
    memcpy(a, &d, sizeof(d));
    for (i = 0; i < sizeof(a); i++)
        fprintf(f, "%0*X ", (CHAR_BIT + 3) / 4, a[i]);
}

int main(void)
{
    PrintDoubleAsCBytes(0.0, stdout); puts("");
    PrintDoubleAsCBytes(0.5, stdout); puts("");
    PrintDoubleAsCBytes(1.0, stdout); puts("");
    PrintDoubleAsCBytes(2.0, stdout); puts("");
    PrintDoubleAsCBytes(-2.0, stdout); puts("");
    PrintDoubleAsCBytes(DBL_MIN, stdout); puts("");
    PrintDoubleAsCBytes(DBL_MAX, stdout); puts("");
    PrintDoubleAsCBytes(INFINITY, stdout); puts("");
#ifdef NAN
    PrintDoubleAsCBytes(NAN, stdout); puts("");
#endif
    return 0;
}
Output (ideone):
00 00 00 00 00 00 00 00
00 00 00 00 00 00 E0 3F
00 00 00 00 00 00 F0 3F
00 00 00 00 00 00 00 40
00 00 00 00 00 00 00 C0
00 00 00 00 00 00 10 00
FF FF FF FF FF FF EF 7F
00 00 00 00 00 00 F0 7F
00 00 00 00 00 00 F8 7F
If IEEE-754 isn't supported directly, the problem becomes more complex. However, it can still be solved.
Here are a few related questions and answers that can help:
How do I handle byte order differences when reading/writing floating-point types in C?
Is there a tool to know whether a value has an exact binary representation as a floating point variable?
C dynamically printf double, no loss of precision and no trailing zeroes
And, of course, all the IEEE-754 related info can be found in Wikipedia.
Try this in your fprintf part:
fprintf (file,"%x\n%x\n%x\n\n",*((int*)(&a[k])),*((int*)(&b[k])),*((int*)(&result[k])));
That reinterprets the double's storage as an integer, so the IEEE bit pattern is printed.
But if you're running your program on a 32-bit machine on which int is 32-bit and double is 64-bit, I suppose you should use:
fprintf (file,"%x%x\n%x%x\n%x%x\n\n",*((int*)(&a[k])),*((int*)(&a[k])+1),*((int*)(&b[k])),*((int*)(&b[k])+1),*((int*)(&result[k])),*((int*)(&result[k])+1));
In C, there are two ways to get at the bytes in a float value: a pointer cast, or a union. I recommend a union.
I just tested this code with GCC and it worked:
#include <stdio.h>

typedef unsigned char BYTE;

int
main()
{
    float f = 3.14f;
    int i = sizeof(float) - 1;
    BYTE *p = (BYTE *)(&f);
    p[i] = p[i] | 0x80; // set the sign bit
    printf("%f\n", f); // prints -3.140000
}
We are taking the address of the variable f, then assigning it to a pointer to BYTE (unsigned char). We use a cast to force the pointer.
If you try to compile code with optimizations enabled and you do the pointer cast shown above, you might run into the compiler complaining about "type-punned pointer" issues. I'm not exactly sure when you can do this and when you can't. But you can always use the other way to get at the bits: put the float into a union with an array of bytes.
#include <stdio.h>

typedef unsigned char BYTE;

typedef union
{
    float f;
    BYTE b[sizeof(float)];
} UFLOAT;

int
main()
{
    UFLOAT u;
    int const i = sizeof(float) - 1;
    u.f = 3.14f;
    u.b[i] = u.b[i] | 0x80; // set the sign bit
    printf("%f\n", u.f); // prints -3.140000
}
What definitely will not work is to try to cast the float value directly to an unsigned integer or something like that. C doesn't know you just want to override the type, so C tries to convert the value, causing rounding.
float f = 3.14;
unsigned int i = (unsigned int)f;
if (i == 3)
    printf("yes\n"); // will print "yes"
P.S. Discussion of "type-punned" pointers here:
Dereferencing type-punned pointer will break strict-aliasing rules
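On C99 and later there is also a compact variant of the memcpy approach from the first answer: copy the double into a fixed-width integer and print that in one go. A minimal sketch, assuming double is 64 bits wide and shares the machine's byte order:

#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    double d = 3.14;
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);   /* well-defined, unlike a pointer cast */
    printf("%016" PRIx64 "\n", bits); /* the IEEE-754 bit pattern as hex */
    return 0;
}

This avoids both the aliasing trouble of the pointer casts and the two-halves %x%x juggling, since PRIx64 matches the 64-bit type exactly.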

(C) how to fix this algorithm for z827 ASCII compression?

noob warning.
I'm trying to create a compression program. It takes a .txt file of ASCII characters as an argument and cuts off the leading 0 of each character's binary representation.
It does this by using the low 2 bytes of two different integers. A character with a leading zero is put into the 4th byte of the integer 'writer', and the next character is put into the 3rd byte of the integer 'temp'. The 'temp' int is then shifted to the right, and OR'd with 'writer', so that the leading-zero slot is filled with data we need. This repeats, with the shift count increasing after every character. The first case is a bit odd. The algorithm isn't very complex if written out on paper.
I feel like I've tried everything. I've been over the algorithm so many times. I'm pretty sure the problem is when shift_count gets to 8, but it should work fine. It just doesn't. I can show you here (the code is further down):
This is the hex dump of my output:
0000000 3f 00 00 00 41 10 68 9e 6e c3 d9 65 10 88 5e c6
0000020 d3 41 e6 74 9a 5d 06 d1 df a0 7a 7d 5e 06 a5 dd
0000040 20 3a bd 3c a7 a7 dd 67 10 e8 5d a7 83 e8 e8 72
0000060 19 a4 c7 c9 6e a0 f1 f8 dd 86 cb cb f3 f9 3c
0000077
And the correct output:
0000000 3f 00 00 00 41 d0 3c dd 86 b3 cb 20 7a 19 4f 07
0000020 99 d3 ec 32 88 fe 06 d5 e7 65 50 da 0d a2 97 e7
0000040 f4 b4 fb 0c 7a d7 e9 20 3a ba 0c d2 e3 64 37 d0
0000060 f8 dd 86 cb cb f3 79 fa ed 76 29 00 0a 0a
0000076
code:
int compress(char *filename_ptr){
    int in_fd;
    in_fd = open(filename_ptr, O_RDONLY);

    //set pointer to the end of the file, find file size, then reset position
    //by closing/opening
    unsigned int file_bytes = lseek(in_fd, 0, SEEK_END);
    close(in_fd);
    in_fd = open(filename_ptr, O_RDONLY);

    //store file contents in buffer
    unsigned char read_buffer[file_bytes];
    read(in_fd, read_buffer, file_bytes);

    //file where the output will be stored
    int out_fd;
    creat("output.txt", 0644);
    out_fd = open("output.txt", O_WRONLY);

    //sets file size in header (needed for decompression; this is the size of
    //the file before compression. everything we write after this 4-byte int
    //is a 1-byte char)
    write(out_fd, &file_bytes, 4);

    unsigned int writer;
    unsigned int temp;
    unsigned char out_char;
    int i;
    int shift_count = 8;
    for(i = 0; i < file_bytes; i++){
        if(shift_count == 8){
            writer = read_buffer[i];
            temp = temp & 0x00000000;
            temp = read_buffer[i+1] << 8;
            shift_count = 1;
        }else{
            //moves the next char's bits to the left, for the purpose of filling
            //the 8 bit buffer (writer) via OR operation
            temp = read_buffer[i] << 8;
        }
        temp = temp >> shift_count;
        writer = writer | temp;
        //output right byte of writer
        unsigned int right_byte = writer & 0x000000ff;
        //output right_byte as a char
        out_char = (char) right_byte;
        //write_buffer[i] = out_char;
        write(out_fd, &out_char, 1);
        //clear right side of writer
        writer = writer & 0x0000ff00;
        //shift left side of writer to the right by 8
        writer = writer >> 8;
        shift_count++;
    }
    return 0;
}
It seems to me that input and output are too strongly coupled.
At some point, the program should be reading (roughly) the 80th octet from the input and writing (roughly) the 70th octet to the output, because you want to (on average) write 7 bits out for every 8 bits you read in, right?
What the loop
for(i = 0; i < file_bytes; i++){
    ...
    ... = read_buffer[i];
    ...
    write(out_fd, &out_char, 1);
    ...
}
actually seems to be doing is:
On the 70th pass through the loop -- when 70==i --
it's reading the 70th octet from the input and writing the 70th octet to the output.
On the 80th pass through the loop -- when 80==i --
it's reading the 80th octet from the input and writing the 80th octet to the output.
You must decide:
Do you want "i" to represent the number of input characters processed, or the number of output chars processed?
Because it's not possible to do both -- it's not possible to have 70 equal 80.
Perhaps something like this is closer to what you wanted:
/* test.c
http://stackoverflow.com/questions/15080239/c-how-to-fix-this-algorithm-for-z827-ascii-compression
WARNING: untested code.
*/
int compress(char *filename_ptr){
int in_fd;
in_fd = open(filename_ptr, O_RDONLY);
//set pointer to the end of the file, find file size, then reset position
//by closing/opening
unsigned int file_bytes = lseek(in_fd, 0, SEEK_END);
close(in_fd);
in_fd = open(filename_ptr, O_RDONLY);
//store file contents in buffer
unsigned char read_buffer[file_bytes];
read(in_fd, read_buffer, file_bytes);
//file where the output will be stored
int out_fd;
creat("output.txt", 0644);
out_fd = open("output.txt", O_WRONLY);
//sets file size in header (needed for decompression, this is the size of the
//file before compression. everything after this we write this 4-byte int
//is a 1 byte char
write(out_fd, &file_bytes, 4);
unsigned int writer;
unsigned int temp;
unsigned char out_char;
int i;
int writer_bits = 0; // 0 bits of data in writer so far
for(i = 0; i < file_bytes; i++){
// i is the number of (7 bit ASCII) characters
// read from the input so far.
// add 7 more bits to the writer
temp = read_buffer[i];
//moves the next char's bits to the left, for the purpose of filling the
//8 bit buffer (writer) via OR operation
//(avoid overwriting the "writer_bits" of good bits
//already in the buffer).
temp = read_buffer[i] << writer_bits;
writer = writer | temp;
writer_bits = writer_bits + 7;
//output right byte of writer
unsigned int right_byte = writer & 0x000000ff;
//output right_byte as a char
out_char = (unsigned char) right_byte;
// output 8 bits of data whenever
// we have *at least* 8 bits of data in the writer buffer.
if(writer_bits >= 8){
//write_buffer[i] = out_char;
write(out_fd, &out_char, 1);
//shift left side of writer to the right by 8
writer = writer >> 8;
writer_bits = writer_bits - 8;
}else{
// 7 or fewer bits in writer --
// skip writing until next time.
}
}
// is there any leftover bits still in writer?
if(writer_bits > 0){
//write_buffer[i] = out_char;
write(out_fd, &out_char, 1);
}
return 0;
}
(Currently the program reads the entire input file into RAM, then writes the entire output file. Some programmers prefer to read a little at a time, then write a little at a time. Both approaches have advantages and disadvantages).
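To check the packer end to end, a matching unpacker helps. Here's an equally untested sketch that mirrors the loop above: it refills a bit buffer 8 bits at a time and emits one 7-bit character whenever at least 7 bits are available. The names are invented here; in/in_len would be the packed bytes after the 4-byte header, and out_len the original size read back from that header:

#include <stddef.h>

static void unpack7(const unsigned char *in, size_t in_len,
                    unsigned char *out, size_t out_len)
{
    unsigned int bits = 0;  /* bit buffer */
    unsigned int nbits = 0; /* number of valid bits it holds */
    size_t i, o = 0;
    for (i = 0; i < in_len && o < out_len; i++)
    {
        bits |= (unsigned int)in[i] << nbits; /* append 8 new bits */
        nbits += 8;
        while (nbits >= 7 && o < out_len)
        {
            out[o++] = bits & 0x7f; /* low 7 bits = next character */
            bits >>= 7;
            nbits -= 7;
        }
    }
}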

How do I handle byte order differences when reading/writing floating-point types in C?

I'm devising a file format for my application, and I'd obviously like for it to work on both big-endian and little-endian systems. I've already found working solutions for managing integral types using htonl and ntohl, but I'm a bit stuck when trying to do the same with float and double values.
Given the nature of how floating-point representations work, I would assume that the standard byte-order functions won't work on these values. Likewise, I'm not even entirely sure if endianness in the traditional sense is what governs the byte order of these types.
All I need is consistency: a way to write a double out and ensure I get that same value when I read it back in. How can I do this in C?
Another option could be to use double frexp(double value, int *exp); from <math.h> (C99) to break the floating-point value down into a normalized fraction (in the range [0.5, 1)) and an integral power of 2. You can then multiply the fraction by FLT_RADIX^DBL_MANT_DIG to get an integer in the range [FLT_RADIX^DBL_MANT_DIG / 2, FLT_RADIX^DBL_MANT_DIG). Then you save both integers big- or little-endian, whichever you choose for your format.
When you load a saved number, you do the reverse operation and use double ldexp(double x, int exp); to multiply the reconstructed fraction by the power of 2.
This will work best when FLT_RADIX = 2 (virtually all systems, I suppose?) and DBL_MANT_DIG <= 64.
Care must be taken to avoid overflows.
Sample code for doubles:
#include <limits.h>
#include <float.h>
#include <math.h>
#include <string.h>
#include <stdio.h>

#if CHAR_BIT != 8
#error currently supported only CHAR_BIT = 8
#endif

#if FLT_RADIX != 2
#error currently supported only FLT_RADIX = 2
#endif

#ifndef M_PI
#define M_PI 3.14159265358979324
#endif

typedef unsigned char uint8;

/*
  10-byte little-endian serialized format for double:
  - normalized mantissa stored as 64-bit (8-byte) signed integer:
      negative range: (-2^53, -2^52]
      zero: 0
      positive range: [+2^52, +2^53)
  - 16-bit (2-byte) signed exponent:
      range: [-0x7FFE, +0x7FFE]
  Represented value = mantissa * 2^(exponent - 53)
  Special cases:
  - +infinity: mantissa = 0x7FFFFFFFFFFFFFFF, exp = 0x7FFF
  - -infinity: mantissa = 0x8000000000000000, exp = 0x7FFF
  - NaN:       mantissa = 0x0000000000000000, exp = 0x7FFF
  - +/-0:      only one zero supported
*/

void Double2Bytes(uint8 buf[10], double x)
{
    double m;
    long long im; // at least 64 bits
    int ie;
    int i;

    if (isnan(x))
    {
        // NaN
        memcpy(buf, "\x00\x00\x00\x00\x00\x00\x00\x00" "\xFF\x7F", 10);
        return;
    }
    else if (isinf(x))
    {
        if (signbit(x))
            // -inf
            memcpy(buf, "\x00\x00\x00\x00\x00\x00\x00\x80" "\xFF\x7F", 10);
        else
            // +inf
            memcpy(buf, "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x7F" "\xFF\x7F", 10);
        return;
    }

    // Split double into normalized mantissa (range: (-1, -0.5], 0, [+0.5, +1))
    // and base-2 exponent
    m = frexp(x, &ie); // x = m * 2^ie exactly for FLT_RADIX=2
                       // frexp() can't fail

    // Extract most significant 53 bits of mantissa as integer
    m = ldexp(m, 53); // can't overflow because
                      // DBL_MAX_10_EXP >= 37 equivalent to DBL_MAX_2_EXP >= 122
    im = trunc(m);    // exact unless DBL_MANT_DIG > 53

    // If the exponent is too small or too big, reduce the number to 0 or
    // +/- infinity
    if (ie > 0x7FFE)
    {
        if (im < 0)
            // -inf
            memcpy(buf, "\x00\x00\x00\x00\x00\x00\x00\x80" "\xFF\x7F", 10);
        else
            // +inf
            memcpy(buf, "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x7F" "\xFF\x7F", 10);
        return;
    }
    else if (ie < -0x7FFE)
    {
        // 0
        memcpy(buf, "\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00", 10);
        return;
    }

    // Store im as signed 64-bit little-endian integer
    for (i = 0; i < 8; i++, im >>= 8)
        buf[i] = (uint8)im;

    // Store ie as signed 16-bit little-endian integer
    for (i = 8; i < 10; i++, ie >>= 8)
        buf[i] = (uint8)ie;
}

void Bytes2Double(double* x, const uint8 buf[10])
{
    unsigned long long uim; // at least 64 bits
    long long im; // ditto
    unsigned uie;
    int ie;
    double m;
    int i;
    int negative = 0;
    int maxe;

    if (!memcmp(buf, "\x00\x00\x00\x00\x00\x00\x00\x00" "\xFF\x7F", 10))
    {
#ifdef NAN
        *x = NAN;
#else
        *x = 0; // NaN is not supported, use 0 instead (we could return an error)
#endif
        return;
    }

    if (!memcmp(buf, "\x00\x00\x00\x00\x00\x00\x00\x80" "\xFF\x7F", 10))
    {
        *x = -INFINITY;
        return;
    }
    else if (!memcmp(buf, "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x7F" "\xFF\x7F", 10))
    {
        *x = INFINITY;
        return;
    }

    // Load im as signed 64-bit little-endian integer
    uim = 0;
    for (i = 0; i < 8; i++)
    {
        uim >>= 8;
        uim |= (unsigned long long)buf[i] << (64 - 8);
    }
    if (uim <= 0x7FFFFFFFFFFFFFFFLL)
        im = uim;
    else
        im = (long long)(uim - 0x7FFFFFFFFFFFFFFFLL - 1) - 0x7FFFFFFFFFFFFFFFLL - 1;

    // Obtain the absolute value of the mantissa, make sure it's
    // normalized and fits into 53 bits, else the input is invalid
    if (im > 0)
    {
        if (im < (1LL << 52) || im >= (1LL << 53))
        {
#ifdef NAN
            *x = NAN;
#else
            *x = 0; // NaN is not supported, use 0 instead (we could return an error)
#endif
            return;
        }
    }
    else if (im < 0)
    {
        if (im > -(1LL << 52) || im <= -(1LL << 53))
        {
#ifdef NAN
            *x = NAN;
#else
            *x = 0; // NaN is not supported, use 0 instead (we could return an error)
#endif
            return;
        }
        negative = 1;
        im = -im;
    }

    // Load ie as signed 16-bit little-endian integer
    uie = 0;
    for (i = 8; i < 10; i++)
    {
        uie >>= 8;
        uie |= (unsigned)buf[i] << (16 - 8);
    }
    if (uie <= 0x7FFF)
        ie = uie;
    else
        ie = (int)(uie - 0x7FFF - 1) - 0x7FFF - 1;

    // If DBL_MANT_DIG < 53, truncate the mantissa
    im >>= (53 > DBL_MANT_DIG) ? (53 - DBL_MANT_DIG) : 0;
    m = im;
    m = ldexp(m, (53 > DBL_MANT_DIG) ? -DBL_MANT_DIG : -53); // can't overflow
    // because DBL_MAX_10_EXP >= 37 equivalent to DBL_MAX_2_EXP >= 122

    // Find out the maximum base-2 exponent and
    // if ours is greater, return +/- infinity
    frexp(DBL_MAX, &maxe);
    if (ie > maxe)
        m = INFINITY;
    else
        m = ldexp(m, ie); // underflow may cause a floating-point exception

    *x = negative ? -m : m;
}

int test(double x, const char* name)
{
    uint8 buf[10], buf2[10];
    double x2;
    int error1, error2;

    Double2Bytes(buf, x);
    Bytes2Double(&x2, buf);
    Double2Bytes(buf2, x2);

    printf("%+.15E '%s' -> %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
           x,
           name,
           buf[0],buf[1],buf[2],buf[3],buf[4],buf[5],buf[6],buf[7],buf[8],buf[9]);

    if ((error1 = memcmp(&x, &x2, sizeof(x))) != 0)
        puts("Bytes2Double(Double2Bytes(x)) != x");
    if ((error2 = memcmp(buf, buf2, sizeof(buf))) != 0)
        puts("Double2Bytes(Bytes2Double(Double2Bytes(x))) != Double2Bytes(x)");
    puts("");

    return error1 || error2;
}

int testInf(void)
{
    uint8 buf[10];
    double x, x2;
    int error;

    x = DBL_MAX;
    Double2Bytes(buf, x);
    if (!++buf[8])
        ++buf[9]; // increment the exponent beyond the maximum
    Bytes2Double(&x2, buf);

    printf("%02X %02X %02X %02X %02X %02X %02X %02X %02X %02X -> %+.15E\n",
           buf[0],buf[1],buf[2],buf[3],buf[4],buf[5],buf[6],buf[7],buf[8],buf[9],
           x2);

    if ((error = !isinf(x2)) != 0)
        puts("Bytes2Double(Double2Bytes(DBL_MAX) * 2) != INF");
    puts("");

    return error;
}

#define VALUE_AND_NAME(V) { V, #V }

const struct
{
    double value;
    const char* name;
} testData[] =
{
#ifdef NAN
    VALUE_AND_NAME(NAN),
#endif
    VALUE_AND_NAME(0.0),
    VALUE_AND_NAME(+DBL_MIN),
    VALUE_AND_NAME(-DBL_MIN),
    VALUE_AND_NAME(+1.0),
    VALUE_AND_NAME(-1.0),
    VALUE_AND_NAME(+M_PI),
    VALUE_AND_NAME(-M_PI),
    VALUE_AND_NAME(+DBL_MAX),
    VALUE_AND_NAME(-DBL_MAX),
    VALUE_AND_NAME(+INFINITY),
    VALUE_AND_NAME(-INFINITY),
};

int main(void)
{
    unsigned i;
    int errors = 0;

    for (i = 0; i < sizeof(testData) / sizeof(testData[0]); i++)
        errors += test(testData[i].value, testData[i].name);

    errors += testInf();

    // Test subnormal values. A floating-point exception may be raised.
    errors += test(+DBL_MIN / 2, "+DBL_MIN / 2");
    errors += test(-DBL_MIN / 2, "-DBL_MIN / 2");

    printf("%d error(s)\n", errors);

    return 0;
}
Output (ideone):
+NAN 'NAN' -> 00 00 00 00 00 00 00 00 FF 7F
+0.000000000000000E+00 '0.0' -> 00 00 00 00 00 00 00 00 00 00
+2.225073858507201E-308 '+DBL_MIN' -> 00 00 00 00 00 00 10 00 03 FC
-2.225073858507201E-308 '-DBL_MIN' -> 00 00 00 00 00 00 F0 FF 03 FC
+1.000000000000000E+00 '+1.0' -> 00 00 00 00 00 00 10 00 01 00
-1.000000000000000E+00 '-1.0' -> 00 00 00 00 00 00 F0 FF 01 00
+3.141592653589793E+00 '+M_PI' -> 18 2D 44 54 FB 21 19 00 02 00
-3.141592653589793E+00 '-M_PI' -> E8 D2 BB AB 04 DE E6 FF 02 00
+1.797693134862316E+308 '+DBL_MAX' -> FF FF FF FF FF FF 1F 00 00 04
-1.797693134862316E+308 '-DBL_MAX' -> 01 00 00 00 00 00 E0 FF 00 04
+INF '+INFINITY' -> FF FF FF FF FF FF FF 7F FF 7F
-INF '-INFINITY' -> 00 00 00 00 00 00 00 80 FF 7F
FF FF FF FF FF FF 1F 00 01 04 -> +INF
+1.112536929253601E-308 '+DBL_MIN / 2' -> 00 00 00 00 00 00 10 00 02 FC
-1.112536929253601E-308 '-DBL_MIN / 2' -> 00 00 00 00 00 00 F0 FF 02 FC
0 error(s)
Depending on the application it could be a good idea to use a plain text data format (a possibility being XML). If you don't want to waste disk space you can compress it.
XML is probably the most portable way to do it.
However, it appears that you already have most of the parser built, but are stuck on the float/double issue. I would suggest writing it out as a string (to whatever precision you desire) and then reading that back in.
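A minimal sketch of that text round trip; it assumes the C library performs correctly rounded decimal conversion, in which case 17 significant digits are enough to uniquely recover any IEEE-754 double:

#include <stdio.h>

int main(void)
{
    double out = 0.1, in = 0.0;
    char buf[64];
    snprintf(buf, sizeof buf, "%.17g", out); /* write with full precision */
    sscanf(buf, "%lf", &in);                 /* read it back */
    printf("%s round-trips: %d\n", buf, in == out); /* expect 1 */
    return 0;
}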
Unless all your target platforms use IEEE-754 floats (and doubles), no byte-swapping tricks will work for you.
If you guarantee that your implementations always treat serialized floating point representations in a specified format, then you will be fine (IEEE 754 is common).
Yes, architectures may order floating point numbers differently (e.g. in big or little endian). Therefore, you will want to somehow specify the endianness. This could be in the format's specification or variable and recorded in the file's data.
The last major pitfall is that alignment requirements for built-in types may vary. How your hardware/processor handles misaligned data is implementation-defined, so you may need to swap the bytes first and then move them to the destination float/double.
A library like HDF5 or even NetCDF is probably a bit heavyweight for this as High Performance Mark said, unless you also need the other features available in those libraries.
A lighter-weight alternative that only deals with the serialization would be e.g. XDR (see also the Wikipedia description). Many OSes supply XDR routines out of the box; if that is not enough, free-standing XDR libraries exist as well.
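If every producer and consumer is known to use IEEE-754 doubles, as several answers above assume, then the only thing left to fix is byte order, and the htonl idea extends naturally to 64 bits. A minimal sketch that pins the on-disk order to big-endian; it additionally assumes the floating-point byte order matches the integer byte order on each host (true on mainstream platforms, with old mixed-endian ARM FPA doubles as the classic exception):

#include <stdint.h>
#include <string.h>

/* Serialize an IEEE-754 double as 8 big-endian bytes. */
static void double_to_be_bytes(unsigned char out[8], double d)
{
    uint64_t bits;
    int i;
    memcpy(&bits, &d, sizeof bits);
    for (i = 0; i < 8; i++)
        out[i] = (unsigned char)(bits >> (56 - 8 * i));
}

/* Deserialize 8 big-endian bytes back into a double. */
static double double_from_be_bytes(const unsigned char in[8])
{
    uint64_t bits = 0;
    double d;
    int i;
    for (i = 0; i < 8; i++)
        bits = (bits << 8) | in[i];
    memcpy(&d, &bits, sizeof d);
    return d;
}

Reading and writing through these two functions gives the consistency asked for: the same 8 bytes on disk regardless of host endianness.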
