How is the OSPF checksum calculated? - c

I'm having trouble calculating a correct checksum; my router is rejecting my Hello messages because of it. Here's the packet in hex (starting from the OSPF header):
vv vv <-- my program's checksum
0000 02 01 00 30 c0 a8 03 0a 00 00 00 00 f3 84 00 00 ...0............
0010 00 00 00 00 00 00 00 00 ff ff ff 00 00 0a 00 01 ................
0020 00 00 00 28 c0 a8 03 0a c0 a8 03 0a ...(........
Wireshark calls 0xf384 bogus and says the expected value is 0xb382. I've confirmed that my algorithm picks the bytes in pairs, in the right order, and then adds them together:
0201 + 0030 + c0a8 + ... + 030a
There is no authentication to worry about. We weren't even taught how to set it up. Lastly, here's my shot at calculating the checksum:
OspfHeader result;
result.Type = type;
result.VersionNumber = 0x02;
result.PacketLength = htons(24 + content_len);
result.AuthType = htons(auth_type);
result.AuthValue = auth_val;
result.AreaId = area_id;
result.RouterId = router_id;
uint16_t header_buffer[12];
memcpy(header_buffer, &result, 24);
uint32_t checksum;
for(int i = 0; i < 8; i++)
{
checksum += ntohs(header_buffer[i]);
checksum = (checksum & 0xFFFF) + (checksum >> 16);
}
for(int i = 0; i + 1 < content_len; i += 2)
{
checksum += (content_buffer[i] << 8) + content_buffer[i+1];
checksum = (checksum & 0xFFF) + (checksum >> 16);
}
result.Checksum = htons(checksum xor 0xFFFF);
return result;
I'm not sure where exactly I messed up here. Any clues?

Mac_3.2.57$cat ospfCheckSum2.c
#include <stdio.h>
int main(void){
// array was wrong len
unsigned short header_buffer[22] = {
0x0201, 0x0030, 0xc0a8, 0x030a,
0x0000, 0x0000, 0x0000, 0x0000,
0x0000, 0x0000, 0x0000, 0x0000,
0xffff, 0xff00, 0x000a, 0x0001,
0x0000, 0x0028, 0xc0a8, 0x030a,
0xc0a8, 0x030a
};
// note ospf len is also wrong, BTW
// was not init'd
unsigned int checksum = 0;
// was only scanning 8 vs 22
for(int i = 0; i < 22; i++)
{
checksum += header_buffer[i];
checksum = (checksum & 0xFFFF) + (checksum >> 16);
}
// len not odd, so this is not needed
//for(int i = 0; i + 1 < content_len; i += 2)
//{
// checksum += (content_buffer[i] << 8) + content_buffer[i+1];
// checksum = (checksum & 0xFFF) + (checksum >> 16);
//}
printf("result is %04x\n", checksum ^ 0xFFFF); // no "xor" for me (why not do ~ instead?)
// also: where/what is: content_len, content_buffer, result, OspfHeader
return(0);
}
Mac_3.2.57$cc ospfCheckSum2.c
Mac_3.2.57$./a.out
result is b382
Mac_3.2.57$
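For reference, a minimal self-contained sketch of the same calculation as a reusable function: the standard 16-bit one's-complement checksum over big-endian words, with the checksum field itself skipped (i.e. treated as zero). RFC 2328 also excludes the 64-bit authentication field from the sum; here it is all zeros, so that makes no difference.
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>
/* One's-complement sum of 16-bit big-endian words with the checksum field
   (bytes 12-13 of the OSPF header) skipped, i.e. treated as zero. */
static uint16_t ospf_checksum(const unsigned char *pkt, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i + 1 < len; i += 2) {
        if (i == 12)                            /* the checksum field itself */
            continue;
        sum += (uint32_t)((pkt[i] << 8) | pkt[i + 1]);
        sum = (sum & 0xFFFF) + (sum >> 16);     /* fold the carry back in */
    }
    if (len & 1) {                              /* odd trailing byte, if any */
        sum += (uint32_t)pkt[len - 1] << 8;
        sum = (sum & 0xFFFF) + (sum >> 16);
    }
    return (uint16_t)(~sum & 0xFFFF);
}
int main(void)
{
    /* The 44 packet bytes from the question, checksum field left as sent. */
    const unsigned char pkt[] = {
        0x02,0x01,0x00,0x30, 0xc0,0xa8,0x03,0x0a, 0x00,0x00,0x00,0x00,
        0xf3,0x84,0x00,0x00, 0x00,0x00,0x00,0x00, 0x00,0x00,0x00,0x00,
        0xff,0xff,0xff,0x00, 0x00,0x0a,0x00,0x01, 0x00,0x00,0x00,0x28,
        0xc0,0xa8,0x03,0x0a, 0xc0,0xa8,0x03,0x0a
    };
    printf("checksum: %04x\n", ospf_checksum(pkt, sizeof pkt)); /* prints b382 */
    return 0;
}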

Related

Why is the final pointer being aligned to the size of int?

Here's the code under consideration:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
char buffer[512];
int pos;
int posf;
int i;
struct timeval *tv;
int main(int argc, char **argv)
{
pos = 0;
for (i = 0; i < 512; i++) buffer[i] = 0;
for (i = 0; i < 4; i++)
{
printf("pos = %d\n", pos);
*(int *)(buffer + pos + 4) = 0x12345678;
pos += 9;
}
for (i = 0; i < 9 * 4; i++)
{
printf(" %02X", (int)(unsigned char)*(buffer + i));
if ((i % 9) == 8) printf("\n");
}
printf("\n");
// ---
pos = 0;
for (i = 0; i < 512; i++) buffer[i] = 0;
*(int *)(buffer + 4) = 0x12345678;
*(int *)(buffer + 9 + 4) = 0x12345678;
*(int *)(buffer + 18 + 4) = 0x12345678;
*(int *)(buffer + 27 + 4) = 0x12345678;
for (i = 0; i < 9 * 4; i++)
{
printf(" %02X", (int)(unsigned char)*(buffer + i));
if ((i % 9) == 8) printf("\n");
}
printf("\n");
return 0;
}
And the output of the code is:
pos = 0
pos = 9
pos = 18
pos = 27
00 00 00 00 78 56 34 12 00
00 00 00 78 56 34 12 00 00
00 00 78 56 34 12 00 00 00
00 78 56 34 12 00 00 00 00
00 00 00 00 78 56 34 12 00
00 00 00 00 78 56 34 12 00
00 00 00 00 78 56 34 12 00
00 00 00 00 78 56 34 12 00
I cannot understand why
*(int *)(buffer + pos + 4) = 0x12345678;
is being placed at an address aligned to the size of int (4 bytes). I expect the following to happen when this statement executes:
the pointer to buffer, which is char*, is increased by the value of pos (0, 9, 18, 27) and then by 4; the resulting pointer is a char* pointing to char array index [pos + 4];
the char* pointer in the brackets is converted to an int*, so the resulting pointer addresses a 4-byte integer at base location (buffer + pos + 4), integer array index [0];
the resulting int* location is stored with the bytes 78 56 34 12, in this order (little-endian system).
Instead I see the pointer in the brackets being aligned to the size of int (4 bytes), while direct addressing using constants (see the second piece of code) works as expected.
target CPU is i.MX287 (ARM9);
target operating system is OpenWrt Linux [...] 3.18.29 #431 Fri Feb 11 15:57:31 2022 armv5tejl GNU/Linux;
compiled on Linux [...] 4.15.0-142-generic #146~16.04.1-Ubuntu SMP Tue Apr 13 09:27:15 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux, installed in Virtual machine;
GCC compiler version gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
I compile as a part of whole system image compilation, flags are CFLAGS = -Os -Wall -Wmissing-declarations -g3.
Update: thanks to Andrew Henle, I now replace
*(int*)(buffer + pos + 4) = 0x12345678;
with
buffer[pos + 4] = value & 0xff;
buffer[pos + 5] = (value >> 8) & 0xff;
buffer[pos + 6] = (value >> 16) & 0xff;
buffer[pos + 7] = (value >> 24) & 0xff;
and I can't believe I must do this on a 32-bit microprocessor system, whatever architecture it has, and that GCC is not able to properly slice an int into bytes or partial words and perform read-modify-write (RMW) for those parts.
the char* pointer in the brackets is converted to an int*, so the resulting pointer addresses a 4-byte integer at base location (buffer + pos + 4), integer array index [0]
This incurs undefined behavior (UB) when the alignment requirements of int * are not met.
Instead, copy with memcpy(). A good compiler will emit valid, optimized code.
// *(int*)(buffer + pos + 4) = 0x12345678;
memcpy(buffer + pos + 4, &(int){0x12345678}, sizeof (int));
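A small self-contained illustration of that advice, using a hypothetical store_u32() helper modeled on the loop from the question: memcpy has no alignment requirement, so the store is well defined wherever pos lands, and the compiler is free to emit whatever unaligned-safe instructions the target needs.
#include <stdio.h>
#include <string.h>
#include <stdint.h>
/* Hypothetical helper: store a 32-bit value at an arbitrary (possibly
   misaligned) offset in a byte buffer. memcpy carries no alignment
   requirement, so this is well defined on any target. */
static void store_u32(unsigned char *dst, uint32_t value)
{
    memcpy(dst, &value, sizeof value);
}
int main(void)
{
    unsigned char buffer[512] = {0};
    int pos = 0;
    for (int i = 0; i < 4; i++) {
        store_u32(buffer + pos + 4, 0x12345678u);   /* no UB at pos = 9, 18, 27 */
        pos += 9;
    }
    for (int i = 0; i < 9 * 4; i++) {
        printf(" %02X", buffer[i]);
        if ((i % 9) == 8)
            printf("\n");
    }
    return 0;
}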

What's wrong with this binary representation of a 1-bit monochrome B&W bmp image?

I've written ANSI C code to construct a simple monochrome BMP image (black and white only), where each pixel is represented by 1 bit.
The image is 16*16 pixels (32 bytes of pixel data). I chose this size to avoid padding, since the number of bytes is divisible by 4. I then filled all 32 bytes with 0xff to show a white image.
This is the binary output (hex dump) on my Ubuntu terminal:
00000000 42 4d 5e 00 00 00 00 00 00 00 3e 00 00 00 28 00 |BM^.......>...(.|
00000010 00 00 10 00 00 00 10 00 00 00 01 00 01 00 00 00 |................|
00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000030 00 00 00 00 00 00 00 00 00 00 00 00 ff 00 ff ff |................|
00000040 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
*
0000005e
My code:
#include "stdio.h"
#include "stdint.h"
#include "stdlib.h"
struct bmpHeader{
unsigned char Signature[2];
unsigned char FileSize[4];
unsigned char reserved[4];
unsigned char DataOffset[4];
}__attribute__( (packed) );
struct bmpInfoHeader{
unsigned char Size[4];
unsigned char Width[4];
unsigned char Height[4];
unsigned char Planes[2];
unsigned char BitsPerPixel[2];
unsigned char Compression[4];
unsigned char ImageSize[4];
unsigned char XpixelsPerM[4];
unsigned char YpixelsPerM[4];
unsigned char ColorsUsed[4];
unsigned char ImportantColors[4];
}__attribute__( (packed) );
int main(void)
{
uint32_t QR_width = 16; uint32_t QR_height = 16;
unsigned char palette_data[8] = {0x00,0x00,0x00,0x00,0x00,0x00,0xff,0x00};
uint32_t size = ( ((QR_width*QR_height)/8)+ ( ((QR_width*QR_height) % 8 )? 1 : 0 ) + 54 + sizeof(palette_data));
struct bmpHeader a;
struct bmpInfoHeader b;
unsigned char headerSize = sizeof(a) + sizeof(b) + sizeof(palette_data);
printf(" size of struct is %d\n", sizeof(a) + sizeof(b));
printf(" size of image data in bytes = %d\n", size-54-8);
printf(" size of file in bytes = %d\n", size);
a.Signature[0] = 0x42; a.Signature[1] = 0x4d; //'BM'
a.FileSize[0] = (unsigned char) size; a.FileSize[1] = (unsigned char) size>>8; a.FileSize[2] = (unsigned char) size>>16; a.FileSize[3] = (unsigned char) size>>24;
a.reserved[0] = 0; a.reserved[1] = 0; a.reserved[2] = 0; a.reserved[3] = 0;
a.DataOffset[0] = headerSize; a.DataOffset[1] = headerSize>>8; a.DataOffset[2] = headerSize>>16; a.DataOffset[3] = headerSize>>24;
b.Size[0]=40;b.Size[1]=0;b.Size[2]=0;b.Size[3]=0;
b.Width[0]=QR_width; b.Width[1]=QR_width >> 8; b.Width[2]=QR_width >> 16; b.Width[3]=QR_width >> 24;
b.Height[0]=QR_height; b.Height[1]=QR_height>>8; b.Height[2]=QR_height>>16; b.Height[3]=QR_height>>24;
b.Planes[0] = 1; b.Planes[1] = 0;
b.BitsPerPixel[0] = 1; b.BitsPerPixel[1] = 0;
b.Compression[0]=0; b.Compression[1]=0; b.Compression[2]=0; b.Compression[3]=0;
b.ImageSize[0]=0; b.ImageSize[1]=0; b.ImageSize[2]=0; b.ImageSize[3]=0;
b.XpixelsPerM[0]=0; b.XpixelsPerM[1]=0; b.XpixelsPerM[2]=0; b.XpixelsPerM[3]=0;
b.YpixelsPerM[0]=0; b.YpixelsPerM[1]=0; b.YpixelsPerM[2]=0; b.YpixelsPerM[3]=0;
b.ColorsUsed[0]=0; b.ColorsUsed[1]=0; b.ColorsUsed[2]=0; b.ColorsUsed[3]=0;
b.ImportantColors[0]=0; b.ImportantColors[1]=0; b.ImportantColors[2]=0; b.ImportantColors[3]=0;
int i=0;
char* img = calloc( (QR_width*QR_height)/8, sizeof(unsigned char));
for(i=0; i<32; i++)
img[i] = 0xff;
FILE *fptr = fopen("QRcode.bmp","wb");
if(fptr != NULL)
{
fwrite(&a, sizeof(struct bmpHeader), 1, fptr);
fwrite(&b, sizeof(struct bmpInfoHeader), 1, fptr);
fwrite(palette_data, sizeof(palette_data), 1, fptr);
fwrite(img, 32, 1, fptr);
fclose(fptr);
}
free(img);
return 0;
}
The expected result is a small white BMP image, but I am not getting it. What went wrong here?

Determine size of character array in C

I am trying to write a program that will talk to an XBee Pro S2B over serial from a PIC16, and I have some odd memory stuff going on...
I am a bit weak when it comes to pointers in C, so I have been writing a test program using GCC to get it working before I move it over to the PIC microcontroller. The biggest thing to note with the PIC is the memory constraints, so I would like to avoid adding any external libraries unless I absolutely have to (and avoid having to use malloc/free).
Here is my code:
#include <stdio.h>
#define FALSE 0
#define TRUE 1
void printPacketArray(unsigned char *packetArray, int packetSize){
printf("OUTPUT PACKET: ");
for(int i = 0; i < packetSize; i++){
printf("%02X ", packetArray[i]);
}
printf("\n");
}
/**
* This function builds all of the frames to be wrapped into a packet
*/
int buildFrame(
unsigned char frameType,
unsigned char requestResponse,
unsigned char *destinationAddress64Bit,
unsigned char *destinationAddress16Bit,
unsigned char *rfPacketData,
int destinationAddress64BitSize,
int destinationAddress16BitSize,
int rfDataSize,
unsigned char **frameArray,
int **frameArraySize){
// printf("Entering buildFrame\n");
// FT RESP (Optional Dest Addr x64) + (Optional Dest Addr x16) Packet
int outputPacketSize = (1 + 1 + destinationAddress64BitSize + destinationAddress16BitSize + rfDataSize);
// printf("Packet Size: %d\n", outputPacketSize);
unsigned char outputPacketArray[outputPacketSize];
// Add the frame type
outputPacketArray[0] = frameType;
// printf("Frame type: %02X\n", frameType);
// Request response
outputPacketArray[1] = requestResponse;
// printf("Response: %02X\n", requestResponse);
int arrayCount = 2;
// Add the destination address (64 bit)
if(destinationAddress64Bit != 0x00){
for(int i = 0; i < 8; i++){
outputPacketArray[arrayCount] = destinationAddress64Bit[i];
// printf("outputPacketArray[%d] = %02X\n", arrayCount, destinationAddress64Bit[i]);
arrayCount++;
}
}
// Add the destination address (16 bit)
if(destinationAddress16Bit != 0x00){
for(int i = 0; i < 2; i++){
outputPacketArray[arrayCount] = destinationAddress16Bit[i];
// printf("outputPacketArray[%d] = %02X\n", arrayCount, destinationAddress16Bit[i]);
arrayCount++;
}
}
// Add the packet data
for(int i = 0; i < rfDataSize; i++){
outputPacketArray[arrayCount] = rfPacketData[i];
// printf("outputPacketArray[%d] = %02X\n", arrayCount, rfPacketData[i]);
arrayCount++;
}
*frameArray = outputPacketArray;
// printf("*frameArray = %p\n", outputPacketArray);
*frameArraySize = &outputPacketSize;
// printf("*frameArraySize = %p\n", &outputPacketSize);
// printf("Packet: ");
// for(int i = 0; i < outputPacketSize; i++){
// printf("%02X ", outputPacketArray[i]);
// }
// printf("\n");
// printf("Exiting buildFrame\n");
return TRUE;
}
/**
* This function wraps the frame data into the packet.
*/
int buildPacket(
unsigned char *frameData,
int frameDataSize,
unsigned char **packetArrayPtr,
int **packetArraySizePtr){
// 7E MSB LSB Packet Checksum
int outputPacketSize = (1 + 1 + 1 + frameDataSize + 1);
int checksum = 0;
unsigned char outputPacketArray[outputPacketSize];
// Add the start delimiter
outputPacketArray[0] = 0x7E;
// Add the MSB (should always be 0x00)
outputPacketArray[1] = 0x00;
// Add the LSB (size of frameData)
outputPacketArray[2] = frameDataSize;
// Add the frame data
int arrayCount = 3;
for(int i = 0; i < frameDataSize; i++){
// printf("CNT: %d\n", arrayCount);
outputPacketArray[arrayCount] = frameData[i];
checksum += frameData[i];
arrayCount++;
}
// Add the checksum
outputPacketArray[arrayCount] = (0xFF - (checksum & 0xFF));
// printf("CNT: %d\n", arrayCount);
// printf("Packet: ");
// for(int i = 0; i < outputPacketSize; i++){
// printf("%02X ", outputPacketArray[i]);
// }
// printf("\n");
*packetArrayPtr = outputPacketArray;
*packetArraySizePtr = &outputPacketSize;
return TRUE;
}
int sendAPICommand(unsigned char* inputFrameData, int inputFrameDataLength){
unsigned char destinationAddress64Bit[] = {0x00, 0x13, 0xA2, 0x00, 0x40, 0x9C, 0x26, 0xE1};
unsigned char destinationAddress16Bit[] = {0xFF, 0xFE};
unsigned char *frameArrayPtr = 0x00;
int *frameArraySizePtr = 0x00;
// printf("COMPARE: 7E 00 13 10 01 00 13 A2 00 40 9C 26 E1 FF FE 00 00 01 02 03 04 05 4A\n");
// We need to add in the radius and options before the data
unsigned char frameData[(inputFrameDataLength + 2)];
frameData[0] = 0x00; // Radius
frameData[1] = 0x00; // Options
for(int i = 0; i < inputFrameDataLength; i++){
frameData[(i + 2)] = inputFrameData[i];
}
if(buildFrame(0x10, 0x01, destinationAddress64Bit, destinationAddress16Bit, frameData, 8, 2, (inputFrameDataLength + 2), &frameArrayPtr, &frameArraySizePtr) == TRUE){
printf("COMPARE: 10 01 00 13 A2 00 40 9C 26 E1 FF FE 00 00 01 02 03 04 05\n");
printPacketArray(frameArrayPtr, *frameArraySizePtr);
// The building of the frame was a success
if(buildPacket(frameArrayPtr, *frameArraySizePtr, &frameArrayPtr, &frameArraySizePtr) == TRUE){
printf("COMPARE: 7E 00 13 10 01 00 13 A2 00 40 9C 26 E1 FF FE 00 00 01 02 03 04 05 4A\n");
printPacketArray(frameArrayPtr, *frameArraySizePtr);
return TRUE;
}
}
return FALSE;
}
int main(){
unsigned char packetData[] = {0x01, 0x02, 0x03, 0x04, 0x05};
sendAPICommand(packetData, 5);
return 0;
}
When I run it, I cannot seem to find out why the results are not consistent. Using the Eclipse debugger, I get the following error:
frameData.15 Error: Multiple errors reported.\ Failed to execute MI command: -var-create - * frameData.15 Error message from debugger back end: mi_cmd_var_create: unable to create variable object\ Failed to execute MI command: -var-create - * frameData.15 Error message from debugger back end: mi_cmd_var_create: unable to create variable object\ Failed to execute MI command: -data-evaluate-expression frameData.15 Error message from debugger back end: No symbol "frameData" in current context.\ Failed to execute MI command: -var-create - * frameData.15 Error message from debugger back end: mi_cmd_var_create: unable to create variable object\ Unable to create variable object
on this line of code:
if(buildFrame(0x10, 0x01, destinationAddress64Bit, destinationAddress16Bit, frameData, 8, 2, (inputFrameDataLength + 2), &frameArrayPtr, &frameArraySizePtr) == TRUE){
When I run the program to completion (it silently fails), this is my text output:
COMPARE: 10 01 00 13 A2 00 40 9C 26 E1 FF FE 00 00 01 02 03 04 05
OUTPUT PACKET: 01 00 00 00 00 00 00 00 D8 8C 28 03 01 00 00 00 B0 8A 97 5C FF 7F 00 00 08 00 00 00 00 00 00 00 00 8A 97 5C 00 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 FF 00 00 00 00 00 00 00 00 00 00 FF 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 10 00 00 00 30 00 00 00 90 8A 97 5C FF 7F 00 00 90 89 97 5C FF 7F 00 00 00 00 00 00 00 00 00 00
I suspect the problem is a bad pointer (or pointers) in returning the size or the pointer to the character array, but I don't know that for sure (or how to get the program to fail in a way that would let me confirm it).
In buildFrame:
unsigned char outputPacketArray[outputPacketSize];
outputPacketArray is a local variable. Its lifetime ends when you leave the function. However you assign a pointer to it to an output parameter of the function:
*frameArray = outputPacketArray;
(Same for outputPacketSize)
These pointers do not point to valid memory anymore when you leave the function. You need to dynamically allocate the array:
unsigned char* outputPacketArray = malloc(outputPacketSize);
and then free it later. Alternatively, you could provide an appropriately sized array to the function, but then you would need to calculate the size in main beforehand.
For the size you don't need a pointer-to-pointer at all: just return it from the function, or use a simple pointer (instead of a double pointer) as an output parameter.
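As a sketch of the second suggestion (caller-provided storage, which also suits the PIC's no-malloc constraint), with a hypothetical, simplified buildFrameInto() standing in for the real frame-building logic: the caller owns the buffer, and the size is returned by value, so nothing dangles.
#include <stdio.h>
#include <string.h>
/* Hypothetical, simplified stand-in for buildFrame: the caller supplies the
   output buffer and its capacity, so no pointer outlives its storage and no
   malloc is needed. Returns the number of bytes written, or -1 if the
   buffer is too small. */
static int buildFrameInto(unsigned char frameType, unsigned char frameId,
                          const unsigned char *payload, int payloadSize,
                          unsigned char *out, int outCapacity)
{
    int frameSize = 2 + payloadSize;          /* type + id + payload */
    if (frameSize > outCapacity)
        return -1;
    out[0] = frameType;
    out[1] = frameId;
    memcpy(out + 2, payload, (size_t)payloadSize);
    return frameSize;                         /* size returned by value */
}
int main(void)
{
    unsigned char payload[] = {0x01, 0x02, 0x03, 0x04, 0x05};
    unsigned char frame[32];                  /* caller-owned storage */
    int size = buildFrameInto(0x10, 0x01, payload, (int)sizeof payload,
                              frame, (int)sizeof frame);
    if (size > 0) {
        for (int i = 0; i < size; i++)
            printf("%02X ", frame[i]);
        printf("\n");
    }
    return 0;
}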

How do I handle byte order differences when reading/writing floating-point types in C?

I'm devising a file format for my application, and I'd obviously like for it to work on both big-endian and little-endian systems. I've already found working solutions for managing integral types using htonl and ntohl, but I'm a bit stuck when trying to do the same with float and double values.
Given the nature of how floating-point representations work, I would assume that the standard byte-order functions won't work on these values. Likewise, I'm not even entirely sure if endianness in the traditional sense is what governs the byte order of these types.
All I need is consistency. A way to write a double out, and ensure I get that same value when I read it back in. How can I do this in C?
Another option could be to use double frexp(double value, int *exp); from <math.h> (C99) to break down the floating-point value into a normalized fraction (in the range [0.5, 1)) and an integral power of 2. You can then multiply the fraction by FLT_RADIX^DBL_MANT_DIG to get an integer in the range [FLT_RADIX^DBL_MANT_DIG/2, FLT_RADIX^DBL_MANT_DIG). Then you save both integers big- or little-endian, whichever you choose in your format.
When you load a saved number, you do the reverse operation and use double ldexp(double x, int exp); to multiply the reconstructed fraction by the power of 2.
This will work best when FLT_RADIX=2 (virtually all systems, I suppose?) and DBL_MANT_DIG<=64.
Care must be taken to avoid overflows.
Sample code for doubles:
#include <limits.h>
#include <float.h>
#include <math.h>
#include <string.h>
#include <stdio.h>
#if CHAR_BIT != 8
#error currently supported only CHAR_BIT = 8
#endif
#if FLT_RADIX != 2
#error currently supported only FLT_RADIX = 2
#endif
#ifndef M_PI
#define M_PI 3.14159265358979324
#endif
typedef unsigned char uint8;
/*
10-byte little-endian serialized format for double:
- normalized mantissa stored as 64-bit (8-byte) signed integer:
negative range: (-2^53, -2^52]
zero: 0
positive range: [+2^52, +2^53)
- 16-bit (2-byte) signed exponent:
range: [-0x7FFE, +0x7FFE]
Represented value = mantissa * 2^(exponent - 53)
Special cases:
- +infinity: mantissa = 0x7FFFFFFFFFFFFFFF, exp = 0x7FFF
- -infinity: mantissa = 0x8000000000000000, exp = 0x7FFF
- NaN: mantissa = 0x0000000000000000, exp = 0x7FFF
- +/-0: only one zero supported
*/
void Double2Bytes(uint8 buf[10], double x)
{
double m;
long long im; // at least 64 bits
int ie;
int i;
if (isnan(x))
{
// NaN
memcpy(buf, "\x00\x00\x00\x00\x00\x00\x00\x00" "\xFF\x7F", 10);
return;
}
else if (isinf(x))
{
if (signbit(x))
// -inf
memcpy(buf, "\x00\x00\x00\x00\x00\x00\x00\x80" "\xFF\x7F", 10);
else
// +inf
memcpy(buf, "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x7F" "\xFF\x7F", 10);
return;
}
// Split double into normalized mantissa (range: (-1, -0.5], 0, [+0.5, +1))
// and base-2 exponent
m = frexp(x, &ie); // x = m * 2^ie exactly for FLT_RADIX=2
// frexp() can't fail
// Extract most significant 53 bits of mantissa as integer
m = ldexp(m, 53); // can't overflow because
// DBL_MAX_10_EXP >= 37 equivalent to DBL_MAX_2_EXP >= 122
im = trunc(m); // exact unless DBL_MANT_DIG > 53
// If the exponent is too small or too big, reduce the number to 0 or
// +/- infinity
if (ie > 0x7FFE)
{
if (im < 0)
// -inf
memcpy(buf, "\x00\x00\x00\x00\x00\x00\x00\x80" "\xFF\x7F", 10);
else
// +inf
memcpy(buf, "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x7F" "\xFF\x7F", 10);
return;
}
else if (ie < -0x7FFE)
{
// 0
memcpy(buf, "\x00\x00\x00\x00\x00\x00\x00\x00" "\x00\x00", 10);
return;
}
// Store im as signed 64-bit little-endian integer
for (i = 0; i < 8; i++, im >>= 8)
buf[i] = (uint8)im;
// Store ie as signed 16-bit little-endian integer
for (i = 8; i < 10; i++, ie >>= 8)
buf[i] = (uint8)ie;
}
void Bytes2Double(double* x, const uint8 buf[10])
{
unsigned long long uim; // at least 64 bits
long long im; // ditto
unsigned uie;
int ie;
double m;
int i;
int negative = 0;
int maxe;
if (!memcmp(buf, "\x00\x00\x00\x00\x00\x00\x00\x00" "\xFF\x7F", 10))
{
#ifdef NAN
*x = NAN;
#else
*x = 0; // NaN is not supported, use 0 instead (we could return an error)
#endif
return;
}
if (!memcmp(buf, "\x00\x00\x00\x00\x00\x00\x00\x80" "\xFF\x7F", 10))
{
*x = -INFINITY;
return;
}
else if (!memcmp(buf, "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x7F" "\xFF\x7F", 10))
{
*x = INFINITY;
return;
}
// Load im as signed 64-bit little-endian integer
uim = 0;
for (i = 0; i < 8; i++)
{
uim >>= 8;
uim |= (unsigned long long)buf[i] << (64 - 8);
}
if (uim <= 0x7FFFFFFFFFFFFFFFLL)
im = uim;
else
im = (long long)(uim - 0x7FFFFFFFFFFFFFFFLL - 1) - 0x7FFFFFFFFFFFFFFFLL - 1;
// Obtain the absolute value of the mantissa, make sure it's
// normalized and fits into 53 bits, else the input is invalid
if (im > 0)
{
if (im < (1LL << 52) || im >= (1LL << 53))
{
#ifdef NAN
*x = NAN;
#else
*x = 0; // NaN is not supported, use 0 instead (we could return an error)
#endif
return;
}
}
else if (im < 0)
{
if (im > -(1LL << 52) || im <= -(1LL << 53))
{
#ifdef NAN
*x = NAN;
#else
*x = 0; // NaN is not supported, use 0 instead (we could return an error)
#endif
return;
}
negative = 1;
im = -im;
}
// Load ie as signed 16-bit little-endian integer
uie = 0;
for (i = 8; i < 10; i++)
{
uie >>= 8;
uie |= (unsigned)buf[i] << (16 - 8);
}
if (uie <= 0x7FFF)
ie = uie;
else
ie = (int)(uie - 0x7FFF - 1) - 0x7FFF - 1;
// If DBL_MANT_DIG < 53, truncate the mantissa
im >>= (53 > DBL_MANT_DIG) ? (53 - DBL_MANT_DIG) : 0;
m = im;
m = ldexp(m, (53 > DBL_MANT_DIG) ? -DBL_MANT_DIG : -53); // can't overflow
// because DBL_MAX_10_EXP >= 37 equivalent to DBL_MAX_2_EXP >= 122
// Find out the maximum base-2 exponent and
// if ours is greater, return +/- infinity
frexp(DBL_MAX, &maxe);
if (ie > maxe)
m = INFINITY;
else
m = ldexp(m, ie); // underflow may cause a floating-point exception
*x = negative ? -m : m;
}
int test(double x, const char* name)
{
uint8 buf[10], buf2[10];
double x2;
int error1, error2;
Double2Bytes(buf, x);
Bytes2Double(&x2, buf);
Double2Bytes(buf2, x2);
printf("%+.15E '%s' -> %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
x,
name,
buf[0],buf[1],buf[2],buf[3],buf[4],buf[5],buf[6],buf[7],buf[8],buf[9]);
if ((error1 = memcmp(&x, &x2, sizeof(x))) != 0)
puts("Bytes2Double(Double2Bytes(x)) != x");
if ((error2 = memcmp(buf, buf2, sizeof(buf))) != 0)
puts("Double2Bytes(Bytes2Double(Double2Bytes(x))) != Double2Bytes(x)");
puts("");
return error1 || error2;
}
int testInf(void)
{
uint8 buf[10];
double x, x2;
int error;
x = DBL_MAX;
Double2Bytes(buf, x);
if (!++buf[8])
++buf[9]; // increment the exponent beyond the maximum
Bytes2Double(&x2, buf);
printf("%02X %02X %02X %02X %02X %02X %02X %02X %02X %02X -> %+.15E\n",
buf[0],buf[1],buf[2],buf[3],buf[4],buf[5],buf[6],buf[7],buf[8],buf[9],
x2);
if ((error = !isinf(x2)) != 0)
puts("Bytes2Double(Double2Bytes(DBL_MAX) * 2) != INF");
puts("");
return error;
}
#define VALUE_AND_NAME(V) { V, #V }
const struct
{
double value;
const char* name;
} testData[] =
{
#ifdef NAN
VALUE_AND_NAME(NAN),
#endif
VALUE_AND_NAME(0.0),
VALUE_AND_NAME(+DBL_MIN),
VALUE_AND_NAME(-DBL_MIN),
VALUE_AND_NAME(+1.0),
VALUE_AND_NAME(-1.0),
VALUE_AND_NAME(+M_PI),
VALUE_AND_NAME(-M_PI),
VALUE_AND_NAME(+DBL_MAX),
VALUE_AND_NAME(-DBL_MAX),
VALUE_AND_NAME(+INFINITY),
VALUE_AND_NAME(-INFINITY),
};
int main(void)
{
unsigned i;
int errors = 0;
for (i = 0; i < sizeof(testData) / sizeof(testData[0]); i++)
errors += test(testData[i].value, testData[i].name);
errors += testInf();
// Test subnormal values. A floating-point exception may be raised.
errors += test(+DBL_MIN / 2, "+DBL_MIN / 2");
errors += test(-DBL_MIN / 2, "-DBL_MIN / 2");
printf("%d error(s)\n", errors);
return 0;
}
Output (ideone):
+NAN 'NAN' -> 00 00 00 00 00 00 00 00 FF 7F
+0.000000000000000E+00 '0.0' -> 00 00 00 00 00 00 00 00 00 00
+2.225073858507201E-308 '+DBL_MIN' -> 00 00 00 00 00 00 10 00 03 FC
-2.225073858507201E-308 '-DBL_MIN' -> 00 00 00 00 00 00 F0 FF 03 FC
+1.000000000000000E+00 '+1.0' -> 00 00 00 00 00 00 10 00 01 00
-1.000000000000000E+00 '-1.0' -> 00 00 00 00 00 00 F0 FF 01 00
+3.141592653589793E+00 '+M_PI' -> 18 2D 44 54 FB 21 19 00 02 00
-3.141592653589793E+00 '-M_PI' -> E8 D2 BB AB 04 DE E6 FF 02 00
+1.797693134862316E+308 '+DBL_MAX' -> FF FF FF FF FF FF 1F 00 00 04
-1.797693134862316E+308 '-DBL_MAX' -> 01 00 00 00 00 00 E0 FF 00 04
+INF '+INFINITY' -> FF FF FF FF FF FF FF 7F FF 7F
-INF '-INFINITY' -> 00 00 00 00 00 00 00 80 FF 7F
FF FF FF FF FF FF 1F 00 01 04 -> +INF
+1.112536929253601E-308 '+DBL_MIN / 2' -> 00 00 00 00 00 00 10 00 02 FC
-1.112536929253601E-308 '-DBL_MIN / 2' -> 00 00 00 00 00 00 F0 FF 02 FC
0 error(s)
Depending on the application it could be a good idea to use a plain text data format (a possibility being XML). If you don't want to waste disk space you can compress it.
XML is probably the most portable way to do it.
However, it appears that you already have most of the parser built, but are stuck on the float/double issue. I would suggest writing it out as a string (to whatever precision you desire) and then reading that back in.
Unless all your target platforms use IEEE-754 floats (and doubles), no byte-swapping tricks will work for you.
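A minimal sketch of that string approach, assuming a throwaway file name: "%.17g" prints enough significant digits to round-trip an IEEE-754 double exactly, and "%a" (C99 hexadecimal floats) would be exact by construction.
#include <stdio.h>
int main(void)
{
    double out = 3.141592653589793, in = 0.0;
    /* Write the value as text; 17 significant digits are enough to
       reconstruct an IEEE-754 double exactly. */
    FILE *f = fopen("value.txt", "w");
    if (f == NULL)
        return 1;
    fprintf(f, "%.17g\n", out);
    fclose(f);
    /* Read it back. */
    f = fopen("value.txt", "r");
    if (f == NULL || fscanf(f, "%lf", &in) != 1)
        return 1;
    fclose(f);
    printf("wrote %.17g, read back %.17g, identical: %d\n", out, in, out == in);
    return 0;
}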
If you guarantee that your implementations always treat serialized floating point representations in a specified format, then you will be fine (IEEE 754 is common).
Yes, architectures may order floating point numbers differently (e.g. in big or little endian). Therefore, you will want to somehow specify the endianness. This could be in the format's specification or variable and recorded in the file's data.
The last major pitfall is that alignment for builtins may vary. How your hardware/processor handles misaligned data is implementation defined. So you may need to swap the data/bytes, then move it to the destination float/double.
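A minimal sketch of that byte-level approach, assuming both ends use IEEE-754 binary64: the double is copied into a uint64_t with memcpy (which sidesteps both alignment and aliasing trouble) and then written in a fixed little-endian byte order, so the host's endianness never leaks into the file.
#include <stdio.h>
#include <stdint.h>
#include <string.h>
/* Serialize a double into 8 little-endian bytes, assuming IEEE-754 binary64
   on both the writing and the reading machine. */
static void put_double_le(unsigned char out[8], double d)
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);              /* reinterpret without UB */
    for (int i = 0; i < 8; i++)
        out[i] = (unsigned char)(bits >> (8 * i));
}
static double get_double_le(const unsigned char in[8])
{
    uint64_t bits = 0;
    for (int i = 0; i < 8; i++)
        bits |= (uint64_t)in[i] << (8 * i);
    double d;
    memcpy(&d, &bits, sizeof d);
    return d;
}
int main(void)
{
    unsigned char buf[8];
    put_double_le(buf, 3.141592653589793);
    printf("round trip: %.17g\n", get_double_le(buf));
    return 0;
}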
A library like HDF5 or even NetCDF is probably a bit heavyweight for this, as High Performance Mark said, unless you also need the other features available in those libraries.
A lighter-weight alternative that only deals with the serialization would be e.g. XDR (see also the Wikipedia description). Many OSes supply XDR routines out of the box; if this is not enough, free-standing XDR libraries exist as well.

Why doesn't my bit left shift?

I'm using embedded C on Keil. I'm trying to write a program that stores a bit, shifts left, stores the next bit, and repeats until all eight bits are stored.
However, when I debug (maybe I'm debugging wrongly), the value only shows "01 00 00 00 00 00 00...". When it stores a logic '1' and then shifts left, it shows "02 00 00 00 00 00 00...". When the loop repeats, it shows the same thing over and over again. What I expected was "01 01 01 01 01 01 01..." (let's say all the input bits were '1'). How do I solve this problem?
#include <reg51.h>
sbit Tsignal = P1^2;
unsigned char xdata x[500];
for(u=0; u<8; u++)
{
x[i] = x[i] << 1;
x[i] = Tsignal; //Store Tsignal in x
}
Ah, I have solved it already.
unsigned int u;
unsigned char p;
unsigned char xdata x[500];
for(u=0; u<8; u++) //Bit Shift Loop
{
x[i] = x[i] <<1; //Left Bit Shift by 1
p = Tsignal; //Store Tsignal to Buffer p
x[i] |= p;
} //End Bitshift loop
I think you want to do something like this:
for(u=0;u<8;u++)
{
// Update Tsignal.
//Tsignal = GetBitValue();
// Store it to x.
x = (x << 1) | (Tsignal & 0x1);
}
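For reference, a host-buildable sketch of the same pattern, with the hardware pin replaced by a hypothetical read_pin() stub: each new sample is ORed into bit 0 after the previous bits are shifted up, so after eight iterations the first sample ends up in the most significant bit.
#include <stdio.h>
/* Hypothetical stand-in for sampling the Tsignal pin; returns 0 or 1. */
static unsigned char read_pin(void)
{
    return 1;                                   /* pretend the line is high */
}
int main(void)
{
    unsigned char x = 0;
    for (int u = 0; u < 8; u++)                 /* pack 8 samples, MSB first */
        x = (unsigned char)((x << 1) | (read_pin() & 0x1));
    printf("packed byte: 0x%02X\n", x);         /* all-ones input -> 0xFF */
    return 0;
}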
