ATmega64a float to IEEE-754 unexpected result - c

I am trying to convert a float to an IEEE-754 Hex representation. The following code works on my Mac.
#include <stdio.h>
#include <stdlib.h>
union Data {
    int i;
    float f;
};

int main() {
    float var = 502.7;
    union Data value;
    value.f = var;
    printf("%08X\n", value.i);
    return 0;
}
This is giving me the expected result of 43FB599A.
When I run this code on an ATmega64a I get 0000599A, not 04A2599A as originally posted (that was a mistake).
The first two bytes are not what I expected, but the final two bytes seem correct.
Any ideas?
As mentioned in the accepted answer, I was assuming that int was 4 bytes. I was writing the code on my Mac and sending it to someone who was downloading it to an 8-bit ATmega64a, where int is 2 bytes, not 4. I changed int to unsigned long, which is 4 bytes on the ATmega64a.
In addition, I had to add a length sub-specifier of l to the format given to printf. This is because, when given a specifier of %X, printf uses the type unsigned int to interpret the corresponding argument. Adding the length sub-specifier l tells printf to use the type unsigned long instead.
Using only the length sub-specifier l without also changing the variable i to unsigned long caused printf to grab some extra bytes and output 04A2599A as originally posted. I, of course, needed to change the type of i to unsigned long as well as use the length sub-specifier l.
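For reference, a minimal sketch of the fix described above, using unsigned long for the 4-byte member (I have not run this exact version on AVR hardware):
#include <stdio.h>

union Data {
    unsigned long i;   /* 4 bytes on the ATmega64a, unlike int (2 bytes) */
    float f;
};

int main(void) {
    union Data value;
    value.f = 502.7f;
    printf("%08lX\n", value.i);   /* the l length modifier matches unsigned long */
    return 0;
}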
http://www.cplusplus.com/reference/cstdio/printf/

This processor is an 8-bit one, which means the size of int is most likely 2 bytes, not 4 as your code assumes.
Try to use uint32_t rather than int if you can.
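The same sketch with the fixed-width uint32_t and its print macro from <inttypes.h>, which sidesteps the size question entirely (again my adaptation, untested on AVR):
#include <stdio.h>
#include <inttypes.h>

union Data {
    uint32_t i;   /* guaranteed 4 bytes wherever the type exists */
    float f;
};

int main(void) {
    union Data value;
    value.f = 502.7f;
    printf("%08" PRIX32 "\n", value.i);   /* PRIX32 expands to the right specifier */
    return 0;
}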

Related

wrong conversion of two bytes array to short in c

I'm trying to convert a 2-byte array to an unsigned short.
This is the code for the conversion:
short bytesToShort(char* bytesArr)
{
    short result = (short)((bytesArr[1] << 8) | bytesArr[0]);
    return result;
}
I have an input file which stores bytes, and I read its bytes in a loop (2 bytes each time) and store them in a char array N in this manner:
char N[3];
N[2] = '\0';
while (fread(N, 1, 2, inputFile) == 2)
When the (hex) value of N[0] is 0 the computation is correct; otherwise it is wrong. For example:
0x62 (N[0]=0x0, N[1]=0x62) will return 98 (as a short value), but 0x166 in hex (N[0]=0x6, N[1]=0x16) will return 5638 (as a short value).
In the first place, it's generally best to use type unsigned char for the bytes of raw binary data, because that correctly expresses the semantics of what you're working with. Type char, although it can be, and too frequently is, used as a synonym for "byte", is better reserved for data that are actually character in nature.
In the event that you are furthermore performing arithmetic on byte values, you almost surely want unsigned char instead of char, because the signedness of char is implementation-defined. It does vary among implementations, and on many common implementations char is signed.
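To see the difference concretely, here is a small demonstration of the sign extension (a sketch of my own; the output shown assumes a platform where plain char is signed):
#include <stdio.h>

int main(void)
{
    char c = '\xcc';          /* bit pattern 0xCC; negative if char is signed */
    unsigned char u = 0xcc;   /* always the value 204 */
    printf("%x %x\n", (unsigned)c, (unsigned)u);   /* ffffffcc cc on such platforms */
    return 0;
}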
With that said, your main problem appears simple. You said
166 in hex (N[0]=6,N[1]=16) will return 5638 (in short value).
but 0x166 packed into a two-byte little-endian array would be (N[0]=0x66,N[1]=0x1). What you wrote would correspond to 0x1606, which indeed is the same as decimal 5638.
The problem is sign extension due to using char. You should use unsigned char instead:
#include <stdio.h>

short bytesToShort(const unsigned char* bytesArr)
{
    /* bytesArr[0] is the low byte, bytesArr[1] the high byte */
    short result = (short)((bytesArr[1] << 8) | bytesArr[0]);
    return result;
}

int main(void)
{
    /* the casts are needed because string literals have type char[];
       %hx prints the argument as unsigned short, avoiding sign extension in the output */
    printf("%04hx\n", bytesToShort((const unsigned char*)"\x00\x11")); // expect 0x1100
    printf("%04hx\n", bytesToShort((const unsigned char*)"\x55\x11")); // expect 0x1155
    printf("%04hx\n", bytesToShort((const unsigned char*)"\xcc\xdd")); // expect 0xddcc
    return 0;
}
Note: the problem in the code is not the one presented by the OP. The real problem is that the original (signed char) version returns the wrong result for the input "\xcc\xdd": it produces 0xffcc where it should produce 0xddcc.

Pointer not giving expected output in c

Why doesn't the double variable show a garbage value?
I know I am playing with pointers, but I meant to. Is there anything wrong with my code? It threw a few warnings because of incompatible pointer assignments.
#include "stdio.h"
double y= 0;
double *dP = &y;
int *iP = dP;
void main()
{
printf("%10#x %#10x %#10x %#10x \n",&y,dP,iP,iP+1);
printf("%#10lf %#10lf %#10lf %#10lf \n",y,*dP,*iP,*(iP+1));
scanf("%lf %d %d",&y,iP,iP+1);
printf("%10#x %#10x %#10x %#10x \n",&y,dP,iP,iP+1);
printf("%#10lf %#10lf %#10d %#10d \n",y,*dP,*iP,*(iP+1));
}
Welcome to Stack Overflow. It's not very clear what you're trying to do with this code, but the first thing I'll say is that it does exactly what it says it does: it tries to format data with the wrong format string. The result is garbage, but that doesn't necessarily mean it will look like garbage.
If part of the idea is to print out the internal bit pattern of a double in hexadecimal, you can do that, but the code will be implementation-dependent. The following should work on just about any modern 32- or 64-bit desktop implementation that uses 64 bits for both the double and long long int types:
double d = 3.141592653589793238;
printf("d = %g = 0x%016llX\n", d, *(long long*)&d);
The %g specification is a quick way to print out a double in (usually) easily readable form. The %llX format prints an unsigned long long int in hexadecimal. The byte order is implementation-dependent, even if you know that both double and long long int have the same number of bits. On a Mac, PC, or other Intel/AMD architecture machine, you'll get the display in most-significant-digit-first order.
The *(long long *)&d expression (reading from right to left) will take the address of d, convert that double* pointer to a long long * pointer, then dereference that pointer to get a long long value to format.
Almost every implementation this century uses IEEE 754 format for hardware floating point; for double that is the 64-bit IEEE format.
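As an aside, if you want to avoid the pointer-aliasing cast entirely, a memcpy into a fixed-width integer achieves the same thing in a well-defined way (my variant, not part of the original answer):
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    double d = 3.141592653589793238;
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);   /* copy the raw bytes of the double */
    printf("d = %g = 0x%016llX\n", d, (unsigned long long)bits);
    return 0;
}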
You can find out more about printf formatting at:
http://www.cplusplus.com/reference/cstdio/printf/

C printf of an integer with %lu produces large number

I know that it is bad practice to print an integer with %lu, which is for an unsigned long. In a project I was working on, I got a large number when trying to print 11 with %lu in the snprintf format (old code). I am using gcc 4.9.3.
I thought the code below would produce the wrong number, since snprintf is told to read more than the 4 bytes the argument occupies. It doesn't though; it works perfectly and reads everything correctly. Either it does not go past the 4 bytes into the unknown, or the extra 4 bytes of the long are full of zeros.
What I am wondering, out of curiosity, is: when does printf print the wrong number? What conditions are needed to produce a wrong big number? There has to be garbage in the upper 4 bytes, but it seems like that garbage is not set for me.
I read the answers here, but the code worked for me. I know it's a different compiler.
Printing int type with %lu - C+XINU
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint32_t number1 = 11;
    char sentence[40];
    snprintf(sentence, 40, "Small number :%lu , Big number:%lu \n", number1, 285212672);
    printf(sentence);
}
On OP's machine, uint32_t, unsigned long, and int appear to be the same size (as noted by @R Sahu). OP's code is not portable and may produce incorrect output on another machine.
when does printf print the wrong number?
Use the matching printf() specifier for truly portable code. Using mis-matched specifiers may print the wrong number.
The output string may be well over 40 characters. Better to use a generous or right-sized buffer.
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint32_t number1 = 11;
    // char sentence[40];
    char sentence[80];
    snprintf(sentence, sizeof sentence,
            "Small number :%" PRIu32 " , Big number:%d \n",
            number1, 285212672);
    // printf(sentence); // best not to print a string via the printf() format parameter
    fputs(sentence, stdout);
}
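If the format string must keep %lu, an explicit cast of the argument also makes the call well-defined on any machine (my addition, not from the original answer):
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t number1 = 11;
    /* casting to match %lu is portable regardless of uint32_t's underlying type */
    printf("Small number :%lu\n", (unsigned long)number1);
}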

Assigning Value to unsigned long long in C

While assigning a value to an unsigned long long variable in C, the value of the variable is not getting assigned properly. The code is:
#include <stdio.h>

int main()
{
    unsigned long long x;
    printf("%d\n\n", sizeof(x));
    x = 0xAAAAAAAAAAAAAAAAULL;
    printf("%u\n\n", x);
    printf("%ld\n\n", x);
    return 0;
}
The rightmost 32 bits of the variable are being ignored. Can someone please tell me how to do this correctly?
Print unsigned long long with %llu.
Use the llu or Lu format specifier for printf if you want to print an unsigned long long; %llu is the standard C99 form, while the others are compiler-specific extensions.
The assignment occurs correctly. However, the program is not displaying the value correctly.
printf("%Lu\n\n",x);
or
printf("%llu\n\n",x);
or maybe even
printf("%LLu\n\n",x);
depending on the compiler and specific runtime library.
You need to change the printfs to print properly.
#include <stdio.h>

int main()
{
    unsigned long long x;
    printf("%zu\n\n", sizeof(x));    /* sizeof yields a size_t, so use %zu */
    x = 0xAAAAAAAAAAAAAAAAULL;
    printf("%u\n\n", x);             // does not work: mismatched specifier
    printf("%llu\n\n", x);           // works
    printf("%016llx\n\n", x);        // bonus check in hex
    return 0;
}

C convert hex to decimal format

Compiling on Linux using gcc.
I would like to convert a decimal value to hex: 10, which would be a.
I have managed to do this with the code below.
unsigned int index = 10;
char index_buff[5] = {0};
sprintf(index_buff, "0x%x", index);
data_t.un32index = index_buff;
However, the problem is that I need to assign it to a structure, and the element I need to assign to is of unsigned int type.
This works, however:
data_t.un32index = 0xa;
But my sample code doesn't work, as it thinks I am trying to convert from a string to an unsigned int.
I have tried this, but it also failed:
data_t.un32index = (unsigned int) *index_buff;
Many thanks for any advice.
Huh? The decimal/hex distinction doesn't matter if you have the value in a variable. Just do
data_t.un32index = index;
Decimal and hex are just notations used when printing numbers so humans can read them.
For a C (or C++, or Java, or any of a number of languages where these types are "primitives" with semantics closely matching those of machine registers) integer variable, the value it holds can never be said to "be in hex".
The value is held in binary (in all typical modern electronic computers, which are digital and binary in nature) in the memory or register backing the variable, and you can then generate various string representations, which is when you need to pick a base to use.
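To illustrate the point, one stored value can be rendered in several bases at print time (a quick sketch of my own):
#include <stdio.h>

int main(void)
{
    unsigned int v = 10;             /* one stored value... */
    printf("%u %x %o\n", v, v, v);   /* ...three representations: 10 a 12 */
    return 0;
}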
I agree with the previous answers, but I thought I'd share code that actually converts a hex string to an unsigned integer just to show how it's done:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *hex_value_string = "deadbeef";
    unsigned int out;
    sscanf(hex_value_string, "%x", &out);
    printf("%o %o\n", out, 0xdeadbeef);
    printf("%x %x\n", out, 0xdeadbeef);
    return 0;
}
Gives this when executed:
emil#lanfear /home/emil/dev $ ./hex
33653337357 33653337357
deadbeef deadbeef
However, my sample code doesn't work, as it thinks I am trying to convert from a string to an unsigned int.
This is because when you write the following:
data_t.un32index = index_buff;
you do have a type mismatch: you are trying to assign a character array (index_buff) to an unsigned int (data_t.un32index).
You should be able to assign the index directly to data_t.un32index, as suggested.
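If you do need to parse a hex string back into an integer, strtoul is the usual standard-library alternative to sscanf, with error reporting built in (my sketch, not from the original answers):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *hex_value_string = "deadbeef";
    char *end;
    unsigned long out = strtoul(hex_value_string, &end, 16);  /* base 16 */
    if (*end != '\0')
        printf("trailing characters after the number: %s\n", end);
    printf("%lx\n", out);   /* deadbeef */
    return 0;
}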
