Convert integer to byte array produces unexpected results - C

OK...this seems like a fairly easy problem but I can't figure out what is going on here. I have the following code to convert an integer to a byte array and test the output:
#include <stdio.h>

void int2bytearray(char *byte_array, int size, int num)
{
    for (int i = 0; i < size; i++)
    {
        byte_array[i] = (num >> 8*i) & 0xFF;
    }
}

int main()
{
    int test_int = 657850;
    int size = sizeof(int);
    printf("Size is %d\n", size);
    printf("Size of char is %d\n", (int)sizeof(char));
    char test_array[size];
    printf("Size of first entry %d\n", (int)sizeof(test_array[0]));
    int2bytearray(test_array, size, test_int);
    for (int i = 0; i < size; i++)
    {
        printf("%#02x", test_array[i]);
    }
    printf("\nFirst byte is %#.2X", test_array[0]);
    return 0;
}
I expect main to return an array of bytes, but the first "byte" appears to be a 32 bit integer.
Unfortunately, I get the following output:
Size is 4
Size of char is 1
Size of first entry 1
0xffffffba0x90xa00
First byte is 0XFFFFFFBA
Can someone tell me why this first byte is printing out as 0XFFFFFFBA? Why is it a 32-bit integer and not a byte as defined?

The problem is related both to the type being used and how you're printing the values.
The first element in the array has the value 0xBA, which as a signed char is -70 in decimal. When this value is passed to printf it is converted to type int, so you now have a 32-bit value of -70 whose representation is 0xFFFFFFBA. You then print it using %X, which prints an unsigned int in hex.
There are two ways to fix this.
First, change the type of test_array to unsigned char. Then the values stored will be positive and print correctly. The other option is to use %hhX as the format specifier, which tells printf to print the value as an unsigned char, so the proper number of digits is printed.
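For illustration, here is a minimal sketch showing both fixes applied to the question's code (either one alone is enough to get 0XBA for the first byte):

#include <stdio.h>

/* Same conversion as in the question, but the destination is unsigned char */
void int2bytearray(unsigned char *byte_array, int size, int num)
{
    for (int i = 0; i < size; i++)
        byte_array[i] = (num >> 8*i) & 0xFF;
}

int main(void)
{
    unsigned char test_array[sizeof(int)];
    int2bytearray(test_array, sizeof(int), 657850);

    for (int i = 0; i < (int)sizeof(int); i++)
        printf("%#02x ", test_array[i]);                 /* fix 1: unsigned char array */

    printf("\nFirst byte is %#.2hhX\n", test_array[0]);  /* fix 2: %hhX length modifier */
    return 0;
}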

Related

C programming: how do I read the bytes stored as little endian?

unsigned short num = 258;
// How can I read the byte values to see how this num 258 is stored (by default it is
// stored as little endian, right?), so the values should be something like
// [2, 1] or [0x02, 0x01] as little endian?
How do I printf it out?
A pointer to an unsigned char can be used to read the byte representation of an object.
unsigned char *p = (unsigned char *)&num;
int i;
for (i = 0; i < sizeof num; i++) {
    printf("%02x ", p[i]);
}
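Putting it together, a minimal complete sketch of the same approach (on a little-endian machine this prints 02 01; on a big-endian one it would print 01 02):

#include <stdio.h>

int main(void)
{
    unsigned short num = 258;               /* 0x0102 */
    unsigned char *p = (unsigned char *)&num;

    /* Print each byte of num's object representation in memory order */
    for (size_t i = 0; i < sizeof num; i++) {
        printf("%02x ", p[i]);
    }
    printf("\n");
    return 0;
}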

Putting Hex into a byte array

I am trying to put hex values into a BYTE[], trying to achieve 78, 66, 1E, B5, 4F, E7, 67, 63.
#include <stdio.h>

#define BYTE unsigned char

int num = 1;
long long hex[] =
{
    0x78661EB54FE76763,
};

int main()
{
    for (int c = 0; c < num; c++)
    {
        printf("%llx\n", hex[c]);
        unsigned int number = hex[c];
        unsigned int ef = number & 0xff;
        unsigned int cd = (number >> 8) & 0xff;
        unsigned int ab = (number >> 16) & 0xff;
        printf("%x", number & 0xff);
        BYTE data2[8] = {
            ef, cd, ab
        };
    }
}
Update:
Basically I have an array of 30-odd hex values. I am trying to loop through that array, called hex[], then for each hex value separate it into pairs, i.e. 78, 66, 1E, B5, 4F, E7, 67, 63, and then add each pair into an array of type BYTE which will hold the hex value in 8 pairs, so data[0] will have the value 78 up to data[7] which will have the value 63. I can then pass the array of type BYTE to another method for further work.
Here is the solution that you want:
#include <stdio.h>

typedef unsigned char BYTE;

int main()
{
    int i, j;
    long long hex = 0x78661EB54FE76763;
    BYTE val[8];
    for (j = 0, i = 7; i >= 0; i--, j++) {
        val[j] = (hex >> (i*8)) & 0xff;
    }
    printf("Hexadecimal bytes are:\n");
    for (j = 0; j < 8; j++) {
        printf("%02X ", val[j]);
    }
    return 0;
}
And the output is:
Hexadecimal bytes are:
78 66 1E B5 4F E7 67 63
Assuming that BYTE is a type
BYTE data2[] = {ef, cd, ab, '\0'};
Use the null terminator to allow printing with "%s" and printf().
Of course, you can add the rest of the bytes.
This is a pretty simple task to accomplish; all you need is the initialization values that you have given: long long hex[] = { 0x78661EB54FE76763 };
Next, you need a char pointer that is pointing to the hex array
unsigned char* pByte = (unsigned char*)hex; // Point to the first element in hex array
Then create an 8 byte buffer and clear it with memset:
unsigned char byteArray[8];
memset(byteArray, 0, sizeof(byteArray));
All you need to do now is dereference your char pointer and place the value into your 8 byte buffer.
// Loop through the hex array and assign the current byte
// the byte pointer is pointing to then increment the byte pointer
// to look at the next value.
// This for loop is for a system that is little endian (i.e. win32)
for (int i = 7; i >= 0; i--)
{
    byteArray[i] = *pByte;
    pByte++;
}
Note: Since I ran this with Visual Studio 2015 everything is little endian. This is why the first for loop is written the way it is.
See the full program listing I wrote and tested below demonstrating this:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    long long hex[] = { 0x78661EB54FE76763 };   // Value of an element
    unsigned char* pByte = (unsigned char*)hex; // Assign a char pointer to point to the first element in hex array
    unsigned char byteArray[8];                 // This is the byte buffer that will be populated from the hex array
    memset(byteArray, 0, sizeof(byteArray));    // Clear out the memory of byte array before filling in the indices

    // Loop through the hex array and assign the current byte
    // the byte pointer is pointing to, then increment the byte pointer
    // to look at the next value.
    // This for loop is for a system that is little endian (i.e. win32)
    for (int i = 7; i >= 0; i--)
    {
        byteArray[i] = *pByte; // Dereference the pointer and assign value to buffer
        pByte++;               // Point to the next byte
    }

    // Print the byte array
    for (int i = 0; i < 8; i++)
    {
        printf("The value of element %i is: %X\n", i, byteArray[i]);
    }

    printf("Press any key to continue..."); // Hold the console window open
    getchar();
    return 0;
}

Why am I not able to print the 47th Fibonacci number correctly?

I am using a 64-bit operating system, but I am still not able to print the 46th Fibonacci number correctly, even though it is less than 4 billion.
#include <cs50.h>
#include <stdio.h>

int main(void)
{
    unsigned int n = 50;
    int array[n];
    array[0] = 0;
    array[1] = 1;
    printf("%i\n", array[0]);
    printf("%i\n", array[1]);
    for (int i = 2; i < n; i++)
    {
        array[i] = array[i-1] + array[i-2];
        printf("%i\n", array[i]);
    }
}
You have to use long long as the data type of the array, because you are going to store numbers outside the integer range (-2,147,483,648 to 2,147,483,647). Also, the declaration of int i should be before the for loop.
#include <stdio.h>

int main(void)
{
    int n = 50;
    long long array[n];
    array[0] = 0;
    array[1] = 1;
    printf("%lli\n", array[0]);
    printf("%lli\n", array[1]);
    int i;
    for (i = 2; i < n; i++)
    {
        array[i] = array[i-1] + array[i-2];
        printf("%lli\n", array[i]);
    }
}
i am not able to print 46th fibonacci number correctly which is less than 4 billion.
You are most probably going out of the range of an integer, which is from -2,147,483,648 to 2,147,483,647.
Change int array[n]; to long long array[n];
Also, the printf's should be changed from %i to %lli
Edit: On running the numbers, you get the expected value of F(48) as 4807526976, which is out of the range of an integer.
Using Rishikesh Raje's counting system (i.e. the 1st Fibonacci is 1), where F(48) is 4807526976, you weren't able to get F(47), 2971215073, because, as #kaylum commented, you used a signed integer array to hold your values. You need to change it to unsigned, as well as change your printf statement to print an unsigned. This would allow you to reach the limit of 32-bit arithmetic:
#include <stdio.h>

#define LIMIT (50)

int main(void) {
    unsigned int array[LIMIT] = {0, 1};
    printf("%u\n", array[0]);
    printf("%u\n", array[1]);
    for (size_t i = 2; i < LIMIT; i++)
    {
        array[i] = array[i - 1] + array[i - 2];
        printf("%u\n", array[i]);
    }
    return 0;
}
To get beyond 32 bits, you can switch to long or long long as Rishikesh Raje suggests, but work with the unsigned variants if you want to reach the maximum result you can with a given number of bits.
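As an illustrative variation of the sketch above (not part of the original answer), the same program with an unsigned long long array and %llu covers the first 50 terms with plenty of room to spare:

#include <stdio.h>

#define LIMIT (50)

int main(void) {
    unsigned long long array[LIMIT] = {0, 1};   /* 64-bit unsigned terms */
    printf("%llu\n", array[0]);
    printf("%llu\n", array[1]);
    for (size_t i = 2; i < LIMIT; i++)
    {
        array[i] = array[i - 1] + array[i - 2];
        printf("%llu\n", array[i]);
    }
    return 0;
}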
Either use an unsigned integer array, or for higher values use an unsigned long long array. But you don't need an array to print the Fibonacci series; you can simply do this:
#include <stdio.h>

int main(void)
{
    unsigned long long i, num1 = 1, num2 = 0;
    printf("1 \n");
    for (i = 1; i < 100; i++)
    {
        num1 = num1 + num2;
        num2 = num1 - num2;
        printf("%llu \n", num1);
    }
    return 0;
}

How to convert byte array (containing hex values) to decimal

I am writing some code for an Atmel micro-controller. I am getting some data via UART, and I store these hex values into an array.
Suppose the elements of the array are: 1F, 29, and 3C.
I want to have one hex number like 0x1F293C, and convert it to a decimal number. So, I want to get “2042172” at the end.
The array could have n elements, so I need a general solution.
Thank you.
sprintf(buffer, "%d", (((unsigned)array[0])<<16) + (((unsigned)array[1])<<8) + (unsigned)array[2]);
This will write the hex values in array to buffer as a readable string in decimal representation, assuming sizeof(int) == 4.
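In context, a minimal self-contained sketch of this approach (the array contents come from the question; the buffer size and the use of %u for the unsigned value are my own choices for illustration):

#include <stdio.h>

int main(void)
{
    unsigned char array[3] = {0x1F, 0x29, 0x3C};
    char buffer[16];

    /* Assemble the three bytes into one value and format it in decimal */
    sprintf(buffer, "%u",
            (((unsigned)array[0]) << 16) +
            (((unsigned)array[1]) << 8) +
            (unsigned)array[2]);
    printf("%s\n", buffer);   /* prints 2042172 */
    return 0;
}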
If you have an array of characters like "0x1F293C" and want to convert it to an int, try this code:
char receivedByte[] = "0x1F293C";
char *p;
long intNumber = strtol(receivedByte, &p, 16);
printf("The received number is: %ld.\n", intNumber);
If data is declared as stated in the comments (char data[]={0x1F, 0x29, 0x3C}), you can run this program.
#include <stdio.h>
#include <stdlib.h>

int main()
{
    char receivedByte[9], *p;
    char data[] = { 0x1F, 0x29, 0x3C };
    sprintf(receivedByte, "0x%X%X%X", data[0], data[1], data[2]);
    long intNumber = strtol(receivedByte, &p, 16);
    printf("The received number is: %ld.\n", intNumber);
    return 0;
}
If the input consists of n bytes and they are stored starting at the pointer array, you can add the values up in the order you "received" them - i.e., in the order they are written in the array.
unsigned int convertToDecimal (unsigned char *array, int n)
{
    unsigned int result = 0;
    while (n--)
    {
        result <<= 8;
        result += *array;
        array++;
    }
    return result;
}
Note that your sample input contains 3 bytes and you want a "general solution for n bytes", and so you may run out of space really fast. This function will only work for 0..4 bytes. If you need more bytes, you can switch to long long (8 bytes, currently).
For longer sequences than that you need to switch to a BigNum library.
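For example, a hedged sketch of a wider variant (the name convertToDecimal64 is mine, not from the answer) that accumulates up to 8 bytes into an unsigned long long:

/* Same accumulation idea, widened to a 64-bit result (up to 8 input bytes) */
unsigned long long convertToDecimal64(const unsigned char *array, int n)
{
    unsigned long long result = 0;
    while (n--)
    {
        result <<= 8;       /* shift previously accumulated bytes up */
        result += *array;   /* add the next most-significant byte */
        array++;
    }
    return result;
}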
#include <stdio.h>

int main(){
    char data[] = {0x1F, 0x29, 0x3C};
    int i, size = sizeof(data);
    unsigned value = 0;
    for (i = 0; i < size; ++i)
        value = value * 256 + (unsigned char)data[i];
    printf("0x%X %d\n", value, (int)value);
    return 0;
}

Converting hex/binary values stored in char to int - Getting strange results

I am having problems with converting hex/binary values received over socket to an integer.
I am getting the hex value over a socket with the following code:
void get_msg(int sockfd, char *buf)
{
    int n;
    bzero(buf, 256);
    n = read(sockfd, buf, 255);
    if (n < 0)
        error("ERROR writing socket");
}
I then pass the received bytes located in *buf to this function to see them in hex (so that I can check if the calc_msg function below is working properly):
void print_msg(char *buf)
{
    int i;
    char *buffer = malloc(4);
    printf("[ ");
    for (i = 0; i < 4; i++) {
        printf("%02x ", ((const unsigned char *) buf)[i] & 0xff);
    }
    printf("]\n");
}
Now, in an attempt to convert the received message to decimal I call this function:
void calc_msg(char *buf)
{
    int i;
    int dec[3];
    for (i = 0; i < 4; i++) {
        dec[i] = buf[i];
        printf("Transformation %d: %u\n", i, dec[i]);
    }
}
This function only converts the message correctly sometimes. Other times, it gives ridiculously high values.
Here is an example output:
[ 94 cc 78 28 ]
Transformation 0: 4294967188
Transformation 1: 4294967244
Transformation 2: 120
Transformation 3: 40
As you can see, 94 and cc end up being ridiculous values, whereas 78 and 28 convert just fine. The only relationship I see is that this only happens for higher values. I have not found any useful information using search engines.
Thanks!
Surculus
int dec[3]; should be int dec[4];, for starters.
Furthermore, this probably happens because all values greater than 127 (assuming an 8-bit char, but this is true in general to any value which doesn't fit into a signed char) are sign extended during the signed upcast (char -> int). You'd be better off using unsigned char * for your buffers everywhere.
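For illustration, a minimal sketch of calc_msg with both changes applied (the fourth array element and the unsigned buffer type):

/* assumes <stdio.h> is included, as in the question */
void calc_msg(const unsigned char *buf)
{
    int i;
    unsigned int dec[4];                     /* four elements for four bytes */
    for (i = 0; i < 4; i++) {
        dec[i] = buf[i];                     /* no sign extension: buf is unsigned char */
        printf("Transformation %d: %u\n", i, dec[i]);
    }
}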
Hex 94 and cc are 148 and 204 in decimal. You are trying to store them in a signed char, which cannot store a number greater than +127. This results in them being represented as negative values in dec. When converted to unsigned int, they become large values.
