Actually, I have to convert the command-line arguments, which are three strings, into a bit field (three unsigned integers inside a struct). The program then converts those bits into a float. I first thought about using an array to store the three arguments, but I don't know how to convert from an array to an unsigned int. Should I just use atoi to change each argument into an int and then directly into an unsigned int? It doesn't work on my computer, and I've got no idea why.
Union32 getBits(char *sign, char *exp, char *frac)
{
    Union32 new;
    // this line is just to keep gcc happy
    // delete it when you have implemented the function
    //new.bits.sign = new.bits.exp = new.bits.frac = 0;
    new.bits.sign = *(unsigned int *)atoi(sign);
    new.bits.exp = *(unsigned int *)atoi(exp);
    new.bits.frac = *(unsigned int *)atoi(frac);
    //int i;
    //int balah[8] = {};
    //for(i = 0; i < 8; i++){
    //    balah[i] = sign[i];
    //}
    //int j;
    //int bili[23] = {};
    //for(j = 0; j < 23; j++){
    //    bili[j] = sign[j];
    //}
    //convert array into unsigned integer?
    printf("%u %u %u\n", new.bits.sign, new.bits.exp, new.bits.frac);
    // convert char *sign into a single bit in new.bits
    // convert char *exp into an 8-bit value in new.bits
    // convert char *frac into a 23-bit value in new.bits
    return new;
}
The following are the details of the typedef and unions needed in this program, along with the four functions in the program.
typedef uint32_t Word;
struct _float {
    // define bit_fields for sign, exp and frac
    // obviously they need to be larger than 1-bit each
    // and may need to be defined in a different order
    unsigned int sign:1, exp:8, frac:23;
};
typedef struct _float Float32;
union _bits32 {
    float fval;    // interpret the bits as a float
    Word xval;     // interpret as a single 32-bit word
    Float32 bits;  // manipulate individual bits
};
typedef union _bits32 Union32;
void checkArgs(int, char **);
Union32 getBits(char *, char *, char *);
char *showBits(Word, char *);
int justBits(char *, int);
getBits asks us to convert the bits into a float, and showBits asks us to convert a float into bits.
Assuming the correct typedefs in your code:
new.bits.sign = (unsigned int)atoi(sign);
new.bits.exp = (unsigned int)atoi(exp);
new.bits.frac = (unsigned int)atoi(frac);
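If it helps, here is a minimal usage sketch of my own (not from the assignment). It assumes the Word/Float32/Union32 typedefs shown above are in scope, that the three arguments are plain decimal strings, and that the bit-field order happens to match your platform's float layout (the assignment comments note it may need reordering):

#include <stdio.h>
#include <stdlib.h>

/* Assumes the Word/Float32/Union32 typedefs from the post are in scope. */
int main(int argc, char *argv[])
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s sign exp frac\n", argv[0]);
        return 1;
    }

    Union32 u;
    u.bits.sign = (unsigned int)atoi(argv[1]);  /* 1-bit field  */
    u.bits.exp  = (unsigned int)atoi(argv[2]);  /* 8-bit field  */
    u.bits.frac = (unsigned int)atoi(argv[3]);  /* 23-bit field */

    /* Reading fval reinterprets the packed bit-fields as a float. */
    printf("%u %u %u -> %f\n", u.bits.sign, u.bits.exp, u.bits.frac, u.fval);
    return 0;
}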
I have a 64-bit number written as two 32-bit unsigned ints: unsigned int[2]. unsigned int[0] is the MSB and unsigned int[1] is the LSB. How would I convert it to a double?
double d_from_u2(unsigned int*);
memcpy it from your source array to a double object in the proper order. E.g., if you want to swap the two unsigned parts:
unsigned src[2] = { ... };
double dst;
assert(sizeof dst == sizeof src);
memcpy(&dst, &src[1], sizeof(unsigned));
memcpy((unsigned char *) &dst + sizeof(unsigned), &src[0], sizeof(unsigned));
Of course, you can always just reinterpret both the source and destination objects as arrays of unsigned char and copy them byte-by-byte in any order you wish:
unsigned src[2] = { ... };
double dst;
unsigned char *src_bytes = (unsigned char *) src;
unsigned char *dst_bytes = (unsigned char *) &dst;
assert(sizeof dst == 8 && sizeof src == 8);
dst_bytes[0] = src_bytes[7];
dst_bytes[1] = src_bytes[6];
...
dst_bytes[7] = src_bytes[0];
(The second example is not intended to be equivalent to the first one.)
There are several ways to copy the bits of your two integers into an object of type double.
At the lowest level, you can convert your input pointer to a [unsigned] char *, create a [unsigned] char * to the first byte of the return value, and copy between those by whatever means you choose. This provides you every opportunity to adjust byte order as may be needed -- for example, although your array is ordered most-significant word first, the order of the bytes within those words might not be what you need.
In the event that you need the bytes to be transferred into your double most-significant byte first, and that you do not want to depend on the machine byte order, you might do this:
double d_from_u2(unsigned int *in) {
    double result;
    unsigned char *result_bytes = (unsigned char *) &result;

    for (int i = 0; i < 4; i++) {
        result_bytes[i]     = in[0] >> (24 - 8 * i);
        result_bytes[i + 4] = in[1] >> (24 - 8 * i);
    }

    return result;
}
Using arithmetic (shifts, in this case) allows you to operate on the numeric values of the input independently of details of numeric representation.
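As a quick usage sketch (my own example, assuming the shift-based d_from_u2 above is in scope; the input words are just an arbitrary test pattern), you can dump the bytes of the result to confirm the ordering:

#include <stdio.h>

int main(void)
{
    unsigned int words[2] = { 0x40280000u, 0x00000000u };  /* arbitrary test pattern */
    double d = d_from_u2(words);
    const unsigned char *bytes = (const unsigned char *) &d;

    /* Print the raw bytes of the double in memory order. */
    for (int i = 0; i < 8; i++)
        printf("%02x ", bytes[i]);
    printf("\n");
    return 0;
}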
Here is a solution that works without memcpy but uses a union instead:
#include "stdio.h"
#include "stdint.h"
double d_from_u2(unsigned int* v) {
union {
int32_t x[2];
int64_t y;
} u = { .x = { v[1], v[0] }};
printf("%llu\n", u.y); // 1311768467463794450
return (double)u.y;
}
int main(void) {
    unsigned int x[2];
    x[0] = 0x12345678;
    x[1] = 0x9abcef12;
    printf("%f\n", d_from_u2(x)); // 1311768467463794432.000000
    return 0;
}
It initializes the int32_t[2] array inside the union and then uses the int64_t member to convert the combined value to a double. The order of the initializers depends on the machine's endianness and on where the values come from (here v[1] is placed first).
I'm trying to convert an unsigned char array buffer into a signed int (and vice versa).
Below is some demo code:
#include <stdio.h>
#include <string.h>

int main(int argc, char* argv[])
{
    int original = 1054;
    unsigned int i = 1054;
    unsigned char c[4];
    int num;
    memcpy(c, (char*)&i, sizeof(int));
    //num = *(int*) c;                    // method 1 get
    memcpy((char *)&num, c, sizeof(int)); // method 2 get
    printf("%d\n", num);
    return 0;
}
1) Which method should I use to get from unsigned char[] to int?
method 1 get or method 2 get?
(or any suggestion)
2) How do I convert the int original into an unsigned char[]?
I need to send this integer via a buffer that only accepts unsigned char[].
Currently what I'm doing is converting the int to an unsigned int and then to a char[], for example:
int g = 1054;
unsigned char buf[4];
unsigned int n;
n = g;
memcpy(buf, (char*)&n, sizeof(int));
It works fine, but I'm not sure whether it's the correct way or whether it's safe.
PS. I'm trying to send data between 2 devices via USB serial communication (between Raspberry Pi & Arduino)
The approach below will work regardless of endianness on the machines (assuming sizeof(int) == 4):
unsigned char bytes[4];
unsigned int n = 45;
bytes[3] = (n >> 24) & 0xFF;
bytes[2] = (n >> 16) & 0xFF;
bytes[3] = (n >> 8) & 0xFF;
bytes[0] = n & 0xFF;
The code above converts the integer to a byte array in a little-endian way (index 0 holds the least significant byte).
For the reverse operation, shift the bytes back into place, as sketched below.
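A minimal sketch of that reverse step (my own code, assuming the same byte layout as above, i.e. index 0 holds the least significant byte):

#include <stdio.h>

int main(void)
{
    unsigned char bytes[4] = { 45, 0, 0, 0 };  /* the value 45 laid out LSB-first */
    unsigned int n = 0;

    /* Shift each byte back into position; independent of machine endianness. */
    n |= (unsigned int)bytes[0];
    n |= (unsigned int)bytes[1] << 8;
    n |= (unsigned int)bytes[2] << 16;
    n |= (unsigned int)bytes[3] << 24;

    printf("%u\n", n);  /* prints 45 for this input */
    return 0;
}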
The approach you have with memcpy may give different results on different computers, because memcpy copies whatever is at the source address to the destination, and depending on whether the computer is little endian or big endian, the byte at the starting source address may be the LSB or the MSB.
You could store both the int (or unsigned int) and the unsigned char array in a union. This technique is called type punning, and it has been fully sanctioned by the standard since C99 (it was common practice earlier, too). Assuming that sizeof(int) == 4:
#include <stdio.h>
union device_buffer {
int i;
unsigned char c[4];
};
int main(int argc, char* argv[])
{
    int original = 1054;

    union device_buffer db;
    db.i = original;

    for (int i = 0; i < 4; i++) {
        printf("c[%d] = 0x%x\n", i, db.c[i]);
    }
    return 0;
}
Note that the order of the values in the array depends on the machine's byte order, i.e. its endianness.
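Going the other direction (again a sketch of my own, assuming the received bytes come from a device with the same endianness so the layout matches), you fill the array member and read the int back out:

#include <stdio.h>

union device_buffer {
    int i;
    unsigned char c[4];
};

int main(void)
{
    /* Hypothetical received buffer: the bytes of 1054 as produced on a little-endian machine. */
    unsigned char received[4] = { 0x1e, 0x04, 0x00, 0x00 };

    union device_buffer db;
    for (int i = 0; i < 4; i++) {
        db.c[i] = received[i];  /* copy the raw bytes into the union */
    }

    printf("%d\n", db.i);       /* the same bytes viewed as an int */
    return 0;
}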
I've created a function that turns an unsigned char into an unsigned char array of size 8 (where each index contains either 0 or 1, making up the 8 bits of the given char). Here is the 100% working version:
#include <stdlib.h>

unsigned char *ucharToBitArray(unsigned char c)
{
    unsigned char *bits = malloc(8);
    int i;
    for (i = sizeof(unsigned char) * 8; i; c >>= 1)
        bits[--i] = '0' + (c & 1);
    return bits;
}
I now need to create a function that does the exact opposite of this: it will take an unsigned char array of size 8 and turn it back into a regular single unsigned char. What is an effective way of doing so?
Thanks for the help!
The function is needlessly complex and obscure. I would suggest replacing it with this:
#include <stdint.h>

void ucharToBitArray(char bits[8], uint8_t c)
{
    for (uint8_t i = 0; i < 8; i++)
    {
        if (c & (1 << i))
        {
            bits[7-i] = '1';
        }
        else
        {
            bits[7-i] = '0';
        }
    }
}
Now to convert it back, simply go the other way around: check bits[7-i] and set c |= (1 << i) whenever you find a '1'.
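For instance, a minimal sketch of that reverse loop (the name bitArrayToUchar is just my illustrative choice):

#include <stdint.h>

uint8_t bitArrayToUchar(const char bits[8])
{
    uint8_t c = 0;
    for (uint8_t i = 0; i < 8; i++)
    {
        /* bits[7-i] holds bit i, matching ucharToBitArray above */
        if (bits[7-i] == '1')
        {
            c |= (uint8_t)(1 << i);
        }
    }
    return c;
}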
I have a code snippet as below:
unsigned char p = 0;
unsigned char t[4] = {'a','b','c','d'};
unsigned int m = 0;
for(p=0;p<4;p++)
{
m |= t[p];
printf("%c",m);
m = m << 2;
}
Can anybody help me solve this? Consider that I have the ASCII value "abcd" stored in an array t[]. I want to store the same value in m, my unsigned int variable, which stores the major number. When I copy the array into m and print m, it should print abcd. Can anybody explain the logic?
As I understand you, you want to encode the 4 characters into a single int.
Your bit shifting is not correct. You need to shift by 8 bits rather than 2. You also need to perform the shifting before the bitwise or. Otherwise you shift too far.
And it makes more sense, in my view, to print the character rather than m.
#include <stdio.h>
int main(void)
{
const unsigned char t[4] = {'a','b','c','d'};
unsigned int m = 0;
for(int p=0;p<4;p++)
{
m = (m << 8) | t[p];
printf("%c", t[p]);
}
printf("\n%x", m);
return 0;
}
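Unpacking m back into the four characters (my own sketch, not part of the answer above) is the mirror image: shift and mask one byte at a time, most significant first.

#include <stdio.h>

int main(void)
{
    unsigned int m = ('a' << 24) | ('b' << 16) | ('c' << 8) | 'd';

    /* Extract the bytes from most significant to least significant. */
    for (int p = 3; p >= 0; p--)
    {
        printf("%c", (m >> (8 * p)) & 0xFF);
    }
    printf("\n");
    return 0;
}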
Why not just look at the t array as an unsigned int?:
unsigned int m = *(unsigned int*)t;
Or you could use a union for nice access to the same memory block in two different ways, which I think is better than shifting bits manually.
Below is a union example. With unions, both the t char array and the unsigned int are stored in the same memory blob. You get a nice interface to each, and it lets the compiler do the bit shifting (more portable, I guess):
#include <stdio.h>
typedef union {
unsigned char t[4];
unsigned int m;
} blob;
int main(void)
{
    blob b;
    b.t[0] = 'a';
    b.t[1] = 'b';
    b.t[2] = 'c';
    b.t[3] = 'd';

    unsigned int m = b.m; /* m holds the value of blob b */
    printf("%u\n", m);    /* the t array viewed as if it were an unsigned int */

    unsigned int n = m;   /* copy the unsigned int to another one */
    blob c;
    c.m = n;              /* copy that into a different blob */

    int i;
    for (i = 0; i < 4; i++)
        printf("%c\n", c.t[i]); /* even after copying it as an int, you can still look at it
                                   as a char array via the blob union -- no manual bit manipulation */
    printf("%zu\n", sizeof(c)); /* the blob has the byte size of an int */
    return 0;
}
Simply assign t[p] to m.
m = t[p];
This will implicitly promote the char to unsigned int.
unsigned char p = 0;
unsigned char t[4] = {'a','b','c','d'};
unsigned int m = 0;
for(p=0;p<4;p++)
{
m = t[p];
printf("%c",m);
}
I have an unsigned char array containing the following value: "\x00\x91\x12\x34\x56\x78\x90";
That is a number being sent in hexadecimal format.
Additionally, it is in BCD format: 00 in one byte, 91 in another byte (8 bits each).
On the other side I need to decode this value as 00911234567890.
I'm using the following code:
unsigned int conver_bcd(char *p, size_t length)
{
    unsigned int convert = 0;
    while (length--)
    {
        convert = convert * 100 + (*p >> 4) * 10 + (*p & 15);
        ++p;
    }
    return convert;
}
However, the result which I get is 1430637214.
What I understood was that I'm sending hexadecimal values (\x00\x91\x12\x34\x56\x78\x90) and my bcd conversion is acting upon the decimal values.
Can you please help me so that I can get the output 00911234567890 as characters?
It looks like you are simply overflowing your unsigned int, which is presumably 32 bits on your system. Change:
unsigned int convert =0;
to:
uint64_t convert = 0;
in order to guarantee a 64 bit quantity for convert.
Make sure you add:
#include <stdint.h>
Cast char to unsigned char, then print it with %02x.
#include <stdio.h>
int main(void)
{
char array[] = "\x00\x91\x12\x34\x56\x78\x90";
int size = sizeof(array) - 1;
int i;
for(i = 0; i < size; i++){
printf("%02x", (unsigned char )array[i]);
}
return 0;
}
Change the return type to unsigned long long to ensure you have a large enough integer.
Change p type to an unsigned type.
Print value with leading zeros.
unsigned long long conver_bcd(const char *p, size_t length) {
    const unsigned char *up = (const unsigned char*) p;
    unsigned long long convert = 0;
    while (length--) {
        convert = convert * 100 + (*up >> 4) * 10 + (*up & 15);
        ++up;
    }
    return convert;
}
const char *p = "\x00\x91\x12\x34\x56\x78\x90";
size_t length = 7;
printf( "%0*llu\n", (int) (length*2), conver_bcd(p, length));
// 00911234567890